http://arxiv.org/abs/2409.02960v1
20240903184116
Managing multiple agents by automatically adjusting incentives
[ "Shunichi Akatsuka", "Yaemi Teramoto", "Aaron Courville" ]
cs.MA
[ "cs.MA", "cs.AI", "cs.GT" ]
Managing multiple agents by automatically adjusting incentives
Shunichi Akatsuka (corresponding author, https://orcid.org/0000-0002-1681-6405), Hitachi, Ltd., R&D Group, Kokubunji, Tokyo, Japan
Yaemi Teramoto, Hitachi, Ltd., R&D Group, Kokubunji, Tokyo, Japan
Aaron Courville, Mila / Université de Montréal, Montréal, Canada

§ ABSTRACT

In the coming years, AI agents will be used to make more complex decisions, including in situations involving many different groups of people. One big challenge is that an AI agent tends to act in its own interest, unlike humans, who often consider what will be best for everyone in the long run. In this paper, we explore a method to get self-interested agents to work towards goals that benefit society as a whole. We propose adding a manager agent that mediates agent interactions by assigning incentives to certain actions. We tested our method with a supply-chain management problem and showed that this framework (1) increases the raw reward by 22.2%, (2) increases the agents' reward by 23.8%, and (3) increases the manager's reward by 20.1%.

keywords: reinforcement learning, general-sum games, multi agent, incentive design, supply chain management

§ INTRODUCTION

The rise of AI and machine learning techniques is changing how society operates, as we see more scenarios where AI and machine learning agents play important roles. From automating routine tasks to enabling sophisticated data analysis, schedule optimization, interactive chatbots, and even robotic manipulation, AI is reshaping the landscape of work and everyday life. In the coming years, these AI agents will be used to make more complex decisions, including in situations involving many different groups of people. One big challenge is that an AI agent tends to act in its own interest, unlike humans, who often consider what will be best for everyone in the long run. This raises the question: how can we get self-interested agents to work towards goals that benefit society as a whole?

Game theory offers a powerful framework to analyze these dynamics. At its core, the challenge is to foster cooperation among self-interested agents in a scenario with a general-sum payoff structure. Our goal is to get self-interested agents to cooperate in general-sum games in a scalable way. We believe that deep reinforcement learning (RL) is a promising candidate, as it has been shown to reach human expert level in complex decision-making tasks such as Atari <cit.> and Go <cit.>. Multi-agent RL methods have also been successful in cooperative games like StarCraft <cit.>. However, relatively little effort has been made to apply RL to general-sum games. One recent approach is to make the agent aware that other agents are also learning at the same time. The LOLA <cit.> agent optimizes its policy under the assumption that the other agent is a naïve-learning agent. COLA <cit.> fixes the consistency problem that LOLA has, and POLA <cit.> further develops the method by adding proximal objectives. These methods work well in simple games like the iterated prisoner's dilemma or the Coin Game <cit.> but have not been shown to scale up to large and complex problems.
Our approach is to modify the game by adding another agent, a manager, inspired by the idea of a Token Economy. The objective of the manager is to maximize the sum of the rewards of all agents. The manager mediates the agent interactions by showing auxiliary states and giving incentives for taking certain actions. This approach is applicable in the real world when there is an independent organization that benefits from the agents' profits and is able to pay back the profit to the agents. We can view our method as a type of Automated Mechanism Design <cit.>. However, it differs from previous works in that our approach deals with multi-step optimization problems and assumes the agents dynamically learn while the manager learns its policy (mechanism).

§ PROPOSED METHOD

§.§ General Multi-Agent Reinforcement Learning (MARL)

A reinforcement learning agent is modeled to take sequential actions in an environment formulated as a Markov Decision Process (MDP). An MDP is defined by the tuple <S, A, P, r, γ>, where S is the state space, A is the action space, P is the transition probability, r is the reward function, and γ ∈ [0,1) is the discount factor. In an MDP environment, an agent observes a state s_t and executes an action a_t at timestep t. In the next timestep t+1, the environment shifts to a new state s_t+1 with probability P(s_t+1 | s_t, a_t) and the agent receives a reward r(s_t, a_t, s_t+1). The goal of the agent is to find a policy π(a_t | s_t) that maximizes the discounted total reward J, defined by J = Σ_t γ^t r(s_t, a_t, s_t+1).

In a multi-agent problem, the environment is a multi-agent MDP, which is defined similarly to the MDP but with multiple agents taking actions. In this paper, we focus on a Markov Game <cit.>, where all the agents take actions simultaneously at each step. A Markov Game is defined by the tuple <N, S, {A^i}, P, {r^i}, γ>, where N is the number of agents in the environment, S is the joint state space for all the agents, and A^i is the action space for agent i. The transition function P and the reward function r^i are defined over the joint state and action spaces. The goal of agent i is to find a policy π^i(a^i_t | s^i_t) that maximizes its own discounted reward J^i, defined by J^i = Σ_t γ^t r^i(s^i_t, a^i_t, s^i_t+1).

§.§ MARL with a manager

Figure <ref> shows the overall concept of our method. In our framework, we add another agent called the manager, shown at the top of the figure. At timestep t, the manager observes the state s^M_t and selects an action a^M_t according to its policy π^M(s^M_t). This action supplies an auxiliary state element ŝ^i_t and an auxiliary reward r̂^i_t for each agent i, such that a^M_t = [ ŝ^0_t, ..., ŝ^N_t, r̂^0_t, ..., r̂^N_t ] ∼ π^M(s^M_t), where N is the number of agents. In other words, the manager tries to control the agents by showing auxiliary states ŝ^i_t and paying incentives r̂^i_t to the agents. The manager ultimately wants to maximize the sum of the (raw) rewards of all the agents while keeping the paid-out incentives as low as possible, so the objective function of the manager J^M becomes J^M = Σ_t γ^t { Σ_i (r^i_t - r̂^i_t) }, where γ is the discount factor.

From an agent's point of view, the state and the reward are modified by the manager's action. For agent i, we can convert the state and the reward definitions as s^i_t ← s^i_t ⊕ ŝ^i_t and r^i_t ← r^i_t + r̂^i_t, and the objective function for agent i, J^i, is then defined as in the ordinary MARL problem of Equation <ref>.
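As a concrete illustration of this loop, the following Python sketch shows how a manager-mediated environment step could be wired up. It is our own minimal sketch, not the authors' implementation; the function and variable names (managed_step, aux_states, incentives) and the environment/policy interfaces are assumptions.

```python
import numpy as np

# Illustrative sketch of one manager-mediated step (names are ours, not the paper's code).
def managed_step(env, agent_policies, manager_policy, joint_state):
    n = len(agent_policies)
    # The manager observes the joint state and picks auxiliary states and incentives.
    aux_states, incentives = manager_policy(joint_state)          # each of length n
    # Each agent acts on its own state augmented with the manager's auxiliary state.
    actions = [pi(np.concatenate([joint_state[i], aux_states[i]]))
               for i, pi in enumerate(agent_policies)]
    next_state, raw_rewards, done = env.step(actions)
    # Agents are trained on raw reward plus incentive ...
    agent_rewards = [raw_rewards[i] + incentives[i] for i in range(n)]
    # ... while the manager maximizes total raw reward minus incentives paid out.
    manager_reward = sum(raw_rewards) - sum(incentives)
    return next_state, agent_rewards, manager_reward, done
```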
§ EXPERIMENTS

§.§ The supply-chain optimization problem

We tested our method with a supply-chain optimization problem. We used the simple supply chain shown in Figure <ref>, with two suppliers, three factories, and three retailers. The factories can purchase parts from either of the two suppliers, but each retailer can only purchase items from a specific factory. An environment step consists of seven days, as shown in Figure <ref>. At the beginning of each step (DAY1), the factories place orders with the suppliers. The suppliers produce the parts and deliver them to the factories after several days, depending on the number of orders and the capacity of the suppliers. The factories assemble the parts to create items, which takes another day. On DAY2 of every step, the factories receive orders from the retailers. The factories fulfill the orders by shipping the items to the retailers as early as possible (DAY3–DAY7), and only the items shipped on the earliest possible day (DAY3) are considered on time. When the number of items that can be shipped is smaller than the number of orders, the remaining orders are pooled as back orders, which need to be fulfilled in subsequent steps. If the number of produced items is larger than the number of orders, the remaining items are stored at each factory as stock, which can be used to fulfill future orders.

The players in this environment are the factories. They decide how many parts to buy from each supplier in order to maximize their rewards. Each factory has two objectives: to maximize its own profit and, collectively, to fulfill the retailers' orders on time. We assume that each supplier has a different price and production capacity per day. The factories want to buy the parts from the cheaper supplier, but if they all buy from the same supplier, the number of orders surpasses its capacity and causes delays in parts delivery. This decreases the number of items shipped to the retailers on time. This is where the general-sum-game characteristic shows up – although some factories should order more from the expensive supplier to keep shipments on time, no factory wants to do so, as this reduces its profit.

§.§ RL formalization

We set up an RL problem with one agent assigned to each of the three factories. The action of agent i is to place orders with the two suppliers, so a_t^i is a two-dimensional integer vector. We defined the reward of agent i as r^i_t = w^p r^p,i_t + w^OFR r^OFR_t, where r^p,i_t is the profit of agent i, r^OFR_t is the Order Fulfillment Ratio reward defined in the next paragraph, and w^p, w^OFR are weight factors. The profit reward is defined as r^p,i_t = (1/C^p,norm) { N^ship,i_t · p^item - Σ_s a_t^i,s · p^parts,s - I^i_t · p^inventory - C^p,offset }, where N^ship,i_t is the number of items shipped, I^i_t is the number of inventory items at time t, a_t^i,s is the s-th component of the agent action vector a_t^i, p^item is the selling price of the item, p^parts,s is the price of the parts from supplier s, and p^inventory is the penalty fee imposed on each inventory item. The index s runs over all the suppliers, i.e. s = 0, 1. The constants C^p,norm and C^p,offset are the normalization factor and offset parameter, respectively, used to scale the reward range to approximately [0, 1] per step. The Order Fulfillment Ratio (OFR) is defined by [Number of orders on time]/[Number of total orders]. This is an important metric that affects customer satisfaction and the efficiency of the supply chain.
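The per-step profit term can be read directly off this definition. Below is a minimal sketch of that computation; the function and parameter names are ours, and the numerical values are placeholders rather than the paper's actual environment parameters.

```python
import numpy as np

# Sketch of the per-step profit reward r^{p,i} for factory i (names are illustrative).
def profit_reward(n_shipped, orders, inventory, p_item, p_parts, p_inventory,
                  c_norm, c_offset):
    """orders: parts ordered from each supplier s (a_t^{i,s}); p_parts: price per supplier."""
    revenue = n_shipped * p_item
    parts_cost = float(np.dot(orders, p_parts))    # Σ_s a_t^{i,s} · p^{parts,s}
    holding_cost = inventory * p_inventory         # I^i_t · p^{inventory}
    return (revenue - parts_cost - holding_cost - c_offset) / c_norm

# Placeholder numbers, only to show the call signature.
print(profit_reward(n_shipped=40, orders=np.array([30, 10]), inventory=5,
                    p_item=12.0, p_parts=np.array([4.0, 6.0]), p_inventory=0.5,
                    c_norm=400.0, c_offset=50.0))
```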
We consider a scenario where the OFR is calculated for the supply chain as a whole, and there is a target value T^OFR for the OFR. We then define the OFR reward r^OFR_t as r^OFR_t = 1 if OFR ≥ T^OFR, and r^OFR_t = 0 otherwise.

We defined the auxiliary state ŝ^i_t as a two-dimensional vector with values in the range [0, 1], and the auxiliary reward as r̂^i_t = ŝ^i_t-1 · a^i_t-1, which is the inner product of the auxiliary state and the agent's action at the previous step. This means that incentives are given to factory i for buying parts from the suppliers, proportional to the auxiliary state ŝ^i_t. With this definition, the auxiliary reward is calculated from the auxiliary state, and thus the manager's action is only to select the auxiliary states, a^M_t = ŝ^0_t ⊕ ŝ^1_t ⊕ ŝ^2_t ∼ π^M(s^M_t).

The raw observation for an agent is a 175-dimensional vector, which includes 105 variables on the suppliers' status, 45 variables on the factory's status, 25 variables on future demand estimates, and 1 variable that indicates the current timestep. The observation space for the manager is a 531-dimensional vector, which includes the observations of all three factories plus a 6-dimensional vector of the agents' actions in the previous step. The environmental parameters are shown in Table <ref>. We set the parameters so that supplier 0 is cheaper but has limited capacity, and supplier 1 is more expensive but has a larger capacity. The normalization and offset constants for the profit reward are tuned so that the reward range is approximately [0.0, 1.0] per step.

§.§ Training and Evaluation

Both the agents and the manager are trained with DDPG <cit.>. The actor and critic networks for the agents and the manager have two fully connected hidden layers with 128 nodes each. The outputs of the actor networks are converted to the range [0, 1] with a tanh function. The agents' actions are further converted to integers in the range [0, 99] by multiplying the output by 100 and rounding down. We train the agents under two frameworks: naïve MARL, where there is no manager, and our proposed framework with the manager. In both cases, we train the agents and the manager for 500 episodes with 10 different random seeds. We take the final 25 episodes of training to evaluate performance. We evaluate the mean scores of the agents and the manager, as well as their standard deviations.

§ RESULTS

Figure <ref> shows the average reward of the factories during training. Figure <ref> shows the plot for the setup without the manager taking action, and Figure <ref> shows the plot with the manager. These plots show that, with the manager, the profit reward (shown in blue) decreases but the OFR reward increases by more, which improves the final score, shown in red. Note that the decreasing score at the beginning of training in Figure <ref> is caused by the manager initially incentivizing the factories at random and then quickly decreasing the incentives. The mean scores of the last 25 training episodes are shown in Table <ref>. Without the manager, the average reward of the factories was 0.505; with the manager, it improved to 0.625, a 23.8% improvement. The manager's reward increased to 0.610 ± 0.051, a 20.1% improvement. The raw rewards, without considering the auxiliary rewards, improved to 0.617, a 22.2% improvement on average.
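For reference, the incentive computation and the action post-processing used in these experiments can be sketched as follows. This is our own minimal illustration with illustrative names; in particular, the tanh-to-[0, 1] rescaling is an assumption about how the conversion to integer orders is implemented.

```python
import numpy as np

# Sketch of the incentive and action-conversion details (illustrative names).
def auxiliary_reward(aux_state_prev, action_prev):
    """r̂^i_t = ŝ^i_{t-1} · a^i_{t-1}: incentive proportional to last step's orders."""
    return float(np.dot(aux_state_prev, action_prev))

def ofr_reward(on_time_orders, total_orders, target):
    """r^OFR_t = 1 if the supply-chain-wide OFR meets the target, else 0."""
    ofr = on_time_orders / max(total_orders, 1)
    return 1.0 if ofr >= target else 0.0

def to_integer_orders(actor_output):
    """Assumed rescaling: tanh output in [-1, 1] -> [0, 1] -> integer orders in [0, 99]."""
    scaled = (np.tanh(actor_output) + 1.0) / 2.0
    return np.minimum(np.floor(scaled * 100).astype(int), 99)
```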
We further investigate how adding the manager influences the factories' actions. Figures <ref> and <ref> show the factories' average actions during training without and with the manager. Without the manager, the factories mostly buy from supplier 0, but after adding the manager, the factories buy more from supplier 1. This contributes to improving the OFR significantly.

§ CONCLUSION AND DISCUSSION

In this paper, we tackled the problem of making self-interested agents cooperate in a multi-agent general-sum game environment. We proposed a method that adds an agent called a manager to a multi-agent environment to mediate agent interactions by assigning incentives to certain actions. We tested our method with a supply-chain management problem and showed that it increases the total profit of all the players. Although our experiments show that the manager's policy can be obtained with a simple reinforcement learning framework, one limitation is that we assume the agents are naïve learning RL agents. This is a strong assumption: in reality, agents may be more sophisticated and try to exploit the manager, or less adaptive and simply stick to their original policies. It is important to consider how the agent architectures impact the performance of our method.

§ DATA AVAILABILITY STATEMENT

The data that support the findings of this study are available on request from the authors. Requests for data should be addressed to [Author Name] at [Author's Email Address].

§ CONFLICT OF INTEREST STATEMENT

On behalf of all authors, the corresponding author states that there is no conflict of interest.
http://arxiv.org/abs/2409.02086v2
20240903173938
Noise-free comparison of stochastic agent-based simulations using common random numbers
[ "Daniel J. Klein", "Romesh G. Abeysuriya", "Robyn M. Stuart", "Cliff C. Kerr" ]
q-bio.QM
[ "q-bio.QM", "q-bio.PE", "stat.CO", "stat.ME" ]
Noise-free comparison of stochastic agent-based simulations using common random numbers
Daniel J. Klein1*, Romesh G. Abeysuriya2, Robyn M. Stuart1, Cliff C. Kerr1
1 Institute for Disease Modeling, Bill & Melinda Gates Foundation, Seattle, WA, USA
2 Burnet Institute, Melbourne, Victoria, Australia
Contractor on assignment
* [email protected]

§ ABSTRACT

Random numbers are at the heart of every agent-based model (ABM) of health and disease. By representing each individual in a synthetic population, agent-based models enable detailed analysis of intervention impact and parameter sensitivity. Yet agent-based modeling has a fundamental signal-to-noise problem, in which small changes between simulations cannot be reliably differentiated from stochastic noise resulting from misaligned random number realizations. We introduce a novel methodology that eliminates noise due to misaligned random numbers, a first for agent-based modeling. Our approach enables meaningful individual-level analysis between ABM scenarios because all differences are driven by mechanistic effects rather than random number noise. We demonstrate the benefits of our approach on three disparate examples. Results consistently show reductions in the number of simulations required to achieve a given standard error, exceeding 10-fold for some applications.

§ AUTHOR SUMMARY

We present new computational methodology that addresses a longstanding signal-to-noise problem in agent-based modeling that arises when comparing simulation outcomes. With the traditional approach that we and other modelers have used for decades, random draw misalignment between simulations results in high variance and implausible differences, complicating impact evaluation, parametric sensitivity, and scenario analysis. Our new method achieves perfect alignment of random draws between simulations, thereby preventing stochastic branching entirely. Similar ideas have been demonstrated for simple cohort models, but those techniques did not support key aspects we need in disease modeling, such as dynamic populations (births and in-migration specifically) and the agent-to-agent interactions needed for pathogen transmission. We tested our new methodology on three use cases and found it has many benefits, including dramatic reductions in the number of simulation replicates required for some applications. We believe that practitioners both within and beyond the field of computational epidemiology will benefit considerably from this improved approach to agent-based modeling.

§ INTRODUCTION

Within the field of computational epidemiology, computer models are used to guide decision making by predicting the future course of disease burden, assessing data gaps and the value of new information, and quantifying the potential impact of a diverse suite of possible interventions. The structure and level of detail represented within a disease model should be fit for purpose based on the motivating questions and available data. To this end, numerous modeling paradigms have been developed and leveraged, ranging from deterministic compartmental models to complex agent-based models (ABMs), which are inherently stochastic. Simulation-based analysis quantifying the impact of interventions or the sensitivity of key outcomes to input parameters comes from evaluating the difference between simulations across two or more scenarios.
These differences are straightforward to calculate for deterministic models, but significant challenges arise when evaluating differences between the outputs of stochastic models, including agent-based models specifically. The fundamental problem is that the difference between two simulations is composed of real (mechanistic) effects stemming from the change in model configuration plus stochastic noise. Configuration changes that result in small but meaningful differences in outcomes can be very challenging to quantify, as the stochastic random number noise dominates the signal. Additionally, purely beneficial interventions and directional parameter shifts, like introducing a vaccine, can appear to increase disease burden, challenging scientific communication. While such increases are certainly possible due to chaos-like mechanisms, traditional agent-based models over-estimate the frequency of such outcomes due to random number noise.

Several variance reduction techniques have been proposed in the literature to address this fundamental signal-to-noise problem in agent-based modeling and Monte Carlo simulation more generally <cit.>. To understand these approaches, consider a disease model configured with two different inputs yielding outcomes X and Y. The variance of the difference, Z, can be expressed as var[Z] = var[X] + var[Y] - 2cov(X,Y). To reduce the variance in the difference, it is possible to induce positive correlation between X and Y through the use of common random number seeds, a classic approach in simulation methodology <cit.>. In practice, however, the magnitude of the covariance term tends to be small relative to the variance terms despite the common random number seeds. While X and Y may be identical initially, for example before an intervention takes effect in the counterfactual, the first difference causes a significant loss of correlation, and the outcomes quickly lose covariance due to stochastic random number noise.

The noise observed in differences is the result of a model design flaw that is challenging to overcome. Specifically, most agent-based simulations in epidemiology use a single centralized pseudo-random number generator (PRNG) for all stochastic realizations. A PRNG outputs a stream of random numbers that is deterministic and reproducible given the seed. However, even if two simulations with slightly different inputs use the same random number seed, as soon as one simulation uses a random number that the other does not, all subsequent stochastic realizations can differ. The sequence of random numbers is the same, but the realizations are put to different purposes within the model. These differences manifest as noise when computing results. As a consequence, the impact of interventions and parameter changes can only be evaluated at the population level, even though the model representation is at the individual agent level.

Overcoming this fundamental limitation for general-purpose agent-based simulation modeling has proven to be challenging; however, one promising approach is to use common random numbers (CRN) <cit.>. In a CRN-based approach, random numbers are still random, but the draw for each decision, by each agent, at each time step is perfectly matched between simulations. In theory, the random number alignment in CRN eliminates stochastic noise from the difference of simulations, leaving only real (mechanistic) effects.
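The variance identity above can be illustrated with a toy experiment that is entirely our own (it is not one of the paper's models): a simple binomial outcome is compared between a baseline and a slightly perturbed configuration, once with shared seeds and once with independent seeds.

```python
import numpy as np

# Toy illustration of var[X - Y] = var[X] + var[Y] - 2 cov(X, Y).
def outcome(seed, effect=0.0, n=1000):
    rng = np.random.default_rng(seed)
    return (rng.random(n) < 0.1 + effect).sum()   # e.g. number of events among n agents

seeds = np.arange(500)
x = np.array([outcome(s) for s in seeds])                               # baseline
y_common = np.array([outcome(s, effect=0.005) for s in seeds])          # same seeds
y_indep = np.array([outcome(s + 10_000, effect=0.005) for s in seeds])  # fresh seeds

# Shared seeds raise cov(X, Y) and shrink the variance of the estimated difference.
# In this non-branching toy, seed sharing gives full alignment; in a real ABM the
# alignment breaks at the first divergent draw, which is the problem CRN addresses.
print(np.var(x - y_common), np.var(x - y_indep))
```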
CRN techniques date back to Monte Carlo simulation in the 1950s, often applied to simple systems for which reusing a common random number seed was sufficient to achieve “full” common random number coherence <cit.>. More recent applications of CRN include modeling of breast cancer <cit.>, health care systems and policy analysis <cit.>, and cost-effectiveness modeling of diarrheal disease control <cit.>. While modeling applications using CRN consistently demonstrate benefits, no literature we could find solves CRN in general for agent-based simulation. The examples above suffer from two major limitations. First, the populations are closed in the sense that new agents cannot be born or otherwise added to the simulation. Second, agents are not able to interact with each other.

Common random numbers enable counterfactuals to be matched perfectly to baseline simulations. Perfectly matched counterfactuals have been demonstrated for compartmental models in epidemiology <cit.>, but the methods do not apply to agent-based models. We introduce new methodology for general-purpose agent-based disease modeling that completely eliminates unwanted stochastic noise due to misaligned random numbers from the difference between simulations, thereby revealing the real signal. Our approach is based on common random numbers and results in meaningful reductions in the variance of the difference between simulations. To the best of our knowledge, this is the first time that perfectly matched counterfactuals have been achieved in general-purpose agent-based modeling.

§ MATERIALS AND METHODS

We achieve common random number alignment in an agent-based multi-pathogen co-transmission framework using a number of innovations. This framework follows design patterns from specific disease-vertical models we have developed previously, including <cit.>, <cit.>, and <cit.>, but is intended to enable rapid composition of one or more health and/or disease modules and transmission networks. Importantly, the framework is a fixed-time-step agent-based simulation framework that conceptually represents the agent population as a matrix. The matrix is composed of one row for each agent and has columns representing properties like age, sex, unique identifier (UID), pathogen-specific infection status, and much more. The matrix initially has N rows, but will change dynamically over time as agents are born and die. The framework is written in Python and available as open-source software <cit.>. Here we describe each component of our approach. While a full implementation is available in the framework, the approach can be adapted to any agent-based model.

§.§ Separate pseudo-random number streams for each decision

Within agent-based modeling, a decision is any step that requires a random number to be drawn from a distribution. Typically these decisions address questions like: Does the agent get infected? How long is the incubation period? Will the infection be severe? Does the individual receive a vaccine on this time step? Etc. The results of these decisions govern the evolution of the simulation. We assign an independent PRNG stream to each and every decision, but note that each stream can be used to sample values for many or all agents. These decision-specific PRNGs have a unique name string that is hashed to create an integer offset to the single user-supplied random number seed. Thus, decisions that are shared between two simulations will receive the same seed, provided the overall random number seed is shared.
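A minimal sketch of this naming-and-hashing scheme is shown below. It is our own illustration rather than the framework's actual code; decision_rng, the SHA-256 hash, and the 32-bit offset arithmetic are assumptions about one reasonable way to implement the idea.

```python
import hashlib
import numpy as np

# Sketch: one independent generator per named decision, seeded by hashing the
# decision name into an integer offset to the single user-supplied seed.
def decision_rng(decision_name: str, base_seed: int) -> np.random.Generator:
    digest = hashlib.sha256(decision_name.encode()).digest()
    offset = int.from_bytes(digest[:4], "little")        # stable across runs and platforms
    return np.random.default_rng((base_seed + offset) % 2**32)

# Two simulations sharing a base seed get identical streams for shared decisions.
rng_a = decision_rng("incubation_period", base_seed=42)
rng_b = decision_rng("incubation_period", base_seed=42)
assert np.allclose(rng_a.random(5), rng_b.random(5))
```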
Within the framework, each independent PRNG stream is implemented as a NumPy random generator <cit.>.

§.§ Time step-dependent PRNG stream jumping

PRNG stream “jumping” efficiently advances the state of the generator as if a large number of draws had been sampled. On each time step, k, we begin by resetting each PRNG stream to its initial state. Then, each PRNG stream is jumped k times. These jumps ensure that each decision-specific stream is in a new unique state that depends only on the initial state and the simulation time step.

§.§ At most one call to each PRNG per time step

On each time step, k, our approach allows at most one call to each decision-specific PRNG stream. The call may request a large sample size, for example one realization for every agent in the simulation. Limiting the number of calls to at most one per time step ensures that draws come from a stream that starts from a known state that will be matched between simulations.

§.§ Slot-based assignment of random number draws

Each agent in the population is assigned a “slot” that is used to index into an array of random numbers drawn from each decision-specific PRNG stream on each time step. Because the slot is used as an index into an array, it must be a non-negative integer. During initialization of the population of N agents, a column vector of unique identifiers (UIDs) is created and forms the index of the agent matrix. Initial UID values are assigned linearly, 0, 1, …, N-1. The slot vector for this initial population is simply a copy of the UID vector, so that agent i will receive slot i.

On time step k, let 𝒮_d be the PRNG stream associated with decision d. The stream 𝒮_d has been initialized and jumped according to the rules described above. If any agents are faced with decision d on time step k, a vector of M random draws, r⃗, will be sampled from the stream. The random draws are assigned to individual agents by indexing, so that the agent with UID i receives draw r_i = r⃗[slot_i]. Because slots are zero-based indices, we set M = 1 + max(slot_i) for i in the set of agent UIDs faced with decision d on this time step.

We make a distinction between UIDs and slots because new agents may be born into the simulation. While UIDs are assigned sequentially as new agents are added to the population, slots cannot be assigned sequentially because two simulations for which we aspire to achieve CRN may have differing numbers of births. Instead, we determine the slot for newborn agents based on a random number generated by one or both of the biological parents. Specifically, one of the decisions for which we allocate a PRNG stream is, “What will be the slot for agents born on this time step?” Using the slot of the selected parent, we sample a new slot for each linked newborn from a discrete uniform distribution with a lower bound equal to N, the initial population size, and an upper bound equal to int(qN), where q > 1 is a user-configurable scalar multiplier with a typical value in the range of 2 to 10.

Because slots are drawn from a discrete uniform distribution, there is a chance that two or more agents could receive the same slot. Two agents with the same slot, facing the same decision, on the same time step will receive the same random realization. The chance of such a collision can be reduced to near zero by increasing q. However, increasing the number of available slots comes at the cost of increasingly large draw sizes, because the number of random numbers drawn must be large enough to accommodate all requested slots.
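Putting the last three rules together, the sketch below shows one way a per-decision stream could be reset, jumped, and consumed through slots on each time step. This is our own illustration; we assume NumPy's PCG64 bit generator (whose .jumped() method provides stream jumping), which is not necessarily the generator the framework actually uses.

```python
import numpy as np

# Sketch of per-time-step jumping plus slot-indexed draws (illustrative, not the
# framework's implementation).
class DecisionStream:
    def __init__(self, seed: int):
        self.initial = np.random.PCG64(seed)      # "resetting" amounts to reusing this state

    def draws_for(self, timestep: int, slots: np.ndarray, dist="random") -> np.ndarray:
        # Jump the pristine state `timestep` times, then make the single allowed call.
        rng = np.random.Generator(self.initial.jumped(timestep))
        n = int(slots.max()) + 1                  # draw enough to cover the largest slot
        values = getattr(rng, dist)(n)            # e.g. rng.random(n)
        return values[slots]                      # agent with slot s receives values[s]

stream = DecisionStream(seed=42)
slots = np.array([0, 5, 21])                      # slots of the agents facing this decision
r = stream.draws_for(timestep=3, slots=slots)     # matched between simulations
```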
When creating newborn agents, all properties beyond the slot, such as birth sex and any other user-configured covariates, are determined using separate decision streams with stochastic realizations indexed by the slot of the newborn. Thus, two agents born on the same time step who happen to receive the same slot will receive identical properties at birth. Some outcomes experienced by these “twin” agents will be identical, like the timing of demographic events, but other outcomes like network edge formation and disease acquisition, sequelae, and onward transmission will differ.

Example: Consider a population of N=10 agents in which agents 0, 5, and 8 (these numbers refer to UIDs) have assigned slots 0, 5, and 21. These agents are newly infected, and now we seek to determine the prognosis for each from a Weibull distribution. We draw 22 Weibull-distributed random numbers and only use the draws at the 1st, 6th, and 22nd positions, corresponding to the zero-based slots associated with these agents.

§.§ Pairwise random numbers

It is often necessary in disease modeling to have a random draw that acts on a pair of individuals, rather than on each agent individually, for example to determine whether one agent infects another. For N agents, there are N(N-1)/2 possible pairwise interactions (i.e. edges), although networks in practice tend to be sparse. Naïvely, one would sample an independent random number for each edge in the network. However, such an approach is not CRN safe because the addition or loss of a single agent (or interaction) will cause the random numbers for all subsequent pairs to change. The innovation we make here is to calculate a uniformly distributed random number u_ij, used for each pair of agents i and j, based on random numbers drawn by agents i and j. Specifically, let u_i and u_j be random 64-bit unsigned integers sampled for agents i and j using the techniques described above. We then apply a deterministic transformation, f(u_i, u_j), to yield a uniformly distributed random realization u_ij ∈ [0,1). After exploring several alternatives and checking for bias (see a:pairwise), we settled on the following transformation: u_ij = xor(u_i * u_j, u_i - u_j) / M_64, where u_i, u_j ∼ U(0, M_64) are random 64-bit integers and M_64 is the largest 64-bit integer.

§.§ Network edge formation

Pathogen transmission within the framework occurs on the edges of a multi-layer dynamic transmission network. Nodes in this network represent individual agents and edges represent contacts. Edges are dynamic and therefore may form and dissolve over time. The network is multi-layer in the sense that users can group edges into “layers” representing place (e.g. home, school, work, community), transmission route (e.g. airborne, sexual, environmental), relationship type (e.g. marital, casual, commercial), or other factors. One of the most challenging aspects of achieving CRN in a dynamic transmission model is maintaining coherence in network connections. Many common network algorithms are not “CRN safe” in the sense that the presence or absence of even just one additional agent can cause all new connections formed on that time step to differ. We have identified three network algorithms that maintain common random number coherence despite possible changes in the number of agents (nodes) available for connections due to birth, death, or other reasons. The three methods described here differ significantly in their capabilities and performance scaling.
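Before turning to the three network algorithms, the XOR transform introduced above can be sketched directly with 64-bit modular arithmetic. The following is our reading of that formula (names and normalization details are ours); checking it for bias, as done in S1 Appendix, would still be required before use.

```python
import numpy as np

# Sketch of the XOR pairwise transform: u_ij = xor(u_i * u_j, u_i - u_j) / M_64,
# computed with 64-bit wraparound arithmetic.
M64 = np.uint64(0xFFFFFFFFFFFFFFFF)

def pairwise_uniform(u_i: np.ndarray, u_j: np.ndarray) -> np.ndarray:
    """u_i, u_j: uint64 draws owned by agents i and j; returns floats in [0, 1)."""
    with np.errstate(over="ignore"):              # modular overflow is intentional here
        mixed = np.bitwise_xor(u_i * u_j, u_i - u_j)
    return mixed / float(M64)

rng = np.random.default_rng(1)
u = rng.integers(0, np.iinfo(np.uint64).max, size=10, dtype=np.uint64)
print(pairwise_uniform(u[:5], u[5:]))             # one shared draw per agent pair
```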
§.§.§ Erdős-Rényi Dynamic Random Network

In an Erdős-Rényi graph, each pair of nodes is connected with probability p. Ordinarily, N(N-1)/2 random numbers would be used in assessing the existence of an undirected edge between each pair of agents. To create a CRN-safe Erdős-Rényi network, we instead use one of the pairwise random number methods described in s:pairwise (other than Modulo, which is biased). With this approach, edges can have a defined duration or can be recreated on each time step. The loss or addition of agents will not affect other network edges. We note that this approach could be generalized. For example, the probability, p, of an edge could depend on agent properties, simulation time, or other factors.

§.§.§ Dynamic Disk Graph

To create a dynamic disk graph, we initially place each agent randomly on a two-dimensional unit square using the techniques described above. Edges are created between agents that are separated by a distance of r or less, where r is a scalar radius to be determined by the user. Such a network can be made dynamic by moving the agents on each time step. Here, we have explored two simple approaches. The first is a random walk in which each agent samples a new position from a normal distribution centered at the current position, again using the techniques above: (x_t+dt, y_t+dt) ∼ 𝒩(μ = (x_t, y_t), Σ_t), for any 2D covariance matrix Σ_t. The second approach ascribes a constant velocity, v, and orientation, θ, to each agent and calculates new positions as a forward step of length v·dt in direction θ: (x_t+dt, y_t+dt) = (x_t + v cos(θ_t) dt, y_t + v sin(θ_t) dt), where dt is the time step of the model. With each approach, agents wandering outside of the unit square are reflected back in. With the constant-velocity method, the orientation is updated so that agents “bounce” off walls. Other motion updates are possible, each creating a different dynamic disk network that will not be altered by the addition or loss of agents or by other changes in agent network participation. We note that this concept could be generalized in many ways while retaining the CRN-safe property. For example, the user could move away from a two-dimensional square or use a different distance function.

§.§.§ Topological Embedding

While the previous approaches create simple random networks that are safe for use with CRN, they lack the capability to create detailed assortative edges, a limitation the “topological embedding” approach seeks to overcome. Our approach begins by embedding agents seeking an additional connection in a d-dimensional normed vector space. Denote by x⃗_i ∈ ℝ^d the position of agent i within this space. After embedding, we form a distance matrix D with entry i, j computed as the distance between the respective agents, D_ij = ‖x⃗_j - x⃗_i‖. Pairs are assigned by solving the linear sum assignment problem <cit.> using D as the cost matrix. As the framework is written in Python, we use a linear sum assignment routine from a standard scientific computing library <cit.>. The process of embedding each agent may be deterministic. For example, each agent may embed at a position corresponding to their current age, sex, and/or other properties like geolocation. Alternatively, a user may employ a stochastic embedding, for example each agent could embed at a position determined by a random draw. When the embedding is stochastic, a purpose-specific pseudo-random stream is used in combination with slotting, as described above, to ensure the resulting draws are consistent between realizations.
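The sketch below embeds two groups of agents and pairs them with a linear sum assignment solve, as just described. It is our own illustration; scipy.optimize.linear_sum_assignment is an assumed choice of solver, not necessarily the framework's own call, and the age-based embedding is only an example.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

# Sketch of the "topological embedding" pairing for a bipartite (e.g. female/male) layer:
# embed each agent seeking a partner, then minimize total embedding distance.
def embed_and_pair(pos_a: np.ndarray, pos_b: np.ndarray):
    """pos_a: (n_a, d) embeddings of one group; pos_b: (n_b, d) embeddings of the other."""
    cost = cdist(pos_a, pos_b)                  # D_ij = ||x_j - x_i||
    rows, cols = linear_sum_assignment(cost)    # maximal, cost-minimizing matching
    return list(zip(rows, cols))                # index pairs; map back to UIDs as needed

# Example: embed by age so that nearby ages are preferentially matched (assortative by age).
ages_f = np.array([[22.0], [30.0], [41.0]])
ages_m = np.array([[25.0], [39.0], [60.0], [23.0]])
print(embed_and_pair(ages_f, ages_m))           # surplus agents are simply left unmatched
```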
The linear sum assignment forms a maximal pairing that minimizes the sum of the costs. Here, a maximal pairing ensures that the cardinality of the match is as high as possible; no agents that could be matched are left unmatched. The cost minimization means that nearby pairs of agents are more likely to be matched than distant agents. These properties create an ideal situation for common random number alignment between two simulations, as changes, like the addition or removal of agents seeking connections on any given time step, create a perturbation that is not global but rather local with respect to the embedding. Finally, note that bipartite networks can be generated using a distance matrix with rows representing agents of one type (e.g. women) and columns representing agents of the other type (e.g. men). We use this approach in an example below to simulate a heterosexual HIV transmission network. The downside of this approach is performance, as the linear sum assignment algorithm is O(N^3) in N, the number of agents seeking a new contact.

§.§.§ Static network

Any static network is safe for use with common random numbers. These networks are static in the sense that edges between nodes do not change over time, and they can therefore be created in advance of running the simulation. While agents can be removed from the simulation simply by removing adjacent edges, newborn agents will not be connected to any other agents and therefore would not acquire new infections.

§.§.§ Complete network

Another CRN-safe option is the complete graph, with edges added and removed as agents enter and leave the simulation. Again, no random numbers are used in producing this network.

§.§ Pathogen transmission

A second significant challenge in achieving CRN in epidemiological models comes at the stage of pathogen transmission. Naïvely, a network consisting of e edges would use 2e random draws to determine pathogen incidence, one for each possible direction of each edge. However, this approach is clearly not CRN safe, as the loss of any one edge would shift the random draw realizations for all subsequent edges. We have identified several solutions to overcome this challenge. We describe an acquisition-based approach in a:acquisition. The primary approach we have implemented takes advantage of the pair-specific random numbers described in s:pairwise. This method works just like the naïve approach, but substitutes a pair-specific random realization for a random draw from a centralized generator.

§ RESULTS

To identify use cases within computational epidemiology for which common random numbers provide a meaningful advantage over the traditional centralized approach, we present results from three examples. First, maternal postpartum hemorrhage prevention in a model with births and deaths, but no transmission. Second, vaccination in a susceptible-infected-recovered (SIR) transmission model with a closed population and static network. Finally, voluntary medical male circumcision (VMMC) impact on human immunodeficiency virus (HIV), demonstrating the combination of an open population and dynamic transmission networks. All results come from <cit.> (v1.0), an open-source general-purpose health and disease modeling framework. The methods to achieve CRN, as described in s:methods, have been designed into the framework. At the same time, the framework retains the ability to disable CRN, reverting to a single centralized random number generator for comparison purposes.
We present results comparing the following two approaches to generating random numbers. * Centralized: All random numbers used during the simulation come from a single (centralized) random number generator, as is typical in modern agent-based simulation modeling in epidemiology. The stream is NumPy's default Mersenne Twister. We use the same random number seeds for each scenario to reduce variance. * CRN: Random numbers for each decision, by each agent, at each time step use the techniques presented in this paper to achieve common random number alignment. The slot scale parameter, q, is set to its default value of 5 for the PPH example and 10 for the SIR and VMMC examples. Transmission is achieved using the XOR method of pairwise pseudo-random numbers, as described in Table <ref>. §.§ Maternal & Child Health: Prevention of postpartum hemorrhage In Sub-Saharan Africa, the maternal mortality ratio is estimated to be around 500 deaths for every 100,000 live births <cit.>. Postpartum hemorrhage (PPH), defined as losing at least half a liter of blood within 24 hours of delivery, is a leading cause accounting for about 25% of maternal deaths in this region <cit.>. A recent clinical trial has demonstrated that a package of interventions can reduce a composite measure of severe outcomes from PPH by 60% <cit.>. Further, when a mother dies, her newborn baby has a significantly reduced chance of surviving the first 42 days <cit.>. Estimates of infant mortality vary, but sources indicate that roughly half of infants without a mother will die within this period <cit.>. To explore the potential benefits of averting PPH, we apply our approach to common random numbers on a synthetic population resembling sub-Saharan Africa. These results demonstrate the ability of the CRN-based approach to simulate vital dynamics, births and deaths, as made possible by the “slots” described in s:slots. Each simulation begins in 2015 and ends in 2030, corresponding to the end-year of the Sustainable Development Goals. The initial population age structure, age- and year-specific fertility rates, and age-, year-, and sex-specific mortality rates are based on data from UN World Population Prospects <cit.>. For demonstration purposes, we suppose a 60% effective PPH-prevention intervention package was delivered at 10% or 90% coverage starting in 2020, 5 years after the beginning of the simulation. We assume the baseline rate of maternal mortality due to PPH is 1 per 1,000 live births and that infants who lose their mother to PPH experience a one-time 50% chance of death that acts in addition to the baseline mortality rate. Mothers saved by the PPH-prevention package have a knock-on effect of increasing the survival of their newborn. Each simulation contains 100,000 synthetic agents and we use 250 replicates by sweeping the random number seed from 0 to 249. The simulation time step is set to one year. We compare centralized and CRN approaches to random number generation. Results displayed in the top row of Fig <ref> show time series trends of cumulative maternal deaths for intervention coverage levels of 10% and 90% in addition to the reference, which does not contain any PPH-prevention. The results for the two approaches to random number generation are indistinguishable, suggesting that pseudo random numbers generated by both centralized and CRN approaches are indeed random. The second row of Fig <ref> shows the differences between the indicated PPH-prevention coverage level and the reference for each of the 250 replicates. 
In computing these differences over time, we have paired up common random number seeds so that seed s of each coverage level is compared against seed s of the reference. We observe that between-simulation differences are significantly less variable using the CRN approach compared to the traditional centralized approach. The CRN simulations show a clear decrease in deaths, with larger magnitude for the higher coverage level. Differences using the centralized approach are much more variable, and the trend at 10% coverage is challenging to discern.

In addition to a significant reduction in variance, the CRN approach demonstrates another advantage. Because differences are realized mechanistically at the individual level instead of in aggregate at the population level, a purely beneficial intervention like this PPH-prevention package always results in fewer maternal deaths and more live births, as illustrated in the bottom row of the results figure. The same cannot be observed for the centralized approach, because real differences are masked by stochastic noise.

The variance of the difference between simulation configurations can be reduced by increasing the covariance term in Eq <ref>. The third row of Fig <ref> shows how the Pearson correlation coefficient (PCC), a measure of covariance, varies as a function of time for several output channels; higher values indicate greater correlation. The CRN approach yields correlation that is significantly higher than the centralized approach at all time points. The drop in PCC for the centralized approach begins early in the simulation, following the first random draw difference between the baseline and counterfactual scenarios.

Comparing the 10% and 90% coverage levels, Pearson correlation with the reference scenario is higher in the 10% coverage scenario than in the 90% coverage scenario with CRN. In contrast, the centralized approach appears to be insensitive to coverage. Viewing postpartum hemorrhage as a relatively rare event (1 per 1,000 live births), 10% coverage of the PPH prevention package with 60% efficacy affects relatively few women and their children, and therefore simulation results for this low-coverage scenario should closely resemble the reference scenario. In other words, the mechanistic signal is small, and thus correlation between the 10% coverage and reference scenarios should be high. In contrast, the 90% coverage level affects many more women and their babies, and thus correlation between baseline and counterfactual simulations should be lower, as demonstrated in the CRN results.

The CRN approach shows lower correlation for maternal deaths than for births, especially at 90% coverage. This too makes sense in light of the fact that the intervention directly averts maternal deaths, whereas any change in births is an indirect consequence of women who received the PPH prevention package surviving to a subsequent pregnancy. Over longer periods of time, newborns that survived because their mother received the PPH intervention could themselves contribute new births to the population.

Finally, we can consider the standard error (SE) and the potential number of simulations saved by CRN in achieving a given SE. When considering maternal deaths at the final time point, we find that the CRN approach yields 6.2- and 1.75-fold reductions in standard error for the 10% and 90% coverage levels, respectively.
Due to the square-root relationship between standard error and the number of simulations (a k-fold reduction in SE corresponds to a roughly k²-fold reduction in replicates), this amounts to 38- and 3-fold savings in the number of simulations that would need to be run to achieve a specified level of standard error. These savings are much more substantial when considering the number of births that occur. Here we find 14,000- and 1,800-fold reductions in the number of simulations for a given standard error, again for the 10% and 90% coverage levels. Additional correlation results for this PPH example can be found in a:PPHcor.

§.§ Infectious disease transmission

While simulations using CRN will always have less stochastic pseudo-random number noise than the traditional centralized approach, the actual benefits of CRN may not be meaningful in situations where in-simulation mixing is large and/or the real signal is small compared to the between-replicate variation. To investigate the limits of the advantages of CRN, we present results on a susceptible-infectious-recovered (SIR) infection process evolving on a static population with static network connections, and consider the impact of introducing a vaccine.

Simulation results presented in this section use SIR dynamics with an exponentially distributed duration of infection with a mean of 30 days. The infection fatality ratio is set at 5%. We default to a “power law” network topology with parameter m=1. Static networks are CRN-safe, recall s:static_net. We select a time step of one day and set the default transmissibility parameter, β, so that there is a 20% chance of transmission per day between each connected pair of infected and susceptible agents. The size of the simulated population is varied. Simulations start on the first day of 2020 and end 6 months later. The simulated vaccine, which acts to reduce susceptibility to infection acquisition, is distributed on day five at 5% or 90% coverage and is assumed to have a constant efficacy of 70%.

Results comparing cumulative incidence (attack) under the centralized and CRN approaches for 10, 100, 1,000, and 10,000 agents in the reference scenario are shown in Fig <ref> (top row). As in the PPH example, both approaches to random number generation yield similar aggregate results. Cumulative incidence of infection is characterized by a high level of quantization and variance for 10 agents, with increasing resolution and decreasing variance as the number of agents is increased, as expected. The middle row of Fig <ref> shows differences between baseline and 90% coverage, matched by random number seed. The CRN approach appears to have lower-variance differences, and differences shrink with an increasing number of agents.

To better understand the impact of the population size on the value of CRN, we again turn to the time evolution of the Pearson correlation coefficient; see the bottom row of Fig <ref>. These panels clearly show the benefits of the CRN approach. At all time points, the CRN approach results in higher correlation than the traditional centralized approach. Correlation increases with the number of agents. The benefit of CRN over centralized, as quantified by the difference in PCC, narrows as the number of agents increases. But benefits are still apparent at 10,000 agents, especially for the 5% coverage scenario. As with the PPH example above, correlation is higher at 5% coverage than at 90% coverage, particularly for CRN, due to the smaller overall perturbation to the system and the ability of the CRN approach to avoid loss of correlation due to stochastic noise.
The use of common random numbers results in variance reduction, which can be quantified by the fold-reduction in the number of simulations that need to be run to achieve a given standard error level. Here we find that at 5% coverage, the use of CRN yields a more than 10-fold reduction in the number of simulations, with nearly 30-fold savings realized for the smallest population size. The reduction in the number of simulations is smaller for the 90% coverage level due to the larger signal produced by the intervention. Here we find a modest savings of about 20%. Additional results generated by varying the topology of the SIR network are presented in a:SIR_topology.

§.§ HIV prevention via voluntary medical male circumcision

Previous results have explored a dynamic population without transmission (PPH) and a static population with transmission (SIR). We now present results leveraging all aspects of our CRN solution through a simulation of human immunodeficiency virus (HIV) that involves births, deaths from natural and disease causes, and a dynamic heterosexual disease transmission network. A recent study conducted by the HIV Modelling Consortium evaluated the cost-effectiveness of a 5-year continuation of voluntary medical male circumcision (VMMC) as compared to discontinuation of the service over 5-, 20-, and 50-year horizons <cit.>. VMMC is estimated to have approximately 60% efficacy in reducing HIV acquisition in men, but funding for VMMC programming varies. Motivated by this real-world scenario analysis, we implemented a VMMC scenario analysis based on a simple HIV module complete with an age-stratified heterosexual transmission network, mother-to-child transmission, age/sex/year-specific demographics, and temporal scale-up of antiretroviral treatment (ART) and VMMC. HIV prevalence, ART, and VMMC trends were roughly calibrated to reflect the epidemiological context in sub-Saharan Africa. Simulated circumcision typically occurs around the time of sexual debut, and coverage increases linearly to 45% from 2007 to 2020. The baseline scenario discontinues VMMC in year 2020, whereas the intervention scenario continues VMMC services through 2025. Results come from simulations with an initial population of 10,000 agents spanning from 1980 to 2070 with a one-month time step and 500 replicates.

Model outputs showing HIV prevalence, coverage of VMMC, the fold reduction in the number of simulations required to achieve a given standard error, and the distribution of infections averted at 5-, 20-, and 50-year time horizons are presented in Fig <ref>. As with previous examples, absolute outputs like HIV prevalence and the coverage of VMMC do not differ visually between the centralized and CRN approaches, so the top panels display results from the CRN simulations. The benefit of the CRN approach is most significant over short time horizons. We find that the fold reduction in the number of simulation replicates needed to achieve a given standard error peaks approximately 3 years after the beginning of the intervention. The peak values are approximately 75- and 125-fold reductions in the number of simulations required. Benefits of using CRN fall from these peak levels over time as the impact of the VMMC intervention eventually affects most agents in the system. At the 5-, 20-, and 50-year horizons, the fold reductions are 24, 2.3, and 1.4 for infections averted and 95, 5.7, and 1.6 for deaths averted.
As a final note, we observe in the bottom row of panels in Fig <ref> that a significant number of replicates generated using the centralized generator have negative infections averted. This result would suggest that continuation of the VMMC intervention somehow results in more infections than would have occurred in the scenario where VMMC is discontinued. Of course, this result is spurious, an artifact of random number noise. Indeed, results from the CRN generator show very few replicates with negative infections averted. Unlike the PPH example above, in which agents did not mix, here it is possible for the VMMC continuation scenario to result in more infections, even with CRN, due to chains of events that can occur mechanistically as a result of the intervention.

§ DISCUSSION

We present a new methodology for agent-based disease modeling that achieves common random numbers (CRN) to enable precise individual-level comparison between simulations, a critical step towards eliminating unwanted noise when evaluating intervention impact and parameter sensitivity. With our CRN-based approach, two simulations on the same population are comparable at the individual level, and all differences are due to the mechanistic action of the configuration change. Results show that our approach achieves CRN and always outperforms the traditional centralized approach to random number generation.

The postpartum hemorrhage prevention example demonstrated dramatic increases in signal to noise in an open, but non-interacting, population. We observed up to a four-order-of-magnitude reduction in the number of simulations required to achieve a given standard error compared to the centralized approach, a huge savings. Results are also easier to communicate, as this purely beneficial intervention cannot possibly result in increased deaths when using the CRN approach.

Transmission examples with SIR dynamics on a static population confirmed that the CRN-based approach is never worse, and revealed that the greatest gains come when the “signal” (e.g. the effect size of the intervention) is small compared to between-run “noise” (variance). Gains were more significant for smaller population sizes, but persisted even in a well-mixed simulation of 10,000 agents, a situation where careful use of random numbers might not be expected to yield benefits. When exploring different network topologies with SIR dynamics, we were surprised to see larger gains in Pearson correlation with the faster-mixing topology compared to the three other network structures. This power-law network produces high-variance outputs as a result of the influence of a relatively small number of high-degree nodes. Individual-level alignment between simulations, as enabled by CRN, ensures better alignment of infections reaching high-degree nodes, and therefore more significant variance reduction.

For decades, agent-based models have suffered from a signal-to-noise problem. It has been challenging to quantify the impact of interventions reaching select populations and of the small parameter changes used in sensitivity analysis. These challenges may have led to cost-effective interventions being overlooked. Further, modeling results have confused stakeholders with counter-intuitive findings in which purely beneficial interventions result in worse outcomes for some simulations. Negative and near-zero impact results also complicate cost-effectiveness calculations because the incremental impact appears in the denominator.
Our approach is the first to achieve common random number alignment in agent-based health and disease modeling. Our approach fully eliminates stochastic noise due to misaligned random number realizations. What remains is purely mechanistic signal that can be audited in the sense that every difference in model outputs can be traced back to a physical change in process or parameter value. Our approach has several limitations. First, model re-engineering may be required to retrofit an existing agent-based model with the methods described in the Methods section. Second, the resulting model code could be more challenging to use and modify. In traditional modeling, a user could call the system “rand” function, which accesses a centralized generator; here additional care must be taken by users when evaluating stochastic decisions. While random number generation typically takes a small percentage of overall simulation time, our slot-based approach to random numbers draws many more random numbers than are actually used. While this seems wasteful and does reduce model performance, random number generation is not a significant performance bottleneck in our experience. Instead, we have found the embedding network has O(N^3) scaling that dramatically affects performance with large N. Users seeking more performance or larger population sizes could use the disk or algorithms we described. The performance of these algorithms is assessed in S3 Appendix. Another possible limitation may come from how “slots” are assigned to agents. Slots are used to map random number realizations to agents, and there is no guarantee that slots are unique. Agents sharing a slot will receive the same random realizations, possibly leading to undesirable correlations. Our approach to choosing slots allows users to trade off the probability of repeated slots with the number of random numbers that are generated for each decision. Users of these methods should conduct sensitivity and validity analyses to balance this trade-off. Not all agent-based modeling applications benefit equally from our approach. Interventions with large effect sizes or large changes to input parameters generate a large signal for which the benefits of careful random number alignment are less meaningful in practice. Users should weigh the benefits of CRN against performance and complexity considerations. Finally, while we conducted a variety of simulation experiments, our results explore a relatively small corner of the space of all agent-based modeling applications. We expect our approach to outperform the traditional centralized approach in all applications, but acknowledge our results are limited in this regard. § SUPPORTING INFORMATION *Software and analysis code availability Methods described in this article have been implemented in Starsim, which is available as open-source code on GitHub <cit.>. Results were generated using v1.0.1 of the framework. Analysis code is available in a GitHub repository online at <https://github.com/starsimhub/crn_paper>. *S1 Appendix Pairwise random numbers We explored several transformation functions designed to create a single uniform random number, u_ij, from the random numbers produced by each agent in a pair, u_i and u_j. The approaches we considered are summarized in Table <ref>. Here, M_64 and M_32 are the maximum 64- and 32-bit unsigned integer values. The Modulo method computes the sum of two uniformly distributed random numbers modulo 1, the result of which is uniformly distributed.
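For concreteness, the sketch below implements the Modulo method just described together with the Middle Square and Bitwise XOR variants detailed in the following paragraphs. It is illustrative only: the exact bit widths, masking, and normalization are assumptions based on this summary, not the framework's reference implementation.

```python
import numpy as np

M32 = 2**32 - 1   # maximum 32-bit unsigned integer
M64 = 2**64 - 1   # maximum 64-bit unsigned integer

def modulo(u_i: float, u_j: float) -> float:
    # Sum of two uniform [0, 1) draws, wrapped back into [0, 1).
    return (u_i + u_j) % 1.0

def middle_square(a_i: int, a_j: int) -> float:
    # Middle 32 bits of the product of two 64-bit integer draws, normalized by
    # the maximum 32-bit value (the exact bit positions are an assumption).
    prod = (a_i * a_j) & M64
    return ((prod >> 16) & M32) / M32

def bitwise_xor(a_i: int, a_j: int) -> float:
    # XOR of the product with the (absolute) difference of the two draws, then
    # normalized (the masking and use of abs() are assumptions).
    return (((a_i * a_j) ^ abs(a_i - a_j)) & M64) / M64

# Example: derive one per-edge uniform from two per-agent draws.
rng = np.random.default_rng(42)
u_i, u_j = rng.random(), rng.random()
a_i, a_j = int(rng.integers(0, M64, dtype=np.uint64)), int(rng.integers(0, M64, dtype=np.uint64))
print(modulo(u_i, u_j), middle_square(a_i, a_j), bitwise_xor(a_i, a_j))
```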
This simple function was a natural starting point. The Middle Square method is based on John von Neumann’s middle-square random number generator <cit.>, which takes the middle bits after squaring a “seed” number. Here, instead of squaring a single number, we take the middle 32 bits from the product of u_i and u_j, and normalize by the maximum possible unsigned 32-bit value. Finally, the Bitwise XOR method computes a bitwise exclusive-or between the product and the difference of u_i and u_j before normalizing the result. For each algorithm, we created 2 million random graphs on N=4 nodes and again for N=6 nodes. The probability of each edge was set to 50%. We compared the frequency of resulting graphs against reference frequencies generated using an independent pseudo-random number per edge, which is not CRN safe, from NumPy's default Mersenne Twister implementation. Each resulting random graph was hashed for ease of comparison. Results are presented in Fig <ref>. Already from this bar plot it is clear that the Modulo approach creates a bias. The transmission tree with hash e3b0c4, corresponding to no transmissions, is over-represented compared to results generated using the non-CRN-safe centralized pseudo-random number generator. We next employ a statistical test to detect if any of the methods produce biased results. Specifically, we apply a Chi-Squared test of independence to the contingency table between the True Random result and each CRN-safe approach. Results presented in Table <ref> show that the Modulo approach can be rejected, but results from the other approaches cannot be rejected as different from True Random. *S2 Appendix Acquisition-based disease transmission Here, we describe an alternate approach to CRN-safe disease transmission. In this “acquisition-based” approach, each node first computes the probability of acquiring infection from any neighbor as p_i = 1 - ∏_j∈𝒩_i (1-p_ij), where p_i is the probability of node i acquiring infection from any of its neighbors, 𝒩_i, and p_ij is the probability of transmission on the edge between agents i and j. With these probabilities in hand, we then use a single random number from agent i, applying the techniques described above, to determine if transmission occurs. The source of each infection can be determined by inverse cumulative transform sampling, again using the techniques described above. While this method is effective and technically sound, we have found it to be slow for some applications, depending on the density of the network. *S3 Appendix Network performance characterization A main performance bottleneck in implementing a common random number safe model is the transmission network. In this article, we have proposed three network algorithms that maintain random number alignment between simulations. These three networks vary in their flexibility and performance. While the “Embedding” network allows for user-specified assortative mixing, the use of the linear sum assignment algorithm is a potential performance bottleneck. In comparison, the disk and implementations should scale better with an increasing number of agents. To test this, we timed the “update” step of each network algorithm for 9 logarithmically spaced population sizes ranging from 10 to 32,000. For each population size, we computed the average time per update for 5 sequential updates, and repeated the experiment with 3 different random number seeds.
The experiment was conducted with CRN enabled, but results were not different when using random numbers from a single centralized stream. Results presented in Fig <ref> illustrate that the Disk and algorithms have similar performance and scaling characteristics. Both are an order of magnitude faster than the Embedding network by 10,000 agents. For comparison, we have included performance scaling results from the Random network algorithm, which is not CRN safe. This algorithm is highly performant, primarily relying on an array shuffle operation to create random pairings. Shuffle-based approaches are not safe for use with common random numbers because the addition or removal of even just one agent would cause all subsequent pairings to differ. Finally, please note that in many applications, edges in the network representing contacts persist for several consecutive time steps, representing lasting relationships, and therefore it is typical for far fewer than the full population N to be seeking additional contacts on any one timestep. Also, users can configure the model so that only some agents are eligible for new network connections on each step. But here, for performance evaluation, we test the networks by ensuring that all agents are eligible for edges on each and every network update, in part by discarding any edges created on previous updates. Thus, this test represents a worst-case scenario as all agents are seeking new edges on every update. Results were computed on an M1 MacBook Pro. Absolute times will vary with computing hardware, but the relative values and scaling trends will be consistent. *S4 Appendix Additional PPH correlation results To visualize the enhanced correlation due to reduced random number noise, Fig <ref> shows additional results from the PPH example. In this figure, each dot represents one pair of simulation results at the final time. The value on the X-axis is the number of births (top) and maternal deaths (bottom) in the reference scenario, which does not include any PPH prevention, and the value on the Y-axis is the corresponding number in an intervention simulation generated using the same random number seed. All results demonstrate a clear correlation, thanks to shared seeds used with the centralized approach and common random numbers for the CRN approach. Simulations that happen to result in high (low) values without PPH prevention also have high (low) values with PPH prevention. Results generated using the CRN approach (red, “x” markers) demonstrate higher correlation than those generated with the centralized approach (black, “+” markers) due to removal of unwanted random number noise. Correlation is higher with 10% coverage of the PPH intervention (left) because results are more similar to the reference scenario than with 90% coverage (right). Finally, results show higher correlation for births than maternal deaths due to the fact that birth counts are quickly corrupted by random number noise when using the centralized approach. *S5 Appendix Varying the SIR network topology Understanding that network topology affects mixing, we hypothesize that networks inducing slower mixing will favor our approach based on common random numbers. Here, we test this hypothesis using SIR disease dynamics on four static network topologies: with m=1, with p=0.004, Watts-Strogatz with parameters k=4 and p=0.2, and a 2D planar grid using 1,000 agents. Simulations run for two years beginning in 2020.
To ensure comparability of results across differing network topologies, we have calibrated the transmissibility parameter, β, for each network to achieve a final attack of 60% (600 agents) in the reference scenario at the final time. The Pearson correlation coefficient is higher for CRN than centralized at all time points for all network topologies considered, see Fig <ref>. The benefit of CRN over the traditional centralized approach to agent-based disease modeling, as quantified by the difference in PCC, is largest for the network. Smaller values are observed for the Watts-Strogatz and Grid 2D networks, with the topology in the middle. This result is counter to our hypothesis because the largest benefit is observed in the topology with the fastest mixing. The structure of the “power law” network results in high variance, so there is more opportunity early in the spread for common random numbers to reduce variance by eliminating unwanted noise from misaligned random realizations. Consistent with previous findings, correlation with the reference scenario is higher when the system is less perturbed, as is the case with 5% compared to 90% vaccine coverage. § ACKNOWLEDGMENTS This advance would not have been possible without contributions from many to the modeling framework, in which our methods have been implemented and tested. We would also like to acknowledge and thank Edward Wenger and Philip Welkhoff for their support and thoughtful feedback, and Jen Schripsema for helpful edits. 10 botev_variance_2017 Botev Z, Ridder A. Variance reduction. Wiley statsRef: Statistics reference online. 2017; p. 1–6. kahn_methods_1953 Kahn H, Marshall AW. Methods of Reducing Sample Size in Monte Carlo Computations. Journal of the Operations Research Society of America. 1953;1(5):263–278. doi:10.1287/opre.1.5.263. kleijnen1974statistical Kleijnen JP. Statistical techniques in simulation. 1974;. heikes_using_1976 Heikes RG, Montgomery DC, Rardin RL. Using common random numbers in simulation experiments — an approach to statistical analysis. SIMULATION. 1976;27(3):81–85. doi:10.1177/003754977602700301. conway_tactical_1963 Conway RW. Some Tactical Problems in Digital Simulation. Management Science. 1963;10(1):47–61. doi:10.1287/mnsc.10.1.47. stout_keeping_2008 Stout NK, Goldie SJ. Keeping the noise down: common random numbers for disease simulation modeling. Health Care Management Science. 2008;11(4):399–406. doi:10.1007/s10729-008-9067-6. murphy_using_2013 Murphy DR, Klein RW, Smolen LJ, Klein TM, Roberts SD. Using Common Random Numbers in Health Care Cost-Effectiveness Simulation Modeling. Health Services Research. 2013;48(4):1508–1525. doi:10.1111/1475-6773.12044. cornejo_creating_2014 Cornejo D, Mayorga ME, Lich KH. Creating common patients and evaluating indiviual results: Issues in indivual simulation for health policy analysis; 2014. flaxman_untangling_2017 Flaxman AD, Deason AW, Dolgert AJ, Mumford JE, Sorensen RJ, Eldrenkamp E, et al.. Untangling uncertainty with common random numbers: a simulation study; 2017. kaminsky_perfect_2019 Kaminsky J, Keegan LT, Metcalf CJE, Lessler J. Perfect counterfactuals for epidemic simulations. Philosophical Transactions of the Royal Society B: Biological Sciences. 2019;374(1776):20180279. doi:10.1098/rstb.2018.0279. kerr2021covasim Kerr CC, Stuart RM, Mistry D, Abeysuriya RG, Rosenfeld K, Hart GR, et al. Covasim: an agent-based model of COVID-19 dynamics and interventions. PLOS Computational Biology. 2021;17(7):e1009149. 
fpsim O’Brien ML, Valente A, Kerr CC, Proctor JL, Noori N, Root ED, et al. FPsim: An agent-based model of family planning. npj Women's Health. 2023;1(1):1. hpvsim Stuart RM, Cohen JA, Kerr CC, Mathur P, Abeysuriya RG, Zimmermann M, et al. HPVsim: An agent-based model of HPV transmission and cervical cancer. PLOS Computational Biology. 2024;. starsim Cliff Kerr, Robyn Stuart, Romesh Abeysuriya, Paula Sanz-Leon, Jamie Cohen, and Daniel Klein. Starsim;. Available from: <https://github.com/starsimhub/starsim>. pcg O’neill ME. PCG: A family of simple fast space-efficient statistically good algorithms for random number generation. ACM Transactions on Mathematical Software. 2014;. crouse2016implementing Crouse DF. On implementing 2D rectangular assignment algorithms. IEEE Transactions on Aerospace and Electronic Systems. 2016;52(4):1679–1696. 2020SciPy-NMeth Virtanen P, Gommers R, Oliphant TE, Haberland M, Reddy T, Cournapeau D, et al. SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python. Nature Methods. 2020;17:261–272. doi:10.1038/s41592-019-0686-2. world2023trends Organization WH, et al. Trends in maternal mortality 2000 to 2020: estimates by WHO, UNICEF, UNFPA, World Bank Group and UNDESA/Population Division: executive summary. 2023;. say_global_2014 Say L, Chou D, Gemmill A, Tuncalp O, Moller AB, Daniels J, et al. Global causes of maternal death: a WHO systematic analysis. The Lancet Global Health. 2014;2(6):e323–e333. doi:10.1016/S2214-109X(14)70227-X. gallos_ioannis_randomized_2023 Gallos Ioannis, Devall Adam, Martin James, Middleton Lee, Beeson Leanne, Galadanci Hadiza, et al. Randomized Trial of Early Detection and Treatment of Postpartum Hemorrhage. New England Journal of Medicine. 2023;389(1):11–21. doi:10.1056/NEJMoa2303966. nguyen_risk_2019 Nguyen DTN, Hughes S, Egger S, LaMontagne DS, Simms K, Castle PE, et al. Risk of childhood mortality associated with death of a mother in low-and-middle-income countries: a systematic review and meta-analysis. BMC Public Health. 2019;19(1):1281. doi:10.1186/s12889-019-7316-x. finlay_effects_2015 Finlay JE, Moucheraud C, Goshev S, Levira F, Mrema S, Canning D, et al. The Effects of Maternal Mortality on Infant and Child Survival in Rural Tanzania: A Cohort Study. Maternal and Child Health Journal. 2015;19(11):2393–2402. doi:10.1007/s10995-015-1758-2. wpp of Economic UND, Social Affairs PD. World Population Prospects 2024; 2024. Available from: <https://population.un.org/wpp/>. bansi2023cost Bansi-Matharu L, Mudimu E, Martin-Hughes R, Hamilton M, Johnson L, Ten Brink D, et al. Cost-effectiveness of voluntary medical male circumcision for HIV prevention across sub-Saharan Africa: results from five independent models. The Lancet Global Health. 2023;11(2):e244–e255. neumann1951various Neumann V. Various techniques used in connection with random digits. Notes by GE Forsythe. 1951; p. 36–38.
http://arxiv.org/abs/2409.03198v1
20240905024118
RoomDiffusion: A Specialized Diffusion Model in the Interior Design Industry
[ "Zhaowei Wang", "Ying Hao", "Hao Wei", "Qing Xiao", "Lulu Chen", "Yulong Li", "Yue Yang", "Tianyi Li" ]
cs.CV
[ "cs.CV" ]
RoomDiffusion: A Specialized Diffusion Model in the Interior Design Industry Zhaowei Wang, Ying Hao, Hao Wei, Qing Xiao, Lulu Chen, Yulong Li, Yue Yang (corresponding author: [email protected]), Tianyi Li (Beike) § ABSTRACT Recent advancements in text-to-image diffusion models have significantly transformed visual content generation, yet their application in specialized fields such as interior design remains underexplored. In this paper, we present RoomDiffusion, a pioneering diffusion model meticulously tailored for the interior design industry. To begin with, we build a complete data pipeline from scratch to update and evaluate data for iterative model optimization. Subsequently, techniques such as multi-aspect training, multi-stage fine-tuning and model fusion are applied to enhance both the visual appeal and precision of the generated results. Lastly, leveraging the Latent Consistency Distillation method, we distill and expedite the model for optimal efficiency. Unlike existing models optimized for general scenarios, RoomDiffusion addresses specific challenges in interior design, such as outdated aesthetics, high furniture duplication rates, and inaccurate styles. Through our holistic human evaluation protocol with more than 20 professional human evaluators, RoomDiffusion demonstrates industry-leading performance in terms of aesthetics, accuracy, and efficiency, surpassing all existing open-source models such as Stable Diffusion and SDXL. § INTRODUCTION For text-to-image diffusion models <cit.>, interior design is a natural and compelling application domain. In traditional interior design processes, people often find themselves in prolonged and costly exchanges with professional designers; a further challenge is that many individuals lack clarity about their needs, leading to a disorganized and inefficient design process. With the growing popularity of diffusion models, these issues can be substantially alleviated. People can leverage text-to-image diffusion models to quickly explore a vast array of design ideas, thereby gaining the inspiration they seek. Additionally, such models can assist professional designers in rapidly generating designs, thus enhancing their efficiency in the workflow. However, existing open-source text-to-image diffusion models like Stable Diffusion <cit.> and SDXL <cit.> are primarily designed to cater to general applications, and their performance in specialized fields is somewhat lacking. Due to the low signal-to-noise ratio in training data and the limited quantity of indoor scene data, open-source models often exhibit issues such as repetitive furniture, outdated styles, imbalanced furniture proportions, and disjointed compositions, as illustrated in Figure <ref>. In this report, we introduce RoomDiffusion to address the aforementioned issues and present the entire process of building RoomDiffusion: (1) we created a massive dataset comprising tens of millions of indoor scene images sourced from various channels. Building upon this foundation, we established a comprehensive system for evaluating the quality of indoor images, assigning 19 labels to each image. Some of these labels were utilized to filter out low-quality images, while others formed the textual components of the dataset.
(2) we divided the filtered images into several buckets based on their resolution and randomly extracted data from different buckets during model training; at the same time, we used image resolution as a conditional input to control the training process. (3) we selected high-quality images to create a high-precision dataset, then further fine-tuned the model obtained in the previous step based on this dataset, as done in <cit.>. (4) we selected several outstanding open-source text-to-image diffusion models for model fusion, such as EpicRealism <cit.> and Realistic Vision <cit.>. (5) we applied Latent Consistency Distillation (LCD) <cit.> to enhance the model's inference speed. To comprehensively evaluate the performance of RoomDiffusion, we combined automated metrics with human assessment. For automated metrics, we assessed aesthetic quality, CLIP score <cit.>, Fréchet Inception Distance (FID) <cit.>, and several other indicators. As for human evaluation, we collaborated with over 20 professional evaluators to establish a rational evaluation framework, focusing on aesthetic appeal, image-text alignment, and spatial coherence through good-same-bad (GSB) assessment. RoomDiffusion demonstrated a leading advantage across all automated metrics. In human evaluations, RoomDiffusion surpassed the best open-source models with a 70% win rate across multiple dimensions, demonstrating its superior performance. § METHOD In this chapter, we will provide a comprehensive overview of the entire construction process of RoomDiffusion. In Section 2.1, we will introduce our data pipeline, with its overall structure illustrated in Figure <ref>. Then, in Section 2.2, we will describe all the technologies used in RoomDiffusion, such as Multi-aspect training and LCD, with the complete workflow depicted in Figure <ref>. §.§ Data Pipeline §.§.§ Raw Data Acquisition In order to build a leading-edge and high-performance text-to-image diffusion model, a comprehensive and well-curated dataset is indispensable. Leveraging years of experience in the residential sector, we have amassed a substantial collection of high-quality interior design renderings. Additionally, we have augmented the diversity of our training data through external data procurement and open-source downloads. These efforts have culminated in the creation of a dataset comprising tens of millions of decoration renderings. §.§.§ Image Quality Assessment After obtaining the raw data, we identified common issues in the images and developed an image quality labeling system to evaluate them. Currently, this system consists of 5 primary labels and 19 secondary labels. The specific primary labels are explained below: * Low Quality: Assessing whether the image is usable, including criteria such as whether the image is not an indoor rendering, is stitched, or is watermarked. * Basic Attributes: Including image resolution, clarity, brightness and saturation. * Aesthetics: Evaluating the aesthetic quality of the image, including assessing whether there are issues such as color mismatches, outdated styles, or a lack of realism. * Composition: Assessing whether the proportions of the main objects in the image are reasonable and identifying issues such as excessive focus on a specific area, occlusion, or incorrect shooting angles. * Content: Evaluating whether the content of the image is reasonable, such as whether the number of key furniture items is appropriate, or whether it includes people or animals.
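To make the use of these labels concrete, the hypothetical sketch below applies threshold rules over such per-image labels to screen a candidate pool; the label names and thresholds are illustrative assumptions, not RoomDiffusion's actual discrimination rules (which are produced by the domain models described next).

```python
import pandas as pd

# Hypothetical label-based screening rule; field names and thresholds are
# illustrative stand-ins for the quality labels described above.
def passes_quality_rules(row: pd.Series) -> bool:
    return (
        not row["is_watermarked"]               # low-quality label
        and not row["is_stitched"]              # low-quality label
        and row["is_indoor_rendering"]          # low-quality label
        and row["min_side_px"] >= 512           # basic attribute: resolution
        and row["clarity_score"] >= 0.5         # basic attribute: clarity
        and row["aesthetic_score"] >= 3.0       # aesthetics, assumed 1-5 scale
        and not row["contains_people_or_pets"]  # content label
    )

images = pd.DataFrame([
    {"path": "a.jpg", "is_watermarked": False, "is_stitched": False, "is_indoor_rendering": True,
     "min_side_px": 768, "clarity_score": 0.8, "aesthetic_score": 3.6, "contains_people_or_pets": False},
    {"path": "b.jpg", "is_watermarked": True, "is_stitched": False, "is_indoor_rendering": True,
     "min_side_px": 1024, "clarity_score": 0.9, "aesthetic_score": 4.1, "contains_people_or_pets": False},
])
kept = images[images.apply(passes_quality_rules, axis=1)]
print(kept["path"].tolist())  # -> ['a.jpg']
```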
To obtain accurate image labels as described above, we have developed over ten domain models, including watermark detection, stitching classification, aesthetic scoring, indoor segmentation, and indoor detection. These models have demonstrated superior performance compared to existing open-source models <cit.>, as shown in Figure <ref>. Using these models, we set a corresponding discrimination rule for each label and perform preliminary screening on large-scale datasets, eliminating low-quality data and selecting high-quality images for model training. §.§.§ Image Captioning We have constructed two forms of image descriptions, one based on label systems and the other based on natural language text. * Labeling system: We have constructed a labeling system for home decoration scenes, consisting of 5 primary labels, 98 secondary labels, and over three hundred tertiary labels. The primary labels encompass five aspects: room, style, color scheme, soft decoration elements, and hard decoration elements. In order to provide comprehensive labels for each image, we trained a classification model to categorize the room, style, and color scheme of the images. Additionally, we employed a detection model to detect the presence of soft and hard decoration elements within the images. As a result, we are able to accurately interpret the elements depicted in the images. * Natural language text: Since label-based image descriptions lack details such as furniture material, color, and spatial relationships, we further enhance the image descriptions using natural language text. To ensure the accuracy and richness of the image descriptions, we conducted a detailed comparison of the image captioning capabilities of different vision-language models. Ultimately, we chose to utilize CogVLM-chat <cit.> and GPT-4V(ision) <cit.> for batch production of text descriptions for images. We designed prompts to guide the models in describing various aspects of the images, including room, style, walls, ceilings, floors, decoration status, furniture, and layout, in order to obtain comprehensive descriptions of the images. In conclusion, we combine the image quality labels, home decoration labels, and natural language text to obtain the most accurate and detailed descriptions of the images. The format of the textual descriptions is as follows: "[room] + [style] + [quality labels (watermark, clarity, etc.)] + [furniture] + [natural language text]." Notably, CLIP can typically only accept up to 77 tokens, making it unable to handle most image captions. We split long captions into multiple short captions, encode them separately with CLIP, and then concatenate the results before feeding them into the UNet. This approach enables RoomDiffusion to understand long texts. §.§.§ Data Layering To fully exploit the value of large-scale datasets, we layer the data based on different qualities and quantities and apply them at different stages of model training. For instance, we employ the aforementioned image quality indicators to conduct preliminary screening of the images and combine them with text generated by the CogVLM-chat model to create a dataset comprising millions of image-text pairs. This dataset is utilized for training the generation model, aiming to enhance the model's ability to generate outputs with improved aesthetics, semantic control, and coherence. Our objective is to maximize the expansion of the model's generative boundaries and diversity.
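Returning briefly to the long-caption handling described in the Image Captioning subsection above, the sketch below shows one plausible implementation: split the caption into chunks of at most 77 tokens, encode each chunk with CLIP's text encoder, and concatenate the token embeddings along the sequence dimension before they are used as cross-attention context for the UNet. The checkpoint name and chunking details are assumptions, not RoomDiffusion's exact implementation.

```python
import torch
from transformers import CLIPTokenizer, CLIPTextModel

# Assumed checkpoint for illustration; RoomDiffusion's text encoder may differ.
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14").eval()

def encode_long_caption(caption: str, max_len: int = 77) -> torch.Tensor:
    ids = tokenizer(caption, truncation=False).input_ids   # includes BOS/EOS tokens
    body = ids[1:-1]                                        # strip BOS/EOS
    step = max_len - 2
    pieces = [body[i:i + step] for i in range(0, len(body), step)] or [[]]
    chunks = []
    with torch.no_grad():
        for piece in pieces:
            chunk = [tokenizer.bos_token_id] + piece + [tokenizer.eos_token_id]
            chunk += [tokenizer.pad_token_id] * (max_len - len(chunk))
            hidden = text_encoder(torch.tensor([chunk])).last_hidden_state  # (1, 77, 768)
            chunks.append(hidden)
    return torch.cat(chunks, dim=1)  # (1, 77 * n_chunks, 768), passed to the UNet

context = encode_long_caption("bedroom, modern style, no watermark, " + "oak bed, " * 60)
print(context.shape)
```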
Furthermore, we utilize a dataset of hundreds of thousands of images manually selected through screening, combined with text generated by the GPT-4V(ision) model. This combined dataset is employed to fine-tune the model, thereby narrowing its generated distribution to a higher-quality sample space. §.§ Model Training We have designed a new training pipeline to enhance the model's generation performance, improving its aesthetic quality, rationality, and semantic control capability. This pipeline consists of four main components: Multi-aspect training, Multi-stage fine-tuning, Model Fusion, and LCD; the entire workflow is shown in Figure <ref>. §.§.§ Multi-aspect training In the context of indoor scenes, resolution plays a crucial role in the results generated by models. Currently, most open-source models generate images at a single resolution, which falls far short of the practical demand for larger and more varied image sizes. Additionally, interior decoration scenes are complex and contain a plethora of elements, so implausible artifacts arise easily and significantly reduce the practical reference value of generated images. For instance, if we train the model at a specific resolution, it performs relatively well when inferring at or near that resolution. However, when the inference resolution is larger than the trained resolution, it is prone to issues such as distorted or missing furniture. Conversely, when the inference resolution is smaller, the generated image may be blurry and may exhibit insufficient or distorted furniture details. To ensure the model performs exceptionally well across various resolutions, we employed a multi-aspect training approach. This method not only enhances the model's performance at different resolutions but also increases its stability and flexibility in generative tasks. We partition the data into multiple buckets based on different aspect ratios, assigning each training image to the bucket with the closest aspect ratio. In each iteration, data is sampled from a randomly selected bucket for training. Additionally, the model receives the resolution of the target bucket as a condition to control image generation. To ensure that the semantic alignment between images and text is maximized, we only perform resizing in the image preprocessing step during training and eliminate all cropping operations. Since the images are placed in buckets with similar aspect ratios, they are not excessively compressed or stretched, and the proportions of objects remain relatively normal. §.§.§ Multi-stage fine-tuning After training on a large interior design dataset, our model has developed strong generative capabilities for indoor scenes. However, its aesthetic quality still requires improvement. To address this, we fine-tuned our model on a smaller, but higher-quality image dataset, as done in <cit.>. To obtain exceptionally high-quality data, we built a data cleaning pipeline: (i) the raw data is first filtered by the image quality system described above, which automatically selects high-quality images from the massive dataset and greatly reduces labor costs; the data volume is reduced from tens of millions to roughly 100k images. (ii) We divide image aesthetics into 5 levels, from 1 to 5, judged from multiple perspectives such as decoration colors, furniture selection, and hard-decoration design; the higher the number, the higher the aesthetic standard.
A number of experienced interior designers are trained to rate the aesthetics of images. Each picture is rated by multiple professional designers to obtain an average aesthetic score, and the top 10% of pictures with the highest scores are retained. (iii) To obtain corresponding high-quality prompts, we use GPT-4V(ision) to produce detailed prompts for these top-10% images and then manually check them. Finally, we obtained 5,000 exceptionally high-quality image-text pairs. After training the model as described in the previous section, we continued fine-tuning it on these image-text pairs for 10,000 steps with a learning rate of 1e-6. Through experimental observation, this method significantly enhances the aesthetic quality and texture of the images generated by the model. §.§.§ Model Fusion Although our model has shown significant improvements in aesthetics, semantic control, and coherence, it still has some issues. Due to the high proportion of rendered images in our training data, the results generated by our model lack realism. To address this issue, we employed model fusion techniques widely used in the open-source community. By integrating our model with those renowned for realism in the community, we can enhance the authenticity and detail of the generated images. However, since most open-source models are primarily designed for single-resolution image inference, the fusion process, while enhancing aesthetics and realism, still introduces issues such as duplicated furniture and image stitching artifacts. To address this, we initially conducted bucket fine-tuning on the open-source models using a small dataset and a low learning rate. This fine-tuning process ensured that the models maintained their realism while adapting to the generation of multi-scale images. Finally, we fused the fine-tuned models to achieve a balance between realism and coherence. §.§.§ LCD Latent Consistency Distillation (LCD) aims to efficiently distill pre-trained classifier-free guided diffusion models <cit.>. LCD directly predicts the solution of the underlying ODE in latent space, requiring only a few iterations and resulting in rapid, high-fidelity sampling. We applied LCD to accelerate RoomDiffusion and observed several notable properties: (i) The teacher model and data quality jointly affect the performance of the student model. (ii) The LoRA-based <cit.> distillation scheme performs worse than the model-based scheme. (iii) CFG scale and batch size are the two critical aspects of the LCD distilling process. (iv) The distilled model may lose some high-frequency components during the sampling process. To address these issues, we select the best-performing teacher model and high-quality data. We carefully tune the CFG scale, batch size and optimizer in the training process to encourage the student model to accurately imitate the teacher model. Furthermore, in order to solve the problem of missing high-frequency components, we use high-quality data to distill open-source photorealistic Stable Diffusion models and fuse them with our student model in appropriate proportions. Finally, we successfully reduced the model inference time to one-third of the original while maintaining consistent performance across all metrics. § EVALUATION PROTOCOL Traditional evaluation criteria for text-to-image generation generally include aspects such as text-image consistency, aesthetic quality of images, and content coherence.
However, these metrics are insufficient for providing a comprehensive assessment of models in the interior decoration scenario. On the one hand, they fail to identify key issues such as repetitive furniture, mixed styles, and poor fidelity of generated images; on the other hand, certain commonly used metrics may exhibit distortion. For instance, some existing aesthetic models tend to assign higher scores to images with abundant furniture and intricate patterns, while assigning lower scores to images with minimalist interior design styles. Hence, it is essential to revamp the evaluation process and develop additional metrics to ensure the credibility of the results. §.§ Evaluation Process We have designed a dual evaluation mechanism comprising two steps. The first step is to use automated evaluation metrics to quickly assess the model's performance during the iterative process. If over 70% of the metrics show improvement, the evaluation moves to the next step, which involves human evaluation using the GSB method. This approach not only conserves human resources but also ensures more reliable results. §.§ Automated Evaluation Metrics We measure the performance of our model along multiple dimensions, which can be mainly divided into visual appeal and image-text consistency. Visual appeal includes Fréchet Inception Distance (FID) and aesthetic score (AS). Image-text consistency includes CLIP score (CS) and fine-grained metrics, such as soft-decoration follow rate (SFR), style accuracy (SA), hard-decoration follow rate (HFR) and furniture repetition rate (FRR). •Fréchet Inception Distance: FID is used to assess the quality of images created by a generative model and has been used to measure the quality of many state-of-the-art generative models. •Aesthetic score: Common aesthetic scoring models are inconsistent with human subjective judgment in the decoration scenario, so we train an aesthetic scoring model on images annotated by professional designers to evaluate image aesthetics. •CLIP Score: a reference-free metric that evaluates the correlation between a caption and the actual content of an image. •Soft-decoration follow rate: SFR measures the generation accuracy of 49 important furniture items. •Style accuracy: SA measures the generation accuracy of 8 popular styles. •Hard-decoration follow rate: HFR measures the generation accuracy of 10 common floor and ceiling types. •Furniture repetition rate: In results generated by diffusion models, there are often unreasonable repetitions of furniture, such as two toilets in a bathroom or two double beds in a bedroom, so it is necessary to quantify this phenomenon. §.§ Manual evaluation To further enhance confidence in the evaluation process, we conducted a manual assessment. We used the same 1,000 prompts to generate results from both RoomDiffusion and several open-source models, then had evaluators perform GSB evaluations on the generated results. Over 20 evaluators participated in the assessment, covering three dimensions: aesthetic evaluation, text-image alignment, and layout rationality. •Aesthetic Evaluation: Evaluators assessed the images from multiple perspectives, including color coordination, stylistic harmony, and lighting, to select what they considered the best image. •Text-Image Alignment: Evaluators compared the text with the corresponding generated images and selected the one with the highest suitability.
•Layout Rationality: Evaluators chose the image with the most reasonable spatial and furniture layout. Each image was evaluated by at least three evaluators, and the final conclusion was based on the majority opinion. If a majority opinion could not be reached, the image was excluded from the final statistical results. § RESULTS §.§ Machine Indicator Evaluation We compared RoomDiffusion with some of the strongest and most widely used models in the open-source community, such as EpicRealism, Realistic Vision, and SDXL. The test set consisted of 1,000 randomly selected interior decoration images, with corresponding text descriptions generated by GPT-4V(ision). We followed the seven metrics mentioned in Section 3.2, which include aesthetics, object generation success rate, wall-ceiling-floor accuracy, style accuracy, furniture repetition rate, FID, and CLIP score; the comparison results are shown in Table <ref>. Our model achieved the best performance across all machine evaluation metrics; in particular, it excelled in style accuracy and aesthetic scores, while reducing the furniture repetition rate to the lowest level. §.§ Manual Evaluation For manual evaluation, we conducted a comparative assessment based on three dimensions: aesthetics, semantic control, and layout rationality. A total of 20 evaluators participated, comparing the results of RoomDiffusion with those of all the open-source models, and providing their answers from three options: good, same, or bad. After excluding "same" judgments, the evaluation results are shown in Figure <ref>, where our model demonstrated significant superiority across all dimensions. § CONCLUSION In this report, we introduce RoomDiffusion, an industry model applied to interior decoration design scenarios, which outperforms all existing open-source models. Our report details the construction process of the RoomDiffusion model, the evaluation methods used, and the performance comparison with open-source models. We also hope that our technical report can provide a reference for the open-source community and foster more rapid and valuable development in the field of interior decoration design.
http://arxiv.org/abs/2409.03089v1
20240904212352
Generative Manufacturing: A requirements and resource-driven approach to part making
[ "Hongrui Chen", "Aditya Joglekar", "Zack Rubinstein", "Bradley Schmerl", "Gary Fedder", "Jan de Nijs", "David Garlan", "Stephen Smith", "Levent Burak Kara" ]
cs.CE
[ "cs.CE" ]
Hongrui Chen (Carnegie Mellon University; equal contribution), Aditya Joglekar (Carnegie Mellon University; equal contribution), Zack Rubinstein (Carnegie Mellon University), Bradley Schmerl (Carnegie Mellon University), Gary Fedder (Carnegie Mellon University), Jan de Nijs (Lockheed Martin Corporation), David Garlan (Carnegie Mellon University), Stephen Smith (Carnegie Mellon University), Levent Burak Kara (Carnegie Mellon University; corresponding author, [email protected]). Affiliations: Carnegie Mellon University, 5000 Forbes Ave, Pittsburgh, PA 15213, USA; Lockheed Martin Corporation, 1 Lockheed Blvd, Fort Worth, TX 76101, USA. § ABSTRACT Advances in CAD and CAM have enabled engineers and design teams to digitally design parts with unprecedented ease. Software solutions now come with a range of modules for optimizing designs for performance requirements, generating instructions for manufacturing, and digitally tracking the entire process from design to procurement in the form of product life-cycle management tools. However, existing solutions force design teams and corporations to take a primarily serial approach where manufacturing and procurement decisions are largely contingent on design, rather than being an integral part of the design process. In this work, we propose a new approach to part making where design, manufacturing, and supply chain requirements and resources can be jointly considered and optimized. We present the Generative Manufacturing compiler that accepts as input the following: 1) An engineering part requirements specification that includes quantities such as loads, domain envelope, mass, and compliance, 2) A business part requirements specification that includes production volume, cost, and lead time, 3) Contextual knowledge about the current manufacturing state such as availability of relevant manufacturing equipment, materials, and workforce, both locally and through the supply chain. Based on these factors, the compiler generates and evaluates manufacturing process alternatives and the optimal derivative designs that are implied by each process, and enables a user-guided iterative exploration of the design space. As part of our initial implementation of this compiler, we demonstrate the effectiveness of our approach on examples of a cantilever beam problem and a rocket engine mount problem and showcase its utility in creating and selecting optimal solutions according to the requirements and resources. Keywords: requirements-driven part design; resource-driven part design. § INTRODUCTION Numerous approaches have been created to facilitate and improve the complex process of part-making. CAD/CAE/CAM software helps in the design, engineering analysis, and manufacturing simulation of a part, while approaches to PLM (Product Lifecycle Management) and PDM (Product Data Management) allow engineering teams and corporations to digitally capture the design and utility of parts and systems and track changes through version control <cit.>. However, the original conceptualization of parts is largely performed by humans, focusing most prominently on the engineering requirements. While manufacturing processes and materials may be considered as guidelines and heuristics within DfM (Design for Manufacturing) or, more generally, DfX (Design for X) modules, there is still an unmet need for deploying systems that can take into account the business requirements and prevailing supply chain conditions and accordingly optimize a part.
Design, manufacturing, and procurement must be made as seamless as possible to truly optimize a part to the given requirements and resources. Researchers have begun studying these areas. Recently launched `Generative Design' tools from Autodesk <cit.>, nTopology <cit.>, Altair <cit.>, Dassault <cit.>, and several others significantly improve the design optimization process by incorporating various constraints within the optimization that was previously not possible. However, the barrier to the supply chain remains. The design that is output may be feasible but expensive, where this expensive nature of the design can be attributed to the state of the supply chain network (for example, expensive machining equipment and materials). The optimizer can also produce a design that has a large lead time because factors in the supply chain, such as the unavailability of manufacturing equipment required or the need to reorder materials, are not considered by the optimizer. Moreover, trade-offs may exist when comparing different suppliers. In short, there is a need for a system that produces optimal parts, where the topology and other design parameters of each part have been informed by not only the engineering requirements but also the business requirements, such as cost and lead time, which depend on the available suppliers and their capabilities. In this work, building towards the goal of removing barriers and combining the design, engineering, manufacturing, and supply chain teams and tasks, we propose a new approach to mechanical part making, which we call Generative Manufacturing (GM) (Figure <ref>). GM enables requirements and resource-driven part-making by informing the part design with real-time supply chain information. In the part-making process for a particular problem, in the first design creation stage itself, our system can provide answers to several questions: * Which manufacturing method (e.g., additive or subtractive) will result in the shortest lead time for the product? * What constraints are active (impacting solution) vs. inactive (not influential)? * Why a particular manufacturing method (e.g., 3-axis CNC) is infeasible given the constraints? * What are the trade-offs between choosing different materials (e.g., Al6061 and Ti6Al4V) for this product? * How will the best solution change if the mass and cost constraints become stricter? * How will the best solution change if the state of the suppliers changes? Our approach works by first probing the current manufacturing state of each potential supplier for a given set of part designs to create models that estimate the current relationships between parts, cost, and lead time for each supplier. This is accomplished via a distributed framework that situates a finite capacity scheduler at each supplier site and utilizes current knowledge of previously accepted supplier orders, machine availability, and material inventories to project supplier-specific cost and lead-time for particular part design requests. At the heart of the approach is a neural network-based differentiable design generator that can incorporate these supply network models as well as other requirements and resources that it receives as input for creating optimized part designs. As part of our initial implementation of these ideas, we demonstrate our approach in the design of a rocket engine mount and a cantilever beam. We showcase different requirements and resources with different supply chain scenarios and how they inform the part design. 
We show that our approach enables a user-guided iterative exploration of the design space where requirements can naturally evolve in response to design ideas suggested by our system. By providing a portfolio of competitive but oftentimes surprising solutions to a given problem, our method also helps end-users ‘discover’ requirements that were either ill-posed or under-constrained, bringing design optimization closer to a dialogue between the human and the machine rather than treating generative part-making as the black-box solution to an optimization problem. Our main contributions include: * Presenting the concept of generative manufacturing: requirements- and resource-driven part making. * A design generator that performs topology optimization with manufacturing, time, and cost constraints. * An incremental, finite-capacity scheduler that imports constraints characterizing the current manufacturing state of a given supplier and uses this knowledge to produce manufacturing cost and lead-time options for the design generator to bias future part design decisions. * An interactive tool for exploring critical decision variables and thresholds to understand the design space of generated manufacturing options. The implementation of this work is available at <https://github.com/AdityaJoglekar/Generative_Manufacturing>. § BACKGROUND §.§ Generative Design and Manufacturing The primary motivation behind embracing generative design systems involves leveraging computational power to assist human designers and potentially automate aspects of the design process. Alongside achieving efficiency, cost savings, optimization, accuracy, and consistency, an essential goal is to expand exploration within the design realm and facilitate design creation <cit.>. On the manufacturing side, industrial Internet of Things devices provide manufacturing data that feeds into operational decisions <cit.>. It is important to design a framework that integrates design and operation decisions <cit.>. One of the main claimed advantages of generative design is the incorporation of constraints and the generation of multiple design candidates <cit.>. However, at its core, generative design relies on topology optimization to satisfy structural performance requirements. Topology optimization approaches such as SIMP (Solid Isotropic Material with Penalisation) (<cit.>) and the level-set method (<cit.>) help solve the highly complex and non-convex problem of optimum material layout for different engineering objectives. Several extensions that include manufacturing constraints, both in research <cit.> and in commercial software <cit.>, have also seen success. To incorporate business requirements like lead time and cost into the optimization process, it is essential to use computational models that estimate these quantities. While there exist supply chain-dependent lead time and cost estimation models <cit.>, we were not able to find frameworks that integrate them into design optimization. Our proposed framework offers a first step in this direction. Extensive work has been done on creating theoretical and empirical models to predict manufacturing time and cost, given the features of a part. Noting that our final goal is to integrate the model with topology optimization, we require a computationally light model (as this model will need to be called during the optimization iterations) and a differentiable model (gradients to inform the design need to exist). These criteria rule out CAM simulation software as a possible model.
While empirical models, such as using supervised machine learning for prediction, can be very effective, they require a large amount of data, in the absence of which they can face generalization issues. Different parametric models have been developed that are computationally inexpensive and can be considered sufficiently accurate, especially for the design optimization phase. This indicates their suitability over other models for achieving our goal. In the following subsections, we review the related parametric models for additive and subtractive manufacturing methods. §.§.§ Additive manufacturing oriented topology optimization The need for integrating additive manufacturing considerations in topology optimization is highlighted in <cit.>. A substantial body of research has delved into the detection and mitigation of overhang edges to minimize the need for support structures <cit.>. Qian et al. <cit.> employed linear interpolation between nodes of a finite element mesh to achieve a more accurate density gradient. The incorporation of the density gradient has facilitated the inclusion of self-supporting structures, boundary slope control, and print angle optimization for simultaneous optimization with topology <cit.>. Our overhang detection method builds upon the work of Wang and Qian and the work by Chen et al. <cit.>, integrating the print angle through the vector dot product of the print angle with the filtered density gradient. This method distinguishes itself with (1) an accurate and differentiable density gradient derived directly from the neural network, enabling topology optimization without the need for filtering, and (2) support structure modeling from the overhang. §.§.§ Subtractive manufacturing oriented topology optimization Subtractive machining is a widely used method in manufacturing. Subtractive machining refers to manipulating a cutting tool to remove material until the desired geometry is reached. Two instances of subtractive machining are milling and 2D cutting. In milling, the tool head can be manipulated in 3, 4, or 5 degrees of freedom. In contrast, 2D cutting methods (laser, water jet, electrical discharge machining) perform through-cuts, and the final design can be considered a 2D extrusion of a shape profile. Langelaar <cit.> proposed a machining filter to optimize topology for the multi-axis machining process. The machining filter can be applied to 2.5D and 4-axis machining. We adapt the machining filter to 3-axis machining, where the user can specify a combination of 6 possible machining orientations along the principal axes. Other prior works related to subtractive machining include projection-based approaches <cit.> and feature-based approaches with level sets <cit.>. Further adaptation of the projection-based approach has been applied to casting as well <cit.>. On the other hand, 2D cutting can be directly formulated as a 3D optimization of a 2D profile extrusion, and a neural network-based direct topology optimization can be developed to do so <cit.>. §.§ Supply Chain Scheduling Manufacturing scheduling problems have been extensively studied for over 60 years <cit.>. Traditionally, they have been approached within the Operations Research community through the use of mathematical modeling techniques (cf. <cit.>), but more recent advances in constraint reasoning and heuristic search from the AI community (e.g., <cit.>) have also established constraint-based search and optimization as practical solving techniques for this class of problems.
Consideration of the broader scoped problem of supply chain scheduling has a much more recent history. As summarized in <cit.>, this merger of two disciplines - scheduling and supply chain management - focuses principally on solution of larger coupled optimization problems (e.g., integrated production and distribution scheduling, joint scheduling, and supplier pricing) and on coordinated decision-making by multiple decision makers (in both centralized and decentralized settings). However, business enterprises have been slow to exploit this more recent research. Enterprise Resource Planning (ERP) systems, which are fundamentally driven by estimates of predicted performance and often have little connection to the enterprise's actual current manufacturing state, still dominate the operational landscape and present an important challenge to the goal of generative manufacturing. Our approach in this paper is to exploit available information on currently booked orders, material inventories and replenishment constraints, and machine maintenance requirements (much of which is already available in existing ERP systems) to enable accurate projection of current cost and lead time for taking on a new manufacturing request. §.§ Explainability for manufacturing With the increase in options that a designer has in a DfX context, it becomes difficult for the designer to understand and explore the nuances of the design space. In particular, the designer needs to understand the following aspects of the design space: (1) the tradeoffs that are happening between the different qualities of a design (cost, lead time, rigidity, strength, etc.) and how they are related, and (2) which of these qualities have the most impact on the design. Design space exploration has been broadly studied in many areas of software, including product lines <cit.>, model-based performance prediction <cit.> and formal verification <cit.>. However, explanation of design spaces remains a challenge. Recent work <cit.> has explored the use of dimensionality reduction techniques, traditionally used in areas such as biology and machine learning <cit.>, to identify and facilitate the understanding to a human designer of the main design decisions and tradeoffs in a design space. In <cit.>, Decision Tree Learning (DTL) <cit.> is used to explain how concrete choices associated with specific design decisions influence the qualities across the design space (among other techniques). These techniques are generalized in <cit.>, which describes a design space explanation process and lessons learned from experience in those domains, as well as the generative manufacturing case. § PROPOSED SOLUTION: GENERATIVE MANUFACTURING For generating the most effective part possible given the requirements and resources, we propose a system that integrates a novel Design Generator, Supply Chain Scheduler, and an Explainable AI and Results Interface as shown in Figure <ref>. The engineering domain and boundary conditions, mass, structural rigidity, lead time, cost, materials, manufacturing methods, and suppliers are the engineering and business requirements and resources we consider in our current system. Note that our system is flexible enough to include other requirements, such as maximum stress or thermal considerations, but we leave its implementation for future work. Details of the system modules are presented in the following sections. 
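To make the inputs concrete, the hypothetical sketch below shows the kind of joint engineering/business requirements and resource specification our system could accept; all class names, field names, and values are illustrative assumptions rather than the actual input format of the implementation.

```python
from dataclasses import dataclass, field

@dataclass
class EngineeringRequirements:
    domain_envelope_mm: tuple = (200.0, 100.0, 50.0)  # design domain bounding box
    load_N: float = 5000.0                            # applied load magnitude
    max_mass_kg: float = 1.5
    max_compliance: float = 2.0e-3                    # structural rigidity requirement

@dataclass
class BusinessRequirements:
    production_volume: int = 100
    max_unit_cost_usd: float = 350.0
    max_lead_time_days: int = 30

@dataclass
class Resources:
    candidate_materials: list = field(default_factory=lambda: ["Al6061", "Ti6Al4V"])
    manufacturing_methods: list = field(default_factory=lambda: ["additive", "3-axis CNC"])
    suppliers: list = field(default_factory=lambda: ["supplier_A", "supplier_B"])

spec = (EngineeringRequirements(), BusinessRequirements(), Resources())
print(spec)
```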
§.§ Supply Network Scheduler To evaluate the cost and lead time implications of a candidate design in light of the current manufacturing state of the supply network, the system incorporates a supply-side scheduler. The scheduler takes as input from the design generator a manufacturing request consisting of a part type, the quantity required, the date by which the manufactured parts are needed, and a set of candidate designs with associated process plans. It also receives information from each supplier capable of producing the candidate part designs relating to the supplier's current operating state, including the existing set of accepted orders, the types and numbers of manufacturing machines and processes available along with their operating characteristics and costs, current material inventories and costs, and other availability constraints. To address privacy concerns with respect to supplier business information, candidate designs are evaluated in a decentralized manner, where an instance of the scheduler is situated with each supplier, and the design generator independently queries each supplier to obtain cost and lead time estimates, which are formulated as bids, for a given input request and set of candidate designs. Subsequent analysis of the set of bids returned is then used to adjust constraints for the next iteration of design candidates. Upon receipt of a new request, the scheduler produces bids for its associated supplier by generating a finite capacity production schedule that includes both the existing set of accepted orders, which are either in-process in the factory or planned, and the new request. The scheduler, which is designed to accept and schedule requests incrementally over time, first allocates machine capacity and materials required to execute the process plans associated with all existing orders over time while respecting any other known constraints on resource availability, which can include planned downtime for machine maintenance and material resupply times. Each process plan specifies a sequence of manufacturing tasks, such as `printing → sintering → ...', with each task designating its required capabilities, such as 3-axis machining, its nominal duration, and its nominal cost. Once this “current” schedule has been created, it is extended to include the process plan associated with the new request, utilizing whatever available machine and material capacity remains over the scheduling horizon. This hypothetical schedule is then used to provide the lead time and cost estimates for this supplier bid. For those input requests that provide multiple part design options, a different hypothetical production schedule is generated for each corresponding process plan, and separate bids are returned for each option. §.§.§ Basic Generation of Supplier Options In basic bid generation mode, the scheduler attempts to integrate the tasks associated with manufacturing a candidate design into the production schedule so as to minimize lead time, subject to the constraint that existing accepted orders have priority and are not delayed to accommodate this “due date quote”. The associated process plan is “instantiated” by the scheduler to create a network of tasks, splitting the total number of parts ordered into a set of manufacturing lots that can be produced in parallel if sufficient manufacturing resources, which include the machines and materials, are available. 
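To make this bid-generation step more concrete, the sketch below shows, in highly simplified form, how a supplier-side scheduler instance might place the tasks of a new request into the earliest available gaps of its machine timelines without moving accepted orders, and return the resulting lead time. This is only an illustrative sketch under assumed data structures (the Machine and Task classes and helper functions are our own illustrative names); the actual scheduler additionally tracks costs, materials, lot splitting, and the temporal-constraint propagation described in the remainder of this section.

```python
from dataclasses import dataclass, field
from typing import List, Set, Tuple

@dataclass
class Machine:
    name: str
    capabilities: Set[str]                          # e.g. {"printing"} or {"3-axis milling"}
    busy: List[Tuple[float, float]] = field(default_factory=list)  # booked (start, end) hours

@dataclass
class Task:
    capability: str                                 # capability required by this process step
    duration: float                                 # hours, after supplier-specific tuning

def earliest_gap(machine: Machine, ready: float, duration: float) -> float:
    """Earliest start >= `ready` that fits `duration` into this machine's timeline."""
    start = ready
    for b_start, b_end in sorted(machine.busy):
        if start + duration <= b_start:             # fits before the next booked interval
            return start
        start = max(start, b_end)                   # otherwise skip past it
    return start                                    # fits after the last booked interval

def quote_lead_time(plan: List[Task], machines: List[Machine],
                    material_ready: float = 0.0) -> float:
    """Extend the current schedule with the new request's tasks and return its end time."""
    ready = material_ready                          # resupply constraint delays the first task
    for task in plan:
        candidates = [m for m in machines if task.capability in m.capabilities]
        start, chosen = min(((earliest_gap(m, ready, task.duration), m) for m in candidates),
                            key=lambda pair: pair[0])
        chosen.busy.append((start, start + task.duration))
        ready = start + task.duration               # enforce the process-plan sequence
    return ready                                    # lead time of the hypothetical schedule
```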
Each instantiated task is then interrogated to determine the set of supplier machines that could be used to carry out this manufacturing step. Given the determined machine alternatives and the design's material requirement, a search is performed to determine the choice of machine assignments to tasks that yield the best result. In this case, it is the set of assignments that produces the minimal lead time. If there is insufficient material on-hand to produce the full quantity of parts requested, then the scheduler adds a resupply time constraint to delay production of the candidate design option until the material required to produce it is available. To ensure the feasibility of any given assignment of machines to tasks, the scheduler relies on an underlying graph of time points and edge-weighted distances called a Simple Temporal Network (STN) <cit.>. The start and end points of all tasks included in the schedule are encoded in the STN, as are the sequencing constraints dictated by order process plans, the constraints introduced by the scheduler to serialize tasks that have been assigned the same machine, and any other availability constraints that must be taken into account such as material resupply time. To generate an assignment for all tasks in the instantiated task network, the search proceeds to consider each task in the instantiated task network in topological order. At each step, the search moves forward through the sequence of tasks currently assigned to each resource capable of performing the next unscheduled task, which is referred to as each resource's current timeline, looking for temporal gaps large enough to accommodate this task. As each temporal gap is tested, constraints are propagated in the underlying STN to confirm the continued feasibility of this partial schedule or to signal conflict and the need to move on to the next temporal gap. When a feasible assignment is found for all tasks in the instantiated network, its objective score, which in this case is the instantiated task network's overall scheduled end time, is recorded, and the search moves on to consider alternative resource assignments. When the search is completed, the feasible assignment with the earliest overall end time (i.e., the smallest lead time) is selected as the basis for generating the bid option. The lead time and cost estimates reflected in this generated schedule are biased by supplier-specific refinements to the nominal duration and cost values specified in the candidate design's input process plan. The scheduler operates with a model of the supplier's actual resources (machines and materials) that include coefficients for tuning nominal task durations and costs to the characteristics of the supplier's specific assets as well as the supplier's specific pricing procedures. A supplier may have multiple instances of a particular 3D printer, for example, but they may range in age and consequently have different operating speeds. Similarly, the wear and tear on a milling machine as well as the milling time required to achieve a particular part geometry will vary as a function of the density of the material, and the magnitude of cost coefficients capture supplier-specific operating costs and price margins. §.§.§ Utilizing Multiple Suppliers through Combinatorial Auction When the scheduler is operating in basic bid generation mode, it is assumed that each supplier will generate independent bids for manufacturing a given candidate design. 
However, for requests with large part quantities this may not be feasible or practical. To accommodate such situations, the scheduler can also be configured to treat the request's `needed by' date as a hard constraint and instead generate partial bids that indicate the number of parts that can be produced while meeting this constraint. In this mode, partial bids generated by different suppliers are assembled into complete multi-supplier bids through application of a combinatorial auction and the resulting complete bids are passed on to the manufacturer-side design client as before for analysis and feedback to the design generator. The combinatorial auction employs a search process that can be configured to emphasize different criteria for determining how to best combine the partial bids of different suppliers, such as minimizing the number of suppliers and producing the lowest cost bid. Figure <ref> illustrates the overall decentralized framework for querying the supply network's current capability to handle requests to manufacture quantities of parts according to various candidate designs. §.§ Design Generator The design generator takes in as inputs the engineering domain and boundary conditions, manufacturing method, material, supply chain situation and constraints on mass, lead time and cost, and performs topology optimization with an objective of minimization of compliance and outputs the optimized part, as shown in Figure <ref>. We utilize a neural network for representing the part geometry (the continuous density distribution field within the domain) which gives us the ability to easily optimize functions of the part boundary and its gradients. This is particularly useful for the cost and time objectives as they are dependent on these quantities (details in section <ref>). Manufacturing methods and materials are inherently not continuous variables to optimize over. There exist techniques for continuous approximations of materials, such as considering the Young's Modulus as the continuous variable representing the material. However, these approximations are often inaccurate and, in our case, would result in an unwarranted increase in the complexity of the already complex and highly non-convex optimization problem. Moreover, our system is designed such that starting from a diverse but finite set of material choices, the user can focus on a particular set of materials as the iterations progress. Hence, we keep the materials as a discrete variable in the overall optimization process. Manufacturing methods are not related to each other, exist as separate entities in space, and are finite in amount. Also, similar to materials, the user can focus on certain manufacturing methods as the iterations progress. Hence, manufacturing methods are also considered as discrete variables in our optimization setting. The lead time and cost depend on the prevailing supply chain situation, and topology optimization with these objectives is challenging. Modeling of a differentiable function that maps the topology to these quantities is required. We use `supplier probing' for this. Canonical forms, or guesses, that depend on the engineering domain and boundary conditions, the manufacturing method, and the material are created. These guesses span many volume fractions within the domain. Creation of these guesses requires extremely low computation compared to the topology optimization of a part. 
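As a small illustration of this probing step, the sketch below builds such a ladder of canonical guesses for one material and manufacturing-method combination; the dictionary fields are illustrative assumptions rather than the system's actual request schema.

```python
import numpy as np

def make_probe_guesses(domain_volume_m3: float, material_density_kg_m3: float,
                       material: str, method: str, n_guesses: int = 13):
    """Uniform-density canonical guesses spanning a wide range of volume fractions."""
    guesses = []
    for vf in np.linspace(1.0, 0.005, n_guesses):
        solid_volume = vf * domain_volume_m3
        guesses.append({
            "material": material,
            "method": method,                       # e.g. "LPBF", "3-axis milling"
            "volume_fraction": float(vf),
            "mass_kg": solid_volume * material_density_kg_m3,
        })
    return guesses
```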
We use differentiable and efficient parametric models for finding the nominal time and nominal cost for each of these guesses. Then, these guesses are sent to the supply chain scheduler to find the corresponding lead time and cost. Using regression, we find the current relationship that exists between the volume fractions, nominal values of time and cost, and supplier values of time and cost for each combination of material and manufacturing method for each of the suppliers. Now, any user given constraints on lead time and cost can be used to find corresponding nominal time and nominal cost, for which differentiable mappings to the topology exist. Thus, we can achieve topology optimization that satisfies the lead time and cost constraints. We provide the details of this process for different manufacturing methods in section <ref>. The easiest method for a user to prescribe the requirements such as mass, cost, and lead time is to specify values constraining these quantities. Hence, we use a concept similar to the epsilon-constraint method, which is one of the primary methods of solving multi-objective optimization problems. We formulate the optimization problem with four objectives of compliance, mass, cost, and lead time, and minimize compliance with user-given constraints on mass, cost, and lead time to obtain a Pareto optimal solution. The optimization can also be performed with a different combination of the above four terms corresponding to minimization and constraints. The supplier probing we perform guides the user in specifying the values of these constraints by getting an estimate of the possible lowest and highest values. After the optimization for the current set of requirements and resources is complete, the analysis plots and decision trees help the user understand the trade-offs and helps the user effectively change the constraint values if required and perform another iteration of the optimization with these new requirements. Note that whenever the requirements and resources change, our system performs the topology optimization for creating the set of optimal solutions corresponding to these new inputs. The density value at each coordinate of the parts produced by our system is dependent on, informed by, and optimal in terms of all these requirements and resources. We describe the density neural network first, which forms the basis of our design generator, and then explain in detail the loss function formulation for different manufacturing methods. §.§.§ Density Neural Network The density neural network Den(𝐗_den) can be represented as follows: Den(𝐗_den) = σ((cos(𝐗_den𝐊_den + 𝐛_1)+𝐛_2)𝐖_den + 𝐨_1) The input is a batch of domain coordinates 𝐗_den(batchsize× 3). We use the domain center as the origin for the coordinates, and the coordinates are normalized with the longest dimension coordinates ranging from -0.5 to 0.5. We use the concepts proposed in <cit.> and <cit.> and a neural network architecture similar to the one used in <cit.> and <cit.>. The first layer weights (kernel 𝐊_den(3 ×kernelsize)) are fixed, which creates Fourier features after passing through the cosine activation. The kernel is created using a grid of a number of dimensions the same as the number of domain dimensions, and then reshaping the grid coordinates to the matrix 𝐊_den(3 ×kernelsize). 
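A minimal PyTorch sketch of this density network is shown below; the grid resolution, frequency range, and initial volume fraction are placeholder values chosen for illustration, not the settings used in our experiments.

```python
import torch
import torch.nn as nn

class DensityNet(nn.Module):
    """Sketch of the Fourier-feature density field Den(X) defined above.
    Coordinates in [-0.5, 0.5]^3 map to densities in (0, 1)."""
    def __init__(self, grid_per_dim=10, freq_range=30.0, min_vf=0.3):
        super().__init__()
        # Fixed kernel K (3 x kernelsize): reshaped coordinates of a 3D frequency grid.
        axes = [torch.linspace(-freq_range, freq_range, grid_per_dim)] * 3
        grid = torch.stack(torch.meshgrid(*axes, indexing="ij"), dim=0)   # (3, g, g, g)
        self.register_buffer("K", grid.reshape(3, -1))                    # (3, kernelsize)
        k = self.K.shape[1]
        self.b1 = nn.Parameter(torch.zeros(k))           # trainable bias inside the cosine
        self.b2 = nn.Parameter(torch.zeros(k))           # trainable bias after the cosine
        self.W = nn.Parameter(1e-4 * torch.randn(k, 1))  # near-zero initialization
        # Offset so the initial output is a uniform field at the expected active volume fraction.
        self.register_buffer("o1", torch.log(torch.tensor(min_vf / (1.0 - min_vf))))

    def forward(self, x):                                # x: (batch, 3) normalized coordinates
        feats = torch.cos(x @ self.K + self.b1) + self.b2
        return torch.sigmoid(feats @ self.W + self.o1)   # (batch, 1) densities

# Training-loop skeleton (the loss terms are defined in the following subsection):
# net = DensityNet(min_vf=0.3)
# opt = torch.optim.Adam(net.parameters(), lr=2e-3)
```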
The grid size in each dimension dictates how well it can represent topological features, and the grid's range of values controls the frequency of the output topology, with higher ranges of values giving a topology with more intricate features. Trainable biases (𝐛_1 and 𝐛_2) are added to improve the expressive power of the neural network. The next layer weights (𝐖_den(kernelsize× 1)) are trainable. This output is passed through a sigmoid activation (σ), that ensures final output values are between 0 and 1, which represent the density, for each of the coordinates in the input batch. We find empirically that the best initialization of the neural network is such that a uniform density topology with volume fraction corresponding to expected active constraint is output. We achieve this by setting 𝐖_den(kernelsize× 1) close to zero and adding an appropriate offset (𝐨_1) before applying the sigmoid activation (details of offset calculation in section <ref>). The density distribution output by the neural network is used to calculate the different terms in the loss function of the neural network. We use Adam (<cit.>) as the optimizer, with a learning rate of 2.0×10^-3 for all the experiments. §.§.§ Loss Function Formulation Each manufacturing method has particular characteristics that define the constraints on the part topology that can be created, as well as the cost and time for manufacturing the topology. We utilize and build upon existing works to achieve requirements and resource-driven topology optimization for additive and subtractive manufacturing. In additive manufacturing, we consider LPBF for metals and FDM for plastics, and in subtractive manufacturing, we consider 3-axis milling and 2-axis cutting with EDM. We present the detailed loss function formulation for each of these manufacturing methods in this section. Our framework is extensible to different manufacturing methods, but we limit the scope of this paper to only the ones mentioned above. Additive Manufacturing  The manufacturing process considered is as follows: Machine setup→Printing→Support Removal→Inspection Objective Function: Loss = c/c_0 + α(max(0,mass/masscon - 1.0)^2) + α(max(0,cost/costcon - 1.0)^2) + α(max(0,time/timecon - 1.0)^2) where, the notation definitions are as given in Table <ref>. Note that each of the variables in the above loss function, i.e. the compliance, mass, cost, and time, must be a differentiable function of the density values at each of the coordinates for the topology to be optimized with respect to them. The compliance (c) can be formulated as such as shown in <cit.> using the SIMP method and used in a self-supervised neural network topology optimization approach as shown in <cit.>. The volume fraction (vf) of a part discretized into n elements for the SIMP method, with ρ_i being the density value at each of these elements, is defined as follows: vf = ∑_i^n ρ_i/n The mass can be easily formulated as a function of ρ_i as follows: mass = ∑_i^n ρ_i v d where v is the unit voxel volume and d is the material density. We use the following parametric equation for defining the nominal time (t_nAM) in terms of the density values: t_nAM= t_pAM + t_sAM + t_rAM + t_iAM where t_pAM is the print time, t_sAM is the setup time, t_rAM is the support removal time and t_iAM is the inspection time. The print time is calculated as follows: Firstly, the support structure volume is calculated, wherein the overhang region (P) is found using the differentiable nature of the neural network as shown in <cit.>. 
P is a tensor of the same shape as xPhys, where xPhys = Den(𝐗_den). Then, we use cumulative summation along the print axis to get Pcs, followed by a Heaviside function to get Ph = 1/(1+ exp^-10(Pcs-2)). We then perform an element-wise product of Ph with (1-xPhys) to get Pv and perform a summation of the element values of Pv to calculate the exact support structure volume. The support structure mass is then calculated using Equation <ref>, where ρ_i now corresponds to the support structure. A support structure material density of k times the material density is used (we use the common value of k = 0.3 for all the results). The part mass is calculated using Equation <ref>. Now, the print time is: t_pAM = (m_part + m_support)/Q_AM where m_part is the part mass, m_support is the support structure mass and Q_AM is the print rate. The print rate depends on the material being used and can be input by the user. We use standard values for each material in all the examples in section <ref>. Also, we currently use constants to denote the values of setup time, support removal time, and inspection time for all the examples in section <ref> (we use Trumpf-TruPrint 3000 as a reference) (refer to <https://github.com/AdityaJoglekar/Generative_Manufacturing> for all the standard values and constants used). We believe Equation <ref> is a good approximation for our use case and leave using more complex nominal time equations for future work. For the nominal cost (c_nAM), we use the following equation: c_nAM= t_pAM× c_pAM + c_mAM + c_sAM + c_rAM + c_iAM where t_pAM is the printing time (in minutes), c_pAM is the printing cost per minute, c_mAM is the material cost, c_sAM is the setup cost, c_rAM is the support removal cost and c_iAM is the inspection cost. The printing cost per minute is a standard value we input. The material cost is the total printing mass in kg times the material cost per kg, where again we use standard material cost values in the examples shown in section <ref>. The remaining terms in the nominal cost equation are considered standard constant values similar to the nominal time equation. Note that the supply chain model modifies all these standard values according to each supplier's capabilities and resources. These values impact the topology; for example, for the same cost constraint, a higher setup cost constant leaves less of the budget available for printing time and thus permits a lower volume and mass for the topology. Now, given the actual cost and lead time constraints, the probing procedure will help determine the corresponding volume fractions and constraint values to be used in the loss function. Probing: We generate 13 representative topologies corresponding to volume fractions (vf) ranging from 1.0 to 0.005 (we found empirically that this works for a large number of problems), where each topology consists of all elements of ρ_i = vf, and we assume a support volume equal to 0.1×vf×the total volume of the design domain. Mass, nominal time, and nominal cost are calculated and process plans are generated for each of these topologies and passed to the supply chain model to get the actual cost and lead time. We utilize the fact that the volume fractions and the mass, actual cost, and lead time are highly correlated and create linear regression models using these 13 topologies. Each model has a volume fraction as the input and the actual cost or lead time corresponding to a supplier as the output.
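The sketch below illustrates this regression step for a single supplier and a single material/method pair; the quoted costs and lead times are made-up placeholder numbers, and in practice one regression pair is fit per supplier for each material and manufacturing-method combination.

```python
import numpy as np

vf_probe = np.linspace(1.0, 0.005, 13)              # the canonical probing guesses
cost_quote = 4000.0 + 21000.0 * vf_probe            # $ (illustrative supplier quotes)
time_quote = 5.0 + 9.0 * vf_probe                   # days (illustrative supplier quotes)

def fit_line(x, y):
    slope, intercept = np.polyfit(x, y, deg=1)      # quote = slope * vf + intercept
    return slope, intercept

def constraint_to_vf(limit, slope, intercept):
    """Invert the linear fit to get the volume fraction admissible under a constraint."""
    return float(np.clip((limit - intercept) / slope, 0.0, 1.0))

c_slope, c_int = fit_line(vf_probe, cost_quote)
t_slope, t_int = fit_line(vf_probe, time_quote)
vf_cost = constraint_to_vf(25000.0, c_slope, c_int)  # user cost constraint ($)
vf_time = constraint_to_vf(10.0, t_slope, t_int)     # user lead-time constraint (days)
```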
Hence, given a lead time or actual cost constraint by the user, we can map this constraint to a volume fraction. Note that a mapping between the nominal time and the lead time also occurs indirectly here, as each nominal time corresponds to a volume fraction (and similarly for nominal cost and actual cost). Hence, given the actual cost and lead time constraints, we can input the corresponding nominal cost and nominal time values as costcon and timecon in Equation <ref> and use Equations <ref>, <ref> and <ref> in Equation <ref> to get a differentiable objective function that can change the topology with respect to the supplier. Probing also helps in finding the approximate active constraint. For additive manufacturing, we can do so by finding the constraint that has the lowest corresponding volume fraction (minvf). For the mass constraint, the corresponding volume fraction can be easily found by using Equations <ref> and <ref>, and for actual cost and lead time constraints, probing can be used as described above. For compliance minimization, the highest volume fraction is the best, but the active constraint will be violated if the volume fraction of the optimized topology goes any higher than minvf. We can use this fact about the active constraint for initialization of the neural network for reasons explained in section <ref>. The offset 𝐨_1 = log(minvf/(1-minvf)) is used in Equation <ref>. We also find empirically that setting the compliance normalization constant (c_0) in Equation <ref> to the compliance corresponding to a topology with uniform density and volume fraction equal to minvf gives the best results. For the penalty coefficient α in Equation <ref>, we find empirically that the following schedule gives the best results: initialize α = 0 and increment by 0.5 in each optimization iteration until 100 iterations; then increment by (iteration number/100)^3 until α = 100, and keep α = 100 for the remaining iterations.

Subtractive Manufacturing  3-axis milling: The manufacturing process considered is as follows: Machine setup→Fixture setup→Machining Operation (n times)→Polishing→Inspection Objective Function: Loss = c/c_0 + α(max(0,mass/masscon - 1.0)^2) + α(max(0,cost/costcon - 1.0)^2) + α(max(0,time/timecon - 1.0)^2) + β(milling loss)^2 + λ(milling loss) where the notation definitions for c, c_0, mass, masscon, cost, costcon, time, timecon are given in Table <ref>, α is the penalty coefficient, β is the milling loss penalty coefficient and λ is the Lagrange multiplier. We find that utilizing the concept of the Augmented Lagrangian method for the 3-axis milling constraint violation gives better results than a penalty method alone. In each iteration, the Lagrange multiplier λ is updated as follows: λ = λ + γ(milling loss) where we find empirically that the best results are obtained when γ follows the schedule: initialize γ = 0, increment by 0.1 in each optimization iteration until γ = 10, and keep γ = 10 for the remaining iterations. We use the concept proposed in <cit.> for topology optimization with milling constraints. For a part to be manufactured by 3-axis milling, all regions of the part should be reachable by the milling tool, and thus there should not be any enclosed void regions inside the part. We calculate the milling loss as shown in Algorithm <ref>. The compliance c and mass are calculated in the same way as in Additive Manufacturing.
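Algorithm <ref> is not reproduced here, but the following PyTorch sketch conveys one plausible differentiable way to capture the idea stated above: a void voxel contributes to the loss if solid material blocks it along every machining direction the user has allowed. The direction convention and the sigmoid sharpness are our own illustrative assumptions and do not necessarily match the algorithm used in our implementation.

```python
import torch

def milling_loss(x_phys: torch.Tensor,
                 directions=((0, +1), (0, -1), (1, +1), (1, -1), (2, +1), (2, -1))):
    """x_phys: (nx, ny, nz) densities in [0, 1]; `directions` are the allowed
    (axis, side) machining orientations. Returns a scalar enclosed-void penalty."""
    blocked_all = torch.ones_like(x_phys)
    for axis, sign in directions:
        rho = x_phys.flip((axis,)) if sign < 0 else x_phys
        # material accumulated between each voxel and the tool-entry face
        above = torch.cumsum(rho, dim=axis) - rho
        blocked = torch.sigmoid(10.0 * (above - 0.5))    # ~1 if anything solid lies above
        if sign < 0:
            blocked = blocked.flip((axis,))
        blocked_all = blocked_all * blocked              # unreachable from *every* allowed side
    void = 1.0 - x_phys
    return (void * blocked_all).mean()                   # fraction of enclosed void material
```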
For the nominal time (t_nM), we use the following equation: t_nM = t_sM + t_fM + t_mM + t_pM + t_iM where t_sM is the machine setup time, t_fM is the fixture setup time, t_mM is the machining time, t_pM is the polishing time and t_iM is the inspection time. The machining time is calculated as follows: t_mM = V_r/Q_v where V_r is the machined volume and Q_v is the volume based removal rate. All other terms in the Equation <ref> are considered as constants (we use Haas DM-3axis as a reference) in the examples shown in section <ref> and can be changed by the user if required. The nominal cost (c_nM) is calculated as follows: c_nM = c_sM + c_fM + t_mM× c_mM + c_pM + c_iM + c_matM where c_sM is the machine setup cost, c_fM is the fixture setup cost, t_mM is the machining time (in minutes), c_mM is the machining cost per minute, c_pM is the polishing cost, c_iM is the inspection cost and c_matM is the material cost. The machining cost per minute is a standard value we input. The material cost is the mass of the block to be machined (in kg) times the material cost per kg. All other terms in the equation are considered constants (Haas DM-3axis is used as a reference). We perform probing similar to as shown in additive manufacturing. The cost and time for 3-axis milling are negatively correlated to the volume fraction of a part because more machining must be done to achieve a lower volume fraction part. Hence, unlike in additive manufacturing, where there was a positive correlation, the volume fraction corresponding to the active constraint is now calculated differently. If the volume fraction corresponding to the mass constraint (vf_mass) is the highest, then the mass constraint is the active constraint, and we can use vf_mass in calculating 𝐨_1 and c_0. If the volume fraction corresponding to the actual cost or lead time constraints is higher than vf_mass, then it indicates that the optimization is infeasible with the given constraints. This is because the above scenario implies that for achieving vf_mass, the cost or time required is more than the given cost or time constraint. Hence, we can eliminate this scenario for design optimization and save time and computational resources. In the examples shown in section <ref>, we eliminate the options with infeasible constraints using the above logic. We can also use some error percentage e_p for elimination, wherein even if the volume fractions corresponding to the actual cost and lead time are e_p greater than vf_mass, we proceed with the design optimization to avoid early elimination. We use the same schedule for penalty coefficient α as used in the Additive Manufacturing module. 2-axis cutting:  The manufacturing process considered is as follows: Machine setup→Cutting Operation→Polishing→Inspection We consider EDM (Electrical Discharge Machining) as the 2-axis cutting process for our model. The objective function is the same as Equation <ref>, with the cost and time defined differently as follows: For the nominal time (t_nEDM), we use the following equation: t_nEDM = t_sEDM + t_cEDM + t_pEDM + t_iEDM where t_sEDM is the machine setup time, t_cEDM is the cutting time, t_pEDM is the polishing time and t_iEDM is the inspection time. The cutting time is defined as follows: t_cEDM = A_EDM/Q_EDM where A_EDM is the cutting area and Q_EDM is the EDM feed rate. A differentiable representation of the EDM cutting area is needed to incorporate it into the loss function. 
The density gradient ∂ρ/∂ X_den of the topology can be calculated via automatic differentiation of the neural network, as shown in <cit.>. We filter this density gradient using a Heaviside function H_a, which we find empirically to perform the best, to obtain magnitudes of 1 where the surface is present and 0 elsewhere. H_a(x) = 1/(1+e^-(x-5)) Then we obtain the total area for cutting as the summation of the output of the Heaviside function for each element, over all the elements in the design domain. A_EDM = ∑ H_a(|∂ρ/∂ X_den|) We assume the EDM machine GF Machining Solutions AC Progress VP3 with a feed rate (Q_EDM) of 40 in^2/hr in the examples in section <ref>. For the nominal cost (c_nEDM), we use the following equation: c_nEDM = c_sEDM + t_cEDM× c_cEDM + c_pEDM + c_iEDM + c_mEDM where c_sEDM is the machine setup cost, t_cEDM is the cutting time (in minutes), c_cEDM is the cutting cost per minute, c_pEDM is the polishing cost, c_iEDM is the inspection cost and c_mEDM is the material cost. The cutting cost per minute is a standard value we input. The material cost is the mass of the block (in kg) to be cut times the material cost per kg. For the other terms in the equation, we use constants based on GF Machining Solutions AC Progress VP3 as the reference EDM machine. We currently use a probing procedure similar to the one used for 3-axis milling. Note that for 2-axis cutting, the actual cost and lead time are not highly correlated with the volume fraction but rather with the area to be cut. We use the expression (1 - vf)×total volume to approximate the cutting area for a given probing volume fraction vf. This results in a very conservative estimate and can eliminate design generation even for some feasible constraints. Creating a better probing model for 2-axis cutting is left for future work.

§.§ Explainable AI and Results Interface

A key component of our approach is the ability to assist a designer in understanding the design space of feasible alternatives, including the identification of key variables, correlations and anti-correlations, thresholds, and tradeoffs. This is facilitated through our design space visualization tools that help “explain” why certain designs are determined to be optimal and how the outcome of our design generation tools depends on the tradeoffs made across multiple dimensions of concern. The primary visualization is constructed using decision trees, which divide the design space into important decision points that provide natural partitions in the design. Such learned decision trees <cit.> can be used to explain how a particular quality is impacted by the other qualities of interest. Figure <ref> shows the process we use to produce the decision trees, with an abbreviated decision tree on the right. The decision tree provides a combined view of how a particular quality (in this case cost) is impacted by all the other concerns. This allows a designer to understand what parts of the design space might be missing from consideration to generate options in the next iteration, or to understand how many similar options may exist in a particular part of the design space, as well as indicating thresholds (for numerical variables) and decisions (for categorical variables) that influence the cost of a design. Alternative decision trees can be created to “explain” other variables, such as how the time required for manufacture is affected by the choice of material or compliance.
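As an illustration of how such a tree can be produced, the sketch below fits a small cost-explaining tree over a handful of made-up design options using scikit-learn; the actual interface is driven by the options generated by the compiler rather than this toy table.

```python
import pandas as pd
from sklearn.tree import DecisionTreeRegressor, export_text

# Illustrative design options (column names and values are made up for the example).
options = pd.DataFrame({
    "lead_time_days": [8, 15, 15, 22, 9, 30],
    "mass_kg":        [70, 95, 60, 80, 72, 55],
    "supplier":       ["A", "B", "B", "C", "A", "C"],
    "material":       ["Al6061", "Al6061", "ABS", "Ti6Al4V", "ABS", "Al6061"],
    "cost_usd":       [21000, 14000, 9000, 34000, 12000, 18000],
})
X = pd.get_dummies(options.drop(columns="cost_usd"))     # one-hot the categorical choices
tree = DecisionTreeRegressor(max_depth=3).fit(X, options["cost_usd"])
print(export_text(tree, feature_names=list(X.columns)))  # textual view of the cost splits
```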
§ RESULTS

To demonstrate the application and performance of the GM framework, we set up two test cases. We assigned the dimensions of the two test cases to resemble medium- and large-sized parts. The manufacturing methods available are 3-axis milling, additive manufacturing, and 2-axis cutting. Three suppliers, Suppliers A, B, and C, are configured to represent the available machining facilities. To simplify our analysis of generative manufacturing, we restrict attention to part quantities that can be handled by single-supplier bids. For clarity, we configured Supplier A to provide 3-axis milling and 2-axis cutting, Supplier B to provide additive manufacturing only, and Supplier C to provide both options. Cost and time factors are assigned to each of the suppliers. A detailed breakdown of the capabilities of the three suppliers is illustrated in Figure <ref>. For brevity, in this section we use the term `time' to indicate the total lead time and `cost' to indicate the total cost. In the previous section, we used EDM as an instance of 2-axis cutting. The materials in the case studies include ABS plastic, which is not conductive; therefore, EDM machining cannot be performed. However, other 2-axis cutting processes, for example water jet cutting, can be used as a substitute, for which we use the same cutting rate and cost model. In this work, we limit the part orientation to the principal axes (x,y,z). This means that a total of six print and 3-axis milling orientations are possible: (x+,x-,y+,y-,z+,z-). For 2-axis cutting, the cutting direction can be any of (x,y,z).

§.§ Cantilever Beam Bracket

The cantilever beam is often seen in topology optimization papers as an example to compare the structural performance of the designed part. The dimensions and boundary conditions of the problem are illustrated in Figure <ref>. We envision that a typical use case for this type of bracket is medium-sized structural components with a total production of around 100 parts for each request to the supplier. For manufacturing orientation, we consider y+ for additive, all six orientations for 3-axis milling, and y for 2-axis cutting. Once the specification is made, we can probe the suppliers by creating a surrogate process plan and sending it to them. These requests can be generated based on the manufacturing method, material, and structural performance requirement selected by the engineer. Each supplier will respond to the request with a bid. The supplier responses give the engineer a relatively fast view of the current manufacturing availability, which may help the engineer refine the constraint specification further before running any relatively computationally expensive generative design. Furthermore, this helps the design optimizer by eliminating options that are not feasible to manufacture, such that only a subset of the total possible material, supplier, and manufacturing method combinations is optimized. The probing result is summarized in Table <ref>. From the probing alone, we can identify that, due to machine availability, none of the 2-axis cutting options are available from the suppliers. The Ti6Al4V subtractive options cannot be realized due to the low mass constraint. If the engineer is satisfied with the probing results and no further adjustment to the constraints is desired, the generative design can be performed for all available material and manufacturing combinations. We summarize the result of the study in Figure <ref>.
The result demonstrated variation across manufacturing methods and materials. For the given set of constraints and the objective of maximizing stiffness, all additive manufacturing solutions have the cost constraint as the active constraint, indicating loosening the cost constraint can give stiffer solutions. For 3-axis milling manufacturing solutions, the mass constraint is the active constraint here. Removing more material from a solid block requires more cost and time and results in a part that is less stiff. Our system shows that for this bracket example and supply chain situation considered, for ABS Plastic (which is lighter compared to Al6061 and Ti6AL4V), for 3-axis milling, a solid block, which is the starting point of the milling operation, satisfies all the given constraints. Hence, our system rightly presents the optimal solution as the starting block itself. Each of the solutions presented is of the best supplier. For example, Supplier B gave the best solution in terms of objective value and constraint satisfaction, and hence, we show Supplier B's solution. We present a detailed analysis of solutions obtained for different suppliers and factors in choosing one over the other in section <ref>, Figure <ref>. Finally, based on all the solutions presented, the engineer will either decide on which solution to pursue or use the result to guide refinement on the constraint value selection to further narrow down the candidates. In the rocket engine mount case study, we will explore the iterative refinement of the constraint value. §.§ Rocket Engine Mount The second example is a rocket engine mount. The mount is configured to transfer the thrust from the engine to the fuel tanks. We are inspired by the work <cit.> where the rocket engine consists of four mounting holes. We reduce the size of the engine and tank so that the engine mount can be subtractively machined as a single piece without assembly. The diameter of the tank is 1 m, and the mounting holes are spaced 20 cm apart. We assume the engine is outputting a thrust of 50 kN. The boundary condition is configured such that the four mounting holes are fixed. The resulting reaction from the tank is modeled as a 50 kN force applied on a thin ring on top. The boundary condition and the dimension of the engine mount are illustrated in Figure <ref>. Based on the geometry of the engine mount, we identify the manufacturing orientation for the three manufacturing methods. In additive, we choose the (y+) direction as the print orientation where the part is the lowest in height. In 3-axis milling, due to the potentially complex geometry generated, all six orientations are selected. For 2-axis cutting, we select the cutting direction to be (y). Similar to the bracket example, the engineer first starts by defining the constraints and manufacturing methods. Then, the suppliers are selected. The probing result for the first iteration is summarized in Table <ref>. From the probing result, we can observe that the 3-axis milling and 2-axis cutting manufacturing methods for Ti6Al4V are not feasible due to the heavy stock material required to purchase. Next, the generative design optimization can commence. The result is summarized in Figure <ref>. As with the bracket, we can see a similar tendency between additive and subtractive processes. Given the objective of compliance minimization, the additive solutions reached the cost constraint and subtractive solutions reached the mass constraint. In Figure <ref>, we show the solution for the best supplier. 
For example, Supplier B is the best for Additive (y+) and Al6061 in terms of the objective value and constraint satisfaction and gives the solution presented in the corresponding cell in the figure. We can also run the topology optimization for each supplier, where the result is summarized in Figure <ref>. For instance, using additive manufacturing and Al6061, even if Supplier C has higher costs and longer production times, resulting in a structure that is theoretically less rigid compared to what topology optimization with Supplier B can achieve, Supplier C might be more reliable, or there could be other considerations influencing the choice of Supplier C. The user can then use our system to assess whether these other factors outweigh the reduction in theoretical structural rigidity. Based on the result from iteration 1, we further reduce the mass and cost constraint and instigate another iteration. With the reduced constraint, probing results only show additive manufacturing as a viable option which is shown in Table <ref>. The result from iteration 2 is summarized in Figure <ref>. Comparing the Al6061 and ABS plastic, Al6061 demonstrated superior structural performance with lower compliance. The slightly higher cost than the constraint is due to the penalty in the objective function and the linear surrogate model obtained from probing the supplier. The penalty value in equation <ref> is a large number but not infinite. The objective function uses the surrogate model for optimization. However, once the final process plan is generated, the actual nominal time and cost are used to create a process plan for which the time and cost are quoted. In Figure <ref>, we show the decision tree from this iteration with decisions related to cost. This particular decision tree informs the designer about how the cost of manufacturing is impacted by decisions about lead-time, supplier, and choice of manufacturing material. As illustrated, the top-level decision point is based on lead time (i.e., the time to complete the manufacturing) – meaning that decisions about lead time are the most important with respect to differentiating cost, followed by mass, then whether the supplier is Supplier B. The decision tree also indicates how many of the designs make similar decisions (which are those included in the same subtree), helping the designer understand clustering behavior. For example, following the tree to the left-most node (lead_time <= 2w, 1d), we can see that of the 15 options that have been generated by the two iterations, 26.67% of them have a lead time of less than or equal to 2 weeks, 1 day. The designs that were generated in the 2nd iteration are contained in the green highlighted subtree, which can be examined further by clicking on the node to show them or by further expanding the subtree to show further details. We can see that the new iteration explored a part of the design space that was not in the first iteration and can easily see that none of these new designs have a lead time below 2 weeks and 1 day. The final decision can be made if the engineer is satisfied with the result in iteration 2. Given the targeted application for a rocket engine mount, the engineer can choose the Al6061 version due to thermal requirements. The engine mount example demonstrates the versatility of our proposed GM framework in a case study. 
From the supply chain perspective, the information on the current material availability, time, cost, and scheduling of each supplier can be communicated with the design generator. Before running any design generator in each iteration, the surrogate model built from probing the supplier helps to eliminate options that cannot be achieved, while the correlation between time and cost versus nominal time and nominal cost helps the design generator create design candidates that satisfy the design constraints. §.§ Engine Mount Example with No Design Region No-design region adds additional design constraints to the optimization. It is often used when existing components intersect with the design domain. In this side study for the engine mount example, we prescribe a no-design region on one of the quadrants as illustrated in Figure <ref>. All other constraints and requirements remain the same with iteration 1 of the previous example. The result from this case study is shown in Figure <ref>. Due to the placement of the no-design region, none of the 2-axis cutting options is viable. We can also observe that due to the addition of no-design regions, other generated examples no longer demonstrate rotational symmetry. §.§ Engine Mount Example with Alternate Supplier Model A variety of factors may affect the performance of suppliers. Due to the addition of machine inventory or cancellation of a previous order, a supplier may see a sudden reduction in time and cost for the current order. These changes in the supplier landscape should in turn affect the result of generative manufacturing. In this example, we showcase how our system adapts to these changes and gives optimal solutions with respect to the requirements and resources. We change the supplier capabilities shown in Figure <ref> such that now subtractive manufacturing is inexpensive and can be done at a quicker rate compared to additive manufacturing. We dramatically reduce the time and cost coefficients for subtractive manufacturing for Supplier A to simulate such an event. Furthermore, in this example, we tighten the constraints on time to 10 days, the mass to 75 kg, and the cost to less than $25000. The probing results given the new supply chain situation and new constraints are given in Table <ref>. From the probing result given the updated constraints, we can see that none of the additive options is feasible due to the short time requirement. However, due to the reduction in time and cost for Supplier A, bids from Supplier A became feasible. The optimization result is shown in Figure <ref>. §.§ Structural optimization benchmark The underlying neural network-based topology optimization allowed us to integrate design, manufacturing, and supply chain constraints. To verify the performance of the topology optimizer, we compare our implementation with commercial software Autodesk Fusion 360's generative design <cit.>. As each handles manufacturing constraints differently, we focus on comparing the topology optimization function alone without additional manufacturing constraints. The boundary condition is identical to the engine mount example as illustrated in Figure <ref>. We evaluate the mechanical performance of both studies in Fusion 360 with their FE analysis tool. To export the neural network-based topology optimization result into Fusion 360, we first perform a marching cube analysis to extract the iso-surface of the geometry as a mesh. Then the mesh is imported to Fusion 360 and then converted to solid with t-spline analysis. 
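A sketch of this export step is shown below, assuming the DensityNet sketch given earlier and the marching-cubes implementation in scikit-image; the sampling resolution and iso-level of 0.5 are illustrative choices.

```python
import torch
from skimage import measure

def export_mesh(net, resolution=128, level=0.5):
    """Sample the density field on a regular grid and extract the iso-surface mesh."""
    axes = [torch.linspace(-0.5, 0.5, resolution)] * 3
    pts = torch.stack(torch.meshgrid(*axes, indexing="ij"), dim=-1).reshape(-1, 3)
    with torch.no_grad():
        rho = net(pts).reshape(resolution, resolution, resolution).cpu().numpy()
    verts, faces, normals, _ = measure.marching_cubes(rho, level=level)
    # hand verts/faces to any mesh library (e.g. trimesh) to write an STL for CAD import
    return verts, faces
```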
Finally, the loading ring and the four bottom mounting points are added to the solid such that the load can be applied consistently in FE analysis. We observe that both methods are under the mass constraint of 100 kg, while the neural network-based topology optimization reached a slightly smaller displacement with a slightly higher mass. This comparison demonstrates that the neural network-based topology optimization can reach a structural performance comparable to that of commercial software.

§ LIMITATIONS AND FUTURE WORK

While we demonstrated the capabilities of our generative manufacturing system on several examples that mimic the real-world process flow with different materials, a combination of suppliers, and a variety of manufacturing methods, there are still assumptions we made that may not completely reflect a typical design and manufacturing process. Smaller parts will likely fit inside the build envelope of the additive or subtractive machines; however, larger parts may call for segmentation and assembly. We do not consider assembly of parts and the corresponding optimization in our current system, and we leave that for future work. Though designs for 3-axis machining can be adapted to a 5-axis machining center, optimization for 3-axis machining is still more restrictive than for 5-axis machining. We plan to implement design optimization with 5-axis machining and tool size constraints in the future. Other potential areas of focus and future directions include improving the cost and time formulations, optimizing the orientation of the part in additive manufacturing, finding optimal setups in subtractive manufacturing, and including thermal analysis and stress-constrained topology optimization in the generative manufacturing compiler.

§ CONCLUSION

Methods that inform and optimize a design based on business requirements such as lead time and actual cost, in addition to engineering requirements, have so far seen little success. Simultaneously considering the design, manufacturing, and supply chain requirements and resources is a difficult but crucial problem whose solution can be highly beneficial to numerous industries. We present the Generative Manufacturing compiler and showcase through various examples its capacity to produce optimal components by factoring in all the aforementioned considerations and constraints. We show how the best solution changes when the requirements or the state of the suppliers change, and we analyze the trade-offs among suppliers for a particular design. Our proposed compiler provides substantial benefits to a user performing the part-making process by enabling adaptation to the prevailing situation and ensuring that optimal solutions are generated.

§ ACKNOWLEDGEMENTS

This article has been approved for public release by Lockheed Martin PIRA CET2024070158. This work was supported by the Lockheed Martin Corporation (MRA19001RPS004). We would like to thank Bob Hermida and James L. Mathieson for useful discussions. We would also like to thank Javier Cámara, Rebekka Wohlrab, and Pakshal Shah for their help on the explainability user interfaces.

§ REFERENCES

Allaire, G., Jouve, F., Toader, A.M., 2002. A level-set method for shape optimization. Comptes Rendus Mathematique 334, 1125–1130.
Almeida, D.S.d., Pagliuco, C.M.d.M., 2014. Development status of L75: A Brazilian liquid propellant rocket engine. Journal of Aerospace Technology and Management 6, 475–484.
Altair Engineering Inc., 2024. Altair. <https://altair.com/>.
Ansys Inc., 2024. Ansys. <https://www.ansys.com/>.
aPriori Technologies, 2024. aPriori. <https://www.apriori.com/>.
Autodesk Inc., 2024a. Autodesk. <https://www.autodesk.com/>.
Autodesk Inc., 2024b. Autodesk Fusion 360. <https://www.autodesk.com/products/fusion-360/overview>.
Balsamo, S., Marco, A.D., Inverardi, P., Simeoni, M., 2004. Model-based performance prediction in software development: A survey. IEEE Trans. Software Eng. 30, 295–310.
Barclift, M., Armstrong, A., Simpson, T.W., Joshi, S.B., 2017. CAD-integrated cost estimation and build orientation optimization to support design for metal additive manufacturing, in: International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, American Society of Mechanical Engineers. p. V02AT03A035.
van Beek, A., Nevile Karkaria, V., Chen, W., 2023. Digital twins for the designs of systems: a perspective. Structural and Multidisciplinary Optimization 66, 49.
Bendsøe, M.P., 1989. Optimal shape design as a material distribution problem. Structural Optimization 1, 193–202.
Brackett, D., Ashcroft, I., Hague, R., 2011. Topology optimization for additive manufacturing. 2011 International Solid Freeform Fabrication Symposium.
Breiman, L., Friedman, J.H., Olshen, R.A., Stone, C.J., 2017. Classification and Regression Trees. Routledge.
Cámara, J., Silva, M., Garlan, D., Schmerl, B.R., 2021. Explaining architectural design tradeoff spaces: A machine learning approach, in: Software Architecture - 15th European Conference, ECSA 2021, Virtual Event, Sweden, September 13-17, 2021, Springer. pp. 49–65.
Cámara, J., Wohlrab, R., Garlan, D., Schmerl, B.R., 2023. ExTrA: Explaining architectural design tradeoff spaces via dimensionality reduction. J. Syst. Softw. 198, 111578. <https://doi.org/10.1016/j.jss.2022.111578>.
Chandrasekhar, A., Suresh, K., 2021a. Length scale control in topology optimization using Fourier enhanced neural networks. CoRR abs/2109.01861. <https://arxiv.org/abs/2109.01861>, arXiv:2109.01861.
Chandrasekhar, A., Suresh, K., 2021b. TOuNN: Topology optimization using neural networks. Structural and Multidisciplinary Optimization 63. doi:10.1007/s00158-020-02748-4.
Chen, H., Joglekar, A., Whitefoot, K.S., Kara, L.B., 2023. Concurrent build direction, part segmentation, and topology optimization for additive manufacturing using neural networks. Journal of Mechanical Design.
Chen, Z.L., Hall, N., 2022. Supply Chain Scheduling. Springer Publishing.
Cámara, J., Wohlrab, R., Garlan, D., Schmerl, B., 2023. Focusing on what matters: Explaining quality tradeoffs in software-intensive systems via dimensionality reduction. IEEE Software, 1–10. doi:10.1109/MS.2023.3320689.
Daniels, J., 2023. Building the factory of the future with the industrial internet of things. Computer 56, 84–88.
Dassault Systèmes, 2024a. 3DS performance driven generative design. <https://www.3ds.com/cloud/performance-driven-generative-design>.
Dassault Systèmes, 2024b. CATIA. <https://www.3ds.com/products/catia>.
Dassault Systèmes, 2024c. SolidWorks. <https://www.solidworks.com/>.
Dechter, R., Meiri, I., Pearl, J., 1991. Temporal constraint networks. Artificial Intelligence 49, 61–95.
Gaynor, A.T., Guest, J.K., 2014. Topology optimization for additive manufacturing: Considering maximum overhang constraint. doi:10.2514/6.2014-2036.
Gersborg, A.R., Andreasen, C.S., 2011. An explicit parameterization for casting constraints in gradient driven topology optimization. Structural and Multidisciplinary Optimization 44, 875–881.
Guest, J.K., Zhu, M., 2012. Casting and milling restrictions in topology optimization via projection-based algorithms, in: International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, American Society of Mechanical Engineers. pp. 913–920.
Kang, E., Jackson, E., Schulte, W., 2011. An approach for effective design space exploration, in: Calinescu, R., Jackson, E. (Eds.), Foundations of Computer Software. Modeling, Development, and Verification of Adaptive Systems, Springer Berlin Heidelberg, Berlin, Heidelberg. pp. 33–54.
Kingma, D.P., Ba, J., 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Laborie, P., 2009. IBM ILOG CP Optimizer for detailed scheduling illustrated on three problems, in: Proceedings of CPAIOR 2009, Lecture Notes in Computer Science 5547, Springer-Verlag. pp. 148–162.
Langelaar, M., 2016. Topology optimization of 3D self-supporting structures for additive manufacturing. Additive Manufacturing 12. doi:10.1016/j.addma.2016.06.010.
Langelaar, M., 2019. Topology optimization for multi-axis machining. Computer Methods in Applied Mechanics and Engineering 351, 226–252.
Leary, M., Merli, L., Torti, F., Mazur, M., Brandt, M., 2014. Optimal topology for additive manufacture: A method for enabling additive manufacture of support-free optimal structures. Materials and Design 63. doi:10.1016/j.matdes.2014.06.015.
Lever, J., Krzywinski, M., Altman, N., 2017. Principal component analysis. Nature Methods 14, 641–642.
Liu, J., Ma, Y.S., 2015. 3D level-set topology optimization: a machining feature-based approach. Structural and Multidisciplinary Optimization 52, 563–582.
Liu, J., To, A.C., 2017. Deposition path planning-integrated structural topology optimization for 3D additive manufacturing subject to self-support constraint. Computer-Aided Design 91, 27–45. <https://www.sciencedirect.com/science/article/pii/S0010448517300635>, doi:10.1016/j.cad.2017.05.003.
Mezzadri, F., Bouriakov, V., Qian, X., 2018. Topology optimization of self-supporting support structures for additive manufacturing. Additive Manufacturing 21. doi:10.1016/j.addma.2018.04.016.
Mhapsekar, K., McConaha, M., Anand, S., 2018. Additive manufacturing constraints in topology optimization for improved manufacturability. Journal of Manufacturing Science and Engineering, Transactions of the ASME 140. doi:10.1115/1.4039198.
Mirzendehdel, A.M., Behandish, M., Nelaturi, S., 2020. Topology optimization with accessibility constraint for multi-axis machining. Computer-Aided Design 122, 102825.
Mirzendehdel, A.M., Suresh, K., 2016. Support structure constrained topology optimization for additive manufacturing. CAD Computer Aided Design 81. doi:10.1016/j.cad.2016.08.006.
Morris, N., Butscher, A., Iorio, F., 2020. A subtractive manufacturing constraint for level set topology optimization. Structural and Multidisciplinary Optimization 61, 1573–1588.
Morton, T.E., Pentico, D., 1993. Heuristic Scheduling Systems. Wiley Publishers.
Murashkin, A., Antkiewicz, M., Rayside, D., Czarnecki, K., 2013. Visualization and exploration of optimal variants in product line engineering, in: Proc. of the 17th Intl. Software Product Line Conference.
Nemhauser, G.L., 1994. The age of optimization: Solving large-scale real-world problems. Operations Research 42, 5–13.
nTopology Inc., 2024. nTopology. <https://www.ntop.com/>.
Pinedo, M.L., 1993. Scheduling: Theory, Algorithms and Systems. Springer.
Qian, X., 2017. Undercut and overhang angle control in topology optimization: A density gradient based integral approach. International Journal for Numerical Methods in Engineering 111. doi:10.1002/nme.5461.
Seepersad, C.C., 2014. Challenges and opportunities in design for additive manufacturing. 3D Printing and Additive Manufacturing 1, 10–13.
Siemens Digital Industries Software, 2024. Siemens NX. <https://plm.sw.siemens.com/en-US/nx/NX>.
Singh, V., Gu, N., 2012. Towards an integrated generative design framework. Design Studies 33, 185–207.
Sitzmann, V., Martel, J., Bergman, A., Lindell, D., Wetzstein, G., 2020. Implicit neural representations with periodic activation functions. Advances in Neural Information Processing Systems 33, 7462–7473.
Smith, S.F., 1987. A constraint-based framework for reactive management of factory schedules, in: Oliff, M. (Ed.), Intelligent Manufacturing, Benjamin Cummings Publishers.
Tancik, M., Srinivasan, P., Mildenhall, B., Fridovich-Keil, S., Raghavan, N., Singhal, U., Ramamoorthi, R., Barron, J., Ng, R., 2020. Fourier features let networks learn high frequency functions in low dimensional domains. Advances in Neural Information Processing Systems 33, 7537–7547.
Thompson, M.K., Moroni, G., Vaneker, T., Fadel, G., Campbell, R.I., Gibson, I., Bernard, A., Schulz, J., Graf, P., Ahuja, B., Martina, F., 2016. Design for additive manufacturing: Trends, opportunities, considerations, and constraints. CIRP Annals - Manufacturing Technology 65. doi:10.1016/j.cirp.2016.05.004.
Vatanabe, S.L., Lippi, T.N., de Lima, C.R., Paulino, G.H., Silva, E.C., 2016. Topology optimization with manufacturing constraints: A unified projection-based approach. Advances in Engineering Software 100, 97–112.
van de Ven, E., Maas, R., Ayas, C., Langelaar, M., van Keulen, F., 2018. Continuous front propagation-based overhang control for topology optimization with additive manufacturing. Structural and Multidisciplinary Optimization 57, 2075–2091.
Wang, C., Qian, X., 2020. Simultaneous optimization of build orientation and topology for additive manufacturing. Additive Manufacturing 34. doi:10.1016/j.addma.2020.101246.
Wang, C., Qian, X., Gerstler, W.D., Shubrooks, J., 2019. Boundary slope control in topology optimization for additive manufacturing: For self-support and surface roughness. Journal of Manufacturing Science and Engineering, Transactions of the ASME 141. doi:10.1115/1.4043978.
Wohlrab, R., Cámara, J., Garlan, D., Schmerl, B.R., 2023. Explaining quality attribute tradeoffs in automated planning for self-adaptive systems. J. Syst. Softw. 198.
Zhang, K., Cheng, G., Xu, L., 2019. Topology optimization considering overhang constraint in additive manufacturing. Computers and Structures 212. doi:10.1016/j.compstruc.2018.10.011.
Zhou, M., Rozvany, G., 1991. The COC algorithm, part II: Topological, geometrical and generalized shape optimization. Computer Methods in Applied Mechanics and Engineering 89, 309–336.
http://arxiv.org/abs/2409.03250v1
20240905045735
Reinforcement-Learning-Enabled Beam Alignment for Water-Air Direct Optical Wireless Communications
[ "Jiayue Liu", "Tianqi Mao", "Dongxuan He", "Yang Yang", "Zhen Gao", "Dezhi Zheng", "Jun Zhang" ]
eess.SP
[ "eess.SP" ]
Reinforcement-Learning-Enabled Beam Alignment for Water-Air Direct Optical Wireless Communications Jiayue Liu^1,2,3, Tianqi Mao^1,2,3, Dongxuan He^4, Yang Yang^5, Zhen Gao^1,2,3, Dezhi Zheng^1,2,3, Jun Zhang^1,2 ^1State Key Laboratory of CNS/ATM, Beijing Institute of Technology, Beijing 100081, China ^2 MIIT Key Laboratory of Complex-Field Intelligent Sensing, Beijing Institute of Technology, Beijing 100081, China ^3 Yangtze Delta Region Academy, Beijing Institute of Technology (Jiaxing), Jiaxing 314019, China ^4 School of Information and Electronics, Beijing Institute of Technology, Beijing 100081, China ^5 Beijing Key Laboratory of Network System Architecture and Convergence, School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Beijing 100876, China E-mails: {[email protected], [email protected], [email protected], [email protected], [email protected], [email protected], [email protected]} September 9, 2024 =========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== § ABSTRACT The escalating interests on underwater exploration/ reconnaissance applications have motivated high-rate data transmission from underwater to airborne relaying platforms, especially under high-sea scenarios. Thanks to its broad bandwidth and superior confidentiality, Optical wireless communication has become one promising candidate for water-air transmission. However, the optical signals inevitably suffer from deviations when crossing the highly-dynamic water-air interfaces in the absence of relaying ships/buoys. To address the issue, this article proposes one novel beam alignment strategy based on deep reinforcement learning (DRL) for water-air direct optical wireless communications. Specifically, the dynamic water-air interface is mathematically modeled using sea-wave spectrum analysis, followed by characterization of the propagation channel with ray-tracing techniques. Then the deep deterministic policy gradient (DDPG) scheme is introduced for DRL-based transceiving beam alignment. A logarithm-exponential (LE) nonlinear reward function with respect to the received signal strength is designed for high-resolution rewarding between different actions. Simulation results validate the superiority of the proposed DRL-based beam alignment scheme. Water-air direct communications, optical wireless communications (OWC), dynamic water surface, deep reinforcement learning (DRL). § INTRODUCTION Recent years have witnessed unprecedented developments of maritime technologies including unmanned underwater vehicles (UUVs) and underwater buoy platforms <cit.>. 
Such advancements have facilitated numerous civilian/military applications such as ocean exploration and tactical surveillance <cit.>. To guarantee timely backhauling of the measurement data, establishing communication links between underwater and airborne relaying platforms, i.e., water-air links, can be mandatory, especially for high-sea scenarios where longshore stations are too distant to support direct transmission. Traditional approaches tend to employ acoustic communications owing to their robustness to the deleterious underwater propagation channel, but they are constrained by limited bandwidth and excessive latency <cit.>. On the other hand, despite its broad achievable bandwidth under terrestrial scenarios, the radio-frequency (RF) signal suffers from severe attenuation/absorption during underwater propagation, making it inapplicable for practical implementations <cit.>. Alternatively, optical wireless communication (OWC) has been demonstrated to support superior throughput for underwater and free-space transmission, thanks to its substantial unlicensed spectrum resources and moderate propagation loss in both water and atmospheric media <cit.>. Therefore, OWC has become a promising candidate for next-generation broadband water-air data transmission. The majority of the existing literature on water-air OWC concerns multi-hop communications with offshore relaying platforms <cit.>, which cleverly circumvents penetration through the highly dynamic water-air interface. However, such methods become inapplicable without the presence of relaying platforms. This can happen in high-maneuverability tasks, where the deployment of relaying platforms cannot be accomplished instantly, and in denied environments, where offshore relaying nodes have been destroyed. Therefore, it is worthwhile to investigate direct OWC across the water-air interface as a complement to relaying strategies. Unlike its relaying counterpart, direct water-air optical transmission suffers from severe transceiving beam misalignment caused by the dynamic characteristics of the water-air interface. To be more specific, refraction at the wavy water surface can cause random attenuation and deflection of the optical path, leading to frequent outages, especially for highly directional laser transmission <cit.>. There has been preliminary research on water-air direct OWC <cit.>. The cited work applied a photodiode array to detect the beam-direction changes caused by waves and used a micro-electro-mechanical system (MEMS) to compensate for the beam misalignment. However, this method has difficulty dealing with the horizontal offset when the transmission distance is sufficiently large. Besides, <cit.> utilized the scattering of an underwater LED emitter to ensure reliable water-air transmission, which was verified by experimental demonstrations. Afterward, the authors further investigated the waving effect on the channel gain of water-air OWC and introduced array-based transceivers to enhance the achievable rate and the error performance <cit.>. Note that these methods only mitigate the impacts of dynamic waves passively, which may cause unstable channel gain and even random interruptions without active beam alignment operations. From the aforementioned discussions, previous breakthroughs have mainly concentrated on hardware implementations of water-air direct OWC. On the other hand, there is a lack of research on effective beam alignment strategies for water-air OWC.
To fill the gap in related research, this article proposes a beam alignment algorithm for water-air direct OWC based on deep reinforcement learning (DRL). Specifically, the water-air OWC channel is mathematically modeled based on the wave spectrum theory, followed by characterization of the optical channel using ray-tracing method. Meanwhile, a DRL environment with a designed reward function is established on basis of the proposed channel model, in which the beam alignment algorithm is trained utilizing deep deterministic policy gradient (DDPG) strategy. Simulation results demonstrate that the proposed beam alignment method has superior performance in keeping high channel gain and resisting channel variations. § SYSTEM MODEL In this section, the mathematical model of the propagation channel of water-air direct OWC system is provided. §.§ Water-Air Communication Scenario As illustrated in Fig. <ref>, this article considers uplink direct OWC between UUVs and airborne drones, i.e., water-air OWC for brevity. Under this scenario, the optical signals are transmitted from the laser diodes (LD) through the water and atmosphere mediums sequentially, and detected by the avalanche photodiode (APD) at the airborne receiving platform. As presented in Fig. <ref>, to characterize the propagation channel of water-air OWC, we define the maximum accessible angle, the maximum angle that the transmitter/receiver can emit/detect optical beam, of LD and APD as ω_D and ω_A, respectively <cit.>. Besides, the angles of departure and arrival are denoted as α_D and α_A, and the propagation distances through water and atmosphere mediums are represented by d_water and d_air. Moreover, the optical signals are assumed to cross the water-air interface at the incident angle θ_1 and emergence angle θ_2. §.§ Channel Model The propagation channel of water-air OWC is determined by characteristics of the LD and APD, the path loss through water and air mediums, and the penetration loss crossing the water-air interface. Hence, the optical channel gain can be formulated as G = G_D(α_D)· G_p(d_water,d_air)G_ref(n,θ_1)· G_A(α_A) where G_D, G_A denote the departure and arrival gains, and G_p and G_r stand for the path gain and refraction gain, respectively. §.§.§ Departure Gain G_D depends on the departure angle α_D and the LD wavelength λ, calculated as <cit.> G_D(α_D) = exp(-2sin^2(α_D)/ω_D^2[1+(λcos(α_D)/πω_D^2)^2]) §.§.§ Path Gain The value of G_p is decided by the spreading loss and absorption effects through water and air mediums, which is expressed as <cit.> G_p(d_water,d_air) = exp(α(λ)d_water)/(d_water+d_air)^2 where α(λ) denotes the absorption coefficient according to Lambert's law <cit.>. For simplicity, we omit bubbles or turbulence in the water medium. §.§.§ Refraction Gain According to Snell's Law and Fresnel Equation, G_ref can be calculated with the incident angle θ_1 and the refraction indices n_1 and n_2 for water and air mediums, shown as <cit.> G_ref = 1-1/2[(n_2cosθ_1-n_1cosθ_2/n_2cosθ_1+n_1cosθ_2)^2 + (n_1cosθ_1-n_2cosθ_2/n_1cosθ_1+n_2cosθ_2)^2] §.§.§ Arrival Gain The value of G_A can be determined by the maximum accessible angle of the APD ω_A and the arrival angle α_A, written as <cit.> G_A = n_2^2/sin^2ω_Acosα_A § MATHEMATICAL MODELLING OF OPTICAL PATHS CROSSING WATER-AIR INTERFACE Different from free-space/underwater optical communications, the water-air OWC channel would be significantly impacted by dynamic refraction effects of the waving water surface. 
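Before modeling the interface, the following minimal Python sketch collects the departure, path, refraction, and arrival gains defined in the previous section into a single channel-gain function. It is an illustrative implementation, not the authors' code: the grouping of terms in the G_D exponent, the sign convention of the absorption term (applied here as an attenuation exp(-α(λ)d_water), the usual reading of Lambert's law), the refractive indices, and the zero-gain handling of total internal reflection and of arrivals outside the accessible angle are all assumptions.

```python
import numpy as np

def departure_gain(alpha_d, omega_d, lam):
    # One reading of the G_D expression above (bracketed term taken as a factor in the exponent)
    factor = 1.0 + (lam * np.cos(alpha_d) / (np.pi * omega_d**2))**2
    return np.exp(-2.0 * np.sin(alpha_d)**2 / omega_d**2 * factor)

def path_gain(d_water, d_air, absorption):
    # Spreading loss; absorption applied as attenuation over the underwater segment (Lambert's law)
    return np.exp(-absorption * d_water) / (d_water + d_air)**2

def refraction_gain(theta1, n1=1.33, n2=1.0):
    # Fresnel transmission across the water-air interface (average of s and p polarizations)
    s = n1 * np.sin(theta1) / n2
    if s >= 1.0:
        return 0.0                       # total internal reflection, no transmitted beam
    theta2 = np.arcsin(s)                # Snell's law
    r_p = ((n2 * np.cos(theta1) - n1 * np.cos(theta2)) /
           (n2 * np.cos(theta1) + n1 * np.cos(theta2)))**2
    r_s = ((n1 * np.cos(theta1) - n2 * np.cos(theta2)) /
           (n1 * np.cos(theta1) + n2 * np.cos(theta2)))**2
    return 1.0 - 0.5 * (r_p + r_s)

def arrival_gain(alpha_a, omega_a, n2=1.0):
    # APD arrival gain; zero outside the maximum accessible angle (assumed FOV cutoff)
    return (n2**2 / np.sin(omega_a)**2) * np.cos(alpha_a) if alpha_a <= omega_a else 0.0

def channel_gain(alpha_d, alpha_a, theta1, d_water, d_air, omega_d, omega_a,
                 lam=450e-9, absorption=0.05):
    # Product of the four gains, as in the expression for G above (defaults are illustrative)
    return (departure_gain(alpha_d, omega_d, lam)
            * path_gain(d_water, d_air, absorption)
            * refraction_gain(theta1)
            * arrival_gain(alpha_a, omega_a))
```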
Therefore, accurate mathematical modeling of the impact of the dynamic water surface on the optical path is necessary for channel characterization of water-air OWC. Without loss of generality, this paper mainly investigates the dynamic characteristics of ocean waves. Below, the mathematical model of the waving water-air interface is derived, inspired by existing oceanography theories. The optical propagation path can then be determined using ray tracing based on the established model. Wave spectrum theory, which is commonly employed in oceanography, describes the ocean wave by its energy distribution in the frequency domain. There are various types of wave spectra, classified according to the statistics of sea conditions in specific areas. In this paper, the JONSWAP spectrum model, proposed by the Joint North Sea Wave Project, is used to describe the ocean wave conditions <cit.>. The 2-dimensional (2-D) JONSWAP spectrum can be formulated as S(ω) = α g^2/ω^5 exp[-1.25(ω_p/ω)^4]·γ^exp[-(ω-ω_p)^2/2(σω_p)^2] where ω represents the angular frequency and α = 0.076(U_10^2/gx_f)^0.22. Here x_f is the fetch on the sea, g represents the gravitational acceleration, and U_10 stands for the wind speed at 10 m altitude. Moreover, ω_p = 22(g^2/x_fU_10)^1/3 denotes the peak frequency, and γ and σ represent the shape-forming parameters. Under 3-dimensional (3-D) circumstances, a directional spectrum is introduced to describe the ocean wave, which can be formulated as G(ω,θ) = 1/π[1+pcos(2θ)+qcos(4θ)], |θ|≤π/2 where θ denotes the direction of wave propagation, and we have p = 0.5+0.82exp[-1/2(ω/ω_p)^4] and q = 0.32exp[-1/2(ω/ω_p)^4]. Then the 3-D JONSWAP spectrum can be calculated as S_JONSWAP(ω,θ) = S(ω)G(ω,θ) The 2-D and 3-D spectra under a 12 m/s wind speed and a 2×10^4 m fetch are exemplified in Fig. <ref>. Based on the wave spectrum model, the harmonic wave method is employed to simulate the ocean surface with low computational cost <cit.>. This method assumes that the ocean wave is composed of a group of sine functions expressed as T = ∑_i a_i cos(ω_i t+ϕ_i) whose amplitudes obey the power distribution described by the wave spectrum. By substituting the JONSWAP spectrum into Eq. <ref>, the expression of the wave surface at position (x,y) and time instant t can be formulated as T(x,y,t)=∑_i ∑_j √(S_JONSWAP(ω_i,θ_j) dω dθ) ×cos[ ω_i t - ω_i^2/g(xcosθ_j+ysinθ_j)+ϵ_i,j] where ω_i and θ_j represent the frequency and direction angle, respectively, and ϵ_i,j denotes a random phase shift. The simulated model of the water-air interface is exemplified in Fig. <ref>.
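Below is a small Python sketch of the surface-synthesis step just described: it samples the 3-D JONSWAP spectrum on a frequency-direction grid and superimposes harmonic components with fixed random phases to obtain T(x, y, t). The peak-enhancement factor γ = 3.3, the spectral width σ, the grid ranges, and the deep-water dispersion used for the spatial phase are assumptions made for illustration, since the text does not fix them.

```python
import numpy as np

G = 9.81  # gravitational acceleration [m/s^2]

def jonswap(omega, U10=12.0, fetch=2.0e4, gamma=3.3, sigma=0.08):
    """2-D JONSWAP spectrum S(omega); gamma and sigma values are assumed."""
    alpha = 0.076 * (U10**2 / (G * fetch))**0.22
    omega_p = 22.0 * (G**2 / (fetch * U10))**(1.0 / 3.0)
    peak = np.exp(-((omega - omega_p)**2) / (2.0 * (sigma * omega_p)**2))
    s = alpha * G**2 / omega**5 * np.exp(-1.25 * (omega_p / omega)**4) * gamma**peak
    return s, omega_p

def directional(omega, theta, omega_p):
    """Directional spreading G(omega, theta) for |theta| <= pi/2."""
    p = 0.50 + 0.82 * np.exp(-0.5 * (omega / omega_p)**4)
    q = 0.32 * np.exp(-0.5 * (omega / omega_p)**4)
    return (1.0 + p * np.cos(2.0 * theta) + q * np.cos(4.0 * theta)) / np.pi

def make_surface(omegas, thetas, seed=0):
    """Precompute component amplitudes and fixed random phases; return T(x, y, t)."""
    rng = np.random.default_rng(seed)
    d_om, d_th = omegas[1] - omegas[0], thetas[1] - thetas[0]
    s, omega_p = jonswap(omegas)
    amp = np.sqrt(s[:, None] * directional(omegas[:, None], thetas[None, :], omega_p)
                  * d_om * d_th)
    eps = rng.uniform(0.0, 2.0 * np.pi, size=amp.shape)  # one fixed phase per component

    def T(x, y, t):
        phase = (omegas[:, None] * t
                 - omegas[:, None]**2 / G * (x * np.cos(thetas[None, :])
                                             + y * np.sin(thetas[None, :]))
                 + eps)
        return np.sum(amp * np.cos(phase))

    return T

# Example: omegas = np.linspace(0.3, 3.0, 40); thetas = np.linspace(-np.pi/2, np.pi/2, 18)
# T = make_surface(omegas, thetas); height = T(0.0, 0.0, 1.0)
```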
On the basis of the water-air interface model, a reconstructed ray-tracing algorithm <cit.> is introduced to calculate the optical propagation path by assuming that the optical emitter has a finite extent and locating its center iteratively. The specific steps are as follows. The initial step is to assume a screen of the same size as the field of view (FOV) of the receiver and divide it into m× m pixels. Each pixel is represented by (x_i,y_j,z_c) in the receiver coordinate system, for i,j=1,2,…,m, where z_c remains constant. Under this condition, the coordinate difference between pixels is Δ x=x_i-x_i-1=2z_ctan(FOV/2)/m, and Δ y is the same. Meanwhile, the direction of the receiver (0,0,z_c) is regarded as the central coordinate (x_c,y_c,z_c). Then, the ray-tracing algorithm is used to trace and calculate the light intensity of each pixel, denoted as I_i,j. According to the light intensity, the central coordinate is updated to the intensity-weighted centroid, which can be calculated as x_c = ∑_i=1^m∑_j=1^m I_i,jx_i/∑_i=1^m∑_j=1^m I_i,j and y_c = ∑_i=1^m∑_j=1^m I_i,jy_j/∑_i=1^m∑_j=1^m I_i,j After the central coordinate is updated, a finer pixel division around the new center (x_c,y_c,z_c) is conducted, with the division gaps Δ x and Δ y reduced by a factor of 10. The same operation is then repeated until the central coordinate no longer changes, at which point the receiver direction is confirmed as v_c=(x_c,y_c,z_c)^T. Finally, since the above result v_c is calculated in the local coordinate system of the receiver, a coordinate transformation v=(x,y,z)^T=R_zR_xR_z(x_c,y_c,z_c)^T is performed to obtain the direction of the optical path in the absolute coordinate system <cit.>, so that the refraction spot and the optical path can be solved.
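The iterative screen-refinement procedure above can be sketched as follows. Here render_intensity is a placeholder for the ray tracer that returns the traced intensity of one pixel, and a fixed number of refinement iterations stands in for the "until the centroid no longer changes" stopping rule; both are simplifying assumptions.

```python
import numpy as np

def refine_receiver_direction(render_intensity, z_c, fov, m=32, n_iter=6):
    """Iteratively refine the pixel grid around the intensity-weighted centroid.

    render_intensity(x, y, z_c) -> traced light intensity of one screen pixel
    (placeholder for the ray tracer). Returns the estimated direction v_c in the
    receiver's local coordinate frame.
    """
    x_c = y_c = 0.0
    dx = 2.0 * z_c * np.tan(fov / 2.0) / m        # initial pixel spacing
    for _ in range(n_iter):
        xs = x_c + (np.arange(m) - (m - 1) / 2.0) * dx
        ys = y_c + (np.arange(m) - (m - 1) / 2.0) * dx
        I = np.array([[render_intensity(x, y, z_c) for x in xs] for y in ys])
        total = I.sum()
        if total <= 0.0:
            break                                  # no light reaches this screen
        x_c = (I * xs[None, :]).sum() / total      # intensity-weighted centroid
        y_c = (I * ys[:, None]).sum() / total
        dx /= 10.0                                 # finer division around the new centre
    return np.array([x_c, y_c, z_c])
```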
§ BEAM ALIGNMENT ALGORITHM In a water-air OWC system, beam alignment between the transmitter and receiver is mandatory to enhance the channel gain of the highly directional laser link. However, the unpredictability and complexity of ocean waves make it difficult for traditional model-based algorithms to handle this issue. Thanks to its superiority in sequential decision-making, DRL has been shown to be a promising approach to beam alignment. In this paper, DDPG is selected to accomplish the beam alignment task. §.§ Preliminaries for DRL Techniques In a communication environment involving both water and air, accurately obtaining the variations of the sea surface can be challenging for both the transmitter and the receiver. Reinforcement learning (RL), however, is well suited to this setting, as it can automatically extract information from the environment. The main concept behind RL is to treat all influential factors as the Environment, while an Agent is trained to observe changes in the environment parameters and to take actions under the influence of the environment. Under the proposed model, the water-air optical channel is considered the environment, while the transmitter/receiver is regarded as the agent. The agent is influenced by the environment, which manifests as observations such as the arrival angle and light intensity, and it acts by adjusting the transmitter and receiver directions based on these observations. Furthermore, among the various RL algorithms, one that can process complex changes in the environment and take continuous actions is required for beam alignment over a water-air OWC channel. Accordingly, DDPG, an actor-critic DRL algorithm, is the most suitable choice for this problem. As shown in Fig. <ref>, the agent is composed of an Actor network, which takes actions according to the environment, and a Critic network, which judges the quality of each action and provides feedback to regulate the actor. §.§ DRL Beam Alignment System Training Process In this paper, the beam alignment algorithm is trained as the agent, and the optical channel is considered the environment. Key factors that influence the training process include the initial states and the hyper-parameters, which must be set appropriately. The details of the training process are described below. §.§.§ Establishing the Environment An RL environment can be described by two functions, namely the reset function and the step function. The reset function resets the state of the environment; it requires no extra input parameter and outputs the initial observation vector and the initial state. The step function applies the changes in the environment caused by the natural deformation of the water surface and by the actions of the agent; it takes the previous state and the action vector as input parameters, and its four output parameters are listed in Table <ref>. Both functions are implemented on the basis of the cross-domain OWC channel model. The reward plays a crucial role in the training process, influencing how quickly the model converges. To achieve a good training result, it is necessary to design a reward function that computes the step reward from the characteristics of the environment; the agent then collects more reward by adjusting its action strategy. As the beam alignment performance is directly related to the light intensity at the receiver, the reward function should discriminate clearly between different actions. Therefore, the reward function is designed as a logarithm-exponential (LE) function of the light intensity. It includes logarithm terms to resolve intensities of small orders of magnitude and an exponential term to amplify larger intensities, given by Reward(I) = G · (ln(aI+1) + lg(bI) + exp(cI)) + B where I represents the received light intensity, G is the total gain that controls the peak value, a, b, and c are coefficients that bring each term to the same order of magnitude, and B is a bias that can simplify the calculation without influencing the training process. §.§.§ Options of the Agent The agent interacts continuously with the environment; therefore, the interface between the agent and the environment is a key factor in both the training process and practical application. This interface can be divided into two groups, organized into two vectors named observation and action. The observation vector contains all factors that can be observed by the agent from the environment and acts as the input of the actor and critic networks. In the cross-domain OWC channel, the observation vector is designed as an eight-dimensional vector including the transmitter direction, the receiver direction, the light intensity, and the relative time. The action vector contains all factors that determine the action of the agent. In the cross-domain OWC channel, the action vector is designed as a four-dimensional vector including the transmitter direction and the receiver direction. It is worth noting that the orientations of the transmitter and receiver are limited (transmitter upwards and receiver downwards), so two-dimensional vectors can be used to represent the three-dimensional directions they can reach. This reduces the number of network parameters, thereby reducing training and running costs. §.§.§ Hyper-Parameters for the Training Process After the environment and agent have been established, the training process is ready to begin. Before training, various hyper-parameters can be set that influence the training process and its result in various ways. The hyper-parameters that have a critical influence on the training process are customized and shown in Table <ref>, while the others keep their default values.
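A minimal sketch of the reward and of the reset/step interface described above is given below. The LE reward follows the equation directly (with a small floor on the intensity to keep the lg term finite); the packing of the eight-dimensional observation as two 3-D unit vectors plus intensity and time, the coefficient values, and the injected channel simulator are assumptions rather than the authors' implementation.

```python
import numpy as np

def le_reward(intensity, gain=1.0, a=1.0, b=1.0, c=1.0, bias=0.0, eps=1e-12):
    """Logarithm-exponential reward of the equation above; coefficients are illustrative."""
    i = max(float(intensity), eps)          # guard the lg term against zero intensity
    return gain * (np.log(a * i + 1.0) + np.log10(b * i) + np.exp(c * i)) + bias

def to_unit_vector(azimuth, elevation):
    """Map a 2-D (azimuth, elevation) action to a 3-D pointing direction."""
    return np.array([np.cos(elevation) * np.cos(azimuth),
                     np.cos(elevation) * np.sin(azimuth),
                     np.sin(elevation)])

class AlignmentEnv:
    """Skeleton of the reset/step environment; the channel simulator is injected."""

    def __init__(self, simulate_intensity, dt=0.05):
        self.simulate_intensity = simulate_intensity  # (tx_dir, rx_dir, t) -> intensity
        self.dt = dt

    def reset(self):
        self.t = 0.0
        self.tx = np.array([0.0, np.pi / 2])    # transmitter points straight up
        self.rx = np.array([0.0, -np.pi / 2])   # receiver points straight down
        return self._obs(0.0)

    def step(self, action):
        self.tx, self.rx = action[:2], action[2:]      # 4-D continuous action
        self.t += self.dt
        intensity = self.simulate_intensity(to_unit_vector(*self.tx),
                                            to_unit_vector(*self.rx), self.t)
        return self._obs(intensity), le_reward(intensity), False, {}

    def _obs(self, intensity):
        # 8-D observation: 3-D transmitter direction, 3-D receiver direction, intensity, time
        return np.concatenate([to_unit_vector(*self.tx), to_unit_vector(*self.rx),
                               [intensity, self.t]])
```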
§ SIMULATION RESULTS In this section, the results of the simulation are displayed and analyzed. The parameter settings of the simulation environment are as follows: the sea level is taken as 0 m, and the transmitter and receiver are assumed to be located at -10 m in the sea and 10 m in the air, respectively. The refresh interval of the environment is set to 0.05 s. During the experiment, the proposed method is compared with the following baselines: a) the theoretical upper bound (UB) with the maximum channel gain; b) a straight-facing alignment strategy in which the transmitter and receiver face each other directly; c) the gain obtained with no alignment algorithm. The simulation of the proposed method is conducted in an environment with a wind speed of 12 m/s, a vertical distance of 20 m, a horizontal offset of 10%, and a time span of 25 s with a 0.05 s sampling interval. The resulting OWC channel gain over time is shown in Fig. <ref>. The proposed method performs much better than the method without alignment and is closer to the theoretical upper bound than the straight-facing alignment method. As shown in Fig. <ref>(a), the average channel gain of the proposed algorithm remains at a high level, closer to the theoretical upper bound than the other methods, and is barely affected by the horizontal offset. In addition, to verify the stability of the method, the variable σ_diff^2=var(G_UB-G) is used as a stability measure, where G_UB and G are the channel gains of the upper bound and of each method, respectively. A lower value of σ_diff^2 indicates that the method is more stable against wave influences. As shown in Fig. <ref>(b), the σ_diff^2 of the proposed method is lower than that of its counterparts, which indicates its higher stability. Overall, the proposed method is more resistant to horizontal offsets and to dynamic-wave fluctuations, and thus achieves higher channel gain and better stability in the water-air OWC channel than the other methods. § CONCLUSION In this paper, we propose a DRL-based beam alignment strategy for water-air direct OWC, which can significantly enhance resilience to the dynamic characteristics of the water surface. More specifically, the dynamic properties of the water-air interface are investigated with sea-wave spectrum analysis, followed by propagation modeling using ray-tracing methods. On the basis of the established channel model, the beam alignment problem is modeled as a reinforcement learning process, where the DDPG scheme is employed. To further enhance its convergence performance, a logarithm-exponential (LE) nonlinear reward function with respect to the received signal strength is developed for more distinguishable rewards between different actions. Simulation results demonstrate that the proposed method maintains a high channel gain in the water-air OWC channel and is robust to wave fluctuations and horizontal offsets. § ACKNOWLEDGMENT This work was supported in part by the National Natural Science Foundation of China under Grant No. 62088101, in part by the Young Elite Scientists Sponsorship Program by CAST under Grant 2022QNRC001, in part by the National Natural Science Foundation of China under Grant 62101306, and in part by the National Natural Science Foundation of China under Grant 62371065. (Jiayue Liu and Tianqi Mao are Co-first authors with equal contribution.) (Corresponding author: Dezhi Zheng.)
References:
[An_overview] M. C. Domingo, "An overview of the internet of underwater things," Journal of Network and Computer Applications, vol. 35, no. 6, pp. 1879–1890, 2012. [Online]. Available: <https://www.sciencedirect.com/science/article/pii/S1084804512001646>
[Overview_UWC] M. C. Domingo, "Overview of channel models for underwater wireless communication networks," Physical Communication, vol. 1, no. 3, pp. 163–182, 2008. [Online]. Available: <https://www.sciencedirect.com/science/article/pii/S1874490708000451>
[Zhou_access_19] J. Zhou, H. Jiang, P. Wu, and Q. Chen, "Study of propagation channel characteristics for underwater acoustic communication environments," IEEE Access, vol. 7, pp. 79438–79445, 2019.
[A_Survay_UOWC] Z. Zeng, S. Fu, H. Zhang, Y. Dong, and J. Cheng, "A survey of underwater optical wireless communications," IEEE Communications Surveys & Tutorials, vol. 19, no. 1, pp. 204–238, 2017.
[Waving_Effect] T. Lin, C. Fu, T. Wei, N. Huang, X. Liu, L. Tang, L. Su, J. Luo, and C. Gong, "Waving effect characterization for water-to-air optical wireless communication," Journal of Lightwave Technology, vol. 41, no. 1, pp. 120–136, 2023.
[Recent_Progress] H. Luo, J. Wang, F. Bu, R. Ruby, K. Wu, and Z. Guo, "Recent progress of air/water cross-boundary communications for underwater sensor networks: A review," IEEE Sensors Journal, vol. 22, no. 9, pp. 8360–8382, 2022.
[xiaoyang2019performance] X. Xiaoyang, S. Liwei, Z. Jinyu, Z. Wu, D. Wenjing, and Z. Xu, "Performance analysis of sea unmanned ship routing protocol based on ad hoc network," in 2019 International Conference on Information Technology and Computer Application (ITCA), IEEE, 2019, pp. 221–224.
[Underwater_optic] H. Kaushal and G. Kaddoum, "Underwater optical wireless communication," IEEE Access, vol. 4, pp. 1518–1547, 2016.
[Effect_of] Y. Dong, S. Tang, and X. Zhang, "Effect of random sea surface on downlink underwater wireless optical communications," IEEE Communications Letters, vol. 17, no. 11, pp. 2164–2167, 2013.
[Underwater_and] L.-K. Chen, Y. Shao, and Y. Di, "Underwater and water-air optical wireless communication," Journal of Lightwave Technology, vol. 40, no. 5, pp. 1440–1452, 2022.
[Mitigation] Y. Di, Y. Shao, and L.-K. Chen, "Mitigation of wave-induced packet loss for water-air optical wireless communication by a tracking system," in 2021 Optical Fiber Communications Conference and Exhibition (OFC), 2021, pp. 1–3.
[Preliminary] T. Lin, N. Huang, C. Gong, J. Luo, and Z. Xu, "Preliminary characterization of coverage for water-to-air visible light communication through wavy water surface," IEEE Photonics Journal, vol. 13, no. 1, pp. 1–13, 2021.
[Improvement_of] D. Wu, Z. Ghassemlooy, H. L. Minh, S. Rajbhandari, and A. C. Boucouvalas, "Improvement of the transmission bandwidth for indoor optical wireless communication systems using a diffused gaussian beam," IEEE Communications Letters, vol. 16, no. 8, pp. 1316–1319, 2012.
[Study_on] H. Wu and Q. Fan, "Study on led visible light communication channel model based on poisson stochastic network theory," in 2020 International Conference on Wireless Communications and Smart Grid (ICWCSG), IEEE, 2020, pp. 5–9.
[Absorption_spec] R. M. Pope and E. S. Fry, "Absorption spectrum (380–700 nm) of pure water. II. Integrating cavity measurements," Applied Optics, vol. 36, no. 33, pp. 8710–8723, 1997.
[Fresnel] A. I. Lvovsky, "Fresnel equations," Encyclopedia of Optical Engineering, vol. 27, pp. 1–6, 2013.
[A_Study_Spectra] J. Prendergast, M. Li, and W. Sheng, "A study on the effects of wave spectra on wave energy conversions," IEEE Journal of Oceanic Engineering, vol. 45, no. 1, pp. 271–283, 2020.
[Three-dimensional] Z. Chang, F. Han, Z. Sun, Z. Gao, and L. Wang, "Three-dimensional dynamic sea surface modeling based on ocean wave spectrum," Acta Oceanologica Sinica: English ver., vol. 40, no. 10, p. 11, 2021.
[Ray_tracing] C. Benthin, I. Wald, M. Scherbaum, and H. Friedrich, "Ray tracing on the cell processor," in 2006 IEEE Symposium on Interactive Ray Tracing, 2006, pp. 15–23.
[lang1987linear] S. Lang, Linear Algebra. Springer Science & Business Media, 1987.
http://arxiv.org/abs/2409.02195v1
20240903180351
GRANDProto300: status, science case, and prospects
[ "Simon Chiche" ]
astro-ph.HE
[ "astro-ph.HE", "astro-ph.IM" ]
GRANDProto300: status, science case, and prospects Simon Chiche ============================================================ § INTRODUCTION GRANDProto300 is the mid-scale prototype of the GRAND <cit.> experiment, designed to detect radio signals emitted by ultra-high-energy astroparticles interacting in the Earth's atmosphere. It will serve as a test bench for the GRAND experiment and probe the feasibility of radio detection of astroparticles with a sparse, large-scale radio array. Specifically, GRANDProto300 aims to demonstrate the feasibility of autonomous radio detection and the accurate reconstruction of inclined air showers in a radio-quiet environment. Its science case will range from the study of the Galactic-to-extragalactic transition to fast radio bursts and ultra-high-energy gamma rays. We present the current status of the experiment, its design, and its expected performance. § GRANDPROTO300 CONCEPT §.§ Site location and radio array design GRANDProto300 will consist of a radio array of 300 antennas deployed over ∼ 200 km^2 in the radio-quiet location of Xiao Dushan (Gansu province, China). This location was chosen for its low radio background, its average altitude of 1100 m above sea level, and its flat solid ground in a mountainous area, making it an ideal topography to detect inclined air showers. The radio array location has been officially approved by the Chinese authorities, and antenna deployment began in 2023. As shown in Fig. <ref>, the antenna layout consists of a hexagonal grid, combining a sparse array (1 km step) with a denser infill (577 m step). Thanks to its design, GRANDProto300 should be able to target the radio emission from ultra-high-energy cosmic rays and possibly gamma rays within the energy range 10^16.5-10^18 eV, where a transition from Galactic to extragalactic sources of cosmic rays is expected to happen <cit.>. Eventually, GRANDProto300 will act as the seed of GRAND10k <cit.>, a radio array of 10 000 antennas that should be able to target the first ultra-high-energy neutrinos. §.§ Challenges of large-scale radio detection GRANDProto300 will be a test bench for the GRAND experiment. As such, it will need to address several challenges of radio detection. First, GRANDProto300 will need to reconstruct the characteristics of inclined air showers (with near-horizontal arrival directions), such as the nature of the primary particle, its energy, and its arrival direction. Inclined air shower detection with a sparse radio array is still uncharted territory and is challenging because these showers undergo many effects, such as ground reflection, asymmetries, or coherence loss <cit.>, making their radio emission more complex than that of vertical showers. This means that reconstruction algorithms accounting for these effects need to be developed and tested by GRANDProto300 <cit.>. Another challenge that GRANDProto300 aims to tackle is the autonomous radio detection of high-energy astroparticles. Current experiments for air-shower radio detection, such as AERA <cit.> or LOFAR <cit.>, combine radio antennas with surface detectors to detect the arrival of an air shower and then trigger the acquisition of the radio signal by the antennas. Yet, external triggers become too expensive for an experiment like GRAND with thousands of antennas. Hence, it is necessary to develop a trigger based on the radio signals only, which requires performing background rejection as early as possible in the detection chain <cit.>.
Towards this purpose, several promising approaches rely, for example, on the radio signal polarization <cit.> or on machine-learning algorithms <cit.>. Eventually, GRANDProto300 will also allow us to test and validate the GRAND software <cit.> and hardware (antenna and electronics design, reliability, etc.) in experimental conditions, which will allow the experiment to be scaled up to future construction stages. § GRANDPROTO300 COMMISSIONING GRANDProto300 is currently in a commissioning phase: the radio antennas have been deployed, the hardware has been tested, and the first data are being taken. §.§ Detector overview GRANDProto300 uses butterfly antennas with three perpendicular arms to measure the three polarizations of the radio signal in the frequency range 50-200 MHz. Each antenna is powered by solar panels, equipped with a low-noise amplifier (LNA), and linked to a data acquisition (DAQ) box containing a front-end board with a field-programmable gate array (FPGA) that filters the radio signal and samples it at a rate of 500 megasamples/s. Eventually, all data are transferred to a central DAQ station through a bullet Wi-Fi link. §.§ GP13 testing and radio measurements The first 13 antennas were deployed in Xiao Dushan in February 2023. Using this setup, various tests of the equipment were performed. As shown in the left-hand panel of Fig. <ref>, ∼ 40 measurements of frequency spectra were carried out to test the influence of each parameter. The conclusion was that all components of the detection unit radiate. Additionally, the tests revealed a heating issue with the LNA and the front-end board. To solve this issue, all the components were shielded; this fixed the heating problem and allowed for radio measurements with much cleaner frequency spectra. As shown in the right-hand panel of Fig. <ref>, GP13 then successfully triggered on the signal from a beacon antenna fed with a sine wave. After testing the equipment, the first data were taken. As shown in Fig. <ref>, coincident pulses were observed between the different channels of a given antenna (left-hand panel), and radio signals were observed in temporal coincidence between several antennas (right-hand panel). Eventually, using the timing information at the antenna level, the position of a beacon antenna located on top of the central DAQ station was reconstructed, as shown in Fig. <ref>. The top-left panel shows the distribution of the trigger times at several detection units. The results yield a standard deviation of the trigger times of ∼ 5 ns for most units, which meets the expectations targeted by GRAND for an accurate reconstruction. The timing information was then used to evaluate a normalized χ^2, i.e., the summed squared differences between the measured data and a model predicting the trigger times, assuming a spherical wavefront for the radio signal (bottom-left panel). Quality cuts are applied to keep only signals with a normalized χ^2<5. All the events that pass this cut (171 out of 172 in this case) were then used to reconstruct the beacon position. The results are shown in the top-right and bottom-right panels. The position of the beacon was successfully reconstructed with a standard deviation of ∼ 15 m in Northing and ∼ 8 m in Easting.
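A hedged sketch of the spherical-wavefront timing fit used for the beacon reconstruction is shown below. The exact χ² definition used by GP13 is not given above; this version assumes trigger-time differences with respect to a reference antenna, propagation at the vacuum speed of light, a 5 ns timing resolution, and a generic Nelder-Mead minimizer, and it applies the χ² < 5 quality cut mentioned in the text.

```python
import numpy as np
from scipy.optimize import minimize

C = 2.998e8  # propagation speed assumed equal to c [m/s]

def normalized_chi2(source, antenna_xyz, t_trig, sigma_t=5e-9):
    """Chi^2 per degree of freedom between measured trigger times and a spherical wavefront."""
    d = np.linalg.norm(antenna_xyz - source[None, :], axis=1)
    dt_model = (d - d[0]) / C          # model time differences w.r.t. the first antenna
    dt_meas = t_trig - t_trig[0]       # removes the unknown emission time
    ndof = max(len(t_trig) - 4, 1)     # 3 source coordinates fitted (assumption)
    return np.sum(((dt_meas - dt_model) / sigma_t) ** 2) / ndof

def reconstruct_source(antenna_xyz, t_trig, x0):
    """Fit the emission point; keep the event only if the normalized chi^2 is below 5."""
    res = minimize(normalized_chi2, x0, args=(antenna_xyz, t_trig), method="Nelder-Mead")
    chi2 = normalized_chi2(res.x, antenna_xyz, t_trig)
    return res.x, chi2, chi2 < 5.0
```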
§.§ Next stages of GRANDProto300 The GP13 radio array will soon be expanded to GP80, with 70 additional antennas to be deployed by the end of 2024. By then, the firmware needs to be updated, the communication between the detection units and the central DAQ needs to be improved, and the aim is to reconstruct the arrival direction of other sources. With 83 antennas (the current 13 plus 70 additional, evenly spaced), trigger-rate predictions estimate that, with a conservative threshold, GP80 should be able to detect ∼ 30 cosmic-ray-induced events per day, with primary energies between 2× 10^17 and 2× 10^18 eV. The main objective of GP80 will therefore be to reconstruct the first cosmic-ray events. The GRANDProto300 stage is then expected by ∼ 2026, with 𝒪(300) radio antennas over 200 km^2, and could also be complemented with surface detectors to help calibrate the detector. This radio array should be able to detect ultra-high-energy cosmic rays and possibly ultra-high-energy gamma rays (if complemented by surface detectors) in the energy range 10^16.5-10^18 eV, and it will validate the GRAND detection principle, allowing the experiment to scale up to the further stages. § EXPECTED PERFORMANCES GRANDProto300 will be the first experiment to detect inclined air showers using a sparse radio array. An accurate reconstruction of air showers will therefore be crucial to validate the GRAND detection principle. We discuss below the expected performances of GRANDProto300. §.§ Antenna layout and trigger rate Different antenna configurations were tested to evaluate and optimize the GRANDProto300 trigger rate using ZHAireS Monte-Carlo simulations <cit.>. The study found that a hexagonal grid of antennas provides the best performance. Additionally, the effect of adding a dense infill on top of the hexagonal grid was tested, considering different geometries and a sparse array with a 1 km spacing. The trigger rate was estimated by fixing a trigger threshold at the antenna level of V=75 μ V and requiring that at least N_trig=5 neighboring antennas are triggered to consider an event triggered. The trigger rate for the different geometries is shown in Fig. <ref> as a function of the primary particle energy (left-hand panel) and zenith angle (right-hand panel). Both plots demonstrate that the hexagonal and triangular geometries yield similar trigger rates, despite the triangular configuration having approximately 100 more antennas. Moreover, the results indicate that the inclusion of an infill enhances the detection efficiency.
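The event-level trigger condition used in these estimates (a 75 μV antenna threshold and at least N_trig = 5 neighboring antennas fired) can be prototyped with the toy function below; the definition of "neighboring" as antennas within roughly one grid step of each other is an assumption, since the text does not specify it.

```python
import numpy as np

def event_triggers(antenna_xy, peak_voltage, v_thr=75e-6, n_trig=5, neighbor_radius=1.2e3):
    """Toy event-level trigger: at least n_trig mutually close antennas above threshold.

    antenna_xy: (N, 2) antenna positions [m]; peak_voltage: (N,) peak voltages [V].
    The neighbor radius (~one grid step) is an assumption of this sketch.
    """
    hot = np.flatnonzero(peak_voltage >= v_thr)
    if hot.size < n_trig:
        return False
    for i in hot:
        dist = np.linalg.norm(antenna_xy[hot] - antenna_xy[i], axis=1)
        if np.count_nonzero(dist <= neighbor_radius) >= n_trig:  # includes antenna i itself
            return True
    return False
```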
§.§ X_max and angular resolution The reconstruction of the depth of shower maximum, X_max, and of the shower direction were also evaluated with ZHAireS simulations. The electric field amplitude at the antenna level was first described with a phenomenological model that exploits the symmetries of the radio signal to derive an angular distribution function (ADF) <cit.>. The ADF depends on the shower arrival direction and on the radio-signal emission point, which are treated as free parameters. Assuming a spherical wavefront model, these parameters were fitted using a χ^2 minimization based on the measured amplitudes at the antenna level <cit.>. On the left-hand panel of Fig. <ref> we show the reconstructed X_max as a function of the primary particle energy. The results show a similar behaviour between the reconstructed X_max and the models. Additionally, the standard deviation on X_max was shown to be comparable to that achieved by current radio experiments <cit.>. On the right-hand panel of Fig. <ref> we show that with a GRANDProto300-like layout we reach a mean and median angular resolution below 0.1^∘. If confirmed experimentally, this angular resolution will be one of the main assets of the GRAND experiment, making possible the identification of the first ultra-high-energy point sources and opening the path toward ultra-high-energy neutrino astronomy. § SCIENCE CASE Thanks to its design and expected performance, GRANDProto300 will have a broad science case covering astroparticle physics and radio astronomy. §.§ Galactic-to-extragalactic transition GRANDProto300 will be able to detect cosmic rays in the energy range 10^16.5-10^18 eV, between the knee and the second knee of the cosmic-ray spectrum, where a Galactic-to-extragalactic transition of sources is believed to happen <cit.>. GRANDProto300 will add more statistics in this transition region and provide the first independent measurement of the cosmic-ray spectrum based on the radio signal only. Additionally, if complemented with particle detectors, GRANDProto300 will be able to measure the shower electromagnetic and muonic content independently, providing one of the most efficient ways to reconstruct the nature of the primary particle and to infer an event-by-event mass composition <cit.>. §.§ Ultra-high-energy gamma rays Ultra-high-energy gamma rays (>10^17 eV) are expected to be produced by the interaction of ultra-high-energy cosmic rays with cosmological photon backgrounds <cit.>. Yet, their detection is challenging since their interaction cross section limits their horizon to ∼ 1 Mpc. Since almost no muons are expected in gamma-ray-induced air showers, an independent measurement of the shower electromagnetic and muonic content would provide an efficient way to identify gamma-ray primaries. Hence, if GRANDProto300 is complemented by particle detectors, it could detect the first ultra-high-energy gamma rays or at least put new limits on their flux. §.§ Fast radio bursts Eventually, GRANDProto300 should be able to detect fast radio bursts (FRBs), powerful transient radio pulses with a typical duration of a few ms <cit.>. Thanks to its large field of view and high sensitivity, GRANDProto300 is well suited to perform FRB searches. Investigations are in progress to evaluate whether GRANDProto300 should target FRBs using the unphased radio signal or beam-forming. In the latter case, we estimate that GRANDProto300 should detect ∼ 1 FRB per month. § CONCLUSION GRANDProto300 is a 300-antenna radio array that will detect cosmic rays in the energy range 10^16.5-10^18 eV. The first 13 antennas were deployed in 2023 and the preliminary data are encouraging. The next step will be the deployment of 70 additional antennas by 2025, which should allow for the first detection of a cosmic-ray event. The full GRANDProto300 radio array is expected by 2026 and will cover several physics cases, including the Galactic-to-extragalactic transition, ultra-high-energy gamma rays, and fast radio bursts. GRANDProto300 will then be expanded to GRAND10k in the 2030s, which will target the detection of the first ultra-high-energy neutrinos.
http://arxiv.org/abs/2409.02436v1
20240904042636
Occlusion-Based Cooperative Transport for Concave Objects with a Swarm of Miniature Mobile Robots
[ "Sanjuksha Nirgude", "Animesh Nema", "Aishwary Jagetia" ]
cs.RO
[ "cs.RO" ]
Occlusion-Based Cooperative Transport for Concave Objects with a Swarm of Miniature Mobile Robots Sanjuksha Nirgude, Animesh Nema, Aishwary Jagetia Robotics Engineering Department Worcester Polytechnic Institute Worcester, MA, USA [email protected], [email protected], [email protected] September 9, 2024 ================================================================================================================================================================================================ § ABSTRACT An occlusion based strategy for collective transport of a concave object using a swarm of mobile robots has been proposed in this paper. We aim to overcome the challenges of transporting concave objects using decentralized approach. The interesting aspect of this task is that the agents have no prior knowledge about the geometry of the object and do not explicitly communicate with each other. The concept is to eliminate the concavity of the object by filling a number of robots in its cavity and then carry out an occlusion based transport strategy on the newly formed convex object or "pseudo object". We divide our work into two parts- concavity filling of various concave objects and occlusion based collective transport of convex objects. decentralized approach, collective transport, occlusion based transport, concave object, convex object. § INTRODUCTION AND BACKGROUND Swarm Intelligence is the collective behavior of decentralized, self-organized systems which could be either natural or man-made. Swarm Intelligence exists in nature such as the behavior of ants, bees, birds which can be used as an inspiration for finding its applications in the field of robotics. One such application is the transportation of objects using a number of mobile robots. Such tasks may seem trivial at first, but can be incredibly complicated, depending on various aspects such as shape of the object, size of the object, visibility and perception. In addition to that, the process is decentralized i.e., every agent behaves independently instead of following a fixed leader. There are various techniques to transport an object using decentralized approach. General structure of a robot's behavior, performing collective transport consists of searching the object, positioning itself around the object and then transporting the object<cit.>. Collective transport methods can be categorized into three categories: Pulling, Pushing and Caging. Pulling constitutes of complex mechanisms like grasping and lifting the objects, whereas caging requires robots to maintain their formation during dynamic movements. During pushing, robot's pushing positions and speed are the constraints to be addressed. Increasing the number of pushing robots increases the stability of the object as pushing force is distributed over multiple points. Also, due to hardware requirements for pulling and caging strategies, we have preferred pushing strategy for our analysis. In the paper<cit.> a simple odometry based co-ordination strategy in combination with omni-directional camera has been used. There is no communication between robots while performing collective transport task. The transportation strategy in this paper consists of four stages, namely prey discovery, team co-ordination, recruitment and transportation. In <cit.> a collective transport approach has been proposed by the authors using kilobots and r-ones. The agents have no prior knowledge about the object's shape,size, location of its neighbors or the object. 
They only know the location of the goal. The agents perceive the direction of the goal using their light sensor(s) and apply forces on the object in the direction of the perceived light. By doing so, they optimally transport objects of complex shapes to the desired location. The r-ones also execute a flocking behavior in case some agents are occluded from the light source. In such a case, the robots observe the direction of their neighbors to modify their own directions. The authors proved experimentally the scalability, robustness, and optimality of their approach by testing their agents under different circumstances. In <cit.> the authors have proposed a 5-step approach to successfully transport an object of any shape, be it concave or convex. The tasks assigned to the robots are in the following order. First, the robots have to explore the environment and locate the object. Once the robots find the object, they align themselves with the object and grasp it. The third step is object characterization, i.e., finding the centroid of the object, its width, diameter, orientation, etc. They do so by determining their own positions and centroid. The object's diameter determines the minimum distance from an obstacle at which it is safe to rotate the object. Once the object information has been extracted, the robots then perform a path-planning function. During this phase, some of the robots stay attached to the object while others explore a suitable path. The robots then navigate the object through a chosen path. The paper discusses various algorithms for each of its steps, hence giving an idea about the collective transport of complex objects. Our paper uses an occlusion-based collective transport strategy. The concept behind the occlusion-based strategy is that each agent searches for the object, moves towards it, and then looks for the goal. All the robots push the object by moving in a direction perpendicular to the object's surface at their points of contact. This way, the motion of the object will be approximately towards the goal. The robots work in a decentralized manner and conduct co-operative transport without explicitly communicating with each other. Our project is divided into two parts, in which we address the concavity filling of objects and the collective transport of a convex object separately. In the collective transport experiment, the location of the object is known to each robot, along with its own position. The task sequence starts with the robot orienting towards the object and moving towards it. When the robot reaches the object, it checks if the goal (in our case, the light source) is visible from its position. If the goal is visible, the robot moves around the object, executing a left-hand-wall-following behavior, and looks for the goal again; if the goal is not visible (or occluded), the robot starts pushing the object. This is repeated by every agent collectively, till the object is transported to the goal. While this approach has been successfully tested on convex objects <cit.>, concave objects pose much harder challenges. The agents can easily lose the sense of direction towards the goal and might never transport the object. Our motivation for this project comes from the paper <cit.>, in which one of the major limitations was the inability to transport concave objects. We propose a method to overcome this drawback of the occlusion-based strategy.
The object's concavity can be eliminated by filling its concave contour with a number of agents, thus making it more 'convex'-like. Another swarm of agents, referred to as the 'pushing agents', can then perform the occlusion-based approach mentioned above by treating the object as any other convex polygon. Therefore, a concave object can be transported to the goal. § PROPOSED WORK As described above, the occlusion-based collective transport strategy only works when the object is convex. If the object is concave, the strategy fails. Based on our strategy, the robot initially searches for an object while performing a random walk. Once the object is seen, the robot approaches it. When the robot reaches the object, it performs a left-hand-wall-following behavior around the object to check whether the object is concave or convex. If the range of angles over which the object is detected relative to the robot is greater than pi, the object is identified as concave; otherwise, it is identified as convex. It can be seen in figure <ref> that the robot is on the convex side of the object; hence, the angle is less than pi and the object will be identified as a convex object. In contrast, in figure <ref> the robot senses the object over an angle of more than pi, therefore identifying it as a concave object. If the object is detected to be concave, the robot stays at that position and changes its robot id to "object robot", hence filling part of the object's concavity. When the next robot arrives behind this robot, it considers the first robot as part of the object and checks whether the concavity of the object still exists. This process repeats until the concavity of the object is filled and the robots collectively form a "pseudo object", which incorporates both the actual object and the object robots. Once the pseudo object becomes convex, the remaining robots change their robot id to "pushing robots", while the "object robots" stay attached to the object. Hence, the task of converting the concave object into a convex object is accomplished. At this point, the occlusion-based collective transport strategy comes into effect. Now the robot checks if the goal can be seen from its position. If it is not visible, i.e., if the robot is in the occluded region, the robot must start pushing the object; otherwise, the robot moves around the object, executing a left-hand-wall-following behavior. As shown in figure <ref>, after filling the concavity of the object, the robots start applying the occlusion-based collective transport strategy to move the object. The robots with red LEDs are now the "pushing robots" in the occluded area of the object. Position A and position B are the positions of the extreme observer robots from which the goal is visible.
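A minimal sketch of the concavity test described in this section is given below: a robot classifies the object as concave when the angular range spanned by its object detections exceeds pi. The wrap-around handling of bearings is an implementation assumption; on the real robots this check would run on the discrete bearings of the proximity sensors.

```python
import numpy as np

def object_is_concave(detection_bearings):
    """Return True if the object locally appears concave to this robot.

    detection_bearings: bearings (rad, robot frame) of proximity readings that
    hit the object. Following the rule above, the object is treated as concave
    when the angular range spanned by the readings exceeds pi.
    """
    a = np.sort(np.mod(np.asarray(detection_bearings, dtype=float), 2.0 * np.pi))
    if a.size < 2:
        return False
    gaps = np.diff(np.concatenate([a, [a[0] + 2.0 * np.pi]]))
    spanned = 2.0 * np.pi - gaps.max()   # angular range actually covered by the readings
    return spanned > np.pi
```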
Using the concave filling algorithm described below, we tested concavity filling on these different shapes. §.§.§ Algorithm On initialization, the robots perform a random walk to look for the object with their LEDs switched off. Once within a certain range of the object, a robot turns red, indicating that it has started using its proximity sensors to probe the object geometry while performing the left-hand wall-following behavior. As soon as four proximity readings (theoretically it should be five) are detected simultaneously, the robot turns blue, signaling that it has become part of the pseudo object. §.§.§ Parameters We tested the following parameters for the concave filling. * Distribution of the robots: The distribution of the robots within a certain range only slightly affected the performance of the system. The distribution does influence the concave filling, but in most cases the robots were still able to accomplish the task. However, placing the robots too close to the object during initialization, or orienting them away from the light source, degrades performance. * Number of robots executing concave filling: The number of robots affects performance in two ways. * The total number of robots executing the task determines the degree of filling: too few robots are incapable of filling the object, depending on its size, so enough robots have to be deployed to fill it completely. * The number of robots deployed simultaneously affects performance because the proximity sensors detect neighboring robots but do not distinguish them from the object; the robots are programmed to stop as soon as four sensors simultaneously return readings. * Light source above the object: Within a given range, the light source above the object has little effect on performance. The robots do not use the light source during the concave filling itself; they only use it to move towards the object, and once within range they rely solely on their proximity readings. Changing the location of the light source may, however, change how the robots approach the object. * Shape and size of the object: The time and accuracy of the concave filling vary with the shape of the object. For a given shape, however, the size does not affect performance drastically, although a larger object may require additional robots. §.§.§ Results * Analysis 1: U shape, as shown in figure <ref> * Analysis 2: U shape with 15 degree orientation, as shown in figure <ref> * Analysis 3: L shape, as shown in figure <ref> * Analysis 4: L shape with changed initial robot orientation, as shown in figure <ref> * Analysis 5: L shape with a different number of robots, as shown in figure <ref> * Analysis 6: Arc shape, as shown in figure <ref> §.§ Experiment 2: Collective Transport In our second experiment we replicated the occlusion-based experiments of <cit.> for a convex object. §.§.§ Experimental Setup The environment, shown in figure <ref>, is a rectangular arena bounded by walls. The floor of the arena is white and its walls are gray. The goal is a light source, depicted as a yellow sphere, placed at a certain height so that the robots cannot collide with it. The initial configuration of a trial is illustrated in figure <ref>. We also ran trials in which the robots were instead distributed randomly in the environment.
The object's centroid and orientation were chosen randomly, as shown. The sequence of the implementation is explained with the algorithm below. §.§.§ Algorithm In this paper the occlusion-based algorithm is implemented for convex objects; it can also be applied to concave objects after their concavity has been filled. The experiment starts with finding the object position. Each robot knows its own X and Y position as well as the object's position, which is dynamic, so as the object moves the robots receive its updated position. From these two positions a robot can compute the bearing of the object, and it keeps checking whether its current heading points towards the object. If it does, the robot marks itself as oriented; if not, it keeps rotating to orient itself. Once oriented, the robot starts moving towards the object. After reaching the object, the robots use their proximity sensors to keep a distance from the object and follow the left-hand wall rule, while continuously checking for light with their light sensors. When a robot reaches the occluded region of the object, its light sensor returns no readings and the robot is declared to be in the occluded area. The next task is to push the object: the robot orients itself towards the object again, using the current position of the object and its own current position as in the initial orientation step, and then moves towards the object; this time it is not constrained by the proximity sensor, so it touches the object and pushes it. Since the occluded region is determined by the light source, the robots push the object towards the light source, which is our goal. §.§.§ Parameters For the occlusion-based transport, we tested the following parameters: * Distribution of the robots: One of the most important tests of the robustness of the experiment is whether the robots can perform the task irrespective of how they are initially placed in the environment. We carried out experiments with the robots placed randomly in the environment and with the robots placed in a formation, as shown in figure <ref>, and compared the results. The results were better with random placement than with the formation. Since the robots first try to orient themselves towards the object and then move towards it, a random distribution lets them avoid robots that have not yet oriented themselves, giving better results. When the robots are placed in formation, they all orient together and therefore start moving towards the object together; because reaching the object is detected from the proximity values, the robots are misled when they encounter other robots, treating them as the object and performing the left-hand wall rule around them. This happens on a large scale when the robots are placed in a grid formation. * Position of the object: The position of the object also influences the performance of the experiment. When the object was placed near a wall, the robots approached the object but then got lost: near the wall they start applying the left-hand wall rule to the wall instead of the object.
They then keep circling the arena wall, which lowers the success rate of the trials. § LIMITATIONS §.§ Concave Filling * The range of the proximity sensors affects the performance of the experiments, because the Khepera IV has a limited sensing range; it is therefore very hard to obtain five simultaneous proximity readings (the signal that the angle is greater than pi). * Deploying too many robots simultaneously degrades performance, since the robots cannot distinguish between the object and neighboring robots and react to a nearby neighbor as they would to the object. §.§ Collective Transport Since we do not use a camera on the robot, we cannot distinguish between the object to be transported and the walls of the arena, which causes the robot to perform the left-hand wall rule along the arena walls. A related limitation is that the object cannot be placed near a wall. Finally, the path followed by the robots is not optimal and may vary from shape to shape. § CONCLUSION The occlusion-based collective transport strategy can be used to perform collective transport when the object and robot positions are known. The concave filling algorithm presented here can be used to eliminate or reduce the concavity of the object, and the robots need no prior knowledge of the object's geometry to perform the filling. The object can then be transported as a "convex" object using the occlusion-based transport strategy. § FUTURE WORK In the future we would like to perform the same experiments using a camera. This would eliminate the need for a light source above the object during concave filling and would allow a robot to distinguish between its neighbors, the walls, the object, and the goal. We would also like to integrate concave filling and occlusion-based transport into a single experiment. A path optimization technique could be used so that the robots follow an optimal path, and obstacle avoidance could be added so that the robots do not collide with their neighbors.
Meshless quadrature formulas arising from numerical differentiation Oleg DavydovDepartment of Mathematics, University of Giessen, Arndtstrasse 2, 35392 Giessen, Germany, <[email protected]>, Bruno Degli EspostiDepartment of Mathematics and Computer Science, University of Florence, Viale Morgagni 67, 50134 Firenze, Italy, <[email protected]> September 9, 2024 ======================================================================================================================================================================================================================================================================================================================= § ABSTRACT We suggest a method for simultaneously generating high order quadrature weights for integrals over Lipschitz domains and their boundaries that requires neither meshing nor moment computation. The weights are determined on pre-defined scattered nodes as a minimum norm solution of a sparse underdetermined linear system arising from a discretization of a suitable boundary value problem by either collocation or meshless finite differences. The method is easy to implement independently of the domain's representation, since it only requires as inputs the position of all quadrature nodes and the direction of outward-pointing normals at each node belonging to the boundary. Numerical experiments demonstrate the robustness and high accuracy of the method on a number of smooth and piecewise smooth domains in 2D and 3D, including some with reentrant corners and edges. ./figures/ § INTRODUCTION Efficient high order numerical integration over piecewise smooth domains Ω in ^2 and ^3 and on surfaces is needed in many numerical methods and engineering applications, see recent extensive literature reviews in <cit.>. In this paper we suggest a new technique for generating quadrature formulas on scattered nodes that only requires as input the normals to the boundary at the boundary nodes, in addition to the positions of the nodes themselves, and relies on the discretization of a boundary value problem for the divergence operator by numerical differentiation and the least squares solution of a sparse underdetermined linear system. Recall that the classic way of obtaining a weight vector =(_i)_i=1^N∈^N of a quadrature formula ∫_Ωf(x)(x)≈∑_i=1^N_if(y_i) for the integral with respect to some measure μ in a domain Ω, with given knots Y={y_1,…,y_N}⊂Ω, is to choose a finite dimensional linear space ={ s_1,…, s_M}⊂ C(Ω) with good approximation properties, such as (piecewise) polynomials or radial basis functions, and require the exactness of the formula for all f∈, which is equivalent to the exact reproduction of the moments ∫_Ωs_j, ∫_Ωs_j(x)(x)=∑_i=1^N_is_j(y_i), j=1,…,M, which leads to a linear system for the weight vector , as soon as the moments are known. In particular, the book <cit.> presents many quadrature formulas for special multivariate regions such as cubes, simplices, balls or spheres, and special node sets, where exactness is enforced for spaces of polynomials, and the moments are computed by analytic methods. With the help of the substitution rule these formulas can be adapted to the smooth images of such elementary regions. A more complicated domain may be partitioned into mapped elementary regions leading to a compound quadrature formula generated by summing up the formulas over this partition. This approach is used in particular in the finite element method. 
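As a concrete illustration of this classical moment-based construction (and not of the method proposed in this paper), the following Python sketch computes weights on scattered nodes in the unit square that reproduce the moments of all bivariate monomials of total degree less than four, taking the minimum 2-norm solution of the resulting underdetermined exactness system. The node positions, the polynomial degree and the test integrand are arbitrary choices made for the example.

```python
import numpy as np

# Scattered nodes in the unit square (stand-ins for user-supplied knots y_i).
rng = np.random.default_rng(0)
Y = rng.random((60, 2))

# Basis: monomials x^a * y^b of total degree < 4, whose moments over
# [0,1]^2 are known analytically: 1 / ((a+1)*(b+1)).
degree = 4
exps = [(a, b) for a in range(degree) for b in range(degree - a)]
V = np.array([[x**a * y**b for (a, b) in exps] for (x, y) in Y])    # N x M
moments = np.array([1.0 / ((a + 1) * (b + 1)) for (a, b) in exps])  # M

# Underdetermined exactness system V^T w = moments; np.linalg.lstsq
# returns the solution with the smallest 2-norm.
w, *_ = np.linalg.lstsq(V.T, moments, rcond=None)

# Sanity check on a smooth integrand.
f = lambda x, y: np.cos(x) * np.exp(y)
approx = w @ f(Y[:, 0], Y[:, 1])
exact = np.sin(1.0) * (np.e - 1.0)
print(abs(approx - exact))
```

The sketch highlights the two ingredients that become problematic on general domains: the moments must be known analytically, and the construction above only works on a region simple enough for that to be the case.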
For the high order integration it however requires challenging high order mesh generation, often prohibitively costly and non-robust in applications <cit.>. Instead of meshing into images of elementary regions, different types of partitioning may be used followed by the dimension reduction that replaces volume or surface integrals by repeated univariate integration, such as the standard iterated integration on generalized rectangles or sectors <cit.>. Recent versions of the dimension reduction method <cit.> rely on the divergence theorem over arbitrary subregions with known parametrizations of their piecewise smooth boundaries. However, a significant drawback of these methods is that many quadrature nodes must be placed outside of in its bounding box, unless the shape of the integration domain satisfies some restrictive assumptions (see e.g. the concept of base-line in <cit.>). When the quadrature nodes in Y are fixed a priori (for example, supplied by the user) and the task is to generate the weight vector , the problem setting is related to scattered data fitting, because the integral ∫_Ωf can be approximated by ∫_Ωs if s is an approximation of f, and this leads to a quadrature formula as soon as s is obtained for each fixed Y by a linear scattered data fitting operator _Y:^N→, with s=_Y f|_Y, where f|_Y:=(f(y_i))_i=1^N. The most established tool for scattered data fitting is kernel-based interpolation, in particular with radial basis functions <cit.>. The meshless quadrature formulas obtained this way provide optimal recovery of the integration functional <cit.>, in particular polyharmonic and thin plate spline RBF give rise to optimal quadrature formulas for multivariate Sobolev spaces <cit.>. However, numerical implementation of these methods for general domains is problematic because the corresponding linear system (<ref>) is not sparse, and the moments of the radial basis functions have to be provided. The first problem may be tackled by either partitioning Ω or using spaces with locally supported bases such as B-splines, but the computation of the moments remains a major obstacle unless Ω is a polygonal domain <cit.>. A dimension reduction approach for the moments over curved triangles has been suggested in <cit.>. When using polynomial spaces on scattered nodes, a feasible approach is to choose the polynomial degree such that the dimension M of is significantly smaller than the number of nodes N, which typically makes the system (<ref>) underdetermined and solvable. Then a weight vector satisfying (<ref>) may be found as a minimum norm solution <cit.> or by subset selection <cit.>. These methods require the moments of the polynomials, in particular <cit.> suggests a sophisticated algorithm for computing them on bivariate domains bounded by B-spline curves. The method that we suggest does not require any meshing, and bypasses the moment computation problem. It relies on the discretization of two linear differential operators and satisfying an “integration by parts” formula of the form ∫_Ω u=∫_∂Ω u . Many identities of this type may be obtained from the generalized Stokes theorem. For example, a natural choice is =Δ, the Laplace operator (or Laplace-Beltrami operator when Ω is a Riemannian manifold), and =∂_ν, the outward normal derivative at the boundary. This pair is suitable for smooth domains, but unfortunately not for piecewise smooth ones, as will be demonstrated in the paper. 
Instead, we recommend =, the divergence operator, and =γ_ν, the normal trace operator, as a general purpose choice, which works well in all of our numerical tests on Lipschitz domains in 2D and 3D with either smooth or piecewise smooth boundary. Given two functions f:Ω→ and g:∂Ω→, we look for a combined quadrature to approximate the difference of two integrals ∫_Ωf(x)(x)-∫_∂Ωg(x)(x) ≈∑_i=1^N_Y_if(y_i)- ∑_i=1^N_Z_ig(z_i), with knots Y={y_1,…,y_N_Y}⊂Ω and Z={z_1,…,z_N_Z}⊂∂Ω and weight vectors =(_i)_i=1^N_Y∈^N_Y and =(_i)_i=1^N_Z∈^N_Z. Clearly, (<ref>) provides in particular a quadrature formula with knots in Y and weight vector for ∫_Ωf if we set g≡ 0, and a quadrature formula with knots in Z and weight vector for ∫_∂Ωg if we set f≡ 0. Assuming that the nodes in Y and Z are provided by the user, or placed by a node generation algorithm <cit.>, we propose a method for the computation of the weight vectors and . As in the classical approach based on the reproduction of moments (<ref>), we may employ a finite dimensional space of functions ={ s_1,…, s_M}. However, we rely on the simultaneous approximation of f by s and g by s, rather than on the direct approximation of f by s∈. Therefore, we require the exactness of the combined formula (<ref>) for all pairs f,g obtained as f= s and g= s from the same s∈. According to (<ref>), the left hand side of (<ref>) is zero in this case, so that the exactness condition is equivalent to ∑_i=1^N_Y_i∑_j=1^Mc_j s_j(y_i)- ∑_i=1^N_Z_i∑_j=1^Mc_j s_j(z_i)=0 for all c_j∈. In addition, we require the exactness of (<ref>) for a single pair of functions f̂,ĝ with known integrals ∫_Ωf̂ and ∫_∂Ωĝ, such that ∫_Ωf̂(x)(x)-∫_∂Ωĝ(x)(x)0. In particular, we may use f̂≡ 1, ĝ≡ 0 if we know the measure of Ω, or f̂≡ 0, ĝ≡ 1 if we know the boundary measure of ∂Ω. If neither is known, then we may employ f̂≡ 0, ĝ=∂_νΦ with Φ being a fundamental solution of the Laplace(-Beltrami) equation centered in Ω, see Section <ref>. In any case, we get the following linear system for the weight vectors and , ∑_i=1^N_Yℓ_ij_i-∑_i=1^N_Zb_ij_i = 0, j=1,…,M, ∑_i=1^N_Y_if̂(y_i)- ∑_i=1^N_Z_iĝ(z_i) = ∫_Ωf̂-∫_∂Ωĝ, where ℓ_ij and b_ij are the entries of the collocation matrices L=( s_j(y_i))_i=1,j=1^N_Y,M, B=( s_j(y_i))_i=1,j=1^N_Z,M for the operators , w.r.t. the sets Y, Z and the space . If Ω is a closed manifold, then ∂Ω=∅ and (<ref>) simplifies in that everything related to the integral over ∂Ω disappears. Moreover, we may move away from using any function space and replace L and B by numerical differentiation matrices on irregular nodes, as those used in the meshless finite difference method, see for example <cit.>, which leads to a purely meshless approach with full flexibility of arbitrarily distributed degrees of freedom associated with an additional finite set X={x_1,…,x_N_X} discretizing either Ω or some larger set D⊃Ω. In this case we again find and by solving the linear system (<ref>), with the coefficients ℓ_ij,b_ij of matrices L,B obtained as the weights of suitable numerical differentiation formulas u(y_i)≈∑_k=1^N_Xλ_ik^Tu(x_k), i=1,…, N_Y, u(z_i)≈∑_k=1^N_Xβ_ik^Tu(x_k), i=1,…,N_Z, where L=(ℓ_ij)_i=1,j=1^N_Y,M =(λ_ik^T)_i=1,k=1^N_Y,N_X, B=(b_ij)_i=1,j=1^N_Z,M =(β_ik^T)_i=1,k=1^N_Z,N_X. Note that λ_ij are scalars if u is scalar-valued, but in general they are vectors, in particular d-vectors when is the divergence operator on a d-dimensional manifold and thus u is a vector field. 
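To fix ideas, here is a minimal sketch (in Python/SciPy, rather than the MATLAB code used in our experiments) of how the constraint matrix and right-hand side of the above linear system could be assembled once the sparse matrices L and B and the samples of f̂ on Y and ĝ on Z are available. The weight vectors are called w and q here only for the purposes of the example, and the scipy-based interface is an assumption, not the paper's implementation.

```python
import numpy as np
import scipy.sparse as sp

def assemble_constraints(L, B, fhat_Y, ghat_Z, rhs_value):
    """Constraint system for the combined weight vector (w, q): the rows
    L^T w - B^T q = 0, plus the single non-homogeneous row built from the
    samples of f_hat on Y and g_hat on Z, whose right-hand side is the
    known value of the integral of f_hat over the domain minus the
    integral of g_hat over the boundary (rhs_value).
    L and B are the sparse differentiation/collocation matrices."""
    top = sp.hstack([L.T, -B.T], format="csr")
    bottom = sp.csr_matrix(np.concatenate([fhat_Y, -ghat_Z])[None, :])
    A = sp.vstack([top, bottom], format="csr")
    b = np.zeros(A.shape[0])
    b[-1] = rhs_value
    return A, b
```

Since each row of L and B has only a few nonzeros, the assembled constraint matrix is sparse, which is what makes the minimum-norm solve described next feasible for large node sets.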
We choose the functional space or the set of nodes X such that the system (<ref>) is underdetermined, and compute the combined quadrature weight vector (,) as its solution with the smallest 2-norm, _2^2+_2^2 →min subject to [ L^T -B^T; f̂|_Y^T -ĝ|_Z^T ][ ; ] = [ 0; ∫_Ωf̂ -∫_∂Ωĝ ], where f̂|_Y=(f̂(y_i))_i=1^N_Y, ĝ|_Z=(ĝ(z_i))_i=1^N_Z, which is a linearly constrained least squares problem. Efficient algorithms for this problem are available and can handle very big node sets Y,Z if the constraints are given by a sparse matrix <cit.>. In particular, we use a direct method based on the sparse QR decomposition provided by the open source library SuiteSparseQR <cit.>. The sparsity of the matrices L and B is achieved by using a locally supported basis of , such as tensor-product B-splines, or by choosing small sets of influence in the numerical differentiation formulas (<ref>), with λ_ik 0 and β_ik 0 only for small distances between y_i,z_i and x_k. This leads to a scheme that uniquely combines the following features. * High order quadrature weights are provided for user-supplied nodes on Lipschitz domains, with the only additional input of outward-pointing normals at the boundary nodes. Therefore the method is not tied to any specific representation of the integration domain, e.g. parametric or implicit. * No meshing is needed, which makes this method particularly well-suited for the meshless workflow in numerical computations that attracts growing attention as an alternative to mesh-based methods, because mesh generation remains challenging in more complicated applications <cit.>. * Moment computation, the typical bottleneck of meshless quadrature methods, is avoided. * Simple and easy to implement algorithms rely on a discretization of a boundary value problem for the divergence operator and a minimum-norm solution to an underdetermined sparse linear system, for which efficient linear algebra routines are available. We develop the details of both the meshless finite difference approach based on the numerical differentiation matrices (<ref>), and the functional approach based on the collocation matrices (<ref>), and formulate two practical algorithms for domains in ^d that use polyharmonic kernel numerical differentiation (Algorithm <ref>) and collocation with tensor-product B-splines (Algorithm <ref>), respectively. Extensive numerical experiments demonstrate the effectiveness of both algorithms, with the largest tests involving more than 100k quadrature nodes, and convergence orders up to h^7 for both interior and boundary quadrature on several 2D and 3D domains, including some with reentrant corners and edges. In addition to SuiteSparseQR, our MATLAB code relies on the numerical differentiation formulas provided by mFDlab <cit.>, and on several MATLAB toolboxes (optimization, curve fitting, statistics). The paper is organized as follows. Section <ref> presents the algorithms for the computation of the quadrature weights and their theoretical background, while Section <ref> is devoted to numerical experiments. 
We start Section <ref> by an abstract error analysis that motivated our approach (Section <ref>), and then describe in Section <ref> a general framework for the development of quadrature formulas of the new type, followed by the details of the options available for each of its aspect, namely the choice of the functions and satisfying (<ref>) in Section <ref>, differential operators and in Section <ref>, methods for numerical differentiation by either collocation or meshless finite differences in Section <ref>, and the methods for solving the underdetermined linear system (<ref>) in Section <ref>. Two practical, ready to implement algorithms are presented in Section <ref>. In the numerical testing part of the paper, Section <ref> is devoted to the validation of the settings suggested in Algorithms <ref> and <ref>, and Section <ref> to convergence order tests. ./figures/ § QUADRATURE FORMULAS Let be a C^0 manifold of dimension d, and Ω⊂ a domain (that is, a connected and open subset) with compact closure , and let μ,σ be finite, strictly positive Borel measures defined on and ∂Ω, respectively. We suppose that μ(∂Ω) = 0, so that the space L^1(Ω) of absolutely integrable functions can be identified with L^1(). For any given set Y of N_Y scattered nodes in , and, in the case of nonempty boundary ∂Ω∅, any given set Z of N_Z scattered nodes in ∂Ω, we seek quadrature weights ∈^N_Y and ∈^N_Z such that ∫_Ω f(x) ≈∑_i=1^N_Y_i f(y_i) and ∫_∂Ω g(x) ≈∑_i=1^N_Z_i g(z_i) for all sufficiently regular integrands f → and g ∂Ω→. To make the pointwise evaluation f(y_i) and g(z_i) a meaningful operation we assume that f∈ L^1_Y(Ω) and g∈ L^1_Z(∂Ω), where L^1_Y(Ω) and L^1_Z(∂Ω) are the subsets of L^1(Ω) and L^1(∂Ω) whose elements (equivalence classes of functions) have at least one representative that is continuous at each point of Y and Z, respectively. We will denote the two quadrature formulas (Y, ) and (Z, ), respectively. We will compute the quadrature formulas (Y, ) and (Z, ) simultaneously. Moreover, they arise as a combined quadrature Q(f,g) ∑_i=1^N_Y_i f(y_i)-∑_i=1^N_Z_i g(z_i) for the difference[Note that we could use the sum of the two integrals instead of their difference, and respectively the sum of the two quadrature formulas in the definition of the combined quadrature, which would look more natural in a sense. However we preferred the difference in order to match the usual form of the compatibility condition (<ref>) and to emphasize the "cancellation" nature of the equation I(f,g)=0 that plays a crucial role in the method.] of the two integrals I(f,g) ∫_Ω f(x) -∫_∂Ω g(x). Let → L^1(Ω) and → L^1(∂Ω) be operators such that ∫_Ω u = ∫_∂Ω u for all u ∈, with being a common domain of definition for the two. In particular, we have in mind the situation where is a linear space of functions on , both and are linear differential operators, measures μ,σ are induced by a Riemannian metric in , and identity (<ref>) holds by the divergence theorem or another corollary of the generalized Stokes theorem. When Ω= is a closed manifold with ∂Ω=∅, the measure σ and the operator are clearly irrelevant, and the right-hand side in (<ref>) is understood to be identically zero. The underlying idea of the method may be roughly explained as follows. The identity (<ref>) implies that I(f,g)=I( u, u)=0 as soon as f= u and g= u for the same function u∈. 
If u is discretized by a vector c∈^M, and matrices L ∈^N_Y × M and B ∈^N_Z × M are such that Lc≈ u Y=fY, Bc≈ u Z=gZ, then Q(f,g) ≈^T L c - ^T B c = (^T L - ^T B) c, and hence requiring that the weight vectors , satisfy ^TL-^TB=0 implies Q(f,g)≈ 0=I(f,g) for all pairs (f,g) such that f= u and g= u for some u ∈. In addition, we require from , that Q(,)=I(,) for a single pair (,) with I(,)0. As we will demonstrate, this combination of conditions produces accurate quadrature formulas (Y, ) and (Z, ) for appropriate choices of the operators , and numerical differentiation matrices L,B. We first make these considerations more precise in Section <ref>, then formulate in <ref> a general framework for obtaining quadrature formulas of this type, and explore in Sections <ref>–<ref> the options available for each of its steps. The final subsection <ref> presents two specific algorithms that we recommend for practical use on piecewise smooth Lipschitz domains in ^d, and which are extensively tested in Section <ref>. §.§ Error analysis in an abstract setting For any given operators and and finite sets Y⊂ and Z⊂∂Ω, we say that a triple (Ψ,L,B), where Ψ→^M, L ∈^N_Y × M and B ∈^N_Z × M, is a numerical differentiation scheme with discretization operator Ψ and differentiation matrices L,B. We denote by _L(u) and _B(u) the recovery error with which the matrices L and B approximate the pointwise values of u and u using the information about u∈ contained in Ψ(u): ε_L(u) := u Y - LΨ(u)_∞= max_i=1,…,N_Y u (y_i) - (LΨ(u))_i, ε_B(u) := u Z - BΨ(u)_∞= max_i=1,…,N_Z u (z_i) - ( BΨ(u))_i. The vector Ψ(u) ∈^M may correspond to pointwise values of u over a scattered set of nodes, coefficients of splines or radial basis functions that approximate u, Fourier coefficients, or other kinds of discrete information about u. Tools of approximation theory may then be used to define matrices L and B that accurately reconstruct the pointwise values of u and u from a given vector Ψ(u). Our approach is motivated by the following proposition that estimates the combined quadrature error δ(f,g) I(f,g)-Q(f,g). Let (,) ∈ L^1_Y(Ω) × L^1_Z(∂Ω) be a fixed pair of functions such that I(,) ≠ 0. Given any pair (f,g) ∈ L^1_Y(Ω) × L^1_Z(∂Ω), let _f,g⊂ be the set of solutions u to the boundary value problem u = f - α in Ω u = g - α on ∂Ω, where the coefficient α∈ is chosen so that (<ref>) satisfies the compatibility condition (<ref>): α=I(f,g)/I(,). Then for any pair of quadrature formulas (<ref>), the following estimate of the combined error holds: δ(f,g)≤αε̂ + inf_u ∈_f,g{_1 ε_L(u) + _1 ε_B(u) + Ψ(u)_∞L^T -B^T_1 }, where :=|δ(,)|. For all u ∈_f,g, by the linearity of δ, δ(f,g) = δ(f - α, g - α) + δ(α,α) = δ( u, u) + αδ(,)≤δ( u, u) + αε̂. Moreover, by the compatibility condition (<ref>) and by the definitions of ε_L(u) and ε_B(u), δ( u, u) = ∑_i=1^N_Y_i ( u)(y_i) - ∑_i=1^N_Z_i ( u)(z_i) = ∑_i=1^N_Y_i ( u)(y_i) ±∑_i=1^N_Y_i ( LΨ(u) )_i ±∑_i=1^N_Z_i ( BΨ(u) )_i - ∑_i=1^N_Z_i ( u)(z_i) ≤_1 ε_L(u) + ^T L Ψ(u) - ^T B Ψ(u) + q_1 ε_B(u) ≤_1 ε_L(u) + _1 ε_B(u) + Ψ(u)_∞L^T -B^T_1. Taking the infimum over u ∈_f,g completes the proof. By choosing either f≡0 or g≡0, we obtain separate error bounds for both quadrature formulas (<ref>). 
By the previous proposition, the special cases δ(f,0) = ∫_Ω f - ∑_i=1^N_Y_i f(y_i), δ(0,g) = -∫_∂Ω g + ∑_i=1^N_Z_i g(z_i), immediately imply the following error bounds for quadrature formulas (Y, ) and (Z, ), ∫_Ω f - ∑_i=1^N_Y_i f(y_i) ≤αε̂ + inf_u ∈_f{_1 ε_L(u) + _1 ε_B(u) + Ψ(u)_∞L^T -B^T_1 }, ∫_∂Ω g - ∑_i=1^N_Z_i g(z_i) ≤βε̂ + inf_u ∈_g{_1 ε_L(u) + _1 ε_B(u) + Ψ(u)_∞L^T -B^T_1 }, with _f and _g being the sets of solutions to the boundary value problems u = f - α in Ω u = - α on ∂Ω, and u = - β in Ω u = g - β on ∂Ω, respectively, where α=I(f,0)/I(,) and β=I(0,g)/I(,) are again determined by the compatibility condition (<ref>). The case of a closed manifold simplifies to just one quadrature formula. If ∂Ω = ∅, then there is only one quadrature formula (Y, ), and the argumentation of Proposition <ref> leads to the error estimate ∫_Ω f - ∑_i=1^N_Y_i f(y_i)≤αε̂ + inf_u ∈_f{_1 ε_L(u) + Ψ(u)_∞L^T_1 }, where _f is the set of solutions to the equation u = f - α in Ω, and α=∫_Ω f /∫_Ω, with ∈ L_Y^1(Ω) such that ∫_Ω0. §.§ A general framework Motivated by Proposition <ref> we intend to generate quadrature weights , by trying to minimize the error |δ(f,g)| of the combined quadrature. It is clear from (<ref>) that |δ(f,g)| is small if α, _L(u), _B(u) and L^T -B^T_1 are small and _1, _1 and Ψ(u)_∞ are bounded. Moreover, if we choose the weight vectors and such that L^T -B^T =0, then the size of Ψ(u)_∞ is irrelevant. We may also require that =|δ(,)|=0, which is just another linear equation for and to satisfy. Assuming that _L(u) and _B(u) are small thanks to (a) the smoothness of u enforced by the appropriate choice of the operators ,, and (b) the choice of the numerical differentiation scheme, we may use any remaining degrees of freedom in and in order to minimize the stability constants _1 and _1. We may minimize the joint 1-norm _1 + _1, or some other norm ( , )_♯ that may be preferable for computational or other reasons. All these ideas lead us to the following algorithm. Given Ω, , , Y, Z as in Section <ref>, compute quadrature weights ^* ∈^N_Y and ^* ∈^N_Z as follows: * Choose auxiliary functions ∈ L^1_Y(Ω) and ∈ L^1_Z(∂Ω) such that ∫_Ω - ∫_∂Ω≠ 0. * Choose operators and such that ∫_Ω u = ∫_∂Ω u for all u ∈, where the set contains at least one solution of the boundary value problem (<ref>) for each pair of functions f ∈ L^1_Y(Ω) and g ∈ L^1_Z(∂Ω) whose integrals we wish to approximate using formulas (Y, ^*) and (Z, ^*). * Choose a numerical differentiation scheme (Ψ,L,B) for the linear reconstruction of the pointwise values u Y and u Z from the vector Ψ(u) in the form of matrix-vector products LΨ(u) and BΨ(u). * Assemble the non-homogeneous linear system with unknowns ( , ) L^T - B^T = 0, ∑_i=1^N_Y_i (y_i) - ∑_i=1^N_Z_i (z_i) = ∫_Ω - ∫_∂Ω, which can be written more compactly as A x = b, with A = [ L^T -B^T; |_Y^T -|_Z^T ], x = [ ; ], b = [ 0; ∫_Ω - ∫_∂Ω ]. * Compute a solution ( ^*, ^*) of (<ref>) that minimizes some norm ( , )_♯^N_Y×^N_Z→^+ defined on the space of all possible weights ( , ). The algorithm terminates successfully as long as the linear system (<ref>) is consistent, and Corollary <ref> implies that the quadrature formulas (Y, ^*) and (Z, ^*) satisfy the error bounds ∫_Ω f - ∑_i=1^N_Y_i^* f(y_i) ≤inf_u ∈_f{ ^*_1 ε_L(u) + ^*_1 ε_B(u) }, ∫_∂Ω g - ∑_i=1^N_Z_i^* g(z_i) ≤inf_u ∈_g{ ^*_1 ε_L(u) + ^*_1 ε_B(u) }. The consistency of (<ref>) may be characterized in terms of a certain discrete incompatibility condition as follows. 
Recall that and satisfy I(,) ≠ 0 and hence in view of (<ref>) are incompatible as the right hand sides of the boundary value problem for the operators and , that is, the problem u = λ f̂, u = λ ĝ, is not solvable for u∈ whenever λ∈∖{0}. The linear system Ax=b given by (<ref>) is consistent if and only if the following discrete incompatibility condition holds: every solution (c,λ) ∈^M × to the linear system L c = λ f̂|_Y, B c = λ ĝ|_Z, satisfies λ = 0. The characterization essentially follows from the identity (A)^⊥ = (A^T) and the usual assumptions on (,). We can reason as follows: b ∈(A) ⇔ [ 0; ∫_Ωf̂ - ∫_∂Ωĝ ]∈(A^T)^⊥ ⇔ [ 0; ∫_Ωf̂ - ∫_∂Ωĝ ]·[ c; λ ] = 0 for all (c,λ) ∈(A^T) ⇔ λ = 0 for all (c,λ) ∈(A^T) ⇔ λ = 0 for all (c,λ) that satisfy (<ref>). Since (<ref>) is a discretization of (<ref>), we expect that the discrete incompatibility condition holds whenever the discretization obtained via the choice of Y,Z and the numerical differentiation scheme is sufficiently accurate for the linear system to inherit this important feature of the continuous problem. Even though we cannot guarantee the consistency of (<ref>) in general, in particular for arbitrary user-supplied quadrature nodes Y,Z, a simple strategy of choosing the size M of the discretization vector Ψ(u) such that the linear system (<ref>) is underdetermined with significantly fewer rows than columns makes the system solvable in all of our numerical experiments, leaving enough room for enforcing reasonable size of the stability constants _1 and _1 through minimization, see Section <ref>. In the following subsections we illustrate and compare several options for the choices to be made at each step of Algorithm <ref>. §.§ Auxiliary functions f̂ and ĝ The non-homogeneous constraint ∑_i=1^N_Y_i (y_i) - ∑_i=1^N_Z_i (z_i) = ∫_Ω - ∫_∂Ω≠ 0 plays a fundamental role in the definition of linear system (<ref>), because it rules out the trivial solution ( ^*, ^*) = (0,0). In practice, choices for and need not be complicated: (,) ≡ (1,0) or (,) ≡ (0,1) are usually effective, as long as at least one of the moments Ω∫_Ω 1 , ∂Ω∫_∂Ω 1 is either known in advance, or can be numerically approximated in a straightforward way. In particular, this can be easily done using standard quadrature formulas when the boundary ∂Ω is defined by explicit parametric patches over simple regions in the parameter domain. As an alternative, we suggest the following approach that only requires knowing sufficiently many points on the boundary ∂Ω to generate the set Z of quadrature nodes, as well as the outward-pointing normals on ∂Ω at Z. This applies for instance to implicit surfaces as used in the level-set method, and to trimmed multipatch surfaces as used in Computer Aided Design. Assume that Ω is a domain in ^d and μ and σ are the standard Lebesgue and hypersurface measures. Let Φ(x,x_0) be the fundamental solution of the Laplace equation centered at a point x_0 ∈Ω: Φ(x,x_0) = 1/2πlog(x-x_0) if d = 2, Φ(x,x_0) = -x-x_0^2-d/d(d-2)ω_d if d = 1 or d > 2, with ω_d being the measure of the d-dimensional unit ball. Let ν be the outward-pointing unit normal field along ∂Ω, and let ∂_ν be the normal derivative operator on ∂Ω. By Green's third identity, ∫_∂Ω∂_νΦ(x,x_0) = 1, and so the choice (x)=0 and (x)= ∂_νΦ(x,x_0)=ν(x)^T(x-x_0)/dω_dx-x_0^d satisfies ∫_Ω - ∫_∂Ω = -1 independently of the shape of the domain and the position of x_0 ∈Ω. Observe that the singularity of Φ(x,x_0) in Ω plays no role in the solution of boundary value problems (<ref>), because ≡ 0. 
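A minimal Python sketch of this choice is given below: it evaluates ĝ at the boundary nodes from the node coordinates and the outward unit normals (the only boundary information the method requires), with ω_d the volume of the d-dimensional unit ball as in the formula above. The array-based interface and the final check, which uses a simple equispaced trapezoidal rule on the unit circle rather than the quadrature constructed in this paper, are assumptions of the example.

```python
import numpy as np
from math import gamma, pi

def g_hat(Z, normals, x0):
    """Evaluate g_hat(z) = nu(z)^T (z - x0) / (d * omega_d * |z - x0|^d)
    at boundary nodes Z (N_Z x d) with outward unit normals (N_Z x d),
    for the fundamental solution of the Laplace equation centred at x0."""
    Z, normals, x0 = map(np.asarray, (Z, normals, x0))
    d = Z.shape[1]
    omega_d = pi ** (d / 2) / gamma(d / 2 + 1)   # volume of the unit d-ball
    diff = Z - x0
    dist = np.linalg.norm(diff, axis=1)
    return np.einsum('ij,ij->i', normals, diff) / (d * omega_d * dist ** d)

# Quick check on the unit circle: weighting g_hat by the arc length 2*pi/N
# per node should give a sum close to 1, in agreement with Green's identity.
N = 400
theta = 2 * pi * np.arange(N) / N
Z = np.column_stack((np.cos(theta), np.sin(theta)))
normals = Z.copy()                      # outward normals of the unit circle
vals = g_hat(Z, normals, x0=np.array([0.2, -0.1]))
print(vals @ np.full(N, 2 * pi / N))    # approximately 1.0
```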
In practice, care must be taken to avoid points x_0 too close to the boundary, for otherwise the smoothness of the solution u of (<ref>) may be affected by the near-singularity of , and as a consequence the quantities ε_L(u), ε_B(u) in the quadrature errors estimates may blow up. §.§ Operators and The error bounds in Section <ref> depend on the accurate reconstruction of pointwise values of u and u from a given vector Ψ(u), with u being a solution to a boundary value problem of the form u = f in Ω, u = g on ∂Ω, with ∫_Ω f = ∫_∂Ω g . No matter which numerical differentiation method is used to define matrices L and B, the accuracy of the reconstruction, and thus the quantities _L(u) and _B(u) in (<ref>), depend crucially on the regularity of u as measured by e.g. Sobolev norms. On the contrary, the accuracy of the quadrature formulas should rather depend on the regularity of the integrands f and g, as is natural in numerical integration. Therefore, we are led to consider operators and for which the boundary value problem (<ref>) admits solutions of sufficiently high regularity, depending on the smoothness of f and g. §.§.§ Neumann problem for Laplace-Beltrami operator The classical results in the theory of elliptic partial differential equations, see e.g. <cit.>, suggest the use of the Laplace-Beltrami operator and the normal derivative trace operator. Let Ω be a domain in a C^∞ Riemannian manifold , with compact closure and (possibly empty) smooth boundary ∂Ω, and let μ and σ be the measures on Ω and ∂Ω induced by the Riemannian metric of . Denote by H^s(Ω) and H^s(∂Ω) the L^2-norm Sobolev spaces on Ω and ∂Ω of (possibly fractional) order s ≥ 0, and by ν be the outward-pointing unit normal field along ∂Ω. Then it follows from the divergence theorem that ∫_ΩΔ u = ∫_∂Ω∂_ν u for all u ∈ H^2(Ω), where Δ is the Laplace-Beltrami operator on the manifold, and ∂_ν is the normal derivative trace operator. In other words, identity (<ref>) is satisfied for =Δ, = ∂_ν, with = H^2(Ω). Under the above assumptions, for all f ∈ H^s(Ω) and g ∈ H^s+1/2(∂Ω), the Neumann boundary value problem Δ u = f in Ω ∂_ν u = g on ∂Ω with constraint ∫_Ω u = 0 has a unique solution u ∈ H^s+2(Ω) that depends continuously on f and g, if and only if ∫_Ω f = ∫_∂Ω g , a constraint known as compatibility condition for the Neumann boundary value problem. As a direct consequence of this result, the choice (<ref>) is fully satisfactory for a bounded domain Ω in a smooth Riemannian manifold with (possibly empty) smooth boundary ∂Ω, because solutions to the boundary value problem (<ref>) are guaranteed to exist, and to be more regular than the corresponding data in the right-hand side. However, if ∂Ω is not smooth, as is often the case in practical applications, elliptic regularity results as in Proposition <ref> do not apply anymore, and so the solution u to the Neumann boundary value problem (<ref>) may not be regular enough for the numerical differentiation scheme Δ u Y≈ LΨ(u) , ∂_ν u Z≈ BΨ(u). to be sufficiently accurate even for highly smooth f and g, which indicates that the choice (<ref>) may be problematic in this case because the numerical differentiation errors ε_L(u) and ε_B(u) featuring in the estimate (<ref>) are presumably affected by the loss of smoothness of u. Indeed, let Ω be a bounded Lipschitz domain in ^d, and μ,σ the standard Lebesgue and hypersurface measures. Then Δ is the usual Laplace operator. 
The Lipschitz regularity of Ω is sufficient to define an outward-pointing unit normal field ν almost everywhere on ∂Ω, and (<ref>) still holds. Moreover, for all f ∈ L^2(Ω) and all g ∈ L^2(∂Ω), the Neumann boundary problem (<ref>) has a unique solution u ∈ H^3/2(Ω) if and only if the compatibility condition (<ref>) is satisfied, as implied by the results in <cit.>. However, standard examples in <cit.> for the corresponding Dirichlet problem on piecewise smooth domains Ω⊂^2 with reentrant corners can be modified to apply to the Neumann problem, and generate for any s>3/2 a solution u ∉ H^s(Ω), even when f is infinitely differentiable and g=0. This means that the regularity of u is not only limited by the regularity of f and g, but also by the regularity of ∂Ω. The following example, adapted from <cit.>, shows that this may happen even on elementary geometries, such as a square. Let Ω = (0,1)^2, and let Γ_1,…,Γ_4 be the left, bottom, right, and top sides of ∂Ω, respectively. For all i = 1,…,4 and s ≥ 0, let γ_i be the trace operator from H^s+1/2(Ω) to H^s(Γ_i). Functions in H^s(Γ_i) are parametrized by the restrictions of Cartesian coordinates (x,y) to ∂Ω, that is, x ∈ [0,1] for i = 2,4 and y ∈ [0,1] for i = 1,3. The following function, defined using polar coordinates (r,θ) centered at the origin, is harmonic in Ω: v(r,θ) = r^2 ( log(r) cos(2θ) - θsin(2θ) ). Let φ [0,1] → be a smooth function such that φ(r) ≡ 1 for all r ∈ [0,1/3], and φ(r) ≡ 0 for all r ∈ [2/3,1]. Then, the function u(r,θ) φ(r) v(r,θ) is the unique solution (up to a constant) of Δ u = Δ (φ v) in Ω ∂_ν u = 0 on ∂Ω∖Γ_1 ∂_ν u = γ_1(-∂_x u) = π y φ(y) on Γ_1. The right hand side functions f,g of this Neumann problem are smooth in the sense that they are restrictions to Ω and ∂Ω of suitable functions that belong to C^∞(^2). Nevertheless, it is clear that u ∉ H^3(Ω), because u and v coincide for r < 1/3 and ∂^3 v(x,y)/∂ x^3 = 2x/x^2+y^2 = 2cos(θ)/r∉ L^2(Ω). In principle, the estimate (<ref>) that relates the error of our quadrature to the numerical differentiation errors ε_L(u) and ε_B(u), is just an upper bound that is not guaranteed to be tight. Moreover, on Lipschitz domains in ^2 with piecewise smooth boundary, it can be proved that u still has regularity H^s+2 away from corners, so one could hope that a few inaccurate approximations of Δ u(y_i) and ∂_ν u(z_i) around corners would not affect the overall accuracy of the quadrature formulas. However, our numerical experiments in Section <ref> demonstrate that a catastrophic loss of accuracy in the quadrature formulas may occur on a Lipschitz domain with piecewise smooth boundary, whereas we do not register this behavior for the alternative divergence approach suggested below. §.§.§ The divergence approach The most basic instance of the identity (<ref>) is clearly the divergence theorem, which suggests the choice = , = γ_ν, where is the divergence operator on the manifold and γ_ν is the normal trace operator. This leads us to the boundary value problem F = f in Ω, γ_ν(F) = g on ∂Ω with ∫_Ω f = ∫_∂Ω g , where the measures μ and σ are determined by the Riemannian metric, the compatibility condition on the right must be satisfied due to the divergence theorem, and the solution F is sought in an appropriate space of vector fields on Ω. In the case of the manifold =^d, vector fields may be identified with vector valued functions F:Ω→^d, and the normal trace is just the projection γ_ν(F)=ν^TF of F to the outward-pointing unit normal ν on the boundary ∂Ω for sufficiently smooth F. 
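To make the identity underlying this choice concrete, the following self-contained Python check verifies that the integral of the divergence of F over the unit disc equals the integral of its normal trace over the boundary, for one smooth vector field. The polar Gauss–Legendre and trapezoidal rules used here exploit the special geometry of the disc and have nothing to do with the meshless quadrature constructed in this paper; they merely serve to confirm the identity numerically.

```python
import numpy as np

# Divergence-theorem check on the unit disc for the smooth vector field
# F(x, y) = (x*y**2, sin(x) + y**3), whose divergence is 4*y**2.
F    = lambda x, y: np.stack((x * y**2, np.sin(x) + y**3))
divF = lambda x, y: 4 * y**2

# Interior integral in polar coordinates: Gauss-Legendre in r, uniform in theta.
nr, nt = 40, 80
xg, wg = np.polynomial.legendre.leggauss(nr)
r, wr = 0.5 * (xg + 1.0), 0.5 * wg                   # map nodes/weights to [0, 1]
theta = 2.0 * np.pi * (np.arange(nt) + 0.5) / nt
R, T = np.meshgrid(r, theta, indexing="ij")
W = np.outer(wr * r, np.full(nt, 2.0 * np.pi / nt))  # weights include Jacobian r
interior = np.sum(W * divF(R * np.cos(T), R * np.sin(T)))

# Boundary integral with the (spectrally accurate) trapezoidal rule.
tb = 2.0 * np.pi * np.arange(4 * nt) / (4 * nt)
nu = np.stack((np.cos(tb), np.sin(tb)))              # outward normals = positions
boundary = (2.0 * np.pi / (4 * nt)) * np.sum(np.sum(nu * F(*nu), axis=0))

print(interior, boundary)    # both values are close to pi for this field
```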
Any solution u of the Neumann problem (<ref>) gives rise to a solution F=∇ u of (<ref>). Hence, no smooth solutions are ever lost by using operators and γ_ν instead of Δ and ∂_ν. However, unlike (<ref>), the boundary value problem (<ref>) is underdetermined, with highly non-unique solutions F. Recall that the error bound (<ref>) involves the infimum over all solutions of (<ref>), so the non-uniqueness is an advantage as (<ref>) may rely on the solution F with the smallest numerical differentiation errors _L(F) and _B(F). In particular, one may hope that on Lipschitz domains with piecewise smooth boundary there still exists a solution F of (<ref>) that inherits a high order of smoothness from smooth functions f,g in the right hand side, even when the same is not true about (<ref>). Although we have not found results of this type in the literature, considerable attention has been paid to the corresponding Dirichlet problem, where the full trace of F on the boundary is prescribed instead of the normal trace ν^TF in (<ref>). The Dirichlet problem, however, requires additional compatibility conditions between f and the boundary data when Ω is not sufficiently smooth. In particular, if F vanishes on the boundary of a polygon Ω⊂^2, then its divergence f must vanish at the vertices of Ω, which is a quite unnatural condition to impose on integrands f. Nevertheless, several results on the Dirichlet problem for domains in =^d imply that a solution F of (<ref>) belongs to H^s+1(Ω)^d when f∈ H^s(Ω) and g satisfies certain smoothness and compatibility assumptions. In particular, this follows from <cit.> for Lipschitz domains in ^d, with s∈, when g=0 and f vanishes on the boundary ∂Ω to order s-1. Furthermore, <cit.> gives the same regularity result for polygons in ^2 when g=0 and f vanishes at all vertices of the polygon, and also for non-homogeneous g, with smoothness and compatibility conditions that are difficult to recast to our setting, in which g is the normal projection of the Dirichlet boundary data studied in <cit.>. These results are generalized in <cit.> to arbitrary bounded Lipschitz domains in smooth manifolds, see in particular <cit.> for the case =^d. Our numerical results in Section <ref> for the divergence-based quadrature do not indicate any loss of convergence order on non-smooth domains, which suggests that the smoothness of F is not lost. Moreover, the following elementary construction produces a smooth solution F of (<ref>) on Ω=(0,1)^2, whenever f is smooth in Ω and g is smooth on each side of the square, in contrast to Example <ref> for the elliptic Neumann problem. §.§.§ Smooth solution of divergence boundary problem on the square As before, let Γ_1,…,Γ_4 denote the left, bottom, right, and top sides of ∂Ω, and let g_i, i=1,…,4, be the univariate functions representing g|_Γ_i in the natural Cartesian parametrizations. We set g̃_i g_i-α_i, with α_i=∫_0^1g_i(t) dt, and consider the vector fields G_1(x,y)= [ (x-1)g̃_1(y); -∫_0^yg̃_1(t) dt ], G_2(x,y)= [ -∫_0^xg̃_2(t) dt; (y-1)g̃_2(x) ], G_3(x,y)= [ xg̃_3(y); -∫_0^yg̃_3(t) dt ], G_4(x,y)= [ -∫_0^xg̃_4(t) dt; yg̃_4(x) ] that satisfy G_i=0, ν^TG_i|_∂Ω= g|_Γ_i-α_i in Γ_i, 0 otherwise. By construction, it follows that G̃(x,y) G_1(x,y)+G_2(x,y)+G_3(x,y)+G_4(x,y)+ [ α_1(x-1)+α_3x; α_2(y-1)+α_4y ] is a solution of the problem G̃=α_1+α_2+α_3+α_4, ν^TG̃|_∂Ω=g. Moreover, let α=∫_Ω f(x,y) dx dy, h(y)=∫_0^1f(ξ,y) dξ-α and f̃(x,y)=f(x,y)-α-h(y). Then the vector field F̃(x,y)=[ ∫_0^xf̃(ξ,y) dξ; ∫_0^yh(ζ) dζ ] satisfies F̃=f-α, ν^TF̃|_∂Ω=0. 
Since α=α_1+α_2+α_3+α_4 by the compatibility condition, we conclude that F=F̃+G̃ solves (<ref>). It is easy to see that for any r≥0 we have F∈ C^r(Ω)^2 as soon as f∈ C^r(Ω) and g_i∈ C^r[0,1], i=1,…,4, which even allows for g to be discontinuous at the corners of ∂Ω. §.§ Numerical differentiation schemes The choice of the numerical differentiation scheme (Ψ,L,B) is crucial for the performance of the method, because the factors _L(u) and _B(u) in the error bounds (<ref>) and (<ref>), as well as the solvability of the linear system (<ref>) with reasonable stability constants _1 and _1 depend on this choice. In particular, the discrete incompatibility condition (<ref>) must be satisfied. Moreover, for an efficient numerical implementation it is important that the differentiation matrices L = ( ℓ_ij)_i=1,j=1^N_Y,M and B = ( b_ij)_i=1,j=1^N_Z,M are sparse. We describe two approaches that work well in our numerical tests. The first is based on meshless numerical differentiation formulas as used in the generalized finite difference methods, and the second employs tensor-product splines. In both cases we assume that and are linear differential operators of integer orders on a space of functions on a domain D such that Ω⊂ D. The domains Ω and D need not have the same dimension; indeed, when is embedded into an ambient space such as a surface in =^3, one may prefer to choose a domain D in rather than in , leading to an immersed approach. For the sake of notational simplicity, but without loss of generality, we assume in this section that the functions in are scalar-valued. §.§.§ Meshless finite difference formulas Assuming that ⊂ C(D), we choose a finite set X = {x_1,…,x_M}⊂ D, and define the discretization operator Ψ→^M by pointwise evaluation over X, Ψ(u) = ( u(x_1), …, u(x_M) )^T, and approximate u (y_i) and u (z_i) at the quadrature nodes y_i ∈ Y and z_i ∈ Z by appropriate numerical differentiation formulas u (y_i) ≈∑_j=1^M ℓ_i j u(x_j), u (z_i) ≈∑_j=1^M b_i j u(x_j). Since differentiation is a local operation, sufficiently accurate formulas can be found with the sums in (<ref>) restricted to small sets of indices S_L,i and S_B,i corresponding to the nodes in X close to y_i and z_i, known as sets of influence. This in turn implies that ℓ_ij=0 for j∉ S_L,i, b_ij=0 for j∉ S_B,i, and so the matrices L and B are sparse. When X is a gridded set and Y∪ Z⊂ X, formulas of this type can be obtained by classical finite differences as those used in the finite difference method for partial differential equations. For irregular sets X,Y,Z, as typically needed on more complicated domains Ω, numerical differentiation formulas are in the core of the meshless finite difference methods, such as RBF-FD or GFDM, see for example <cit.> and references therein. Therefore, we generally refer to (<ref>) as meshless finite difference formulas. A basic approach to obtaining accurate numerical differentiation weights ℓ_ij, j∈ S_L,i, and b_ij, j∈ S_B,i, is to impose exactness of formulas (<ref>) over a local basis of functions that can provide a good approximation of u∈ in a neighborhood of y_i and z_i, respectively. In the case when Ω is a domain in ^d, a natural local approximation tool is given by the spaces Π^d_q of multivariate polynomials of total degree <q. 
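As a concrete, deliberately simple instance of this approach, the Python sketch below computes weights for the Laplacian at a point from a scattered stencil by imposing exactness on all bivariate polynomials of total degree below q and taking the minimum 2-norm weight vector, one of the options mentioned in the next paragraph. The polyharmonic-kernel formulas actually used in our experiments, and the vector-valued weights needed for the divergence operator, are constructed along the same lines but are not reproduced here; the stencil, degree and test function are arbitrary choices for the example.

```python
import numpy as np

def laplacian_fd_weights(y, X_stencil, q):
    """Weights l_j with sum_j l_j u(x_j) ~ (Delta u)(y), exact for all
    bivariate polynomials of total degree < q; among all exact weight
    vectors the minimum 2-norm one is returned."""
    exps = [(a, b) for a in range(q) for b in range(q - a)]
    dx = X_stencil - y                       # shift to local coordinates
    # One exactness condition per shifted monomial (x1-y1)^a (x2-y2)^b.
    P = np.array([dx[:, 0]**a * dx[:, 1]**b for (a, b) in exps])
    # Laplacian of each shifted monomial evaluated at y (i.e. at dx = 0):
    # nonzero only for (a, b) = (2, 0) or (0, 2), where it equals 2.
    rhs = np.array([2.0 if (a, b) in ((2, 0), (0, 2)) else 0.0
                    for (a, b) in exps])
    w, *_ = np.linalg.lstsq(P, rhs, rcond=None)
    return w

rng = np.random.default_rng(1)
y = np.array([0.3, 0.4])
X_stencil = y + 0.05 * (rng.random((20, 2)) - 0.5)   # 20 nearby nodes
w = laplacian_fd_weights(y, X_stencil, q=4)

u = lambda p: np.sin(p[:, 0]) * np.exp(p[:, 1])      # harmonic: Delta u = 0
print(abs(w @ u(X_stencil) - 0.0))                   # numerical differentiation error
```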
The sets of influence are usually chosen somewhat larger than the minimum needed to admit a numerical differentiation formula exact for all u∈Π^d_q, and the extra degrees of freedom are used either to minimize a (semi-)norm of the weight vectors (ℓ_ij)_j∈ S_L,i and (b_ij)_j∈ S_B,i, or to enhance the local approximation space by adding radial basis functions. We refer to <cit.> for the theory and error bounds for these methods. Polynomial type methods are also available on certain manifolds, such as the d-dimensional sphere, where spherical harmonics may be employed. Otherwise, numerical differentiation formulas on arbitrary manifolds may be generated with the help of positive definite kernels. Suitable local error bounds for the kernel-based numerical differentiation may be found in <cit.> for domains in ^d and in <cit.> for reproducing kernels of Sobolev spaces on manifolds. We discuss in more detail the numerical differentiation formulas employed in our numerical tests. They are generated by the polyharmonic radial basis kernel K(x,y)=x-y^2m-1, x,y∈^d, with a polynomial term in Π^d_q, q≥ m, which is a standard approach in RBF-FD since <cit.>. We use m=q. Thus, the weights ℓ_ij, j∈ S_L,i, are obtained by requiring the exactness of the first formula in (<ref>) for all functions of the form u(x)=∑_j∈ S_L,ic_jx-x_j^2q-1+p̃(x), c_j∈, p̃∈Π^d_q, with ∑_j∈ S_L,ic_jp(x_j)=0 for all p∈Π^d_q, and similarly for the weights b_ij, j∈ S_B,i. In most cases the square linear system that arises from these conditions is regular. However, as shown in <cit.>, numerical differentiation weights are uniquely determined and can be computed by a null space method also for certain `deficient' sets of influence, such as the five point stencil for the Laplacian on gridded nodes. Let Ω be a domain in ^d, and Ω⊂ D⊂^d. Denote by k_L and k_B the orders of the operators and , respectively. Suppose we use polyharmonic numerical differentiation formulas with q=q_L>k_L for the operator , and q=q_B>k_B for . Under appropriate assumptions on the sets of influence X_L,i{x_j:j∈ S_L,i}, and X_B,i{x_j:j∈ S_B,i}, such as quasi-uniformity and boundedness of the polynomial Lebesgue constants, the errors (<ref>) and (<ref>) of the polyharmonic numerical differentiation can be estimated as _L(u)= u Y - LΨ(u)_∞ =max_1≤ i≤ N_Y| u (y_i) - ∑_j∈ S_L,iℓ_i j u(x_j)|≤ C_Lh_L^q_L-k_L|u|_W^q_L_∞(D), _B(u)= u Z - BΨ(u)_∞ =max_1≤ i≤ N_Z| u (z_i) - ∑_j∈ S_B,i b_i j u(x_j)|≤ C_Bh_B^q_B-k_B|u|_W^q_B_∞(D), where h_L is the maximum diameter of the sets {y_i}∪ X_L,i, i=1,…,N_Y, h_B is the maximum diameter of the sets {z_i}∪ X_B,i, i=1,…,N_Z, and C_L,C_B are some positive constants independent of h_L,h_B and u. Note that a simple method for the selection of the sets of influence that ensures in practice that the errors behave in accordance with these estimates is to compose X_L,i of 2Π^d_q_L closest nodes of y_i in X, and X_B,i of 2Π^d_q_B closest nodes of z_i, as recommended in <cit.> for RBF-FD. Further methods that try to optimize the selection can be found in <cit.> and references therein. In any case, the number of nodes in X_L,i and X_B,i remains bounded if q_L and q_B are fixed, which in turn implies that the matrices L and B are sparse and that the diameters h_L and h_B shrink as the density of the nodes increases. Assuming that k_B=k_L-1, as in both the elliptic and the divergence settings of Section <ref>, we get the same approximation order for _L(u) and _B(u) by choosing q_B=q_L-1. 
Hence, by (<ref>) and (<ref>), the quadrature formulas computed by Algorithm <ref> satisfy the estimates ∫_Ω f - ∑_i=1^N_Y_i^* f(y_i) ≤ C ( ^*_1+ ^*_1 )h^q-kinf_u ∈_fmax{|u|_W^q-1_∞(D),|u|_W^q_∞(D)}, ∫_∂Ω g - ∑_i=1^N_Z_i^* g(z_i) ≤ C ( ^*_1+ ^*_1 )h^q-kinf_u ∈_gmax{|u|_W^q-1_∞(D),|u|_W^q_∞(D)}, where C=max{C_L,C_B}, h=max{h_L,h_B}, q=q_L=q_B+1, k=k_L=k_B+1. These estimates show that, as soon as the stability constants ^*_1 and ^*_1 remain bounded and the sets _f and _g contain functions u∈ W^q_∞(D), the error of both quadrature formulas behaves as (h^q-k). In order to relate the convergence order to the smoothness of the integrands f and g, we need regularity results for the boundary value problems (<ref>), such as those discussed in Section <ref>. For example, in the setting of a smooth domain Ω⊂^d and operators =Δ, =∂_ν (with k=2), assuming that and are infinitely differentiable, we may use Proposition <ref> to infer that there exists u_f∈ U_f that belongs to H^s+2(Ω) as soon as f∈ H^s(Ω). By the Sobolev embedding theorem, H^s+2(Ω)⊂ W^q_∞(Ω) if s+2>q+d/2, and by the Stein extension theorem u_f may be extended to a function in W^q_∞(D). Therefore, the estimate ∫_Ω f - ∑_i=1^N_Y_i^* f(y_i) =(h^q-2) holds for all functions f∈ H^s(Ω) with s>q-2+d/2 as long as ^*_1+ ^*_1 is bounded. Similarly, we derive from Proposition <ref> that there exists u_g∈ U_g that belongs to H^s+3/2(Ω) as soon as g∈ H^s(∂Ω). By the same arguments with the Sobolev and Stein theorems, u_g may be extended to a function in W^q_∞(D) if s+3/2>q+d/2, and we obtain the estimate ∫_∂Ω g - ∑_i=1^N_Z_i^* g(z_i) =(h^q-2) for all functions g∈ H^s(∂Ω) with s>q-2+(d+1)/2 as long as ^*_1+ ^*_1 is bounded. §.§.§ Choosing discretization nodes In practical applications, quadrature formulas often need to be defined on fixed, user-supplied nodes, so in this work we never assume to be in control of the placement of quadrature nodes Y and Z. The set X⊂ D of discretization nodes that determines the operator Ψ, however, can be chosen freely, and so it must be computed by Algorithm <ref>. First of all, we have to choose D, which may coincide with , but it may also be a larger domain in or in the ambient space , for example a bounding box around . While exploring these options numerically, we have seen that using D= and irregular X was consistently a better choice than a bounding box D with a Cartesian grid X. Even though a discretization set X fitting may be obtained via mesh generation, for example by taking X as the vertices of a triangulation of Ω, we only need meshless nodes for our purposes. The nodes need not be connected into grids or networks, which simplifies node generation algorithms, especially for complicated 3D domains or surfaces. For a survey on the topic of meshless node generation, see <cit.>. The generation of X should take into account multiple criteria that in part contradict one another. On the one hand, for any fixed quadrature nodes Y and Z, we want the set X to be as large as possible, in order to improve the errors _L(u) and _B(u) of the numerical differentiation thanks to a higher density of X in . On the other hand, a smaller set X produces more stable quadrature formulas. Indeed, let us consider two nested sets of nodes X ⊂X̃, and corresponding matrices L,B,L̃,B̃ so that L,B are submatrices of L̃,B̃. Then, the solution (^*,^*) to (<ref>) with L̃,B̃ that minimizes the 1-norm _1 + _1 satisfies not just L̃^T ^* - B̃^T ^* = 0, but also L^T ^* - B^T ^* = 0, and so ^*_1 + ^*_1 ≤^*_1 + ^*_1. 
Moreover, as soon as X is large enough so that M+1>N_Y + N_Z, the linear system (<ref>) is overdetermined, i.e. it has more equations than unknowns, and so it may not admit solutions anymore. In general, X should not be unnecessarily irregular, although a higher density of the nodes may be advantageous near the boundary of Ω, especially in the vicinity of its corners or fine features. In the numerical tests we only consider quasi-uniform nodes. For more details on the distribution of discretization nodes X and how the size N_X should be chosen, see Section <ref>. §.§.§ Numerical differentiation by tensor-product splines Let be an M-dimensional linear subspace of spanned by a basis {s_1,…,s_M}⊂. Any map Ψ→^M, Ψ(u) = ( c_1(u), …, c_M(u) )^T, can be seen as a discretization operator that gives rise to the numerical differentiation scheme u (y_i) ≈( ∑_j=1^M c_j(u) s_j ) (y_i) =∑_j=1^M c_j(u) s_j(y_i) =(LΨ(u))_i, i=1,…,N_Y u (z_i) ≈( ∑_j=1^M c_j(u) s_j ) (z_i) =∑_j=1^M c_j(u) s_j(z_i) =(BΨ(u))_i, i=1,…,N_Z, given by the matrices L ∈^N_Y × M and B ∈^N_Z × M with entries ℓ_i j = s_j(y_i) and b_ij = s_j(z_i). Since these matrices do not depend on Ψ, the weights ^*, ^* computed by Algorithm <ref> do not depend on it either, and hence the choice of Ψ is irrelevant for the implementation of our quadrature in this setting. However, numerical differentiation errors _L(u)=max_i=1,…,N_Y u (y_i) - (∑_j=1^M c_j(u) s_j )(y_i), _B(u)=max_i=1,…,N_Z u (z_i) - (∑_j=1^M c_j(u) s_j )(z_i) of (<ref>)–(<ref>) do depend on Ψ. Therefore we may take infimum of (<ref>) and (<ref>) over all possible Ψ, which is equivalent to infimum over all s=∑_j=1^M c_j(u) s_j∈, and implies the estimates ∫_Ω f - ∑_i=1^N_Y_i^* f(y_i) ≤inf_u ∈_finf_s∈{ ^*_1 (u-s)|_Y_∞ + ^*_1 (u-s)|_Z_∞}, ∫_∂Ω g - ∑_i=1^N_Z_i^* g(z_i) ≤inf_u ∈_ginf_s∈{ ^*_1 (u-s)|_Y_∞ + ^*_1 (u-s)|_Z_∞}. Note that the weights ^* and ^* do not depend on the choice of the basis {s_1,…,s_M} of a given space . Indeed, if we choose another basis and let V be the change-of-basis matrix, then the new differentiation matrices are given by L̃ = L V and B̃ = B V, hence the condition L̃^T -B̃^T = V^T (L^T-B^T )=0 is equivalent to L^T -B^T =0 because V is invertible. Nevertheless, the matrices L and B still depend on the choice of the basis, and their properties strongly influence the efficiency and stability of the computation of the weights. In particular, in order to obtain sparse matrices L and B, we need locally supported basis functions s_j, such that only a few of them do not vanish in the vicinity of each point y_i or z_i. Although the space can be chosen freely, and in principle it would be interesting to compare different approaches, in this work we only consider the case of the unfitted tensor-product spline approximation. Suppose that the manifold of dimension d is embedded in ^n, with n≥ d, and let H be the n-dimensional bounding box around such that the lengths of its sides are integer multiples of a parameter h_ > 0: ⊂ H = [a_1, b_1] ×…× [a_n, b_n], b_i - a_i = N_i h_ for all i = 1,…,n. For each dimension, let T_i be the uniform knot vector T_i = { a_i, …, a_i, a_i + h_, …, b_i - h_, b_i, …, b_i }, where the first and last knots are repeated q times, and let _i be the univariate spline space of order q (degree q-1) defined by the knot vector T_i. 
We define to be the restriction of the tensor-product spline space _1 ⊗…⊗_n to , and a natural basis for is given by the restriction to of those tensor-product B-splines s_1,…,s_M, whose supports have non-empty intersection with Ω. In the case when d<n this is known as the ambient, or immersed approach <cit.>. Clearly, an upper bound for M = is given by ∏_i=1^n _i = ∏_i=1^n ( N_i + q - 1 ), although M will often only be a fraction of this upper bound, for example when n>d, or when n=d but Ω≪H. We denote by D the interior of the union of the supports of all B-splines in . Then D may also be considered as the domain of definition of the splines in . B-splines s_j have local supports contained in cubes with side length qh_, and so the matrices L and B are sparse as each of their rows has at most q^n nonzeros. Note that the assembly of L and B does not require any expensive tests whether the intersection of s_j with Ω is non-empty, because we can simply discard any basis function such that s_j|_Y∪ Z = 0, see Algorithm <ref>. Assume that n=d, such that Ω⊂ D are domains in ^d. As before, we denote by k_L and k_B the orders of the operators and , respectively, with max{k_L,k_B}< q. Then it follows by <cit.> that for any u∈ W^q_∞(D) there exists s∈ such that simultaneously (u-s)|_Y_∞≤ C_Lh_^q-k_L|u|_W^q_∞(D) and (u-s)|_Z_∞≤ C_Bh_^q-k_B|u|_W^q_∞(D), where C_L,C_B are some positive constants independent of h_L,h_B and u. Therefore, we obtain from (<ref>) and (<ref>) ∫_Ω f - ∑_i=1^N_Y_i^* f(y_i) ≤inf_u ∈_f{ C_L ^*_1 h_^q-k_L + C_B ^*_1h_^q-k_B}|u|_W^q_∞(D), ∫_∂Ω g - ∑_i=1^N_Z_i^* g(z_i) ≤inf_u ∈_g{ C_L ^*_1 h_^q-k_L + C_B ^*_1h_^q-k_B}|u|_W^q_∞(D), as long as _f and _g contain functions extensible to D with a finite seminorm |u|_W^q_∞(D). Similar to Section <ref>, for a smooth domain Ω⊂^d and operators =Δ, =∂_ν with k_L=2, k_B=1, assuming that and are infinitely differentiable, we may use Proposition <ref>, the Sobolev embedding theorem, and the Stein extension theorem, to obtain the same (h^q-2) estimates (<ref>) and (<ref>), as long as the combined stability constant ^*_1+ ^*_1 is bounded. We refer to <cit.> for approximation results by ambient tensor-product splines when Ω is a compact closed hypersurface. §.§ On the choice of minimization norm As we have seen in Section <ref>, assuming stability of the quadrature formulas, that is, _1 = (1) and _1 = (1), the regularity of the boundary value problem (<ref>), and appropriate numerical differentiation methods, Algorithm <ref> computes quadrature formulas of high convergence order. This seems to suggest to use the combined 1-norm (,)_♯ = (,)_1 = _1+_1 in Step 5 of Algorithm <ref>. However, the main reason for the 1-norm to appear in the estimates is that in the error analysis it was convenient to use the 1-norms _1, _1 of the quadrature weight vectors , , and the ∞-norm of the error of numerical differentiation. In fact, the Hölder inequality with any pair of conjugated exponents p,p' ∈ [1,+∞] such that 1/p + 1/p' = 1, leads to the estimates ∫_Ω f - ∑_i=1^N_Y_i f(y_i) ≤inf_u ∈_f{_p u Y - LΨ(u)_p' + _p u Z - BΨ(u)_p'}, ∫_∂Ω g - ∑_i=1^N_Z_i g(z_i) ≤inf_u ∈_g{_p u Y - LΨ(u)_p' + _p u Z - BΨ(u)_p'}, where p=1 of (<ref>) and (<ref>) is not necessarily optimal. Nevertheless, there are still good reasons why one may prefer the minimization of the 1-norm of the quadrature weights. 
First of all, the 1-norm is the standard way to assess stability of a quadrature formula: any perturbation of size ε > 0 in the values f(y_i) and g(z_i) is potentially amplified by _1 and _1 when computing the sums ∑_i=1^N_Y_i f(y_i) and ∑_i=1^N_Z_i g(z_i). Second, 1-norm minimization is useful for obtaining positive formulas with non-negative weights _i≥0 or _i≥0, which is a highly desirable property in many applications, although far from necessary for achieving stability of numerical integration in general. If we know the measures Ω∫_Ω 1 , ∂Ω∫_∂Ω 1 , of the domain and its boundary, then we can choose ≡ 1 and ≡-1 as auxiliary functions, so that ∑_i=1^N_Y_i +∑_i=1^N_Z_i = Ω + ∂Ω holds for all quadrature formulas (Y,) and (Z,) satisfying (<ref>). Hence, Ω + ∂Ω≤_1 + _1, and equality is attained if and only if the weights and are non-negative. If a pair of non-negative weight vectors (,) satisfying system (<ref>) exists, then it will be found as (^*,^*) by 1-norm minimization, and the combined 1-norm (^*,^*)_1 will be equal to Ω + ∂Ω, implying excellent stability. Note that in this case we may ensure that both formulas (Y,) and (Z,) are exact for constants by requiring two conditions ∑_i=1^N_Y_i = Ω and ∑_i=1^N_Z_i = ∂Ω instead of (<ref>). It is however possible that positive formulas with (<ref>) exist but none of them satisfies (<ref>). If we enforce instead of (<ref>) just one of the two conditions in (<ref>), that is, we use one of the pairs of auxiliary functions (,)≡(1,0) or (,)≡(0,1), then there is no guarantee that 1-norm minimization delivers a positive formula for either (Y,) or (Z,), even if they exist. From a computational point of view, any 1-norm minimization problem can be reformulated as a linear program with twice as many unknowns, so any general purpose linear programming solver can be used for 1-norm minimization. When the simplex algorithm is used, sparse interior quadrature formulas with no more than m nonzero weights can be obtained, with m being the total number of rows in the linear system (<ref>). For a numerical comparison of 1-norm and 2-norm minimization, we refer to the experiments in Section <ref>. The choice p = 2 is very attractive from the computational point of view: efficient linear algebra routines based on e.g. QR decomposition are available to find a solution of system (<ref>) with minimal combined 2-norm (,)_2 √(_2^2 + _2^2). Moreover, the strict convexity of the 2-norm guarantees uniqueness of the optimal solution (^*,^*), a property that may not hold for the 1-norm. §.§ Practical choices for effective algorithms After discussing in Sections <ref>–<ref> various options available for the realization of the framework outlined in Algorithm <ref>, we now describe two particular settings tested in the numerical experiments of Section <ref> and recommended for practical applications. We only consider domains Ω in ^d, even if the framework is applicable to surfaces and other manifolds. The evidence gathered in our experiments suggests that these choices are essentially optimal among the ones that will be compared in Section <ref>, and therefore represent a good starting point for applications and future research. The first algorithm is based on meshless finite difference formulas, whereas the second one is based on tensor-product spline spaces. This way we demonstrate that both functional and meshless finite difference approaches have effective realizations. 
In either case, we assume that |∂Ω| is known, and choose (,) ≡ (0,1), = , = γ_ν, (,)_♯ = (,)_2 = √(_2^2 + _2^2). For simplicity, we assume that the quadrature nodes in Y and Z, although irregular in general, are not intentionally generated with density varying over the domain or its boundary, and we characterize their density by a single spacing parameter h > 0 defined as the step size of a uniform Cartesian grid with N_Y nodes in a d-dimensional cube of measure |Ω|, and similarly for Z: h ≈ h_Y ≈ h_Z, with h_Y ( Ω/N_Y)^1/d, h_Z ( ∂Ω/N_Z)^1/(d-1). We assume that, in addition to supplying the quadrature nodes in Y and Z, the user provides sufficiently accurate outward-pointing unit normals at the boundary nodes of Z ⊂∂Ω, and, in the case of Algorithm <ref>, is in position to generate an additional node set X in targeting a prescribed spacing parameter h_X > 0. Approach based on meshless finite difference formulas (MFD). Note that in order to increase the performance of Algorithm <ref>, one should assemble the sparse matrices L^T and B^T directly, instead of precomputing matrices L_k and D_k for each spatial dimension separately. Moreover, the index sets S_L,i and S_B,i can be determined on the fly during the assembly procedure. Nevertheless, we have chosen to present Algorithm <ref> in the form above to enhance readability, and to clearly show how L^T and B^T can be put together in the divergence case, which has not been detailed out in Section <ref>. Indeed, we have assumed in that section that functions in are scalar valued, but in the case of = and = γ_ν the functions in are actually vector valued. In the scalar case, the discretization operator Ψ→^N_X is defined by pointwise evaluation over X, see (<ref>), and so the elements in the vector Ψ(u) follow the order of the nodes in X. In the vector valued case, the discretization operator Ψ(F) ∈^dN_X is defined by pointwise evaluation over X of all d components of the vector field F ∈, and so one must choose whether to order the elements of Ψ(F) so that all d components for a fixed node are contiguous, as in Ψ(F) = ( F_1(x_1), …, F_d(x_1), …, F_1(x_N_X), …, F_d(x_N_X) )^T, or so that all N_X pointwise values for a fixed component are contiguous, as in Ψ(F) = ( F_1(x_1), …, F_1(x_N_X), …, F_d(x_1), …, F_d(x_N_X) )^T. In Algorithm <ref> we have chosen the latter ordering, and although the discretization operator Ψ is not used in the algorithm, the chosen ordering for Ψ(F) clearly determines the order of the columns of L and B, and hence their assembly. In a high performance implementation, the order that provides the fastest memory access should be preferred. In any case, the final values of and do not depend on the chosen ordering for Ψ(F), because changing the order amounts to permuting the rows of the linear system Ax=b. Approach based on a tensor-product spline space (BSP). Once again, we have written Algorithm <ref> in a way to enhance readability rather than performance. We conclude this section with three remarks. First, we note that the set X of Algorithm <ref> may also be generated as a Cartesian grid over the bounding box H as defined in Algorithm <ref>. 
This version of MFD delivered satisfactory results in our preliminary numerical experiments, but we decided to present it in the current form because the theoretical analysis in Section <ref> in this case avoids the need for the existence of the extensions of the functions u_f and u_g to a larger domain D⊃Ω without loss of smoothness, which may lead to a better performance in some settings. Moreover, being able to generate the node set X⊂ targeting a prescribed spacing parameter h_X is not a restrictive hypothesis in practice, because, whenever node generation is deemed expensive due to the complexity of , one may obtain X by thinning the set Y ∪ Z. Second, Algorithms <ref> and <ref> can be adapted to the setting where the quadrature nodes Y and Z are generated with locally varying density according to a spacing function h(x) →^+. In this case, the set X of Algorithm <ref> may be generated by using a scaled spacing function h_X(x) = c h(x) for some c>1, either from scratch, or by a suitable subsampling algorithm, see e.g. <cit.>. In the case of Algorithm <ref>, the spacing function h(x) may be used to guide local refinement of the tensor-product spline space. Third, the choice of the coefficients in h_X = 1.6 h and h_ = 4 h is justified by the numerical experiments of Section <ref>. For now, we point out that these choices lead by design to underdetermined systems Ax=b. To see why, let us consider the case of Algorithm <ref>. Assuming that N_Y ≫ N_Z, the matrix A has approximately N_Y columns and dN_X rows. By the definitions of h and h_X, d N_X/N_Y≈d Ω h_X^-d/Ω h^-d = d (1.6)^-d, and so the ratio of rows to columns is always smaller than 1 for all d ∈, with a peak of about 0.8 for d=2. A similar argument shows that the linear system Ax=b assembled in Algorithm <ref> is also underdetermined for small enough h. 
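One ingredient mentioned in the first remark above, obtaining X by thinning the set Y ∪ Z, can be sketched in a few lines of Python (a simple greedy variant for illustration, not the authors' implementation): it returns a subset of the input nodes in which no two nodes are closer than the target spacing, and can be applied to Y ∪ Z with target spacing 1.6 h.

import numpy as np
from scipy.spatial import cKDTree

def thin_nodes(nodes, spacing):
    # greedily keep a maximal subset of `nodes` with pairwise distances > `spacing`
    nodes = np.asarray(nodes, dtype=float)
    tree = cKDTree(nodes)
    keep = np.ones(len(nodes), dtype=bool)
    for i in range(len(nodes)):
        if not keep[i]:
            continue
        for j in tree.query_ball_point(nodes[i], spacing):
            if j != i:
                keep[j] = False          # discard everything too close to a kept node
    return nodes[keep]

# e.g.  X = thin_nodes(np.vstack([Y, Z]), 1.6 * h)      (Y, Z, h assumed given)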
§ NUMERICAL TESTS In this section we numerically evaluate the accuracy and stability of quadrature formulas (Y,) and (Z,) produced by Algorithms <ref> and <ref>. Let Ω be a bounded Lipschitz domain in ^2 or ^3 with piecewise smooth boundary. Quadrature errors of (Y,μ) and (Z,) are computed for two test functions f_1, f_2 ∈ C^∞(Ω), and, respectively, their restrictions g_1 = f_1|_∂Ω and g_2 = f_2|_∂Ω.
The first test function f_1 is a multidimensional generalization of Runge's function centered at a domain-dependent point x_R ∈^d: f_1(x,x_R) = 1/(1+25x-x_R_2^2). The second test function f_2 is a scaled and translated Franke's function <cit.> in 2D, and its 3D generalization <cit.>. As our domains do not fit into the square [0,1]^2 or cube [0,1]^3, for which these test functions have been designed, we compose them with the affine mapping (x_1,…,x_d) ↦ ((x_1+1)/2, …, (x_d+1)/2), d=2,3, between [0,1]^d and [-1,1]^d. In two dimensions, Franke's function is evaluated with the command in MATLAB. The accuracy of the quadrature formulas is assessed by computing the relative errors e(f_1) δ(f_1,0) / I(f_1,0) , … , e(g_2) δ(0,g_2) / I(0,g_2). In all tests, the denominators are large enough to make relative errors meaningful. Relative errors were preferred to absolute errors to make results comparable across different test functions and domains. As will be clear from the convergence plots of Section <ref> that go below 10^-15 in some instances, we need very accurate reference values of the integrals I(f_1,0), I(f_2,0), I(0,g_1), I(0,g_2), in order to compute at least one reliable significant digit of the true relative errors. For every test domain, parametrizations of all smooth pieces of ∂Ω are known in closed form, with the parametric domain given by either an interval for 2D domains, or a rectangle for 3D domains. The integrals of f_1 and f_2 over Ω were computed by finding vector fields F_1 and F_2 whose divergence is f_1 and f_2, then using the divergence theorem to turn the integrals over Ω into integrals over ∂Ω, and finally integrating over the parametric domain of each smooth boundary piece with MATLAB's adaptive quadrature routines and . To meet the required accuracy target, absolute and relative tolerances were set to 8 · 10^-16. Note that even though the test functions f_1 and f_2 are both infinitely differentiable and of a simple shape, we can distinguish their smoothness because the partial derivatives of f_1 grow as the factorial of their order, whereas those of f_2, as an entire function, grow much more slowly, like an exponential of the order. This makes f_1, which actually stems from the famous Runge example for polynomial interpolation, a more difficult test function for high order methods than f_2. The stability of the quadrature formulas is assessed by computing the normalized stability constants K_1/Ω∑_i=1^N_Y_i, K_1/∂Ω∑_i=1^N_Z_i, that measure the sensitivity of the quadrature to the maximum absolute error in the function values, such that formulas with smaller K_ and K_ are more stable. Again, the normalization helps to compare the stability across all test functions and domains. For any quadrature formula exact on constants we have K_≥ 1 or K_≥ 1, and the equality holds if and only if the formula is positive. Without exactness for constants, but with the relative quadrature error for the constant function less than some ∈(0,1), we get K_> 1- and K_ > 1-. Therefore, the best stability is attained when K_ and K_ are close to one. Note that Algorithms <ref> and <ref> may produce quadrature weights such that K_ < 1, because exactness for constants is only enforced for (Z,) by the choice (,) ≡ (0,1). If |Ω| is known, in addition to |∂Ω|, then one may also enforce exactness of (Y,) for constants by adding one more row to the linear system (<ref>), but this does not provide any tangible benefit, as will be shown in Section <ref>.
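For concreteness, the quantities just defined can be evaluated with a few lines of Python; this is only a sketch with hypothetical variable names (the node sets Y and Z, the weight vectors w and v, the reference integral ref, and the measures area and perimeter are assumed to be available from the preceding steps).

import numpy as np

def f1(x, x_R):
    # Runge-type test function 1 / (1 + 25 ||x - x_R||_2^2)
    x = np.atleast_2d(x)
    return 1.0 / (1.0 + 25.0 * np.sum((x - np.asarray(x_R)) ** 2, axis=1))

def relative_error(weights, values, ref):
    # |quadrature - reference| / |reference|
    return abs(weights @ values - ref) / abs(ref)

def stability_constants(w, v, area, perimeter):
    # normalized 1-norms of the weight vectors; equal to one for positive
    # formulas that are exact for constants
    return np.sum(np.abs(w)) / area, np.sum(np.abs(v)) / perimeter

# e_f1 = relative_error(w, f1(Y, x_R), ref)
# K_w, K_v = stability_constants(w, v, area, perimeter)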
In what follows we first demonstrate in Section <ref> the effectiveness of the settings suggested in Algorithms <ref> and <ref>, which serves as a kind of a tuning step for the free parameters in the algorithms. We do the testing mostly for Algorithm <ref>, because the results for Algorithm <ref> are very similar. A second batch of numerical experiments in Section <ref> shows that quadrature errors converge to zero as h → 0 at the rate determined by the order of the numerical differentiation scheme. In all tests the weights of the meshless finite difference formulas are computed using the open source library mFDlab <cit.>, and those for the tensor-product spline differentiation by MATLAB's Curve Fitting Toolbox. §.§ Validation of recommended parameters §.§.§ Choice of quadrature nodes In the first test we demonstrate the robust performance of the MFD method described in Algorithm <ref> with respect to the choice of nodes in the sets Y and Z. These experiments are performed on an elliptical domain Ω_1 in ^2 with semi-axes of length 1 and 3/4 centered at the origin: Ω_1 = { (x_1,x_2) ∈^2 | x_1^2 + x_2^2/(3/4)^2 < 1 }. A domain in ^d with piecewise smooth boundary may be discretized by scattered nodes using a wide range of techniques, see the survey <cit.>. We consider three important cases: a meshless advancing front point cloud generation algorithm, and two rejection sampling algorithms based either on the quasi-random Halton sequence or on pseudo-random uniformly distributed samples, in both cases mapped to a bounding box containing Ω. The advancing front method works by first generating a set of boundary nodes Z from a parametric description of ∂Ω, possibly consisting of multiple patches, and then placing interior nodes Y_int in Ω by advancing the front inside the domain starting from Z. Node sets Z and Y_int are disjoint, and their nodes are spaced according to locally varying spacing functions h_Z ∂Ω→^+ and h_Y Ω→^+, although we restrict ourselves to constant spacing h_Z ≡ h_Y ≡ h ∈^+ for all numerical experiments, which implies that the nodes are quasi-uniform. Outward-pointing unit normals νZ are also computed and stored, because they are required to assemble matrix B in Algorithms <ref> and <ref>. We use a custom implementation of the advancing front method, with the set Z generated according to <cit.>, and Y_int according to <cit.>. In the case of rejection sampling, a set Y_H of N_H = ( h^-d) nodes is initially generated in a bounding box around Ω, and then only nodes inside Ω are kept by using a level set function such that Ω = { x ∈|φ(x) < 0 }, Y_int = { y ∈ Y_H |φ(y) < 0 }. Nodes in a first-order approximation of a tubular neighborhood of ∂Ω of width 2h are projected to the zero set of φ along the direction parallel to the gradient of φ, and then a thinning operation is applied to the projected points to ensure that no pair of nodes in Z has distance smaller than h. Regardless of how the sets Y_int and Z are generated, we produce both open quadrature formulas, for which Y ∩∂Ω = ∅, and closed quadrature formulas, for which Y ∩∂Ω≠∅. In all of our numerical experiments, the former are obtained by taking Y = Y_int, whereas the latter are obtained by taking Y = Y_int∪ Z. Note that the accuracy and stability of are also affected by the choice of Y, because the weight vectors and are computed simultaneously. 
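The rejection-sampling variants can be illustrated with a short Python sketch for the ellipse Ω_1 (our illustration; the advancing front generator is a separate algorithm, and all names here are ours). It draws Halton nodes in the bounding box, keeps the interior ones, projects the nodes lying in a first-order tubular neighborhood of width 2h onto the zero level set, and returns approximate outward unit normals; a thinning pass with spacing h, as in the thinning sketch given at the end of the previous section, would then be applied to the projected boundary nodes.

import numpy as np
from scipy.stats import qmc

def phi(x):                        # level set of Omega_1: phi < 0 inside
    return x[:, 0] ** 2 + x[:, 1] ** 2 / 0.75 ** 2 - 1.0

def grad_phi(x):
    return np.column_stack([2.0 * x[:, 0], 2.0 * x[:, 1] / 0.75 ** 2])

def halton_nodes_ellipse(h, seed=1):
    n_box = int(np.ceil(2.0 * 1.5 / h ** 2))        # O(h^-2) samples in the bounding box
    box = qmc.scale(qmc.Halton(d=2, seed=seed).random(n_box), [-1.0, -0.75], [1.0, 0.75])
    vals, grads = phi(box), grad_phi(box)
    gnorm2 = np.sum(grads ** 2, axis=1)
    Y_int = box[vals < 0.0]                          # interior nodes
    near = np.abs(vals) / np.sqrt(gnorm2) < h        # first-order tube of width 2h
    Z = box[near] - (vals[near] / gnorm2[near])[:, None] * grads[near]  # project to {phi = 0}
    nu = grad_phi(Z)
    nu /= np.linalg.norm(nu, axis=1, keepdims=True)  # outward unit normals at Z
    return Y_int, Z, nu                              # Z still needs thinning to spacing h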
In all tests, unless stated otherwise, the set X of discretization nodes used to define the numerical differentiation scheme is generated in the same way as the closed version of Y, but with a larger spacing h_X = 1.6 h. Node generation can be a stochastic process, which applies in particular to the advancing front method and rejection-sampling of random nodes, or a deterministic process whose starting conditions are arbitrary and can therefore be randomized, like rejection-sampling of Halton nodes. In either case, node sets X, Y, Z can be understood as functions of a seed, an integer variable i_s that initializes the pseudo-random number generators used in all cases. A naive comparison of quadrature errors based on a single choice of seed can lead to misleading results, because all errors are subject to random noise, and so any particular node distribution may overperform or underperform compared to the others, sometimes by even an order of magnitude. Numerical quadrature is much more affected by this than for example the data fitting problem, where the maximum or the root mean square error over an entire domain is computed, which significantly reduces the influence of the noise in the pointwise errors. Therefore we compute average errors as follows. Let e(·,i_s) be the relative quadrature error for a given seed i_s ∈. We denote by e_rms(·) the root mean square values (RMS) of e(·,i_s) for i_s = 1, …,n_s: e_rms(·) = (1/n_s∑_i_s = 1^n_s e(·,i_s)^2 )^1/2, where we take n_s=64. The stability constants K_ and K_ are obtained by averaging the values K_(i_s) = 1/Ω∑_j=1^N_Y(i_s)_j(i_s) and K_(i_s) = 1/∂Ω∑_j=1^N_Z(i_s)_j(i_s) over all seeds i_s = 1, …, n_s. Table <ref> reports the errors and stability constants for open and closed quadrature formulas computed by Algorithm <ref> using the three node generation methods described above on Ω_1 and ∂Ω_1. Numerical differentiation is performed by meshless finite difference formulas of order q=5, and the Runge function f_1 is centered at x_R = (0,0). In all rows of Table <ref>, care was taken to tweak the value of h so that N_Y ≈ 2500. We observe that nodes generated by the advancing front algorithm lead to the most accurate and stable quadrature formulas, although Halton nodes and even random nodes remain competitive. This suggests that our quadrature formulas are robust with respect to the exact placement of the input nodes in Y and Z. On average, closed quadrature formulas are more accurate and stable than their open counterparts, and so they will be used in all subsequent experiments. Note that the errors for the worst of 64 seeds are less than three times higher than the RMS errors in the table in all cases except of the open quadrature on random nodes, where they may get to almost 8 times higher. The worst stability constant K_w is 1.71 for the closed quadrature on advancing front nodes and is significantly higher than the average only for random nodes, with 8.29 for the closed and 233.21 for the open formulas. The largest constant K_v for the open formulas on random nodes is 12.72 and never exceeds 1.07 in other cases. §.§.§ Choice of spacing parameters h_X and h_ In order to keep the linear system (<ref>) underdetermined and solvable, the spacing parameters h_X and h_ must be sufficiently large, because they determine the size of the system matrix. 
Nevertheless, as seen by the considerations in Section <ref>, they should remain comparable to h to ensure that the numerical differentiation scheme is asymptotically accurate in the sense that ε_L(u) = (h^q-k) and ε_B(u) = (h^q-k). Therefore, we suggest to choose h_X and h_ as multiples of h with some fixed coefficients, and investigate how the errors of the quadrature formulas depend on the ratios h_X/h and h_/h. In Algorithms <ref> and <ref> we recommend to choose the spacing parameters as h_X = 1.6 h and h_ = 4 h based on numerical evidence that we have gathered in our numerical experiments on several 2D and 3D domains. The case of the ellipse Ω_1 is presented in Figure <ref>, where quadrature errors are plotted as a function of the ratios h_X/h and h_/h for h = 0.025 and q = 5. We use node sets X,Y,Z generated by the advancing front algorithm, with Z ⊂ Y. As in the previous test, data points are the RMS of 64 quadrature errors given by different seeds. The Runge function f_1 is centered at x_R = (0,0). To aid visualization, instead of the errors themselves we plot their relative size compared to the choices h_X = 1.6 h and h_ = 4 h. The vertical axis is in logarithmic scale. In addition we plot the stability constant K_w. The two plots clearly show that quadrature errors decrease as the ratios h_X/h and h_/h get smaller, but only up to a point: when the ratios get too small, the errors and the stability constant K_ quickly rise up again. Reducing the ratios even further leads to overdetermined systems for which no exact solution exists, and so their errors are not included in the plots. The ratios h_X = 1.6 h and h_ = 4 h were found to be a good compromise between accuracy and stability, as they are close the minimum of the curves in Figure <ref>, but leaning on the side of stability, so that a safety margin is left in Algorithms <ref> and <ref>. In our experiments, the same ratios were found to also be appropriate for 3D domains. The stability constant for is not included in Figure <ref> because its dependence on h_X/h and h_S/h is very weak. Even though increasing the ratios h_X/h or h_/h reduces the number of rows of the system matrix A, it does not change the total number of nonzeros in A. Indeed, the number of nonzeros in every column of the matrix (L^T -B^T) depends in the MFD case only on the size n_L or n_B of the set of influence of the corresponding node in Y or Z, as chosen in Step 4 of Algorithm <ref>. Likewise, in the BSP case the number of nonzeros in every column in general equals the number q^d of B-splines of degree q-1 that include the corresponding quadrature node in the interior of their support. Taking into account the last row of A corresponding to the non-homogeneous constraint, we get the following upper bound for the number of nonzeros: (A)≤ N_Z+ n_LN_Y+n_BN_Z, for Algorithm <ref>, q^d(N_Y+N_Z), for Algorithm <ref>. Note that we do not test how the performance of the MFD quadrature depends on the parameters n_L and n_B, leaving this to a future work, where also more sophisticated methods for the selection of the sets of influence could be investigated. As mentioned in Section <ref>, we rely on the safe and simple but possibly not optimal selection of k nearest neighbors in X to a node in Y or Z, with k equal to twice the dimension of the corresponding polynomial space. As a result, (A) is somewhat larger for MFD than for BSP in the 2D case, for the same q, and about half of it in 3D. 
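The nearest-neighbor selection just described is straightforward to implement; the following sketch (illustrative names) returns, for each evaluation point, the indices of the k closest nodes of X with k equal to twice the dimension of the polynomial space, i.e. k = 2·binom(q-1+d, d) under the convention that Π^d_q collects polynomials of total degree less than q.

import numpy as np
from math import comb
from scipy.spatial import cKDTree

def sets_of_influence(X, eval_points, q, d):
    k = 2 * comb(q - 1 + d, d)                  # twice dim(Pi^d_q)
    tree = cKDTree(np.asarray(X, float))
    _, idx = tree.query(np.atleast_2d(eval_points), k=k)
    return idx                                  # row i lists the stencil for eval_points[i]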
§.§.§ Choice of operators and In Section <ref> we discussed two possible ways to choose operators and for our scheme, the elliptic approach with = Δ and = ∂_ν, and the divergence approach, with = and = γ_ν. Theoretical considerations indicated that the elliptic approach should be appropriate for domains with smooth boundary. However, the theory of the elliptic approach breaks down when the boundary is not smooth. In the same time, the divergence approach looks promising for piecewise smooth domains, even though the theory to justify it is far from complete because of the lack of results on the regularity of the corresponding boundary value problem (<ref>). In this section we compare the numerical performance of both approaches on two 2D domains, the ellipse Ω_1 with a smooth boundary, and the disk sector Ω_2 defined in the polar coordinates (r,θ) centered at the origin by Ω_2 = { (r,θ) ∈^2 | 0 < r < 1 and 0 < θ < 3π/2 }. The domain Ω_2 has non-smooth boundary, with a reentrant corner at (0,0), and serves as a standard benchmark for testing numerical methods for the elliptic boundary value problem (<ref>) that take into account the reduced smoothness of the solution at the reentrant corners. We use numerical differentiation schemes (Ψ,L,B) based on meshless finite difference formulas for both approaches, the same sets of quadrature nodes Y and Z generated by the advancing front algorithm with Z ⊂ Y, and the same non-homogeneous constraints with (,) ≡ (0,1). In the divergence approach we generate X with the usual advancing front method and spacing parameter h_X = 1.6 h, whereas in the elliptic approach we pick X = Y. We have found this simple choice to be more accurate and reliable compared to the generation of a coarser set of nodes X (Figure <ref> is specific to the divergence approach). The Runge function f_1 is centered at x_R = (0,0) in the case of the ellipse Ω_1, and at x_R = (cos(3π/4)/2,sin(3π/4)/2) in the case of the disk sector Ω_2. We have chosen the polynomial order as q = 4+k, where k is the order of the differential operator , so that the errors are expected to decay as h^4 as h→0, unless the convergence order is saturated by the low regularity of the solution u of the boundary value problem (<ref>). Figure <ref> presents relative quadrature errors as a function of spacing parameter h for both the elliptic and the divergence approach for the ellipse Ω_1 (top row) and for the disk sector Ω_2 (bottom row). Relative errors are the RMS of 8 outcomes given by distinct seeds. Comparing the plots in the top row of Figure <ref> for the smooth domain Ω_1, we observe that although the errors are smaller for the divergence approach, the decay is h^4 in both cases, as predicted by the theory presented in Sections <ref> and <ref>. In contrast to this, the plots in the bottom row for the non-smooth disk sector Ω_2 show that in the case of the elliptic approach the quadrature errors are barely decreasing as h → 0, whereas the divergence approach still delivers the expected h^4 convergence order. This is the very compelling reason why the divergence approach is recommended in Algorithms <ref> and <ref>, despite the additional complexity introduced by the numerical differentiation of a vector field instead of a scalar function. Note that the stability constants K_ and K_ of the quadrature formulas generated by the elliptic approach on the disk sector never exceed 1.1 in the tests of Figure <ref>(c). 
This provides further evidence that the large quadrature errors are not caused by the instability of the computed formulas, but rather by the ineffectiveness of the numerical differentiation scheme on solutions u to the boundary value problem (<ref>) on the disk sector due to their low regularity near the reentrant corner, which leads to large recovery errors ε_L(u) and ε_B(u) and influences all estimates of Section <ref> starting from (<ref>). In addition to the experiments in this section, those in Section <ref> for 4≤ q≤ 8 on the disk sector Ω_2 and an L-shaped 3D domain (see Figures <ref> and <ref>), also confirm that the divergence approach does not suffer from saturation of the convergence order of the quadrature formulas. This can be interpreted as numerical evidence in support of our conjecture that the boundary value problem (<ref>) for the divergence operator has solutions of high regularity order on Lipschitz domains with piecewise smooth boundary, whenever the right hand side is highly regular, see Section <ref> for a more detailed discussion. §.§.§ Choice of non-homogeneous constraints At least one non-homogeneous constraint (<ref>) has to be included in the linear system (<ref>), or else its minimum-norm solution will be trivial. In Table <ref> we have compared quadrature errors and stability constants for different combinations of the non-homogeneous constraints introduced in Section <ref>: (,) ≡ (1,0), (,) ≡ (0,1), (,) ≡ (1,-1), (,) ≡ (0,∂_νΦ(·,x_0)) with x_0 ∈Ω. The integration domains are the ellipse Ω_1 and the disk sector Ω_2, as defined previously. The spacing parameter is h = 0.025 and the polynomial order is q = 5. Nodes in X,Y,Z are generated by the advancing front algorithm with Z ⊂ Y, and relative errors are the RMS of 64 quadrature errors given by different seeds. The Runge function f_1 is centered at x_R = (0,0) in the case of the ellipse Ω_1, and at x_R = (cos(3π/4)/2,sin(3π/4)/2) in the case of the disk sector Ω_2. The fundamental solution is centered at x_0 = (0.1, 0.05), a point that does not belong to any axis of symmetry of the two domains. The results in the table show that quadrature errors depend quite weakly on the choice of non-homogeneous constraints. In particular, on the ellipse, whose boundary is smooth, there is essentially no difference in the errors, whereas on the disk sector, with a non-smooth boundary, a difference of one order of magnitude or more can be observed between enforcing exactness for constants and enforcing exactness for the normal derivative of the fundamental solution centered at x_0 ∈Ω. This is the reason to recommend the choice (,) ≡ (0,1) in Algorithms <ref> and <ref> whenever the measure of ∂Ω is known or can be sufficiently accurately approximated at low cost, as in the case of domains whose boundary is described by parametric patches with standard parameter domains for which accurate quadrature formulas are available. Whenever this is challenging, the approach based on the fundamental solution remains effective, although a smaller spacing parameter h may be needed to achieve the same errors on domains with non-smooth boundary. As far as stability is concerned, all the non-homogeneous constraints perform equally well. Boundary quadrature formulas are close to being positive, but no constraint can consistently deliver non-negative weights across all 64 seeds. 
§.§.§ Choice of minimization norm In Section <ref>, we have discussed relative merits of the norms (,)_1 = _1 + _1, (,)_2 = √(_2^2 + _2^2) as candidates for the optimization norm (,)_♯ to be used in the final step of the general framework described in Algorithm <ref>, where the quadrature weights ^* and ^* are computed by minimizing (,)_♯ over all solutions to the non-homogeneous linear system Ax=b of (<ref>). We now present numerical experiments to compare their performance and support our choice of the 2-norm for the practical algorithms of Section <ref>. The 1-norm minimization problem can be reformulated as the linear program Minimize 1^T x^+ + 1^T x^- subject to A x^+ - A x^- = b and x^+, x^- ≥ 0, with x=x^+ -x^-, and solved by e.g. the dual simplex algorithm or an interior-point method. For our numerical tests, we have used their implementations provided by the 2024a release of MATLAB's Optimization Toolbox. For the solution of the 2-norm minimization problem, we have used a direct method based on the sparse QR factorization provided by the SuiteSparseQR library <cit.>, version 4.3.3 available from <cit.>. The integration domain for these tests is the ellipse Ω_1, the polynomial order is q = 5, and the sets of nodes X,Y,Z are generated as usual by the advancing front algorithm with Z ⊂ Y. The Runge function f_1 is centered at x_R = (0,0). Table <ref> reports quadrature errors and stability constants of the formulas produced by Algorithm <ref> when minimizing the norms above, denoted in the table by L^1 and L^2. Separate rows are devoted to the 1-norm minimization computed by the dual simplex algorithm and the interior-point method, because their results are significantly different. For the same tests, Table <ref> reports further information such as the runtime t_solve of the solver, the number of the rows m of the system matrix A, and the number of nonzero weights in the quadrature formulas and . For each value of the spacing parameter h, results are averaged over 8 different seeds. Values which are meant to be integers, such as m, have been rounded to the nearest integer, to enhance readability. It turned out that the interior-point method for 1-norm minimization may stall or lose feasibility when solving the linear program (<ref>). For this reason, we have included the column "conv" in the tables that reports how many runs have converged out of 8, and all statistics are computed only for the seeds for which convergence has been achieved. As we see in Table <ref>, minimization of the 1-norm leads on average to smaller quadrature errors compared to the 2-norm, although the advantage becomes less significant as h decreases. The L^1 methods also deliver more stable formulas, especially on the boundary: the appearance of the exact value 1 for K_ indicates that a positive formula (Z,) has been found for all seeds, because Algorithm <ref> relies on the non-homogeneous constraint (,)≡(0,1). The L^2 method only delivers positive formulas on the boundary for a few seeds. The L^1 methods, however, struggle with computational efficiency and robustness. The interior-point method does not converge for any seed when h = 0.025, even if parameters such as , , and are increased significantly from their default values. The dual simplex algorithm always terminates successfully in these tests, but its running time is orders of magnitude longer compared to SuiteSparseQR. This is the reason why the 2-norm is recommended in Algorithms <ref> and <ref> for general use. 
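As an illustration of the two alternatives (not the MATLAB/SuiteSparseQR pipeline used in the paper), the following Python sketch computes, for a given dense consistent underdetermined system Ax=b with the weights stacked as x=(w,v), a minimum 1-norm solution through the linear program with the split x = x^+ - x^-, and a minimum 2-norm solution through a least-squares solver.

import numpy as np
from scipy.optimize import linprog

def min_l1_solution(A, b):
    # minimize 1^T x+ + 1^T x-  subject to  A x+ - A x- = b,  x+, x- >= 0
    m, n = A.shape
    res = linprog(np.ones(2 * n), A_eq=np.hstack([A, -A]), b_eq=b,
                  bounds=(0, None), method="highs")
    if not res.success:
        raise RuntimeError(res.message)
    return res.x[:n] - res.x[n:]

def min_l2_solution(A, b):
    # np.linalg.lstsq returns the minimum 2-norm solution of a consistent
    # underdetermined system (the paper uses a sparse rank-revealing QR instead)
    return np.linalg.lstsq(A, b, rcond=None)[0]

# x = min_l2_solution(A, b); w, v = x[:N_Y], x[N_Y:]    (A, b, N_Y assumed given)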
The two algorithms for the minimization of the 1-norm that we have considered so far are not the only iterative algorithms that struggle with the system Ax=b: the ones for the minimization of the 2-norm, such as LSQR <cit.>, also require too many iterations to be practical, unless an effective preconditioning strategy is employed (the condition number of the minimum-norm least squares problem naturally grows as h → 0). Since out-of-the-box performance of standard left and right preconditioners such as symmetric successive over-relaxation and incomplete Cholesky factorization was not found to be satisfactory, we opted for a direct solver based on a sparse QR factorization of A^T, as implemented in the command provided by the MATLAB interface to the open source library SuiteSparseQR <cit.>. Fill-in during sparse QR factorization is mitigated by the reordering algorithms used by SuiteSparseQR, which were found to be quite effective, especially in the case of 2D domains. The QR factorization provided by SuiteSparseQR is rank-revealing, and the numerical rank of A is taken into account when computing the solution to Ax=b with minimal 2-norm. To maximize accuracy, the tolerance 10^-15 was passed to for numerical rank computation in all tests in this paper, unless stated otherwise. By the well-known properties of the linear program (<ref>), its solution is not necessarily unique, and there is at least one solution x with the number of nonzero components not exceeding the number m of rows in the matrix A. The dual simplex algorithm is known to terminate at a solution with this property, and indeed we see that ()+() is less than m in all cases when this method is used, meaning that the combined quadrature weights x = (,)^T have a significant number of zeros, which may be seen as an advantage. Moreover, the quadrature formulas (Y,) and (Z,) are also sparse when considered individually, because () < N_Y and () < N_Z. In contrast to this, all weights coming from the interior-point method and the 2-norm minimization are always nonzero. Even sparser weights can be obtained by the dual simplex method with a larger ratio h_X/h, although the accuracy of the resulting quadrature formulas will be reduced. §.§ Convergence tests in 2D and 3D Now that numerical evidence for the choices made in Algorithms <ref> and <ref> has been given, we move on to the evaluation of relative quadrature errors for h → 0, i.e. for an increasing number of quasi-uniform quadrature nodes, on the six test domains depicted in Figure <ref>. The first two domains are the ellipse Ω_1 and the disk sector Ω_2 already defined in Sections <ref> and <ref>. The third and last 2D domain is the region enclosed by a Cassini oval, a quartic plane curve defined as the locus of points such that the product of the distances to two fixed points (-a,0) and (a,0), called foci, is a constant b^2 ∈^+: Ω_3 = { (x_1,x_2) ∈^2 | ((x_1+a)^2+x_2^2)((x_1-a)^2+x_2^2) - b^4 < 0 }. For a < b < a√(2), the domain Ω_3 has smooth boundary and is simply connected, although it is not convex: its shape resembles the number eight rotated sideways. For our numerical experiments, we have taken a = 0.95 and b = 1 to stress its non-convex shape and to ensure that its area is close to that of Ω_1 and Ω_2.
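Before turning to the 3D domains, we note that the defining inequalities of the three 2D test domains translate directly into level-set style membership tests for the rejection-sampling node generator described earlier; the sketch below (our notation) uses phi < 0 for the interior, and for Ω_2 the expression is only a membership test, not a smooth level set.

import numpy as np

def phi_ellipse(x):                 # Omega_1: semi-axes 1 and 3/4
    return x[:, 0] ** 2 + x[:, 1] ** 2 / 0.75 ** 2 - 1.0

def phi_sector(x):                  # Omega_2: unit disk minus the quadrant 3*pi/2 < theta < 2*pi
    return np.maximum(np.hypot(x[:, 0], x[:, 1]) - 1.0,
                      np.minimum(x[:, 0], -x[:, 1]))

def phi_cassini(x, a=0.95, b=1.0):  # Omega_3: product of squared distances to the foci minus b^4
    d1 = (x[:, 0] + a) ** 2 + x[:, 1] ** 2
    d2 = (x[:, 0] - a) ** 2 + x[:, 1] ** 2
    return d1 * d2 - b ** 4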
As far as 3D domains are concerned, Ω_4 is an ellipsoid with semiaxes (1,0.7,0.7), Ω_5 is an L-shaped domain obtained by cutting one quadrant from a rectangular cuboid with sides of length (2,2,2/3), and Ω_6 is the solid torus with major radius R = 1 and minor radius r = 0.32. For all tests we use node sets X,Y,Z generated by the advancing front algorithm (see Section <ref>), with Z ⊂ Y and spacing parameter h ranging from 0.005 to 0.16 on 2D domains and from 0.025 to 0.1 on 3D domains. Note that the size of 2D domains has been chosen so that their areas are approximately equal, and the same has been done for 3D domains and their volumes. This means that the advancing front algorithm generates a similar number N_Y of internal quadrature nodes for a given spacing parameter h > 0, and so the plots of interior quadrature errors can be directly compared across different domains of the same dimension. The exact number of quadrature nodes in Y and Z and the size of the system matrix A are reported in Table <ref> for the domains Ω_1 and Ω_4, exemplifying 2D and 3D domains, respectively. The data for the other four domains is very similar to Ω_1 or Ω_4, except that the size of Z varies somewhat as it depends on the measure of ∂Ω. Variables m_MFD and m_BSP denote the number of rows of A in the case of Algorithms <ref> and <ref>, respectively. We report m_BSP for the order q=5, but the numbers for other orders used in the tests are similar. The ratios m_MFD/N and m_BSP/N are below one, and this confirms that the linear system Ax=b is underdetermined. Relative quadrature errors for each test domain and for the polynomial orders q between 4 and 8 are shown as a function of the spacing parameter h in Figures <ref> to <ref>. Results obtained by Algorithms <ref> and <ref> are shown in logarithmic scale side by side, with the same range for the horizontal and vertical axis to aid visualization and comparisons. Reference slopes corresponding to different convergence orders q-1 are shown using dotted lines. The center x_R of the Runge function f_1 for each test domain is specified in the caption of its corresponding figure, and is chosen so that x_R ∈Ω. Boundary quadrature errors for the test function g_2 were omitted for the sake of brevity, as these values are very similar to the ones for g_1 in all cases. Unlike the numerical tests in Section <ref>, relative quadrature errors are now reported for a single seed i_s = 1. Because of this, the error curves in Figures <ref>–<ref> look somewhat noisy compared to the convergence plots for e.g. the numerical solution of boundary value problems, where typically the RMS error over the entire domain is reported, which averages out the noise present in individual pointwise errors. We have chosen to plot quadrature errors for a single seed to stress that convergence is still evident despite the noise induced by the node generation process, and that no averaging with respect to the seed is needed in practice to get accurate and reliable results. We indeed observe in all the figures a high order of convergence for both methods MFD and BSP, including the non-smooth domains Ω_2 and Ω_5, with the slopes of the error plots increasing with q and roughly corresponding to the expected h^q-1 convergence order, in accordance with error estimates (<ref>), (<ref>), (<ref>) and (<ref>), if the noise induced by the node generation process is factored out. 
In the 2D setting, errors close to the accuracy limits of double-precision floating-point numbers are reached in all tests, with values around 10^-15 being typical for the highest order methods on the largest number of nodes considered for the tests. To reach such small errors in 2D tests, the rank detection tolerance used by SuiteSparseQR (see Section <ref>) had to be manually set to 10^-15, or else errors would plateau around 10^-12. In the 3D setting, where such high precision is not reached, the rank detection tolerance used by SuiteSparseQR was kept to its automatically assigned value (about 10^-10 for most tests), a choice motivated by better errors and stability constants for coarser sets of nodes. Comparison of the results for MFD versus BSP reveals similar behavior of the two approaches, with comparable quadrature errors across all tests for methods with the same polynomial order q. The main difference, although small, is that the MFD method tends to perform better than BSP on 3D domains with non-smooth boundary such as Ω_5, whereas the BSP method tends to perform better than MFD for the integration of Franke's function (the smoother one among f_1 and f_2). Note that the numbers in the columns m_MFD and m_BSP of Table <ref> indicate that the system matrices for MFD are much larger than those for BSP, which increases the fill-in for the direct method of SuiteSparseQR. Nevertheless, as discussed in Section <ref>, the number of nonzeros in the system matrix for MFD is comparable in 2D to that for BSP, and about a half of it in 3D. Concerning stability constants, the values of K_ and K_ are not reported in the figures, but in all cases do not grow as h → 0, and typically remain bounded below 2. For the same choices of h and q, Algorithm <ref> may produce larger stability constants, with values up to 100 on the coarsest node distributions and largest polynomial order q=8 (assuming that system (<ref>) is underdetermined), whereas Algorithm <ref> always leads to stability constants smaller than 4 in our experiments. For finer node distributions or smaller polynomial orders, both methods produce stability constants smaller than 2. In all tests, quadrature errors are dropped from the plots whenever the linear system Ax=b is overdetermined, i.e. the matrix A has more rows than columns. Moreover, in all MFD tests, quadrature errors are dropped from the plots whenever n_L > N_X, that is, when the size of the sets of influence for the discretization of exceeds the size of the set X. These two measures can be understood as rough safeguards against ill-chosen parameters h and q. In the case of Algorithm <ref>, the support of some B-splines may have small intersection with Ω, and this is known to be a source of instability in immersed methods, see e.g. <cit.>. In our numerical experiments, however, we have not observed any instability as h → 0, and a likely explanation is that the minimization of the 2-norm has a regularizing effect on the computation of quadrature weights. Nevertheless, stabilization techniques from the literature on immersed methods may further improve the accuracy and stability of the BSP approach. § ACKNOWLEDGEMENTS Bruno Degli Esposti is a member of the Gruppo Nazionale Calcolo Scientifico - Istituto Nazionale di Alta Matematica (GNCS-INdAM). The INdAM support through GNCS is gratefully acknowledged.
http://arxiv.org/abs/2409.02276v1
20240903202123
Optimizing Multi-User Uplink Cooperative Rate-Splitting Multiple Access: Efficient User Pairing and Resource Allocation
[ "Shreya Khisa", "Mohamad Elhattab", "Chadi Assi", "Sanaa Sharafeddine" ]
eess.SP
[ "eess.SP" ]
Optimizing Multi-User Uplink Cooperative Rate-Splitting Multiple Access: Efficient User Pairing and Resource Allocation Shreya Khisa, Mohamad Elhattab, Chadi Assi and Sanaa Sharafeddine September 2024 ======================================================================================================================== § ABSTRACT This paper investigates joint user pairing, power and time slot duration allocation in uplink multiple-input single-output (MISO) multi-user cooperative rate-splitting multiple access (C-RSMA) networks in half-duplex (HD) mode. We assume two types of users: cell-center users (CCU) and cell-edge users (CEU). First, we propose a user pairing scheme utilizing a semi-orthogonal user selection (SUS) and a matching-game (MG)-based approach, where the SUS algorithm is used to select the CCU in each pair, which assists in reducing inter-pair interference (IPI). Afterward, the CEU in each pair is selected by considering the highest channel gain between the CCU and the CEU. After pairing is performed, the communication takes place in two phases: in the first phase, in a given pair, CEUs broadcast their signal, which is received by the base station (BS) and CCUs. In the second phase, in a given pair, the CCU decodes the signal from its paired CEU, superimposes its own signal, and transmits it to the BS. We formulate a joint optimization problem in order to maximize the sum rate subject to the constraints of the power budget of the user equipment (UE) and Quality of Service (QoS) requirements at each UE. Since the formulated optimization problem is non-convex, we adopt a bi-level optimization to make the problem tractable. We decompose the original problem into two sub-problems: the user pairing sub-problem and the resource allocation sub-problem, where the user pairing sub-problem is independent of the resource allocation sub-problem; once the pairs are identified, the resource allocation sub-problem is solved for a given pair. The resource allocation sub-problem is solved by invoking a successive convex approximation (SCA)-based approach. Simulation results demonstrate that the proposed SUS-MG-based algorithm with SCA outperforms other conventional schemes. Cooperative communications, uplink, half-duplex, RSMA, user pairing, 6G. § INTRODUCTION The significant increase in wireless traffic and the growing demand for high-speed data transmission have sparked considerable interest in innovative solutions aimed at advancing the upcoming phase of wireless communication, often referred to as the sixth generation (6G) <cit.>. In the evolution of 6G, it is imperative to address the escalating need for ultra-high reliability, high throughput, diverse Quality of Service (QoS) requirements, ultra-low latency, and massive connectivity. These factors are pivotal in fulfilling the requirements of services such as extremely reliable and low-latency communication (eURLLC), enhanced mobile broadband (eMBB), and ultra-massive machine type communication (umMTC) <cit.>. These challenges are compounded by the rapid proliferation and widespread utilization of smartphones and tablets. The increasing number of these devices will inevitably congest the wireless spectrum further, exacerbating the scarcity of available spectrum resources. To combat the spectrum crunch and satisfy the rigorous demands of surging broadband usage, one effective approach is to explore innovative and efficient multiple access (MA) technologies, which have the potential to enhance system capacity in a cost-effective manner.
Recently, rate-splitting multiple access (RSMA) has emerged as a promising contender for a non-orthogonal MA mechanism, offering flexible interference management for next-generation wireless communication <cit.>. The main principle behind RSMA is to partially treat the multi-user interference as noise and partially decode it <cit.>. Utilizing the RSMA principle, the base station (BS) splits the signals of user equipments (UEs) into common and private parts and transmits the total signal using superposition coding (SC). At the receiver side, the common stream is decoded by treating all private streams as interference, and then, the decoded common stream is removed from the total received signal utilizing successive interference cancellation (SIC) process. Meanwhile, the private streams are decoded at a particular UE by treating other private streams coming from other UEs as interference. It should be noted that the common streams are needed to be decoded by all users. On the other hand, private streams are decoded by their intended users only. This flexible nature of the interference management scheme assists RSMA to bridge the gap between space-division multiple access (SDMA), which fully treats multi-user interference as noise, and non-orthogonal multiple access (NOMA), which fully decodes the interference <cit.>. Even though RSMA has shown significant improvement in performance gain over NOMA and SDMA in terms of throughput, sum-rate, and energy efficiency <cit.>, it may suffer from performance loss, which may limit its potential gain. This is because the common stream is required to be decoded by all users, and hence, the achievable common rate is constrained by the worst-case user who possesses a poor channel gain with the BS. In order to tackle this challenge and unleash the full potential gains of RSMA, the amalgamation between cooperative communication and RSMA has been investigated, which is known as cooperative RSMA (C-RSMA) <cit.>, <cit.>, <cit.>. Specifically, in C-RSMA, the cell-center users (CCUs), which maintain a good channel gain with the BS, can assist the cell-edge users (CEU)s by relaying the decoded common stream to the CEUs to improve their signal quality. Consequently, C-RSMA has shown promising results in terms of rate region <cit.>, user fairness <cit.>, <cit.>, power consumption minimization <cit.>, network coverage extension <cit.>, and secrecy rate enhancement <cit.> in comparison to the traditional RSMA. It should be noted that all the above-mentioned works mainly focused on the C-RSMA framework in a downlink scenario, meanwhile C-RSMA framework in the uplink setup is still in its infancy stage, which motivates this study. The primary difference between uplink RSMA and downlink RSMA is in splitting the transmitted signal for each user <cit.>. Specifically, in uplink, UEs can split their signals into multiple parts without considering any common part or private part. It implies that there is no common message transmission in the uplink RSMA scenario. Particularly, according to the principle of uplink RSMA, at user-k, k ∈{1,..., K}, the message W_k to be transmitted is split into two sub-messages W_k,1 and W_k,2. This can be interpreted as creating two virtual users <cit.>. The messages W_k,1 and W_k,2 of the two virtual users are independently encoded into streams s_k,1 and s_k,2 with unit variance, i.e., E[|s_k,i|^2]=1, i = 1, 2. These two streams are then respectively allocated with certain powers, P_k,1 and P_k,2, and superposed at user-k. 
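For concreteness, with unit-variance streams the superposed transmit signal at user-k can be written as x_k = √(P_k,1) s_k,1 + √(P_k,2) s_k,2, so that the power spent by user-k is E[|x_k|^2] = P_k,1 + P_k,2; this is only a schematic form anticipating the system model developed later in the paper.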
The main advantage of uplink RSMA over uplink NOMA lies in the flexible decoding process at the BS. Particularly, the sub-messages of RSMA belonging to a particular user do not need to be decoded sequentially, and the decoding of sub-messages depends entirely on the adopted decoding order. For example, sub-message 1 of user 2 can be decoded before sub-message 2 of user 1. This flexible decoding nature of RSMA allows one sub-message of a particular user to be decoded under more interference while another sub-message of the same user is decoded under less interference. On the other hand, in NOMA, as no message is split, the whole message of a particular user is decoded at the BS while considering the other user messages as interference, resulting in non-flexible interference management. Even though uplink RSMA may require a higher number of SIC processes, the SIC is performed at the BS, which has high processing capabilities. In addition, in uplink RSMA, the rate of each user is the summation of the rates of its split messages, whereas the rate of each user in uplink NOMA comes from decoding a single message. Hence, uplink RSMA is able to achieve a higher rate than uplink NOMA. Motivated by the above-mentioned benefits, this paper investigates the integration of uplink RSMA with cooperation in a multi-user scenario where two users are paired in such a way that the overall sum rate of the system is maximized. §.§ State of the Art The investigation of the performance of rate splitting (RS) started from an information-theoretic perspective, where RS was shown to achieve the optimal sum Degree of Freedom (DoF) <cit.>. This was followed by the performance evaluation of RS in multiple-input single-output (MISO) broadcast (BC) scenarios with imperfect Channel State Information at the Transmitter (CSIT) <cit.>. Following this, several attempts have been made to evaluate the performance of RSMA in different networks and to integrate RSMA with different advanced technologies. For example, the authors in <cit.> studied the performance of RSMA in different scenarios of overloaded and underloaded networks. It also showed the performance gain that RSMA can achieve over conventional NOMA, SDMA, and OMA, and it is one of the early investigations on RSMA in the downlink network. The authors in <cit.> studied the sum-rate maximization problem for wireless networks in downlink RSMA. The authors in <cit.> investigated the performance of RSMA under imperfect CSIT due to user mobility and latency/delay in the network. Besides the conventional MISO BC framework, the advantages of RS have been further explored in satellite communications <cit.>, Cloud Radio Access Network (C-RAN) <cit.>, massive multiple-input multiple-output (MIMO) <cit.>, reconfigurable intelligent surface (RIS) <cit.>, radar communications <cit.>, multi-cell coordinated multipoint joint transmission (CoMP) <cit.>, and simultaneous wireless information and power transfer (SWIPT) <cit.>. Recently, several studies have been carried out to show the performance gain of C-RSMA in both full-duplex (FD) and half-duplex (HD) modes. The authors in <cit.> studied the performance of C-RSMA in FD mode for the two-user case. The authors in <cit.> investigated the C-RSMA framework for K users, where each CCU relays the common stream to all CEUs in HD mode.
In <cit.>, an FD C-RSMA scheme in a downlink two-group multicast system was studied where CCUs can harvest energy from BS using SWIPT, and then utilize this harvested energy to relay the common stream to CEUs. It should be noted that all of the above-mentioned works studied the RSMA and C-RSMA schemes in the downlink framework. On the other hand, several works have studied the performance of RSMA in uplink. The authors in <cit.> investigated the sum rate maximization problem of RSMA in an uplink network. The authors in <cit.> analyzed the performances of different network slicing schemes in uplink based on RSMA. The authors in <cit.> investigated the sum throughput and error probability of uplink RSMA with fixed block length (FBL) coding in a two-user system. Meanwhile, authors in <cit.> studied the performance of an uplink RSMA network with two sources, in terms of outage probability and throughput. The authors in <cit.> investigated the outage performance of uplink RSMA transmission with randomly deployed users, taking both users scheduling schemes and power allocation strategies into consideration. An RIS assisted uplink RSMA system <cit.> is investigated for dead-zone users where the direct link between the users and the BS is unavailable. The authors in <cit.> investigated the user fairness of downlink multi-antenna RSMA in short-packet communications with/without cooperative (user-relaying) transmission. In addition, RSMA in uplink has been investigated in satellite communications <cit.>, RIS-assisted wireless networks <cit.>, <cit.>, <cit.>, massive MIMO <cit.>, integrated sensing and communications <cit.>, unmanned aerial vehicle (UAV)-assisted networks <cit.> and so on. However, the uplink RSMA in cooperative communications, i.e., C-RSMA, is still in its development stage. Recently, the authors in <cit.> demonstrated the effectiveness of C-RSMA and cooperative NOMA in the uplink framework where both users cooperate with each other to relay each other signals to the BS. However, this work studied only a simple two-user case scenario, and a single antenna BS was considered. Hence, it lacks considerable challenges resulting from the multi-user and multi-antenna BS settings, such as multi-user interference and designing beamforming vectors at BS. Furthermore, it is worth mentioning that this study did not take into account any pairing methods, which is vital for maximizing the advantages of cooperative communication. Motivated by this fact, to the best of our knowledge, this is the first paper that considers the C-RSMA framework in a multi-user scenario in uplink communication. In this work, we investigate the C-RSMA framework in a multi-user scenario by proposing a novel pairing policy, optimizing the power allocation of the UEs, and time slot duration allocation for communication and cooperation while minimizing inter-pair interference (IPI). §.§ Contributions To the best of our knowledge, the study of C-RSMA in a multi-user uplink network scenario has not been explored to date. To fill this research gap, we propose a semi-orthogonal user selection and a matching game (SUS-MG)-based sum rate maximization problem for uplink MISO C-RSMA framework. The main contributions of this paper are outlined as follows. * We formulate an optimization problem to maximize the sum rate of the uplink MISO C-RSMA framework by jointly optimizing user pairing and resource allocation. The formulated optimization problem results in a mixed-integer non-linear problem (MINLP). 
Hence, we solve the problem by invoking bi-level optimization. Specifically, we decompose the original problem into two sub-problems: the user pairing sub-problem and the resource allocation sub-problem. * In the user pairing sub-problem, we propose a semi-orthogonal user selection (SUS) <cit.> and matching game (MG)-based user pairing scheme <cit.> that can suppress IPI and create efficient pairs to maximize the system's performance. Specifically, utilizing the SUS algorithm, we determine the CCU in each pair by reducing IPI. Afterward, an MG-based strategy is used to determine the CEU in each pair considering the highest channel gains between all CCUs and corresponding CEU as a utility function. * In the resource allocation sub-problem, we employ a low-complexity successive convex approximation (SCA)-based algorithm to optimize the transmit power of each relaying UE and the allocation of time slots for communication with the BS and UE cooperation within a given pair. In this framework, the communication takes place in two transmission phases: the direct transmission (DT) phase and the cooperative transmission (CT) phase. During the DT phase, all CEUs broadcast their signals and they are received at the BS and CCUs. Meanwhile, during the CT phase, each CCU transmits the decoded signal of its paired CEU to the BS and also superimposes its own signal. In addition, we utilize maximum ratio combination (MRC) equalization to recover the signal at the BS by considering the IPI. * Through an extensive experiment, we evaluate the impact of splitting messages in CCUs and CEUs where we have demonstrated that splitting one user message in each pair is enough to achieve better performance than splitting both user messages in each pair. Through experiments, we have demonstrated the effects of the different decoding orders on the average sum rate in the uplink C-RSMA framework. From extensive simulations, we have found a best-performing decoding order for uplink MISO C-RSMA. Finally, our simulation results demonstrated that the proposed SUS-MG-SCA algorithm achieves higher performance over random pairing and other conventional MA schemes for different values of power budget constraints at both CCUs and CEUs, and QoS constraints at each UE. §.§ Paper Organization and Notations The rest of the paper is organized as follows. Section <ref> presents the system model. Section <ref> presents the decoding order and the achievable data rate analysis. Section <ref> discusses the formulated optimization problem and the solution roadmap. Meanwhile, Section <ref> and <ref> provide the details of the proposed solution approach. Finally, the simulation results and the conclusion are discussed in Sections <ref> and <ref>, respectively. A summary of key symbols is provided in Table <ref>. Matrices and vectors are denoted by bold-face lower-case and upper-case letters, respectively. For any complex-valued vector x, ||x|| refers to the norm of vector x, (.)^H represents Hermitian transpose, E{} is the expectation operator of a random variable, and ℛ{x} is the real part of the complex term x. § SYSTEM MODEL §.§ Network Model We consider an uplink transmission of a single-cell C-RSMA system consisting of one BS with N antennas, K single-antenna UEs where 𝒦=[1,2,3,…,K] as shown in Fig. 1. The BS serves K users in the same frequency-time resource block. 
Additionally, the RSMA technique is invoked as the MA scheme, which enables multiple users to share the same channel resources in order to enhance the system's spectral efficiency. Two types of UEs are considered for this model: CCUs and CEUs. Specifically, CCUs sustain a good channel condition with the BS, and the set of all CCUs is denoted by 𝒰=[1,2,…,U], where U=K/2. On the other hand, CEUs experience poor channel conditions with the BS, and the set of all CEUs is denoted as 𝒱=[1,2,…, V], where V=K/2. Note that the CCUs are capable of acting as HD relays and can forward the messages of the CEUs to the BS. Therefore, one CCU and one CEU should be paired in an efficient way so that the CCU is able to assist in improving the signal quality of its paired CEU. A pair of CCU and CEU is denoted as (u,v), ∀ u ∈𝒰, and ∀ v ∈𝒱. We denote the wireless channel link between the BS and each CCU-u as h_u = [h_u,1, h_u,2,…,h_u,N]^T ∈ℂ^N × 1, where h_u,n represents the channel response of the wireless link from the CCU-u to the n-th antenna element of the BS. Meanwhile, the wireless channel link between the BS and each CEU is denoted as h_v= [h_v,1, h_v,2,…,h_v,N]^T ∈ℂ^N × 1. Note that each channel coefficient h_k,n is represented as the product of a path loss factor, τ, and a small-scale fading coefficient, ξ. Particularly, each channel response between the n-th antenna element and UE-k can be represented as h_k,n= √(τ_k,n)ξ_k,n. Since the antenna elements are located in very close proximity to each other, without loss of generality, the path loss factor for the same UE is assumed to be equal. Specifically, τ_k,1=τ_k,2=τ_k,3=…≜τ_k,N. We assume that the small-scale fading factor ξ_k,n is independent and identically distributed such that E{|ξ_k,n|^2}=1,∀ k ∈𝒦, ∀ n ∈ N. In addition, in our model, we consider IPI in the signal transmission, which results from the interference coming from the members of all pairs other than the corresponding pair. §.§ Transmission model The whole communication takes place in two time slots. The first slot is referred to as the direct transmission (DT) phase, and the second slot is called the cooperative transmission (CT) phase. As illustrated in Fig. <ref>, the time slot allocation for the two phases may not be equal: the DT phase occupies a duration of δT, while the CT phase occupies the remaining (1-δ)T. A detailed description of the two phases is provided below. 1. Direct transmission (DT) phase: Here, CEUs broadcast their signals, which are received by the BS and the CCUs. Moreover, the signal of each CEU suffers from IPI at the BS. Particularly, the signal of each CEU is interfered with by the signals of all other CEUs in the system, as we assume that all CEUs transmit at the same time. After receiving the signal from its paired CEU, each CCU decodes the signal of its paired CEU. 2. Cooperative transmission (CT) phase: During this phase, we utilize the non-regenerative decode-and-forward (NDF) protocol. Specifically, in the NDF protocol, after the relay node receives the signal from the source node, it decodes the received signal; however, it re-encodes the signal with a codebook generated independently from that of the source node and transmits it in the second channel of the source node <cit.>. It is important to note that in the NDF protocol, a time frame of T seconds is divided into two time slots: δ T and (1-δ)T. Hence, utilizing the NDF protocol, each CCU re-encodes the decoded message with a codebook that is different from that of its paired CEU.
Afterward, each CCU superimposes its own signal and forwards the total signal to the BS. However, the signal of each CCU interferes with the signals of other CCUs located in different pairs resulting in IPI. §.§ Signal Model By utilizing the uplink RSMA principle, the messages of all CCUs are split into two sub-messages, and the set of sub-messages is denoted as ℬ=[1,2]. However, the messages of CEUs are kept without splitting. Afterward, all the sub-messages of CCUs and messages of CEUs are encoded independently, and hence, generate streams s_u,b, ∀ u ∈𝒰, ∀ b ∈ℬ and s_v, ∀ v ∈𝒱. For simplicity, we analyze the signal model of a single pair (u,v), which is provided below: 1. Signal transmission during DT phase: During the DT phase, the CEU-v broadcasts its signal which is received by the both BS and CCU-u. The signal received by the BS due to the transmissions of CEU-v, ∀ v ∈𝒱 at time slot δ T can be expressed as, y_BS^[1]=∑_v ∈𝒱(h_v√(P_v)s_v)+ n_v. Meanwhile, the signal that is received by CCU-u due to the transmission of a CEU-v at time slot δ T can be presented by, y_v→ u^[1]=h_v,u√(P_v)s_v+I_v' → u+n_v,u, where I_v' → u=∑_v' ∈𝒱, v' ≠ vh_v',u√(P_v')s_v'. 2. Signal transmission during CT phase: During the CT phase, which is denoted by time slot (1-δ)T, utilizing NDF protocol, the CCU-u re-encodes the decoded signal utilizing a different codebook than the original one. Afterward, CCU-u superimposes its own signal and forwards the superimposed signal to the BS. Hence, after the CT phase, the signal received by the BS can be given by, y_BS^[2]= ∑_u ∈𝒰h_u(∑_b ∈ℬ(√(P_u,b)s_u,b)+√(P_u,v̂)s_u,v̂) +n_u, where P_u,v̂ represents the transmission power of CCU-u to transmit the message of CEU-v that is received during the DT phase. §.§ Signal recovery at BS The BS receives signals from all K users at the end of the two transmission phases. In order to recover each signal at the BS, at first, we invoke the MRC equalization to separate the signals of different UEs. Afterward, the SIC is utilized to remove the decoded signal from the total received signal. Details about the recovery process are provided below: 1. MRC Equalization: In the first step of the recovery process, the MRC equalization is utilized in order to separate signals of different UEs. In this process, the received signal is multiplied by the conjugate transpose of the CSI. [We assume perfect channel state information (CSI) at the BS. Therefore, the BS perfectly knows every wireless link to any UE <cit.>.] For example, we multiply h_u^H which is associated with the CCU-u with the total received signal at the CT phase. After MRC operation, the restored message of CCU-u can be denoted by, ŝ_u,b=1/Nh_u^H(y_BS^[2]), ŝ_u,b= ||h_u||^2 /N(∑_b ∈ℬ(√(P_u,b)s_u,b)+√(P_u,v̂)s_u,v̂)+I_u',BS+h_u^H/Nn_u, where I_u',BS = ∑_u' ∈𝒰, u' ≠ u(h_u^Hh_u'/N∑_b ∈ℬ(√(P_u',b)s_u',b)+√(P_u',v̂)s_u',v̂). Note that the MRC receiver maximizes the user’s signal power without excessively boosting the noise. The MRC can achieve near-optimal performance with a massive number of antennas <cit.>. Moreover, the MRC attains much lower complexity in comparison to linear minimum mean square error (LMMSE) and zero-forcing (ZF) equalizers <cit.>, because it does not involve the calculation of the matrix inverse <cit.>. Hence, the MRC becomes the most efficient linear equalizer as long as sufficient degrees of freedom can be leveraged by a large number of antennas <cit.>. It should be noted that when the BS contains a large number of antennas, ||h_k||^2/N is close to τ_k,n, i.e. 
1/Nh_k^Hh_k a.s.→τ_k,n, where a.s.→ denotes the almost sure convergence <cit.>. This phenomenon is called channel hardening <cit.>. Channel hardening refers to the situation in which the randomness of the channel fading coefficients decreases due to the existence of large antenna arrays <cit.>. Hence, the received useful signal s_k,b is scaled by a real-valued coefficient P_k,bτ_k,b. Meanwhile, when the number of antennas is very large, N→∞, 1/Nh_k^Hh_k' a.s.→ 0. This phenomenon implies that the channel responses of different users tend to be quasi-orthogonal to each other when the number of BS antennas is large enough. Hence, the interference coming from other users is reduced <cit.>. However, in a real-life practical scenario, it is not always possible to have a very high (infinite) number of antennas, and it is not possible to completely diminish the interference coming from different users. Considering this, in our work, we have taken the effect of the IPI into consideration. 2. SIC Process and Decoding Order: After we separate the signal of a particular UE using MRC equalization, the sub-messages/messages of that UE are decoded based on their decoding order, such that the streams with a low decoding order are decoded first and the streams with a high decoding order are decoded later. This process is described in detail in the following section. § DECODING ORDER AND ACHIEVABLE RATE ANALYSIS §.§ Decoding Order The decoding order is an important factor in decoding the received signals during the CT phase. We assume that the decoding orders of the sub-messages/messages at the BS are denoted by the set π_BS = [π_u,b, π_u,v̂, π_v| ∀ u ∈𝒰, ∀ b ∈ℬ, ∀ v ∈𝒱], and the messages/sub-messages are decoded in ascending order of these entries. π_u,b denotes the decoding order of the sub-message s_u,b. Meanwhile, π_u,v̂ and π_v denote the decoding orders of the sub-message s_u,v̂ and the message s_v, respectively. Particularly, sub-message s_u,b will be decoded first if π_u,b < π_u',b', where {u' ≠ u, b' ≠ b}, by treating the remaining sub-messages/messages as interference. After the decoding of s_u,b is completed, it is removed from the total received signal using SIC, and then the next sub-message/message is decoded according to the adopted decoding order. §.§ Achievable Rate at CCU due to the Transmission of CEU Each CCU receives the signal from the CEUs during the DT phase. Hence, the achievable rate to decode the received message s_v from the CEU-v at CCU-u in a pair (u,v) can be denoted by, R_v→ u^[1]=δlog_2(1+|h_v,u|^2P_v/Î_v' → u+σ^2), where Î_v' → u=∑_v' ∈𝒱, v' ≠ v|h_v',u|^2P_v'. §.§ Achievable Rate at the BS The BS utilizes the decoding order set π_b to decode the sub-messages/messages successively after the end of the two transmission phases, specifically, at the end of the CT phase. However, all sub-messages of a particular user do not need to be decoded sequentially. Decoding of the sub-messages is independent of the user and depends on the adopted decoding order. Particularly, if the decoding order of stream s_u,1 is π_u,1, the decoding order of stream s_u',1 is π_u',1, and π_u,1 < π_u',1, then s_u,1 will be decoded before s_u',1, where u ≠ u'. The details of the impact of different decoding orders are provided in Section VI. Meanwhile, during the CT phase, a total of 𝒰_s=U × 3 sub-messages are received at the BS. After each decoding step, the decoded message is removed from the total received signal utilizing SIC.
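Before stating the formal rate expressions, a small numerical sketch of this MRC-plus-SIC recovery chain is given below. It is a simplified illustration with arbitrary path-loss values and stream powers and a single group of transmitting UEs (the pairing structure and the two phases are ignored); the computed quantities mirror the terms ||h_k||^4P_k,b/N^2, |h_k^Hh_k'|^2/N^2, and |h_k^Hn_k|^2/N^2 that appear in the expressions below.

```python
import numpy as np

rng = np.random.default_rng(0)
N, sigma2 = 64, 1.0                         # BS antennas and noise power (assumed)
tau = np.array([1.0, 0.66, 0.32])           # illustrative per-UE path-loss factors
P = np.array([[0.5, 0.3, 0.2],              # illustrative per-UE stream powers:
              [0.6, 0.4, 0.0],              # rows = UEs, columns = their streams
              [0.7, 0.3, 0.0]])

# Rayleigh small-scale fading with h_{k,n} = sqrt(tau_k) * xi_{k,n}
Xi = (rng.standard_normal((N, 3)) + 1j * rng.standard_normal((N, 3))) / np.sqrt(2)
H = np.sqrt(tau)[None, :] * Xi

# Channel hardening: (1/N) h_k^H h_k ~ tau_k and (1/N) h_k^H h_k' ~ 0
G = (H.conj().T @ H) / N
print(np.round(np.abs(G), 3))               # diagonal close to tau, off-diagonal small

# Post-MRC effective stream powers and residual inter-user terms
eff = (np.abs(np.diag(G)) ** 2)[:, None] * P                     # ||h_k||^4 P_k,b / N^2
ipi = np.array([sum(np.abs(G[k, j]) ** 2 * P[j].sum()
                    for j in range(3) if j != k) for k in range(3)])
noise = sigma2 * np.abs(np.diag(G)) / N                           # average |h_k^H n|^2 / N^2

# SIC over the streams of UE 0 in a given decoding order
order = [0, 1, 2]
remaining = eff[0].sum()
for b in order:
    remaining -= eff[0, b]                   # streams decoded later stay as interference
    sinr = eff[0, b] / (remaining + ipi[0] + noise[0])
    print(f"UE0 stream {b}: {np.log2(1 + sinr):.3f} bit/s/Hz")
```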
Utilizing Eqn. (<ref>), we can find the achievable rate to decode the stream s_u,1 at the BS, received during the CT phase from CCU-u, and it can be denoted by, R_u,1^[2]=(1-δ) log_2(1+||h_u||^4P_u,1/N^2/(𝟙_(π_u,2∈π_b)||h_u||^4P_u,2/N^2)+(𝟙_(π_u,v̂∈π_b)||h_u||^4P_u,v̂/N^2)+Î_u',BS+|h_u^Hn_u|^2/N^2), R_u,2^[2]=(1-δ) log_2(1+||h_u||^4P_u,2/N^2/(𝟙_(π_u,v̂∈π_b)||h_u||^4P_u,v̂/N^2)+Î_u',BS+|h_u^Hn_u|^2/N^2), where 𝟙_(·) denotes the indicator function, Î_u',BS =∑_u' ∈𝒰, u' ≠ u|h_u^Hh_u'|^2/N^2(∑_b ∈ℬP_u',b+P_u',v̂'), and π_u,1 < π_u,2, π_u,v̂. After successful decoding, and utilizing the SIC, s_u,b is removed from the total received signal. During the CT phase, the CCU also forwards the signal of its paired CEU to the BS. Hence, the achievable rate to decode the message of CEU-v due to the transmission of CCU-u can be denoted as, R_v^[2]=(1-δ) log_2(1+||h_u||^4P_u,v̂/N^2/Î_u',BS+|h_u^Hn_u|^2/N^2). Meanwhile, the BS also decodes the message s_v that is received during the DT phase from CEU-v. Hence, the achievable rate to decode the message s_v can be given by, R_v^[1]=δlog_2(1+||h_v||^4P_v/N^2/Î_v',BS+|h_v^Hn_v|^2/N^2), where Î_v',BS=∑_v' ∈𝒱, v' ≠ v|h_v^Hh_v'|^2/N^2P_v'. Therefore, we can calculate the total achievable rate for the CCU-u as follows, R_u=∑_b ∈ℬR_u,b^[2]. After the two transmission phases, the total achievable rate for the CEU-v to decode the message s_v can be given by, R_v^tot=R_v^[1]+R_v^[2]. However, the total achievable rate of the CEU-v after the two transmission phases at the BS should not exceed the achievable rate of the CEU-v at the CCU-u during the DT phase, which is enforced as follows, R_v=min(R_v^tot, R_v→ u^[1]). § PROBLEM FORMULATION AND SOLUTION ROADMAP §.§ Problem Formulation The main objective is to maximize the sum rate while meeting each UE's QoS requirement in terms of a minimum data rate. Therefore, the sum rate maximization problem with joint pairing, power allocation, and time slot allocation can be formulated as follows: 𝒫: max_Ψ, P, δ ∑_u ∈𝒰 ∑_v ∈𝒱Ψ_u,v (R_u+R_v), s.t. Ψ_u,v∈{0,1}, ∀u ∈𝒰, ∀v ∈𝒱, ∑_u ∈𝒰Ψ_u,v= 1, ∀v ∈𝒱, ∑_v ∈𝒱Ψ_u,v= 1, ∀u ∈𝒰, R_u ≥R_th,u, ∀u ∈𝒰, min(R_v^tot, R_v→u^[1]) ≥R_th,v, ∀v ∈𝒱, ∀u ∈𝒰, P_u,b, P_v, P_u,v̂ ≥0, ∀u ∈𝒰, ∀v ∈𝒱, ∀b ∈ℬ, ∑_b ∈ℬP_u,b+P_u,v̂≤P_u^max, ∀u ∈𝒰, P_v ≤P_v^max, ∀v ∈𝒱, 0≤δ≤1 , where P=[P_u,b, P_u,v̂,P_v| ∀ u ∈𝒰,∀ v ∈𝒱, ∀ b ∈ℬ] denotes the transmit powers of all UEs, and Ψ denotes the user pairing policy, which holds a binary value of 0 or 1. Specifically, Ψ_u,v=1 implies that CCU-u is paired with CEU-v. Meanwhile, Ψ_u,v=0 represents that CCU-u and CEU-v are not paired. Constraints (<ref>) and (<ref>) ensure that each UE from each group can be paired with only one UE from the other group. Constraints (<ref>) and (<ref>) ensure that each UE has an achievable rate greater than a minimum achievable rate in order to guarantee the QoS. Constraints (<ref>) and (<ref>) refer to the transmission power budgets of CCU-u and CEU-v. Finally, (<ref>) represents the time slot duration constraint for the NDF protocol. §.§ Solution Roadmap Due to its intractability, problem 𝒫 is very hard to solve directly. In order to tackle this issue, we adopt a bi-level optimization-based process that divides the original problem into two sub-problems: 𝒫_outer and 𝒫_inner.
In the first sub-problem P_outer, for given values of transmit powers at the CCUs and CEUs and the time slot duration, we optimize the user pairing policy by developing a low-complexity algorithm that considers semi-orthogonality among the CCUs and chooses the channel with the highest gain between a CCU and a CEU to create the best C-RSMA pairs. In the second sub-problem 𝒫_inner, for a given user pair, we optimize the transmit powers of CCUs and CEUs and the time slot duration by designing an SCA-based low-complexity algorithm in order to maximize the sum rate of the whole system. It should be noted that to optimize the time slot allocation, we use an exhaustive search approach. Particularly, we calculate the sum rate of the system for different values of δ, which ranges from 0 to 1. We then choose the value of δ of those results, which provides the highest sum rate for the system. Finally, the overall problem is solved in three steps: first, for given values of transmit power and time slot duration, we select a CCU for a pair utilizing the SUS algorithm, which will assist in piggybacking the signals of its paired CEU in the CT phase. Next, we select CEU for each pair using a low-complexity MG-based algorithm, considering maximum channel gain between CCU-CEU as a utility function of the matching game. Finally, power allocation of the UEs in a given pair is performed utilizing an SCA-based algorithm in an iterative manner. Specifically, it can be seen that when pairing is done, Ψ_u,v=1 and (P^*,δ^*) becomes the optimal solution of the power and time slot duration allocation. Meanwhile, when Ψ_u,v=0, then (P^*,δ^*) becomes zero. Thus, for given values of P^*, δ^*, we can write the outer optimization problem as follows: 𝒫_outer:   max_Ψ ∑_u ∈𝒰 ∑_v ∈𝒱Ψ_u,v (R_u(P^*, δ^*) + R_v(P^*, δ^*)) , s.t. (<ref>)-(<ref>). Meanwhile, for a given user pairing policy Ψ^*, the inner optimization problem can be as follows: 𝒫_inner: max_P,δ ∑_u ∈𝒰∑_v ∈𝒱Ψ_u,v^*(R_u(P, δ)+R_v(P, δ)), s.t. (<ref>)-(<ref>). In the following section, we will give the details of the solution approaches of the two sub-problems. § UES PAIRING: SEMI-ORTHOGONALITY AND TWO-SIDED ONE-TO-ONE MATCHING GAME-BASED APPROACH Our algorithm for user pairing operates through a two-stage process. Initially, we pick a CCU for each pair using the SUS algorithm. This SUS algorithm considers channel orthogonality among CCUs which assists in reducing IPI. Afterward, we choose a CEU for each pair using an MG-based algorithm that factors in the impact of the channel gains between all CCUs and the corresponding CEU. This channel gain effect serves as a utility function for our selection process and helps to choose CEUs which can maximize the sum rate. The whole process is given below in detail. 1.CCU selection: We employ a SUS-based algorithm to determine the CCU-u for a pair (u,v). The SUS algorithm is first proposed in <cit.> to design multiple-input multiple-output (MIMO) beamforming. The main idea behind the SUS algorithm is that it tries to choose users with superior channel states and aligned beam directions by using the degree of channel orthogonality among users. The SUS algorithm for CCU selection is provided in Algorithm 1. Our proposed SUS algorithm iteratively selects a subset of size 𝒰 with user channels {h_u| u ∈𝒰} that are semi-orthogonal to each other and with relatively large channel gains with the BS. These selected users constitute the group of CCUs. 
The iteration procedure continues until U number of users are selected or 𝒰_0 becomes empty. In Algorithm 1, 𝒰 represents the set for chosen CCUs, meanwhile, 𝒰_0 denotes the set of users that are not chosen as CCUs yet. From the algorithm's steps 3 through 9, for each user u ∈𝒰_0 the component of h_u is orthogonal to the subspace covered by [g_1, g_2,…,g_j-1]. Afterward, we select the best user with the maximum argument in step 10. It should be noted that in step 12, we specify a SUS factor θ which ensures that only the users with semi-orthogonal channels remain in the set 𝒰_0. The value of the SUS factor θ falls between 0 to 1. Therefore, in step 7, only a small portion of user channel h_u will be projected to the subspace spanned by [g_1, g_2,…,g_j-1]. Hence, the users chosen in this way become semi-orthogonal to each other which assists in reducing the IPI, and also relatively large channel gains help them to serve as CCUs. 2.CEU selection: In this subsection, we choose a CEU for each pair by utilizing an MG-based algorithm. We model our CCU-CEU pairing problem as a two-sided one-to-one matching game problem. In fact, the matching game theory is well adapted for a scenario in which two sets of players are paired off in order to produce outcomes that are advantageous to both parties. We set one set of users as proposers and the other sets of users as selectors. Here, we model CEUs as proposers and CCUs as selectors. Definition 1: Matching tuple: A one-to-one two-side matching Ψ is a mapping from all the members of 𝒰 into the 𝒱 satisfying the following conditions such that * Ψ(v) ∈𝒰, Ψ(u) ∈𝒱, * Ψ(u) = v ⇔Ψ(v) = u, ∀ u ∈𝒰, ∀ v ∈𝒱, * |Ψ(v)| = 1, |Ψ(u)| = 1, ∀ u ∈𝒰, ∀ v ∈𝒱. Condition (a) indicates that the matching partner of one set is a member of another set, (b) indicates that if u matches with v then v matches with u as well, and finally, (c) suggests that each CCU can match with only one CEU and vice versa. Definition 2: Preference utility: In a matching game, the design of preference utility assists in finding the best possible match, which can maximize the objective function. We design the utility function of our proposed matching game based on the channel gains between CEU-v and all CCUs in 𝒰. We denote the preference utility of any tuple of the CCU-CEU pair (u,v) by considering the channel gain between the CEU-v and all the CCUs in order to be considered to be a potential matched tuple. Hence, we can present the preference utility of a matching tuple (u,v) as follows: Υ(u,v)=arg max{|h_v,u|}, ∀ u ∈𝒰, ∀ v ∈𝒱. The main idea behind choosing such utility function comes from the objective of 𝒫, where each matching tuple constructs a CCU-CEU pair such that the sum-rate of the overall system is maximized. Since every CCU and CEU sustain distinctive channel gains with the BS and the channel gain between each CCU and CEU is unique, the utility function results in a unique utility value for every pair of CCU-CEU. In every matching theory, an important factor is the change of the matching pair over time. This is because there exists a competition among CEUs to be paired with an individual CCU. Specifically, if there exists a matching tuple CCU-CEU (u,v) and CCU-u receives a proposal from CEU-v' for pairing, CCU-u chooses CEU-v' over CEU-v if and only if the preference utility Υ(u,v') > Υ(u,v). In this situation, CCU-u rejects CEU-v and creates a tuple with CEU-v'. 
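Before formalizing the preference list and the stability notions below, the two-stage pairing just described can be summarized in the following sketch. It is a simplified illustration rather than the paper's Algorithms 1 and 2: the helper names are ours, the SUS factor θ=0.4 is an arbitrary choice, and gain[v][u] stands for the channel gain |h_v,u| used in the preference utility above.

```python
import numpy as np

def sus_select(H, U, theta=0.4):
    """Pick U semi-orthogonal columns of H (N x K) as CCUs -- a simplified
    sketch of the SUS idea, not the paper's Algorithm 1."""
    candidates, selected, basis = list(range(H.shape[1])), [], []
    while candidates and len(selected) < U:
        # component of each candidate orthogonal to the span of the chosen basis
        resid = {k: H[:, k] - sum((b.conj() @ H[:, k]) * b for b in basis)
                 for k in candidates}
        best = max(candidates, key=lambda k: np.linalg.norm(resid[k]))
        selected.append(best)
        basis.append(resid[best] / np.linalg.norm(resid[best]))
        # keep only users that remain semi-orthogonal to the newly chosen direction
        candidates = [k for k in candidates if k != best and
                      abs(basis[-1].conj() @ H[:, k]) / np.linalg.norm(H[:, k]) < theta]
    return selected

def match_pairs(ccus, ceus, gain):
    """One-to-one matching: CEUs propose in decreasing gain; a CCU keeps the
    proposer with the larger utility (deferred-acceptance style).
    Assumes equally sized sets, as in the paper (U = V = K/2)."""
    prefs = {v: sorted(ccus, key=lambda u: -gain[v][u]) for v in ceus}
    nxt, engaged, free = {v: 0 for v in ceus}, {}, list(ceus)
    while free:
        v = free.pop(0)
        u = prefs[v][nxt[v]]
        nxt[v] += 1
        if u not in engaged:                     # CCU u is unmatched: accept
            engaged[u] = v
        elif gain[v][u] > gain[engaged[u]][u]:   # (u, v) would block the current match
            free.append(engaged[u])
            engaged[u] = v
        else:                                    # rejected: v proposes to its next choice
            free.append(v)
    return {v: u for u, v in engaged.items()}
```

With H drawn as in the system model and gain[v][u] = |h_v,u|, sus_select returns the CCU indices and match_pairs returns the CCU assigned to each CEU.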
Definition 3: Preference List: Each participant can create his own descending-ordered preference profile by assessing the utilities of the various tuples. Using this preference profile, CEU can identify its preference from the set of CCUs. Let ℙ_v=[u_1,u_2,…,u_U] is a preference list of CEU-v, where u_1 is considered as the most preferred user to be paired with CEU-v. On the other hand, u_U represents the least preferred user to be paired with CEU-v. In our proposed scheme, each CEU-v prepares a preference list of its preferred CCU based on that channel gain between CEU-v and all CCUs. If CEU-v sustains the highest channel gain with CCU-u, then CCU-u is put into the first place of CEU-v's preference profile. Definition 4: Blocking pair: The pair (u,v) ∈ (𝒰×𝒱) is said to block in a matching Ψ if the utility of (u,v) is higher than all other possible pairings according to v's preference profile. For example, u is listed as a preferred CCU at preference profile ℙ_v of CEU-v. At the same time u is also listed as a preferred CCU at the preference profile ℙ_v' of CEU-v'. In addition, CCU-u prefers CEU-v over CEU-v' because it sustains the highest preference utility of all possible matching pairs. In this case, (u,v) blocks the matching of (u,v') and (u,v) should be matched together in all situations. Definition 5: Stable matching: A matching Ψ is defined as pairwise stable if it is not blocked by any blocking pair. In our proposed scheme, we aim to seek stable matching, and the concept of stable matching is as follows: each CEU tries to match with its most preferable CCU and the CCU tends to choose the CEU that can maximize the total utility. First, each CEU proposes itself to its most favorable CCUs. Each CCU then receives offers from the CEU. Based on CCU's own preference, it can accept or reject the offer. It should be noteworthy that each CCU may receive offers from multiple CEUs. In order to remove the conflict between the users, we invoke the concept of a blocking pair. Specifically, if a blocking pair exists in the system, CCU will stay paired with that particular CEU regardless of whatever proposals are coming over. However, if there does not exist any blocking pair, then matched tuple (u,v) is considered stable. The detailed SUS-MG-based pairing policy is provided in Algorithm 2. § POWER AND TIME SLOT DURATION ALLOCATION FOR EACH C-RSMA PAIR In this section, our objective is to maximize the sum-rate at all UEs for given values of time slot duration δ. We solve the power allocation problem utilizing the SCA-based approach. 𝒫_inner is a non-convex optimization problem due to the existence of objective at (<ref>) and constraints (<ref>), and (<ref>). To handle the non-convexity in the objective, we introduce an auxiliary variable Λ=[Λ_u|∀ u ∈𝒰, Λ_v | ∀ v ∈𝒱] and the objective and constraints (<ref>), and (<ref>) can be written as follows, 𝒫̂_inner: max_Λ,P,δ∑_u ∈𝒰∑_v ∈𝒱(Λ_u+Λ_v), ∑_b ∈ℬ(1-δ)log_2 (1+α_u,b) ≥Λ_u, Λ_u≥ R_th,u, δlog_2 (1+β_v)+ (1-δ)log_2 (1+β_u) ≥Λ_v, δlog_2(1+ω_v) ≥Λ_v, Λ_v ≥ R_th,v, ||h_u||^4P_u,b/N^2/||h_u||^4P_u,b'/N^2+(||h_u||^4P_u,v̂/N^2)+Î_u',BS+|h_u^Hn_u|^2/N^2≥α_u,b, ||h_v||^4P_v/N^2/Î_v',BS+|h_v^Hn_v|^2/N^2≥β_v, ||h_u||^4P_u,v̂/N^2/Î_u',BS+|h_u^Hn_u|^2/N^2≥β_u, |h_v,u|^2P_v/Î_v' → u+σ^2≥ω_v where α=[α_u,b| ∀ u ∈𝒰, ∀ b ∈ℬ], β=[β_u, β_v| ∀ u ∈𝒰, ∀ v ∈𝒱, ∀ b ∈ℬ] and ω=[ω_v| ∀ v ∈𝒱]. However, (<ref>) is still non-convex. Hence, we introduce slack variables γ=[γ_u,b| ∀ u∈𝒰, ∀ b ∈ℬ] and replace the interference in (<ref>) with this slack variable. 
We can rewrite (<ref>) as follows, 1/N^2(||h_u||^4P_u,b/γ_u,b) ≥α_u,b, γ_u,b≥||h_u||^4P_u,b'/N^2+(||h_u||^4P_u,v̂/N^2)+Î_u',BS+|h_u^Hn_u|^2/N^2. Since (<ref>) is still non-convex, we invoke the arithmetic-geometric mean (AGM) inequality <cit.>: for any non-negative variables x, y, and z with xy ≤ z, we have 2xy ≤ (ax)^2+(y/a)^2 ≤ 2z, where the first inequality holds with equality if and only if a=√(y/x). Based on this, (<ref>) is equivalent to 1/N^2||h_u||^4P_u,b≥α_u,bγ_u,b, which is approximated by the convex constraint 2/N^2||h_u||^4P_u,b≥ (α_u,bϕ_u,b)^2+(ϕ_u,b/γ_u,b)^2, where ϕ_u,b=√(γ_u,b/α_u,b) is updated iteratively. (<ref>), (<ref>), and (<ref>) can be handled similarly to (<ref>) by the AGM inequality. Based on the above discussions and approximations, we can rewrite 𝒫̂_inner as follows, 𝒫̂_inner: max_Λ,P,δ, α, γ, β, η, μ∑_u ∈𝒰∑_v ∈𝒱(Λ_u+Λ_v), s.t. c_1: 2/N^2||h_v||^4P_v≥ (β_vϕ_v)^2+(ϕ_v/μ_v)^2, c_2: μ_v ≥Î_v',BS+|h_v^Hn_v|^2/N^2, c_3: 2/N^2||h_u||^4P_u,v̂≥ (β_uϕ_u)^2+(ϕ_u/μ_u)^2, c_4: μ_u ≥Î_u',BS+|h_u^Hn_u|^2/N^2, c_5: 2|h_v,u|^2P_v≥ (ω_vϕ_v)^2+(ϕ_v/η_vu)^2, c_6: η_vu≥Î_v'→ u+σ^2, (<ref>)-(<ref>), (<ref>), (<ref>), where μ=[μ_v, μ_u|∀ v ∈𝒱, ∀ u ∈𝒰], ζ=[ζ_v| ∀ v ∈𝒱], η=[η_vu| ∀ v ∈𝒱]. The problem 𝒫̂_inner is a convex second-order cone program (SOCP), which can be efficiently addressed using convex optimization solvers such as YALMIP or CVX. Based on the above analysis, the proposed SCA-based algorithm for 𝒫̂_inner is provided in Alg. 3. The overall algorithm for the proposed system is provided in Alg. 4. §.§ Computational complexity analysis In order to measure the computational complexity of Algorithm 4, we need to analyze the complexity of 𝒫_outer and 𝒫_inner. 𝒫_outer is solved using the SUS-MG algorithm; hence, its complexity depends on the complexities of SUS and MG individually. Meanwhile, 𝒫_inner is solved utilizing the SCA-based method in every step of the exhaustive search to find an optimized δ. Hence, the complexity of 𝒫_inner depends on the step size of the exhaustive search and on the proposed SCA-based approach. The SUS algorithm iterates over all K user channels and chooses, in an iterative manner, the channel orthogonal to the sub-space spanned by the already selected user channels. Hence, the computational complexity of the SUS algorithm is 𝒪(K^2). However, the orthogonality threshold θ accelerates the convergence by reducing the search space, which results in a much lower complexity in practice. The complexity of the proposed matching-based algorithm depends on the CEUs' preference profile creation and on the CEU-CCU proposing and selecting process. Specifically, each CEU proposes itself to a CCU based on its preference profile, and the CCU can accept or reject the proposal based on the preference utility. The sorting for creating preference profiles is based on quick-sort, whose complexity is 𝒪(n log n). For the proposing-selecting process, there are no more than U CCUs, and each CEU-v can perform the proposing process with up to U CCUs. Therefore, the maximum number of proposing-selecting operations is VU. Let N_it represent the total number of iterations required until no blocking pair exists. Hence, the total complexity of the SUS-MG algorithm can be calculated as 𝒪(K^2+N_itVU). It is important to highlight that we perform an exhaustive search within the range of values between 0 and 1, with a step size of 0.1. During each step of this exhaustive search, we utilize Algorithm 3 to find a solution.
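To make the interplay between the outer search over δ and the inner per-pair solve concrete, the sketch below mirrors this structure under assumptions of ours: IPI is ignored, the powers are fixed rather than optimized, and the call that would solve the SOCP of Algorithm 3 is replaced by a closed-form rate evaluation, so the snippet only illustrates the search loop, not the actual SCA.

```python
import numpy as np

def pair_rate(delta, g_v, g_u, g_vu, P_v, P_u, P_uhat, sigma2=1.0):
    """Two-phase rate of one (CCU, CEU) pair with fixed powers and no IPI --
    a stand-in for the inner SOCP, used only to exercise the outer search.
    g_v, g_u: effective CEU/CCU gains to the BS; g_vu: CEU-to-CCU gain."""
    C = lambda x: np.log2(1.0 + x)
    r_v_relay = delta * C(g_vu * P_v / sigma2)                 # CEU decoded at its CCU (DT)
    r_v_bs = (delta * C(g_v * P_v / sigma2)                    # CEU heard directly at the BS (DT)
              + (1 - delta) * C(g_u * P_uhat / (g_u * P_u + sigma2)))  # relayed copy (CT)
    r_u = (1 - delta) * C(g_u * P_u / sigma2)                  # CCU's own stream after SIC (CT)
    return r_u + min(r_v_bs, r_v_relay)                        # relay constraint R_v = min(.)

def exhaustive_delta_search(pairs, step=0.1):
    """Outer one-dimensional search over the time-slot split delta (step 0.1);
    the paper would run Algorithm 3 at each step instead of pair_rate."""
    best_delta, best_rate = None, -np.inf
    for delta in np.arange(step, 1.0, step):                   # skip the degenerate endpoints
        total = sum(pair_rate(delta, *p) for p in pairs)
        if total > best_rate:
            best_delta, best_rate = delta, total
    return best_delta, best_rate

# Two pairs with illustrative gains and powers (linear scale, not from the paper)
pairs = [(0.05, 1.0, 0.8, 0.10, 0.15, 0.05),
         (0.03, 0.7, 0.6, 0.10, 0.12, 0.04)]
print(exhaustive_delta_search(pairs))
```

In the actual algorithm, each evaluation at a given δ would instead iterate the SCA surrogates of 𝒫̂_inner until the sum rate stabilizes and would return the optimized powers as well.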
Our SCA-based algorithm is an SOCP with a complexity of 𝒪(S_1^2S_2), where S_1 = (9 + N)K is the total number of variables and S_2 = 14K is the total number of constraints. Thus, the total complexity of Algorithm 3 is 𝒪(JN^2K^3.5log_2(1/ϵ)), where J represents the total number of steps of the exhaustive search. Hence, the total complexity of the overall algorithm is 𝒪(K^2+N_itVU+JN^2K^3.5log_2(1/ϵ)). § SIMULATION RESULTS & DISCUSSIONS In this section, extensive simulations are carried out to evaluate the performance of the proposed uplink C-RSMA MISO system. The simulation parameters are summarized in Table II. The channel model includes small-scale fading and path loss. Particularly, the small-scale fading follows the Rayleigh distribution with unit variance. Unless otherwise specified, we assume that the channel gains h_u, h_v, h_v,u follow exponential distributions with parameters λ_u, λ_v, λ_v,u. The values of λ_u, λ_v, λ_v,u are 15 dB, 7 dB, and 12 dB, respectively. Moreover, we assume that the path loss factor is calculated in terms of the channel disparities between the BS and the UEs, such that τ=0.1 represents a high channel disparity and τ=1 represents a low channel disparity <cit.>. To calculate the channel disparities of the UEs, we decrease the channel disparity uniformly from 1 with step size 1/K. For example, when K=6, τ_1=1, τ_2 = 0.83, τ_3 = 0.66, τ_4 = 0.49, τ_5 = 0.32, and τ_6 = 0.15. For the sake of comparison, we compare the following strategies with our proposed system. * C-RSMA Random: In this approach, both the CCU and CEU selection processes are random. The cooperation is performed between CCU and CEU in NDF HD mode. It should be noted that the power allocation is performed using the proposed SCA approach. * C-NOMA fixed δ=0.5 SUS-MG: In this approach, the CCU and CEU are selected using the proposed approach, and cooperation is performed between CCU and CEU in decode-and-forward (DF) HD mode. The power allocation is performed using the proposed SCA approach. However, in this approach, we adopted C-NOMA instead of C-RSMA as the MA scheme <cit.>. * RSMA SUS-MG: In this approach, we utilize our proposed SUS-MG strategy to create CCU-CEU pairs, and power allocation to the UEs is performed using SCA. However, no cooperation takes place between the UEs, and we adopt the general RSMA scheme <cit.>. * NOMA SUS-MG: In this approach, a general uplink NOMA strategy without cooperation is adopted, where we utilize our proposed SUS-MG and SCA-based scheme to create CCU-CEU pairs and allocate power to the UEs <cit.>. §.§ Convergence of proposed scheme Fig. <ref> shows the convergence behavior of our proposed algorithm with the following system parameters: number of UEs K=6, R_th,1 = 0.5 bps/Hz, R_th,2 = 0.1 bps/Hz, number of antennas at the BS N=8, a power budget of 23 dBm for the CCUs, and a power budget of 20 dBm for the CEUs. We plot the sum rate (our objective) versus the number of iterations required for the proposed algorithm to converge. It can be observed that the proposed uplink HD C-RSMA algorithm converges within around 4–5 iterations. §.§ Impact of message splitting on UEs We investigate the impact of message splitting at the CCUs and CEUs to assess the advantage of C-RSMA in the uplink. We performed an experiment in which we took four scenarios of C-RSMA and two scenarios of RSMA to choose the best possible case for splitting or not splitting the messages.
Details of the investigated splitting schemes are provided below: * Scheme 1, C-RSMA 2K-1 split: In this scheme, we split all CCU messages into two sub-messages and all CEU messages into two sub-messages except one CEU message. Particularly, one CEU's message is kept without splitting. * Scheme 2, C-RSMA 2K split: In this scheme, we split all CCU and CEU messages into two sub-messages. * Scheme 3, C-RSMA no split on CEUs: In this scheme, all CCU messages are split into two sub-messages. Meanwhile, all CEU messages are kept without splitting. * Scheme 4, No split on CCUs and CEUs (C-NOMA NDF): In this scheme, we do not split the messages of the CCUs and CEUs at all. This scheme is equivalent to C-NOMA with the NDF protocol. * Scheme 5, RSMA 2K split: In this scheme, we consider a general RSMA scheme with no cooperation. All CCU and CEU messages are split into two sub-messages. * Scheme 6, RSMA 2K-1 split: In this scheme, we consider a general RSMA scheme with no cooperation. All CCU messages are split into two sub-messages. Meanwhile, all CEU messages except that of one CEU are split into two sub-messages. For Fig. <ref>, in terms of the decoding order at the BS, we followed decoding order 3 (details of the decoding orders are presented in the following subsection). Fig. <ref> shows the performance comparison among the four above-mentioned C-RSMA schemes and the two RSMA schemes while varying the power budget of the CEUs. It can be seen that when the power budget of the CEUs is low, Scheme 1 and Scheme 2 achieve a lower sum rate than Scheme 3 and Scheme 4. This is because, when the power budget of the CEUs is low, Scheme 1 and Scheme 2 do not benefit from splitting user messages into multiple parts by leveraging the flexible interference management process during decoding. More particularly, when the power budget of the CEUs is low, the sub-messages fail to transmit any useful information by overcoming the interference in the DT phase. However, when the power budget of the CEUs becomes high, splitting user messages gives more benefits during the DT phase, as it can overcome intra- and inter-pair interference and can leverage the benefits of a flexible decoding process at the BS. On the other hand, Scheme 3 and Scheme 4 achieve almost similar performance when the power budget is low. However, when we increase the power budget of the CEUs, the rate of Scheme 4 tends to drop, and at 20 dBm, it drops below all other schemes. This is because, as no message is split in Scheme 4, the messages are decoded as a whole during the decoding process. Hence, it fails to leverage the benefits of flexible decoding as C-RSMA does. In contrast, all three other schemes can benefit from the flexible decoding process due to the splitting of messages. Particularly, when a user message is split into two sub-messages/streams, the sub-messages need not be decoded sequentially; hence, one sub-message is decoded with higher interference while the other can be decoded with lower interference, which results in a higher rate. Furthermore, one can see that the sum rates of general RSMA with 2K split (Scheme 5) and 2K-1 split (Scheme 6) overlap with each other. This confirms that 2K-1 splitting of the messages is sufficient to achieve the capacity region; hence, 2K splitting for RSMA is unnecessary. However, from our observations in the above experiment, this phenomenon does not hold for C-RSMA. Moving on to Fig.
<ref>, we adopted decoding order 1 (details of decoding order 1 is given in the following sub-section) to evaluate the performance of four C-RSMA schemes. It can be seen from the figure that three C-RSMA schemes achieve almost similar performance due to the flexible decoding process of RSMA. However, Scheme 4 achieves lower performance than all other schemes. This is because decoding order 1 adopts a decoding process where CEUs messages are decoded first. Hence, the decoding of CEU messages suffers from higher interference from the CCUs, resulting in a lower sum rate. On the other hand, even though CEU messages are not split in Scheme 3, the split of CCU messages helps Scheme 3 achieve better performance utilizing decoding order 1. From our above experiments and observations, we can conclude that in uplink C-RSMA, it is better to split only one user message in each pair and keep other user messages without splitting to get better benefit from the splitting and flexible interference management process. §.§ Impact of decoding orders in uplink C-RSMA We heuristically investigate the decoding order of the sub-messages/messages at the BS for the uplink C-RSMA framework. We investigate three decoding orders of sub-messages in order to choose the decoding order of sub-messages that provides the higher sum rate. In the following description, we denote a pair as uv_l and their index number as [1,2,…, L], L is the total number of pairs. Details of the investigated three decoding orders are provided below: * Decoding order 1: The decoding order followed at BS to decode the sub-messages/messages of two phases for K=6 users are given as: π_BS=[π_u,v̂ (uv_1), π_v (uv_1) <π_u,v̂ (uv_2), π_v (uv_2) < π_u,v̂(uv_3),π_v (uv_3) < π_u,1(uv_1)<π_u,1(uv_2)<π_u,1(uv_3)<π_u,2(uv_1) <π_u,2(uv_2)<π_u,2(uv_3)]. The above decoding order suggests that at BS, the message of CEU of pair uv_1, is decoded first. Since s_u,v̂ and s_v correspond to the same message, they will be decoded together. Then, it is removed from the total received signal using SIC. In this way, we decode all the messages of all CEUs of all pairs. Then, the original sub-messages of all CCUs of the CT phase are decoded. It should be noted that each time one sub-message/message is decoded, it is removed utilizing SIC from the total received signal. Hence, the next sub-message/message to be decoded encounters less interference. * Decoding order 2: For this decoding order, we followed the following decoding order at BS: π_BS=[π_u,1 (uv_1) < π_u,2 (uv_1) < π_u,v̂ (uv_1), π_v (uv_1)<π_u,1 (uv_2) < π_u,2 (uv_2) < π_u,v̂ (uv_2), π_v (uv_2) < π_u,1 (uv_3) < π_u,2 (uv_3) < π_u,v̂ (uv_3),π_v (uv_3)]. Specifically, in this decoding order, we decode the sub-messages pairwise sequentially. First, we decode all sub-messages of CCU-u of pair uv_1. Then, we decode the message of CEU-v of pair uv_1. Then, we decode the sub-messages of the next pair, and so on. Each time we decode one sub-message, it is removed using SIC. * Decoding order 3: For this decoding order, we adopted the following decoding order: π_BS = [π_u,1 (uv_1) < π_u,1 (uv_2) < π_u,1 (uv_3) < π_u,2 (uv_1)< π_u,2 (uv_2) < π_u,2 (uv_3) < π_u,v̂ (uv_1), π_v (uv_1) < π_u,v̂ (uv_2), π_v (uv_2)< π_u,v̂ (uv_3),π_v (uv_3) ]. Each time we decode one sub-message, it is removed utilizing SIC. Fig. <ref> depicts the impact of decoding order versus the power budget of CEUs. It can be seen from the figure that decoding order 1 and decoding order 3 achieves better performance than decoding order 2. 
It is because when we decode the sub-messages/messages utilizing decoding orders 1 and 3, uplink C-RSMA can leverage the benefits of flexible interference management while decoding at the BS. On the other hand, when we decode the sub-messages of a particular user sequentially, it cannot leverage the benefits of uplink RSMA, resulting in a less achievable sum rate. Similar to Fig.<ref>, it can be seen from Fig.<ref> that as we increase the power budget of CCUs, the average sum rate increases, and decoding orders 1 and 3 achieve higher sum rates. With the above observations, we can conclude that it is better not to decode the sub-messages sequentially to better benefit from the C-RSMA-based approaches. Based on the above discussion, in our other simulation results, we adopted decoding order 3 as our decoding order to evaluate the impact of different parameters. §.§ Impact of varying the transmit power of CEUs Fig. <ref> represents the average sum rate of the proposed approach and the baseline schemes as we vary the power of CEU in each pair. One can see from this figure that the average sum rate of the network increases as we increase the power budget of CEU. In the beginning, when the power budget of CEUs is around 10 dBm, the increase of the sum rate remains modest. When the power budget starts increasing more than 15 dBm, we can notice a significant jump in the sum rate. This is because as we increase the power budget of the CEUs, during the DT phase, the CEUs can transmit signals with more power, resulting in an increased achievable rate. Particularly when the power budget of the CEUs is high enough, it can overcome the bad channel condition with the BS, and CCUs can also achieve an improved signal quality. Our proposed scheme achieves better performance in terms of sum rate among all other schemes until 18 dBm. This is due to the RSMA-based frameworks having the freedom to achieve better signal quality as they can play with interference levels. Meanwhile, C-RSMA with random achieves lower gain due to the random pairing schemes. On the other hand, non-cooperative RSMA achieves lower gains when the power budget of CEUs is low, and when the power budget of CEUs becomes high, it outperforms the C-RSMA scheme. This is because when the power budget of CEUs is low, they cannot overcome the poor channel condition of CEUs. However, when CEUs have enough power budget, they can overcome the bad effects of poor channel conditions, and even without cooperation, they can achieve higher gains. §.§ Impact of varying the transmit power of CCUs Fig. <ref> presents the average sum rate achieved by the proposed scheme with other compared schemes versus the power budget at the CCUs in each pair. It can be seen from Fig. <ref> that as we increase the transmit power from 10 dBm to 23 dBm, the average sum rate of all the strategies starts to increase. This is because as we increase the power budget of CCUs, the CCUs can help to transmit with more power during the CT phase, which helps to boost the average sum rate of both CCUs and CEUs at the BS. Meanwhile, the sum-rate of non-cooperative techniques remains modest as the power budget of CEUs is not high and hence, non-cooperative techniques cannot overcome the poor channel condition between CEUs and BS. §.§ Impact of varying the rate threshold of CEU Fig. <ref> demonstrates the average sum rate over the rate threshold of the CEUs for our proposed and all other schemes. 
As we increase the rate threshold of the CEUs, the sum rate starts to decrease for all the strategies. This is because, as the rate threshold of the CEUs increases, the available power budget of the CEUs becomes insufficient to meet the higher data rate requirements while overcoming the effects of their poor channel conditions. However, it can be seen that the cooperative schemes achieve higher rates than the non-cooperative ones. More particularly, when the CEU rate threshold exceeds 0.4 bps/Hz, the non-cooperative schemes fail to achieve any sum rate, and the solution becomes infeasible. Meanwhile, the cooperative schemes achieve better performance even at higher rate thresholds. In all cases, our proposed C-RSMA scheme with pairing achieves the best performance under both low and high rate requirements. § CONCLUSION In this paper, the problem of sum-rate maximization for uplink C-RSMA in a multi-user scenario was investigated by jointly optimizing user pairing and power allocation at the UEs, subject to the constraints of the transmit power at the UEs and the required QoS in terms of the minimum achievable data rate. Due to the non-convexity of the joint optimization problem, we adopted a bi-level optimization, which decouples the problem into two sub-problems. The first sub-problem is the user pairing problem, where a CCU and a CEU are paired by adopting the SUS-MG algorithm. Particularly, each CCU is selected utilizing the SUS algorithm, which ensures semi-orthogonality among the CCUs in order to reduce the interference as much as possible. Afterward, a CEU is paired with a CCU using an MG-based algorithm, where the channel gains between the users are considered as the preference utility. In the second sub-problem, the power allocation is performed per pair by invoking an SCA-based low-complexity algorithm. Our simulation results demonstrated that our proposed approach achieved the best average sum rate compared to other conventional schemes. 9349624 W. Jiang, B. Han, M. A. Habibi, and H. D. Schotten, “The road towards 6g: A comprehensive survey,” IEEE Open Journal of the Communications Society, vol. 2, pp. 334–366, 2021. 8869705 W. Saad, M. Bennis, and M. Chen, “A vision of 6g wireless systems: Applications, trends, technologies, and open research problems,” IEEE Network, vol. 34, no. 3, pp. 134–142, 2020. 9831440 Y. Mao, O. Dizdar, B. Clerckx, R. Schober, P. Popovski, and H. V. Poor, “Rate-splitting multiple access: Fundamentals, survey, and future research trends,” IEEE Communications Surveys & Tutorials, vol. 24, no. 4, pp. 2073–2126, 2022. 9771468 S. Khisa, M. Almekhlafi, M. Elhattab, and C. Assi, “Full duplex cooperative rate splitting multiple access for a miso broadcast channel with two users,” IEEE Communications Letters, vol. 26, no. 8, pp. 1913–1917, 2022. 9852986 O. Abbasi and H. Yanikomeroglu, “Transmission scheme, detection and power allocation for uplink user cooperation with noma and rsma,” IEEE Transactions on Wireless Communications, vol. 22, no. 1, pp. 471–485, 2023. 9627180 T. Li, H. Zhang, X. Zhou, and D. Yuan, “Full-duplex cooperative rate-splitting for multigroup multicast with swipt,” IEEE Transactions on Wireless Communications, vol. 21, no. 6, pp. 4379–4393, 2022. 8846761 J. Zhang, B. Clerckx, J. Ge, and Y. Mao, “Cooperative rate splitting for miso broadcast channel with user relaying, and performance benefits over cooperative noma,” IEEE Signal Processing Letters, vol. 26, no. 11, pp. 1678–1682, 2019. 9123680 Y. Mao, B. Clerckx, J.
http://arxiv.org/abs/2409.02173v1
20240903180001
Primordial Black Hole Hot Spots and Out-of-Equilibrium Dynamics
[ "Jacob Gunn", "Lucien Heurtier", "Yuber F. Perez-Gonzalez", "Jessica Turner" ]
hep-ph
[ "hep-ph", "astro-ph.CO", "astro-ph.HE" ]
http://arxiv.org/abs/2409.03596v1
20240905145406
Constant Approximating Disjoint Paths on Acyclic Digraphs is W[1]-hard
[ "Michał Włodarczyk" ]
cs.DS
[ "cs.DS" ]
§ ABSTRACT In the Disjoint Paths problem, one is given a graph with a set of k vertex pairs (s_i,t_i) and the task is to connect each s_i to t_i with a path, so that the k paths are pairwise disjoint. In the optimization variant, Max Disjoint Paths, the goal is to maximize the number of vertex pairs to be connected. We study this problem on acyclic directed graphs, where Disjoint Paths is known to be W[1]-hard when parameterized by k. We show that in this setting Max Disjoint Paths is W[1]-hard to c-approximate for any constant c. To the best of our knowledge, this is the first non-trivial result regarding the parameterized approximation for Max Disjoint Paths with respect to the natural parameter k. Our proof is based on an elementary self-reduction that is guided by a certain combinatorial object constructed by the probabilistic method. § INTRODUCTION The Disjoint Paths problem has attracted a lot of attention both from the perspective of graph theory and applications <cit.>. Both decision variants, where one requires the paths to be either vertex-disjoint or edge-disjoint, are known to be NP-hard already on very simple graph classes <cit.>. This has motivated the study of Disjoint Paths through the lens of parameterized complexity. Here, the aim is to develop algorithms with a running time of the form f(k)· n^O(1), where f is some computable function of a parameter k and n is the input size. A problem admitting such an algorithm is called fixed-parameter tractable (FPT). In our setting, k is the number of vertex pairs to be connected. On undirected graphs, both variants of Disjoint Paths have been classified as FPT thanks to the famous Graph Minors project by Robertson and Seymour <cit.> (see <cit.> for later improvements). This was followed by a line of research devoted to designing faster FPT algorithms on planar graphs <cit.>. On directed graphs, there is a simple polynomial transformation between the vertex-disjoint and the edge-disjoint variants, so these two problems turn out to be equivalent. Here, the problem becomes significantly harder: it is already NP-hard for k=2 <cit.>. The situation is slightly better for acyclic digraphs (DAGs), where Disjoint Paths can be solved in time n^O(k) <cit.> but is W[1]-hard <cit.> (cf. <cit.>), hence unlikely to be FPT. In addition, no n^o(k)-time algorithm exists under the assumption of the Exponential Time Hypothesis (ETH) <cit.>. Very recently, it has been announced that Disjoint Paths is FPT on Eulerian digraphs <cit.>. It is also noteworthy that the vertex-disjoint and edge-disjoint variants are not equivalent on planar digraphs, as the aforementioned reduction does not preserve planarity. Indeed, here the vertex-disjoint version is FPT <cit.> whereas the edge-disjoint version is W[1]-hard <cit.>. 
In the optimization variant, called Max Disjoint Paths, we want to maximize the number of terminal pairs connected by disjoint paths. The approximation status of this problem has been studied on various graph classes <cit.>. On acyclic digraphs the best approximation factor is O(√(n)) <cit.> and this cannot be improved unless P=NP <cit.>. A different relaxation is to allow the algorithm to output a solution in which every vertex appears in at most c paths (or to conclude that there is no vertex-disjoint solution). Kawarabayashi, Kobayashi, and Kreutzer <cit.> used the directed half-integral grid theorem to design a polynomial-time algorithm for directed Disjoint Paths with congestion c=4 for every k. In other words, such a relaxed problem belongs to the class XP. Subsequently, the congestion factor has been improved to c=3 <cit.> and c=2 <cit.>. Hardness of FPT approximation. For problems that are hard from the perspective of both approximation and FPT algorithms, it is natural to exploit the combined power of both paradigms and consider FPT approximation algorithms. Some prominent examples are an FPT approximation scheme for k-Cut <cit.> and an FPT 2-approximation for Directed Odd Cycle Transversal <cit.> parameterized by the solution size k. However, several important problems proved to be resistant to FPT approximation as well. The first hardness results in this paradigm have been obtained under a relatively strong hypothesis, called Gap-ETH <cit.>. Subsequently, an O(1)-approximation for k-Clique was shown to be W[1]-hard <cit.> and later the hardness bar was raised to k^o(1) <cit.>. In turn, k-Dominating Set is W[1]-hard to f(k)-approximate for any function f <cit.> and W[2]-hard to O(1)-approximate <cit.>. More results are discussed in the survey <cit.>. Proving approximation hardness under Gap-ETH is easier compared to the assumption FPT ≠ W[1] because Gap-ETH already assumes hardness of a problem with a gap. Indeed, relying just on FPT ≠ W[1] requires the reduction to perform some kind of gap amplification, akin to the PCP theorem <cit.>. Very recently, the so-called Parameterized Inapproximability Hypothesis (PIH) has been proven to follow from ETH <cit.>. This means that ETH implies FPT approximation hardness of Max 2-CSP parameterized by the number of variables within some constant approximation factor c > 1, which has been previously used as a starting point for parameterized reductions <cit.>. It remains open whether PIH can be derived from the weaker assumption FPT ≠ W[1]. Lampis and Vasilakis <cit.> showed that undirected Max Vertex-Disjoint Paths admits an FPT approximation scheme when parameterized by treedepth but, assuming PIH, this is not possible under parameterization by pathwidth. See <cit.> for more results on approximation for Max Disjoint Paths under structural parameterizations. Bentert, Fomin, and Golovach <cit.> considered the Max Vertex-Disjoint Shortest Paths problem where we additionally require each path in a solution to be a shortest path between its endpoints. They ruled out an FPT approximation with factor k^o(1) for this problem assuming FPT ≠ W[1] and with factor o(k) assuming Gap-ETH. Our contribution. We extend the result by Slivkins <cit.> by showing that Max Disjoint Paths on acyclic digraphs does not admit an FPT algorithm that is a q-approximation, for any constant q. 
We formulate our hardness result as W[1]-hardness of the task of distinguishing between instances that are fully solvable from those in which less than a 1/q-fraction of the requests can be served at once. Since a q-approximation algorithm could be used to tell these two scenarios apart, the following result implies hardness of approximation. We refer to a pair (s_i,t_i) as a request that should be served by a path connecting s_i to t_i. theoremthmMain Let q ∈ be a constant. It is W[1]-hard to distinguish whether for a given instance of k-: * all the requests can be served simultaneously, or * no set of k/q requests can be served simultaneously. Our proof is elementary and does not rely on coding theory or communication complexity as some previous W[1]-hardness of approximation proofs <cit.>. Instead, we give a gap-amplifying self-reduction that is guided by a certain combinatorial object constructed via the probabilistic method. Techniques. A similar parameterized gap amplification technique has been previously applied to the k-Steiner Orientation problem: given a graph G with both directed and undirected edges, together with a set of vertex pairs (s_1,t_1), …, (s_k,t_k), we want to orient all the undirected edges in G to maximize the number of pairs (s_i,t_i) for which t_i is reachable from s_i. The problem is W[1]-hard and the gap amplification technique can be used to establish W[1]-hardness of constant approximation <cit.>. The idea is to create multiple copies of the original instance and connect them sequentially into many layers, in such a way that the fraction of satisfiable requests decreases as the number of layers grows. What distinguishes k-Steiner Orientation from our setting though is that therein we do not require the (s_i,t_i)-paths to be disjoint. So it is allowed to make multiple copies of each request (s_i,t_i) and connect the t_i-vertices to the s_i-vertices in the next layer in one-to-many fashion. Such a construction obviously cannot work for Dag Disjoint Paths. Instead, will we construct a combinatorial object yielding a scheme of connections between the copies of the original instance, with just one-to-one relation between the terminals from the consecutive layers. Imagine a following construction: given an instance I of k-we create 2k copies of I: I^1_1, …, I^1_k and I^2_1, …, I^2_k. Next, for each i ∈ [k] we choose some permutation π_i [k] → [k] and for each j ∈ [k] we connect the sink t_j in I^1_i to the source s_i in I^2_π_i(j). See <Ref> on page fig:construction. Then for each (i,j) ∈ [k]^2 we request a path from the source s_j in I^1_i to the sink t_i in I^2_π_i(j). Observe that if I is a yes-instance then we can still serve all the requests in the new instance. However, when I is a no-instance, then there is a family ℱ of 2k many k-tuples from [k]^2 so that each tuple represents k requests that cannot be served simultaneously. Each tuple corresponds to some k requests that have to be routed through a single copy of I, which is impossible when I is a no-instance. We can now iterate this argument. In the next step we repeat this construction k times (but possibly with different permutations), place such k instances next to each other, and create the third layer comprising now k^2 copies of I. Then for each i ∈ [k] we need a permutation π_i [k^2] → [k^2] describing the connections between the sinks from the second layer to the sources from the third layer. 
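To make the two-layer wiring concrete before the construction is iterated further, the following small sketch (a hypothetical encoding chosen for illustration, not code from the paper) builds the arcs and requests for 2k copies of I and lists the 2k forbidden k-tuples that arise when I is a no-instance.

import random

def two_layer_construction(k, seed=0):
    # Copy ("L1", i) plays the role of I^1_i, copy ("L2", j) of I^2_j, and
    # pi[i] plays the role of the permutation pi_i : [k] -> [k].
    rng = random.Random(seed)
    pi = {i: rng.sample(range(k), k) for i in range(k)}

    arcs, requests = [], []
    for i in range(k):                # layer-1 copy I^1_i ...
        for j in range(k):            # ... and its j-th terminal pair
            tgt = pi[i][j]            # sink t_j is wired to source s_i of I^2_{pi_i(j)}
            arcs.append((("L1", i, "t", j), ("L2", tgt, "s", i)))
            requests.append((("L1", i, "s", j), ("L2", tgt, "t", i)))

    # If I is a no-instance, every copy of I yields one forbidden k-tuple of
    # requests that cannot all be routed through it simultaneously.
    forbidden = []
    for i in range(k):                # the k requests starting in I^1_i
        forbidden.append([i * k + j for j in range(k)])
    for c in range(k):                # the k requests ending in I^2_c
        forbidden.append([idx for idx, (_, sink) in enumerate(requests) if sink[1] == c])
    return arcs, requests, forbidden

arcs, requests, forbidden = two_layer_construction(k=3)
print(len(requests), "requests,", len(forbidden), "forbidden k-tuples")   # 9 requests, 6 forbidden k-tuples

Since each pi_i is a bijection, every layer-2 copy receives exactly one request from each layer-1 copy, which is why every forbidden tuple has exactly k entries.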
Again, if I is a no-instance, we obtain a family ℱ of 3k^2 many k-tuples from [k]^3 corresponding to subsets of requests that cannot be served simultaneously. We want to show that after d = f(k) many iterations no subset A of 50% requests can be served. In other words, the family ℱ should always contain a tuple contained in A, certifying that A is not realizable. This will give a reduction from the exact version of to a version of with gap 1/2. The crux of the proof is to find a collection of permutations that will guarantee the desired property of ℱ. It is convenient to think about this construction as a game in which the first player chooses the permutations governing the connections between the layers (thus creating an instance of ) and the second player picks a subset A of 50% requests. The first player wins whenever the family ℱ of forbidden k-tuples includes a tuple contained in A. We need to show that the first player has a single winning strategy against every possible strategy of the second player. We will prove that a good strategy for the first player is to choose every permutation independently and uniformly at random. In fact, for a sufficiently large d and any fixed strategy A of the second player, the probability that A wins against a randomized strategy is smaller than 2^-k^d. Since the number of possible strategies for the second player is at most 2^k^d (because there are k^d requests), the union bound implies that the first player has a positive probability of choosing a strategy that guarantees a victory against every strategy of the second player. This translates to the existence of a family of permutations for which the gap amplification works. § PRELIMINARIES We follow the convention [n] = {1,2,…,n} and use the standard graph theoretic terminology from Diestel's book <cit.>. We begin by formalizing the problem. Max Disjoint PathsA digraph D, a set 𝒯 of k pairs (s_i,t_i) ∈ V(D)^2.kFind a largest collection 𝒫 of vertex-disjoint paths so that each path P∈𝒫 is an (s_i,t_i)-path for some (s_i,t_i) ∈𝒯. We refer to the pairs from 𝒯 as requests. A solution 𝒫 is said to serve request (s_i,t_i) if it contains an (s_i,t_i)-path. The condition of vertex-disjointedness implies that each request can be served by at most one path in 𝒫. A yes-instance is an instance admitting a solution serving all the k-requests. Otherwise we deal with a no-instance. (Max) is a variant of (Max) Disjoint Paths where the input digraph is assumed to be acyclic. Notation for trees. For a rooted tree T and v ∈ V(T) we denote by (v) the set of direct descendants of v. A vertex v in a rooted tree is a leaf if (v) = ∅. We refer to the set of leaves of T as L(T). The depth of a vertex v ∈ V(T) is defined as its distance from the root, measured by the number of edges. In particular, the depth of the root equals 0. The set of vertices of depth i in T is called the i-th layer of T. For v ∈ V(T) we write T^v to denote the subtree of T rooted at v. We can additionally specify an integer ℓ≥ 1 and write T^v,ℓ for the tree comprising the first ℓ layers of T^v. In particular, the tree T^v,1 contains only the vertex v. For k,d ∈ we denote by T_k,d the full k-ary rooted tree of depth d. We have |L(T_k,d)| = k^d. A subset A L(T_k,d) is called a q-subset for q∈ if |A| ≥|L(T_k,d)| / q. Fixed parameter tractability. We provide only the necessary definitions here; more information can be found in the book <cit.>. A parameterized problem can be formalized as a subset of Σ^* ×ℕ. 
We say that a problem is fixed parameter tractable (FPT) if it admits an algorithm solving an instance (I, k) in running time f(k)· |I|^(1), where f is some computable function. To argue that a parameterized problem is unlikely to be FPT, we employ FPT-reductions that run in time f(k)· |I|^(1) and transform an instance (I,k) into an equivalent one (I',k') where k' = g(k). A canonical parameterized problem that is believed to lie outside the class FPT is k-Clique. The problems that are FPT-reducible to k-Clique form the class W[1]. Negative association. We introduce the following concept necessary for our probabilistic argument. There are several definitions capturing negative dependence between random variables; intuitively it means that when one variable takes a high value then a second one is more likely to take a low value. Negative association formalizes this idea in a strong sense. A collection of random variables X_1, X_2, …, X_n ∈ is said to be negatively associated if for every pair of disjoint subsets A_1, A_2 [n] and every pair of increasing functions f_1 ^|A_1|→, f_2 ^|A_2|→ it holds that f_1(X_i | i ∈ A_1)· f_2(X_i | i ∈ A_2)≤f_1(X_i | i ∈ A_1)·f_2(X_i | i ∈ A_2). We make note of several important properties of negative association. Consider a collection of random variables X_1, X_2, …, X_n ∈ that is negatively associated. Then the following properties hold. * For every family of disjoint subsets A_1, …, A_k [n] and increasing functions f_1, …, f_k, f_i ^|A_i|→, the collection of random variables f_1(X_i | i ∈ A_1), f_2(X_i | i ∈ A_2), …, f_k(X_i | i ∈ A_k) is negatively associated. * If random variables Y_1, …, Y_n are negatively associated and independent from X_1, …, X_n then the collection X_1, …, X_n, Y_1, …, Y_n is negatively associated. * For every sequence (x_1, x_2, …, x_n) of real numbers we have X_i ≤ x_i | i ∈ [n]≤∏_i=1^n X_i ≤ x_i. Let n,k ∈. For i ∈ [k] let 𝒳^i = (X_1^i, …, X_n^i) be a sequence of real random variables that are negatively associated. Suppose that 𝒳^1, …, 𝒳^k are independent from each other. Then the random variables (∑_i=1^k X^i_1, …, ∑_i=1^k X^i_n) are negatively associated. By <Ref>(<ref>) we know that the union 𝒳^1 ∪…∪𝒳^k forms a collection of nk random variables that are negatively associated. We divide it into n disjoint subsets of the form ({X_j^1, …, X_j^k})_j=1^n and apply <Ref>(<ref>) for the increasing function f ^k →, f(x_1,…,x_k) = ∑_i=1^k x_i. Negative association occurs naturally in situations like random sampling without replacement. A scenario important for us is when an ordered sequence of numbers is being randomly permuted. Intuitively, observing a high value at some index removes this value from the pool and decreases the chances of seeing high values at the remaining indices. Consider a sequence (x_1, x_2, …, x_n) of real numbers. Let Π [n] → [n] be a random variable representing a permutation of the set [n] chosen uniformly at random. For i ∈ [n] we define a random variable X_i = x_Π^-1(i). Then the random variables X_1, X_2, …, X_n are negatively associated. § THE REDUCTION Our main objects of interest are collections of functions associated with the nodes of the full k-ary rooted tree. Such a function for a node v gives an ordering of leaves in the subtree of v. A scheme for T_k,d is a collection of functions, one for each node in T_k,d, such that the function f_v associated with v ∈ V(T_k,d) is a bijection from L(T^v_k,d) to [|L(T^v_k,d)|]. Let (k,d) denote the family of all schemes for T_k,d. 
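A scheme is easy to sample; the following minimal sketch (our own encoding, assuming nodes of T_{k,d} are represented as tuples of child indices with the root being the empty tuple) draws each bijection uniformly at random, which is exactly the randomized strategy analyzed later.

import random
from itertools import product

def leaves_below(node, k, d):
    # Leaves of T_{k,d} below `node`; leaves are the tuples of length d.
    return [node + rest for rest in product(range(k), repeat=d - len(node))]

def random_scheme(k, d, seed=0):
    # Sample a scheme beta: for every node v of T_{k,d}, a uniformly random
    # bijection f_v from the leaves below v to {1, ..., number of such leaves}.
    rng, scheme = random.Random(seed), {}
    for depth in range(d + 1):
        for v in product(range(k), repeat=depth):
            leaves = leaves_below(v, k, d)
            images = list(range(1, len(leaves) + 1))
            rng.shuffle(images)
            scheme[v] = dict(zip(leaves, images))   # f_v stored as a dictionary
    return scheme

beta = random_scheme(k=2, d=3)
print(len(beta), "bijections in the scheme; the root bijection covers", len(beta[()]), "leaves")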
We will now formalize the idea of connecting multiple copies of an instance. On an intuitive level, we construct a d-layered instance by taking k many (d-1)-layered instances and adding a new layer comprising k^d-1 copies of the original instance I. Then we map the sinks in the layer (d-1) to the sources in the layer d according to k bijections read from a scheme. These mappings govern how we place the arcs towards the layer d and which vertex pairs form the new request set. We need a scheme β∈(k,d) to arrange all the arcs between the layers. In order to simplify the notation we introduce the following convention. Suppose that an instance J is being build with multiple disjoint copies of an instance I = (D,k,𝒯), referred to as I_1, I_2, …. Then we refer to the copy of the vertex s_i ∈ V(D) (resp. t_i) in I_j as I_j[s_i] (resp. I_j[t_i]). Given an instance I = (D,k,𝒯) of and a scheme β = (f_v)_v ∈ V(T_k,d)∈(k,d) we construct an instance J_k,d(I,β) = (D',k^d,𝒯') of . The elements of 𝒯' will be indexed by the leaves of T_k,d as (s_v,t_v)_v ∈ L(T_k,d) while the elements of 𝒯 (in the instance I) are indexed by 1,…,k as (s_i,t_i)_i∈[k]. If d=1, we simply set J_k,1(I,β) = I, ignoring β. We index 𝒯 by L(T_k,1) in an arbitrary order. Consider d > 1. Let r be the root of T_k,d with (r) = {u_1, …, u_k}. For i ∈ [k] let β_i be the truncation of β to the nodes in the subtree T^u_i_k,d and J_i = (D_i, k^d-1, 𝒯_i) be the instance J_k,d-1(I, β_i). We take a disjoint union of J_1,…,J_k and k^d-1 copies of I referred to as I_1,I_2,… (see <Ref>). These k^d-1 copies of I form layer d. Recall that for i ∈ [k] the bijection f_u_i maps L(T^u_i_k,d) to [k^d-1]. For each i ∈ [k] and v ∈ L(T^u_i_k,d) we insert an arc from J_i[t_v] to I_f_u_i(v)[s_i]. Then we add the pair (J_i[s_v], I_f_u_i(v)[t_i]) to 𝒯'. This pair is assigned index ι(v) in 𝒯' where ι is the natural embedding L(T^u_i_k,d) → L(T_k,d). Note that whenever D is acyclic then D' is acyclic as well so the procedure indeed outputs an instance of . It is also clear that when I admits a solution serving all the k requests, it can be used to serve all the requests in J_k,d(I,β). Let k,d ∈ and β∈(k,d). If I = (D,k,𝒯) is a yes-instance of then J_k,d(I,β) is a yes-instance as well. The case when I is a no-instance requires a more careful analysis. We introduce the notion of a collision that certifies that some subset of requests cannot be served. Let k,d ∈, A L(T_k,d), and β = (f_v)_v ∈ V(T_k,d)∈(k,d). We say that u ∈ V(T_k,d) forms a collision with respect to (A,β) if A contains elements a_1,…,a_k such that: * for each i ∈ [k] the node a_i is a descendant of u_i ∈(u) where u_1,…,u_k are distinct, * f_u_1(a_1) = f_u_2(a_2) = … = f_u_k(a_k). Let k,d ∈, A L(T_k,d), and β = (f_v)_v ∈ V(T_k,d)∈(k,d). Suppose that there exists a collision with respect to (A,β). Let I = (D,k,𝒯) be a no-instance of . Then no solution to the instance (D',k^d,𝒯') = J_k,d(I,β) can simultaneously serve all the requests {(s_v,t_v)_v ∈ A}. We will prove the lemma by induction on d. In the case d=1 we have J_k,1(I,β) = I and the only possibility of a collision is when A = L(T_k,1) so {(s_v,t_v)_v ∈ A} is the set of all the requests. By definition, we cannot serve all the requests in a no-instance. Let us assume d > 1 from now on. First suppose that the collision occurs at the root r ∈ V(T_k,d). Let (r) = {u_1, …, u_k}. Then there exists A' = {a_1,…,a_k} A such that a_i is a descendant of u_i and f_u_1(a_1) = f_u_2(a_2) = … = f_u_k(a_k). We refer to this common value as x = f_u_i(a_i). 
We will also utilize the notation from <Ref>. Observe that in order to serve the request (s_a_i,t_a_i) in D' the path P_i starting at s_a_i = J_i[s_a_i] must traverse the arc from J_i[t_a_i] to I_x[s_i], as every other arc leaving D_i leads to some I_y with y ≠ x having no connection to t_a_i = I_x[t_i]. Furthermore, the path P_i must contain a subpath connecting I_x[s_i] to I_x[t_i] in I_x. Since the same argument applies to every i ∈ [k], we would have to serve all the k requests in I_x. But this is impossible because I_x is a copy of I which is a no-instance. Now suppose that the collision does not occur at the root. Then it must occur in the subtree T^u_i_k,d for some i ∈ [k]. For every v ∈ A being a descendant of u_i, any path P_v serving the request (s_v,t_v) in D' must contain a subpath P'_v in D_i from J_i[s_v] to J_i[t_v], as again it must leave D_i through the vertex J_i[t_v]. By the inductive assumption, we know that we cannot simultaneously serve all the requests (s_v,t_v)_v ∈ A ∩ L(T^u_i_k,d) in the smaller instance J_i. The lemma follows. Alice and Bob play a game on T_k,d. The strategy of Alice is a subset A ⊆ L(T_k,d). We say that Alice plays a q-strategy if |A| ≥ |L(T_k,d)|/q = k^d/q. The strategy of Bob is a scheme β∈(k,d). Bob wins if there exists a collision with respect to (A,β). We can now state our main technical theorem. Recall that a subset A ⊆ L(T_k,d) is called a q-subset if |A| ≥ |L(T_k,d)|/q = k^d/q. Let k,d,q ∈ℕ satisfy d ≥ k · (4q)^4klog k. Then there exists β∈(k,d) such that for every q-subset A ⊆ L(T_k,d) there is a collision with respect to (A,β). The proof is postponed to <Ref>, which abstracts from the Disjoint Paths problem and focuses on random permutations. With <Ref> at hand, the proof of the main result is easy. We are going to give an FPT-reduction from the exact variant of k-DAG Disjoint Paths, which is W[1]-hard <cit.>, to the variant with a sufficiently large gap. To this end, we present an algorithm that, given an instance I = (D,k,𝒯), runs in time f(k,q) · |I| and outputs an instance J = (D',k',𝒯') such that: * k' depends only on k and q, * if I is a yes-instance then J is a yes-instance, and * if I is a no-instance then no solution to J can simultaneously serve at least k'/q requests. Obviously, being able to separate these two cases for J (all requests vs. at most 1/q-fraction of requests) is sufficient to determine whether I is a yes-instance. We set d = k · (4q)^4klog(k) according to <Ref>. It guarantees that there exists a scheme β∈(k,d) such that for every q-subset A ⊆ L(T_k,d) there is a collision with respect to (A,β). 
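The collision condition lends itself to a direct check: a node u forms a collision precisely when the images of the A-leaves under the bijections of its k children share a common index. A minimal, self-contained sketch (same assumed encoding as the earlier sampler; not code from the paper) follows.

import random
from itertools import product

def random_scheme(k, d, seed=0):
    # Nodes are tuples of child indices; scheme[v] is the bijection f_v
    # from the leaves below v to {1, ..., number of such leaves}.
    rng, scheme = random.Random(seed), {}
    for depth in range(d + 1):
        for v in product(range(k), repeat=depth):
            leaves = [v + rest for rest in product(range(k), repeat=d - depth)]
            images = list(range(1, len(leaves) + 1))
            rng.shuffle(images)
            scheme[v] = dict(zip(leaves, images))
    return scheme

def has_collision(A, scheme, k, d):
    # A node u forms a collision if every child of u has an A-leaf below it
    # and these k leaves are mapped by the children's bijections to one common index.
    A = set(A)
    for depth in range(d):                       # internal nodes only
        for u in product(range(k), repeat=depth):
            common = None
            for c in range(k):
                f_child = scheme[u + (c,)]
                hit = {f_child[leaf] for leaf in f_child if leaf in A}
                common = hit if common is None else common & hit
                if not common:
                    break
            if common:
                return True
    return False

k, d, q = 2, 6, 2
beta = random_scheme(k, d)
leaves = list(product(range(k), repeat=d))
A = leaves[: len(leaves) // q]                   # a q-subset of the leaves
print("collision for this q-subset:", has_collision(A, beta, k, d))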
Observe that such a scheme can be computed in time f(k,q) because d is a function of (k,q) and the size of the family (k,d) is a function of (k,d). The same holds for the number of all q-subsets A L(T_k,d). Therefore, we can simply iterate over all β∈(k,d) and check for each q-subset A whether there is a collision or not. The instance J is defined as J_k,d(I,β). A direct implementation of <Ref> takes time f(k,d) · |I|. <Ref> says that if I is a yes-instance, then J is as well, whereas <Ref> ensures that if I is a no-instance, then for each set of k'/q requests (corresponding to some q-subset A L(T_k,d) which must have a collision with β) no solution can simultaneously serve all of them. This concludes the correctness proof of the reduction. We remark that <Ref> works in a more general setting, where q is not necessarily a constant, but a function of k. This enables us to rule out not only an (1)-approximation in FPT time, but also an α(k)-approximation for some slowly growing function α(k) →∞. However, the value of the parameter k' becomes k^d for d = Ω(q^klog k) so q ends up very small compared to the new parameter k'. This is only sufficient to rule out approximation factors of the form α(k) = (log k)^o(1). A detailed analysis of how to adjust such parameters is performed in <cit.>. § CONSTRUCTING THE SCHEME This section is devoted to the proof of <Ref>. Before delving into the rigorous analysis, we sketch the main ideas behind the proof. Outline. We use the probabilistic method to prove the existence of a scheme having a collision with every q-subset of leaves in T_k,d. We will show that for a sufficiently large d choosing each bijection at random yields a very high probability of a collision with any fixed q-subset. Specifically, the probability that a collision does not occur should be less than 2^-k^d. Since the number of all q-subsets of a k^d-size set is bounded by 2^k^d, the union bound will imply that the probability that a collision does not occur for at least one q-subset is strictly less than one, implying the existence of the desired scheme. Let us fix a q-subset A L(T_k,d). Suppose there is a vertex u ∈ V(T_k,d) such that for every child y of u the fraction of leaves in T^y_k,d belonging to A is at least 1/q. Let ℓ denote [|L(T^y_k,d)|]. For each such child we choose a random bijection from L(T^y_k,d) to [ℓ]. The probability that each of these k bijections maps an element of A to a fixed index x ∈ [ℓ] is at least q^-k. Such events are not independent for distinct x but we will see that they are negatively associated, which still allows us to upper bound the probability of no such event happening by (1-q^-k)^ℓ (see <Ref>). How to identify such a vertex u? First, it is sufficient for us to relax the bound 1/q assumed above to 1/(4q). Observe that for each layer in T_k,d there must be many vertices v satisfying |A ∩ L(T^v_k,d)| ≥1/2q |L(T^v_k,d)|. Suppose that v does not meet our criterion: this means that it has a child v' with less than 1/(4q)-fraction of the A-leaves in its subtree. But then the average fraction of the A-leaves among the remaining children is higher than the fraction for v. Consequently, we can choose a child of v with a higher fraction and repeat this argument inductively. We show that after (klog(q)) many steps this process must terminate so we are guaranteed to find a vertex for which every child has at least a 1/(4q)-fraction of the A-leaves. This is proven in <Ref>. 
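The greedy descent just described is a simple procedure; the sketch below (illustrative only, with our own helper names, and without enforcing the depth bound τ established in the lemma) walks from a vertex with A-fraction at least 1/q towards a vertex all of whose children have A-fraction at least 1/(4q).

from itertools import product

def leaf_fraction(node, A, k, d):
    # Fraction of leaves of T_{k,d} below `node` that belong to A
    # (nodes are tuples of child indices; leaves have length d).
    below = [node + rest for rest in product(range(k), repeat=d - len(node))]
    return sum(leaf in A for leaf in below) / len(below)

def descend_to_balanced_node(v, A, k, d, q):
    # Starting from v with A-fraction at least 1/q, repeatedly move to the
    # child with the largest A-fraction until every child of the current
    # node has A-fraction at least 1/(4q).
    u = v
    while len(u) < d:
        fracs = [leaf_fraction(u + (c,), A, k, d) for c in range(k)]
        if min(fracs) >= 1 / (4 * q):
            return u
        u = u + (max(range(k), key=lambda c: fracs[c]),)
    return u

k, d, q = 2, 8, 2
A = {leaf for leaf in product(range(k), repeat=d) if leaf[0] == 0}   # half of the leaves
u = descend_to_balanced_node((), A, k, d, q)
print("balanced node found at depth", len(u))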
Finally, to obtain a large probability of a collision we must show that there many such vertices u with a large sum of their subtrees' sizes. This will allows us to multiply the aforementioned bounds of the form (1-q^-k)^ℓ with a large sum of the exponents ℓ. By applying the argument above to a single layer in T_k,d we can find such a collection with the sum of their subtrees' sizes being k^d divided by some function of k and q. But we can also apply it to multiple layers as long as they are sufficiently far from each other (so that the vertices found by the inductive procedure are all distinct). Therefore, it suffices to take d large enough so that the number of available layers surpasses the factors in the denominator, which depend only on k and q. This is analyzed in <Ref>. We begin with a probabilistic lemma stating that randomly permuting k large subsets of a common universe yields a large chance of creating a non-empty intersection of these sets. Let k, z, ℓ∈ and X_1, …, X_k be subsets of [ℓ] of size at least ℓ / z each. Next, let Π_1, …, Π_k [ℓ] → [ℓ] be independent random variables with a uniform distribution on the family of all permutations over the set [ℓ]. Then Π_1(X_1) ∩Π_2(X_2) ∩…∩Π_k(X_k) = ∅≤exp(-ℓ / z^k). For i ∈ [k] and j ∈ [ℓ] let Y^i_j = 1 if j ∈Π_i(X_i) and Y^i_j = 0 otherwise. By <Ref> the variables (Y^i_1, …, Y^i_ℓ) have negative association for each i ∈ [k]. Note that 𝔼Y^i_j≥ 1/z. Next, let Z_j = ∑_i=1^k Y^i_j for j ∈ [ℓ]. <Ref> ensures that the variables Z_1, …, Z_ℓ also enjoy negative association. Condition Π_1(X_1) ∩Π_2(X_2) ∩…∩Π_k(X_k) = ∅ is equivalent to max(Z_j)_j=1^ℓ≤ k - 1. We have Z_j ≤ k-1 = 1 - Z_j = k = 1 - ∏_i=1^kj ∈Π_i(X_i)≤ 1 - 1/z^k. max(Z_j)_j=1^ℓ≤ k-1≤∏_j=1^ℓZ_j ≤ k-1≤ (1 - 1/z^k)^ℓ = (1 - 1/z^k)^z^k· (ℓ / z^k)≤exp(-ℓ / z^k). In the first inequality we used <Ref>(<ref>). The last one holds because (1-1/m)^m < 1/e for all m ≥ 2. Notation. We introduce some additional notation to work with the tree T_k,d. For a vertex v ∈ V(T_k,d) let (v) denote the size of the set L(T^v_k,d). Note that (v) = k^d-h where h is the depth of v. Next, for a set A L(T_k,d), we will write _A(v) = |A ∩ L(T^v_k,d)| / (v). When A is clear from the context, we will omit the subscript. Let k, d, q, τ∈ satisfy k,q≥ 2 and d ≥τ≥ 2k ·log(q). Next, let v ∈ V(T_k,d) be of depth at most d-τ and A L(T_k,d) satisfy _A(v) ≥1/q. Then there exists a vertex u ∈ V(T^v,τ_k,d) such that for each y ∈(u) it holds that _A(y) ≥1/2q. Suppose the claim does not hold. We will show that under this assumption for each i ∈ [τ] there exists v_i ∈ V(T^v,i_k,d) with (v_i) ≥1/q· (1 + 1/2k-2)^i-1. Then by substituting i = τ > (2k-2) ·log(q) and estimating (1 + 1/m)^m >2 (for all m ≥ 2) we will arrive at a contradiction: (v_τ) ≥1/q·1 + 1/2k-2^(2k-2) ·log(q) > 1/q· 2^log(q)≥ 1. We now construct the promised sequence (v_i) inductively. For i = 1 we set v_1 = v which obviously belongs to T^v,1_k,d and satisfies (v) ≥1/q. To identify v_i+1 we consider (v_i) = u_1, u_2, …, u_k. We have (v_i) = 1/k·∑_j=1^k (u_j). We define v_i+1 as the child of v_i that maximizes the value of (see <Ref>). By the assumption, one of the children satisfies (u_j) < 1/2q. Then (v_i+1) is lower bounded by the average value of among the remaining k-1 children, which is at least 1/k-1(v_i)· k - 1/2q. We have (v_i) ≥(v_1) ≥1/q so (v_i)· k - 1/2q≥(v_i)· k - (v_i)/2. We check that v_i+1 meets the specification: (v_i+1) ≥(v_i)/k-1·k - 1/2 = (v_i)·1 + 1/2k-2≥1/q·1 + 1/2k-2^i In the last inequality we have plugged in the inductive assumption. 
The lemma follows. To apply <Ref> we need to identify many vertices satisfying _A(v) ≥1/2q. To this end, we will utilize the following simple fact. Let a_1, a_2, …, a_ℓ∈ [0,1] be a sequence with mean at least x for some x ∈ [0,1]. Then at least xℓ/2 elements in the sequence are lower bounded by x/2. Suppose that |{a_i ≥x/2| i ∈ [ℓ]}| < xℓ/2. This leads to a contradiction: ∑_i=1^ℓ a_i < 1·xℓ/2 + x/2·ℓ = xℓ. We will use the lemmas above for a fixed layer in the tree T_k,d to identify multiple vertices v meeting the requirements of <Ref>. For each such v we can find a close descendant u of v for which we are likely to observe a collision. The value ℓ in <Ref>, governing the probability of a collision, corresponds to the number of leaves in the subtree of u, i.e., (u). Since this value appears in the exponent of the formula, we need a collection of such vertices u in which the total sum of (u) is large. Let k, d, q ∈ satisfy k,q ≥ 2, d ≥ 4kq. If A L(T_k,d) is a q-subset then there exists a set F V(T_k,d) with the following properties. * For each v ∈ F and u ∈(v) it holds that _A(u) ≥1/4q. * The sum ∑_v∈ F(v) equals at least d · k^d· (4q)^-3k log(k). Let F_i V(T_k,d) be i-th layer of T_k,d, i.e., the set of vertices of depth i; we have |F_i| = k^i and (v) = k^d-i for each v ∈ F_i. Since their subtrees are disjoint, we can see that ∑_v∈ F_i |A ∩ L(T^v_k,d)| = |A|. Therefore ∑_v∈ F_i(v) / |F_i| ≥1/q. By <ref> at least 1/2q fraction of the vertices in F_i must satisfy (v) ≥1/2q. Let us denote this subset as F^+_i. Let τ = ⌈ 2k ·log (2q) ⌉ and M = ⌊ d / τ⌋. We define F^+ = F^+_0 ∪ F^+_τ∪ F^+_2τ∪…∪ F^+_(M-1)τ. Observe that for each pair u, v ∈ F^+ the trees T^u,τ_k,d, T^v,τ_k,d are disjoint. We apply <ref> with q' = 2q to each v ∈ F^+ to obtain a vertex γ(v) ∈ V(T^v,τ_k,d) satisfying condition (1). The disjointedness of these subtrees ensures that the vertices γ(v)_v ∈ F^+ are distinct. We define F = {γ(v) | v ∈ F^+}. Now we take care of condition (2). Let us fix j ∈ [0, M-1]. Since γ(v) ∈ V(T^v,τ_k,d) for v with the depth jτ, we infer that the depth of γ(v) is at most (j+1)τ-1 so |(γ(v))| ≥ k^d+1-(j+1)τ. We have established already that |F^+_jτ| ≥|F_jτ|/2q = k^jτ/2q. The assumption d ≥ 4kq implies d ≥τ so we can simplify M = ⌊ d / τ⌋≥ d / (2τ). We estimate the sum within each layer F^+_jτ and then multiply it by M. ∑_v∈ F^+_jτ(γ(v)) ≥k^jτ/2q· k^d+1-(j+1)τ = k^d + 1 - τ/2q ∑_v∈ F(v) = ∑_j = 0^M-1∑_v∈ F^+_jτ(γ(v)) ≥d · k^d + 1 - τ/2τ· 2q To get rid of the ceiling, we estimate τ≤ 2k ·log (4q). Then k^τ≤ k^2k log (4q) = (4q)^2k log(k). We also use a trivial bound τ≤ 4kq. We can summarize the analysis by ∑_v∈ F(v) ≥d · k^d + 1 - τ/2τ· 2q = d · k^d+1/k^τ· 4qτ≥d · k^d+1/(4q)^2k log(k)· 16kq^2≥d · k^d/(4q)^3k log(k) Now we combine the gathered ingredients to show that a random scheme yields a high probability of a collision with any fixed q-subset. At this point we also adjust d to be larger then the factors depending on k and q. Let k,d,q ∈ satisfy d ≥ k · (4q)^4klog k. Consider some q-subset A L(T_k,d). Suppose that we choose the scheme β = (f_v)_v ∈ V(T_k,d)∈(k,d) by picking each bijection f_v L(T^v_k,d) → [|L(T^v_k,d)|] uniformly and independently at random. Then the probability that (A,β) has no collision is at most exp(-k^d). We apply <Ref> and use the obtained set F V(T_k,d) to analyze the probability of getting a collision. Consider u ∈ F with (u) = {u_1,…,u_k} and let C_u denote the event that (A,β) has a collision at u. 
For each i ∈ [k] we have (u_i) = (u)/k and we know from <Ref>(1) that _A(u_i) ≥ 1/(4q). For each i ∈ [k] a random bijection f_u_i is chosen between L(T^u_i_k,d) and [(u_i)]. This can be interpreted as first picking an arbitrary bijection to [(u_i)] and then combining it with a random permutation over [(u_i)]. We apply <Ref> with z = 4q to infer that the probability of getting no collision at u is upper bounded by C_u≤exp-(u_i)/z^k = exp-(u)/k·(4q)^k. Since the sets ((u))_u ∈ F are pairwise disjoint, the corresponding events C_u are independent. We can thus upper bound the probability of getting no collision at all by the product ∏_u∈ F C_u. Next, by <Ref>(2) and the assumption on d we know that ∑_u ∈ F(u) ≥ d· k^d· (4q)^-3k log k≥ k^d+1· (4q)^k . We combine this with the previous formula to obtain ⋃_u∈ F C_u = ⋂_u∈ F C_u = ∏_u∈ F C_u≤exp-∑_u∈ F(u)/k·(4q)^k≤exp(-k^d). We are ready to prove <Ref> (restated below) and thus finish the proof of the reduction. * We choose the scheme β by picking each bijection uniformly and independently at random. For a fixed q-subset A let C_A denote the event that (A,β) witnesses a collision. In these terms, <Ref> says that C_A≤exp(-k^d). Let 𝒜 be the family of all q-subsets A L(T_k,d); we have |𝒜| ≤ 2^k^d. By the union bound, the probability that there exists a q-subset with no collision with β is ⋃_A ∈𝒜 C_A≤∑_A ∈𝒜 C_A≤ 2^k^d· (1/e)^k^d < 1. Consequently, there is a positive probability of choosing a scheme β having a collision with every q-subset. In particular, this means that such a scheme exists. § CONCLUSION We have shown that no FPT algorithm can achieve an (1)-approximation for Max Disjoint Paths on acyclic digraphs. However, our reduction blows up the parameter significantly so it does not preserve a running time of the form f(k)n^o(k). It is known that such a running time is unlikely for the exact variant of the problem <cit.>. This leads to a question whether Max admits an (1)-approximation that is faster than n^(k). Our proof yields an alternative technique for gap amplification in a parameterized reduction based on the probabilistic method (extending the restricted version appearing in <cit.>), compared to reductions relying on coding theory <cit.> or communication complexity <cit.>. Can this approach come in useful for proving that Parameterized Inapproximability Hypothesis (PIH) follows from FPTW[1]? plainurl
http://arxiv.org/abs/2409.03224v1
20240905034310
On chip high-dimensional entangled photon sources
[ "Tavshabad Kaur", "Daniel Peace", "Jacquiline Romero" ]
quant-ph
[ "quant-ph" ]
§ ABSTRACT High-dimensional quantum entanglement is an important resource for emerging quantum technologies such as quantum communication and quantum computation. Metres-long experimental setups limit the scalability of high-dimensional entanglement in bulk optics. Advancements in quantum technology hinge on reproducible and reconfigurable quantum devices, including photon sources, which are challenging to achieve in a scalable manner using bulk optics. Advances in nanotechnology and CMOS-compatible integration techniques have enabled the generation of entangled photons on millimeter-scale chips, significantly enhancing scalability, stability, replicability, and miniaturization for real-world quantum applications. In recent years we have seen several chip-scale demonstrations with different degrees of freedom including path, frequency-bin, time-bin, and transverse modes, on many material platforms. A complete quantum photonic integrated circuit requires the generation, manipulation, and detection of qudits, involving various active and passive quantum photonic components which further increase the degree of complexity. Here, we review and introduce the nonlinear optical processes that facilitate on-chip high-dimensional entangled photon sources and the currently used material platforms. We discuss a range of current implementations of on-chip high-dimensional entangled photon sources and demonstrated applications. We comment on the current challenges due to the limitations of individual material platforms and present future opportunities in hybrid and heterogeneous integration strategies for the next generation of integrated quantum photonic chips. § INTRODUCTION A qubit, the quantum counterpart of a classical bit, can be extended to higher dimensions, called qudits. For quantum computation, the higher dimension (d>2) provides a larger state space for representing and processing quantum information <cit.>. The higher dimensions allow for simultaneous control of multiple operations, a decrease in circuit complexity, simplification of the experimental setup, enhancement of algorithm efficiency, and an increase in computational speed <cit.>. For quantum communications, qudit-based technology improves security against eavesdroppers, tolerates high bit error rates, and offers better error correction capabilities to improve the integrity of quantum information processing <cit.>. High-dimensional quantum information can be encoded on various physical platforms such as Rydberg atoms <cit.>, trapped ions <cit.>, cold atomic ensembles <cit.>, superconducting devices <cit.>, spin systems <cit.>, defects in solid-state devices <cit.>, nuclear magnetic resonance <cit.>, molecular magnets <cit.>, quantum dots <cit.>, and photonic systems <cit.>. Among these physical platforms, photons are attractive because they operate as qudits even at room temperature, they interact weakly with the environment, they can be controlled with relatively mature technology, and they can be transmitted across distant nodes more readily compared with matter-based systems <cit.>. Photons facilitate quantum information encoding in different degrees of freedom through continuous-variable (CV) and discrete-variable (DV) approaches, encouraging the transition from theoretical quantum photonic concepts to application-ready technology. 
In CV quantum information processing (QIP), encoding in quantized amplitude and phase quadratures of electromagnetic fields forms Gaussian states (vacuum states, coherent states, and squeezed states) <cit.>. Discrete-variable QIP, on the other hand, is based on the creation and detection of single photon states encoded in various discrete two-dimensional and high-dimensional degrees of freedom of single photons (qubits and qudits), namely, path <cit.>, polarisation <cit.>, frequency <cit.>, time <cit.> and transverse modes <cit.>. The separation between CV and DV has narrowed in recent times with the introduction of hybrid approaches that aim to systematically surmount the inherent limitations of either approach <cit.>. Regardless of the encoding used, the scalability of photonic systems can be improved by integrating various optical components into a single chip. Among such components, photon sources seek to greatly benefit from enhanced efficiencies offered by coherent pumping of multiple sources in photonic integrated circuits. Discrete-variable encoding requires single photon sources that ideally meet two criteria: (1) deterministic or “on-demand" generation, and (2) indistinguishability. There are currently no sources that can fully satisfy both these requirements. Photon generation can be divided into deterministic and probabilistic approaches. Deterministic single photon sources based on quantum dots, color/defect centers, single atoms, single ions, single molecules, and atomic ensembles, emit single photons following the two-level atomic energy diagram: a photon is produced each time the atom decays from the upper energy state to the lower state <cit.>. When photon emission occurs through a single optical transition, it prevents the generation of more than one photon in the process <cit.>. On the other hand, probabilistic sources are based on non-linear parametric processes, generating photon pairs with inherent correlations in time and energy to naturally allow for a “herald photon" which signals the existence of the other (heralded) photon. While the distinction between a deterministic and a probabilistic source is conceptually clear, this distinction blurs in practice. Deterministic sources become more probabilistic as the coupling efficiencies to other systems (e.g. fibers) decreases, while the success probability of probabilistic sources can be increased by multiplexing many low-probability, but high-fidelity heralded single photons <cit.>. Each of these approaches has advantages and drawbacks. In particular, probabilistic sources are more naturally extended to higher-dimensions compared to deterministic sources. In this review article, we will only address the probabilistic generation of photons. Deterministic photon sources have been reviewed in <cit.>. Probabilistic quantum light sources based on spontaneous parametric down-conversion (SPDC) or spontaneous four-wave mixing (SFWM) nonlinear optical processes enable the generation of photon pairs in both bulk or integrated platforms. For SPDC a single pump photon is converted into a pair of photons, typically labelled signal and idler and each roughly half the energy of the pump. Alternatively, in SFWM two pump photons are annihilated to produce the signal-idler photon pair. In both cases the properties of the generated photons (i.e. frequency, polarisation or transverse mode) are such that energy and momentum are conserved. 
The appeal of SPDC (χ^(2)) and SFWM (χ^(3)) nonlinear processes for the generation of heralded single-photons <cit.> and entangled photon pairs on nonlinear material platforms <cit.> lies in their effective operation at room temperature, relatively low preparation and maintenance costs. Nonlinear processes have achieved near-unity levels of indistinguishability, purity, and entanglement fidelity, with pair generation rates approaching GHz <cit.>. These characteristics make SPDC and SFWM sources highly attractive for quantum information applications, such as secure communications <cit.>, quantum information processing <cit.>, quantum sensing <cit.>, and metrology <cit.>. Initial experiments for generating photons were performed in controlled laboratory settings on optical benches on the scale of a few metres. Advances in nanotechnology, development of materials, and fabrication techniques enabled going to the chip scale in order to improve scalability and stability for real-world applications. Traditional down-conversion sources in bulk optics are typically inefficient, requiring high optical intensities or external cavities to enhance the generation rates. However the transition to integrated circuits with the ability to integrate cavity structures, such as microring, microdisk and photonic crystals, provides significant optical confinement and high generation rates across many sources. The rate of generation and detection of photons is highly dependent on the propagation loss, degree of control, and the ability to operate over a broad wavelength range. Quantum information processing in quantum photonic integrated circuits (QPICs) utilizes a combination of active and passive components such as grating couplers, directional couplers, waveguides, ring resonators, Mach-Zehnder interferometers (MZI), multimode interferometers (MMI), multiplexers-demultiplexers, phase shifters, polarization splitters, and delay lines. Collectively, these components influence the various degrees of freedom (DoFs) of photons giving rise to qudit encodings. As the density of components increases, scalability as well as the ability to minimise crosstalk among the components become important. Different material platforms and structures have emerged to accommodate the rise in integration levels. Silicon is a leading material that allows for high-density integration, with its high refractive index contrast and established fabrication processes. Silicon is transparent across telecommunication wavelengths and has strong optical nonlinear properties for quantum state generation. Silicon QPICs have progressed from the first CNOT gate and two-photon quantum interference demonstrations <cit.> to a fully reconfigurable circuit with over 550 components enabling multidimensional entanglement <cit.>. However, there are are other suitable materials such as silicon nitride, ultra-rich silicon nitride, lithium niobate, III-V semiconductors, silicon carbide, doped silica, and hydex. The limitations of each material prevent any platform from offering the required features for specific quantum applications. However, the field of QPICs has gone a long way in designing and solving the associated problems that come with each material, in some cases with the help of inverse design <cit.> and machine learning <cit.>. All on-chip quantum photonics, from sources to detectors, will inevitably come and enable future technologies like quantum computing and quantum communication. This review discusses various quantum light sources on-chip. 
Quantum light sources based on SPDC are promising candidates for their high pair generation rate and high signal-to-noise ratio, while the ease in phase-matching is the desirable feature of SFWM due to much closer interacting frequencies. The remainder of this paper is structured as follows. In Section II, we describe the theoretical framework for photon pair generation based on the principles of nonlinear optics and introduce the material platforms of interest. In Section III, we discuss current demonstrations for integrated high-dimensional photon sources across a range of DoFs. In Section IV, we briefly review some relevant applications and methods for system-level integration of emerging quantum photonic platforms. In Section V, we conclude by discussing current and future challenges, and the opportunities for chip-scale applications with high-dimensional entangled states. § PHOTON GENERATION WITH NONLINEAR OPTICS Nonlinear optics relates to the study of interactions between light fields that are mediated via a dielectric medium, in particular, the exchange of energy between waves of different frequencies <cit.>. These nonlinear processes are divided into two categories: parametric and non-parametric processes. In parametric processes, second-harmonic generation (SHG), sum- or difference-frequency generation (SFG or DFG), third-harmonic generation (THG), four-wave mixing (FWM) and optical parametric amplification (OPA), energy transfer occurs only among the waves. While in non-parametric processes such as stimulated Raman scattering (SRS), stimulated Brillouin scattering (SBS), and two-photon absorption (TPA), photon energy is not conserved with part of the wave energy transferred either from or into the medium. Importantly, in parametric processes the quantum state is maintained, which is not the case for non-parametric processes <cit.>. In the presence of an external electric field varying rapidly in time E(t), a dielectric material experiences an induced dielectric polarisation P(t). For weak fields the response is linear such that the dipole moment per unit volume is given by P^(1)(t)=ϵ_0χ^(1)E(t) where ϵ_0 is the permittivity of free space and χ^(1) is the linear susceptibility which relates to the linear refractive index (n_0) by n_0 = √(1+χ^(1)). On the other hand, in cases where the electric field strength is strong, such as in a bright optical beam, the material response is nonlinear giving rise to higher order terms described by means of a Taylor expansion of electric field strength, P = P^(1)+ P^(2)+ P^(3)... =ϵ_0(χ^(1)E+χ^(2)E^2+χ^(3)E^3+...) =P^(1)+P_nonlinear where P^(i) and χ^(i) are the i^th order polarization and susceptibilities, respectively. Note we have dropped the time dependence for simplicity. The electric fields and dielectric polarisation are given as vectors accounting for the case in which the susceptibilities become tensors of rank i+1. The generation of quantum-correlated photon pairs relies on spontaneous parametric processes: spontaneous parametric down-conversion (SPDC) and spontaneous four-wave mixing (SFWM), resulting from χ^(2) and χ^(3) material nonlinearities, respectively <cit.>. The phase of these dipole oscillations relies on the phases of the incident fields. Hence to enhance the efficiency of the parametric process, the dipoles must act like a phased array - giving rise to a phase matching condition. 
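As a rough sanity check on why strong pump fields and long interaction lengths are needed, the short sketch below compares the magnitudes of the linear, χ^(2) and χ^(3) contributions to the induced polarization. All numerical values (susceptibilities and field strength) are assumed, order-of-magnitude figures chosen for illustration and are not taken from the text.

EPS0 = 8.854e-12      # vacuum permittivity, F/m
chi1 = 2.0            # linear susceptibility (dimensionless, illustrative)
chi2 = 1.0e-12        # m/V     -- assumed order of magnitude for a chi^(2) material
chi3 = 1.0e-21        # m^2/V^2 -- assumed order of magnitude for a chi^(3) material
E = 1.0e8             # V/m     -- assumed strong optical field inside a pumped waveguide

linear = EPS0 * chi1 * E
terms = {
    "P(1) = eps0*chi1*E":   linear,
    "P(2) = eps0*chi2*E^2": EPS0 * chi2 * E**2,
    "P(3) = eps0*chi3*E^3": EPS0 * chi3 * E**3,
}
for name, value in terms.items():
    print(f"{name:22s} {value:.2e} C/m^2   ({value / linear:.1e} of the linear term)")

Even at such a strong field the nonlinear terms are several orders of magnitude below the linear response, which is why tight mode confinement and long, phase-matched interaction lengths matter so much on chip.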
Generally, phase matching is often a significant challenge (particularly for χ^(2) processes) due to dispersion, limiting the practical applications of the parametric processes. However, with dispersion engineering <cit.>, optical waveguides can achieve phase matching over a broader bandwidth by carefully managing the dispersion properties of signal and idler wavelengths significantly detuned from the pump. §.§ Spontaneous Parametric Down-Conversion (SPDC) A non-centrosymmetric crystalline material has non-zero even-order susceptibilities owing to asymmetric electronic function <cit.>. Therefore, second-order nonlinear optical processes involving three wave components occur when both energy and momentum conservation conditions are satisfied. The χ^(2) nonlinearity of the material facilitates SPDC, where a photon from the strong pump beam (ω_p) excites the electron to the virtual level corresponding to ħω_p and then spontaneously decays into two photons: signal (ħω_s) and idler (ħω_i). Depending on the phase matching condition the generated photons may be either degenerate in which case ω_s = ω_i such that the signal and idler are at half the pump frequency ω_s,i=1/2ω_p, or alternatively non-degenerate where ω_s≠ω_i. In either case, the energy and momentum of the entire three-photon process are conserved: Energyconservation:ħω_p=ħω_s+ħω_i, Momentumconservation: Δ k=k_s+k_i-k_p≈0, where Δk is the phase mismatch, ħ is the reduced Planck constant and k_p,s,i are magnitudes of wave-vectors of the pump, signal and idler. In SPDC, phase-matching is achieved by two main approaches: using birefringence and quasi-phase-matching (QPM) <cit.>. Birefringent phase matching is commonly implemented in bulk crystals as a result of the small phase mismatch experienced. In bulk optics, entangled-photon pairs are generated via SPDC process for various quantum communication protocols, including quantum key distribution and teleportation. The current state-of-art for bulk optics SPDC sources could be found in Refs. <cit.>. However, owing to advancements at the nano-scale, SPDC waveguide source based on lithium niobate (LN) <cit.>, periodically poled lithium niobate (PPLN) <cit.>, and aluminum-nitride (AlN) <cit.> have been demonstrated on-chip for seamless integration of photon sources, single-photon detectors, and waveguide circuits. The SPDC process probabilistically generates a given number of photon pairs such that pair generation rate (PGR) is shown to have following dependence <cit.>, PGR_SPDC ∝4 P_p/9 ε_0^2 c^2 A_e f f(χ^(2) L)^2 sinc^2(Δ k L/2) where PGR_SPDC represents the pair generation rate for SPDC, P_p is the pump power, A_eff is the mode overlap area, L is the waveguide length, and χ^(2) is the effective value of the second-order nonlinearity tensor. The sinc term accounts for the phase mismatch among the wave components, which limits the effective waveguide length to approximately the coherence length L_c = 2/Δ k. Importantly the PGR scales with 1/A_eff and quadratically with length such that the PGR can be significantly enhanced when moving from bulk to integrated optics as a result of the increased mode confinement and long interaction lengths. Increasing the interaction length L, typically comes at the expense of phase matching bandwidth due to the sinc^2 term in which the phase mismatch Δ k is present. This trade off can be alleviated through dispersion engineering in order to maintain small phase mismatch over longer interaction lengths. 
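The scaling above can be made concrete with a short numerical sketch. All prefactors and constants below are arbitrary, assumed values; only the proportionality PGR ∝ P_p (χ^(2) L)^2 sinc^2(Δk L/2)/A_eff from the text is used, so the output is in arbitrary units.

import math

def relative_pgr_spdc(L, delta_k, chi2=1.0, pump_power=1.0, a_eff=1.0):
    # Relative SPDC pair-generation rate: quadratic growth with length,
    # damped by the phase-mismatch factor sinc^2(delta_k * L / 2).
    x = delta_k * L / 2
    sinc = 1.0 if x == 0 else math.sin(x) / x
    return pump_power * (chi2 * L) ** 2 * sinc ** 2 / a_eff

delta_k = 2.0 / 2.0e-3   # 1/m, assumed residual mismatch (coherence length L_c = 2/delta_k = 2 mm)
for L_mm in (0.5, 1.0, 2.0, 4.0, 8.0):
    L = L_mm * 1.0e-3
    print(f"L = {L_mm:4.1f} mm   relative PGR = {relative_pgr_spdc(L, delta_k):.3e}")

The printed values grow roughly quadratically while L is short compared to the coherence length and then roll off, which is the length-versus-bandwidth trade-off described above.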
§.§ Spontaneous Four Wave-Mixing (SFWM) SFWM involves four wave components and is a third-order nonlinear process. Using degenerate pumping, two pump photons with the same frequency ω_p generate a pair of photons at frequencies ω_s and ω_i. The χ^(3) nonlinearity of the material facilitates SFWM, where two pump photons (ω_p) excite the electron to the 2ħω_p virtual level, which then decays by spontaneously emitting a photon pair: signal (ħω_s) and idler (ħω_i). The energy and momentum of the entire four-photon process are conserved, Energy conservation: 2ħω_p=ħω_s+ħω_i, Momentum conservation: Δ k=k_s+k_i-2k_p≈0, where Δ k represents the phase mismatch, ħ is the reduced Planck constant and k_p,s,i are the magnitudes of the wave-vectors of the pump, signal and idler. The strength of the material's χ^(3) nonlinearity and the pump beam characteristics strongly affect how these energy and momentum conservation conditions are met, and hence the attributes of the single photons. In more detailed treatments, the phase-matching condition includes additional terms related to nonlinear phase accumulation from effects like Raman scattering, the free carrier effect, and the Kerr effect <cit.>. Assuming a Δ k phase mismatch, the PGR for SFWM can be written as <cit.>, PGR_SFWM ∝ |3 ω_p χ^(3)/(2 n_0^2 ε_0 c^2 A_eff) P_p L|^2 sinc^2(Δ k L/2) = |2 ω_p n_2/(c A_eff) P_p L|^2 sinc^2(Δ k L/2), where ω_p is the pump frequency, n_2 is the nonlinear refractive index of the waveguide material, c is the speed of light, n_0 is the refractive index at the pump frequency, and A_eff is the mode interaction overlap area. This expression shows the quadratic dependence of the photon-pair generation rate on the pump power, the nonlinearity of the material, and the inverse of the mode size. Similar to SPDC, the PGR in SFWM also has a quadratic dependence on the waveguide length, and the sinc term accounts for the phase mismatch among the wave components. The χ^(3) coefficient, dictating the nonlinear susceptibility, is determined by material properties such as the symmetry of the atomic arrangement and the properties of the constituent atoms. In comparison to SPDC, the χ^(3) nonlinearity is typically much weaker. The PGR of both SPDC and SFWM scales with increasing pump power P_p, but this comes at the expense of an increased probability of multiphoton emission. However, theoretical frameworks suggest that multiplexing multiple sources can mitigate this issue, potentially overcoming the challenges associated with increased multiphoton probabilities. §.§ Figures of Merit The photon pairs generated by SPDC and SFWM processes are usually characterized by some important parameters to evaluate their performance as photon sources. In this section, we provide a brief introduction to the figures of merit <cit.> used to quantitatively characterize photon pair sources within this review. For a more comprehensive discussion on source properties see Ref. <cit.>. * Single Counting Rate:- In the absence of losses and noise photons, the count rate of signal (idler) photons is identical to the pair generation rate, expressed as <cit.>, S_c=A_1(γ P_p L)^2 (σ_0/σ_p) I_sc where A_1 is a constant, γ=2 π n_2 /(λ A_eff) is the nonlinear efficiency, and I_sc is a double integral value (the detailed derivation can be found in <cit.>). σ_0 is the filtering bandwidth of the signal photon and σ_p is the pump pulse bandwidth. As indicated in Eq. 7, the single photon counting rate is determined by the square of the pump's peak power.
This serves as a key indicator for assessing the impact of noise photons in the system, such as spontaneous Raman scattering (SpRS), which scales linearly with pump power. Additionally, the single counting rate is influenced by the ratio of filtering bandwidth to pump bandwidth, with a wider filter bandwidth enhancing the capture of signal photons, while an increased pump bandwidth reduces the interaction time for pump photons. A similar expression can be obtained for a CW pump. * Coincidence Counting Rate or Brightness:- Coincidence events refer to the simultaneous detection of the signal and idler photon, with a rate expressed as, C_c=A_2(γ P_p L)^2 σ_0^2/(σ_p√(σ_0^2+σ_p^2)) I_cc. Similar to the single counting rate, the coincidence counting rate also has a quadratic dependence on the pump power because of the annihilation of two pump photons in the SFWM process. This marks a significant distinction from SPDC photon-pair sources, where the rates of both single and coincidence counting correlate linearly with pump power. The major difference between the single counting rate and the coincidence counting rate depends on the ratio between σ_0 and σ_p, which is discussed in detail in Ref. <cit.>. * Coincidence-to-accidental ratio (CAR):- This parameter evaluates the performance of a source in generating photon pairs, accounting for the presence of noise photons. It represents the signal-to-noise ratio of the source, calculated as the ratio between the net coincidence count rate and the accidental coincidence count rate <cit.>. The net coincidence rate (C_net) is the difference between the raw coincidence rate (C_raw) and the accidental coincidence rate (A). The accidental coincidence rate includes contributions from the detector dark count rate and from higher-order terms of the nonlinear process (which can be minimised by pumping at a suitably low pump power). C_raw=(η_c,sη_d,s)(η_c,iη_d,i) r + A = C_net + A, where η_c,s (η_c,i) and η_d,s (η_d,i) are the collection and detection efficiencies of the signal (idler) photon counting measurements, η_s,i are the corresponding total efficiencies, and r is the pair generation rate. Therefore, the net CAR can be written as CAR=C_net/A * Indistinguishability:- This is an important measure of the degree to which the single photons emitted from multiple nominally identical photon sources are identical. Two-photon interference effects, experimentally demonstrated by the Hong-Ou-Mandel experiment <cit.>, are often used for characterizing the indistinguishability of the photons, as shown in Figure <ref>. When the incoming photons are identical in all degrees of freedom and their wavefunctions completely overlap in time on the beam splitter, a drop in coincidence counts is seen due to their bunching together at the output <cit.>. The depth of this HOM dip gives the visibility (V_HOM) as a quantifier of indistinguishability: for completely indistinguishable photons the coincidence rate at zero delay drops to zero, corresponding to unit visibility. Experimentally, this is expressed as <cit.>, V_HOM=[C_c(τ→∞) - C_c(τ=0)]/C_c(τ→∞), where C_c is the rate of coincidence counts and τ is the time delay between the two photons. This traditional two-photon experiment measuring 2-fold coincidence counts is extended to measure 4-fold coincidence counts using either four photons (two of which are heralded) or two photons with three beamsplitters in Ref. <cit.>. On-chip, the HOM experiment is often performed using a Mach-Zehnder interferometer (MZI) <cit.>, where complete indistinguishability corresponds to 100% visibility of the interference fringes.
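As a simple numerical companion to these figures of merit, the sketch below computes the net CAR and the HOM visibility from hypothetical measured count rates; all numbers are placeholders chosen for illustration and do not correspond to any device discussed in this review.

# Hypothetical measured quantities (counts per second, placeholder values)
raw_coincidences = 1.2e4                            # C_raw
accidentals = 2.0e2                                 # A, e.g. from a shifted coincidence window
net_coincidences = raw_coincidences - accidentals   # C_net = C_raw - A
car = net_coincidences / accidentals                # CAR = C_net / A

# Hong-Ou-Mandel visibility from coincidence rates far from and at zero delay
cc_far = 5.0e3                                      # C_c(tau -> infinity)
cc_zero = 2.5e2                                     # C_c(tau = 0)
v_hom = (cc_far - cc_zero) / cc_far                 # V_HOM as defined above

print(f"CAR = {car:.0f}")
print(f"V_HOM = {v_hom:.1%}")

Because the accidental rate grows faster with pump power than the true coincidence rate, the CAR is typically quoted together with the pump power at which it was measured.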
§.§ Material Platforms Engineering photonic devices that facilitate efficient SPDC and SFWM processes in order to yield high-quality entangled photon sources depends greatly on the chosen material platform. In the ideal case, for such sources one would choose a material with large nonlinearity, low propagation loss across a wide transparency window and minimal contributions of parasitic effects such as two-photon absorption (TPA), free-carrier absorption (FCA) or photorefractive effects, and that benefits from a mature fabrication process to enable large-scale devices. In practice, no single material meets all requirements. Recent advancements in photonic material platforms have widened the scope for QPIC implementations, incorporating various active and passive components. Furthermore, compatibility with low-loss optical fibers and the ability for hybrid integration of high-quality lasers and detectors, particularly around telecommunications wavelengths, is crucial for achieving utility-scale devices incorporating on-chip entangled photon sources. Currently, notable platforms for high-dimensional entangled photon pair generation include: Silicon-on-Insulator (SOI), silicon nitride (Si_3N_4, SiN), ultra-silicon-rich nitride (USRN), Hydex, and lithium niobate (LiNbO_3, LN). In the following paragraphs we will briefly discuss the properties of the aforementioned material platforms as they relate to high-dimensional quantum light sources. Among the mentioned material platforms, SOI is arguably the most mature owing to its CMOS compatibility, driving many recent breakthroughs in integrated photonics <cit.>. Silicon waveguides offer several advantages for on-chip quantum light generation. Firstly, silicon's large nonlinear susceptibility (χ^(3)∼ 2.8 × 10^-18 <cit.>), combined with tight modal confinement, enables an efficient SFWM process with modest pump powers <cit.>. These properties, combined with the reliability and scalability of CMOS foundry processes, have enabled large-scale demonstrations with single chips containing hundreds to thousands of individual components, including arrays of coherently pumped sources and large interferometers for programmable quantum state preparation and measurement <cit.>. Despite these advantages, Si's material properties also lead to several limitations for QPICs. Higher propagation losses, resulting from the complex refractive index and an increased susceptibility to sidewall-roughness scattering due to the high refractive index contrast, limit overall device transmission, whereas TPA limits photon-pair generation rates at telecommunications wavelengths and also prohibits the use of shorter wavelengths <cit.>. Additionally, due to the absence of an intrinsic Pockels effect, electro-optic (EO) modulators in Si are typically based on plasma dispersion effects; however, these suffer from increased losses due to FCA <cit.>. Alternatively, an effective χ^(2) can be induced via the DC-Kerr effect <cit.>, which does not suffer from increased losses and also operates at cryogenic temperatures <cit.>. Thermo-optic phase shifters (TOPS) can provide a lower-loss alternative based on the thermo-refractive effect, although at the expense of slower modulation speeds <cit.>. As fabrication processes have matured, hybrid integration with other materials naturally possessing an EO effect, such as LN or barium titanate (BaTiO_3, BTO), has become increasingly popular <cit.>.
In addressing the limitations of Si, SiN has emerged as a promising alternative CMOS-compatible material. In comparison to the SOI platform, SiN has a lower material loss and a larger optical bandgap <cit.>; however, it also has a smaller thermo-optic coefficient, greatly reducing TOPS efficiency <cit.>. Plasma-enhanced chemical vapor deposition (PECVD) and low-pressure chemical vapor deposition (LPCVD) techniques have been widely studied for their production of high-quality, near-stoichiometric Si_3N_4 layers with low intrinsic losses <cit.>, which, combined with continuous advances in fabrication processes <cit.>, has enabled demonstrations of ultra-low propagation losses in waveguide structures <cit.>. Despite its lower third-order nonlinearity in comparison to Si, the larger energy bandgap renders it virtually immune to TPA in the telecom bands <cit.>, enabling the use of high-Q-factor cavities to enhance the overall efficiency and overcome the reduced nonlinearity. Improvements in SiN film deposition <cit.> and damascene processes <cit.> have provided thick, crack-free films enabling dispersion engineering at telecom wavelengths, which has enhanced various nonlinear processes. An analysis of the dispersion properties of Si_3N_4 waveguides has been conducted for the effective phase-matching of the SFWM process by Hong et al. <cit.>. Si_3N_4 resonators have successfully shown entangled-photon pair generation for various applications, such as quantum sensing and networking <cit.>. As a compromise between Si and SiN, USRN has been explored as a method of enhancing the third-order nonlinear coefficient while still maintaining a wider bandgap. However, the fabrication process of USRN needs significant development to reduce propagation losses in the waveguides <cit.>. As with Si, the lack of an intrinsic EO effect in SiN has driven the move towards hybrid material platforms, in particular with a number of works incorporating LN <cit.> and BTO <cit.> thin films, including demonstrations at cryogenic temperatures <cit.>. In a similar fashion, hybrid architectures with SOI or III-V layers have become prevalent in order to augment SiN and overcome some of its limitations <cit.>. Lithium niobate, renowned for its strong EO effect and second-order nonlinearity, has been a cornerstone of high-speed modulators for decades <cit.>. Similarly, SPDC has long been demonstrated using PPLN waveguides in bulk substrates, based on Ti in-diffusion <cit.> or proton exchange <cit.> processes. Due to the weak index contrast of these diffused waveguides, chip-scale demonstrations have largely been restricted to no more than half a dozen components. Along with complexity, the nonlinear efficiency of these devices has also been limited. Developments in wafer bonding over the last decade have enabled the commercial availability of high-quality smart-cut wafers with submicrometer-thick LN films, spurring significant advances in the Lithium-Niobate-on-Insulator (LNOI) platform. In particular, demonstrations of low propagation losses at visible <cit.> and telecom <cit.> wavelengths have resulted in significant interest for quantum applications. A relatively large χ^(3) nonlinearity for SFWM facilitates the generation of Kerr combs <cit.>, while using PPLN waveguides with a large second-order nonlinear coefficient enables an efficient SPDC process <cit.>.
SPDC efficiency in LNOI waveguides has typically been plagued by imperfect phase matching due to variations in film thickness and sub-micron domain widths, though recently the demonstrated efficiency has been greatly improved through adaptive poling methods, achieving close to the theoretical efficiency <cit.>. Alternatively, the efficiency of SPDC can be further enhanced by cavity-based structures <cit.>. The enhanced nonlinear efficiency offered by LNOI waveguides has subsequently reduced the limitations of photorefractive (PR) effects at the higher optical intensities often encountered in bulk or diffused-waveguide demonstrations. While LNOI devices can still suffer from PR effects, their reduction through annealing and removal of the oxide cladding layer has also recently been studied <cit.>. The successful generation of entangled photon pairs in the LN platform <cit.> is serving as a pathway towards fully reconfigurable QPICs <cit.>. Despite these advantages, the high cost of LNOI wafers compared to CMOS-compatible wafers poses a challenge, making it difficult to scale up the fabrication process for high-volume production. Recently, quantum light sources have also been demonstrated on silicon carbide (SiC), a CMOS-compatible semiconductor material that hosts a variety of promising colour centres <cit.>. SiC significantly enhances emission from colour centres and also exhibits χ^(2) and χ^(3) optical nonlinearities for efficient optical frequency conversion <cit.>. The integration of EOMs, high-Q microresonators, and photonic crystal nanocavities represents a milestone in the advancement of SiC photonic integrated devices <cit.>. Recent studies have shown that the 4H polytype of SiC exhibits an n_2 comparable to that of Si <cit.>, enabling the demonstration of a photon pair source based on SFWM <cit.>. SiC's adoption has been limited in part by the commercial availability of high-quality SiC wafers produced through wafer bonding. This availability is expected to improve in the coming years, positioning SiC as a potential future competitor to LN given its improved resistance to photorefractive effects <cit.> and its non-zero χ^(2), χ^(3) and EO coefficients. High-index glass (Hydex), which is a doped fused silica glass, is another CMOS-compatible material <cit.> with a refractive index in the range of 1.5 to 1.9 <cit.>. Hydex integrated waveguides exhibit high nonlinearity and low linear and nonlinear losses, making them promising for nonlinear all-optical signal processing applications <cit.>. Other glasses like the chalcogenide glasses, which include As_2S_3 and As_2Se_3, also exhibit high nonlinearities and have attracted considerable attention <cit.>. They have excellent transparency in the mid-IR region and are suitable for photon pair generation via SFWM <cit.>. Another group of CMOS-incompatible platforms comprises III-V semiconductor materials <cit.>, including GaAs <cit.>, AlGaAs <cit.>, InP <cit.>, InAs <cit.>, AlN <cit.>, and InSb <cit.>. These materials are utilized for generating light across a wide spectrum, from visible to telecommunication wavelengths, owing to their direct bandgap. Among them, AlGaAs is particularly notable for its high third-order nonlinearity and its capability to mitigate the effects of TPA through adjustments in the Al concentration.
However, despite these advantages, achieving system-level demonstrations for quantum information processing on this platform remains challenging due to its CMOS incompatibility and the limited availability of optical components. § HIGH-DIMENSIONAL ENTANGLED PHOTON SOURCES High-dimensional entanglement is advantageous for various quantum technologies such as quantum computation and communication. High-dimensional entanglement enhances the security against potential eavesdropping attempts in quantum key distribution (QKD) <cit.>. It also increases the information capacity and improves the error tolerance in measurement-based quantum computing <cit.>. Such applications need a source of high-dimensional entanglement, which comes for free via the conservation of energy and momentum in nonlinear processes like SFWM and SPDC. High-dimensional entanglement has often been associated with the entanglement of multiple qubits, e.g. the three-particle Greenberger–Horne–Zeilinger (GHZ) states <cit.>. A photon has properties that are naturally amenable to a qudit description, e.g. path, frequency-bin, time-bin, and transverse mode, and entanglement in these properties leads to high-dimensional entanglement even with just two photons. The Hilbert space becomes richer as the photon number increases, as in multi-photon multi-DoF entanglement. In practice, the number of modes that can be coherently generated, measured, and controlled restricts the dimensionality of entanglement. There have been plenty of demonstrations of high-dimensional entanglement in bulk optics, made possible by the significant χ^(2) nonlinearity of beta-barium borate (BBO) <cit.> and periodically poled potassium titanyl phosphate (ppKTP) <cit.> crystals. The advancement of quantum technologies hinges on producing consistent and reproducible quantum devices, which is challenging to achieve using bulk optics. Integrated photonics offers advantages in scalability, phase stability, replicability, and miniaturization over bulk optical devices, hence the motivation for putting entangled photon sources on chip. On-chip high-dimensional entanglement via SPDC is achieved using commercially available PPLN <cit.> and AlN waveguides <cit.>. High-dimensional entanglement can also be achieved on-chip via SFWM in SOI <cit.> and SiN <cit.> platforms. Polarisation is a convenient degree of freedom for investigating entanglement using bulk optics because of the availability of high-brightness entanglement sources. A high-intensity type-II SPDC source of polarisation-entangled photon pairs was demonstrated in 1995, later followed by "sandwich" sources combining two type-I crystals <cit.>. The first on-chip polarisation-entangled source via SPDC was demonstrated on a waveguide integrated on a PPLN substrate in 2001 <cit.>, with remarkable brightness. Later works used Bragg-reflection waveguides <cit.> and quasi-phase-matched waveguides <cit.> to improve phase matching and efficiency at the expense of more complex waveguide designs. The phase matching condition is more readily achieved in SFWM, and polarisation-entangled photons have been generated from CMOS-compatible silicon waveguide devices <cit.>. Beyond Si, AlGaAs waveguides, exploiting non-vanishing polarisation-mode dispersion, have demonstrated polarization-entangled photon pairs via an orthogonally-polarized SFWM process <cit.>. High-dimensional entanglement using polarisation is made possible by having multiple photons, as in a GHZ state.
The GHZ state has inspired the development of high-dimensional graph states and cluster states which are relevant to quantum computing <cit.>. Multiple polarisation-entangled photon pair sources on a single chip enable the generation of multi-photon entangled states. A four-photon polarization-encoded quantum states created via degenerate SFWM within a spiralled Si waveguide in Sagnac configuration has been demonstrated <cit.>, achieving a detection rate of 0.34 at a modest pump power of 600 and a fidelity of 0.78±0.02. Path is another degree of freedom that is convenient to implement on an integrated platform—each waveguide corresponds to a possible path that the photon takes. Hong-Ou-Mandel interference has been demonstrated by photons at 1.5μm, generated by SFWM in two independent Si wire waveguides <cit.>. Indistinguishable photons like these can be fed to complex photonic circuits on-chip to generate high-dimensional entangled states. Reconfigurability of the circuit on chip is crucial to enabling programmable generation and processing of quantum information <cit.>. Photon sources and reconfigurable elements have been demonstrated with SPDC in LN combined with controllable EO phase shifter achieving 92.9 ± 0.9 % visibilty across the C- and L-bands <cit.>. On-chip SFWM and programmable phase-shifters have also been demonstrated in Si achieving a very high visibility of 100.0 ± 0.4 % <cit.>. Microring resonators have also been used to enhance the efficiency of SFWM. The combination of microring resonators and a programmable MZIs have enabled the production of N00N states that have a 96± 2.1 % visibility <cit.>. The microring resonator on this chip had high brightness (1×10^5 photons/s/mW^2/GHz) and the measured CAR is greater than 500. The entanglement of path-encoded photons have been characterised via quantum state tomography and violation of a CHSH-Bell inequality in <cit.>. Aside from microring resonators, nanostructured photonic crystal slab waveguides (PCSWs) can also amplify nonlinear interactions by leveraging slow light <cit.>. Generation of photons in PCSWs have been demonstrated in <cit.>. Scaling up the high-dimensional entanglement in path is achieved by integrating multiple sources on-chip. A foundry-fabricated chip was used by Manfreda-Schulz et. al. <cit.> to demonstrate entanglement of photons from four interferometrically coupled dual Mach–Zehnder microring resonators as photon pair sources. An impressive 15×15-dimensional entanglement has been achieved in a chip that integrates more than 550 photonic components on a single chip, including 16 identical photon-pair sources <cit.>. The circuit generates multidimensional bipartite entangled state across 16 optical modes by coherently pumping 16 photon pair sources. The generated photon pairs are separated by asymmetric MZI filters, routed through crossers for local state manipulation, and coupled off the chip by grating couplers to be detected by superconducting nanowire detectors <cit.>. A further demonstration for a very large scale integration has been done by Bao et.al.<cit.>, involving a monolithic integration of 2,446 components on a single chip. The 12 mm×15 mm ‘Boya’-graph-based device combines arrays of integrated 32 SFWM photon-pair sources with other linear optical elements to erase the which-source information and to generate multiphoton, multidimensional graph states. Qudits can also be encoded using frequency. 
Frequency modes that readily travel in an optical fibre offer a practical avenue to scale up high-dimensional entanglement. Quantum frequency combs (QFCs) feature multiple phase-stable frequency modes within a single spatial mode <cit.>. Quantum information in these quantum frequency modes is a rapidly growing area of research <cit.>. High-dimensional frequency-bin entangled photon pair generation at telecom wavelengths via SPDC in PPLN waveguide, together with coherent control of eleven frequency bins was demonstrated in <cit.>. The on-chip parallel processing of QFCs utilizing an integrated aluminium nitride (AlN) platform in Sagnac configuration was demonstrated by Zhang et al. <cit.>, achieving high-visibility quantum interference and high-fidelity state control across all frequency modes. This advancement enables the deterministic separation of photon pairs in QFCs without spectral filtering, demonstrating a high-dimensional HOM interference. QFCs are also generated via SFWM in SOI micro-ring resonators, such as in <cit.> which showed 21-pairwise correlations in frequency bins spanning from 1.3 to 1.8. The compatibility of SOI with CMOS processes is advantageous, but the significant TPA at telecom wavelengths poses a fundamental limitation. This motivated studies of other CMOS-compatible platforms, e.g. high-index silica glass and SiN <cit.>. The generation of high-purity photons with large CAR values rely on narrow spectral bandwidths of the pump in the SFWM process. This requirement was addressed by Kues et.al. <cit.> and Roztocki et.al. <cit.> which used a passive mode-locked laser system relying on a nested-cavity configuration. A passive mode-locked laser was used in <cit.> for SFWM in a micro-ring resonator in high-index silica glass, showing on-chip generation of entangled qutrit states (d=3, fidelity=80.9%) and entangled ququart states (d=4, fidelity=76.6%). This demonstration also required reducing the free spectral range (FSR) of the resonator and programmable filters with a higher frequency resolution. A SiN micro-ring resonator with a FSR of 50 GHz was used in <cit.> to demonstrate on-chip qubit and qutrit frequency-bin entanglement in a frequency comb consisting of 40 mode pairs. Although frequency modes offer compatibility with telecom networks, quantum interference and the measurement of superposition states is a challenge because these require a nonlinear optical process <cit.>. An active “frequency beam splitter" which resulted in an interference visibility of 95± 2 % was developed in <cit.> by exploiting Bragg-scattering four-wave mixing in an optical fibre. Combined with phase-shifting, the frequency beamsplitter can be the basis of high-fidelity two-photon operations in the frequency domain. While the frequency beamsplitter in <cit.> can provide larger frequency shifts from a nonlinear process, it is more common to use EO frequency mixing, which is more convenient but capable of imparting only small frequency shifts. It then becomes important to have tightly spaced frequency bins, while also maintaining high brightness—two competing constraints that cannot be addressed by a single micro-ring resonator because of the inherent trade-off between these two requirements. Being able to program several integrated micro-ring resonators such that they each cover different frequency bins becomes beneficial. In <cit.>, four identical rings in SOI achieved a high brightness of 0.63 ± 0.15 MHz/(mW)^2 per comb line with a bin spacing of 15 GHz. As shown Fig. 
<ref>D, each ring is pumped at a different wavelength with mutually coherent pumps to generate entangled states up to a dimension of 16, yielding fidelities above 85% for maximally entangled Bell states. A similar work using two SOI micro-ring resonators can generate all four of the maximally entangled Bell states achieving a fidelity of 97.5% and purity close to 100% <cit.>. In both these works, tuneable MZIs facilitated the manipulation of field intensity and the relative phase for each ring during the SFWM process. The decoupling of the generation rate from the frequency bin spacing will enable more dense integration, with increased number of coherently excited rings leading to more complex quantum states for quantum information processing. The next step in improving the scalability of these on-chip sources is to incorporate the pump laser and the subsequent filtering also on-chip, making these sources less bulky and practical to use outside laboratories. A fully integrated source of frequency-entangled photons was demonstrated in <cit.>. Their design integrates a laser cavity, a tuneable noise suppression filter (>55 dB) utilizing the Vernier effect, and a SiN microring for the generation of entangled photon pairs. The chip achieves a pair generation rate of 8,200 counts s^-1, with state fidelity of 99% and interference visibility of 96%. With the advent of advanced frequency-entangled sources, advanced characterisation techniques need to be developed. While joint spectral intensity measurements show the correlations among the frequency-bin pairs, these are insensitive to phase coherence and hence not useful for characterising entanglement. The active frequency mixing required to do projective measurements involves strong filtering of the input quantum state <cit.>. The alternative, which is to design programmable qudit gates for quantum state tomography inevitably increases in complexity as the dimension increases <cit.>. In <cit.>, rather than measuring in the standard informationally complete bases, the complex and random frequency mixing behaviour of EOMs was exploited in conjunction with pulse shaping. The result are randomised operations, which together with the coincidence measurements can be fed to a Bayesian algorithm to obtain the density matrix. The Bayesian approach is numerically more complicated than the standard quantum state tomography, but its applicability to quantum state tomography of a bipartite high-dimensional entangled state generated on a SiN micro-ring resonator has been demonstrated in <cit.> (Fig. <ref>E). With this technique, the density matrix of an 8×8-dimensional Hilbert space—the highest dimension to date for frequency bins—was obtained. Temporal degree of freedom is another property of photons that can be entangled. An 11-dimensional time-bin entangled state was generated via SPDC in KNbO_3 nonlinear crystal, with visibility of 91±6% <cit.>. The footprint of the photon pair generating source was reduced by utilising AlGaAs Bragg-reflection waveguides, which generated high-dimensional time-bin entanglement with a visibility of 94.2±9% on a chip <cit.>. The first on-chip high-dimensional entanglement using 200 silicon photonic crystal nanocavities was proposed by Takesue et.al. <cit.> (Fig. <ref>A), demonstrating a high-dimensional time-bin entangled photon source with much smaller footprint. The work by Fang et.al. 
<cit.>, aimed at achieving high-capacity quantum communication using a silicon nanowire waveguide photon pair source, demonstrated the distribution of time-bin entangled photons independently into 3(time)×14(wavelength) channels. Frequency-bin encoding involves measuring the frequency/wavelength correlations without any information about the time of arrival of photons, while time-bin encoding measures arrival time correlations of photons. Time-frequency encoding combines both, capturing correlations in both the domains. In frequency-bin and time-bin entanglement, the precise frequency measurements increase uncertainty in arrival times and vice versa, making joint time-frequency measurements essential for a complete characterisation of entangled states. Time-energy entanglement is tested with the Franson interferometer by sending pairs of photons through unbalanced interferometers with different path lengths. By measuring the interference fringes resulting from the phase differences between the interferometer arms, nonlocal quantum correlations are verified <cit.>. Some examples of experimental demonstrations of time-frequency/ time-energy entanglement can be found in <cit.> High-dimensional time-energy entanglement generation on fiber-ppKTP was reported by Cheng et.al. <cit.> with a visibility of 99.8%. The first experimental realization of qudit cluster states using time-frequency entanglement, was demonstrated by Reimer et.al. <cit.> via SFWM in a Hydex microring using a series of mode-locked pulses, to perform high-dimensional one-way quantum processing (Fig. <ref>B). Beyond spectral and temporal degrees of freedom, the transverse modes present in multimode optical waveguides enhances parallelism and scalability compared to single-transverse-mode configurations. Using such spatial modes are reminiscent of optical communication systems that enhance information capacity via spatial multiplexing techniques <cit.>. Entanglement in transverse DoF can be converted to path and polarization entanglement, offering control over multiple degrees of freedom simultaneously <cit.>. High-dimensional spatial mode entanglement via type-II SPDC process was demonstrated by Bharadwaj et.al <cit.> using a three-waveguide directional coupler in a PPLN substrate. The width and the height of the three-waveguide coupler <cit.> were designed to achieve a two-photon output state occupying three different transverse spatial modes. Mohanty et.al. <cit.> designed multimode SiN waveguides supporting three spatial modes (TE_0, TE_1, TE_2) and demonstrated tunable quantum interference between pairs of photons in different transverse spatial modes using a grating structure along the multimode waveguide (Fig. <ref>A). The demonstration of intermodal four-wave mixing process in integrated multimode silicon waveguides <cit.> has laid a foundation for future advancements in silicon photonics. Building on these works, Feng et. al. <cit.> made an advancement by reporting the first-ever on-chip multimode SFWM silicon waveguide photon pair source (Fig <ref>B), paving the way towards higher-dimensional quantum systems. Transverse-mode entangled photon pairs were verified across various frequency channels within ≈2 THz bandwidth, achieving a 0.96±0.01 high-fidelity Bell state. Besides high-dimensional encoding in any one DoF, the simultaneous entanglement in multiple DoF—hyperentanglement—enhances the quantum information processing by encoding more information per photon. 
Hyperentangled states increase channel capacity and noise resistance, improving QKD protocols <cit.>. The first experimental demonstration of a 36-dimensional quantum system entangled in polarisation, spatial mode and time-energy was done in <cit.>, generating hyperentanglement. Hyperentangled photon pairs have since been generated in various configurations, including frequency-polarisation <cit.>, polarisation-energy-time <cit.>, path-frequency <cit.>, polarisation-spatial modes <cit.>, implemented on integrated platforms through the SPDC process. Hyperentanglement was also shown in Bragg reflection waveguides <cit.> and AlGaAs ridge waveguides <cit.>. Recent years have seen much exploration of hyperentangled photon pair generation via micro-ring cavities as demonstrated by Suo et al. <cit.>. In this work <cit.>, a scheme of hyper-entangled polarization and energy-time photon pair generation based on a silicon micro-ring cavity achieving >94% visibility has been demonstrated. Work by Vendromin et al. <cit.> introduced a system comprising four SiN microring resonators on a chip (Fig. <ref>B-C), capable of generating polarization and frequency-bin entangled photon pairs. In general, photonic structures are described by physical geometric dimensions such as zero-, one-, two-, and three- dimensional structures. When an additional degree of freedom—synthetic dimension—is combined with the geometrical dimensions, it enables the exploration of higher-dimensional synthetic spaces within simpler, lower-dimensional physical structures <cit.>. Synthetic space is formed by utilizing various photonic modes such as frequency, OAM, or temporal modes, and coupling these modes together to form a lattice structure <cit.>. Synthetic space is not attached to any DOF, it is something that is uniquely convenient to implement in photons because of the photonic engineering which is possible. Based on such an approach, quantum-correlated synthetic crystal is demonstrated by Javid et al. <cit.> which is based on a coherently controlled broadband QFC produced in a LNOI microresonator incorporating a PPLN region for QFC generation and an electrode for EOM (Fig. <ref>F). This approach leverages the time–frequency entanglement within the comb modes to significantly expand the dimensionality of the synthetic space to 400×400 synthetic lattice with electrically controlled tunability. § APPLICATIONS Numerous experiments have explored the generation and distribution of high-dimensional entangled states using on-chip SFWM and SPDC. Future advancements in optical telecommunications involving quantum photonics are anticipated to utilize low-loss optical fiber networks and high-speed photonic interconnect technologies. Consequently, current research in quantum photonics predominantly focuses on scalable and reliable 1550 nm sources, modulators, circuits, and detectors. However, the integration of generation, manipulation, and measurement sections onto a single chip creates new challenges. The manipulation and measurement of high-dimensional entangled states on-chip necessitates both photon sources with high brightness and the integration of various active optical components. Quantum light applications range from quantum communication and computing to imaging and sensing. Below, we highlight recent demonstrations utilizing chip-scale quantum light sources for generating high-dimensional entanglement. 
To address the maturity and scalability of silicon photonic sources to generate multidimensional quantum entanglement, Wang et.al. <cit.> demonstrated 16 identical spiral waveguides and over 550 optical components on a single chip. Their work demonstrates that compact high-dimensionally-entangled photon sources, and certification of randomness and entangled states via Bell inequalities are possible on a fully integrated platform, paving the way for high-dimensional quantum communication. A hardware platform supporting the integration of various quantum information carrier components is required to implement quantum algorithms. The number of integrated components on a single chip has seen exponential growth, currently reaching a record of 2,500 components for monolithic integration. Bao et. al. <cit.> integrated an array of 32 spiral SFWM degenerate photon-pair sources to show the generation of genuine multipartite multidimensional quantum entanglement. Interconnects are required for distributed quantum computing regardless of the physical platform that does the quantum computation. Interconnects are also important for quantum networks that feature several nodes for effective entanglement distribution. For both quantum computing and quantum communication, interconnects are necessary for architectural flexibility. Recent advancements demonstrate interconnection between multiple chips, pointing to the feasibility of large-scale practical entanglement distribution. Wang et al. <cit.> achieved the conversion between path and polarization entanglement across chips by demonstrating chip-to-chip entanglement distribution using two spiral waveguides. The integration of microring resonators and programmable quantum photonic circuits have facilitated chip-to-chip quantum teleportation and entanglement swapping of frequency-encoded quantum states <cit.>. Hybrid multiplexing using polarisation- and mode-encoding can also be used for distributing multiple multidimensional entangled states across multiple chips linked by few-mode fibres <cit.>. Quantum key distribution is a major quantum technology that provides unprecedented security of the keys. Because QKD is largely about transmission of keys, QKD is naturally photonic. Increased quantum information capacity can be achieved using qudits. Using integrated sources greatly improves the scalability of QKD, as in <cit.> and <cit.> which used entangled frequency modes which are compatible with telecom optical fibres. Although theoretically secure, implementations of QKD are open to loopholes that undermine security. Measurement-device-independent QKD (MDI-QKD) has been proposed to tackle the imperfections related to the measurement devices <cit.>. Quantum computation is another application for which a photonic system is attractive. Quantum computation is possible using just linear optics <cit.>. Post-selection is used to prepare entangled states, which is possible with gate teleportation, but is very resource-intensive. While deterministic gate requirements pose practical challenges, the measurement-based quantum computing (MBQC) model offers a more resource-efficient alternative. Any circuit-based quantum computation can be mapped to MBQC <cit.>. Reimer et.al. <cit.> utilized SFWM within a microring resonator to implement three-level, four-partite cluster states formed by two photons in the time and frequency domain for high-dimensional one-way quantum operations. 
Operating on transverse modes on a silicon photonic chip, <cit.> demonstrated a two-qubit quantum gate pointing to the possibility of universal transverse mode-encoded quantum operations Fig. <ref>C. Operating on frequency modes, Lu et.al. <cit.> designed a CNOT gate for a QFC coming from a PPLN waveguide. Building on <cit.> more modes were used in <cit.> to encode qudits in time and frequency DoFs, reporting deterministic two-qudit gates on the chip silicon nitride microresonator. A programmable qudit-based quantum processor was demonstrated by Chi et.al <cit.> on a silicon-photonic integrated circuits. This implementation allowed more than one million configurations that demonstrate high-fidelity quantum state preparation, operation, and measurement, benchmarked via different quantum algorithms. The goal for many quantum computing systems is to demonstrate a quantum computational advantage—some problems are more efficiently solved by a quantum computer than with a classical computer. It is expected that the competition between the best classical strategies and what can be achieved using a quantum computer will continue. Boson sampling is one class of problems for which we have seen such a competition emerge. Boson sampling refers to sampling probability distributions of the output when an n- boson state undergoes linear scattering—a universal quantum computer is not necessary. Because photons interact only linearly in a linear-optical quantum network, boson sampling is suited to photonic systems and there have been several photonic experiments <cit.>. Wang et.al. <cit.> implemented high performance multiphoton boson sampling with quantum dot–micropillar single-photon sources. The setup validated boson sampling for three-, four- and five-photons, achieving 4.96 kHz, 151 Hz and 4 Hz sampling rates, respectively. Another experiment by Wang et.al. <cit.> scaled up the boson sampling with 20 photons injected into a 60-mode interferometer, with the output Hilbert space reaching 3.7 × 10^14. On chip, Paesani et.al. <cit.> reported the generation of quantum states of light with up to eight photons, implementing the standard, scattershot, and Gaussian boson sampling protocols in the same silicon chip. The advancements in wafer-scale fabrication processes are driving a significant shift, with large-scale circuits beginning to transition towards utility-scale devices. Such progression highlights the growing feasibility and potential of integrating high-dimensional entangled photon sources into practical quantum technologies, paving the way for broader applications in quantum communication and computing. § OUTLOOK AND CONCLUSION Photonic integration presents a robust strategy for the miniaturization and scaling of current state-of-the-art quantum technologies <cit.>. This integration is critical for achieving the fault tolerance and error correction necessary for the realization of practical and scalable quantum computing systems <cit.>. The wide interest in photonic quantum computing, from both academia and industry, will fuel future advances in both photonic hardware and software (e.g. algorithms and benchmarking that are uniquely suitable to photons) <cit.>. Regardless of the photonic quantum technology, entangled photon sources are important. Having these sources miniaturised on a chip is beneficial for real-world applications. The source of entangled photons is just one (albeit important) component. 
To achieve widespread practical applications, components for processing and detecting the entangled quantum states should be integrated on chip too. This article summarised recent developments in generating entangled qudits on-chip. These developments heavily rely on integration. No single material platform excels in all metrics necessary for quantum applications. Achieving full system integration on a single chip involves combining photon sources, single-photon detectors, lasers, and modulators into a unified platform. This integration is essential for creating robust, compact, scalable, and high-performance quantum photonic circuits. The mature silicon-on-insulator CMOS fabrication techniques have enabled the integration of a large number of photonic components monolithically on a single silicon chip <cit.>. However, two-photon absorption at increased pump powers makes it difficult to enhance the pair generation rate in Si. Conversely, the negligible two-photon absorption in SiN and the fast on-chip modulators in LN favour the next generation of integrated quantum photonics. These factors drive the adoption of hybrid and heterogeneous integration strategies <cit.> that leverage the strengths of each platform to achieve optimal performance across the various components necessary for quantum computing <cit.>. There is another technique—inverse design—which has significantly mitigated fabrication and technical challenges in silicon nanophotonics by utilizing computational approaches to discover optimal optical structures based on desired functional characteristics <cit.>. Inverse design employs algorithmic techniques, such as genetic and gradient-based algorithms, to optimize structures over a vast design space, enabling the creation of devices with superior performance metrics <cit.>. Inverse-designed passive components, such as mode multiplexers and beam splitters for silicon photonic circuits, are already established. Expanding inverse design to include active devices, such as modulators and lasers—which frequently limit performance in optical systems—would significantly enhance monolithic integration and advance the capabilities of integrated quantum photonics <cit.>. Over recent decades, the field has overcome numerous technological and manufacturing challenges. This rapid progression has fueled immense anticipation for the realization of large-scale integrated nonlinear photonics in quantum computing <cit.>, quantum communication <cit.>, neuromorphic computing <cit.>, and quantum optics <cit.>. In the coming years, qudit-based technology will need further research and development to precisely control integrated sources, along with other photonic components, in order to showcase quantum operations on a single photonic device.
http://arxiv.org/abs/2409.03744v1
20240905175651
Halving the Cost of Quantum Algorithms with Randomization
[ "John M. Martyn", "Patrick Rall" ]
quant-ph
[ "quant-ph" ]
Center for Theoretical Physics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA IBM Quantum, MIT-IBM Watson AI Lab, Cambridge, Massachusetts 02142, USA IBM Quantum, MIT-IBM Watson AI Lab, Cambridge, Massachusetts 02142, USA § ABSTRACT Quantum signal processing (QSP) provides a systematic framework for implementing a polynomial transformation of a linear operator, and unifies nearly all known quantum algorithms. In parallel, recent works have developed randomized compiling, a technique that promotes a unitary gate to a quantum channel and enables a quadratic suppression of error (i.e., ϵ→ O(ϵ^2)) at little to no overhead. Here we integrate randomized compiling into QSP through Stochastic Quantum Signal Processing. Our algorithm implements a probabilistic mixture of polynomials, strategically chosen so that the average evolution converges to that of a target function, with an error quadratically smaller than that of an equivalent individual polynomial. Because nearly all QSP-based algorithms exhibit query complexities scaling as O(log(1/ϵ))—stemming from a result in functional analysis—this error suppression reduces their query complexity by a factor that asymptotically approaches 1/2. By the unifying capabilities of QSP, this reduction extends broadly to quantum algorithms, which we demonstrate on algorithms for real and imaginary time evolution, phase estimation, ground state preparation, and matrix inversion. MIT-CTP/5756 § INTRODUCTION Classical randomness plays a pivotal role in the design of quantum protocols and algorithms. In the near-term, randomized benchmarking <cit.> is central to calibrating and assessing the quality of quantum gates, and quasi-probability methods like probabilistic error cancellation and noise twirling can help reduce noise <cit.>. Similarly, random circuit sampling is central to quantum supremacy experiments <cit.>, and randomized measurements provide a powerful probe into the properties of complex quantum systems <cit.>. As we progress towards quantum advantage and early fault-tolerant quantum hardware, many lines of research aim to reduce the requirements of traditional quantum algorithms by incorporating classical randomness <cit.>. Randomized compiling is a key example of leveraging classical randomness to improve quantum computation <cit.>. As its name suggests, this process randomly compiles gates at execution time, or equivalently, promotes a unitary gate to a quantum channel that is a probabilistic mixture of unitaries. Remarkably, randomized compiling can quadratically suppress gate errors without increasing the cost of circuit synthesis. Yet, applications of this technique to quantum algorithms have so far been restricted to Trotterized Hamiltonian simulation <cit.> and phase estimation <cit.>, leaving a vacuum of applications to other algorithms. In an effort to fill this gap of randomized quantum algorithms, we propose using quantum signal processing (QSP) <cit.> as a medium for achieving widespread advantage of randomized compiling. QSP prepares a polynomial transformation of a linear operator, and has been shown to encompass nearly all quantum algorithms, from Hamiltonian simulation and quantum search, to matrix inversion and fast integer factoring <cit.>. In this work, we achieve exactly this goal.
We merge randomized compiling with QSP by developing Stochastic Quantum Signal Processing (Stochastic QSP). By virtue of randomized compiling, our construction quadratically suppresses the error in a QSP polynomial approximation of a target function. To study how this suppression impacts the cost of QSP-based algorithms, we show that an elementary result in the approximation of smooth functions implies that nearly all QSP-based algorithms achieve a query complexity that scales with the error ϵ as O(log(1/ϵ)), which we also empirically confirm. Hence the quadratic suppression of error afforded by stochastic QSP translates to an asymptotic halving of the cost of QSP-based algorithms over their deterministic counterparts (asymptotic in the limit of log(1/ϵ) dominating the cost). In realizing this cost reduction, we “combine the strengths of QSP and randomization," as hypothesized in Ref. <cit.>. An outline of this work is as follows. After reviewing QSP and other preliminary topics in Sec. <ref>, we present stochastic QSP in Sec. <ref>. We then demonstrate the versatility of our scheme by showing its compatibility with several generalizations and variants of QSP in Sec. <ref>. Finally, in Sec. <ref>, we benchmark the performance of stochastic QSP for various polynomials relevant to quantum algorithms, including those for real and imaginary time evolution, phase estimation, ground state preparation, and matrix inversion. In Appendix <ref> we review some useful results on Fourier and Chebyshev expansions of smooth functions, and in Appendix <ref> we prove some extensions of randomized compilation techniques. § PRELIMINARIES We begin by discussing the preliminary topics of this work, including notation and background concepts (Sec. <ref>), quantum signal processing (QSP) (Sec. <ref>), polynomial approximations to smooth functions (Sec. <ref>), and randomized compiling (Sec. <ref>). §.§ Notation and Background Concepts We will study functions F(x) on the domain x ∈ [-1,1], where it will be convenient to define the function norm ‖ F ‖_[-1,1] := max_x∈[-1,1] |F(x)| . Of particular interest will be functions bounded as ‖ F ‖_[-1,1]≤ 1. A convenient set of functions on this domain are the Chebyshev polynomials. The order n Chebyshev polynomial is defined as T_n(x) = cos(n arccos(x)) for integer n≥ 0. It can be shown that T_n(x) is a polynomial of degree n with parity n mod 2 (i.e., either even or odd, depending on n) and bounded magnitude ‖ T_n ‖_[-1,1] = 1. An important property of the Chebyshev polynomials is that they furnish an orthogonal basis in which an arbitrary function on x∈ [-1,1] can be expanded: F(x) = c_0/2 + ∑_n=1^∞ c_n T_n(x) , c_n = 2/π∫_-1^1 F(x) T_n(x)/√(1-x^2) dx , where c_n are the Chebyshev coefficients. If F(x) is a degree d polynomial, then this series terminates at order n=d. In this work we will also study unitary and non-unitary transformations. We will denote an operator by a Latin character, say A, and the associated channel by the corresponding calligraphic character: 𝒜(ρ) = A ρ A^†. In analyzing these operators, we will consider the spectral norm (equivalently, the operator norm), defined as ‖ A ‖ = sup_|ψ⟩‖ A |ψ⟩‖ , where the supremum is taken over normalized states ⟨ψ | ψ⟩ = 1. It can be shown that this norm equates to the maximum singular value of A. We will also use the trace norm (equivalently, the Schatten 1-norm) ‖ A ‖_1 = tr( √(A^† A)) , which equates to the sum of singular values of A.
This norm is associated with the trace distance between two density matrices: d_tr (ρ,σ) = 1/2 ‖ρ - σ‖_1 . This bounds the discrepancy in expectation values as | tr(ρ O) - tr(σ O) | ≤ ‖ O ‖ ‖ρ - σ‖_1 = 2 ‖ O ‖ d_tr (ρ,σ). Another relevant metric is the diamond norm, defined for a channel ℰ as: ‖ℰ‖_♢ = sup_ρ ‖ (ℰ⊗ℐ)(ρ) ‖_1 , where this supremum is taken over normalized density matrices ρ that live in a possibly enlarged Hilbert space, and ℐ is the identity channel ℐ(ρ) = ρ. The diamond norm induces the diamond distance between two channels: d_♢ (ℰ, ℱ) := 1/2 ‖ℰ - ℱ‖_♢ = 1/2 sup_ρ ‖ (ℰ⊗ℐ)(ρ) - (ℱ⊗ℐ)(ρ) ‖_1. For channels 𝒜(ρ) = A ρ A^† and ℬ(ρ) = B ρ B^† with spectral norms ‖ A ‖, ‖ B ‖≤ 1, the diamond distance is upper bounded as (see Lemma 4 of Ref. <cit.> for proof): d_♢ (𝒜, ℬ) = 1/2 ‖𝒜 - ℬ‖_♢≤‖ A - B ‖ . §.§ QSP The quantum signal processing (QSP) algorithm is a systematic method of implementing polynomial transformations on a quantum subsystem <cit.>. QSP works by interleaving a signal operator U and a signal processing operator S, both taken to be SU(2) rotations about different axes. Conventionally, the signal operator is an x-rotation through a fixed angle and the signal processing operator a z-rotation through a variable angle ϕ: U(x) = [ x i√(1-x^2); i√(1-x^2) x ], S(ϕ) = e^iϕ Z. Then, by selecting a set of d+1 QSP phases ϕ⃗ = (ϕ_0, ϕ_1, ... , ϕ_d) ∈ℝ^d+1, one can construct the following QSP sequence as an interleaved product of U and S, whose matrix elements are polynomials in x: U_ϕ⃗(x) = S(ϕ_0) ∏_i=1^d U(x) S(ϕ_i) = [ P(x) iQ(x)√(1-x^2); iQ^*(x)√(1-x^2) P^*(x) ], where P(x) and Q(x) are polynomials parameterized by ϕ⃗ that obey: 1. deg(P) ≤ d, deg(Q) ≤ d-1 2. P(x) has parity d mod 2, and Q(x) has parity (d-1) mod 2 3. |P(x)|^2 + (1-x^2) |Q(x)|^2 = 1, ∀ x ∈ [-1,1]. This result implies that one can prepare polynomials in x by projecting into a block of U_ϕ⃗, e.g. ⟨ 0| U_ϕ⃗ | 0⟩ = P(x). While this class of polynomials is limited by the conditions of Eq. (<ref>), one can prove that by projecting into other bases (e.g., the | + ⟩⟨ + | basis), and incorporating linear-combination-of-unitaries circuits <cit.>, QSP can encode an arbitrary degree-d polynomial that need only obey ‖ P ‖_[-1,1]≤ 1 <cit.>. For any such polynomial, the corresponding QSP phases ϕ⃗ can be efficiently determined classically <cit.>, thus amounting to merely a pre-computation step. In addition, as per Eq. (<ref>), the cost of realizing such a degree-d polynomial is d queries to U(x). Remarkably, QSP can be generalized to implement polynomial transformations of linear operators through its extension to the quantum eigenvalue transformation (QET) <cit.> and quantum singular value transformation (QSVT) <cit.>. This is achieved analogously to QSP: provided access to a unitary that block-encodes a linear operator A, we can construct an operation that encodes a polynomial transformation P(A): U[A]=[ A ·; · · ]↦ U_ϕ⃗[A] = [ P(A) ·; · · ] , where the unspecified entries ensure unitarity. Paralleling Eq. (<ref>), this operation is constructed as an interleaved sequence of U[A] and parameterized rotations. In essence, this applies QSP within each eigenspace (or singular value space) of A, such that the resulting sequence encodes a degree-d polynomial transformation P(A) acting on the eigenvalues (singular values) of A. The cost of realizing the polynomial P(A) is d queries to the block-encoding, translating to a runtime O(d). Lastly, while Eq.
(<ref>) caters to an encoding in the |0⟩⟨ 0| matrix element, one can more generally take A to be accessed by orthogonal projectors Π, Π' as A=Π U[A] Π'. To wit, Eq. (<ref>) corresponds to the conventional choice Π = Π' = |0⟩⟨ 0 | ⊗ I. QET and QSVT are powerful algorithms, shown to unify and simplify most known quantum algorithms, while maintaining near-optimal query complexities <cit.>. Explicitly, an algorithm can be cast into the language of QET/QSVT by constructing a polynomial approximation to a matrix function that solves the problem of interest. For instance, in Hamiltonian simulation, one can design a polynomial P(H) ≈ e^-iHt to simulate time evolution <cit.>. Algorithms encompassed in this framework include the primoridal algorithms of search, simulation, and phase estimation <cit.>, as well as more modern algorithms, like matrix inversion <cit.>, and ground state preparation <cit.>. Finally, throughout this work, we use the term “QSP" in place of “QET" and “QSVT" for simplicity, following the conventional parlance. However, this should be understood to be QET/QSVT when acting on a linear operator rather than a scalar. §.§ Polynomial Approximations to Smooth Functions As emphasized above, the utility of QSP lies in generating matrix functions without the need to unitarily diagonalize the underlying matrix. Specifically, QSP enables the approximation of a matrix function F(A), while remaining agnostic to the eigenvalues of A. This is achieved by selecting a polynomial approximation to the target function P(x) ≈ F(x) and implementing P(A) with QSP, where the accuracy of this approximation can be tuned by increasing the degree of P(x). Because the cost of a QSP-based algorithms scales with the polynomial degree, their complexity rests on results in approximation theory. To make this connection concrete, let us consider a few examples encountered in QSP-based algorithms. First, consider the decaying exponential function e^-β (x+1) for a parameter β>0, rescaled so that e^-β (x+1)_[-1,1] = 1. In QSP, this function is employed to prepare thermal states and estimate partition functions at inverse temperature β <cit.>. It is well established that e^-β (x+1) can be approximated to within additive error ϵ over x∈[-1,1] by a polynomial of degree d = O(√(β)log(1/ϵ)) <cit.>. As anticipated, the degree increases with decreasing error ϵ and increasing β. Similarly, consider the error function erf(kx) = 2/√(π)∫_0^kx e^-t^2 dt for a parameter k>0, which is naturally bounded as erf(kx)_[-1,1] < 1. This function is used in QSP to estimate the step function by using a large value of k, with notable applications to phase estimation <cit.> and ground state preparation <cit.>. Prior work has proven that erf(kx) can be approximated to within additive error ϵ over x∈[-1,1] by a polynomial of degree d = O(k log(1/ϵ)) <cit.>. As before, the degree grows with decreasing error and increasing k. Observe that in both of these examples, the degree of the polynomial approximation scales with the error as O(log(1/ϵ)). This scaling is a generic feature of polynomial approximations to smooth functions. To understand this phenomenon, consider expanding a function on x∈[-1,1] in the basis of Chebyshev polynomials as in Eq. (<ref>): F(x) = c_0/2 + ∑_n=1^∞ c_n T_n(x). As we prove in Appendix <ref>, if F(x) is C^∞ function (i.e., continuous and infinitely differentiable), then its Chebyshev coefficients decay super-polynomially as |c_n| = e^-O(n^r) for some exponent r > 0. 
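As a quick numerical aside (not part of the original analysis), this decay is easy to observe directly: the short Python sketch below evaluates the Chebyshev coefficients of the two examples above by quadrature, using the substitution x = cos θ in the coefficient integral given in the Preliminaries, and tabulates the degree at which the coefficient tail drops below a target error. The helper name chebyshev_coeffs and the parameter values are illustrative choices of ours.

import numpy as np
from scipy.special import erf

def chebyshev_coeffs(f, num_terms, quad_points=4096):
    # c_n = (2/pi) * int_{-1}^{1} f(x) T_n(x)/sqrt(1-x^2) dx, evaluated with the
    # substitution x = cos(theta) and a midpoint rule at Chebyshev-Gauss nodes.
    theta = np.pi * (np.arange(quad_points) + 0.5) / quad_points
    fx = f(np.cos(theta))
    n = np.arange(num_terms)[:, None]
    return (2.0 / quad_points) * np.sum(fx * np.cos(n * theta), axis=1)

beta, k = 5.0, 10.0
for name, f in [("exp(-beta(x+1))", lambda x: np.exp(-beta * (x + 1))),
                ("erf(kx)",         lambda x: erf(k * x))]:
    c = np.abs(chebyshev_coeffs(f, num_terms=100))
    tail = np.cumsum(c[::-1])[::-1]      # tail[d] = sum_{n >= d} |c_n|, an upper bound on the truncation error
    for eps in (1e-3, 1e-6, 1e-9):
        d = int(np.argmax(tail < eps))   # smallest d with sum_{n >= d} |c_n| < eps
        print(f"{name}:  degree needed for error {eps:g}  ~  {d}")

Consistent with the discussion above, the reported degrees grow roughly linearly in log(1/ϵ) for both functions.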
For a large class of smooth functions, it is found that r=1 <cit.>, such that |c_n| = e^-O(n) decays geometrically. In this case, a truncation of the Chebyshev series at order d, P(x) = ∑_n=0^d c_n T_n(x), furnishes a degree d polynomial approximation to F(x) that suffers error max_x∈ [-1,1]| P(x) - F(x) | = max_x ∈ [-1,1]|∑_n=d+1^∞ c_n T_n(x) | ≤∑_n=d+1^∞ |c_n| = ∑_n=d+1^∞ e^-O(n) = e^-O(d). Hence, to guarantee an error at most ϵ, it suffices to choose a degree d = O(log(1/ϵ)). In practice many polynomial approximations are constructed via truncated Chebyshev series. This includes polynomial approximations to a wide range of functions relevant to quantum algorithms, including the decaying exponential e^-β x, trigonometric functions sin(tx), cos(tx), the step function Θ(x), the inverse function 1/x,[Although the step function Θ(x) and the inverse function 1/x exhibit singularities at x=0, and thus are not C^∞ functions, they can however be approximated by C^∞ functions by excluding a small region around their singularity. This strategy is used in practice, and renders these function amenable to results on polynomial approximations to smooth functions.] and beyond. Accordingly, these approximations all exhibit degrees that scale with the error as d = O(log(1/ϵ)), which carries over to the complexity of their corresponding QSP-based algorithms. §.§ Randomized Compiling and the Mixing Lemma In order to incorporate randomization into QSP, we will use the concept of randomized compiling. Formally introduced in Ref. <cit.>, randomized compiling can be understood as follows. In a quantum algorithm, one repeatedly executes a quantum circuit and measures the output to obtain useful information. For instance, in Hamiltonian simulation, a quantum state is repeatedly time-evolved and then measured to extract an expectation value. In randomized compiling, instead of executing the same circuit at each iteration, one executes a circuit sampled from a distribution, where each sample is drawn independently. This process can be viewed as replacing a unitary operation with a a quantum channel that is a probabilistic mixture of unitaries. Remarkably, if this mixture is chosen strategically, randomized compiling enables a quadratic suppression of error: if an individual gate approximates a target unitary with error ϵ, the randomly compiled channel can approximate the corresponding target channel with error O(ϵ^2). This error suppression is achieved at little to no increase in overhead, requiring only the ability to classically sample a distribution and implement gates on the fly. From a physical point of view, randomized compiling achieves this quadratic suppression by turning coherent errors into incoherent errors: whereas N coherent errors can add constructively to O(N), incoherent errors essentially perform a random walk and collectively average to O(√(N)) <cit.>. The precise error suppression achieved by randomized compiling is quantified by the Hastings-Campbell mixing lemma, independently proven in Refs. <cit.> in the context of gate synthesis. Let V be a target unitary operator, and 𝒱(ρ) = V ρ V^† the corresponding channel. Suppose there exist m unitaries { U_j }_j=1^m and an associated probability distribution p_j that approximate V as U_j - V ≤ a for all j, ∑_j=1^m p_j U_j - V ≤ b , for some a,b > 0. Then, the corresponding channel Λ(ρ) = ∑_j=1^m p_j U_j ρ U_j^† approximates 𝒱 as Λ - 𝒱_♢≤ a^2 + 2b . 
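For concreteness, here is a minimal single-qubit illustration of the lemma (our own aside, in Python, not taken from the original text): the target V is an X-rotation, and the two imperfect gates over- and under-rotate by the same small angle, so their first-order coherent errors cancel in the equal-weight average. The quantities a and b appearing in the lemma can then be evaluated directly as spectral norms.

import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)

def rx(theta):
    # single-qubit rotation exp(-i*theta*X/2)
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * X

theta, delta = 0.7, 1e-2                      # target angle and coherent error (illustrative values)
V = rx(theta)
U_plus, U_minus = rx(theta + delta), rx(theta - delta)
p = [0.5, 0.5]                                # equal mixture cancels the first-order error

a = max(np.linalg.norm(U_plus - V, 2), np.linalg.norm(U_minus - V, 2))
b = np.linalg.norm(p[0] * U_plus + p[1] * U_minus - V, 2)
print(f"a = {a:.2e}  (first order in delta)")
print(f"b = {b:.2e}  (second order in delta)")
print(f"mixing-lemma bound a^2 + 2b = {a**2 + 2*b:.2e}")

Running this prints a ~ delta/2 but b ~ delta^2/8, so the diamond-norm bound a^2 + 2b is quadratically smaller than the error of either individual gate.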
To see how this enables a quadratic suppression of error, suppose that one can determine an ensemble of unitaries { U_j } that each achieve spectral error a = ϵ, and a distribution p_j such that b=O(ϵ^2). Then, whereas an individual unitary U_j suffers diamond norm error O(ϵ), the mixing lemma establishes that the channel Λ achieves a quadratically suppressed diamond norm error ≤ a^2 + 2b = O(ϵ^2). Importantly, because Λ is a probabilistic mixture of the unitaries { U_j }, the cost of simulating Λ is no more expensive than the cost of sampling p_j and implementing an individual unitary U_j. The mixing lemma has been leveraged to improve a variety of quantum protocols via randomized compiling. Noteworthy examples include reducing the cost of gate synthesis <cit.>, tightening fault-tolerance thresholds for general noise models <cit.>, and enhancing the precision of state preparation <cit.>. On the algorithmic side, the mixing lemma has been merged with Trotterization to significantly reduce the complexity of chemical simulations <cit.>, double the order of Trotter formulae <cit.>, and accelerate imaginary time evolution <cit.>. Here we continue this campaign by extending randomized compiling to quantum signal processing. As QSP unifies nearly all quantum algorithms <cit.>, this opens the floodgates to a new suite of randomized quantum algorithms with reduced query complexities. § STOCHASTIC QUANTUM SIGNAL PROCESSING Our goal is to integrate randomized compiling into QSP, and thereby establish a framework for designing randomized quantum algorithms that achieve reduced query complexities. We begin in Sec. <ref> by introducing an extension of the mixing lemma for operators block-encoded in unitaries. Then, in Sec. <ref>, we use this result to develop stochastic QSP: we replace a single QSP polynomial with a channel that is a probabilistic mixture of QSP polynomials, each strategically crafted to exploit the mixing lemma and quadratically suppress error. As we show, this furnishes randomized QSP-based algorithms with roughly half the cost of their deterministic counterparts. §.§ The Mixing Lemma for Block-Encodings As emphasized in Sec. <ref>, QSP polynomials are constructed as block-encodings. That is, a QSP polynomial P(A) is encoded in a block of a higher dimensional unitary U, and accessed as P(A) = Π U Π' for some orthogonal projectors Π, Π'. Conventionally, the projectors are taken to be Π = Π' = |0⟩⟨ 0 | ⊗ I, such that P(A) is encoded in the |0⟩⟨ 0 | block of the unitary. To apply the mixing lemma to QSP, it is therefore necessary to establish a variant of the mixing lemma for operators block-encoded in unitary transformations. For the sake of simplicity, we present this theorem for an operator encoded in the |0⟩⟨ 0| block of a unitary: Let V be a unitary that block-encodes a (possibly non-unitary) target operator S as S = (⟨ 0 | ⊗ I) V ( |0 ⟩⊗ I). Suppose there exist m unitaries { U_j }_j=1^m that block-encode operators R_j as R_j = (⟨ 0 | ⊗ I) U_j ( | 0 ⟩⊗ I). Also suppose there exists an associated probability distribution p_j such that R_j - S ≤ a for all j , ∑_j=1^m p_j R_j - S ≤ b . Then, the corresponding unitary channel Λ(ρ) = ∑_j=1^m p_j U_j ρ U_j^† approximates the action of the channel 𝒱 as Λ̅ - 𝒱̅_♢≤ a^2 + 2b , where Λ̅ and 𝒱̅ are channels that access the block-encodings of Λ and 𝒱 by appending an ancilla qubit and projecting onto |0⟩: Λ̅(ρ) = (⟨ 0 | ⊗ I) ·Λ( | 0 ⟩⟨ 0 | ⊗ρ) · ( |0 ⟩⊗ I) 𝒱̅(ρ) = (⟨ 0 | ⊗ I) ·𝒱( | 0 ⟩⟨ 0 | ⊗ρ) · ( |0 ⟩⊗ I). 
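As a numerical sanity check of the lemma just stated (an illustrative aside, not from the original text), one can dilate a family of contractions into block-encoding unitaries, apply the probabilistic mixture to an input of the form |0⟩⟨0| ⊗ ρ, project back onto the |0⟩ block, and compare the resulting trace distance against the bound a² + 2b. The sketch below does this with random matrices; all variable names are ours, and scipy is used only for the matrix square root.

import numpy as np
from scipy.linalg import sqrtm

def dilate(R):
    # standard unitary dilation: a unitary whose top-left block is the contraction R (||R|| <= 1)
    m = R.shape[0]
    I = np.eye(m)
    return np.block([[R, sqrtm(I - R @ R.conj().T)],
                     [sqrtm(I - R.conj().T @ R), -R.conj().T]])

rng = np.random.default_rng(0)
m, eps = 4, 1e-2
S = rng.normal(size=(m, m)); S *= 0.9 / np.linalg.norm(S, 2)   # target contraction
E = rng.normal(size=(m, m)); E /= np.linalg.norm(E, 2)         # error direction
R = [S + eps * E, S - eps * E]                                 # errors cancel on average
p = [0.5, 0.5]

rho = np.eye(m) / m
iso = np.vstack([np.eye(m), np.zeros((m, m))])                 # the isometry |0> (x) I
rho_big = iso @ rho @ iso.conj().T                             # |0><0| (x) rho

def projected(U):
    # (<0| (x) I) U rho_big U^dag (|0> (x) I)
    out = U @ rho_big @ U.conj().T
    return iso.conj().T @ out @ iso

diff = sum(pi * projected(dilate(Ri)) for pi, Ri in zip(p, R)) - S @ rho @ S.conj().T
tr_dist = 0.5 * np.abs(np.linalg.eigvalsh(diff)).sum()
a = max(np.linalg.norm(Ri - S, 2) for Ri in R)
b = np.linalg.norm(sum(pi * Ri for pi, Ri in zip(p, R)) - S, 2)
# the lemma bounds the diamond norm by a^2 + 2b; the trace distance on any fixed input is at most half of that
print(f"trace distance = {tr_dist:.2e}   vs.  (a^2 + 2b)/2 = {(a**2 + 2*b)/2:.2e}")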
For brevity, we defer the proof of this theorem to Appendix <ref>; there, we also showcase an analogous result for arbitrary block-encodings accessed by projectors Π, Π'. Lemma <ref> indicates that by implementing the probabilistic mixture of block-encoding unitaries Λ(σ) = ∑_j=1^m p_j U_j σ U_j^† on an input state σ = |0⟩⟨ 0 | ⊗ρ, and projecting the block-encoding qubit onto |0⟩ (hence the projectors | 0 ⟩⊗ I and ⟨ 0 | ⊗ I), one can reproduce evolution under the target operator S with diamond norm error a^2 + 2b. In short, this result is proven by showing that the channel Λ̅(ρ) is equal to the probabilistic mixture of block-encoded operators ∑_j=1^m p_j R_j ρ R_j^†, to which the mixing lemma applies. Parallel to the usual mixing lemma, this result enables a quadratic suppression of error by selecting operators R_j and an associated probability distribution p_j such that a=ϵ and b = O(ϵ^2). §.§ Stochastic QSP Lemma <ref> very naturally applies to QSP. In this context, the target operation is a matrix function: S = F(A), yet the operators we have access to are QSP polynomials: R_j = P_j(A). A common goal is to simulate evolution under F(A) as ℱ_A(ρ) = F(A)ρ F(A)^†, which encompasses algorithms such as time evolution, linear systems solvers, and ground state preparation, among many others. Traditionally, one achieves this goal with QSP by finding a suitable polynomial approximation to F(x) as |P(x)-F(x)|≤ϵ, such that evolving under this polynomial with QSP as 𝒫_A(ρ) = P(A) ρ P(A)^† suffers error 𝒫_A - ℱ_A _♢≤ O(ϵ). If P(x) is a degree d polynomial, this procedure requires d queries to the block-encoding of A. Here we exploit the mixing lemma to approximate evolution under F(A) to the same level of accuracy, but at asymptotically half the number of queries to the block-encoding. We achieve this by designing an ensemble of polynomials that each approximate F(A) as P_j(A) - F(A)≤ O(√(ϵ)), and an associated probability distribution that obeys ∑_j p_j P_j(A) - F(A) ≤ O(ϵ). Then, Lemma <ref> readily implies that the channel Λ_A(ρ) = ∑_j P_j(A) ρ P_j(A)^† suffers error Λ_A - ℱ_A _♢≤ O(ϵ). We also show that implementing Λ_A requires a number of queries to the block-encoding ≈ d/2 + O(1), a cost reduction stemming from the fact that polynomial approximations of smooth functions tend to have degrees that scale as d = O(log(1/ϵ)) (see Sec. <ref>). Intuitively, this scaling implies that a polynomial that achieves error O(√(ϵ)) (e.g., P_j(x)) has a degree that is asymptotically half that of a polynomial that achieves error O(ϵ) (e.g., P(x)). Therefore, rather than implement a degree d polynomial, one can instead sample over an ensemble of polynomials of average degree ≈ d/2 + O(1), while retaining the same level of precision. As the corresponding channel is constructed as a probabilistic mixture of QSP sequences, we term this algorithm Stochastic Quantum Signal Processing: Suppose that F(x) is a bounded function F_[-1,1]≤ 1 with a Chebyshev expansion F(x) = ∑_n=0^∞ c_n T_n(x) on the domain x ∈ [-1,1], where for some degree d ≥ 2 the coefficients decay as |c_n| ≤ C e^- q n for all n ≥⌈ d/2 ⌉ +1, for some constants C, q > 0, and this bound is assumed to be tight at n=⌈ d/2 ⌉ +1: |c_⌈ d/2 ⌉ +1| = C e^-q(⌈ d/2 ⌉ +1). 
Suppose furthermore that a degree-d truncation of F(x), P(x) = ∑_n=0^d c_n T_n(x), achieves an approximation error of ϵ as: | F(x) - P(x) | ≤∑_n=d+1^∞ |c_n| =: ϵ, such that a QSP implementation of the channel 𝒫_A(ρ) = P(A) ρ P(A)^† for some operator A deviates from the target channel ℱ_A(ρ) = F(A) ρ F(A)^† by an error 𝒫_A - ℱ_A _♢≤ 2ϵ = O(ϵ), while making d queries to the block-encoding of A. Then there exists an ensemble of ⌊ d/2 ⌋ polynomials {P_j(x)}_j=1^⌊ d/2 ⌋ of degree deg(P_j) ≤ d, and an associated probability distribution p_j such that: | P_j(x) - F (x) | ≤ 2√(Cϵ/1-e^-q)= O(√(ϵ)) | ∑_j=1^⌊ d/2 ⌋ p_j P_j(x) - F(x) | ≤ϵ = O(ϵ), while the average degree of these polynomials is d_avg := ∑_j=1^⌊ d/2 ⌋ p_j deg(P_j) ≤⌈d/2⌉(1 - 2 √(ϵ (1-e^-q)C)) + 2 √(ϵ (1-e^-q)C) + 1/1-e^-q = d/2(1 - O( √(ϵ)) ) + O(1) ≈ d/2. Therefore, according to the mixing lemma for block-encodings (Lemma <ref>), the channel Λ_A(ρ) = ∑_j=1^⌊ d/2 ⌋ p_j P_j(A) ρ P_j(A)^† suffers error Λ_A - ℱ_A _♢≤(4C/1-e^-q + 2) ϵ = O(ϵ), while making d_avg≲ d/2 queries to a block-encoding of A in expectation. The underlying goal of this theorem is to simulate evolution under F(A) according to the channel ℱ_A(ρ) = F(A) ρ F(A)^†. For notational convenience, let us define the degree t polynomial truncation of the Chebyshev series of F(x) as P^[t](x) := ∑_n=0^t c_n T_n(x). By the assumption that |c_n| ≤ C e^-qn for n ≥⌈ d/2 ⌉ +1, we find that for t ≥⌈ d/2 ⌉ +1, this polynomial truncation suffers error | P^[t](x) - F(x) | = |∑_n=t+1^∞ c_n T_n(x) | ≤∑_n=t+1^∞ |c_n| ≤∑_n=t+1^∞ C e^-qn = Ce^-qt/1-e^-q . Therefore, the error suffered by the degree d truncation P^[d](x) = P(x) is at most |P^[d](x) - F(x) | ≤Ce^-qd/1-e^-q =: ϵ. To estimate evolution under ℱ_A, one can use QSP to implement the channel 𝒫_A(ρ) = P^[d](A) ρ P^[d](A)^†. Using the diamond norm bound of Eq. (<ref>), we see that 𝒫_A suffers error 𝒫_A - ℱ_A _♢ ≤ 2 P^[d](A) - F(A) ≤ 2 max_x∈[-1,1]| P^[d](x) - F(x) | ≤ 2 ϵ. As P^[d](A) is a degree d polynomial, the cost of implementing 𝒫_A is d queries to the block-encoding of A. Next, let us define our ensemble of polynomials as P_j(x) = P^[ ⌈ d/2 ⌉ ](x) + c_⌈ d/2 ⌉+j/p_j T_⌈ d/2 ⌉ +j(x) |c_⌈ d/2 ⌉ + j| > 0 0 c_⌈ d/2 ⌉ +j =0 for j=1,2,..., ⌊ d/2 ⌋, where p_j is the associated probability distribution defined as p_j = |c_⌈ d/2 ⌉ +j|/∑_k=1^⌊ d/2 ⌋ |c_⌈ d/2 ⌉ +k|. Intuitively, each polynomial P_j(x) consists of the degree ⌈ d/2 ⌉ truncation P^[ ⌈ d/2 ⌉ ](x), and an additional higher order Chebyshev polynomial chosen such that the average polynomial is the degree d truncation: ∑_j=1^⌊ d/2 ⌋ p_j P_j(x) = P^[d](x), where we've used the identity ⌈ d/2 ⌉ + ⌊ d/2 ⌋ = d. The distribution p_j is chosen such that terms with larger Chebyshev coefficients are given more probability mass and preferentially sampled. Each polynomial in this ensemble is guaranteed to suffer error |P_j(x) - F(x)| ≤ |P^[ ⌈ d/2 ⌉ ](x) - F(x) | + |c_⌈ d/2 ⌉ +j|/p_j ≤∑_n= ⌈ d/2 ⌉ +1^∞ |c_n| + ∑_k=1^⌊ d/2 ⌋ |c_⌈ d/2 ⌉ +k| ≤ 2 ∑_n=⌈ d/2 ⌉ +1^∞ |c_n| ≤ 2 ∑_n=⌈ d/2 ⌉ +1^∞ C e^-qn = 2Ce^-q ⌈ d/2 ⌉/1-e^-q≤2Ce^-q d/2 /1-e^-q = 2√(C ϵ/1-e^-q) = O(√(ϵ)). On the other hand, the average polynomial suffers error | ∑_j=1^⌊ d/2 ⌋ p_j P_j(x) - F(x) | = | P^[d](x) - F(x) | ≤ϵ.
In the language of the mixing lemma for block-encodings (Lemma <ref>), this corresponds to values a = 2√(Cϵ/1-e^-q) and b = ϵ. Accordingly, the channel Λ_A(ρ) = ∑_j=1^⌊ d/2 ⌋ p_j P_j(A) ρ P_j(A)^† suffers error Λ_A - ℱ_A _♢≤ a^2 + 2b = (4C/1-e^-q + 2 ) ϵ = O(ϵ) . Lastly, the expected cost of instantiating Λ_A(ρ) is the average degree d_avg = ∑_j=1^⌊ d/2 ⌋ p_j (P_j), which corresponds to the average number of queries to the block-encoding. Note that the degrees of the polynomials are (P_j) = ⌈d/2⌉ +j ≤ d. To evaluate the average degree, recall that this theorem assumes that |c_n| ≤ C e^- q n for all n ≥⌈ d/2 ⌉ +1, where this bound is tight at n= ⌈ d/2 ⌉ + 1: |c_⌈ d/2 ⌉ +1| = C e^-q(⌈ d/2 ⌉ +1). This implies that the mean of the distribution p_j = |c_⌈ d/2 ⌉ +j|/∑_k=1^⌊ d/2 ⌋ |c_⌈ d/2 ⌉ +k| is upper bounded by the mean of the geometric distribution p̃_j = e^-q(⌈ d/2 ⌉ +j)/∑_k=1^⌊ d/2 ⌋ e^-q( ⌈ d/2 ⌉ +k) = e^-qj/∑_k=1^⌊ d/2 ⌋ e^-q k. Hence, we may upper bound d_avg as d_avg = ∑_j=1^⌊ d/2 ⌋ p_j (P_j) = ⌈d/2⌉ + ∑_j=1^⌊ d/2 ⌋ j p_j ≤⌈d/2⌉ + ∑_j=1^⌊ d/2 ⌋ j p̃_j = ⌈d/2⌉ - ⌊d/2⌋e^-q ⌊ d/2 ⌋/1-e^-q ⌊ d/2 ⌋ + 1/1-e^-q ≤⌈d/2⌉ - ⌊d/2⌋ 2e^-q⌊ d/2 ⌋ + 1/1-e^-q ≤⌈d/2⌉ - ( ⌈d/2⌉-1 ) 2e^-q d/2 + 1/1-e^-q = ⌈d/2⌉(1 - 2 √(ϵ (1-e^-q)C)) + 2 √(ϵ (1-e^-q)C) + 1/1-e^-q = d/2(1 - O( √(ϵ)) ) + O(1) , where line 3 follows from evaluating mean of the geometric distribution (i.e., evaluating a geometric series), the inequality of line 4 holds for e^-qd/2 = √(ϵ)((1-e^-q)/C )^1/2≤ 1/2 (i.e., sufficiently small ϵ), line 5 follows from ⌊ d/2 ⌋ = ⌈ d/2 ⌉ -1 and e^-q⌊ d/2 ⌋≥ e^-qd/2, and line 6 follows from Eq. (<ref>). Suppose that F(x) is a bounded function F_[-1,1]≤ 1 with a Chebyshev expansion F(x) = ∑_n=0^∞ c_n T_n(x) on the domain x ∈ [-1,1], where for some degree d ≥ 2 the coefficients decay as |c_n| ≤ C e^- q n for all n ≥ d/2, for some constants C, q > 0. Suppose furthermore that a degree-d truncation of F(x), P(x) = ∑_n=0^d c_n T_n(x), achieves an approximation error of ϵ as: | F(x) - P(x) | ≤∑_n=d+1^∞ |c_n| =: ϵ, such that a QSP implementation of the channel 𝒫_A(ρ) = P(A) ρ P(A)^† for some operator A deviates from the target channel ℱ_A(ρ) = F(A) ρ F(A)^† by an error 𝒫_A - ℱ_A _♢≤ 2ϵ = O(ϵ), while making d queries to the block-encoding of A. Then there exists an ensemble of polynomials {P_j(x)} of degree deg(P_j) ≤ d, and an associated probability distribution p_j such that: | P_j(x) - F (x) | ≤ 2√(ϵ)= O(√(ϵ)) | ∑_j=1 p_j P_j(x) - F(x) | ≤ϵ = O(ϵ), while the average degree of these polynomials is d_avg := ∑_j=1 p_j deg(P_j) ≤d/2 + log(C)/2q - log(1-e^-q)/2q + 1/2 + 1/1-e^-q = d/2 + O(1). Therefore, according to the mixing lemma for block-encodings (Lemma <ref>), the channel Λ_A(ρ) = ∑_j=1 p_j P_j(A) ρ P_j(A)^† suffers error Λ_A - ℱ_A _♢≤ 6 ϵ = O(ϵ), while making d_avg≤ d/2 + O(1) queries to a block-encoding of A in expectation. The cost reduction realized by this channel is [Here, ≲ neglects the terms independent of d and C in Eq. (<ref>), which are less relevant than log(C)/2q in practice. See examples in Sec. <ref>.] d_avg/d≲1/2(1+ log(C)/qd) , which approaches 1/2 in the limit of large d. The underlying goal of this theorem is to simulate evolution under F(A) according to the channel ℱ_A(ρ) = F(A) ρ F(A)^†. For notational convenience, let us define the degree t polynomial truncation of the Chebyshev series of F(x) as P^[t](x) := ∑_n=0^t c_n T_n(x). 
By the assumption that |c_n| ≤ C e^-qn for n ≥ d/2, we find that for t ≥ d/2, this polynomial truncation suffers error an over x ∈ [-1,1]: | P^[t](x) - F(x) | = |∑_n=t+1^∞ c_n T_n(x) | ≤∑_n=t+1^∞ |c_n| ≤∑_n=t+1^∞ C e^-qn = Ce^-qt/1-e^-q . Therefore, the error suffered by the degree d truncation P^[d](x) = P(x) is at most |P^[d](x) - F(x) | ≤Ce^-qd/1-e^-q =: ϵ. To estimate evolution under ℱ_A, one can use QSP to implement the channel 𝒫_A(ρ) = P^[d](A) ρ P^[d](A)^†. Using the diamond norm bound of Eq. (<ref>), we see that 𝒫_A suffers error 𝒫_A - ℱ_A _♢ ≤ 2 P^[d](A) - F(A) ≤ 2 max_x∈[-1,1]| P^[d](x) - F(x) | ≤ 2 ϵ. As P^[d](A) is a degree d polynomial, the cost of implementing 𝒫_A is d queries to the block-encoding of A. Our goal is to reproduce evolution under ℱ_A to diamond norm error O(ϵ) by using a probabilistic mixture of QSP polynomials and invoking the mixing lemma. We construct these polynomials by truncating the original series at a cutoff degree d^*, selected as follows. Because the mixing lemma quadratically suppresses error, and the degree scales as log(1/ϵ), there should exist an ensemble of polynomials of degree around d^* ≈ d/2 with the same error as the original polynomial. We can then readily determine d^* by demanding that it be the smallest integer suffering error at most √(ϵ): ( Ce^-qd^*/1-e^-q)^2 ≤ϵ = Ce^-qd/1-e^-q ⇒ d^* = ⌈d/2 + log(C)/2q - log(1-e^-q)/2q⌉ = d/2 + O(1). Because the error suffered by a single polynomial obtained by truncating at d^* is greater than that at d, we have d^* < d. Next, let us define our ensemble of polynomials as P_j(x) = P^[ d^* ](x) + c_d^*+j/p_j T_d^* +j(x) |c_d^* + j| > 0 0 c_d^* +j =0 for j=1,2,..., d-d^*, where p_j is the associated probability distribution defined as p_j = |c_ d^* +j|/∑_k=1^d-d^* |c_d^* +k|. Intuitively, each polynomial P_j(x) consists of the degree d^* truncation P^[ d^* ](x), and an additional higher order Chebyshev polynomial chosen such that the average polynomial is the degree d truncation: ∑_j=1^ d-d^* p_j P_j(x) = P^[d](x). The distribution p_j is chosen such that terms with larger Chebyshev coefficients are given more probability mass and are preferentially sampled. Each polynomial in this ensemble is guaranteed to suffer error |P_j(x) - F(x)| ≤ |P^[ d^* ](x) - F(x) | + |c_ d^* +j|/p_j ≤∑_n= d^* +1^∞ |c_n| + ∑_k=1^d-d^* |c_ d^* +k| ≤ 2 ∑_n=d^* +1^∞ |c_n| ≤ 2 ∑_n=d^* +1^∞ C e^-qn = 2Ce^-q d^* /1-e^-q≤ 2√(ϵ) = O(√(ϵ)). This bound also implies that P_j(x) is bounded as P_j _[-1,1]≤ 1+2√(ϵ). On the other hand, the average polynomial suffers error | ∑_j=1^ d-d^* p_j P_j(x) - F(x) | = | P^[d](x) - F(x) | ≤ϵ. In the language of the mixing lemma for block-encodings (Lemma <ref>), this corresponds to values a = 2√(ϵ) and b = ϵ. Accordingly, the channel Λ_A(ρ) = ∑_j=1^d-d^* p_j P_j(A) ρ P_j(A)^† suffers error Λ_A - ℱ_A _♢≤ a^2 + 2b = 6 ϵ = O(ϵ) . Lastly, the expected cost of instantiating Λ_A(ρ) is the average degree d_avg = ∑_j=1^ d-d^* p_j (P_j), which corresponds to the average number of queries to the block-encoding. Note that the degrees of these polynomials are (P_j) = d^* +j ≤ d. To evaluate the average degree, recall that this theorem assumes that |c_n| ≤ C e^- q n for all n ≥ d/2 , This implies that the mean of the distribution p_j := |c_d^* +j|/∑_k=1^d-d^* |c_d^* +k| is upper bounded by the mean of the geometric distribution p̃_j := e^-q(d^* +j)/∑_k=1^d-d^* e^-q( d^* +k) = e^-qj/∑_k=1^d-d^* e^-q k. 
Hence, we may upper bound d_avg as d_avg = ∑_j=1^ d-d^* p_j (P_j) = d^* + ∑_j=1^ d-d^* j p_j ≤ d^* + ∑_j=1^ d-d^* j p̃_j = d^* - (d-d^*) e^-q (d-d^*)/1-e^-q (d-d^*) + 1/1-e^-q ≤ d^* + 1/1-e^-q = ⌈d/2 + log(C)/2q - log(1-e^-q)/2q⌉ + 1/1-e^-q ≤d/2 + log(C)/2q - log(1-e^-q)/2q + 1/2 + 1/1-e^-q = d/2 + O(1) where line 3 follows from evaluating mean of the geometric distribution (i.e., evaluating a geometric series), and line 4 from d^* < d. Accordingly, the cost reduction realized by Λ_A is the ratio d_avg/d ≤1/2(1+ log(C)/qd - log(1-e^-q)/qd + 1/d + 2/(1-e^-q) d ) ≈1/2(1+ log(C)/qd) where the last line follows from the fact that the log(C) contribution dominates in practice (see Sec. <ref>). In the large d limit, the cost reduction approaches 1/2 inverse-polynomially fast. Lastly, as this construction makes no reference to the eigenvalues or singular values of A, stochastic QSP applies equally as well to QET and QSVT, where P(A) acts on the eigenvaleus or singular vectors of A, respectively. In addition, while the presentation of stochastic QSP here is tailored toward functions expressed in the basis of Chebyshev polynomials, we extend this result to more general functions and arbitrary arbitrary bases in Sec. <ref>. Let us take a minute to interpret this result. According to Theorem <ref>, stochastic QSP replaces a deterministic polynomial P(x) with an ensemble of polynomials { P_j(x) } and probability distribution p_j, whose average evolution achieves the same precision up to a constant factor, but at asymptotically half the cost. Importantly, this result is agnostic to the specific polynomial, requiring only that its coefficients decay exponentially according to Eq. (<ref>). As we showed in Sec. <ref>, this condition is generally satisfied by polynomial approximations to smooth functions, rendering stochastic QSP applicable to a wide range of algorithms. In practice the values of C and q in Eq. (<ref>) can be chosen to minimize the ratio d_avg/d, which effectively means minimizing log(C)/q. For visual intuition on this exponential decay, we provide an illustration of our stochastic QSP construction in Fig. <ref>. The channel implemented by stochastic QSP is the probabilistic mixture of polynomials Λ_A(ρ) = ∑_j p_i P_j(A) ρ P_j(A). As we discussed in Sec. <ref>, this channel may be realized by implementing an identical probabilistic mixture of unitaries { U_j } that block-encode the polynomials { P_j(A) }, and post-selecting on successfully accessing these block-encodings. In practice, this can be achieved by independently sampling j ∼ p_j, and implementing the QSP sequence that block-encodes the polynomial P_j(A). Because P_j(x) is only bounded as P_j _[-1,1]≤ 1+2√(ϵ) according to Eq. (<ref>), this implementation may require rescaling the polynomials by 1+2√(ϵ), which incurs a measurement overhead ∼ (1+2√(ϵ))^2 that asymptotically approaches 1. This procedure of course requires knowledge of the QSP phases for each polynomial P_j(A). These phases can be determined classically using an efficient phase finding algorithm, such as those of Refs. <cit.>, thus amounting to a classical pre-computation step. Moreover, observe from Eq. (<ref>) that each polynomial in the ensemble is constructed as the degree d^* ≈ d/2 truncation plus a higher order term in the Chebyshev expansion, up to degree d. 
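A minimal classical prototype of this construction (ours, not the authors' code; all names and parameter values are illustrative) is given below: it takes a vector of Chebyshev coefficients together with constants C and q, forms the cutoff d*, the ensemble {P_j} represented as coefficient vectors, and the distribution p_j, and verifies that the ensemble average reproduces the degree-d truncation while the expected degree sits near d/2. Running an actual instance would additionally require the classical phase-finding step mentioned above to compile each sampled P_j into QSP phases.

import numpy as np

def stochastic_qsp_ensemble(c, d, C, q):
    # Form the cutoff d*, the ensemble {P_j} (Chebyshev-coefficient vectors of length d+1),
    # and the sampling distribution p_j, following the construction described above.
    # Assumes |c_n| <= C e^{-q n} holds for the relevant tail of the expansion.
    eps = C * np.exp(-q * d) / (1.0 - np.exp(-q))
    d_star = int(np.ceil(d / 2 + np.log(C) / (2 * q) - np.log(1 - np.exp(-q)) / (2 * q)))
    d_star = min(max(d_star, 0), d - 1)                  # guard; the text argues d* < d
    base = np.zeros(d + 1)
    base[:d_star + 1] = c[:d_star + 1]                   # the degree-d* truncation P^[d*]
    tail = np.abs(np.asarray(c[d_star + 1:d + 1], dtype=float))
    p = tail / tail.sum()
    ensemble = []
    for j, pj in enumerate(p, start=1):                  # P_j = P^[d*] + (c_{d*+j}/p_j) T_{d*+j}
        Pj = base.copy()
        if pj > 0:
            Pj[d_star + j] = c[d_star + j] / pj
        ensemble.append(Pj)
    return ensemble, p, d_star, eps

# Self-contained demo with synthetic geometrically decaying coefficients c_n = C e^{-q n}:
C, q, d = 1.0, 0.5, 30
c = C * np.exp(-q * np.arange(d + 1))
ensemble, p, d_star, eps = stochastic_qsp_ensemble(c, d, C, q)

avg = sum(pj * Pj for pj, Pj in zip(p, ensemble))
print("average polynomial equals the degree-d truncation:", np.allclose(avg, c[:d + 1]))
d_avg = d_star + np.sum(p * np.arange(1, len(p) + 1))
print(f"d = {d},  d* = {d_star},  expected degree d_avg = {d_avg:.2f}  (ratio {d_avg/d:.2f})")

# One shot of the randomized protocol: sample j ~ p_j, compile QSP phases for ensemble[j], run that circuit.
j = np.random.default_rng(0).choice(len(p), p=p)
print("sampled member implements a polynomial of degree", d_star + 1 + j)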
Inclusion of the degree d^* truncation guarantees that the first condition of the mixing lemma is satisfied with error O(√(ϵ)), and sampling the higher order terms allows the second condition of the mixing lemma to be satisfied with error O(ϵ). The specific sampling distribution is chosen to be proportional to the coefficients of the Chebyshev expansion. Because these are assumed to decay exponentially, the corresponding probability mass is concentrated around ≈ d/2, and so too is the average degree. Through this view, the ensemble of stochastic QSP can be seen as a fixed low order term plus higher order correction terms, which are randomly sampled according to the QDrift protocol <cit.> to leverage randomized compiling. An ensemble with a similar structure, albeit in the context of Trotterization, was recently used in Ref. <cit.> to double the order Trotter formulae through randomized compiling. Lastly, it is important to note that while stochastic QSP reduces the expected cost to d_avg≈ d/2 +O(1), it does not reduce the maximum degree of the polynomials implemented: some polynomials in the ensemble will have degree greater than d_avg, with one even having degree d. This however is unavoidable. In fact it is necessary for the ensemble to contain polynomials of degree > d/2 in order to attain a level of error equivalent to a degree-d polynomial. To see this, observe that if all the polynomials in the ensemble had degree at most k < d, then their average ∑_j p_j P_j(x) would also be a degree k<d polynomial. This average polynomial would not be able to achieve a precision equivalent to that of a degree d polynomial, thus failing to meet the second condition of the mixing lemma. Note however that the distribution p_j concentrates around small values of j, meaning that degrees much larger than ≈ d/2 are rare. Through this interpretation, stochastic QSP is similar to the “semi-quantum matrix processing" algorithm of Ref. <cit.> for estimating expectation values and matrix elements of a matrix-valued function. The authors achieve this by decomposing a degree-d polynomial approximation of this function into the Chebyshev basis, and sampling its constituent Chebsyhev polynomials, and measuring an estimator of the sampled polynomial that converges to the desired expectation value/matrix element. Like stochastic QSP, this procedure can reduce the expected cost of certain algorithms to a value < d if the Chebyshev coefficients decay quickly, but it does not reduce the maximal degree because the order d Chebyshev polynomial can still be sampled. However, this method differ from stochastic QSP in two key ways. First, Ref. <cit.> employs a quasi-probability technique where improvements in average degree are traded for additional variance in the measurements, whereas stochastic QSP always implements a normalized probability distribution. Second, the constructions in Ref. <cit.> explicitly consider a target observable O and modify their circuit to a accommodate a parity measurement Z ⊗ O with an ancilla register. Hence, their construction does not constitute an approximate realization of a desired matrix function, whereas stochastic QSP indeed approximates evolution under said matrix function, thus rendering our approach more modular. § GENERALIZATIONS In the previous section, we introduced stochastic QSP tailored specifically to polynomial approximations obtained from truncated Chebyshev expansions. 
However, it turns out that this specialization is not necessary, and that stochastic QSP is readily generalized to a much broader range of use cases. In this section, we show how our method applies to both definite- and indefinite-parity polynomials, Taylor series, trigonometric polynomials, generalized QSP <cit.>, and implementations of positive operator-valued measures (POVMs) using QSP. §.§ Definite and Indefinite Parity The statement of Theorem <ref> applies to functions F(x) of indefinite parity[Recall that a function is said to have definite parity if it is either even or odd. Otherwise it has indefinite parity.] and produces an ensemble of polynomials { P_j } that also have indefinite parity. However, if F(x) has definite parity, it turns out that this construction preserves the parity. The parity of the implemented polynomials P_j is important because definite-parity polynomials admit simpler implementations via QSP. Indeed, by construction QSP can only produce polynomials of definite parity (see Eqs. (<ref>) and (<ref>)). In general, indefinite parity polynomials require using a linear combination of unitaries circuit, which demands an extra ancilla qubit and rescales the resulting block-encoding by 1/2. This rescaling can be undesirable in algorithm construction because it may necessitate amplitude amplification to be corrected, and can be avoided when the target function F(x) has definite parity in the first place. We would like to retain this optimized performance in the context of stochastic QSP, which we can show is indeed the case. Consider the setting of Theorem <ref>, but also suppose that F(x) has definite party. Then there exists an ensemble of polynomials {P_j} with the same parity that satisfy the conditions of the theorem. If F(x) = ∑_n=0^∞ c_n T_n(x) has even (odd) parity then all c_n for odd (even) n must vanish. In other words, the function is only supported on even (odd) Chebyshev polynomials. Observe that all P_j in the construction are supported only on subsets of the support of F(x). Hence they must also be even (odd). Note that this corollary also extends to complex-valued functions, which are usually approximated by QSP by decomposition into their real and imaginary components. Therefore, for an arbitrary target function, realizing stochastic QSP requires a circuit no more complicated than a QSP circuit that approximates the target function. §.§ Taylor Series As we discussed in Sec. <ref>, Chebyshev expansions of a smooth C^∞ functions admit exponentially-decaying coefficients, and thus yield polynomial approximations that meet the requisite conditions for stochastic QSP. Functions of interest in the quantum algorithms literature commonly exhibit this smoothness property (like cos(x) or exp(-β x)), or are well-approximated by functions that do (like how the step function is approximated by erf(kx)). As such, we often desire a closed-form expression for the coefficients in the Chebyshev expansion, allowing us to give concrete guarantees for the values of C and q required by Theorem <ref>. However, for certain functions like √(x) and -xln(x), which are only smooth in certain domains, obtaining a closed form expression for the Chebyshev coefficients is cumbersome, and the literature generally works with Taylor series instead (see for example Theorem 68 of <cit.> and its applications). Fortunately, stochastic QSP directly generalizes to Taylor series, and expansions into bases of bounded polynomials more generally. 
Suppose { B_n (x) } are a collection of basis functions of degree n respectively, which are all bounded as B_n_[-1,1]≤ 1. Then the statement of Theorem <ref> holds with B_n(x) in place of T_n(x). This follows from the fact that the only property of the Chebyshev polynomials T_n(x) leveraged in the proof of Theorem <ref> is that they were bounded as T_n_[-1,1] = 1. Taylor series methods derive their accuracy from the analysis in Corollary 66 of Ref. <cit.>. The basic idea is the following. Suppose F(x) is analytic and bounded on the interval [-1,1], and our goal is to approximate it by a polynomial on the interval [-1+δ, 1-δ] for small δ < 1. If F(x) = ∑_n=0^∞ c_n x^n is the Taylor series of F(x) with coefficients |c_n| ≤ 1, then the error from truncating at degree d is: sup_|x| ≤ 1-δ|∑_n=d+1^∞ c_n x^n | ≤∑_n=d+1^∞ (1-δ)^n. We immediately obtain an exponential decay in truncation error. This is a slightly weaker condition than the one required for Corollary <ref>; we require an exponential bound on the coefficients themselves rather than on the truncation error. But if we are willing to approximate the stretched function F( (1-δ) x ) on the interval [-1,1] instead, then substituting the Taylor expansion yields F((1-δ)x) = ∑_n=0^∞ c_n (1-δ)^n x^n where the new coefficients c_n (1-δ)^n exhibit the desired exponential decay, thus rendering this Taylor series expansion amenable to parallel QSP. §.§ Trigonometric Polynomials Recent works <cit.> have considered a model of quantum computation in which a constant-size control register is strongly coupled to many qubits with an otherwise local connectivity graph. In such an architecture, controlled time evolution can be implemented through the Trotter approximation, but Hamiltonian simulation via QSP techniques remains out of reach due to the small size of the control register. Nonetheless QSP-like transformations of a Hamiltonian H can be implemented through applying QSP to a controlled time evolution operator. A controlled time evolution operator is a block-encoding of e^i H t. Applying QSP to this block-encoding generates trigonometric polynomials ∑_n c_n e^i n H t. If we select t = π/H and let our variable of interest be θ=Ht = π H/H, then we can approximate functions F : [-π, π] → [-1,1] using degree-d trigonometric polynomials: F(θ) ≈∑_n c_n e^i n θ. Hence, by selecting B_n(θ) =e^inθ for the basis functions, we see how Corollary <ref> applies in this setting as well. Any method for constructing trigonometric polynomials can be used as long as the coefficients c_n decay exponentially. Conveniently, we show in Appendix <ref> that for C^∞ functions F(θ), their Fourier series have exponentially decaying coefficients. We see that, due to the relationship between Fourier expansions and Chebyshev expansions, stochastic QSP applies equally well to trigonometric polynomials as to regular polynomials. §.§ Generalized QSP Recently a technique was proposed <cit.> for optimizing QSP implementation, specifically when the block-encoded operator U is unitary and is encoded via the controlled-U operation. In this situation the usual constraints on parity can be lifted when synthesizing complex polynomials, enabling polynomials of indefinite parity to be generated directly through QSP and avoiding rescaling from using LCU circuit. For real polynomials it is possible to remove parity constraints using Theorem 56 of <cit.>, but this introduces an undesirable factor of 1/2 as discussed earlier. Using the methods of <cit.> this factor can be avoided. 
By this reasoning, stochastic QSP is compatible with the construction of Ref. <cit.>, which halves the asymptotic cost of QSP-based Hamiltonian simulation by using generalized QSP. This is achieved by designing a block encoding of both the quantum walk operator and its inverse, to which generalized QSP may be applied to approximate e^-iHt at roughly half the cost of ordinary QSP. Stochastic QSP could be applied on top of this algorithm to further reduce the cost, by an asymptotic factor of 1/4. From Theorem 3 of <cit.>, we see the only additional requirement is that all sampled polynomials P_j satisfy |P_j(x)|^2 ≤ 1 for x on the complex unit circle. If this is the case for F(x), then, since the P_j are all √(ϵ)-close (see Eq. (<ref>)), we can achieve |P_j(x)|^2 ≤ 1 through rescaling the polynomials by 1+2√(ϵ), as discussed above. §.§ POVMs QSP methods typically project out a single polynomial from the QSP sequence, conventionally taken to be the polynomial P(A) encoded in the |0⟩⟨ 0| block (see Eqs. (<ref>) and (<ref>)). Indeed stochastic QSP is concerned with approximating evolution under a channel by projecting out this component. However, in some situations <cit.>, we are also interested in the other elements of the QSP sequence– particularly those that influence the measurements of the ancilla qubit(s) used to construct the block-encoding. In the conventional encoding, if the ancilla qubit is initialized as |0⟩, then by invoking unitarity and drawing analogy to Eq. (<ref>), this measurement implements a POVM with operators |P(A)|^2 and (1-A^2)|Q(A)|^2 = I-|P(A)|^2. In practice this is desired to approximate a target POVM with operators |F(A)|^2 and I- |F(A)|^2. We find that stochastic QSP also applies to the approximation of such a target POVM. We can formalize this by considering an ideal quantum map of the following form. Suppose A has a spectral value decomposition ∑_iλ_i |λ_i⟩⟨λ_i|, and that there is some function G(x) satisfying |F(x)|^2 + (1-x^2)|G(x)|^2 = 1. Then our goal is to approximate the following map: |0⟩⊗|λ_i⟩→|0⟩⊗ F(λ_i)|λ_i⟩ + |1⟩⊗ G(λ_i) √(1-λ_i^2)|λ_i⟩. In the setting of Theorem <ref>, consider the ensemble of quantum circuits that implement stochastic QSP (i.e. the QSP circuits that implement {P_j(A)}), upon leaving the QSP ancilla qubit(s) unmeasured. Denoting this ensemble by {U_j}, the quantum channel ∑_j p_j U_j ρ U_j^† approximates the map in Eq. (<ref>) to error O(ϵ) in diamond norm. For the sake of brevity, we defer this proof to Appendix <ref>. We also note that this result naturally extends to QSVT, in which case the desired mapping is analogous to Eq. (<ref>), but acting on the singular values and singular vectors. § APPLICATIONS Here we apply stochastic QSP to several common polynomials used in the quantum algorithms literature to assess its performance in practice. We study four polynomials: * The Jacobi-Anger expansion of cosine <cit.>: cos(tx) = J_0(t) + 2∑_n=1^∞ (-1)^n J_2n(t) T_2n(x), where the J_2n(t) are the Bessel functions of the first kind. To achieve additive error at most ϵ, this series may be truncated at degree d = O(|t| + log(1/ϵ)/log(e + log(1/ϵ)/|t|)). This polynomial, in conjunction with the analogous expansion for sin(tx), furnishes an algorithm for Hamiltonian simulation with near-optimal query complexity <cit.>.
* The Jacobi-Anger expansion of an exponential decay <cit.>: e^-β(x+1) = e^-β[ I_0(β) + 2∑_n=1^∞ I_n(β) T_b(-x)], where the I_n(β) is the modified Bessel functions of the first kind. This expansion may be truncated at degree d = O(√((β + log(1/ϵ))log(1/ϵ))) = O(√(β)log(1/ϵ)) to achieve error at most ϵ. Naturally, the resulting polynomial is commonly used for imaginary time evolution <cit.>, thermal state preparation <cit.>, and partition function estimation <cit.>. * A smooth approximation of 1/x in a domain away from the origin <cit.>: 1/x ≈1- (1-x^2)^b/x = 4∑_n=0^b-1 (-1)^n 2^-2b∑_m=n+1^b 2bb+m T_2n+1(x) for an even integer b ≫ 1. While this series is a degree O(b) polynomial, its coefficients decay rapidly, such that it can be further truncated to degree d= O( √(b log(b/ϵ))) while guaranteeing error at most ϵ. The resulting polynomial is particularly useful for inverting matrices and thus solving linear systems of equations. If we take the (non-zero) eigenvalues of the matrix to be lower-bounded as |λ| ≥λ_min, then in order to ensure that the polynomial approximation behaves as ≈ 1/x over the range of eigenvalues, it suffices to choose chooses b = O ( 1/λ_min^2log(1/λ_minϵ) ). This corresponds to a truncation degree d = O(κlog(κ/ϵ)), where κ := 1/λ_min is the condition number. For completeness, we note that algorithms with improved performance have very recently been discovered <cit.>. * An approximation of (kx) obtained from integrating the Jacobi-Anger expansion of a Gaussian <cit.>: (kx) = 2ke^-k^2/2/√(π)[I_0(k^2/2)x + ∑_n=1^∞ I_0(k^2/2) (-1)^n ( T_2n+1(x)/2n+1 - T_2n-1/2n-1) ]. To achieve error ϵ, it suffices to truncate this series at degree d = O(√((k^2 + log(1/ϵ))log(1/ϵ))) = O(k log(1/ϵ)). In practice, this polynomial is used to approximate the step function by selecting k ≫ 1. Notable applications of this approximation include the search problem  <cit.>, phase estimation <cit.> and ground state preparation <cit.>. All four of these functions feature a cost parameter, namely t,β,b, and k respectively, whose value determines the truncation degree necessary to achieve an accurate approximation. We apply stochastic QSP to these polynomials, and illustrate the cost reduction ratio d_avg/d as a function of d in Figure <ref>. We rely on the following procedure to compute d_avg. First, we select an integer n_1, and then numerically determine values of C,q such that c_n ≤ Ce^-qn holds for n ≥ n_1. This is achieved by selecting another integer n_2 > n_1 and computing C,q such that Ce^-qn goes through the points (n_1, c_n_1) and (n_2, c_n_2). In doing so, we choose n_1,n_2 to guarantee c_n ≤ Ce^-qn indeed holds for all n ≥ n_1, and also to minimize log(C)/q so as to heuristically reduce the dominant term in the bound on d_avg in Eq. (<ref>). We also select n_1,n_2 independent of the degree d; we find that n_1 naturally converges to the degree at which |c_n| starts to decay exponentially. For cos(tx) and (kx) we find that this regime sets in later as the respective cost parameter increases. Lastly, for each d, the cutoff degree d^* is computed from Eq. (<ref>). Then d_avg is obtained by explicitly computing the probabilities p_j from Eq. (<ref>) and calculating the corresponding weighted average of degrees. In Figure <ref> we observe the desired phenomenon: the cost reduction ratio d_avg/d approaches 1/2 as d increases, with a discrepancy scaling as O(1/d). 
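To make this procedure concrete, the sketch below (an illustrative aside, not the authors' code; it reuses the stochastic_qsp_ensemble helper from the earlier snippet, and the choices of n_1, n_2, and β are ours) carries out these steps for the decaying exponential: the Chebyshev coefficients are generated from the closed-form Bessel expression above, C and q are fit through two sample points, and the ratio d_avg/d is reported for a few degrees.

import numpy as np
from scipy.special import iv

# Assumes stochastic_qsp_ensemble from the earlier sketch is defined.
beta = 10.0
N = 121
n = np.arange(N)
# Chebyshev coefficients of e^{-beta(x+1)} in the c_0/2 + sum_{n>=1} c_n T_n(x) convention,
# using T_n(-x) = (-1)^n T_n(x) in the Jacobi-Anger expansion above.
c = 2.0 * np.exp(-beta) * (-1.0) ** n * iv(n, beta)

# Fit |c_n| <= C e^{-q n} through two sample points (n1, |c_{n1}|) and (n2, |c_{n2}|);
# a careful implementation would also verify the bound for all n >= n1, as described in the text.
n1, n2 = 10, 30
q = (np.log(abs(c[n1])) - np.log(abs(c[n2]))) / (n2 - n1)
C = abs(c[n1]) * np.exp(q * n1)

for d in (20, 40, 60, 80, 120):
    _, p, d_star, _ = stochastic_qsp_ensemble(c, d, C, q)
    d_avg = d_star + np.sum(p * np.arange(1, len(p) + 1))
    print(f"d = {d:3d}   d* = {d_star:3d}   d_avg/d = {d_avg / d:.3f}")

The printed ratios decrease toward 1/2 as d grows, reproducing the qualitative behavior shown in the figure.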
Additionally, as the cost parameter increases, the magnitude of the error terms ∼log(C)/qd can also increase, resulting in a later approach to 1/2. This is expected because a larger cost parameter requires a larger degree to maintain the same level of approximation. These results indicate that indeed, stochastic QSP reduces the query complexity of QSP by approximately 1/2 in regimes of practical interest. A more surprising phenomenon is that for some functions and values of the cost parameter, d_avg/d approaches 1/2 from below, resulting in improved performance for small d. This arises because we determine d_avg by choosing values C,q so as to minimize the deviation log(C)/q. In some cases, this can result in a value C<1, or equivalently log(C)<0, which causes the ratio d_avg/d to deviate from 1/2 by a negative value as per Eq. (<ref>). Moreover, the sawtooth behavior observed for each function in Figure <ref> is explained by the presence of the ceiling function in the definition of d^* in Eq. (<ref>). When d is small then the effect of this rounding is more pronounced. § CONCLUSION By merging quantum signal processing and randomized compiling, we have developed stochastic QSP. As we showed, stochastic QSP enables us to lift a QSP algorithm to its randomized counterpart, and simultaneously shrink the circuit complexity by a factor of 2. We empirically verified this cost reduction for various algorithms of interest, including real/imaginary time evolution, matrix inversion, phase estimation, and ground state preparation. This reduction can also be interpreted as enabling a cost parameter in the underlying function to increase (e.g. a longer time t in Hamiltonian simulation) without changing the query complexity. In aggregate, this result demonstrates that classical randomness is a useful resource for quantum computing, and can help bring quantum algorithms closer to the near-term. Moreover, in this work we did not consider noisy gates, but rather assumed the ability to perform QSP exactly. As such, we leveraged randomized compiling to suppress error in QSP polynomial approximations to functions. Nonetheless, as randomized compiling can also suppress noise in erroneous gates <cit.>, this suggests that a practical implementation of stochastic QSP could benefit from also randomizing over the gate set, as a sort of doubly-stochastic channel. Along these lines, it would be interesting to study the requirements and conditions for implementing stochastic QSP on near-term quantum hardware. The performance improvement realized through randomized compiling suggests further uses of this technique in quantum information. While here we have applied randomized compiling to quantum algorithms via QSP, it is likely that randomized compiling can confer a similar advantage to the traditional constructions of quantum algorithms (e.g., Grover search <cit.>, or conventional phase estimation via the quantum Fourier transform <cit.>). Further, it would be interesting to search for problems for which randomized compiling (or a variant thereof) confers a super-quadratic suppression of error, translating to a cost reduction by a factor smaller than 1/2 in our context. With an eye toward applications, the mixing lemma could also be used to better understand and generate random unitaries and unitary designs <cit.>, and perhaps even be integrated with randomized measurements <cit.> to improve near-term protocols for studying quantum systems. 
Likewise, there is an absence of randomized compiling in classical simulation algorithms, which could admit similar improvements for problems aimed to emulate quantum mechanics. Given the ubiquity of random processes in the physical world, it is only natural that we can harness randomness to gain deeper insights into quantum systems. Acknowledgements: The authors thank Pawel Wocjan, Anirban Chowdhury, Isaac Chuang, and Zane Rossi for useful discussion and feedback. JMM acknowledges the gracious support of IBM Quantum through the IBM internship program. apsrev4-2 § POLYNOMIAL APPROXIMATION TO SMOOTH FUNCTIONS Here we recall theorems on polynomial approximations to smooth functions, specifically the decay of Fourier coefficients and Chebyshev coefficients of smooth functions. For more details on these results, see Refs. <cit.>. §.§ Decay of Fourier Coefficients Fourier analysis can be used to determine the rate of decay of the Fourier coefficients of smooth functions. The following is a classic result <cit.>, for which we provide a brief proof: Let G(θ) be a function with period 2π and Fourier series G(θ) = a_0/2 + ∑_n=-∞^∞ a_n e^inθ. If G(θ) is a C^k function on θ∈ [0,2π) (i.e., continuous and differentiable through order k), then the Fourier coefficients decay polynomially as a_n = o(1/n^k). Similarly, if G(θ) is a C^∞ function on θ∈ [0,2π), then the Fourier coefficients decay super-polynomially as a_n = O(e^-qn^r) for some q,r > 0. The Fourier coefficients are a_n = 1/2π∫_0^2π G(θ) e^-i nθ dθ . Using integration by parts repeatedly, we can write this as a_n = 1/2π∫_0^2π G(θ) e^-i nθ dθ = i/n1/2π∫_0^2π G'(θ) e^-inθ dθ = ... = i^k/n^k1/2π∫_0^2π G^(k)(θ) e^-inθ dθ . This implies that the Fourier coefficients of the kth derivative G^(k)(θ) are (-in)^k a_n. According to the Riemann-Lebesgue lemma <cit.>, the Fourier coefficients of a smooth function go to 0 as n→∞. Therefore, if G(θ) is a C^k function, then G^(k)(θ) is continuous, and thus its Fourier coefficients decay as lim_n→∞ (-in)^k a_n = 0. This limit implies that a_n = o(1/n^k). On the other hand, if G(θ) is a C^∞ function, then by an analogous argument, lim_n→∞ n^k a_n = 0 for all integers k ≥ 1. This implies that a_n decays super-polynomially, i.e. as a_n = O(e^-qn^r) for some q,r>0. §.§ Decay of Chebyshev Coefficients An analogous result can be derived for the decay of Chebyshev coefficients. Recall that the nth Chebyshev polynomial T_n(x) is a degree n polynomial defined on x∈ [-1,1] as T_n(x) = cos(n arccos(x)). It can be shown that T_n(x) is a degree n polynomial of definite parity (either even or odd, depending on n) and bounded magnitude |T_n(x)|_[-1,1] = 1 <cit.>. The Chebyshev polynomials provide a convenient basis for expanding functions on x∈ [-1,1]. A function F(x) can be expanded as F(x) = c_0/2 + ∑_n=1^∞ c_n T_n(x) , where c_n = 2/π∫_-1^1 F(x) T_n(x)/√(1-x^2) dx are the Chebyshev coefficients for all n≥ 0. By the relation between Chebyshev series and Fourier series, it can be shown that the Chebyshev coefficients decay in a manner analogous to the Fourier coefficients: Let F(x) be a function on x∈ [-1,1] with the Chebyshev series F(x) = c_0/2 + ∑_n=1^∞ c_n T_n(x) , If F(x) is a C^k function on x ∈ [-1,1], then the Chebyshev coefficients decay polynomially as c_n = o(1/n^k). Similarly, if F(x) is a C^∞ function on x ∈ [-1,1], then the Chebyshev coefficients decay super-polynomially as c_n = O(e^-qn^r) for some q,r > 0. Let θ = arccos(x), or equivalently x = cosθ. 
Using this change of variables, we can re-express the Chebyshev coefficients as c_n = 2/π∫_-1^1 F(x) T_n(x)/√(1-x^2) dx = 2/π∫_0^π F(cos(θ)) cos(n θ) dθ = 1/2π∫_0^2π F(cos(θ)) e^-i n θ dθ where the last equality stems from F(cos(θ)) being an even function of θ. Comparing with Eq. (<ref>), we see that the Chebyshev coefficients c_n are equal to the Fourier coefficients of the function G(θ) := F(cos(θ)). Therefore, if we can show that F(x) being a C^k function of x implies that G(θ) = F(cos(θ)) is a C^k function of θ, then we can inherit the result of Theorem <ref> to prove the purported decay pattern of the coefficients c_n. This is easy to show. Observe that the first derivative of G(θ) is G'(θ) = F'(cosθ) (-sinθ) = - F'(x) sinθ . If F(x) is a C^1 function of x=cosθ, such that F'(x) is a continuous function of x, then G'(θ) = -F'(cosθ) sin (θ) is a composition of continuous functions of θ. Therefore, G'(θ) is is also a continuous function of θ, which implies that G(θ) is a C^1 function of θ. A similar trend persists at higher orders: by using the chain rule and product rule, the kth derivative G^(k)(θ) can be expressed as a linear combination of products of derivatives F^(j)(cosθ) for j≤ k and trigonometric polynomials sin^a(θ) cos^b(θ) for some integers a,b≥ 0. Thus, if F(x) is a C^k function of x, then this expression for g^(k)(θ) is also continuous, which implies that G(θ) is a C^k function. Therefore, by appealing to Theorem <ref>, we see that if F(x) is a C^k function, then the Chebyshev coefficients c_n decays polynomially as c_n = o(1/n^k). Similarly, if F(x) is a C^∞ function, then c_n decays super-polynomially as c_n = O(e^-qn^r) for some q,r > 0. § EXTENSIONS OF THE MIXING LEMMA Here we present generalizations of the mixing lemma to non-unitary evolution and block-encodings. The proofs of these variants parallel the proof of the mixing lemma presented in Ref. <cit.>. §.§ The Generalized Mixing Lemma Consider a mixing lemma for arbitrary operators, including non-unitary evolution. In this scenario, we we wish to implement a channel 𝒮(ρ) = S ρ S^†, where S is not necessarily unitary, yet we only have access to operators R_j that approximate S. Then we can prove the following: Let S be a target operator, possibly non-unitary, and 𝒮(ρ) = S ρ S^† the corresponding channel. Suppose there exist operators R_j and a probability distribution p_j that approximate S as R_j - S ≤ a for all j, ∑_j p_j R_j - S ≤ b , for some a,b > 0. Then, the corresponding channel Λ(ρ) = ∑_j p_j R_j ρ R_j^† approximates the channel 𝒮 as Λ - 𝒮_♢≤ a^2 + 2b S . Paralleling the proof of the mixing lemma in Ref. <cit.>, let δ_j = R_j - R. This obeys δ_j ≤ a and ∑_j p_j δ_j ≤ b by the assumed conditions of the theorem. The action of the channel 𝒮 can then be expanded as Λ(ρ) = ∑_j p_j R_j ρ R_j^† = ∑_j p_j (S+δ_j) ρ (S+δ_j)^† = Sρ S^† + (∑_j p_j δ_j )ρ S^† + S ρ(∑_j p_j δ_j^†) + ∑_j δ_j ρδ_j^†. We can then bound the 1-norm Λ(ρ) - 𝒮(ρ) _1 via the triangle inequality as Λ(ρ) - 𝒮(ρ) _1 ≤ (∑_j p_j δ_j )ρ S^†_1 + S ρ(∑_j p_j δ_j^†) _1 + ∑_j δ_j ρδ_j^†_1 . The first term on the right hand side of Eq. (<ref>) can be bounded by appealing to Holder's inequality (specifically, AB ≤AB_1 and AB ≤A_1 B in this context): ( ∑_j p_j δ_j ) ρ S^†_1 ≤∑_j p_j δ_j ρ_1 S^† ≤∑_j p_j δ_j ρ_1 S ≤ S b, where we have used that ρ_1 = 1. Analogously, the second term in Eq. (<ref>) can be upper bounded by S b. Again invoking Holder's inequality, the last term in Eq. 
(<ref>) can be bounded as ∑_j δ_j ρδ_j^†_1 ≤∑_j p_j δ_j ρδ_j^†_1 ≤∑_j p_j δ_j ρ_1 δ_j^† ≤∑_j p_j δ_j ρ_1 δ_j^† ≤∑_j p_j δ_j δ_j ≤ a^2. Therefore we have Λ(ρ) - 𝒮(ρ) _1 ≤ a^2 + 2b S , for all density matrices ρ. This translates to a diamond norm bound: Λ - 𝒮_♢ = sup_σ_1 ≤ 1 (Λ⊗ℐ )(σ) - (𝒮⊗ℐ)(σ) _1 ≤ a^2 + 2b S , because appending the identity channel ℐ does not alter the proof of 1-norm bound in Eq. (<ref>), which holds true for all density matrices. Notably, this result is analogous to the original mixing lemma, but modified by the spectral norm of S. For unitary evolution where S = 1, the generalized mixing lemma reduces to the usual mixing lemma. §.§ The Mixing Lemma for Block-Encodings We can consider an analog of the mixing lemma for block-encodings. In this scenario, we wish to evolve under an operator S (which may be non-unitary) encoded in a unitary V, yet we only have access to operators R_j block-encoded in unitaries U_j. We first study the case in which the operators are block-encoded in the |0⟩⟨ 0| block of their respective unitaries, which is the simplest form of a block-encoding and most often considered in the literature. Afterwards, we proceed to the general case, in which the operators are encoded through arbitrary projectors Π and Π'. First consider encodings in the |0⟩⟨ 0| block: [Mixing Lemma for Block-Encodings: |0⟩⟨ 0| Block] Let V be a unitary that block-encodes a (possibly non-unitary) target operator S as S = (⟨ 0 | ⊗ I) V ( |0 ⟩⊗ I). Suppose there exist unitaries U_j that block-encode operators R_j as R_j = (⟨ 0 | ⊗ I) U_j ( | 0 ⟩⊗ I). Also suppose there exists a probability distribution p_j such that R_j - S ≤ a for all j , ∑_j p_j R_j - S ≤ b , for some a,b ≥ 0. Then, the corresponding unitary channel Λ(ρ) = ∑_j p_j U_j ρ U_j^† approximates the action of the channel 𝒱 as Λ̅ - 𝒱̅_♢≤ a^2 + 2b , where Λ̅ and 𝒱̅ are channels that access the block-encodings of Λ and 𝒱 by appending an ancilla qubit and post-selecting: Λ̅(ρ) = (⟨ 0 | ⊗ I) ·Λ( | 0 ⟩⟨ 0 | ⊗ρ) · ( |0 ⟩⊗ I) 𝒱̅(ρ) = (⟨ 0 | ⊗ I) ·𝒱( | 0 ⟩⟨ 0 | ⊗ρ) · ( |0 ⟩⊗ I). First, observe that by linearity Λ̅(ρ) = (⟨ 0 | ⊗ I ) Λ( | 0 ⟩⟨ 0 | ⊗ρ) ( |0 ⟩⊗ I) = ∑_j p_j R_j ρ R_j^†, and 𝒱̅(ρ) = (⟨ 0 | ⊗ I ) 𝒱( | 0 ⟩⟨ 0 | ⊗ρ) ( |0 ⟩⊗ I) = S ρ S^†. Therefore, Λ̅ and 𝒱̅ are analogous to the channels Λ and 𝒮 considered in Theorem <ref>. Likewise, S and R_j obey the requisite conditions of Theorem <ref> as per the assumptions of this theorem. Accordingly, Theorem <ref> implies that Λ̅ - 𝒱̅_♢≤ a^2 + 2b S ≤ a^2 + 2b, where we have used that S≤ 1 because S is block-encoded in a unitary. Next, consider the general case in which the operators are block-encoded by arbitrary projectors Π and Π'. Specifically, in this case, an operator is block-encoded as S = Π V Π' for projection operators Π, Π'. In practice, this means that S is accessed by first applying Π' to project the quantum state into the block of interest, then applying V, and finally applying Π' to extract the desired block. With this intuition, we can prove the following: Let V be a unitary, that block-encodes a (possibly non-unitary) target operator S as S = Π V Π', where Π, Π' are orthogonal projectors. Suppose there exist unitaries U_j that block-encode operators R_j as R_j = Π U_j Π '. Also suppose there exists a probability distribution p_j such that R_j - S ≤ a for all j , ∑_j p_j R_j - S ≤ b . 
Then, the corresponding unitary channel Λ(ρ) = ∑_j p_j U_j ρ U_j^† approximates the action of the channel 𝒱 as Λ̅ - 𝒱̅_♢≤ a^2 + 2b , where Λ̅ and 𝒱̅ are channels that access the block-encodings of Λ and 𝒱 by applying the projectors Π and Π': Λ̅(ρ) = Π·Λ( Π' ρΠ' ) ·Π 𝒱̅(ρ) = Π·𝒱( Π' ρΠ' ) ·Π. Analogous to the previous proof, observe that by linearity Λ̅(ρ) = Π·Λ( Π' ρΠ' ) ·Π = ∑_j p_j Π U_j Π' ·ρ·Π' U_j^†Π = ∑_j p_j R_j ρ R_j^†, and 𝒱̅(ρ) = Π·𝒱( Π' ρΠ' ) ·Π = Π V Π' ·ρ·Π' V^†Π = S ρ S^†, where we have used the fact that the orthogonal projectors are Hermitian: Π^† = Π, Π'^† = Π'. Therefore, we again see that Λ̅ and 𝒱̅ are analogous to the channels Λ and 𝒮 considered in Theorem <ref>, which implies that Λ̅ - 𝒱̅_♢≤ a^2 + 2b S ≤ a^2 + 2b, where we have again used that S≤ 1 because S is block-encoded in a unitary. § PROOF OF COROLLARY <REF> Here we prove Corollary <ref>. Recall that our map of interest is |0⟩⊗|λ_i⟩→|0⟩⊗ F(λ_i)|λ_i⟩ + |1⟩⊗ G(λ_i) √(1-λ_i^2)|λ_i⟩. In the setting of Theorem <ref>, consider the ensemble of quantum circuits that implement stochastic QSP (i.e. the QSP circuits that implement {P_j(A)}), upon leaving the QSP ancilla qubit(s) unmeasured. Denoting this ensemble by {U_j}, the quantum channel ∑_j p_j U_j ρ U_j^† approximates the map in Eq. (<ref>) to error O(ϵ) in diamond norm. We consider a function F(x) with definite parity for ease of presentation, but our analysis holds in the general case. When we apply QSP with a definite-parity real polynomial P_j to an operator A with spectral decomposition ∑_iλ_i |λ_i⟩⟨λ_i|, we implement the map |0⟩⊗|λ_i⟩→ |0⟩⊗ P_j(λ_i)|λ_i⟩ + |1⟩⊗ Q_j(λ_i) √(1-λ_i^2)|λ_i⟩, where Q_j is some polynomial satisfying |P_j(x)|^2 + (1-x^2)|Q_j(x)|^2 = 1 for all x ∈ [-1,1]. To show that the ensemble of unitaries implementing these polynomials approximates the desired map, we again rely on Lemma <ref>. The error of any individual map j is bounded by: sup_x | |0⟩⊗( P_j(x) - F(x)) + |1⟩⊗( Q_j(x) - G(x))√(1-x^2)| ≤ √( (2√(ϵ))^2 + sup_x |Q_j(x) - G(x) |^2(1-x^2) ), since we have already established |P_j - F| ≤ 2√(ϵ) in the proof of Theorem <ref>. Similarly, the error of the probabilistic mixture over the maps is bounded by: √(ϵ^2 + sup_x |∑_j p_j Q_j(x) - G(x) |^2(1-x^2)). Our goal is to bound the above two quantities by deriving properties of the Q_j from |P_j(x)|^2 + (1-x^2)|Q_j(x)|^2 = 1. We define for some fixed j: α(x) := P_j(x) - F(x) and β(x) := Q_j(x) - G(x). Now we calculate: 1 = |F|^2 + (1-x^2) |G|^2 = |P_j + α|^2 + (1-x^2) |Q_j + β|^2 = (|P_j|^2 + α P_j^* + α^* P_j + |α|^2) + (1-x^2)(|Q_j|^2 + β Q_j^* + β^* Q_j + |β|^2) = 1 + (α P_j^* + α^* P_j + |α|^2) + (1-x^2)(β Q_j^* + β^* Q_j + |β|^2), where ^* denotes the complex conjugate here. Thus: (1-x^2)|β Q_j^* + β^* Q_j + |β|^2| = |α P_j^* + α^* P_j + |α|^2|. We have (1-x^2)|β Q_j^* + β^* Q_j + |β|^2| ≥ |β|^2 and |α P_j^* + α^* P_j + |α|^2| ≤ 2|α| + |α|^2 ≤ 3|α|, so |β|^2 ≤ 3|α| ≤ 6√(ϵ). Hence the quantity in Eq. (<ref>) is bounded by √(40ϵ). An identical calculation considering α̅(x) := ∑_j p_j P_j(x) - F(x) and β̅(x) := ∑_j p_j Q_j(x) - G(x) yields the bound |β̅|^2 ≤ 3|α̅|, hence bounding the quantity in Eq. (<ref>) by √(10)ϵ≤ 4ϵ. Now we apply Lemma <ref> with a = √(40ϵ) and b = 4ϵ to obtain an error in diamond norm of a^2 + 2b = 48ϵ = O(ϵ). Naturally the constant 48 can be tightened significantly using a more careful analysis.
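As a sanity check that is not part of the original analysis, the trace-norm inequality behind the generalized mixing lemma, Λ(ρ) - 𝒮(ρ)_1 ≤ a^2 + 2b S, can be verified numerically on small random instances; the dimension, perturbation scale, and number of mixed operators below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
d, J = 4, 6  # dimension and number of mixed operators (illustrative)

S = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
S /= np.linalg.norm(S, 2)                                  # spectral norm ||S|| = 1
R = [S + 1e-2 * (rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))) for _ in range(J)]
p = rng.dirichlet(np.ones(J))                              # probability distribution p_j

a = max(np.linalg.norm(Rj - S, 2) for Rj in R)             # per-element error
b = np.linalg.norm(sum(pj * Rj for pj, Rj in zip(p, R)) - S, 2)  # averaged error

# random pure state rho = |psi><psi|
psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())

Lam = sum(pj * Rj @ rho @ Rj.conj().T for pj, Rj in zip(p, R))
err = np.linalg.norm(Lam - S @ rho @ S.conj().T, 'nuc')    # trace (1-)norm
bound = a**2 + 2 * b * np.linalg.norm(S, 2)
print(err <= bound + 1e-12, err, bound)
```

The printed inequality holds for every sampled instance; the diamond-norm statement then follows from the tensoring argument given above.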
http://arxiv.org/abs/2409.02399v1
20240904024852
Guidance for twisted particle filter: a continuous-time perspective
[ "Jianfeng Lu", "Yuliang Wang" ]
stat.CO
[ "stat.CO", "math.OC" ]
a]Jianfeng LuE-mail:[email protected] b]Yuliang WangEmail:[email protected] [a]Department of Mathematics, Department of Physics, Department of Chemistry, Duke University, Durham, NC 27708, USA. [b]School of Mathematical Sciences, Institute of Natural Sciences, Shanghai Jiao Tong University, Shanghai, 200240, P.R.China. Guidance for twisted particle filter: a continuous-time perspective [ =================================================================== § ABSTRACT The particle filter (PF), also known as the sequential Monte Carlo (SMC), is designed to approximate high-dimensional probability distributions and their normalizing constants in the discrete-time setting. To reduce the variance of the Monte Carlo approximation, several twisted particle filters (TPF) have been proposed by researchers, where one chooses or learns a twisting function that modifies the Markov transition kernel. In this paper, we study the TPF from a continuous-time perspective. Under suitable settings, we show that the discrete-time model converges to a continuous-time limit, which can be solved through a series of well-studied control-based importance sampling algorithms. This discrete-continuous connection allows the design of new TPF algorithms inspired by established continuous-time algorithms. As a concrete example, guided by existing importance sampling algorithms in the continuous-time setting, we propose a novel algorithm called “Twisted-Path Particle Filter" (TPPF), where the twist function, parameterized by neural networks, minimizes specific KL-divergence between path measures. Some numerical experiments are given to illustrate the capability of the proposed algorithm. § INTRODUCTION The particle filter (PF), or the sequential Monte Carlo (SMC), has found a wide range of applications in various areas such as computational statistics and machine learning. It has received increasing popularity when dealing with tasks including statistical inference problems for state space models <cit.> and complex static models <cit.>, capability and safety techniques in large language models (LLM) <cit.>, to name a few. Generally, the particle filter involves the simulation of an artificial particle system over time, particularly suited for estimating statistical quantities of the following form via Monte Carlo: Z = 𝔼[∏_k=0^n g_k(X_k)], where g_k (0≤ k≤ n) are deterministic functions and (X_k)_k=0^n is a discrete-time Markov chain in ℝ^d with transition kernel P(x,dy). The simplest and most classical algorithm to solve (<ref>) is the bootstrap particle algorithm (BPF) proposed by Gordon, Salmond, and Smith in 1993 <cit.> (see Algorithm <ref> below). However, to achieve a desired level of precision, the required sample number N in the Monte Carlo approximation can be rather demanding, since the variance of ∏_k=0^n g_k(X_k) can be large. In order to reduce the variance, various “twisted" particle filters have been developed <cit.>. The general idea of the twisted particle filter (TPF) is as follows: Find a suitable (positive) twisting function φ(k,x) (1≤ k ≤ n, x∈ℝ^d), run the twisted Markon chain X^φ with the transition kernel P^φ_k(x,·) ∼φ(k,·)P(x,·) at the k-th step, and modify the function sequence g_k according to φ (denoted by g_k^φ) (see more details in (<ref>)–(<ref>) below). The twisted model wants to achieve: (1) The statistical quantity Z is preserved (i.e. 𝔼[∏_k=0^n g_k(X_k)] = 𝔼[∏_k=0^n g_k^φ(X_k^φ)]) (2) The variance of ∏_k=0^n g_k^φ(X_k^φ) is significantly reduced with a suitable choice of φ. 
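As a small illustration of point (1) (this toy check is ours and is not part of the original text), the preservation of Z under twisting can be verified exactly on a finite-state chain, using the twisted kernel and modified potentials that are made precise later in the paper; all sizes and numbers below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n_states, n = 3, 4                      # toy sizes (illustrative)
P = rng.random((n_states, n_states)); P /= P.sum(axis=1, keepdims=True)
g = rng.random((n + 1, n_states)) + 0.5        # potentials g_k(x) > 0
phi = rng.random((n + 1, n_states)) + 0.5      # arbitrary positive twist; phi[k] used for k >= 1
x0 = 0                                         # deterministic initial state

def Z_untwisted():
    # backward recursion: v_k(x) = E[prod_{i>=k} g_i(X_i) | X_k = x]
    v = g[n].copy()
    for k in range(n - 1, -1, -1):
        v = g[k] * (P @ v)
    return v[x0]

def Z_twisted():
    # twisted kernel P^phi_k(x, y) proportional to phi[k][y] P[x, y], and modified g^phi_k
    Pphi = [None] + [(P * phi[k]) / (P @ phi[k])[:, None] for k in range(1, n + 1)]
    Pbar = [None] + [P @ phi[k] for k in range(1, n + 1)]          # normalizers P[phi](k, x)
    ell = [Pbar[1]] + [Pbar[k + 1] / phi[k] for k in range(1, n)] + [1.0 / phi[n]]
    gphi = [g[k] * ell[k] for k in range(n + 1)]
    v = gphi[n].copy()
    for k in range(n - 1, -1, -1):
        v = gphi[k] * (Pphi[k + 1] @ v)
    return v[x0]

print(np.isclose(Z_untwisted(), Z_twisted()))  # True: Z is preserved by twisting
```

Property (2), by contrast, depends on how close φ is to the optimal twisting function, which is the subject of the rest of the paper.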
Notably, this approach is exactly the methodology of the importance sampling <cit.> via a change of measure for discrete path measures, and the key step in constructing a twisted model is seeking the optimal twisting function φ to minimize the variance above. In fact, most existing TPF algorithms adopt the “look ahead" strategies when constructing or learning the twisting function φ (see <cit.> for instance), due to the expression of the optimal twisting function (see (<ref>) and (<ref>) below). Similar minimization problems are well-studied in the continuous-time setting, where importance sampling task can be recasted into a stochastic optimal control problem. A fruitful line of research has established several algorithms for the problem <cit.>. Such control-based algorithms have wide range of applications in various areas including molecular dynamics <cit.>, mathematical finance <cit.>, etc. In this paper, for some continuous-time Markov process (X_t)_t≥ 0 and deterministic functions h, g, we are particularly interested in the target quantity of the form Z = 𝔼[exp(∫_0^T h(s,X_s)ds ) g(X_T)], and the corresponding control-based importance sampling algorithms to reduce the variance of exp(∫_0^T h(s,X_s)ds ) g(X_T). See more details in Section <ref>. Remarkably, although the discrete-time particle filter and the continuous-time control-based importance sampling are typically studied separately by researchers from different communities, they share very similar mathematical nature. However, to the best of our knowledge, no attempt has been made to connect the twisted particle filter with the control-based importance sampling algorithms rigorously. In this paper, we build a bridge between these two problems under suitable settings for the discrete/continuous-time models (see Section <ref> for more details). Moreover, based on this connection, we propose a novel algorithm called “Twisted-Path Particle Filter" (TPPF), which is directly guided by a family of control-based importance sampling algorithms, as well as parts of their theoretical foundations including the classical Donsker-Varadhan variational principle <cit.>. The proposed TPPF algorithm is expected to be more versatile, less problem-dependent, and behave better in high dimensions (see detailed discussions at the beginning of Section <ref>). Some numerical examples are given in Section <ref> to validate the TPPF and compare it with existing approaches. §.§ Related works Twisted particle filters (TPF). The idea of twisting the model in the discrete-time setting is well established. Various twisted particle filter algorithms have been proposed and studied by researchers from different areas of computational statistics or machine learning. In <cit.>, the authors proposed the so-called “fully-adapted auxiliary particle filter" (FA-APF), where the “look-ahead" strategy was applied to twist the model, and they chose g_k(·) as the twisting function at each discrete time k. In <cit.>, the authors proposed the “iterated auxiliary particle filter" (iAPF) algorithm, where they made use of the recursive relation of the optimal twisting function (see details in (<ref>) below), and learned the twisting function iteratively using Galerkin approximation with Gaussian basis functions. Similar approaches were adopted in <cit.>, where the authors studied the problem from the view of discrete-time optimal control, and made some improvements to the learning structure for better numerical stability. 
Note that this fixed-point iteration method for approximating the optimal twisting function in <cit.> shares a similar idea with the temporal difference (TD) method <cit.> widely used in the reinforcement learning (RL) community. This fixed-point iteration structure for learning the twisting function has been applied and improved in some other results for twisted particle filters such as <cit.>, where <cit.> made use of the rejection sampling to generate samples from the twisted Markov chain, and <cit.> adopted suitable neural networks to parameterize the twisting function instead of the Galerkin approximation. Besides the TD-type methods, the authors of <cit.> proposed an “optimized auxiliary particle filter" (OAPF) algorithm, where they solve adaptively an convex optimization problem at each discrete time k to update the weights and positions of particles when simulating the discrete Markov chains. Learning-based TPF algorithms. Most TPF algorithms above are based on optimization over a Galerkin function space. More recently, designed for more practical tasks like large language modeling (LLM) in the machine learning community, some other deep-learning-based structures for approximating the optimal twisting function have been proposed. These learning-based algorithms include the Contrastive Twist Learning (CLT) in <cit.>, the Future Discriminators for Generation (FUDGE) in <cit.>, the SI𝒳O method in <cit.>, to name a few. Basically, in these frameworks, they parameterized the twisting function via a suitable neural network, chose the summation of some specially designed quantities along the time axis as the loss function (for instance, in <cit.>, they selected the KL-divergence at each time k determined by the current twisting function), and trained the network to obtain a good approximation of the optimal twisting functions. See more discussion on the choice of loss functions in Section <ref> below. Compared with Galerkin-based methods, such neural network approximations are expected to have better behavior in high dimensions. Continuous-time importance sampling (IS) via stochastic optimal control. As mentioned above, this paper connects the (continuous-time) control-based importance sampling and the (discrete-time) particle filters. From the theoretical perspective, the control-based importance sampling algorithms are well-studied in the importance sampling community. Various (continuous-time) importance sampling algorithms have been proposed, such as <cit.>. We are in particular interested in importance sampling algorithms for the path-dependent target (<ref>), and focus on a sequence of control-based ones due to the zero-variance property related to the solution of the stochastic optimal control problem (see for instance <cit.> or Proposition <ref> below). A large amount of control-based importance sampling algorithm admits the structure of the continuous-time policy gradient (PG) method <cit.> (also called the iterative diffusion optimization (IDO) in some other literature <cit.>), and the corresponding details will be discussed later in Section <ref>. There are various classical choices for the loss function in the PG iteration in the existing literature. In the “cross-entropy" algorithm <cit.>, the loss is chosen as KL-divergence between path measures (P^u^* P^u), where P^u is the path measure induced by the controlled SDE adding a control u to the drift, and P^u^* is the path measure corresponding to the optimal control u^*. 
Note that different from some other statistical inference problems, under the current setting, (P^u^* P^u) has explicit expression and is even quadratic after suitable parameterization for the control function u. Other choices of the loss functions include the relative entropy loss (P^u P^u^*) <cit.>, the variance loss (or log-variance loss) _P^v(dP^u^*dP^u) (or _P^v(logdP^u^*dP^u)) for some suitable basis path measure P^v <cit.>, etc. Some theoretical bounds for the KL-type losses above were established in <cit.>. Besides the PG-based algorithms, other related importance sampling methods include the well-known forward-backward stochastic differential equation (FBSDE) approaches <cit.>, where one approximates the target value Z via the solution of some SDE with given terminal-time state and a forward filtration. Learning-based IS algorithms Many of the conventional methods above may suffer from the curse of dimensionality. In order to solve the related stochastic optimal control problem discussed above, a wide range of learning-based algorithms are available in reinforcement learning for the continuous-time setting. In these frameworks, the learning objective is usually parameterized by neural networks, and over decades a variety of these algorithms have been proposed and studied, such as the classical (soft) policy gradient <cit.>, actor-critic with temporal difference learning <cit.> or with deep backward SDE <cit.>, soft Q-learning <cit.>, etc. Other improvements to these learning-based frameworks include adding entropy regularization to the optimal control problem <cit.>, and combining the metadynamics algorithms when X_s in (<ref>) is trapped in some metastable region <cit.>. §.§ Main contributions To end the introduction, we summarize the novelty and main contribution of our work here. * First, to our knowledge, this paper is the first to rigorously build a bridge between (discrete-time) twisted particle filters and (continuous-time) control-based importance sampling algorithms. This connection is of great significance because through it we can (1) explore the continuous limit of some twisted particle filter algorithm and have a better understanding of its behavior from a continuous-time perspective; (2) propose novel twisted particle filter algorithms based on continuous-time importance sampling algorithms, which are well-studied and proved to have their own advantages when applied to suitable models. * Second, our proposed algorithm TPPF treats the KL divergence between path measures as a loss function, and learns the optimal twisting function by neural networks. This approach can overcome the curse of dimensionality in many models compared with other Galerkin-based approximations for the optimal twisting functions. Moreover, TPPF is more robust and has a wider range of applications, because the learning procedure of the twisting function is problem-independent, which is different from other popular Galerkin-based methods <cit.>. The rest of the paper is organized as follows. In Section <ref>, we introduce the basic settings of both discrete and continuous time cases. We also discuss the importance sampling for both models via the twisting function or control variate, respectively. The existence and the zero-variance property of the optimal twisting function / optimal control are also discussed in Section <ref>. In Section <ref>, under suitable assumptions, we prove the convergence from the discrete-time model to the continuous-time model. 
Based on the connection built in Section <ref>, and motivated by existing algorithms for the continuous-time model, we proposed a novel particle filter algorithm called “Twisted-Path Particle Filter" (TPPF) in Section <ref>. Several numerical examples are given in Section <ref> to compare TPPF with existing approaches. A brief conclusion and some possible future work are summarized in Section <ref>. In the appendix, we provide the proofs of some technical lemmas and propositions. § IMPORTANCE SAMPLING FOR DISCRETE-TIME AND CONTINUOUS-TIME MODELS In this section, we introduce both discrete-time and continuous-time models. For each model, after presenting the basic structure and the expression for the quantity of interest mentioned in (<ref>), (<ref>) above, we introduce the basic idea of variance reduction via importance sampling: Applying Girsanov's transform, we provide an unbiased estimate for the target quantity, and meanwhile the variance could be reduced if we choose a suitable change of measure. As introduced in Section <ref>, this change of measure is in practice achieved by twisting the Markov chain. The twisting procedure is realized by multiplying some suitable function to the original transition kernel in the discrete setting, or adding a control variate to the drift in the continuous setting. Afterwards, for both models, we present the expressions of the optimal twisting function (for discrete setting) / optimal control variate (for continuous setting). The optimal twisting function and the optimal control share similar zero-variance properties, similar backward-time evolution equations, and similar Feynman-Kac representations. §.§ Discrete-time model and optimal twisting Let us begin with the discrete-time model. Fix a positive integer n. Given a (time-homogeneous) Markov transition kernel P̂(x,dy) in ℝ^d (for simplicity we assume the corresponding transition density exists and still denote it by P̂(·,·)), a sequence of bounded, continuous, nonnegative function ĝ_k(·) (0 ≤ k ≤ n), and a (deterministic) initial point x ∈ℝ^d, the general discrete-time model is determined by the triple (P̂(·,·), (ĝ_k(·))_k, x): For the discrete Markov chain X̂_0:n (:= (X̂_0, X̂_1,…,X̂_n)) with transition kernel P̂(·,·) and initial state X̂_0 = x, the statistical quantity of interest is given by Z_dis(x) := 𝔼_x[∏_k=0^n ĝ_k(X̂_k)], where 𝔼_x[·] := 𝔼[ ·|X̂_0 = x]. Such a general model is often called the Feynman-Kac model <cit.>, and the quantity Z_dis(x) corresponds to the terminal marginal measure in it. Note that throughout this paper, we consider the Markov chain with a homogeneous transition kernel and a fixed deterministic initial state. As common in the literature, such settings are only designed to simplify the arguments and notations, and all results in this paper can be easily extended to the Markov chains with inhomogeneous transition kernels and with random initial conditions. A special case of the general model above is the Hidden Markov Model (a state space model with finite states), where the functions ĝ_0:n(·) are determined on the random observation values y_1:n with an observation conditional density function ĝ_y|x^k(·|X̂_k) at each time k: X̂_i ∼P̂(X̂_i-1, ·), Ŷ_i ∼ĝ_y|x^i(·|X̂_i), 0≤ i ≤ n. Then, given the observations y_0:n, the functions ĝ_0:n(·) are given by ĝ_k(·) := ĝ^k_x|y(· | Ŷ_j = y_j,j≤ k). 
Consequently, conditioning on the observations y_0:n, the distribution of X̂_1:n is proportional to ĝ_0(x_0)∏_k=1^n ĝ_k(x_k) P̂(x_k-1,x_k) dx_1:n, and its normaling constant is Z_dis(x) defined in (<ref>). Moreover, in order to compare with the time-continuous model driven by the Brownian model, later on we will consider the Gaussian transition (denoted by P̂^η(·,·)) with a given drift function b(·) and time step η. Namely, the more special case where P̂^η(x,dy) := (4πη)^-d/2exp(-1/4η|y-x-η b(x)|^2 ) dy. Now, with the triple (P̂(·,·), (ĝ_k(·))_k, x) of the discrete model, we aim to calculate Z_dis(x). A standard approach is the particle filter method (also known as sequential Monte Carlo) <cit.>, where the expectation in Z_dis(x) is simulated via Monte Carlo. A classical particle filter algorithm is the following bootstrap particle filter (BPF) <cit.>: Note that for each k, in (<ref>), particles are resampled according to the weights W_k-1^1:N defined in (<ref>). An effective way to implement (<ref>) is the ancestor-prediction method (see for instance <cit.>), namely, at k-th step, * Sample ancestors A_k-1^i ∼𝒞(W_k-1^1, …, W_k-1^N), 1≤ i ≤ N. * Sample predictions ζ_k^i ∼P̂(ζ^A_k-1^i_k-1,·), 1 ≤ i ≤ N. Above, 𝒞(·,…,·) denotes the categorial distribution to sample the indexes. Moreover, the resampling step does not need to occur at each step, and one improvement is the κ-adapted resampling <cit.>, where the resampling step only occurs when the effective sample size is smaller than κ N for some fixed κ∈ (0,1). For any w^1:N with ∑_i=1^N w^i = 1, the effective sample size ESS(w^1:N) is defined by ESS(w^1:N) := 1 / ∑_i=1^N (w^i)^2, and it is a good standard to evaluate a particle filter algorithm - higher ESS usually means better performance. One of the shortcomings of BPF is the relatively high variance, especially when the dimension d or the length of the Markov chain n is large. The twisted particle filter was introduced <cit.> to reduce the variance of Z_dis(x) defined in (<ref>) based on importance sampling. The general idea is to use the (discrete-time) change of measure, or equivalently, a sequence of twisting functions φ(k,·) (1≤ k ≤ n). Then, we consider the twisted model with the triple (P̂^φ(·,·), (ĝ_k^φ(·))_k, x) defined by P̂^φ_k(x,dy) := φ(k,y) /P̂[φ](k,x)P̂(x,dy), P̂[φ](k,x) = ∫φ(k,y') P̂(x,dy') 1≤ k≤ n, ĝ^φ_k(x) := ĝ_k(x) ℓ^φ_k(x), 0≤ k≤ n, with ℓ_0^φ(x) := P̂[φ](1,x), ℓ_n^φ(x) := 1/φ(n,x), ℓ^φ_k(x) := P̂[φ](k+1,x)/φ(k,x), 1≤ k≤ n-1. Then it is not difficult to check that the twisted model preserves the quantity (<ref>). We conclude this property in the following proposition (see Appendix <ref> for a detailed proof): Consider the twisted model defined in (<ref>)–(<ref>). Recall the quantity Z_dis(x) defined in (<ref>). Then Z_dis(x) = 𝔼_x[∏_k=0^nĝ_k^φ(X̂^φ_k)], where X̂^φ is the twisted Markov chain with the (time-inhomogeneous) transition kernel P̂_k^φ(·,·) , namely, X̂^φ_k∼P̂^φ_k(X̂^φ_k-1, ·) (1≤ k ≤ n). Proposition <ref> is in fact the Girsanov's transform in the discrete-time setting. Denote 𝒫̂ the law of the untwisted Markov chain X̂_0:n with the transition kernel P̂(·,·), and denote 𝒫̂^φ the law of the twisted Markov chain X̂^φ_0:n with the transition kernel P̂^φ(·,·). Then, the Girsanov transform gives Z_dis(x) = 𝔼_x^X∼𝒫̂[∏_k=0^n ĝ_k(X_k)] = 𝔼_x^X∼𝒫̂^φ[∏_k=0^n ĝ_k(X_k) d𝒫̂/d𝒫̂^φ(X)], where the Radon-Nikodym derivative is given by d𝒫̂/d𝒫̂^φ(X) = ∏_k=0^n ℓ_k^φ(X_k) = ∏_k=1^n P̂[φ](k,X_k-1)/φ(k,X_k). 
With the twisted model (<ref>) - (<ref>) and Proposition <ref>, it is natural to consider the following twisted particle filter (TPF) method <cit.>, whose output Z^N,φ_dis(x) defined below approximates our target quantity Z_dis(x) due to the law of large numbers (see, for instance, Proposition 3 in <cit.>, or Section 3.6 in <cit.>): Now since the φ-twisted model (recall its definition (<ref>)–(<ref>)) provides an unbiased estimate of the quantity Z_dis(x) by Proposition <ref>, it is natural to ask the following question: How do we choose φ to minimize the variance? In fact, it is possible to choose an optimal twisting function sequence φ^*(k,·) (1≤ k ≤ n), under which the random variable ∏_k=0^nĝ_k^φ^*(X̂^φ^*_k) is an unbiased estimate of Z_dis(x) with zero variance. Optimal twisting functions have been widely studied in the literature <cit.>. In particular, the optimal twisting function sequence φ^*(k,·) (1≤ k ≤ n) is defined recursively as follows: φ^*(k,x) = ĝ_k(x) ∫φ^* (k+1,y) P̂(x,dy), 0 ≤ k ≤ n-1, φ^*(n,x) = ĝ_n(x). Then the optimal twisting function has the following useful properties: Consider the functions φ^*(k,·) (1≤ k ≤ n) defined by (<ref>). Recall X̂_0:n is a Markov chain with transition density P̂(·,·). Then * φ^*(k,·) satisfies φ^*(k,x) = 𝔼[∏_i=k^n ĝ_i(X̂_i) |X̂_k = x], 0 ≤ k ≤ n, x ∈ℝ^d. In particular, Z_dis(x) = φ^*(0,x), ∀ x ∈ℝ^d. * (zero variance property) Consider the Markov chain X̂^φ^*_0:n with the initial state X̂^φ^*_0 = x and the transition density P̂^φ^*_k(x,y) := φ^*(k,y) /∫φ^*(k,y') P̂(x,y')dy'P̂(x,y), 1≤ k≤ n. Define Ŵ(X̂^φ^*) := L̂(X̂^φ^*) ∏_k=0^n ĝ_k(X̂_k^φ^*) with L̂(X̂^φ^*) := ∏_k=0^nℓ_k^φ^*(X̂^φ^*_k) and ℓ_0^φ^*(x) := ∫φ^*(1,y)P̂(x,dy), ℓ_n^φ^*(x) := 1/φ^*(n,x), ℓ^φ^*_k(x) := ∫φ^*(k+1,y)P̂(x,dy)/φ^*(k,x), 1≤ k≤ n. Then, Ŵ(X̂^φ^*) is an unbiased estimate of Z_dis(x) with zero variance. Namely, Ŵ(X̂^φ^*) = φ^*(0,x) = Z_dis(x), ∀ x ∈ℝ^d. Proposition <ref> has also been discussed in related literature such as <cit.>. We provide a proof in Appendix <ref> for completeness. Moreover, a direct corollary of the zero-variance property in Proposition <ref> is that, with the optimal twisting function, after Monte Carlo approximation with N samples, the output Z^N,φ^*_dis(x) of the twisted particle filter (Algorithm <ref>) is a perfect approximation of the target Z_dis(x). We refer to Appendix <ref> for a detailed proof. Consider the optimal twisting function defined in (<ref>). Then for any positive integer N and x ∈ℝ^d, Z^N,φ^*_dis(x) = Z_dis(x). Note that although the φ^*-twisted particle filter provides a perfect approximation for our target Z_dis(x), it is impossible to implement the twisted particle filter (Algorithm <ref>) associated with φ^* in practice. The reason is that we need the pointwise values of the twisting function, so that X̂^φ^*_k can be sampled from the distribution proportional to φ^*(k,·)P̂(X̂_k-1^φ^*,·), and φ^* itself involves exactly the conditional expectations that we cannot compute. Therefore, it is crucial to find suitable approaches to approximate the optimal twisting functions φ^*(k,·) (1≤ k ≤ n) to lower the variance of the particle filter algorithm. Moreover, due to the law of large numbers <cit.>, a larger sample size N and a better approximation of φ^* give a better approximation of Z_dis(x) (i.e. smaller variance of the numerical output Z_dis^N,φ(x)). In Section <ref>, we will propose a new method to approximate φ^*, guided by a similar method for the continuous-time model described in the following subsection. 
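To make the preceding discussion concrete, the bootstrap filter with κ-adaptive resampling can be sketched in a few lines of Python. This sketch is ours and only illustrates the structure; the vectorized interfaces for P̂ and ĝ_k are assumptions, not the paper's implementation. The twisted filter is obtained by replacing sample_kernel and g with their twisted counterparts P̂^φ_k and ĝ^φ_k.

```python
import numpy as np

def bootstrap_pf(x0, sample_kernel, g, n, N=1000, kappa=0.5, rng=None):
    """Estimate Z_dis(x0) = E[prod_{k=0}^n g(k, X_k)] with X_0 = x0.

    sample_kernel(k, particles, rng): draws X_k given X_{k-1} (vectorized P-hat sampler),
    g(k, particles): evaluates the nonnegative potential g_k at the particles.
    Resampling is triggered only when the relative ESS drops below kappa.
    """
    rng = rng or np.random.default_rng()
    particles = np.tile(np.asarray(x0, dtype=float), (N, 1))
    log_w = np.zeros(N)          # unnormalized log-weights since the last resampling
    log_Z = 0.0
    for k in range(n + 1):
        if k > 0:
            particles = sample_kernel(k, particles, rng)
        log_w += np.log(g(k, particles))
        m = log_w.max()
        W = np.exp(log_w - m); W /= W.sum()
        if 1.0 / np.sum(W**2) < kappa * N and k < n:   # kappa-adaptive resampling
            log_Z += m + np.log(np.mean(np.exp(log_w - m)))
            idx = rng.choice(N, size=N, p=W)           # ancestor indices
            particles, log_w = particles[idx], np.zeros(N)
    m = log_w.max()
    log_Z += m + np.log(np.mean(np.exp(log_w - m)))
    return np.exp(log_Z)
```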
§.§ Continuous-time model and optimal control In this subsection, we first define the time-continuous model and the target quantity. Then we introduce the control-based importance sampling method based on the Girsanov's theorem. In the end, we introduce the optimal control and its zero-variance property. For a fixed function b: ℝ^d →ℝ^d, we consider the following SDE X_t = X_0 + ∫_0^t b(X_s) ds + √(2)B_t, X_0 = x∈ℝ^d. where (B_t)_t≥ 0 is the Brownian motion in ℝ^d under the probability measure P. The dynamics is thus a time-homogeneous Markov process with transition density (or Green's function) P_t(y|x) satisfying a Fokker-Planck equation ∂_t P_t(y|x) = -∇_y · (b(y) P_t(y|x)) + Δ_y P_t(y|x), P_0(y|x) = δ_x(y), where δ_x(·) is the Dirac delta at x. Moreover, given functions h(t,x), g(x) and T>0, the general continuous model is determined by (P;h,g;x), and the corresponding statistical quantity of interest is Z_con(x) = 𝔼_x[e^∫_0^T h(s,X_s) dsg(X_T)], Recall the example in Remark <ref>. Here, a corresponding special case is the latent SDE model, which is a state space model which has infinitely many states. In detail, the functions h(t,x) and g(x) are determined by the value of a random observation process Y_t conditional on X_t, i.e. X_t = x + ∫_0^t b(X_s) ds + √(2)B_t, Y_t ∼ g_y|x^t(·| X_s, s≤ t), 0≤ t≤ T. Denote the conditional density function (which is the density of Law(X_t | y_s,s≤ t) by g_t(·):=g_x|y^t(·|y_s:s≤ t), and let h(t,x) = log g_t(x), g(x) = g_T(x), 0≤ t≤ T, x∈ℝ^d. Then the target quantity Z_con(x) is Z_con(x) = 𝔼_x[e^∫_0^T log g_s(X_s) ds g_T(X_T)], which is comparable with the discrete target Z_dis(x), since Z_dis(x) has the similar expression Z_dis(x) = 𝔼_x[∏_k=0^n ĝ_k(X̂_k)] = 𝔼_x[e^∑_k=0^n-1logĝ_k(X̂_k)ĝ_n(X̂_n)]. We will prove the convergence from Z_dis(x) to Z_con(x) in Section <ref> below for the discrete transition kernel of the form (<ref>). We will give a concrete example in Example <ref> of Section <ref>, where the function g_t(·)=g_x|y^t(·|y_s:s≤ t) has an explicit expression. The example satisfies all the conditions required by the convergence analysis in Section <ref> below. Now, considering the general continuous model determined by (P;h,g;x), we aim to calculate the quantity Z_con(x). Similar to the discrete model, we approximate it via Monte Carlo using N samples: Z_con(x) ≈ Z^N_con(x) := 1/N∑_i=1^N e^∫_0^T log h(s,X^i_s) ds g(X^i_T), where X^1,… X^N are N independent realizations of the SDE (<ref>). Next, we aim to reduce the variance of the random variable Z^N_con(x). In fact, (Z^N_con(x)) = 1/N (e^∫_0^T log h(s,X_s) ds g(X_T)). Therefore, we aim to reduce the variance of the random variable e^∫_0^T log h(s,X_s) ds g(X_T) for X satisfying (<ref>). Here, we also consider variance reduction via importance sampling, which is based on a change of measure, as discussed in the discrete case. In detail, the change of measure is constructed by adding a control variate u to the drift and considering the following controlled SDE: X^u_t = X_0^u + ∫_0^t (b(X_s^u) + √(2)u(s,X_s^u) ) ds + √(2)W_t, X_0^u = x ∈ℝ^d. 
Then, denoting the path measures 𝒫 := Law(X_[0,T]) (X solving (<ref>)) and 𝒫^u := Law(X^u_[0,T]) (X^u solving (<ref>)), the change of measure gives Z_con(x) = 𝔼_x^X∼𝒫[e^∫_0^T log h(s,X_s) ds g(X_T)] = 𝔼_x^X∼𝒫^u[e^∫_0^T log h(s,X_s) ds g(X_T) d𝒫/d𝒫^u(X)] = 𝔼_x[e^∫_0^T log h(s,X^u_s) ds g(X^u_T)d𝒫/d𝒫^u(X^u)], where the Radon-Nikodym derivative has the following expression due to Girsanov's theorem: d𝒫/d𝒫^u(X^u) = exp(-∫_0^T u(s,X^u_s) · dB_s - 1/2∫_0^T |u(s,X^u_s)|^2 ds ). Consequently, the following Monte-Carlo approximation is an unbiased estimate of the target Z_con(x): Z^N,u_con(x) := 1/N∑_i=1^N e^∫_0^T log h(s,X^i,u_s) ds g(X^i,u_T) d𝒫/d𝒫^u(X^i,u), where X^1,u,…,X^N,u are N independent realizations of the controlled SDE (<ref>). Our goal is to minimize the variance of e^∫_0^T log h(s,X^u_s) ds g(X^u_T)d𝒫/d𝒫^u(X^u) for u belonging to some suitable admissible set. Luckily, we can find an optimal control (similar to the discrete case) satisfying a zero-variance property, and then we only need to find suitable approaches to approximate the optimal control. In fact, the optimal control u^* is given by u^*(t,x) = √(2)∂_x log v^*(t,x), where v^*(t,x) satisfies the following backward Kolmogorov (parabolic) equation: -∂_t v^*(t,x) = b(x) ·∇_x v^*(t,x) + Δ_x v^*(t,x) + h(t,x) v^*(t,x), 0 ≤ t ≤ T, v^*(T,x) = g(x). Moreover, the following Feynman-Kac representation and zero-variance property hold: Fix T>0 and x∈ℝ^d. Given functions h: ℝ_+×ℝ^d →ℝ, g: ℝ^d →ℝ, b: ℝ^d →ℝ^d: * (Feynman-Kac representation) The solution to the PDE (<ref>) has the following representation: v^*(t,x) = 𝔼[e^∫^T_t h(s,X_s) ds g(X_T) | X_t = x ], 0≤ t≤ T, x∈ℝ^d, where the process X_t solves the SDE (<ref>). In particular, Z_con(x) = v^*(0,x), ∀ x ∈ℝ^d. * (zero-variance property) Let P be the probability measure under which (B_t)_t≥ 0 is a Brownian motion in ℝ^d. Then there exists a probability measure P^* such that under P^*, the law of the controlled process with control u^* = √(2)∂_x log v^*, X^u^*_t = X_0^u^* + ∫_0^t (b(X^u^*_s) + 2∂_x log v^*(s,X^u^*_s))ds + √(2)B_t, X^u^*_0 = x, is the same as the law of X_t under P (recall that X_t satisfies (<ref>)). Moreover, define L(T) := exp(-∫_0^T ∂_x log v^*(s,X^u^*_s) dB_s - 1/2∫_0^T |∂_x log v^*(s,X^u^*_s)|^2 ds), and W := e^∫_0^T h(s,X^u^*_s) ds g(X^u^*_T) L(T). We have W = v^*(0,x) = Z_con(x) P^*-a.s. § CONVERGENCE ANALYSIS: FROM DISCRETE-TIME TO CONTINUOUS-TIME MODELS In Section <ref> above, we introduced importance sampling for the discrete-time and continuous-time models. Now we aim to build a bridge between the discrete-time results and the continuous-time results. The benefit of such a connection is evident: Given an existing importance sampling algorithm in the continuous-time setting (approximating the optimal control), we can design the corresponding particle filter algorithm (approximating the optimal twisting function) in the discrete-time setting. Obviously, this also works in the reverse direction. Indeed, we will provide a concrete example for algorithm design in Section <ref>. Now, in order to study the connection between the discrete and continuous models, we need the following restriction for their transition kernels P, P̂ and the functions determining the targets Z_con(x), Z_dis(x), so that we can establish the convergence results rigorously below. In detail, fix the function b: ℝ^d →ℝ^d, time step η > 0, and T :=nη. We consider the Gaussian transition kernel for the discrete model: P̂^η(x,dy) := (4πη)^-d/2exp(-1/4η|y-x-η b(x)|^2 ) dy. 
Recall that the transition kernel for the continuous model P_t is associated with the SDE (<ref>) with drift b(·) and volatility √(2), and it satisfies the Fokker-Planck equation (<ref>). Clearly, P̂^η is the one-step transition kernel of the corresponding Euler-Maruyama scheme. Moreover, for convergence analysis in this section, we consider the discrete model determined by (P̂^η; ĝ_0:n(·); x) and the continuous model determined by (P_t;log g(·,·),g_T(·);x). Correspondingly, the statistical quantities of interest are respectively Z_dis(x) := 𝔼_x[∏_k=0^n-1ĝ_k(X̂_k)ĝ_n(X̂_n)] = 𝔼_x[exp(∑_k=0^n-1∫_kη^(k+1)ηlog(ĝ_k(X̂_k))^η^-1ds)ĝ_n(X̂_n)], and Z_con(x) = 𝔼_x[exp(∫_0^Tlog g(s,X_s) ds) g_T(X_T)]. In what follows, under suitable assumptions for the functions b(·), ĝ_k(·), g(·,·) and g_T(·), we will consider the convergence from discrete-time model to the continuous-time model. In detail, we will show that at the time step η→ 0, P̂^η converges to P_t, and Z_dis(x) converges to Z_con(x). We need the following assumptions to ensure convergence: We assume the following conditions for b:ℝ^d →ℝ^d, ĝ_k: ℝ^d →ℝ (0≤ k ≤ n), g:ℝ_+×ℝ^d →ℝ and g_T: ℝ^d →ℝ: (a) b(·) is L_b-Lipschitz (i.e. |b(x) - b(y)|≤ L_b |x-y|, ∀ x,y∈ℝ^d), and sup_0≤ i ≤ n𝔼|X̂_iη|^2 < ∞ uniform in η. (b) ĝ_n(x) → g_T(x), η^-1logĝ_k(x) →log g(kη, x) uniformly in x and k (0 ≤ k ≤ n-1) as η→ 0. (c) log g(t, x) is continuous in t ∈ [0,T] uniform in x. η^-1logĝ_kη(x) is Lipschitz in x uniformly for all k (0 ≤ k ≤ n-1) and η>0. (d) ĝ_n(·) is bounded. For any t ∈ [0,T], the functional X_[t,T]↦ e^∫_t^T log g(τ, X_τ)dτ is continuous and bounded. Above, condition (a) is used to ensure the existence and uniqueness of the strong solution for (<ref>) and the convergence of transition kernel in Proposition <ref> below. Other conditions are required for the convergence of Z_dis(x) to Z_con(x) in Proposition <ref> below. Also, we will give a concrete example in Example <ref> below which satisfies all the conditions in Assumption <ref>. We also remark here that the upper bound for the second moment sup_0≤ i ≤ n𝔼|X̂_iη|^2 can be proved if one assumes: (1) L_b-Lipschitz condition for b(·); (2) the following confining condition for b(·): x · b(x) ≤ -C_1 |x|^2 + C_2, ∀ x ∈ℝ^d, where C_1, C_2 are two positive constants. Moreover, under such assumptions, the upper bound sup_0≤ i ≤ n𝔼|X̂_iη|^2 can be shown to be independent of the time T (recall that T = nη). In the following proposition, we establish the convergence of transition kernels in terms of the KL-divergence or the total variation (TV) distance. Note that for two probability measures μ, ν on ℝ^d, the KL-divergence and TV distance are given by (μν) := { ∫_ℝ^dlogdμ/dν μ(dx), μ≪ν, +∞, otherwise. . TV(μ,ν) := sup_A∈ℬ(ℝ^d) |μ(A) - ν(A)|. Consider the Markov transition kernels P̂^η and P_η (=P_t=η) defined in (<ref>), (<ref>), respectively. Suppose that the condition (a) in Assumption <ref> holds. Then for any x∈ℝ^d, as η→ 0, P̂^η(x,·) → P_η(x,·) in terms of KL-divergence or TV distance. In detail, for η < 1, there exists a positive constant C independent of η and d such that ( P̂^η(x,·) P_η(x,·) ) ≤ Cdη^2 → 0 as η→ 0, and TV( P̂^η(x,·) , P_η(x,·) ) ≤ C√(d)η→ 0 as η→ 0. 
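For a concrete illustration (ours, not from the paper), take the one-dimensional drift b(x) = -x. Then both P̂^η(x,·) and P_η(x,·) are Gaussian, so the KL divergence in the proposition is available in closed form and the quadratic rate in η can be checked directly; the ratio printed below approaches a constant as η decreases, consistent with the O(η^2) bound.

```python
import numpy as np

def kl_gauss(m1, v1, m2, v2):
    """KL( N(m1, v1) || N(m2, v2) ) for scalar Gaussians."""
    return 0.5 * (np.log(v2 / v1) + (v1 + (m1 - m2) ** 2) / v2 - 1.0)

x = 1.3                                  # arbitrary starting point
for eta in [0.1, 0.05, 0.025, 0.0125]:
    # Euler-Maruyama kernel of dX = -X dt + sqrt(2) dB over time eta:  N(x(1 - eta), 2*eta)
    m_em, v_em = x * (1.0 - eta), 2.0 * eta
    # exact OU transition kernel over time eta:  N(x e^{-eta}, 1 - e^{-2 eta})
    m_ex, v_ex = x * np.exp(-eta), 1.0 - np.exp(-2.0 * eta)
    kl = kl_gauss(m_em, v_em, m_ex, v_ex)
    print(f"eta = {eta:7.4f}   KL = {kl:.3e}   KL/eta^2 = {kl / eta**2:.3f}")
```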
Note that the Proposition considers the local truncation error of the Euler-Maruyama scheme in terms of TV distance and KL divergence, consistent with the existing results <cit.>: consider the solution X_t (t ≥ 0) to the continuous-time SDE (<ref>) and its Euler-Maruyama discretization X̂_nη (n ∈ℕ_+) with time step η, for any time interval length T such that η divides T, one can show that D_KL(Law(X̂_T)Law(X_T)) ≲ Tη. This then reduces to (<ref>) when T = η. Moreover, recent literature (for instance, <cit.>) has shown that, if one assumes stronger smoothness conditions for the drift b(·), it is possible to improve the upper bound for the time-discretization error in terms of the KL-divergence from O(Tη) to O(Tη^2). Consequently, the rate in (<ref>) is O(η^3). The same arguments also hold for TV distance via Pinsker's inequality. The next Proposition guarantees that under the settings of Assumption <ref>, the target Z_dis(x) converges to Z_con(x) as η→ 0. For x ∈ℝ^d, recall the definitions of Z_dis(x), Z_con(x) in (<ref>), (<ref>), respectively. Then under Assumption <ref>, Z_dis(x) → Z_con(x) pointwisely as η→ 0. Under current assumptions, one cannot obtain an explicit convergence rate due to conditions (b) and (c) in Assumption <ref>. However, if we use a stronger version Assumptions <ref>, it is possible to obtain an explicit rate. In fact, if we replace conditions (b) and (c) in Assumption <ref> by the following stronger quantitative version (b') g_T(·) is bounded. |ĝ_n(x) - g_T(x) | ≤ C_1η^α_1, |η^-1logĝ_kη(x) - log g_kη(x) | ≤ C_2η^α_2, 0 ≤ k ≤ n-1, where C_1, C_2, α_1, α_2 are positive constants independent of x, k. (c')log g_s(x) is α_3-Hölder continuous in t ∈ [0,T] uniform in x. η^-1logĝ_kη(x) is Lipschitz in x uniformly for all k. then, following exactly the same derivation, one can easily obtain for small η, |Z_dis(x) - Z_con(x)| ≤η^α, where α := min (1/2, α_1, α_2, α_3). We end this section by giving a concrete example satisfying all the conditions in Assumption <ref>. In particular, it satisfies the time-continuity condition of log g(t,x), which might not be very direct at first glance. [an example satisfying time-continuity and other assumptions] Note that one important assumption is that the function log g(t,x) is time-continuous. Recall the state space models in Remark <ref> and Remark <ref>. In such model, if the observation Y_t at time t only depends on the value of X_t (for instance, Y_t ∼ N(X_t, Σ) with Σ being a constant covariance matrix), then the time-continuity for the function g^t_x|y would not hold, since the observation process Y itself is not continuous in time due to the randomness. However, we remark in the following that by considering time-dependency for g^t_y|x (recall its definition in (<ref>)), we can indeed find such time-continuous log g(t,x) In fact, we consider the following continuous-time state space model: X_t = x + ∫_0^t b(X_s) ds + √(2)B_t, Y_t = X_t + B̃_t+t_0, 0≤ t≤ T for two independent Brownian motions (B_t)_t≥ 0, (B̃_t)_t≥ 0. Here we introduce the positive constant t_0 only to avoid a potential singularity of 1/t at t=0: As will be discussed below, the convergence would require the time continuity of - |x - y_t|^2/2(t+t_0) in the closed interval [0,T], thus the choice of some t_0 > 0. Correspondingly, for the discrete model, for 0≤ k≤ n, we define Ŷ_k by X̂_k+1 = X̂_k + η b(X̂_k) + √(2)(B_(k+1)η - B_kη), Ŷ_k= X̂_k + B̃_kη+t_0, 0≤ k ≤ n. 
Now suppose that we have been given observation y_t (0≤ t≤ T) and ŷ_k (0≤ k ≤ n) (note that by our construction above, the observed process y_t of the continuous model is time-continuous). Clearly, the expressions for g(t,x) and ĝ_k(x) are explicit and are of the Gaussian form. Therefore in this case, we are able to convert the conditions in Assumption <ref> for g and ĝ to assumptions for y and ŷ. Basically, defining 𝒩(·;μ,Σ) to be the density of d-dimension Gaussian distribution with mean μ and covariance Σ, we need the followings: * 𝒩(x;ŷ_n,T+t_0) →𝒩(x;y_n,T+t_0), |x - ŷ_k|^2 → |x-y_k|^2 uniformly in x, k as η→ 0. * For any t ∈ [0,T], the functional X_[t,T]↦ e^∫_t^T -|X_τ - y_τ|^2/2(t+t_0)dτ is bounded. * - |x - y_t|^2/2(t+t_0) is continuous in t ∈ [0,T] uniform in x. - |x - ŷ_k|^2/2(kη+t_0) is Lipschitz in x uniformly for all k. It is then easy to see that these conditions generally require (1) time-continuity for the observation y_t, (2) the observation for the two models y_kη, ŷ_k are very close. Clearly, since X̂ in (<ref>) is the Euler-Maruyama discretization of X in (<ref>), the continuous observed processes y, ŷ stay close when the time step η is small. Furthermore, once these requirements are met, by Proposition <ref>, as η→ 0, the rescaled target Z̃_dis(x) := 𝔼_x[∏_k=0^n-1 (ĝ_k(X̂_k))^η^-1ĝ_n(X̂_n)] converges pointwise to Z_con(x) = 𝔼_x[exp(∫_0^Tlog g(s,X_s) ds) g_T(X_T)]. As a final remark in this section, the time-continuity of the observation (in particular, the form (<ref>)) is only required when proving the convergence from the discrete model to the continuous model. This discrete-continuous bridge then allows us to construct new twisted particle filters motivated by existing continuous-time importance sampling algorithms (e.g., see Section <ref>). However, since we construct algorithms directly for the discrete setting, the new algorithm does not necessarily require such time-continuity condition of the observation. In fact, the proposed algorithms work for simple linear Gaussian models (see Section <ref>), whose observation can be defined by Y_k ∼ N(X_k, Σ) with Σ being a constant covariance matrix. § TWISTED-PATH PARTICLE FILTER (TPPF) In this section, we give a concrete example of how to explore new twisted particle filters guided by continuous-time importance sampling algorithms. Recall that in Section <ref>, we have proved the convergence from the discrete model to the continuous model under suitable settings. As has been mentioned at the beginning of Section <ref>, this connection we found can guide us to explore novel particle filter algorithms based on existing methods in the continuous-time setting. In this section, motivated by mature algorithms from the continuous-time community, we propose a Twisted-Path Particle Filter (TPPF). In this new algorithm for the discrete-time model, we treat the KL-divergence between twisted path measures and the optimal path measures as the loss function, parameterize the twisting function via neural networks, and approximate the optimal twisting function via suitable optimization methods such as the stochastic gradient descent. We expect this proposed algorithm to have the following strengths: * By parameterizing the twisting function using neural networks, TPPF is expected to overcome the curse of dimensionality compared with algorithms based on Galerkin approximation. 
* Due to the universal approximation theorem of neural networks <cit.>, TPPF is more versatile and less problem-dependent, while in some other existing algorithms, the twisting function is learnt within a problem-dependent function class. * Compared with some other frameworks for learning the twisting function (see <cit.> for instance), which are based on the backward recursive relation (<ref>), our method has relatively stronger theoretical foundations (see Lemma <ref>, Proposition <ref>, Corollary <ref> below). In the rest of this section, we will first explain the motivation of TPPF by presenting one importance sampling algorithm mainly from a variational perspective. Then we will formulate the details of the TPPF algorithm under the guidance of the existing continuous-time algorithm. Some theoretical results as well as implementation details will also be provided. §.§ Motivation: importance sampling via variational characterization in continuous-time setting In the continuous-time model, it has been relatively well-studied to find the optimal control variate from a variantional perspective. In order to seek the optimal control variate with the zero variance property, instead of solving the PDE we derived in (<ref>), people convert this to an optimal control problem from a variational perspective <cit.>. We first give some basis on the so-called Donsker-Varadhan variational principle <cit.>, which is independent of whether the model is time-continuous or time-discrete. Given some function W(·): Ω→ℝ and probability measure P on Ω, the following relation holds: -log𝔼^X ∼ P[exp(-W(X) )] = inf_Q ∈𝒫(Ω){𝔼^X∼ Q[W(X)] + (Q P)}, where the minimum is over all probability measures on Ω. Take Ω to be the path space corresponding to the continuous-time model and choose W(·) to be the path integral for some stochastic process (X_s)_0≤ s≤ T, as in <cit.> W(X) := -∫_0^T h(s,X_s) ds - log g(X_T) for some h(·,·), g(·) defined in Section <ref>, the variational relation (<ref>) becomes -log𝔼^X_[0,T]∼ P[exp(∫_0^T h(s,X_s) ds + log g(X_T) )] =inf_Q∈𝒫(Ω),Q<<P𝔼^X_[0,T]∼ Q[∫_0^T -h(s,X_s) ds - log g(X_T) ] + (QP). In particular, if the path measure P above is the law of solution to some X_[0,T] satisfies an SDE as introduced in Section <ref> dX_t = b(X_t)dt + √(2)dW, 0 ≤ t ≤ T, X_0 = x, then the probability Q above is characterized by the path measure of the controlled SDE after adding a control variate u to the drift. In detail, consider the controlled the SDE dX^u_t = b(X^u_t)dt + √(2)u(t,X_t^u)dt + √(2)dW, 0 ≤ t ≤ T, X^u_0 = x. By Girsanov's theorem, the dual relation (<ref>) becomes -log𝔼_x[exp(∫_0^T h_s(X_s) ds + log g_T(X_T) )] = inf_u∈𝒰𝔼_x[-∫_0^T h_s(X^u_s) ds + 1/2∫_0^T |u_s(X_s^u)|^2 ds - log g(X_T^u)], and under the setting of Section <ref>, the set of admissible controls 𝒰 is usually chosen to be as follows (see for instance Section 1 in <cit.>) 𝒰={u ∈ C^1(ℝ^d ×[0, T], ℝ^d): u grows at most linearly in x }. Denote J(u) := 𝔼_x[-∫_0^T h_s(X^u_s) ds + 1/2∫_0^T |u_s(X_s^u)|^2 ds - log g(X_T^u)]. It can be verified that the minimizer u^* of the functional J(u) yields a zero-variance importance sampler in Proposition <ref> for the continuous setting, namely, exp(∫_0^T h(s,X^u^*_s) ds + log g(X^u^*_T) ) dP/dP^u^*(X^u^*) = Z_con(x), P^u^* - a.s., where P^u^* is the path measure induced by the controlled process X^u^* in (<ref>) associated with the optimal control u^*, and P is the path measure induced by the original process X in (<ref>). 
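To see why, one can combine Girsanov's theorem, which gives (P^u P) = 𝔼_x[1/2∫_0^T |u_s(X^u_s)|^2 ds], with the density of the optimal change of measure dP^u^*/dP = e^∫_0^T h_s(X_s) ds + log g(X_T) / Z_con(x); the following short computation is standard and is included here only for completeness: (P^u P^u^*) = 𝔼^P^u[ log dP^u/dP - log dP^u^*/dP ] = (P^u P) - 𝔼_x[∫_0^T h_s(X^u_s) ds + log g(X^u_T)] + log Z_con(x) = 𝔼_x[-∫_0^T h_s(X^u_s) ds + 1/2∫_0^T |u_s(X^u_s)|^2 ds - log g(X^u_T)] + log Z_con(x) = J(u) + log Z_con(x). Hence J(u) ≥ -log Z_con(x), with equality exactly when P^u = P^u^*, i.e. when u = u^*; this is the sense in which minimizing J(u) over 𝒰 recovers the zero-variance sampler above, and it also explains why J(u) differs from (P^u P^u^*) only by a constant.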
Therefore, it is reasonable to find the optimal control variates by solving the following stochastic optimal control problem: min_u ∈𝒰 J(u) = 𝔼_x[-∫_0^T h_s(X^u_s) ds + 1/2∫_0^T |u_s(X_s^u)|^2 ds - log g(X_T^u)]. In practice, one may parameterize u(t,x) by u = u(θ;t,x) and iteratively solve the optimization problem. In detail, given a suitable loss function L(u) = L(F(X^u_[0,T],u)) (e.g., L = J), in each iteration one implements the following steps: * With the current control u(t,x) = u(θ;t,x), simulate N realizations of the controlled SDE X^u_[0,T], and calculate the loss L and its derivative ∇_θ L using the N realizations of X^u_[0,T]. * Update θ using ∇_θ L via some suitable method, for instance, stochastic gradient descent. As mentioned in Section <ref>, such an iterative framework is exactly the continuous-time policy gradient (PG) method <cit.>, also called iterative diffusion optimization (IDO) in some other literature <cit.>. As a remark, J(u) of the form (<ref>) offers a good choice of the loss L, since it equals (P^uP^u^*) up to a constant. Moreover, other forms of loss have also been studied to approximate the optimal control u^*; for instance, the so-called cross-entropy method <cit.> uses the loss (P^u^*P^u) instead. More discussion on the choice of loss function and detailed derivations will be given in the next subsection for the TPPF algorithm. §.§ Approximating the optimal twisting function in the discrete-time setting Now we propose a novel framework to approximate the optimal twisting function φ^* defined in (<ref>), guided by the existing methods for learning u^* in the continuous-time setting. Different from the iteration method proposed in <cit.> (which is similar to the temporal difference (TD) learning in the reinforcement learning community <cit.>) based on the recursive formula in (<ref>), our method learns the optimal twisting function along the whole path via a neural network. Hence, we name the proposed algorithm the “Twisted-Path Particle Filter" (TPPF). As discussed at the beginning of Section <ref>, we expect the proposed method TPPF to (1) perform better in high dimensions; (2) be more robust and have wider applications; (3) have relatively stronger theoretical foundations. In what follows, we first derive a similar variational principle and some relations between different choices of loss induced by the twisting function in the discrete-time setting. After that, we give our detailed algorithms corresponding to possible choices of the loss function. Some explicit formulas and implementation details are also discussed at the end of this section. First, note that the Donsker-Varadhan variational principle can also be applied to the discrete-time model in Section <ref>. In fact, after a change of measure, we have -log Z_dis(x) = inf_φ J(φ), where φ = (φ_1,…,φ_n ), φ_i(·) (1 ≤ i ≤ n) are positive, continuous functions on ℝ^d. As a direct result of the dual relation, choosing W(X̂) = -∑_k=0^nlogĝ_k(X̂_k) for the discrete Markov chain (X̂_k)_k=0^n, J(φ) := 𝔼_x[-∑_k=0^nlogĝ_k(X̂^φ_k)] + (P^φP^1), where P^φ is the path measure induced by the twisting function φ as in (<ref>), and P^1 is the original path measure (or equivalently, the path measure with twisting function equal to 1). 
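Both the continuous-time iteration just described and the discrete TPPF training below share the same schematic structure: simulate trajectories under the current parameters, evaluate a KL-type loss by Monte Carlo, and update the parameters by stochastic gradient descent. The following PyTorch sketch (ours; u_net is assumed to be a torch.nn.Module mapping (t,x) to a control in ℝ^d, and b, h, g_log are vectorized model functions) spells this out for the relative-entropy loss J(u).

```python
import torch

def J_loss(u_net, b, h, g_log, x0, T=1.0, n_steps=50, n_traj=256):
    """Monte Carlo estimate of J(u) = E[ -int h dt + 0.5 int |u|^2 dt - log g(X^u_T) ]
    along Euler-Maruyama trajectories of dX = (b(X) + sqrt(2) u(t, X)) dt + sqrt(2) dB."""
    dt = T / n_steps
    x = x0.repeat(n_traj, 1)                       # x0 is a 1-D tensor of size d
    run_h = torch.zeros(n_traj)
    run_u2 = torch.zeros(n_traj)
    for i in range(n_steps):
        t = torch.full((n_traj, 1), i * dt)
        u = u_net(torch.cat([t, x], dim=1))        # shape (n_traj, d)
        run_h = run_h + h(i * dt, x) * dt
        run_u2 = run_u2 + 0.5 * (u ** 2).sum(dim=1) * dt
        dB = torch.randn_like(x) * dt ** 0.5
        x = x + (b(x) + 2 ** 0.5 * u) * dt + 2 ** 0.5 * dB
    return (-run_h + run_u2 - g_log(x)).mean()

def train(u_net, b, h, g_log, x0, n_iters=1000, lr=1e-3):
    opt = torch.optim.Adam(u_net.parameters(), lr=lr)
    for _ in range(n_iters):
        opt.zero_grad()
        loss = J_loss(u_net, b, h, g_log, x0)      # resimulate under the current control
        loss.backward()                            # gradient via auto-differentiation
        opt.step()
    return u_net
```

Replacing J_loss by a Monte Carlo estimate of L_RE, L_CE, or L_RECE for the twisted chain gives the corresponding TPPF training loops discussed below.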
More precisely, under P^φ, the Markov chain X^φ evolves under the twisted transition density P̂^φ_k(x_k-1,x_k) = φ(k,x_k)P̂(x_k-1,x_k)/P̂[φ](k,x_k-1), 1 ≤ k ≤ n, with the normalizing constant P̂[φ](k,x_k-1) defined by P̂[φ](k,x_k-1) = ∫φ(k,y)P̂(x_k-1,y)dy, and under P the Markov chain X evolves under the untwisted transition density P̂(x_k-1,x_k). Consequently, the twisted path measure associated with the twisting function φ has the following explicit expression: P^φ(dx_1:n) = ∏_k=1^n φ(k,x_k)P̂(x_k-1,x_k)/P̂[φ](k,x_k-1) dx_1:n. Similarly as in the continuous-time setting, it can be verified that the optimal twisting function φ^* with zero-variance property (discussed in Proposition <ref>) is the minimizer of the functional J(φ) above, and minimizing the functional J(φ) is equivalent to minimizing the KL-divergence (P^φP^φ^*). We summarize this result in the Proposition below: Consider the functional J(φ) defined in (<ref>) and the optimal twisting function φ^* defined in (<ref>). Then it holds J(φ) = J(φ^*) + (P^φP^φ^*). Moreover, one can derive the following explicit formula for J(φ) and (P^φP^φ^*): J(φ) = 𝔼_x[-∑_k=0^n logĝ_k(X̂_k^φ) + ∑_k=1^n logφ(k,X̂_k^φ)/P̂[φ](k,X^φ_k-1)] and (P^φP^φ^*) = 𝔼_x[-∑_k=0^n logĝ_k(X̂_k^φ) + ∑_k=1^n logφ(k,X̂_k^φ)/P̂[φ](k,X^φ_k-1)] + logφ^*_0(x) After a change of measure, it is easy to see that J(φ) or (P^φP^φ^*) has an alternative expression: (P^φP^φ^*) = 𝔼_x[exp(∑_k=1^nlogφ_k(X_k)/φ̃_k-1(X_k-1)) (-∑_k=0^n log g_k(X_k) + ∑_k=1^n logφ_k(X_k)/φ̃_k-1(X_k-1))] + logφ^*_0(x). This expression is in particularly useful in the cases where sampling from the original transition P̂(x_k-1,x_k) is much easier than sampling from the twisted transition P̂_k^φ(x_k-1,x_k). For instance, if φ is parameterized by some neural network, basic sampling methods like rejecting sampler (proposed in <cit.> for the twisted particle filter) would not be so efficient, suffering from low, unstable acceptance rate. Moreover, it is natural to doubt whether the Monte Carlo approximation for the loss (and its gradient) would deteriorate after a change of measure. So far we have got no theoretical guarantee for this, but empirically, the experiments show that (<ref>) can approximate the loss well. Guided by various variational-based methods in continuous-time setting (discussed in Section <ref>), we can also consider different loss functions other than J(φ) or (P^φP^φ^*). Before seeking blindly for other possible choices, let us first study the relationship between the relative variance and KL-divergence, since our final goal of approximating the optimal twisting function is just to reduce the variance. In fact, using a generalized Jensen's inequality, we are able to derive the following: Given some function W(·): Ω→ℝ and probability measures ν, ν̃ on Ω which are absolutely continuous with each other, define Z := 𝔼^X ∼ν[e^-W(X)] = 𝔼^X̃∼ν̃[e^-W(X̃)dν/dν̃(X̃)] and the relative variance with respect to ν̃ defined by r(ν̃) :=√(Var_ν̃(e^-Wdν/dν̃))/Z Suppose there is an optimal probability measure ν^* with the zero-variance property dν^*/dν = e^-W/Z ν-a.s., Then the following estimates hold: * r^2(ν̃) ≥ e^(ν^*ν̃)-1 * If the constants m:=inf_E ν^*(E)/ν̃(E), M = sup_E ν^*(E)/ν̃(E) exist, then e^m(ν̃ν^*) + (ν^*ν̃)-1 ≤ r^2(ν̃) ≤ e^M(ν̃ν^*) + (ν^*ν̃)-1 Consider the discrete-time model with (P̂(·,·); ĝ_0:n(·);x), we have the following direct corollary of Proposition <ref>. 
Take Ω = ℝ^n, W(X̂) = -∑_k=0^nlogĝ_k(X̂_k), ν = P^1, ν̃ = P^φ, and ν^* = P^φ^* (recall the definition of P^φ in (<ref>)) in Proposition <ref>. Then there exists 0< m ≤ M depending on φ, φ^* such that e^m(P^φ P^φ^*) + (P^φ^* P^φ)-1 ≤ r^2(ν̃) ≤ e^M(P^φ P^φ^*) + (P^φ^* P^φ)-1. Motivated by Proposition <ref> and Corollary <ref> above, it is then reasonable to consider the loss function of the form a(P^φ P^φ^*) + ( P^φ^* P^φ) for some positive a. Moreover, when P^φ approximates the target P^φ^* well, m, M in Proposition <ref> are approximately 1, so it is reasonable to choose a=1, namely, treat (P^φ P^φ^*) + (P^φ^* P^φ) as a loss function for ν̃. Of course with these bounds, it is also reasonable to consider the loss (P^φ^* P^φ), which corresponds to the loss in the well-studied cross-entropy method in the continuous-time settings <cit.> (and this is the reason why we denote it by L_CE). When implementing the TPPF algorithm, we need explicit expressions for the loss functions chosen above, so that we can calculate them via Monte Carlo approximation. In fact, denoting L_RE:=(P^φ P^φ^*), L_CE := (P^φ^* P^φ), L_RECE = L_RE+L_CE. Simple calculations yield L_RE =𝔼_x[-∑_k=0^n logĝ_k(X̂_k^φ) + ∑_k=1^n logφ(k,X̂_k^φ)/P̂[φ](k,X̂_k-1^φ)] + logφ^*(0,x) = 𝔼_x[exp(∑_k=1^nlogφ(k,X̂_k)/P̂[φ](k,X̂_k-1))(-∑_k=0^n logĝ_k(X̂_k) + ∑_k=1^n logφ(k,X̂_k)/P̂[φ](k,X̂_k-1))] + logφ^*(0,x) , where we can use either the first line (via the twisted Markov chain X̂^φ) or the second line (via untwisted Markov chain X̂) in the training. See Section <ref> for more details. Also, for L_CE, we have L_CE = 1/φ_0^*(x)𝔼_x[exp(∑_k=0^n logĝ_k(X̂_k) ) (∑_k=1^n logĝ_k(X̂_k) - logφ^*(0,x) )] (=constant) - 1/φ_0^*(x)𝔼_x[exp(∑_k=0^n logĝ_k(X̂_k) ) (∑_k=1^n logφ(k,X̂_k)/P̂[φ](k,X̂_k-1))], Note that the coefficient exp(∑_k=0^n logĝ_k(X̂_k) ) / φ_0^*(x) in second line of (<ref>) may bring numerical instability to L_CE-training. First, it is possible that in some extreme cases, we have no idea of the value of φ^*(0,x) (which is our target Z_dis(x)), and sometimes we can't even get a good approximation of it. Second, this coefficient above is determined by ĝ_k and the (random) position of X̂_k, so it is numerically unstable especially when the length of the Markov Chain n is large. Moreover, this observation is also empirically true in some examples, see details in Section <ref>. Given the loss functions above, to approximate optimal twisting function φ^* in practice, it is remaining to parameterize φ and learn the parameters according to the loss functions. In our algorithm, we parameterize the twisting function by neural network φ_k(x) = φ(θ;k,x), where θ denotes all the parameters of the network, and k, x are the input. For numerical stability, it is better to treat logφ(θ;k,x)=NN(θ;k,x) as the output so that φ_k(x) = exp(NN(θ;k,x)). We refer to Section <ref> for more details and give the structure of the proposed TPPF in Algorithm <ref> below: As a final remark, recall that when implementing Algorithm <ref>, we need to calculate the gradient ∇_θ L via Monte Carlo approximation. Note that when the loss function L is approximated via the untwisted Markov Chain X̂, the gradient can be calculated via auto-differentiation. However, if the loss L is approximated via the twisted Markov Chain X̂^φ (see the first line in (<ref>)), the computation of ∇_θ L is not that direct, since X̂^φ depends on the parameter θ. Here, we provide the explicit formula for ∂_θ_i L_RE, where L_RE is given in the first line of (<ref>). 
In fact, we first write L_RE into the form of the second line in (<ref>), where X̂ in it is independent of θ. Then, after a change of measure, we can write the gradient into the following: ∂_θ_iL_RE =𝔼_x[(1-∑_k=1^n logĝ_k(X̂_k^φ) + ∑_k=1^n logφ(k,X̂_k^φ)/P̂[φ](k,X̂_k-1^φ))(∑_k=1^n log∂_θ_iφ(k,X̂_k^φ)/P̂[∂_θ_iφ](k,X̂_k-1^φ))] . For a further implementation detail, to obtain ∂_θ_iL_RE in (<ref>), there is no need to calculate ∂_θ_iφ. Instead, we can first calculate the value of 𝔼_x[(1-∑_k=1^n logĝ_k(X̂_k^φ) + ∑_k=1^n logφ(k,X̂_k^φ)/P̂[φ](k,X̂_k-1^φ))(∑_k=1^n logφ(k,X̂_k^φ)/P̂[φ](k,X̂_k-1^φ))] . using Monte Carlo approximation, and then “detach" the first term above so that it would contain no gradient information. Consequently, auto-differentiation via (one-time) back-propagation gives the desired value of the gradient in (<ref>). § NUMERICAL EXAMPLES In this section, we test the proposed TPPF algorithm with loss function L_RE, L_CE, or L_RECE on different models including the linear Gaussian model, the Lorenz-96 model, and the stochastic volatility model. We compare our algorithm with well-known competitors including the bootstrap particle filter (BPF) <cit.>, the iterated auxiliary particle filter (iAPF) <cit.>, and the fully-adapted auxiliary particle filter(FA-APF) <cit.>. In most examples, TPPF performs better than other algorithms. §.§ Implementation details Before the main numerical experiment, let us give some details on how to parameterize the twisting function φ, how to sample from the twisted Markov transition kernel P̂^φ, and how to calculate the normalizing constant P̂[φ](k,x). In fact, in the following three experiments, We consider two ways of parameterization: the robust non-parametric way and the problem-dependent parametric way. In particular, for the linear Gaussian model, we use the problem-dependent implementation, while for the other two models we use the non-parametric implementation. * The non-parametric implementation. As discussed in Section <ref>, we approximate the twisting function by logφ_k(x) = NN(θ;k,x), where NN(θ;k,x) is a neural network with parameters θ and inputs k, x (0≤ k ≤ n, x ∈ℝ^d). In our experiments, we set NN as DenseNet <cit.> with two hidden layers. Moreover, we add a tanh activation to the final layer so that the output NN(θ;k,x) takes value in (ϵ,1), where ϵ>0 is a hyperparameter. This bounded restriction is designed for the following reject sampling step when sampling from the twisted kernel P̂^φ. In fact, since we usually do not have much information of the current twisting function φ, so it is not easy to sample from the twisting kernel P̂^φ_k(x,·) ∼φ(k,·)P̂(x,·). Under the robust implementation setting, we make use of the reject sampling recently proposed in <cit.>: Using X^φ_k and the untwisted transition kernel P̂(X^φ_k,·), propose a new position X_pro, accept it with probability φ(X_pro). Repeat until first acceptance. Moreover, in the non-parametric implementation setting, the normalizing constant is calculated via Monte Carlo approximation using Ñ samples (following <cit.>, we choose Ñ = 50 in both Lorenz-96 and NGM-78 models): P̂[φ](k,x) := ∫φ(k,y) P̂(x,dy) ≈1/Ñ∑_i=1^Ñφ(k,U_i), U_i ∼P̂(x,·) i.i.d. Also, to make the training faster, we use the untwisted process X instead of X^φ when calculating the loss and the gradient (i.e. for L_RE, the loss is computed via the first line in (<ref>) and its gradient is computed using (<ref>)). * The parametric implementation. 
In the linear Gaussian model, we can calculate the analytical solution of Z_dis(x) using the Kalman filter <cit.>. Moreover, we can analytically calculate the optimal twisting function using the backward recursive relation (<ref>), and clearly the optimal twisting function is also Gaussian. Therefore, with so much knowledge of the solution, it is reasonable to consider a problem-dependent way to parameterize the twisting function and make the learning more efficient. In more detail, via a mean-variance estimation framework, we set μ_k = NN_1(θ_1;k) ∈ℝ^d, σ_k^2 = NN_2(θ_2;k) ∈ℝ_+, 0≤ k ≤ n, and then set φ(k,x) = C_k N(x;μ_k,σ^2_k). Note that the twisting function is invariant under constant scaling, so we do not care about the constant that multiplies the Gaussian. This means there is no need to learn C_k in (<ref>), and in our experiment we simply take C_k = (2πσ_k^2)^-d/2 so that φ(k,x) = exp(-|x-μ_k|^2/2σ^2_k). Also, under this setting, we no longer need the rejection sampling and the inner-loop Monte Carlo estimate of the normalizing constant, because everything can be calculated analytically. Consequently, compared with the robust way, the problem-dependent implementation is less time-consuming and the neural network is easier to train. Another important remark concerns the relative fairness of the comparisons in the numerical experiments. We learn and run the particle filters using the same particle numbers. Regarding time complexity, we admit that, as a training-based algorithm, our algorithm is more time-consuming, and in fact a completely fair comparison is not possible due to the different, non-optimal choices of network structures. In order to conduct a relatively fair comparison, we restrict the number of training iterations so that the total running time is comparable to that of the competitors.
§.§ Linear Gaussian model
As a first example, we consider the linear Gaussian model, which is also the discretization of an OU process X_t with linear Gaussian observations Y_k ∼ N(·;B_k X_k,Σ_OB), X_k+1 = X_k + Δ t A X_k + √(Δ t) N(0,Σ). We choose Δ t = 0.01, T = 0.5 (so the length of the discrete Markov chain is n = T/Δ t = 50), B_k = I_d, A = -I_d, σ = I_d, Σ_OB = I_d. We consider the parametric implementation with configurations d ∈{2,5,15,20 }. The boxplots in Figure <ref> compare the TPPF (with L_RE, L_CE, or L_RECE) with competitors including BPF, iAPF and FA-APF using 1000 replicates. The red cross represents the mean and the red dashed line represents the median. We also report the empirical standard deviations in Table <ref>, and in Table <ref> we report the average relative effective sample size (ESS-r) defined by ESS(W_1:N) = 1 / (N∑_i^N W_i^2) with ∑_i=1^N W_i = 1. Clearly, TPPF with L_RE and L_RECE outperforms BPF and FA-APF, especially when the dimension is high, while TPPF with L_CE suffers from the curse of dimensionality due to the numerical instability discussed in Remark <ref>. Moreover, iAPF performs best in this linear Gaussian model, because iAPF learns the twisting function within a Gaussian function class, each time solving a standard restricted least-squares minimization problem. Therefore, our training-based method cannot perform as well as iAPF in this experiment. However, as we will see in the other experiments, when the optimal twisting function is not in the Gaussian family our algorithm performs better than iAPF.
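Because both the model transition and the parametric twisting function in this example are Gaussian, the one-step twisted kernel and its normalizing constant are available in closed form, which is exactly why the parametric implementation needs neither rejection sampling nor an inner-loop Monte Carlo estimate. The following minimal numpy sketch spells out a single step; the per-step parameters mu_k and sigma2_k stand in for the outputs of the networks NN_1 and NN_2, and the concrete numbers are placeholders rather than trained values.

```python
import numpy as np

def twist_gaussian_step(x, dt, mu_k, sigma2_k):
    """One twisted transition for X_{k+1} = (1 - dt) X_k + sqrt(dt) * N(0, I)
    (the model above with A = -I, Sigma = I), twisted by
    phi_k(y) = exp(-|y - mu_k|^2 / (2 * sigma2_k)).
    Returns a sample from the twisted kernel and log P_hat[phi](k, x)."""
    m = (1.0 - dt) * x                        # untwisted transition mean
    v = dt                                    # untwisted transition variance per coordinate
    v_tw = 1.0 / (1.0 / v + 1.0 / sigma2_k)   # product of two Gaussians is again Gaussian
    m_tw = v_tw * (m / v + mu_k / sigma2_k)
    # log of the normalizing constant, i.e. the integral of phi_k(y) N(y; m, v I) dy
    log_norm = np.sum(0.5 * np.log(sigma2_k / (sigma2_k + v))
                      - (m - mu_k) ** 2 / (2.0 * (sigma2_k + v)))
    y = m_tw + np.sqrt(v_tw) * np.random.randn(*x.shape)
    return y, log_norm

# Toy usage with placeholder parameters (d = 2, dt = 0.01).
x = np.zeros(2)
y, log_norm = twist_gaussian_step(x, dt=0.01, mu_k=np.array([0.1, -0.2]), sigma2_k=0.5)
```

In the experiments the same closed-form expressions are evaluated at every time step, which is what makes this implementation cheaper and easier to train than the non-parametric one.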
§.§ NGM-78 model
In contrast to the linear nature of the last model, here we consider an (artificial) nonlinear model frequently used when testing the performance of particle filters <cit.>. To our knowledge, this model was first used by Netto, Gimeno, and Mendes in 1978 <cit.>, so here we name it the NGM-78 model for simplicity. The NGM-78 model describes the following discrete-time Markov chain in ℝ^d: X_n = a_0 X_n-1 + a_1 X_n-1/(1 + |X_n-1|^2) + f(n) + v_n, and the observations Y_n = a_2 |X_n|^2 + u_n, where v_n ∼ N(0,σ^2_v I_d) and u_n ∼ N(0, σ^2_u I_d). In our experiments, we choose a_0 = 1/2, a_1 = 25, a_2 = 1/20, f ≡ 0, σ_u^2 = 1, σ_v^2 = 0.01, and n = 0,1,…, 50. We test our TPPF algorithm on the NGM-78 model with the configuration d ∈{1,2,5,10,15,20 } and compare TPPF with its competitors. The empirical standard deviation of log Z is reported in Table <ref>. As we can see, in contrast to the linear Gaussian model, in the current nonlinear setting the linear structure of iAPF (twisting functions learned in a Gaussian function family) leads to its relatively worse behavior. On the other hand, our TPPF algorithms behave better than iAPF, partly due to the stronger expressive ability of neural networks. Moreover, in this model the term P̂[g](k,x) (recall the definition in (<ref>)) used in FA-APF is calculated via Monte Carlo approximation since there is no analytical solution for it. Consequently, we observe in Table <ref> that although the FA-APF outperforms other algorithms when d=1, it suffers from the curse of dimensionality and we cannot obtain reasonable estimates with the FA-APF in a feasible computational time.
§.§ Lorenz-96 model
The Lorenz-96 model is a nonlinear model with partial observation <cit.>. The discrete model can be viewed as the Euler-Maruyama scheme of the following interacting particle system consisting of d particles in ℝ^1: dX^i = (-X^i-1X^i-2 + X^i-1X^i+1 - X^i + α)dt + σ^2 dW^i, 1 ≤ i ≤ d, where α∈ℝ, σ^2 ∈ℝ^+, W^i (1≤ i≤ d) are independent one-dimensional Brownian motions, and the indices should be understood modulo d. The observation is through Y_t ∼𝒩(·; HX_t,Σ_OB), where H is a diagonal matrix with H_ii = 1 (1 ≤ i ≤ d-2), H_ii = 0 (d-1 ≤ i ≤ d). Note that this model does not have an analytical solution. We use the non-parametric implementation in this example. We choose α = 3.0, Σ_OB = I_d, σ = 1 and consider d ∈{3,5,10 }. We report the empirical standard deviations in Table <ref> below. To test the sensitivity of the proposed method, we fix d = 3, Σ_OB=I_d, σ=1 and run the experiment for different α. See the results in Figure <ref>. As we can see from Table <ref> and Figure <ref>, the proposed TPPF algorithm (especially the variants using loss functions L_RE and L_RECE) behaves much better than BPF and FA-APF, and slightly better than iAPF. Moreover, we observe that for the current partially observed nonlinear model, the result of iAPF is sometimes more biased (see (b) in Figure <ref>), despite its relatively low variance.
§ CONCLUSION
In this paper, we study discrete-time twisted particle filters (TPF) from a continuous-time perspective. In detail, under suitable settings, we prove the convergence from discrete-time to continuous-time models. This enables us to view existing control-based importance sampling algorithms as good guidance for constructing novel TPF algorithms in discrete-time settings. As a concrete example, we propose a novel TPF algorithm, the Twisted-Path Particle Filter (TPPF), inspired by algorithms in continuous-time settings.
In TPPF, we choose some specific KL-divergence between path measures as the loss function and learn the twisting function parameterized by a neural network. We also give some numerical examples to illustrate the capability of the proposed algorithm. Some possible future work may include: seeking other practical TPF algorithms guided by the continuous-time importance sampling algorithms, and understanding existing TPF algorithms rigorously by finding its continuous-time limit. § ACKNOWLEDGEMENTS This work is supported in part by National Science Foundation via grant DMS-2309378. This work is done during Yuliang Wang's visit to Duke University. He thanks the Mathematics Department for their hospitality. The authors would thank Pieralberto Guarniero, Adam M. Johansen and Anthony Lee for help discussions on implementation details of iAPF method. § PROOFS OF SECTION <REF> §.§ Proof of Proposition <ref> Recall the notation P̂[φ](k,x) = ∫φ(k,y') P̂(x,dy'). Then by definition, 𝔼_x[∏_k=0^nĝ_k^φ(X̂^φ_k)] = ∫ĝ_0^φ(x) ∏_k=1^n ĝ_k^φ(x_k) P̂_k^φ(x_k-1,x_k)dx_1:n =∫ĝ_0(x) (∏_k=1^n ĝ_k(x_k))(P̂[φ](1,x)/φ(n,x_n)∏_k=1^n-1P̂[φ](k+1,x_k)/φ(k,x_k)∏_k=1^n φ(k,x_k)/P̂[φ](k,x_k-1)P̂_k(x_k-1,x_k))dx_1:n =∫ĝ_0(x) ∏_k=1^n ĝ_k(x_k) P̂_k(x_k-1,x_k)dx_1:n = Z_dis(x). §.§ Proof Proposition <ref> The proof is straightforward. * We first prove the first claim. Since φ^*(n,x) = ĝ_n(x), direct calculation yields φ^*(n-1,x) = ĝ_n-1(x) ∫ĝ_n(y) P̂(x,dy) = 𝔼[ĝ_n(X̂_n)ĝ_n-1(X̂_n-1)|X̂_(n-1) = x], φ^*(n-2,x) = ĝ_n-2(x) ∫𝔼[∏_i=n-1^n ĝ_i(X̂_i)|X̂_n-1 = y] P̂(x,dy) = 𝔼[∏_i=n-2^n ĝ_i(X̂_i)|X̂_n-2 = x], ⋯ φ^*(k,x) = ĝ_k(x) ∫𝔼[∏_i=k+1^n ĝ_i(X̂_i)|X̂_k+1 = y] P̂(x,dy) = 𝔼[∏_i=k^n ĝ_i(X̂_i)|X̂_k = x], ⋯ φ^*(0,x) = ĝ_0(x) ∫𝔼[∏_i=1^n ĝ_i(X̂_i)|X̂_1 = y] P̂(x,dy) = 𝔼[∏_i=0^n ĝ_i(X̂_i)|X̂_0 = x] = Z_dis(x). * Next, we prove the unbiased and zero-variance property of Ŵ. The unbiased property is a direct result of Proposition <ref> and Remark <ref>, and is valid for any twisting function sequence φ(k,·) (1≤ k ≤ n). To prove the zero variance property, we observe that under the probability measure P^*, Ŵ = ∏_k=0^nℓ_k^*(X̂_k) ∏_k=0^n ĝ_k (X̂_k) = ĝ_n(X̂_n)φ^*(0,X̂_0)/φ^*(n,X̂_n) = φ^*_η(0,X̂_0) = Z_dis(X̂_0). §.§ Proof of Corollary <ref> By definition, Z^N.φ_dis(x) = ∏_k=0^n 1/N∑_i=1^N ĝ^φ^*_k(ζ_k^i), and ĝ^φ^*_n(x)=ĝ_n(x) ℓ_n^φ^*(x) = ĝ_n(x)/φ^*(n,x)≡ 1, ĝ^φ^*_k(x) = ĝ_k(x) ℓ_k^φ^*(x) = ĝ_k(x) P̂[φ^*](k+1,x)/φ^*(k,x)≡ 1, 1≤ k ≤ n-1, ĝ^φ^*_0(x) = ĝ_0(x) ℓ_0^φ^*(x) = ĝ_0(x) P̂[φ^*](1,x) = φ^*(0,x). Therefore, Z^N,φ^*_dis(x) =φ^*(0,x)= Z_dis(x), ∀ x ∈ℝ^d. §.§ Proof of Proposition <ref> * We first prove the Feynman-Kac representation formula. Fix 0≤ t≤ T, define the following process for s ∈ [T-t,T] Y(s) := e^∫_t^s h(τ,X_τ) dτ v^*(s,X_s). Clearly, Y(t) = v^*(t,X_t), Y(T) = e^∫_t^T h(τ ,X_τ) dτ g(X_T). By Itô's formula, dY(s) = e^∫_t^s h(τ,X_τ) dτ h(s,X_s) v^*(s,X_s) ds + e^∫_t^s h(τ,(X_τ) dτ∇_x v^*(s, X_s) ·(b(X_s) ds + √(2) dB_s ) + e^∫_t^s h(τ,X_τ dτΔ_x v^*(s,X_s)ds + e^∫_t^s h(τ,X_τ) dτ∂_t v^*(s,X_s) ds By (<ref>), we have dY(s) = √(2) e^∫_t^s h(τ,X_τ) dτ∇_x v^*(s, X_s) · dB_s. Hence, Y(s) is a martingale, and consequently, 𝔼[Y(T)| X_t = x] = 𝔼[Y(t)| X_t = x], namely, v^*(t,x) = 𝔼[ e^∫_t^T h(τ, X_τ) dτ g(X_T)| X_t = x], ∀ x ∈ℝ^d, ∀ t∈[0,T]. * The existence and expression of the Radon-Nikodym derivative L(T) is guaranteed by the classical Girsanov's theorem. We focus on the derivation of the zero-variance property (<ref>) here. For s ∈ [0,T], define the process ω_s := e^∫_0^s h(τ, X^u_τ)dτv^*(s,X^u_s)L(s). Clearly, ω_0 = v^*(0,X_0), ω_T = W. 
By Itô's formula, dω_s = h(s, X_s^u^*)e^∫_0^s h(τ, X^u_τ)dτv^*(s,X^u_s)L(s) ds + e^∫_0^s h(τ, X^u^*_τ)dτ∂_s v^*(s,X^u^*_s)L(s)ds +e^∫_0^s h(τ, X^u^*_τ)dτ∂_x v^*(s,X^u^*_s) ·(b(X_s^u^*) ds + 2 ∂_x log v^*(s,X^u^*_t) ds + √(2)dB_s )L(s) +e^∫_0^s h(τ,X^u^*_τ)dτv^*(s,X^u^*_s)L(s)(-√(2)∂_x log v^*(s,X_s^u^*) dB_s ) -e^∫_0^s h(τ, X^u_τ)dτL(s)(√(2)∂_x v^*(s,X_s^u^*) ·√(2)∂_x log v^*(s,X_s^u^*))ds. Using the time evolution (<ref>) for u^*, we have dω_s ≡ 0. Therefore v^*(0,X_0) = ω_0 = ω_T = W P^*-a.s. § PROOFS OF SECTION <REF> §.§ Proof of Proposition <ref> Consider the following SDEs with the same initial state: dX_t = b(X_t) dt + √(2)dB, X_0 = x, 0≤ t ≤ T=nη, dX̅_t = b(X̅_kη)dt + √(2)dB, X̅_0 = x, t∈[kη,(k+1)η), 0≤ k≤ n-1. Clearly, P_η(x,·) = Law(X_η) and P̂^η(x,·) = Law(X̅_η). Denote by P^x_[0,t], P̂_[0,t]^x the path measures with the same initial x associated with the time interval [0,t] for any fixed t ≤ T. Then using the data processing inequality <cit.> and Girsanov's theorem <cit.>, it holds that (Law(X̅_t) Law(X_t) ) ≤(P̂_[0,t]^x P_[0,t]^x ) ≤𝔼[∑_k=0^⌈ t/η⌉∫_kη^(k+1)η |b(X̅_t) - b(X̅_kη)|^2 dy]. By condition (a) in Assumption <ref>, b(·) is L_b-Lipschitz, and the second moment for X̅_kη has uniform bound, then for any 0≤ k ≤⌈ t/η⌉, one has 𝔼|b(X̅_t) - b(X̅_kη)|^2 ≤ L_b^2𝔼|(t-kη)b(X̅_kη) + ∫_kη^t dB_s |^2 ≤ 2L_b^2(2η^2(|b(0)|^2 + L_b^2 sup_0≤ i ≤ n𝔼|X̅_iη|^2) + η d ) ≤ Cdη, where we need η < 1 and C = C(L_b, b(0), sup_0≤ i ≤ n𝔼|X̅_iη|^2 ) is a positive constant. Combining (<ref>) and (<ref>), we know that (Law(X̅_t) Law(X_t) ) ≤ Cdtη. And by Pinsker's inequality, we have TV(Law(X̅_t) , Law(X_t) ) ≤ C√(dtη). Finally, taking t = η, we obtain the desired result. §.§ Proof of Proposition <ref> Fix T>0, η>0, x∈ℝ^d. Recall that T = nη, and Z_dis(x) := 𝔼_x[∏_k=0^n-1ĝ_k(X̂_k)ĝ_n(X̂_n)] = 𝔼_x[exp(∑_k=0^n-1∫_kη^(k+1)ηlog(ĝ_k(X̂_k))^η^-1ds)ĝ_n(X̂_n)], Z_con(x) = 𝔼_x[e^∫^T_0log g(x,X_s) ds g_T(X_T) ]. Then, for dX_t = b(X_t) dt + √(2)dB, X_0 = x, 0≤ t ≤ T=nη, dX̅_t = b(X̅_kη)dt + √(2)dB, X̅_0 = x, t∈[kη,(k+1)η), 0≤ k≤ n-1, we have Z_con(x) - Z_dis(x) = 𝔼_x[exp(∫^T_0log g(s,X_s) ds) g_T(X_T) ] - 𝔼_x[exp(∫^T_0log g(s, X̅_s) ds) g_T(X̅_T) ] +𝔼_x[exp(∫^T_0log g(s, X̂_s) ds) ( g_T(X̅_T) -ĝ_n(X̅_T) ) ] +𝔼_x[ĝ_n(X̅_T) (∏_i=0^n-1exp(∫_iη^(i+1)ηlog g(s,X̅_s)ds) - ∏_i=0^n-1exp(∫_iη^(i+1)ηη^-1logĝ_iη(X̅_iη)ds) ) ], For the first term above, by condition (d) in Assumption <ref>, exp(∫^T_kηlog g(s, X_s) ds) g_T(X_T) is a continuous, bounded functional of X_[kη,T] with X_kη = x, denote by F_k(X_[kη,T]). Then, using the KL upper bound for path measures obtained in (<ref>), we have 𝔼_x[exp(∫^T_0log g(s,X_s) ds) g_T(X_T) ] - 𝔼_x[exp(∫^T_0log g(s, X̅_s) ds) g_T(X̅_T) ] =𝔼_x[F_0(X_[0,T]) ] - 𝔼_x[F_0(X̅_[0,T]) ] =∫ F_0(y) (P_[0,T]^x(y) - P̂_[0,T]^x(y) )dy ≤ C TV(P_[0,T]^x , P̂_[0,T]^x) ≤ C (P̂_[0,T]^x P_[0,T]^x)^1/2≤ C√(Tdη)→ 0, where we have used the Pinsker's inequality in the last line above. For the second term, by conditions (b), (d) in Assumption <ref>, exp(∫^T_0log g(s, X̂_s) ds) is bounded, and ĝ_n(x) → g_T(x) uniformly in x. Therefore, as η→ 0, 𝔼_x[exp(∫^T_0log g(s, X̂_s) ds) ( g_T(X̅_T) -ĝ_n(X̅_T) ) ] → 0. For the third term, for s ∈ [iη,(i+1)η), |log g(s,X̅_s) - η^-1logĝ_iη(X̅_iη)| ≤ |log g(s,X̅_s) - log g(iη,X̅_s)| + |log g(iη, X̅_s) - η^-1logĝ_iη(X̅_s)| + |η^-1logĝ_iη(X̅_s) - η^-1logĝ_iη(X̅_iη)|. By conditions(b), (c) in Assumption <ref>, as η→ 0, |log g(s,x) - log g(iη,x)|, |log g(iη,x) - η^-1logĝ_iη(x)| tend to 0 uniformly in s, i, x. 
Moreover, by condition (c) in Assumption <ref>, |logĝ_iη(X̅_s) - logĝ_iη(X̅_iη)| ≲ |X̅_s - X̅_iη | ≲η |b(X̂_iη)| + |W_s - W_iη|. Hence, as η→ 0, 𝔼_x[ĝ_n(X̅_T) (∏_i=0^n-1exp(∫_iη^(i+1)ηlog g(s,X̅_s)ds) - ∏_i=0^n-1exp(∫_iη^(i+1)ηη^-1logĝ_iη(X̅_iη)ds) )) ] → 0. Combining all the above, we conclude that for any fixed x ∈ℝ^d, | Z_dis(x) - Z_con(x) | → 0 as η→ 0. § PROOF OF SECTION <REF> §.§ Proof of Lemma <ref> We refer to <cit.> for a similar proof. The proof is based on the Jensen's inequality. In fact, by direct calculation and convexity of -log(·), we have -log𝔼^X ∼ P[exp(-W(X) )] = -log∫ e^-W(x) P(dx) = -log∫ e^-W(x)dP/dQ(x) Q(dx) ≤∫(W(x) - logdP/dQ(x)) Q(dx) = 𝔼^X∼ Q[W(X)] + (Q P), and the equality holds if and only if dQ/dP(X) = exp(-log𝔼^X ∼ P[exp(-W(X))] -W(X)) . §.§ Proof of Proposition <ref> By definition (<ref>), we have J(φ) = 𝔼_x[-∑_k=0^nlogĝ_k(X̂_k)] + ∫log(∏_k=1^nφ(k,x_k)P̂(x_k-1,x_k)/P̂[φ](k,x_k-1) / ∏_k=1^n P̂(x_k-1,x_k)) ∏_k=1^nP̂^φ_k(x_k-1,x_k) dx_1:n = 𝔼_x[-∑_k=0^n logĝ_k(X̂_k^φ) + ∑_k=1^n logφ(k,X̂_k^φ)/P̂[φ](k,X^φ_k-1)], and (P^φP^φ^*) = ∫log(∏_k=1^nφ(k,x_k)/P̂[φ](k,x_k-1) / ∏_k=1^nφ^*(k,x_k)/P̂[φ^*](k,x_k-1))∏_k=1^nP̂_k^φ(x_k-1,x_k) dx_1:n =𝔼_x[∑_k=1^n logφ(k,x_k)/P̂[φ](k,X^φ_k-1)] - ∫log(φ^*(n,x_n)/φ^*(0,x)∏_k=0^n-1φ^*(k,x_k)/P̂[φ^*](k+1,x_k))∏_k=1^nP̂^φ_k(x_k-1,x_k) dx_1:n = 𝔼_x[-∑_k=0^n logĝ_k(X̂_k^φ) + ∑_k=1^n logφ(k,X̂_k^φ)/P̂[φ](k,X^φ_k-1)] + logφ^*_0(x), where we have used the recursive relation (<ref>) in the last equality. Moreover, using the derived expression for J(φ) and (<ref>), we know that J(φ^*) = logφ_0^*(x). Consequently, J(φ) = J(φ^*) + (P^φP^φ^*). §.§ Proof of Proposition <ref> The proof relies on the following auxiliary results: * (Lemma <ref>, generalized Jensen's inequality) Given deterministic functions ϕ: Ω→ℝ and f: ℝ→ℝ. Assume that f is convex. Let λ, λ' be two probability measures on Ω that are absolutely continuous with each other. Define the functional 𝒥(f,λ,ϕ) := 𝔼_λ f(ϕ) - f𝔼_λϕ. Then, m𝒥(f,λ, ϕ) ≤𝒥(f,λ', ϕ) ≤ M𝒥(f,λ, ϕ), where m := inf_E∈ℬ(Ω)λ'(E)/λ(E), M := inf_E∈ℬ(Ω)λ(E)/λ'(E). * (Lemma <ref>, equivalence with the 𝒳^2-divergence) r^2(ν̃) = 𝒳^2(ν^* | ν̃), where the 𝒳^2-divergence is defined by 𝒳^2(ν^* | ν̃):=𝔼_μ[|dν^*/dν̃|^2 - 1]. Now, by Jensen's inequality, we have (ν^*ν̃) = 𝔼_ν^*[logdν^*/dν̃] ≤log𝔼_ν^*[ dν^*/dν̃] Combining (<ref>) and (<ref>), we have r^2(ν̃) = 𝒳^2(ν^* | ν̃) = 𝔼_μ[|dν^*/dν̃|^2 - 1] = 𝔼_ν^*[dν^*/dν̃] - 1 ≥ e^(ν^*ν̃) - 1. For the second claim in Proposition <ref>, we choose λ' = ν^*, λ = ν̃, ϕ = dν^*/dν̃, and f = -log in (<ref>). Then, 𝒥(f,λ,ϕ) = -𝔼_ν̃logdν^*/dν̃ + log𝔼_ν̃dν^*/dν̃ = (ν̃ν^*), and 𝒥(f,λ',ϕ) = -𝔼_ν^*logdν^*/dν̃ + log𝔼_ν^*dν^*/dν̃ = -(ν^*ν̃) + log (𝒳^2(ν^* | ν̃) + 1) = -(ν^*ν̃) + log (r^2(ν̃) + 1), where we have used (<ref>) in the last equality. Combining (<ref>), (<ref>) and (<ref>), we obtain the second claim e^m(ν̃ν^*) + (ν^*ν̃)-1 ≤ r^2(ν̃) ≤ e^M(ν̃ν^*) + (ν^*ν̃)-1, where m:=inf_E ν^*(E)/ν̃(E) and M := sup_E ν^*(E)/ν̃(E). The two auxiliary lemmas used in the proof of Proposition <ref> are given below: Given deterministic functions ϕ: Ω→ℝ and f: ℝ→ℝ. Assume that f is convex. Let λ, λ' be two probability measures on Ω that are absolutely continuous with each other. Define the functional 𝒥(f,λ,ϕ) := 𝔼_λ f(ϕ) - f(𝔼_λϕ). Then, m𝒥(f,λ, ϕ) ≤𝒥(f,λ', ϕ) ≤ M𝒥(f,λ, ϕ), where m := inf_E∈ℬ(Ω)λ'(E)/λ(E), M := inf_E∈ℬ(Ω)λ(E)/λ'(E). We first show that m𝒥(f,λ, ϕ) ≤𝒥(f,λ',ϕ). Taking E = Ω, we have m := inf_E λ'(E)/λ(E)≤λ'(E)/λ(E) = 1. Without loss of generosity, assume m<1. 
(If m=1, then λ≡λ', and the argument is trivial). (<ref>) is then equivalent to m𝔼_λ f(ϕ) - mf(𝔼_λϕ) ≤𝔼_λ'f(ϕ) - f(𝔼_λ'ϕ). Since m ∈ (0,1), and m = inf_E λ'(E)/λ(E), the probability λ̅ := λ' - mλ/1-m is well-defined. Then, by Jensen's inequality, since f is convex, we have 𝔼_λ'f(ϕ) - m𝔼_λ f(ϕ) = (1-m)𝔼_λ̅ f(ϕ) ≥ (1-m)f(𝔼_λ̅ϕ) = (1-m)f(𝔼_λ'ϕ - m𝔼_λϕ/1-m). Using convexity of f again, we have (1-m)f(𝔼_λ'ϕ - m𝔼_λϕ/1-m) + mf(𝔼_λϕ) ≥ f(𝔼_λ'ϕ). Therefore, (<ref>) holds, and thus m𝒥(f,λ, ϕ) ≤𝒥(f,λ',ϕ). Assuming M>1 and using exactly the same arguments, we have M𝒥(f,λ, ϕ) ≥𝒥(f,λ', ϕ). Recall the definitions of ν̃, ν^* and the relative variance r(ν̃) in Proposition <ref>. Then r^2(ν̃) = 𝒳^2(ν^* | ν̃), where the 𝒳^2-divergence is defined by 𝒳^2(ν^* | ν̃):=𝔼_μ[|dν^*/dν̃|^2 - 1]. By definition, we have 𝒳^2(ν^* | ν̃):=𝔼_μ[|dν^*/dν̃|^2 - 1] = 𝔼_ν̃| dν^*/dν̃|^2 - |𝔼_ν̃dν^*/dν̃|^2 = _ν̃(dν^*/dν̃) = _ν̃(dν^*/dνdν/dν̃). Recall the zero-variance property of ν^*: dν^*/dν = e^-W/Z ν-a.s. . Consequently, _ν̃(dν^*/dνdν/dν̃) = _ν̃(e^-W/Zdν/dν̃) = _ν̃(e^-Wdν/dν̃)/Z^2 = r^2(ν̃). Hence, r^2(ν̃) = 𝒳^2(ν^* | ν̃). plain
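As a small numerical illustration of the identity r^2(ν̃) = 𝒳^2(ν^* | ν̃) proved above, one can check it by Monte Carlo in a one-dimensional Gaussian example where both sides are known in closed form: take ν = N(0,1) and e^{-W(x)} = e^{ax}, so that Z = e^{a^2/2} and ν^* = N(a,1); for a proposal ν̃ = N(b,1) the χ^2-divergence equals e^{(a-b)^2} - 1. The sketch below uses arbitrary placeholder values for a and b.

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, n = 1.0, 0.5, 2_000_000

def log_normal_pdf(x, mean):
    return -0.5 * (x - mean) ** 2 - 0.5 * np.log(2.0 * np.pi)

x = b + rng.standard_normal(n)            # samples from the proposal nu_tilde = N(b, 1)

# importance weights e^{-W(x)} * dnu/dnu_tilde with nu = N(0, 1) and W(x) = -a x
w = np.exp(a * x + log_normal_pdf(x, 0.0) - log_normal_pdf(x, b))

Z = np.exp(0.5 * a ** 2)                  # exact normalizing constant E_nu[e^{a X}]
r2_mc = w.var() / Z ** 2                  # Monte Carlo estimate of the relative variance r^2
chi2_exact = np.exp((a - b) ** 2) - 1.0   # chi^2(nu^* | nu_tilde) for N(a,1) versus N(b,1)

print(r2_mc, chi2_exact)                  # both should be close to 0.284 up to Monte Carlo error
```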
http://arxiv.org/abs/2409.03119v1
20240904230613
Register Aggregation for Hardware Decompilation
[ "Varun Rao", "Zachary D. Sisco" ]
cs.AR
[ "cs.AR", "cs.PL" ]
Up, Up, and Away: Winds and Dynamical Structure as a Function of Altitude in the Ultra-Hot Jupiter WASP-76b [ September 2024 =========================================================================================================== empty empty § ABSTRACT Hardware decompilation reverses logic synthesis, converting a gate-level digital electronic design, or netlist, back into hardware description language (HDL) code. Existing techniques decompile data-oriented features but do not address memory elements, which pose difficulty due to their deconstruction into data flip-flops in netlists and the cycles they form. Recovering multi-bit registers and memory blocks expands the applications of hardware decompilation, notably towards retargeting technologies (ie FPGAs to ASICs) and decompiling processor memories. We devise a method for register aggregation, to identify relationships between the flip-flops in a netlist, categorize them into registers and memory blocks, and output HDL code instantiating these memory elements. We group flip-flops by common enable pins and derive register bit-orders using functional dependencies, scaling up similarly to two-dimensional memory blocks. We evaluate our technique over a dataset of netlists, comparing the quantity and widths of the recovered registers and memory blocks with the netlist source code. The technique successfully recovers memory elements in all of the tested circuits, even aggregating beyond the source code expectation. In 10/13 circuits, all source code memory elements are accounted for, and we are able to compact up to 2048 disjoint bits into a single memory block. Keywords: hardware decompilation, hardware description languages, sequential logic, electronic design automation § INTRODUCTION Hardware description languages (HDLs) are often employed for designing large-scale digital electronics, given their large inventory of abstractions. They take advantage of logic synthesis tools to produce netlists, which are graphical circuit representations consisting of individual logical gates that prove useful during manufacturing. Since synthesis distills HDL code down into gates, the semantic meaning of the code gets lost in the synthesized netlist, making these netlists quite difficult to analyze and interpret. Synthesis is also nondeterministic, involving numerous optimizations, incompleteness, and occasional errors [<ref>–<ref>]. Therefore, hardware designers must run extensive simulations to confirm that their HDL code and the netlist synthesized from it have identical behavior [<ref>]. The problem of hardware decompilation has been proposed to reduce this simulation time and compress netlists for easier use. Hardware decompilation produces HDL code deterministically from a netlist by recognizing various abstractions and collapsing them into code. Netlists are much larger artifacts than their code counterparts [<ref>], so decompiling a netlist into HDL code allows it to be simulated much faster. Beyond improving simulation speed, hardware decompilation has a number of other applications, such as netlist compaction, transpilation between HDLs, and propagating netlist edits back up to code. Existing work in hardware decompilation deals with features like loops [<ref>] and modules [<ref>], which are more prevalent in combinational logic. However, these methods can only decompile sequential circuits in certain contexts, lacking a robust way to deal with larger memory elements. 
In this paper, we address hardware decompilation with respect to the defining feature of sequential logic: registers and memory blocks. The gates forming a netlist are bitwise operators, so any registers present in the netlist must be split into single-bit data flip-flops (DFFs) during synthesis, losing their connection to one another. We aim to perform register aggregation, to recover the multi-bit registers and memory blocks in a netlist which were originally instantiated in the equivalent HDL code. This contributes significantly to processor decompilation given that registers, and especially memory blocks, are key elements of almost all processors. Additionally, register aggregation allows for technology retargeting, such as from field-programmable gate arrays (FPGAs) to application-specific integrated circuits (ASICs), once memory blocks can be manipulated as cohesive pieces of the netlist. There is existing work in forming multi-bit registers in a netlist, but this is applied to reverse engineering instead of decompilation and focuses primarily on data flow, neglecting bit-order and not outputting HDL code [<ref>]. It also does not support memory block recovery. Other reverse engineering work does produce HDL code but does not target memory elements [<ref>-<ref>]. We propose a two-step approach to aggregate the DFFs in a netlist into registers: first dividing them into groups and then ordering each group to form a multi-bit register. We follow a similar approach in the second dimension for memory blocks. We group DFFs by enable, and derive their relative order according to functional relationships between DFFs. This is inspired by a characterization of sequential logic using linearly inductive boolean functions [<ref>]. The key contributions of this paper are: * We describe and implement a technique to group and order DFFs into registers (Section 2A). * We describe and implement a technique to scale our work up to two-dimensional memory blocks by aggregating multi-bit registers (Section 2B). * We evaluate our technique on a set of benchmark netlists with known memory elements by comparing the sizes and quantities found (Section 3). § METHODS §.§ Aggregating Registers We begin by partitioning the set of netlist DFFs into register groups, which we accomplish through grouping DFFs by their enable inputs. This is motivated by the idea that DFFs belonging to the same multi-bit register should all be activated by the same enable signal. Grouping by enable limits our algorithm to considering only DFFs with an enable pin, which is reasonable given that hardware decompilation is an inherently heuristic process. Once the partition has been made, we search for an ordering of each register group. We do this by finding the register transfer arc of each DFF in the group, which is the set of nodes in the netlist that are ancestors of that particular DFF. We find the transfer arc of a DFF by performing an upward depth-first search from it on the netlist, searching specifically for other DFFs in the register group to find functional dependencies. A dependency relationship expresses that the input into one DFF is predicated upon the output from another. Key to the depth-first search is dismantling the cycles formed by the DFFs, which we do by separating each DFF into two nodes: one for its current value and one for the next value to be stored. 
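A minimal Python sketch of the grouping and transfer-arc search described above is given below. The data structures, a gate map from each output wire to its input wires and a DFF table with d, q, and enable pins, are simplified stand-ins for the BLIF/PyRTL structures used in our implementation, so the field names are illustrative only; the cycles are cut by stopping each upward walk at the q outputs of other flip-flops.

```python
from collections import defaultdict

def group_by_enable(dffs):
    """Partition DFFs into candidate register groups keyed by their enable wire."""
    groups = defaultdict(list)
    for name, pins in dffs.items():        # pins = {"d": wire, "q": wire, "en": wire}
        if pins.get("en") is not None:     # only enabled DFFs are considered
            groups[pins["en"]].append(name)
    return groups

def register_dependencies(group, dffs, gates):
    """For each DFF in a register group, walk upward from its next-value (d) input
    and record which other group members' current values (q) it depends on."""
    q_to_dff = {dffs[n]["q"]: n for n in group}
    deps = {}
    for name in group:
        found, seen, stack = set(), set(), [dffs[name]["d"]]
        while stack:
            wire = stack.pop()
            if wire in seen:
                continue
            seen.add(wire)
            if wire in q_to_dff:           # reached a DFF output: record the dependency
                found.add(q_to_dff[wire])  # and do not expand past it (cuts the cycle)
            elif wire in gates:            # combinational gate: keep walking upward
                stack.extend(gates[wire])
        deps[name] = found - {name}        # self-dependencies carry no ordering information
    return deps
```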
We organize the dependencies into a directed graph, where the nodes are the DFFs in the register group, and each directed edge represents that one DFF is dependent upon another. Note how DFF y being dependent on DFF x (denoted x < y) is directly transferable to x coming before y in the register’s ordering. This is because if there is no dependency between two bits in a register, their pairwise bit-order does not actually affect the circuit’s functionality; a dependency between two DFFs shows that the circuit relies on their pairwise order in the register. This means that our graph of < dependencies is essentially a graph of pairwise orders. In the graph, a valid ordering consists of a sequence of the DFFs that obeys as many < dependencies as possible (ideally all). This is a classic scenario where we can employ topological sort. Since the <’s form directed edges in the graph, a topological sort would naturally build an ordering that resolves all of the < operations if such an ordering exists, or pick one which resolves as many <’s as possible. Oftentimes, there are not enough dependencies to leave the topological sort with only one valid ordering, leading it to pick a different ordering from the originally intended one. However, this is because the intended ordering was chosen in a partially arbitrary manner, so the modified portion of the computed ordering does not actually matter to the circuit’s output. Thus, we have an algorithm to partition the registers in a circuit and order each grouping, addressing our twofold goal for multi-bit registers. To illustrate our method, we provide an example visualizing the ordering technique on a 3-bit counter netlist in Figure 1, assuming the partition has already identified the 3 DFFs as a register group. The register transfer arcs for the 3-bit counter can be formalized into equations, with respect to a high enable bit. If the superscript n denotes the nth clock cycle, we have [ r_0^n = r_0^n-1,; r_1^n = r_0^n-1⊕ r_1^n-1,; r_2^n = ( r_0^n-1 r_1^n-1 ) ⊕ r_2^n-1 ] Equation <ref> can be generalized to an inductive formula [8] for all counter DFFs: [ r_i^0 = 0,; r_i^n = (r_0^n-1 r_1^n-1… r_i-1^n-1 ) ⊕ r_i^n-1 ] Extracting functional relationships between DFFs from equation <ref> yields: r^n_0(r^n-1_0), r^n_1(r^n-1_0, r^n-1_1), r^n_2(r^n-1_0, r^n-1_1, r^n-1_2) The relationships stated in <ref> are then used to form a directed graph of dependencies between the DFFs, upon which topological sort is performed, creating the final ordering, as shown in Figure 2a. The dependency graph for a counter shows each DFF being dependent on all of the DFFs coming after it in the register, but for other small building blocks we find different dependency relationships. For example, the dependencies in a shifter (Figure 2b) create relationships only between consecutive bits instead. For a circuit where the register value is updated using a bitwise operation with an input wire, each of the DFFs is completely independent, so all possible orderings are valid and will preserve the integrity of the circuit (Figure 2c). §.§ Aggregating Memory Blocks After aggregating multi-bit registers, we wish to form memory blocks by grouping suitable registers of the same width into a matrix of DFFs. Under the assumption that the memory blocks support enabled writes, we can begin with grouping by enable again, this time by searching for a wire which is a common input into the respective enables of each multi-bit register in the memory group. 
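The search for memory-group candidates can be sketched in the same simplified style. Here each register is the ordered DFF list produced by the register aggregation step, and enable_fanin is an assumed helper that returns the set of wires feeding a register's enable (found by the same upward walk as before); equal-width registers whose enable cones share a common wire become a candidate memory group.

```python
def memory_group_candidates(registers, enable_fanin):
    """Group equal-width registers whose enables share a common driving wire.

    registers: list of (name, ordered_dff_list, enable_wire) tuples
    enable_fanin: callable mapping an enable wire to the set of wires in its fan-in cone
    """
    by_width = {}
    for name, dff_list, en in registers:
        by_width.setdefault(len(dff_list), []).append((name, dff_list, en))
    candidates = []
    for width, regs in by_width.items():
        if len(regs) < 2:
            continue
        # a wire that drives every enable in the group is the shared write enable
        common = set.intersection(*[enable_fanin(en) for _, _, en in regs])
        if common:
            candidates.append({"width": width,
                               "registers": [name for name, _, _ in regs],
                               "shared_enable_wires": common})
    return candidates
```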
Unlike the register scenario, there is no need to find a vertical ordering between the registers in the memory group because they will be deleted and replaced with the memory block instantiation. However, memory blocks are more abstract than multi-bit registers, with defined ports beyond just an organized collection of DFFs. To form a complete memory block, we also need to identify and bundle the wires corresponding to each port by searching around the registers in the memory group. We consider memory blocks with the following four ports: one read port, one write port, one address, and an enable. We have already described how to find the enable. To find the address, we move upward on the netlist from the registers in search of the logic that selects which register to write to based on the bits of the address. This logic is recognizable in the netlist because it consists solely of ands and nots in terms of the single-bit wires corresponding to each address bit: these are the wires that must be bundled into the address. Since each register has a unique address, some unique subset of the address bits is negated and conjuncted with the rest of the address bits to form the address. We leverage the fact that every register in the memory group will have these ands and nots in terms of the same address bits, allowing us to find the address reliably. Since we assume that there is only one write port, all the registers will have their DFF data inputs updated by the same set of wires when the enable is high, which is the write port. Finally, for the read port, we follow a multiplexer tree stemming from the registers until it stops at a series of n wires, where n is the width of each register. The multiplexers are synthesized due to choosing which register is read from given the address, with the select pin of each multiplexer being an address bit. We infer the depth of the multiplexer tree from the address width to find the read port wires. Bundling all the ports completes memory block aggregation: we can now instantiate memory blocks using the widths of their registers and addresses, as well as their bundled input and output ports. We implement our full aggregation technique entirely in Python, inputting netlists as Berkeley Logic Interchange Format (BLIF) [<ref>] files and then parsing them to a more usable format in the PyRTL HDL [<ref>]. Our output modifies the PyRTL working block, which stores all the netlist wires and gates, for easy translation to code. § EVALUATION We test our method on a set of 13 benchmark circuits designed in Verilog (mainly from the Basejump STL [<ref>]) and PyRTL [<ref>] with known registers and memory blocks, described in Figure 3. The benchmarks consist mainly of circuit building blocks, like arithmetic operations, caches, and register buffers, but they also include two CPUs: nerv (a RISC V processor) and opdb_pico from OPDB [<ref>]. We compare the source code registers and memory blocks to the technique’s outputted predictions for each benchmark (see Figure 4 for an overview across benchmarks). §.§ Categorization of the Main Study We find that the results can be categorized into groups. For the alu, fifo (Figure 5a), bsg_fifo, and bsg_lfsr benchmarks, our technique performs perfect aggregation, or behaves exactly as expected. It recovers all the enabled registers and memory blocks originally present in the source code with identical dimensions. 
Notably, we use bsg_strobe as a negative example, where there are only DFFs without enables; the technique does not produce any false positives and outputs zero registers. For the piso, bsg_assembler, and bsg_idiv (Figure 5b) examples, the technique not only recovers all the source code registers as desired, but it additionally aggregates subsets of them into larger registers. Each such subset shares an enable and can therefore be concatenated into a larger register without affecting the circuit's behavior. We verify that the technique is indeed accurate by confirming the presence of common enable pins between multi-bit registers instantiated in the source code. This additional register aggregation occurs more dramatically in the bsg_multiply (Figure 5c) and bsg_multiply_array examples, where all the source code registers get combined into one register because they all share one common enable. The additional register aggregation performed by our technique shows that our work can improve the register groupings in a netlist beyond what is originally in the source code, suggesting organizational and compaction capabilities. These capabilities open up potential new applications for decompilation, such as optimizing register and memory block usage in existing HDL code. Additional register aggregation seems to occur with the opdb_pico example as well, but the scale is too large to identify where each source code register ends up in the register predictions. Moreover, opdb_pico also has two memory blocks, and the technique successfully recovers the 32 × 32 memory block but fails on the 4 × 32 one, outputting it as 4 32-bit registers instead. Here, the neglected memory block is still compacted into 4 registers from the 128 DFFs present after synthesis, so some organizational information is retained. For the nerv example, the technique groups all the registers correctly, additionally aggregating two 5-bit registers into a 10-bit one. However, it fails to recover the 32 × 32 memory block, instead outputting it as 32 32-bit registers. This memory block is not recovered because it has multiple read ports and addresses, breaking our memory block port assumptions and preventing the technique from finding a single address. In investigating this failure, we find that the technique does group the 32 registers by their enable before abandoning the aggregation process once a unique address cannot be found. Finally, for the bsg_cache example, additional aggregation is extended even further, with 8 memory blocks being created from source code that contains no memory block instantiations. This is of great significance because 16384 DFFs after synthesis are not only recovered into the 2048 8-bit registers of the source code, but compacted even further into eight 8 × 256 memory blocks. This compaction from 2048 source code artifacts down to 8 artifacts does not affect the circuit's behavior while improving its organization tremendously. Given limitations of our current technique, such as reliance on PyRTL and the memory block port assumptions, future directions include building a more language-agnostic technique and supporting multiple read or write ports in memory blocks. Moreover, additional aggregation, particularly its extension to forming new memory blocks as observed in bsg_cache, illuminates the possibility of utilizing hardware decompilation to build implicit memory blocks from existing HDL code.
To further affirm the real-world relevance of register aggregation, the previously stated applications such as technology retargeting could be tested in later work.
§.§ Runtime Analysis
To analyze the time complexity of our aggregation technique, we measure its runtime on the alu example, varying the dimensions of the alu memory block to change the size of the netlist. As the number of gates increases, the runtime scales at a cubic rate, as seen in Figure 5a. The cubic complexity is due to the graph traversals necessary to find dependencies for each located DFF. It turns out that the number of DFFs in a netlist is roughly linear with respect to the netlist's size. As a result, register aggregation is necessarily polynomial time, meaning our current technique is reasonably fast. We also investigate the subprocesses within the aggregation technique to see which tasks consume the most time. Given that most of the steps are linear time operations, we only measure three main graph traversal tasks. The first task is optimize, a method external to the actual aggregation that eliminates redundancies in the netlist. The other two tasks are finding register dependencies and memory addresses, as described in Sections 2A and 2B. In benchmarks with fewer netlist gates, optimize is the slowest task, but the other two tasks take up almost all of the time for larger benchmarks. This shows that optimize performs its graph traversal operations in less than cubic time relative to the netlist size, revealing the potential to speed up the other graph traversal operations in the technique to be on par with optimize. Moreover, when memory blocks are found, memory address computation quickly outweighs finding register dependencies. This is because memory blocks are two-dimensional while registers are one-dimensional, so they carry an extra factor of time complexity.
§ CONCLUSION
Our research presents and evaluates a technique to perform register aggregation for hardware decompilation, a facet of the problem that has not been targeted before. We find that our technique is largely successful at the task of register aggregation, recovering memory elements in all 13 examples and producing accurate dimensions in 10 of them. It is also relatively efficient given its predicted cubic time complexity. Our register aggregation work expands the scope of hardware decompilation, paving the way towards applications like technology retargeting and decompiling large processors.
§ ACKNOWLEDGEMENTS
We would like to thank Prof. Jonathan Balkind for guidance and discussions about the research, as well as Pranjali Jain for many tips and helpful feedback. We thank Dr. Lina Kim for instruction and making this research possible through the UCSB Research Mentorship Program.
§ REFERENCES
* Y. Herklotz and J. Wickerson, “Finding and understanding bugs in FPGA synthesis tools,” Proceedings of the 2020 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, Feb. 2020. doi:10.1145/3373087.3375310
* R. Nigam et al., “Predictable accelerator design with time-sensitive affine types,” Proceedings of the 41st ACM SIGPLAN Conference on Programming Language Design and Implementation, Jun. 2020. doi:10.1145/3385412.3385974
* G. H. Smith et al., “FPGA technology mapping using sketch-guided program synthesis,” Proceedings of the 29th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 2, Apr. 2024. doi:10.1145/3620665.3640387
* S.
Beamer, “A case for accelerating software RTL simulation,” IEEE Micro, vol. 40, no. 4, pp. 112–119, Jul. 2020. doi:10.1109/mm.2020.2997639 * M. Ganai and A. Kuehlmann, “On-the-Fly Compression of Logical Circuits,” International Workshop on Logic Synthesis, Jul. 2000. * Z. D. Sisco, J. Balkind, T. Sherwood, and B. Hardekopf, “Loop rerolling for hardware decompilation,” Proceedings of the ACM on Programming Languages, vol. 7, no. PLDI, pp. 420–442, Jun. 2023. doi:10.1145/3591237 * G. H. Smith et al., “There and Back Again: A Netlist's Tale with Much Egraphin',” arXiv:2404.00786 [cs.AR], Mar. 2024. doi:10.46586/tches.v2020.i4.309-336 * N. Albartus, M. Hoffmann, S. Temme, L. Azriel, and C. Paar, “Dana Universal Dataflow Analysis for gate-level netlist reverse engineering,” IACR Transactions on Cryptographic Hardware and Embedded Systems, pp. 309–336, Aug. 2020. doi:10.46586/tches.v2020.i4.309-336 * J. Portillo, T. Meade, J. Hacker, S. Zhang, and Y. Jin, “RERTL: Finite State Transducer Logic Recovery at Register Transfer Level,” 2019 Asian Hardware Oriented Security and Trust Symposium (AsianHOST), Dec. 2019. doi:10.1109/asianhost47458.2019.9006699 * T. Zhang, J. Wang, S. Guo, and Z. Chen, “A comprehensive FPGA reverse engineering tool-chain: From bitstream to RTL code,” IEEE Access, vol. 7, pp. 38379–38389, 2019. doi:10.1109/access.2019.2901949 * A. Gupta and A. L. Fisher, "Representation and symbolic manipulation of linearly inductive Boolean functions," Proceedings of 1993 International Conference on Computer Aided Design (ICCAD), Santa Clara, CA, USA, 1993, pp. 192-199. doi: 10.1109/ICCAD.1993.580055 * UC Berkeley. Berkeley logic interchange format (BLIF). Oct Tools Distribution, vol. 2, pp. 197–247, Jul. 1992. * J. Clow et al., “A pythonic approach for Rapid Hardware Prototyping and instrumentation,” 2017 27th International Conference on Field Programmable Logic and Applications (FPL), Sep. 2017. doi:10.23919/fpl.2017.8056860 * M. B. Taylor, “Basejump STL: Systemverilog needs a standard template library for hardware design,” Proceedings of the 55th Annual Design Automation Conference, Jun. 2018. doi:10.1145/3195970.3199848 * G. Tziantzioulis et al., “OPDB: A Scalable and Modular Design Benchmark,” IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 41, no. 6, pp. 1878–1887, Jun. 2022. doi:10.1109/tcad.2021.3096794
http://arxiv.org/abs/2409.02646v1
20240904122227
Theory on CKM and heavy quark decay
[ "Oliver Witzel" ]
hep-ph
[ "hep-ph", "hep-lat" ]
Center for Particle Physics Siegen, Theoretische Physik 1, Naturwissenschaftlich-Technische Fakultät, Universität Siegen, 57068 Siegen, Germany The combination of precise experimental measurements and theoretical predictions allows one to extract Cabibbo-Kobayashi-Maskawa (CKM) matrix elements or to constrain flavor changing processes in the standard model. Focusing on theoretical predictions, we review recent highlights from the sector of heavy charm and bottom quark decays. Special emphasis is given to nonperturbative contributions due to the strong force calculated using lattice QCD. Theory on CKM and heavy quark decay Oliver [email protected] September 4, 2024 =============================================
§ INTRODUCTION In the standard model (SM) of elementary particle physics quark masses and mixing arise from the Yukawa interactions with the Higgs condensate. The probability for the transition of a quark flavor j to a flavor i is encoded in the Cabibbo-Kobayashi-Maskawa (CKM) matrix. In the SM with three generations of quark flavors, the CKM matrix is a unitary 3× 3 matrix. Its elements are fundamental parameters of the SM which are determined by combining experimental measurements and theoretical calculations. The following values refer to the 2022 review of particle physics by the Particle Data Group (PDG) <cit.>[The 2024 review has become available at <cit.>.] [ V_ud V_us V_ub; V_cd V_cs V_cb; V_td V_ts V_tb ] = [ 0.97370(14) 0.2245(8) 0.00382(24); 0.221(4) 0.987(11) 0.0408(14); 0.0080(3) 0.0388(11) 1.013(30) ]. While the most precisely known matrix element |V_ud| has better than per mille level precision, the least precisely known matrix element |V_ub| is quoted with an uncertainty of 6.3% δ V_CKM/V_CKM = [ 0.014 0.35 6.3; 1.8 1.1 3.4; 3.8 2.8 3.0 ]%. Following the discussion in <cit.>, we can exploit the fact that the CKM matrix in the SM is unitary and parametrize it in different ways. A popular choice expresses the CKM matrix in terms of three mixing angles and a CP-violating phase V_CKM = [ c_12 c_13 s_12c_13 s_13e^-iδ; -s_12c_23 - c_12s_23s_13 e^iδ c_12c_23-s_12s_23s_13e^iδ s_23c_13; s_12s_23 - c_12c_23s_13 e^iδ -c_12s_23-s_12c_23s_13e^iδ c_23c_13 ]. If we acknowledge the experimental observation that s_13 ≪ s_23 ≪ s_12 ≪ 1, we can highlight the hierarchical nature of the CKM matrix and arrive at the Wolfenstein parametrization V_CKM = [ 1-λ^2/2 λ Aλ^3(ρ -iη); -λ 1-λ^2/2 Aλ^2; Aλ^3(1-ρ -iη) -Aλ^2 1 ] + O(λ^4), which is unitary to all orders in λ. In Eqs. (<ref>) – (<ref>) we used the following notation: s_12 = λ = |V_us|/√(|V_ud|^2+|V_us|^2), s_23 =Aλ^2 = λ| V_cb/V_us|, s_13 e^iδ = V^*_ub = Aλ^3 (ρ + iη) = Aλ^3(ρ̅+ iη̅)√(1-A^2λ^4)/√(1-λ^2)(1-A^2λ^4(ρ̅+ iη̅)), (ρ̅+iη̅) = -V_udV_ub^*/V_cdV_cb^*. The virtue of this form is to visualize the unitary CKM matrix in terms of six different unitarity triangles. Most commonly used is the one based on the relation V_udV_ub^* + V_cdV_cb^* + V_tdV_tb^* =0. Dividing all sides by V_cdV_cb^*, the vertices are exactly at (0,0), (1,0), and (ρ̅,η̅) as shown in the sketch in Fig. <ref>. The quest is now to over-constrain CKM elements in order to test and constrain the SM. Two groups, CKMfitter <cit.> and UTfit <cit.>, regularly gather experimental and theoretical updates to perform global fits of the CKM unitarity triangle. In the following sections we discuss updates on the determinations of the CKM matrix elements |V_cd|, |V_cb|, and |V_ub|, which all involve either a heavy charm or bottom quark, before summarizing in Section <ref>.
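As a small worked example, the Wolfenstein parameters λ and A follow directly from the magnitudes quoted above using the standard definitions s_12 = λ = |V_us|/√(|V_ud|^2+|V_us|^2) and s_23 = Aλ^2 = λ|V_cb/V_us|; the short script below also checks first-row unitarity. Only the PDG central values are used here, i.e. uncertainties and correlations are ignored for this illustration.

```python
import math

# central values of the CKM magnitudes quoted above (PDG 2022)
V_ud, V_us, V_ub = 0.97370, 0.2245, 0.00382
V_cb = 0.0408

# Wolfenstein parameters from s_12 = lambda and s_23 = A * lambda^2
lam = V_us / math.sqrt(V_ud**2 + V_us**2)   # approximately 0.2247
A = V_cb / (lam * V_us)                     # approximately 0.81

# first-row unitarity: |V_ud|^2 + |V_us|^2 + |V_ub|^2 should be close to 1
first_row = V_ud**2 + V_us**2 + V_ub**2     # approximately 0.998 with these central values

print(f"lambda = {lam:.4f}, A = {A:.3f}, first-row sum = {first_row:.4f}")
```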
§ DETERMINATION OF VCD First we consider the determination of V_cd for which the PDG <cit.> presently reports an uncertainty of 1.8%. The PDG averages three different determinations: * Determinations based on neutrino scattering data: |V_cd|_PDG^ν = 0.230 ± 0.011 * Leptonic D^+→{μ^+ ν_μ, τ^+ ν_τ} decays: |V_cd|_PDG^f_D = 0.2181 ± 0.0050 * Semileptonic D→πℓν decays (at q^2=0): |V_cd|_PDG^Dπ(0) = 0.233 ± 0.014 which results at the value of |V_cd|_PDG = 0.221 ± 0.004. The determinations based on leptonic (semileptonic) decays are obtained by combining experimental data and theoretical calculations of decay constants (form factors), using lattice quantum chromodynamics (LQCD). In the case of leptonic decays, experimental data from BESIII <cit.> and CLEO <cit.> are combined with LQCD calculations by Fermilab/MILC <cit.> and ETMC <cit.>. For semileptonic decays measurements by BaBar <cit.>, BESIII <cit.>, CLEO-c <cit.>, and Belle <cit.> as well as the LQCD form factors by ETMC <cit.> at q^2=0 are used. Recently the Fermilab/MILC collaboration published new results determining the form factors D→πℓν and D_s→ Kℓν over the full q^2 range <cit.>. Combining the D→πℓν form factors with the experimental data from BaBar, BESIII, CLEO-c, and Belle <cit.> leads to a new most precise determination of |V_cd|_FNAL/MILC^Dπ = 0.2238 ± 0.0029. The gain in precision arises by exploiting the full q^2 dependence in combination with state-of-the-art lattice simulations.[A possible point of concern is using f_∥ and f_⊥ in the chiral-continuum extrapolation (cf. discussion in Sec. <ref>).] In addition a first prediction of |V_cd| based on semileptonic D_s decays is presented and a value of |V_cd|_FNAL/MILC^D_s K = 0.258 ± 0.015, is obtained using experimental results by BESIII <cit.>. Due to fewer experimental results with larger uncertainty, the precision of this channel is however limited. Overall the different determinations of |V_cd| show very good agreement as can be seen in the comparison plot shown in Fig. <ref>. § DETERMINATION OF VCB Unlike for |V_cd|, we cannot determine |V_cb| from simple leptonic decays because an experimental measurement of B_c →τν_τ is currently not feasible. Determinations of |V_cb| are, therefore, based on analyzing semileptonic decays and we can consider both, inclusive and exclusive, processes. While in the case of exclusive decays the hadronic final state is explicitly specified, inclusive decays consider all semileptonic decays featuring a b→ c transition. Unfortunately, the value obtained for |V_cb| based on inclusive analyses has been showing a persistent 2-3σ tension to values corresponding to exclusive analyses. The current situations is summarized in Fig. <ref> where we show the values of inclusive determinations discussed below as well as FLAG averages <cit.> for different exclusive channels. §.§ Inclusive determination of Vcb Measurements of inclusive B → X_c ℓν_ℓ decays are typically performed at B-factories where an e^+ beam collides with an e^- beam and the collision energy is tuned to the Υ(4s) threshold. The Υ(4s) predominantly decays into B and B mesons and their semileptonic decays are then experimentally observed. For the inclusive determination of |V_cb| moments e.g. of the out-going leptons are experimentally measured. |V_cb^incl| is then extracted by fitting these lepton moments using a fit ansatz based on the systematic expansion of the total decay rate. 
This operator product expansion (OPE) is performed in terms of Λ_QCD/m_b with m_b ≫Λ_QCD and therefore named heavy quark expansion (HQE) B = |V_cb|^2 [ Γ(b→ cℓν_ℓ) + 1/m_c,b + α_s + …]. As is the case for all OPE, Eq. (<ref>) does not allow point-by-point predictions. It however converges if integrated over large phase space ∫ dΦ w^n(ν, p_ℓ, p_ν) dΓ/dΦ with ν = p_B/m_B. In Eq. (<ref>) we have introduced a weight functions w which can e.g. be defined by * 4-momentum transfer squared: w = (p_ℓ + p_ν)^2 = q^2, * Invariant mass squared: w = (m_Bν -q)^2 = M_X^2, * Lepton energy: w = (ν· p_ℓ) = E_ℓ^B. This method has been established using spectral moments (hadronic mass moments, lepton energy moments, …) dΓ = dΓ_0 + dΓ_μ_πμ^2_π/m_b^2 + dΓ_μ_Gρ^3_D/m_b^3 + dΓ_ρ_LSρ^3_LS/m_b^3 + O(1/m_b^4). In Eq. (<ref>) the dΓ have been calculated perturbatively up to O(α_s^3) <cit.> whereas μ_π^2, μ_G^2, ρ_D^3, ρ_LS^3 parameterize nonperturbative dynamics which is fitted from data. The state-of-the-art analysis including 3-loop α_s corrections for the semileptonic fit to experimentally measured spectral moments yields <cit.> |V_cb^incl| = (42.16 ± 0.51) · 10^-3, which has an uncertainty 1.2%. Due to the large number of higher order terms in the HQE expansion it is, however, not straight-forward to further improve this determination. The number of terms can be reduced by using reparametrization invariance (RPI) as proposed by Fael, Mannel, and Vos in Ref. <cit.>. Unfortunately, not all observables are RPI invariant. Out of the three weight functions named above, only the q^2 moments are RPI invariant. By now Belle <cit.> and Belle II <cit.> have performed dedicated analyses extracting the ⟨ (q^2)^n⟩ moments and thus enabled the first determination of |V_cb| using q^2 moments <cit.>. Including contributions up to 1/m_b^4 and correction up to α_s |V_cb^incl, q^2| = (41.69 ± 0.63) · 10^-3, is obtained which has a competitive uncertainty of 1.5%. Simultaneously extracting |V_cb^incl| using all moments, an even more precise value can be obtained <cit.> |V_cb^incl, all| = (41.97 ± 0.48) · 10^-3, which has an uncertainty of 1.1%. We emphasize that the new determination based on q^2 moments provides a different lever arm to constrain the fit parameters than the method based on spectral moments. §.§ Exclusive determination of Vcb Exclusive decays have been measured experimentally both, at B factories as well has at hadron colliders e.g. the LHCb experiment at the large hadron collider (LHC). Such measurements have been reported with B, B_s, or Λ_b initial states and pseudoscalar or vector hadronic final states. To extract |V_cb^excl|, these measurements need to be combined with form factors either determined using LQCD or determinations based on sum rules. In the following we restrict ourselves to exclusive B → D^*ℓν_ℓ decays where the D^* is treated as a QCD-stable particle using the narrow width approximation and form factors are obtained using LQCD. Experimentally B→ D^* ℓν is preferred and measurements have been reported by BaBar, Belle, and Belle II <cit.>. Conventionally we parametrize semileptonic B decays in terms of known kinematical terms K_D^*(q^2,m_ℓ) and form factors F(q^2) dΓ(B→ D^*ℓν)/dq^2 = K_D^*(q^2,m_ℓ) · | F(q^2)|^2 · |V_cb|^2. The form factors parametrize contributions due to the (nonperturbative) strong force and we use an OPE to identify short distance contributions. 
These short distance contribution are calculable using lattice QCD where the corresponding flavor changing currents are implemented as point-like operators. A sketch of the lattice setup for exclusive B→ D^*ℓν decays is shown on the left hand side of Fig. <ref>. At the magenta dot the flavor changing vector (V^μ) and axial (A^μ) currents are inserted to calculate hadronic matrix elements and subsequently extract the (relativistic) form factors V(q^2), A_0(q^2), A_1(q^2), and A_2(q^2): ⟨ D^*(k,ε_ν) | V^μ | B(p)⟩ = V(q^2) 2iε^μνρσε_ν^* k_ρ p_σ/M_B+M_D^*, ⟨ D^*(k,ε_ν) | A^μ| B(p)⟩= A_0(q^2)2M_D^*ε^*· q/q^2 q^μ + A_1 (q^2)(M_B + M_D^*)[ ε^*μ - ε^*· q/q^2 q^μ] -A_2(q^2)ε^*· q/M_B+M_D^*[ k^μ + p^μ - M_B^2 -M_D^*^2/q^2q^μ]. Since in a b→ c transition a heavy bottom quark decays to a heavy charm quark, frequently the four form factors are expressed using the HQE convention where the momentum transfer q^2 is replaced by w=v_D^*· v_B and the four form factors are named h_V(w), h_A_0(w), h_A_1(w), h_A_2(w). By now three lattice collaborations, Fermilab/MILC <cit.>, JLQCD <cit.>, and HPQCD <cit.> have published form factor results for B→ D^*ℓν at non-zero recoil. Fermilab/MILC and JLQCD restrict their lattice determinations to the range of high q^2 to keep cutoff effects well controlled. By first performing an extrapolation of the lattice data to physical quark masses and the continuum limit, they cover the full q^2 or w range in a second step carrying out BGL z-expansion <cit.>. HPQCD follows a different strategy simulating heavy flavor masses ranging from charm-like to bottom-like masses. In a combined analysis HPQCD extrapolates their lattice data to the continuum with physical quark masses and performs the kinematical interpolation at the same time. An advantage of this strategy is that for heavy flavor masses below the bottom quark mass a larger, if not the entire phenomenologically allowed range of q^2 can be covered. The analysis is however more involved and direct comparisons/checks may be less straight forward. In general these three form factor determinations show a reasonable level of consistency in particular for the range in q^2 directly covered by the individual lattice calculations. However, when considering form factors extrapolated over the full kinematically allowed range in q^2, tensions in the shape of the form factors show up warranting further scrutiny. Similarly when combining the form factor results with the binned experimental measurements by Belle <cit.> and Belle II <cit.> tensions in the shape are present. Efforts are on-going to better understand the origin of these tensions, see e.g. <cit.>. Furthermore, additional groups are working on LQCD determinations of B→ D^*ℓν form factors <cit.>. § DETERMINATION OF VUB |V_ub| is the least precisely known CKM matrix element. Although leptonic B→τν_τ decays have been experimentally observed <cit.>, the uncertainties are too large to impact the determination of |V_ub|. Hence semileptonic decays are preferred but similarly to |V_cb| these exhibit a long standing tension between determinations based on inclusive and exclusive decays. Here we report on recent updates concerning exclusive decays using LQCD to determine the nonperturbative input in terms of form factors. While for |V_cb| the (narrow width) vector final state D^* is the preferred channel for extracting the CKM matrix element, it is the pseudoscalar-to-pseudoscalar B→πℓν decay in the case of |V_ub|. 
Conventionally we parametrize this process placing the B meson at rest by dΓ(B→πℓν)/dq^2 = G_F^2 |V_ub|^2/24 π^3 (q^2-m_ℓ^2)^2√(E_π^2-M_π^2)/ q^4M_B^2 ×[ (1+m_ℓ^2/2q^2)M_B^2(E_π^2-M_π^2)|f_+(q^2)|^2 + 3m_ℓ^2/8q^2(M_B^2-M_π^2)^2|f_0(q^2)|^2 ] and encode the nonperturbative input in terms of the two form factors f_+ and f_0. Again an OPE has been performed to identify the short distance contributions which we obtain from the lattice calculation by extracting the hadronic matrix element ⟨π |V^μ | B⟩ = f_+(q^2) ( p^μ_B + p^μ_π - M^2_B - M^2_π/q^2q^μ) + f_0(q^2) M^2_B - M^2_π/q^2q^μ. A sketch of the lattice setup is shown on the right hand side in Fig. <ref>. Since pions are much lighter than D^* mesons, B→πℓν decays expand over a much larger kinematical range. So far all semileptonic form factor calculations for B→πℓν on the lattice have only been performed at high q^2 and a kinematical z-extrapolation is performed to cover the entire range. Semileptonic form factors have been calculated by HPQCD <cit.>, RBC/UKQCD <cit.>, Fermilab/MILC <cit.>, and JLQCD <cit.>. To combine the different lattice determinations, FLAG uses the continuum limit form factors from RBC/UKQCD, Fermilab/MILC, and JLQCD and extracts so called synthetic data points. Treating all calculations as statistically independent, a combined fit of these synthetic data points with the experimental measurements by BaBar <cit.> and Belle <cit.> using the BCL parametrization <cit.> is performed. The FLAG average value is |V_ub^excl| = 3.64(16) · 10^-3, where the error has been inflated following the PDG procedure for fits with poor p-value (large χ^2/d.o.f.). Already the lattice form factors exhibit a small tension which may be caused by how the continuum limit of the form factors is taken. This issue has been first pointed out in Ref. <cit.> for semileptonic B_s→ Kℓν decays, an alternative channel to determine the CKM matrix element |V_ub|. Form factors f_+ and f_0 describing semileptonic B_s→ Kℓν decays over the full q^2 range have been obtained by HPQCD <cit.>, RBC/UKQCD <cit.>, and Fermilab/MILC <cit.>. For several years the value at q^2=0 predicted by RBC/UKQCD and Fermilab/MILC has been in tension with the value predicted by HPQCD which is in turn consistent with analytic predictions <cit.>. The lattice calculation for pseudoscalar final states typically proceeds by determining on the lattice the form factors f_∥ and f_⊥ which are directly accessible by hadronic matrix elements. Forming a linear combination of f_∥ and f_⊥ leads to the phenomenological form factors f_+ and f_0. As pointed out by RBC/UKQCD <cit.>, it is important to perform the chiral-continuum extrapolation using the phenomenological form factors f_+ and f_0 because only for phenomenological quantities pole masses entering the extrapolation formulae have a physical meaning. In the case of form factors describing B_s→ Kℓν decays, Ref. <cit.> demonstrates that using f_+ and f_0 in the chiral-continuum extrapolation (instead of f_∥ and f_⊥) removes the tension. Furthermore, Flynn, Jüttner, and Tsang devised a new procedure based on Bayesian inference <cit.> to overcome issues related to truncating the z-expansion at too low order and find consistency with the dispersive matrix approach <cit.>. § SUMMARY The determination of |V_cd| seems to be in very good shape. 
Different determinations based on neutrino scattering, leptonic, or semileptonic decays agree, and the new Fermilab/MILC calculation using the full q^2 range in the semileptonic determination will help to reduce the uncertainty. Both inclusive and exclusive determinations of |V_cb| have progressed significantly, but the tension between the two remains. Different inclusive determinations are consistent, and the new method based on q^2 moments leads to further improvement. On the exclusive front we now have three independent determinations covering the full q^2 range. Although we observe some tension between the lattice form factors as well as with respect to the shape of the experimental data, having different data gives us a handle to further scrutinize these calculations and gain a better understanding. |V_ub| remains the CKM matrix element with the largest uncertainty. However, progress on the analysis of exclusive decay channels has been made and further work by different collaborations is ongoing. In addition, new LQCD developments target the determination of inclusive processes on the lattice, see e.g. <cit.>.
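As a purely illustrative numerical aside (not part of the original write-up), the kinematic variables that appear throughout the exclusive analyses above can be made concrete in a few lines of Python: the recoil variable w = v_B · v_D* used for B→D*ℓν is related to q^2 by w = (M_B^2 + M_D*^2 − q^2)/(2 M_B M_D*), and the BGL/BCL fits for B→πℓν work in the conformal variable z(q^2). The meson masses and the choice of t_0 below are assumptions (approximately PDG-like values inserted for illustration only).

```python
import numpy as np

# Approximate meson masses in GeV (assumed, PDG-like; for illustration only).
M_B, M_Dstar, M_pi = 5.2797, 2.0103, 0.1396

# --- recoil variable w used for B -> D* l nu ---------------------------------
def w_of_q2(q2):
    """w = v_B . v_D* as a function of the momentum transfer q^2 [GeV^2]."""
    return (M_B**2 + M_Dstar**2 - q2) / (2.0 * M_B * M_Dstar)

q2_zero_recoil = (M_B - M_Dstar) ** 2
print(f"B->D*: q^2 = {q2_zero_recoil:5.2f} GeV^2 -> w = {w_of_q2(q2_zero_recoil):.3f}")  # w = 1
print(f"B->D*: q^2 =  0.00 GeV^2 -> w = {w_of_q2(0.0):.3f}")                             # w ~ 1.50

# --- conformal variable z used in BGL/BCL fits for B -> pi l nu --------------
t_plus  = (M_B + M_pi) ** 2                              # pair-production threshold
t_minus = (M_B - M_pi) ** 2                              # maximal q^2 of the decay
t_0     = t_plus - np.sqrt(t_plus * (t_plus - t_minus))  # a common "optimal" choice

def z_of_q2(q2):
    a, b = np.sqrt(t_plus - q2), np.sqrt(t_plus - t_0)
    return (a - b) / (a + b)

for q2 in (0.0, 10.0, 20.0, t_minus):
    print(f"B->pi: q^2 = {q2:5.2f} GeV^2 -> z = {z_of_q2(q2):+.3f}")

# The whole semileptonic range is mapped onto |z| <~ 0.3, which is why truncated
# power series in z can describe the form factor shapes and extrapolate lattice
# results from high q^2 (low recoil) down to q^2 = 0.
```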
http://arxiv.org/abs/2409.03380v1
20240905093036
Distinguishability-induced many-body decoherence
[ "Christoph Dittel", "Andreas Buchleitner" ]
quant-ph
[ "quant-ph" ]
[email protected] Physikalisches Institut, Albert-Ludwigs-Universität Freiburg, Hermann-Herder-Straße 3, 79104 Freiburg, Germany EUCOR Centre for Quantum Science and Quantum Computing, Albert-Ludwigs-Universität Freiburg, Hermann-Herder-Straße 3, 79104 Freiburg, Germany Freiburg Institute for Advanced Studies, Albert-Ludwigs-Universität Freiburg, Albertstraße 19, 79104 Freiburg, Germany Department of Physics, Lund University, Box 118, 221 00 Lund, Sweden Physikalisches Institut, Albert-Ludwigs-Universität Freiburg, Hermann-Herder-Straße 3, 79104 Freiburg, Germany EUCOR Centre for Quantum Science and Quantum Computing, Albert-Ludwigs-Universität Freiburg, Hermann-Herder-Straße 3, 79104 Freiburg, Germany § ABSTRACT We show that many-body interference (MBI) phenomena are exponentially suppressed in the particle number, if only the identical quantum objects brought to interference acquire a finite level of distinguishability through statistical mixing of some internal, unobserved degrees of freedom. We discuss consequences for cold atom and photonic circuitry experiments. Distinguishability-induced many-body decoherence Andreas Buchleitner September 9, 2024 ================================================ Wave-particle duality, the modulation of the statistics of particle-like detection events by wave-like interference patterns, is the essential feature which distinguishes the quantum from the classical realm. It hinges on a sufficient degree of purity of the interfering quantum object's state <cit.>, as the precondition to witness interference in a suitably chosen measurement set-up which probes the quantum state's coherences in the associated basis. On the many-body level, coherent superpositions of many-body states give rise to multi-partite entanglement <cit.>, and to many-body interference (MBI) phenomena <cit.> when the involved elementary constituents are identical and at least partially indistinguishable. Entanglement (i.e., non-separability) between those degrees of freedom (dof) which are interrogated by the experimental measurement set-up and other, unobserved (“environmental", “bath", “internal" or “ancilla") dof – which are traced over when sampling the measurement record – reduces the purity of the quantum object's state in its observed dof, and thus its ability to exhibit interference phenomena, by reducing the strength of the associated coherences <cit.>. It is intuitively plausible that the larger the number of involved dof and of constituents, it becomes ever more difficult to warrant separability of observed and unobserved dof (by preventing the former from any type of interactions with the latter). This is the fundamental impediment to push the quantum-classical demarcation line to meso- if not macroscopic scales. Notwithstanding, stunning progress has been achieved in preparing coherent superposition states of collective degrees of freedom of many-body compounds of ever increasing size, from bucky balls <cit.> to supercurrents <cit.>, micromechanical oscillators <cit.>, and Bose Einstein Condensates <cit.>. Furthermore, beyond such experiments, which probe effective single-body coherences, experimental progress in the manipulation of controlled many-body quantum systems on the level of single constituents <cit.> now allows to assess bona fide MBI phenomena. 
While on the single-body level the quantum object’s effective size or mass defines the scale on which interference phenomena are to be observed <cit.>, it is suggestive that the relevant scale is defined by the number N of interfering constituents on the MBI level. Our present purpose is to make this quantitative. We derive a scaling law which shows that distinguishability due to a finite degree of mixedness in the quantum objects' internal, ancilla degrees of freedom – easily brought about by some residual environment coupling – induces the exponential suppression of many-body coherences and hence of MBI phenomena with increasing N. We examine and discuss the consequences of our scaling law for MBI of cold atoms or of photons in optical lattices or in photonic circuits, respectively. Consider a quantum many-body system composed of N identical bosons or fermions localized in mutually orthogonal external states (e.g., think of atoms in a Mott state <cit.>, or of photons in distinct optical modes <cit.>). To describe the particles' distribution across their individual external states, we make use of the first quantization formalism and denote the N-particle basis states by |E⃗⟩=|E_1⟩⊗⋯⊗|E_N⟩, where |E_α⟩ is the external state occupied by the αth particle, with E_αE_α'=δ_E_α,E_α' and E⃗E⃗'=∏_α=1^N δ_E_α,E'_α <cit.>. Further suppose that the particles are equipped with internal degrees of freedom (e.g., the arrival time and polarization state of photons, or the atoms' electronic energy levels), prepared in potentially mixed internal many-body states ρ, which are neither acted upon nor measured. Given N identical bosons (fermions), we must (anti-) symmetrize |E⃗⟩⟨E⃗|⊗ρ with respect to all particle permutations of the symmetric group S_N of N elements. Since MBI is to be observed by interrogation of the external degrees of freedom alone, we subsequently trace over the internal degrees of freedom. This yields the external many-body state ρ_E= ∑_π,π' ∈N [ρ_E ]_π,π'|E⃗_π⟩⟨E⃗_π'|, [ρ_E ]_π,π'=(-1)_B(F)^ππ'1/N!Π_πρΠ^†_π' , which has N!× N! matrix elements <cit.>. The many-particle basis states |E⃗_π⟩=|E_π(1)⟩⊗…⊗|E_π(N)⟩ of ρ_E result from |E⃗⟩ by permuting the particles according to π^-1∈N, much as the operator Π_π in (<ref>) performs a particle permutation π^-1 in the internal degrees of freedom. (-1)^ππ'_B=1 for bosons, with ππ' a composition of π and π', and (-1)^ππ'_F=(ππ') for fermions. Many-body coherences [ρ_E ]_π,π', π≠π' in (<ref>), thus result from the (anti-) symmetrization and are associated with different orderings of the particles in the tensor product structure. By virtue of the trace in (<ref>), these coherences are governed by the particles' mutual indistinguishability <cit.> with respect to their internal degrees of freedom. In the limiting case of pure states of separable, perfectly indistinguishable bosons (B) or fermions (F), i.e., ρ=|ϕ⟩⟨ϕ| with |ϕ⟩=|φ⟩⊗…⊗|φ⟩, the trace in (<ref>) yields unity for all π,π'∈N, such that the corresponding reduced external state is fully coherent and described by a pure state, ρ_E^B(F)=|ψ_B(F)⟩⟨ψ_B(F)|, with |ψ_B(F)⟩=∑_π∈N(-1)_B(F)^π|E⃗_π⟩/√(N!) the usual Fock state of indistinguishable particles. On the other hand, pure states of separable, fully distinguishable (D) particles feature internal states with orthogonal support, i.e., ρ=|ϕ⟩⟨ϕ| with |ϕ⟩=|φ_1⟩⊗…⊗|φ_N⟩ and φ_jφ_k=δ_j,k, and thus give rise to a fully incoherent many-body state, ρ_E^D=∑_π∈N|E⃗_π⟩⟨E⃗_π|/N!. 
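As a minimal numerical illustration of the coherences just defined (a sketch of ours, not taken from the paper): for N = 2, reading off the definition of ρ_E above, the only many-body coherence is ±(1/2) times the expectation value of the two-particle exchange operator, and for a product internal state ρ_a⊗ρ_b this reduces to the single-particle overlap. The snippet below checks the underlying operator identity Tr(Π_swap ρ_a⊗ρ_b) = Tr(ρ_a ρ_b) for randomly chosen internal states.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_density_matrix(d):
    """Random single-particle internal state of dimension d."""
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = A @ A.conj().T
    return rho / np.trace(rho)

d = 3                                    # internal single-particle dimension
swap = np.zeros((d * d, d * d))          # permutation operator Pi_swap on H_I x H_I
for i in range(d):
    for j in range(d):
        swap[i * d + j, j * d + i] = 1.0

rho_a, rho_b = random_density_matrix(d), random_density_matrix(d)

lhs = np.trace(swap @ np.kron(rho_a, rho_b))   # exchange term entering the N = 2 coherence
rhs = np.trace(rho_a @ rho_b)                  # single-particle overlap Tr(rho_a rho_b)
print(np.allclose(lhs, rhs))                   # True

# For equal internal states rho_a = rho_b = rho_1p the overlap is the purity
# Tr(rho_1p^2): it equals 1 for a pure internal state (fully coherent two-particle
# state) and 1/d for a maximally mixed one (approaching the incoherent limit).
```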
The normalized many-body coherence <cit.> of ρ_E is thus given by 𝒲_C = 1/N!-1∑_π,π' ∈N π≠π' [ρ_E ]_π,π' , 0≤𝒲_C≤ 1 , with pure states of separable, fully distinguishable (indistinguishable) particles saturating the lower (upper) bound. Since the modulus in Eq. (<ref>) erases the sign of the coherences (<ref>), the results presented hereafter apply for bosons as well as for fermions. Another source of distinguishability is due to mixedness of the particles' internal degrees of freedom, which typically arises through decoherence on the single-particle level <cit.>: Suppose that each particle is well described by the same single-particle state ρ_1p <cit.>, such that ρ=ρ_1p⊗⋯⊗ρ_1p in Eq. (<ref>). In Sec. II of [Supplemental Material] we show that in this case the normalized coherence 𝒲_C of many bosons (fermions), N!≫ 1, can be identified with the expectation value of the projector Π_S(A) onto the N-particle (anti-) symmetric subspace, 𝒲_C≈Π_S(A)ρ_E [Note that for a bosonic state ρ_E, tight lower bounds of Π_Sρ_E can be efficiently measured <cit.>.], which, as we further show, is equivalent to the support of the unsymmetrized N-particle internal state ρ on the N-particle symmetric subspace, i.e., 𝒲_C≈Π_S(A)ρ_E=Π_Sρ. Although the internal states of all particles are described by the same density operator, the particles are indistinguishable if and only if ρ_1p is pure. To make this explicit, suppose that ρ_1p has a discrete spectrum of m eigenvalues λ_j≥ 0 with corresponding eigenvectors |j⟩, such that its eigendecomposition reads ρ_1p=∑_j=1^m λ_j |j⟩⟨j|. As we show in Sec. III of <cit.>, the normalized coherence (<ref>) can then be written, for N!≫ 1, as 𝒲_C ≈∑_J_1+J_2+⋯+J_m=Nλ_1^J_1λ_2^J_2⋯λ_m^J_m, with the sum running over all non-negative integers J_1,J_2,…,J_m summing to N. Equation (<ref>) explicitly shows how the spectrum of ρ_1p controls many-body coherence, with 𝒲_C= 1 for pure ρ_1p, and 𝒲_C≈1/m^NN+m-1 m-1 , for ρ_1p maximally mixed. Since 𝒲_C≈Π_Sρ, the finite residual coherence quantified by (<ref>) is given by the relative dimension of the symmetric component S_N(ℋ_I) with respect to that of the total internal Hilbert space ℋ_I. In the limit of an infinite number m of single-particle internal states, S_N(ℋ_I) tends to zero, and so does the residual coherence in (<ref>). With 𝒲_C from Eq. (<ref>) at hand, we have a quantifier of many-body coherence, i.e., a quantifier of the very source of MBI, independently of the exact experimental protocol <cit.>. Any such experiment, however, can only unfold the complexity seeded by MBI for large system sizes. We therefore focus on the scaling behavior of 𝒲_C in the thermodynamic limit, N→∞ (at fixed particle density N/L, with L the dimension of the external single-particle Hilbert space, i.e., here, the number of external single-particle modes), and further examine the scaling behavior in the special case of faint particle distinguishability, i.e., for very weakly mixed internal states ρ_1p. Let λ_max=max_jλ_j be the maximum eigenvalue of ρ_1p, and suppose that λ_max is non-degenerate. Unless all particles are perfectly indistinguishable (i.e., λ_max=1), it then follows from Eq. (<ref>) (see Sec. IV of <cit.>) that, in the thermodynamic limit, 𝒲_C≈λ_max^N ∏_j=1 j≠max^m( 1-λ_j/λ_max)^-1 , i.e., 𝒲_C from Eq. (<ref>) vanishes exponentially in the number N of constituents [d-fold degeneracy of λ_max instead leads to 𝒲_C∝ (N+1)^d-1λ_max^N (Sec. IV of <cit.>)]. 
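The scaling stated above is easy to verify numerically. The sketch below (an illustration with an assumed spectrum, not part of the paper) evaluates the sum over internal occupations directly by enumerating multisets of eigenvalue indices, reproduces the closed-form result quoted above for a maximally mixed ρ_1p, and compares a generic spectrum against the thermodynamic-limit estimate λ_max^N ∏_{j≠max}(1−λ_j/λ_max)^{-1}.

```python
import numpy as np
from itertools import combinations_with_replacement
from math import comb, prod

def coherence(eigs, N):
    """Sum over internal occupations J_1+...+J_m = N of prod_j lambda_j^J_j
    (each multiset of N eigenvalue indices corresponds to one occupation)."""
    return sum(prod(eigs[i] for i in idx)
               for idx in combinations_with_replacement(range(len(eigs)), N))

m, N = 4, 12

# maximally mixed internal state: compare with binom(N+m-1, m-1) / m^N
uniform = np.full(m, 1.0 / m)
print(coherence(uniform, N), comb(N + m - 1, m - 1) / m**N)     # identical values

# generic (assumed) spectrum: compare with the thermodynamic-limit estimate
eigs = np.array([0.55, 0.25, 0.15, 0.05])
lam_max = eigs.max()
estimate = lam_max**N * np.prod([1.0 / (1.0 - l / lam_max)
                                 for l in eigs if l != lam_max])
print(coherence(eigs, N), estimate)   # already close at N = 12; the agreement
                                      # improves further as N grows
```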
Alternatively, we decompose the internal single-particle state into a dominating pure and a faint mixed component, ρ_1p=(1-ϵ)|ϕ⟩⟨ϕ|+ϵρ̃_1p, with ϵ≪ 1/2, and ρ̃_1p a valid density operator. Again (Sec. V in <cit.>), for N!≫ 1, 𝒲_C exhibits exponential scaling: 𝒲_C≈ (1-ϵ)^N . To asses the scaling behavior of the entire hierarchy from two- to N-body coherences, we consider the reduced k-particle state ρ_E^(k)=N-kρ_E <cit.> obtained from ρ_E by tracing out all but k particles. As compared to ρ_E, the reduced k-particle state ρ_E^(k) carries no information about collective many-body properties of subsets of more than k particles <cit.>. The normalized coherence 𝒲_C^(k) of ρ_E^(k) (see Sec. VI of <cit.>) thus quantifies k-particle coherence of order k<N. Since we consider the particles to be localized in distinct external states, with equal, independent internal states ρ_1p, it is intuitively clear that, for k!≫ 1, 𝒲_C^(k) takes a similar form as Eq. (<ref>) (see Sec. VI of <cit.>): 𝒲_C^(k)≈∑_J_1+J_2+⋯+J_m=kλ_1^J_1λ_2^J_2⋯λ_m^J_m. Consequently, Eqs. (<ref>) and (<ref>) also apply to 𝒲_C^(k), but with an exponential scaling in k instead of N, such that k(<N)-body coherences fade away slower than those of order N. Note that 𝒲_C^(k) is, by the very purpose of its construction, undefined for k=1, and that the constituent particles interference with themselves is controlled by the coherence of the reduced single-particle state, as in standard single-body interference. We now discuss the consequences of the above for specific experimental scenarios. First consider cold atoms in distinct optical lattice sites (external states) <cit.> which we model as harmonic potentials each with m=4 equidistant energy levels (internal states) with energy differences E_j+1-E_j=Δ E [see Fig <ref>(a)]. The atoms' population distribution over the oscillator levels be given by a Boltzmann distribution at equilibrium temperature T [see Fig. <ref>(a)]. Their single-particle internal states ρ_1p are then given as ρ_1p= ∑_j=1^m e^-β E_jZ(β)^-1|j⟩⟨j|, with β=1/ T the inverse temperature, the Boltzmann constant, Z(β)=∑_j=1^m e^-β E_j the partition function, and {|j⟩}_j=1^m a set of m orthonormal oscillator energy states, with associated eigenvalues λ_j=e^-β E_j Z(β)^-1. For particle numbers up to N=100, Figs. <ref>(b,d) show a monotonous decrease of 𝒲_C with increasing T, from a fully coherent [k_BT/Δ E⪅ 0.1, see Fig. <ref>(c)] to an almost incoherent (k_BT/Δ E⪆ 1) external many-body state with finite residual coherence as described by Eq. (<ref>). The complementary, asymptotically exponential decrease of 𝒲_C as a function of the particle number N, at fixed temperature, is shown in Fig. <ref>(e) – in perfect agreement with (<ref>) [dashed lines in Fig. <ref>(e)]. Close to the critical temperature, 0.1⪅ k_BT/Δ⪅ 1, at which external decoherence sets in in Fig. <ref>(d), we can apply Eq. (<ref>): With e^-βΔ E≪ 1/2, i.e., T /Δ E≪ 1/ln(2), the internal ground state (i.e., lowest energy level of the local oscillator potential) dominates, leading to (see Sec. VII of <cit.>) 𝒲_C≈ (1-e^-βΔ E)^N [dashed lines in Fig. <ref>(d)], such that T/Δ E≈-1/ln(1-𝒲_C^1/N) . If e^-βΔ E≪ 1/N, i.e., T/Δ E ≪ 1/ln(N), further approximation yields 𝒲_C≈ 1-N e^-βΔ E and T/Δ E ≈ 1/ln[N/(1- 𝒲_C)]. For our example illustrated in Fig. <ref>(a), Eq. (<ref>) is plotted in Fig. <ref>(c), for fixed coherences 𝒲_C, as a function of N. We see that T/Δ E only gradually decreases for increasing N. 
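To illustrate the temperature scaling just derived (a sketch under the same assumptions as the example above, with energies measured in units of ΔE and m = 4 equidistant levels), the snippet below evaluates 𝒲_C for Boltzmann-distributed populations and compares it with the low-temperature approximation (1−e^{−ΔE/k_BT})^N; inverting that approximation gives the temperature compatible with a target coherence for different particle numbers.

```python
import numpy as np
from itertools import combinations_with_replacement
from math import prod

def coherence(eigs, N):
    """Occupation-number sum for the normalized many-body coherence (valid for N! >> 1)."""
    return sum(prod(eigs[i] for i in idx)
               for idx in combinations_with_replacement(range(len(eigs)), N))

def boltzmann_spectrum(kT, m=4):
    """Eigenvalues of rho_1p for m equidistant oscillator levels; energies in units of Delta E."""
    w = np.exp(-np.arange(m) / kT)
    return w / w.sum()

N = 30
for kT in (0.1, 0.3, 0.5, 1.0):
    full = coherence(boltzmann_spectrum(kT), N)
    low_T = (1.0 - np.exp(-1.0 / kT)) ** N          # low-temperature approximation
    print(f"kT/dE = {kT:3.1f}:  W_C = {full:.4f}   vs  (1 - e^(-dE/kT))^N = {low_T:.4f}")

# Temperature compatible with a target coherence, from inverting the low-T formula:
W_target = 0.9
for Np in (10, 30, 100):
    print(f"N = {Np:3d}: required kT/dE ~ {-1.0 / np.log(1.0 - W_target ** (1.0 / Np)):.2f}")
```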
This explains why the onset of decoherence, manifest in the drop of 𝒲_C in Figs. <ref>(b,d), only marginally shifts towards smaller critical temperatures with increasing N. In particular, this observation suggests that MBI remains observable, with high visibilities, in experiments with large particle numbers, at temperatures T not much below Δ E. As a second example, let us assess the case of N photons propagating along distinct, possibly coupled optical modes (external degrees of freedom) <cit.>. Different injection times (internal degrees of freedom) – a typical error source in photonic experiments – render the photons mutually partially distinguishable <cit.>. As above, we describe the internal state of every photon by the same mixed single-particle internal state ρ_1p= ∫_-∞^∞dt P(t) |t⟩⟨t|, with arrival time probability distribution P(t), and |t⟩=(2 πΔ^2)^-1/4∫_-∞^∞dω e^iω t e^-(ω-Ω)^2/4Δ^2|ω⟩ a single photon's (internal) state with arrival time t, Gaussian frequency spectrum of spectral width Δ around the central frequency Ω, and ⟨ω|ω'⟩=δ(ω-ω'). Figure <ref> provides an example of normally distributed arrival times, P(t)=exp(-(t-t_0)^2/(2σ^2))/√(2πσ^2), with mean ⟨t⟩=t_0 and standard deviation σ=(⟨t^2⟩-⟨t⟩^2)^1/2, for which we calculate the behavior of 𝒲_C as a function of σΔ and N via Eqs. (<ref>) and (<ref>) in the following. The finite width of the photonic arrival time distribution, measured in terms of the temporal width of a photonic wave packet, σΔ, now reduces the normalized coherence 𝒲_C of the N-photon state, similar to the finite-temperature distribution over atomic energy bands above; this is plotted in Fig. <ref>(b). We observe a qualitatively similar transition from a fully coherent to a fully incoherent many-body state, as a function of σΔ. However, since the internal degrees of freedom considered here – the arrival times – are continuous, 𝒲_C truly vanishes in the limit of large σΔ, since m→∞ in Eq. (<ref>). While a direct application of Eq. (<ref>) requires the spectral decomposition of ρ_1p, we restrict ourselves here to a qualitative observation, with Fig. <ref>(d) indicating convergence to an exponential decay of 𝒲_C in the limit of large N. For N!≫ 1 and small distinguishabilities σΔ≪ 1/√(2), we show in Sec. VIII of <cit.> that Eq. (<ref>) can be reformulated as 𝒲_C≈ (1-σ^2Δ^2)^N (dashed lines in Fig. <ref>(b); compare to the average mutual fidelity in <cit.>). Accordingly, we have σΔ≈√(1-𝒲_C^1/N). For very faint distinguishabilities σΔ≪ 1/√(N), this simplifies to σΔ≈√((1-𝒲_C)/N). In Fig. <ref>(c) we plot Eq. (<ref>) for our example of normally distributed arrival times. In particular, for σΔ≪ 1/√(N), it confirms the power law σΔ∝ N^-1/2 for a targeted level of coherence 𝒲_C, with σΔ sharply decreasing with increasing particle number N. Thus, the photons' arrival times must be increasingly well controlled (with respect to their inverse spectral width) in order to harvest the interference of an increasing number of particles. We have thus quantified the challenge of witnessing mutual interference of a large number of identical particles, given that, the larger this number, the more difficult it is to prevent the system constituents from interacting with environmental degrees of freedom. Our present analysis relies on the “static” description of a given many-body state with finite mixedness of its individual constituents, and thereby sets limits on the acceptable level of noise if a certain level of many-body coherence is to be guaranteed.
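The last point can be made quantitative with a few lines (an illustration based on the approximate relations quoted above, not part of the paper): for a target coherence 𝒲_C, the tolerated arrival-time jitter relative to the inverse spectral width follows from σΔ ≈ √(1−𝒲_C^{1/N}), which for many photons approaches √((1−𝒲_C)/N).

```python
import numpy as np

def sigma_delta_required(W_target, N):
    """Temporal jitter (in units of the inverse spectral width) compatible with a
    target many-body coherence, using W_C ~ (1 - sigma^2 Delta^2)^N."""
    return np.sqrt(1.0 - W_target ** (1.0 / N))

W_target = 0.9
for N in (2, 10, 50, 200):
    exact = sigma_delta_required(W_target, N)
    faint = np.sqrt((1.0 - W_target) / N)        # small-distinguishability limit
    print(f"N = {N:3d}:  sigma*Delta <= {exact:.3f}   (~ sqrt((1-W_C)/N) = {faint:.3f})")

# The tolerated jitter shrinks roughly as N**(-1/2): quadrupling the photon number
# roughly halves the admissible arrival-time spread relative to the wave-packet duration.
```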
What is not resolved in this analysis is the time dependence of the decoherence process, and a dynamical description of many-body decoherence appears an attractive topic for future theoretical research – much as the development of distillation <cit.> or error correction protocols to stabilize many-body coherences in actual experiments. We thank Jonathan Brugger, Eric Brunner, Christian Haen, and Philipp Preiss for fruitful discussions. We are also thankful to Gabriel Dufour for fruitful discussions and comments at an early stage of the manuscript. C.D. acknowledges the Georg H. Endress Foundation for support and the Freiburg Institute for Advanced Studies for a FRIAS Junior Fellowship. 63 fxundefined [1] ifx#1 fnum [1] #1firstoftwo secondoftwo fx [1] #1firstoftwo secondoftwo noop [0]secondoftwo ref[1]@startlink#1@href href[1]#1@endlink anitize@url [0]` 12`$12`&12`#12`1̂2`_12`%12 startlink[1] endlink[0] rl [1]href #1 @bib@innerbibempty [Ghirardi et al.(1986)Ghirardi, Rimini, and Weber]Ghirardi-UD-1986 author author G. C. Ghirardi, author A. Rimini, and author T. Weber, title title Unified dynamics for microscopic and macroscopic systems, https://doi.org/10.1103/PhysRevD.34.470 journal journal Physical Review D volume 34, pages 470 (year 1986)NoStop [Brune et al.(1992)Brune, Haroche, Raimond, Davidovich, and Zagury]Brune-MP-1992 author author M. Brune, author S. Haroche, author J. M. Raimond, author L. Davidovich, and author N. Zagury, title title Manipulation of photons in a cavity by dispersive atom-field coupling: Quantum-nondemolition measurements and generation of “Schrödinger cat” states, https://doi.org/10.1103/PhysRevA.45.5193 journal journal Physical Review A volume 45, pages 5193 (year 1992)NoStop [Brune et al.(1996)Brune, Hagley, Dreyer, Maître, Maali, Wunderlich, Raimond, and Haroche]Brune-OP-1996 author author M. Brune, author E. Hagley, author J. Dreyer, author X. Maître, author A. Maali, author C. Wunderlich, author J. M. Raimond, and author S. Haroche, title title Observing the progressive decoherence of the “meter” in a quantum measurement, https://doi.org/10.1103/PhysRevLett.77.4887 journal journal Physical Review Letters volume 77, pages 4887 (year 1996)NoStop [Nakamura et al.(1999)Nakamura, Pashkin, and Tsai]Nakamura-CC-1999 author author Y. Nakamura, author Y. A. Pashkin, and author J. S. Tsai, title title Coherent control of macroscopic quantum states in a single-Cooper-pair box, https://doi.org/10.1038/19718 journal journal Nature volume 398, pages 786 (year 1999)NoStop [Friedman et al.(2000)Friedman, Patel, Chen, Tolpygo, and Lukens]Friedman-QS-2000 author author J. R. Friedman, author V. Patel, author W. Chen, author S. K. Tolpygo, and author J. E. Lukens, title title Quantum superposition of distinct macroscopic states, https://doi.org/10.1038/35017505 journal journal Nature volume 406, pages 43 (year 2000)NoStop [Zurek(2003)]Zurek-DE-2003 author author W. H. Zurek, title title Decoherence, einselection, and the quantum origins of the classical, https://doi.org/10.1103/RevModPhys.75.715 journal journal Reviews of Modern Physics volume 75, pages 715 (year 2003)NoStop [Hackermüller et al.(2004)Hackermüller, Hornberger, Brezger, Zeilinger, and Arndt]Hackermueller-DM-2004 author author L. Hackermüller, author K. Hornberger, author B. Brezger, author A. Zeilinger, and author M. 
Arndt, title title Decoherence of matter waves by thermal emission of radiation, https://doi.org/10.1038/nature02276 journal journal Nature volume 427, pages 711 (year 2004)NoStop [Schlosshauer(2005)]Schlosshauer-DM-2005 author author M. Schlosshauer, title title Decoherence, the measurement problem, and interpretations of quantum mechanics, https://doi.org/10.1103/RevModPhys.76.1267 journal journal Reviews of Modern Physics volume 76, pages 1267 (year 2005)NoStop [Hornberger et al.(2012)Hornberger, Gerlich, Haslinger, Nimmrichter, and Arndt]Hornberger-QI-2012 author author K. Hornberger, author S. Gerlich, author P. Haslinger, author S. Nimmrichter, and author M. Arndt, title title Colloquium: Quantum interference of clusters and molecules, https://doi.org/10.1103/RevModPhys.84.157 journal journal Reviews of Modern Physics volume 84, pages 157 (year 2012)NoStop [Nimmrichter and Hornberger(2013)]Nimmrichter-MM-2013 author author S. Nimmrichter and author K. Hornberger, title title Macroscopicity of mechanical quantum superposition states, https://doi.org/10.1103/PhysRevLett.110.160403 journal journal Physical Review Letters volume 110, pages 160403 (year 2013)NoStop [Arndt and Hornberger(2014)]Arndt-TL-2014 author author M. Arndt and author K. Hornberger, title title Testing the limits of quantum mechanical superpositions, https://doi.org/10.1038/nphys2863 journal journal Nature Physics volume 10, pages 271 (year 2014)NoStop [Schlosshauer(2019)]Schlosshauer-QD-2019 author author M. Schlosshauer, title title Quantum decoherence, https://doi.org/https://doi.org/10.1016/j.physrep.2019.10.001 journal journal Physics Reports volume 831, pages 1 (year 2019)NoStop [Delić et al.(2020)Delić, Reisenbauer, Dare, Grass, Vuletić, Kiesel, and Aspelmeyer]Delic-CL-2020 author author U. Delić, author M. Reisenbauer, author K. Dare, author D. Grass, author V. Vuletić, author N. Kiesel, and author M. Aspelmeyer, title title Cooling of a levitated nanoparticle to the motional quantum ground state, https://doi.org/10.1126/science.aba3993 journal journal Science volume 367, pages 892 (year 2020)NoStop [Dittel et al.(2021)Dittel, Dufour, Weihs, and Buchleitner]Dittel-WP-2021 author author C. Dittel, author G. Dufour, author G. Weihs, and author A. Buchleitner, title title Wave-particle duality of many-body quantum states, https://doi.org/10.1103/PhysRevX.11.031041 journal journal Physical Review X volume 11, pages 031041 (year 2021)NoStop [Mintert et al.(2005)Mintert, Carvalho, Kuś, and Buchleitner]Mintert-MD-2005 author author F. Mintert, author A. R. Carvalho, author M. Kuś, and author A. Buchleitner, title title Measures and dynamics of entangled states, https://doi.org/https://doi.org/10.1016/j.physrep.2005.04.006 journal journal Physics Reports volume 415, pages 207 (year 2005)NoStop [Tichy et al.(2013)Tichy, de Melo, Kuś, Mintert, and Buchleitner]Tichy-EI-2013 author author M. Tichy, author F. de Melo, author M. Kuś, author F. Mintert, and author A. Buchleitner, title title Entanglement of identical particles and the detection process, https://doi.org/https://doi.org/10.1002/prop.201200079 journal journal Fortschritte der Physik volume 61, pages 225 (year 2013)NoStop [Ketterer et al.(2019)Ketterer, Wyderka, and Gühne]Ketterer-CM-2019 author author A. Ketterer, author N. Wyderka, and author O. 
Gühne, title title Characterizing multipartite entanglement with moments of random correlations, https://doi.org/10.1103/PhysRevLett.122.120505 journal journal Physical Review Letters volume 122, pages 120505 (year 2019)NoStop [Benatti et al.(2020)Benatti, Floreanini, Franchini, and Marzolino]Benatti-EI-2020 author author F. Benatti, author R. Floreanini, author F. Franchini, and author U. Marzolino, title title Entanglement in indistinguishable particle systems, https://doi.org/https://doi.org/10.1016/j.physrep.2020.07.003 journal journal Physics Reports volume 878, pages 1 (year 2020)NoStop [Hong et al.(1987)Hong, Ou, and Mandel]Hong-MS-1987 author author C. K. Hong, author Z. Y. Ou, and author L. Mandel, title title Measurement of subpicosecond time intervals between two photons by interference, https://doi.org/10.1103/PhysRevLett.59.2044 journal journal Physical Review Letters volume 59, pages 2044 (year 1987)NoStop [Lim and Beige(2005)]Lim-ME-2005 author author Y. L. Lim and author A. Beige, title title Multiphoton entanglement through a bell-multiport beam splitter, https://doi.org/10.1103/PhysRevA.71.062311 journal journal Physical Review A volume 71, pages 062311 (year 2005)NoStop [Tichy et al.(2010)Tichy, Tiersch, de Melo, Mintert, and Buchleitner]Tichy-ZT-2010 author author M. C. Tichy, author M. Tiersch, author F. de Melo, author F. Mintert, and author A. Buchleitner, title title Zero-transmission law for multiport beam splitters, https://doi.org/10.1103/PhysRevLett.104.220405 journal journal Physical Review Letters volume 104, pages 220405 (year 2010)NoStop [Tichy et al.(2012)Tichy, Tiersch, Mintert, and Buchleitner]Tichy-MP-2012 author author M. C. Tichy, author M. Tiersch, author F. Mintert, and author A. Buchleitner, title title Many-particle interference beyond many-boson and many-fermion statistics, https://doi.org/10.1088/1367-2630/14/9/093015 journal journal New Journal of Physics volume 14, pages 093015 (year 2012)NoStop [Aaronson and Arkhipov(2013)]Aaronson-CC-2013 author author S. Aaronson and author A. Arkhipov, title title The computational complexity of linear optics, https://doi.org/10.4086/toc.2013.v009a004 journal journal Theory of Computing volume 9, pages 143 (year 2013)NoStop [Spring et al.(2013)Spring, Metcalf, Humphreys, Kolthammer, Jin, Barbieri, Datta, Thomas-Peter, Langford, Kundys, Gates, Smith, Smith, and Walmsley]Spring-BS-2013 author author J. B. Spring, author B. J. Metcalf, author P. C. Humphreys, author W. S. Kolthammer, author X.-M. Jin, author M. Barbieri, author A. Datta, author N. Thomas-Peter, author N. K. Langford, author D. Kundys, author J. C. Gates, author B. J. Smith, author P. G. R. Smith, and author I. A. Walmsley, title title Boson sampling on a photonic chip, https://doi.org/10.1126/science.1231692 journal journal Science volume 339, pages 798 (year 2013)NoStop [Crespi et al.(2013)Crespi, Osellame, Ramponi, Brod, Galvão, Spagnolo, Vitelli, Maiorino, Mataloni, and Sciarrino]Crespi-IM-2013 author author A. Crespi, author R. Osellame, author R. Ramponi, author D. J. Brod, author E. F. Galvão, author N. Spagnolo, author C. Vitelli, author E. Maiorino, author P. Mataloni, and author F. Sciarrino, title title Integrated multimode interferometers with arbitrary designs for photonic boson sampling, https://doi.org/10.1038/nphoton.2013.112 journal journal Nature Photonics volume 7, pages 545 (year 2013)NoStop [Tillmann et al.(2013)Tillmann, Dakić, Heilmann, Nolte, Szameit, and Walther]Tillmann-EB-2013 author author M. Tillmann, author B. 
Dakić, author R. Heilmann, author S. Nolte, author A. Szameit, and author P. Walther, title title Experimental boson sampling, https://doi.org/10.1038/nphoton.2013.102 journal journal Nature Photonics volume 7, pages 540 (year 2013)NoStop [Shchesnovich(2015)]Shchesnovich-PI-2015 author author V. S. Shchesnovich, title title Partial indistinguishability theory for multiphoton experiments in multiport devices, https://doi.org/10.1103/PhysRevA.91.013844 journal journal Physical Review A volume 91, pages 013844 (year 2015)NoStop [Menssen et al.(2017)Menssen, Jones, Metcalf, Tichy, Barz, Kolthammer, and Walmsley]Menssen-DM-2017 author author A. J. Menssen, author A. E. Jones, author B. J. Metcalf, author M. C. Tichy, author S. Barz, author W. S. Kolthammer, and author I. A. Walmsley, title title Distinguishability and many-particle interference, https://doi.org/10.1103/PhysRevLett.118.153603 journal journal Physical Review Letters volume 118, pages 153603 (year 2017)NoStop [Flamini et al.(2018)Flamini, Spagnolo, and Sciarrino]Flamini-PQ-2019 author author F. Flamini, author N. Spagnolo, and author F. Sciarrino, title title Photonic quantum information processing: a review, https://doi.org/10.1088/1361-6633/aad5b2 journal journal Reports on Progress in Physics volume 82, pages 016001 (year 2018)NoStop [Tichy(2011)]Tichy-PhDThesis-2011 author author M. C. Tichy, title Entanglement and interference of identical particles, https://freidok.uni-freiburg.de/data/8233 Ph.D. thesis, school University of Freiburg, urn:nbn:de:bsz:25-opus-82337 (year 2011)NoStop [Walschaers(2016)]Walschaers-PhDThesis-2016 author author M. Walschaers, title Efficient quantum transport, https://doi.org/10.6094/UNIFR/11065 Ph.D. thesis, school University of Freiburg, urn:nbn:de:bsz:25-freidok-110653 (year 2016)NoStop [Brünner(2018)]Bruenner-PhDThesis-2018 author author T. Brünner, title Signatures of partial distinguishability in the dynamics of interacting bosons, https://doi.org/10.6094/UNIFR/16683 Ph.D. thesis, school University of Freiburg, urn:nbn:de:bsz:25-freidok-166833 (year 2018)NoStop [Dittel et al.(2018)Dittel, Dufour, Walschaers, Weihs, Buchleitner, and Keil]Dittel-TD-2018 author author C. Dittel, author G. Dufour, author M. Walschaers, author G. Weihs, author A. Buchleitner, and author R. Keil, title title Totally destructive many-particle interference, https://doi.org/10.1103/PhysRevLett.120.240404 journal journal Physical Review Letters volume 120, pages 240404 (year 2018)NoStop [Dittel(2019)]Dittel-AI-2019 author author C. Dittel, title About the interference of many particles, https://resolver.obvsg.at/urn:nbn:at:at-ubi:1-47210 Ph.D. thesis, school University of Innsbruck, urn:nbn:at:at-ubi:1-47210 (year 2019)NoStop [Njoya Mforifoum(2022)]Njoya-PhdThesis2022 author author M. K. Njoya Mforifoum, title Many-body quantum interference of composite particles on a 1D lattice, https://doi.org/10.6094/UNIFR/229297 Ph.D. thesis, school University of Freiburg, urn:nbn:de:bsz:25-freidok-2292974 (year 2022)NoStop [Brunner(2023)]Brunner-PhDThesis-2023 author author E. Brunner, title Interference & interactions: robust signatures of coherence & correlations in many-body quantum systems, https://doi.org/10.6094/UNIFR/238244 Ph.D. thesis, school University of Freiburg, urn:nbn:de:bsz:25-freidok-2382447 (year 2023)NoStop [Seron(2023)]Seron-PhdThesis-2023 author author B. Seron, title Distinguishability in Quantum Multiphoton Interference: From Bunching Phenomena to the Validation of Boson Sampling, @noop Ph.D. 
thesis, school University of Brussels (year 2023)NoStop [Englbrecht(2023)]Englbrecht-PhDThesis-2023 author author M. M. Englbrecht, title Entanglement and correlations in multipartite systems, @noop Ph.D. thesis, school University of Innsbruck (year 2023)NoStop [Mayer et al.(2011)Mayer, Tichy, Mintert, Konrad, and Buchleitner]Mayer-CS-2011 author author K. Mayer, author M. C. Tichy, author F. Mintert, author T. Konrad, and author A. Buchleitner, title title Counting statistics of many-particle quantum walks, https://doi.org/10.1103/PhysRevA.83.062307 journal journal Physical Review A volume 83, pages 062307 (year 2011)NoStop [Walschaers et al.(2016)Walschaers, Kuipers, and Buchleitner]Walschaers-FM-2016 author author M. Walschaers, author J. Kuipers, and author A. Buchleitner, title title From many-particle interference to correlation spectroscopy, https://doi.org/10.1103/PhysRevA.94.020104 journal journal Physical Review A volume 94, pages 020104 (year 2016)NoStop [Brunner et al.(2023)Brunner, Pausch, Carnio, Dufour, Rodríguez, and Buchleitner]Brunner-MB-2023 author author E. Brunner, author L. Pausch, author E. G. Carnio, author G. Dufour, author A. Rodríguez, and author A. Buchleitner, title title Many-body interference at the onset of chaos, https://doi.org/10.1103/PhysRevLett.130.080401 journal journal Physical Review Letters volume 130, pages 080401 (year 2023)NoStop [Arndt et al.(1999)Arndt, Nairz, Vos-Andreae, Keller, van der Zouw, and Zeilinger]Arndt-WP-1999 author author M. Arndt, author O. Nairz, author J. Vos-Andreae, author C. Keller, author G. van der Zouw, and author A. Zeilinger, title title Wave–particle duality of C60 molecules, https://doi.org/10.1038/44348 journal journal Nature volume 401, pages 680 (year 1999)NoStop [Teufel et al.(2011)Teufel, Donner, Li, Harlow, Allman, Cicak, Sirois, Whittaker, Lehnert, and Simmonds]Teufel-SC-2011 author author J. D. Teufel, author T. Donner, author D. Li, author J. W. Harlow, author M. S. Allman, author K. Cicak, author A. J. Sirois, author J. D. Whittaker, author K. W. Lehnert, and author R. W. Simmonds, title title Sideband cooling of micromechanical motion to the quantum ground state, https://doi.org/10.1038/nature10261 journal journal Nature volume 475, pages 359 (year 2011)NoStop [Chan et al.(2011)Chan, Alegre, Safavi-Naeini, Hill, Krause, Gröblacher, Aspelmeyer, and Painter]Chan-LC-2011 author author J. Chan, author T. P. M. Alegre, author A. H. Safavi-Naeini, author J. T. Hill, author A. Krause, author S. Gröblacher, author M. Aspelmeyer, and author O. Painter, title title Laser cooling of a nanomechanical oscillator into its quantum ground state, https://doi.org/10.1038/nature10461 journal journal Nature volume 478, pages 89 (year 2011)NoStop [Andrews et al.(1997)Andrews, Townsend, Miesner, Durfee, Kurn, and Ketterle]Andrews-OI-1997 author author M. R. Andrews, author C. G. Townsend, author H.-J. Miesner, author D. S. Durfee, author D. M. Kurn, and author W. Ketterle, title title Observation of interference between two Bose condensates, https://doi.org/10.1126/science.275.5300.637 journal journal Science volume 275, pages 637 (year 1997)NoStop [Wallis and Steck(1998)]Wallis-IT-1998 author author H. Wallis and author H. Steck, title title Inseparable time evolution of anisotropic Bose-Einstein condensates, https://doi.org/10.1209/epl/i1998-00177-6 journal journal Europhysics Letters volume 41, pages 477 (year 1998)NoStop [Bakr et al.(2009)Bakr, Gillen, Peng, Fölling, and Greiner]Bakr-QG-2009 author author W. S. 
Bakr, author J. I. Gillen, author A. Peng, author S. Fölling, and author M. Greiner, title title A quantum gas microscope for detecting single atoms in a Hubbard-regime optical lattice, https://doi.org/10.1038/nature08482 journal journal Nature volume 462, pages 74 (year 2009)NoStop [Sherson et al.(2010)Sherson, Weitenberg, Endres, Cheneau, Bloch, and Kuhr]Sherson-SA-2010 author author J. F. Sherson, author C. Weitenberg, author M. Endres, author M. Cheneau, author I. Bloch, and author S. Kuhr, title title Single-atom-resolved fluorescence imaging of an atomic mott insulator, https://doi.org/10.1038/nature09378 journal journal Nature volume 467, pages 68 (year 2010)NoStop [Bayha et al.(2020)Bayha, Holten, Klemt, Subramanian, Bjerlin, Reimann, Bruun, Preiss, and Jochim]Bayha-OE-2020 author author L. Bayha, author M. Holten, author R. Klemt, author K. Subramanian, author J. Bjerlin, author S. M. Reimann, author G. M. Bruun, author P. M. Preiss, and author S. Jochim, title title Observing the emergence of a quantum phase transition shell by shell, https://doi.org/10.1038/s41586-020-2936-y journal journal Nature volume 587, pages 583 (year 2020)NoStop [Meinert et al.(2014)Meinert, Mark, Kirilov, Lauber, Weinmann, Gröbner, Daley, and Nägerl]Meinert-OM-2014 author author F. Meinert, author M. J. Mark, author E. Kirilov, author K. Lauber, author P. Weinmann, author M. Gröbner, author A. J. Daley, and author H.-C. Nägerl, title title Observation of many-body dynamics in long-range tunneling after a quantum quench, https://doi.org/10.1126/science.1248402 journal journal Science volume 344, pages 1259 (year 2014)NoStop [Preiss et al.(2015)Preiss, Ma, Tai, Lukin, Rispoli, Zupancic, Lahini, Islam, and Greiner]Preiss-SC-2015 author author P. M. Preiss, author R. Ma, author M. E. Tai, author A. Lukin, author M. Rispoli, author P. Zupancic, author Y. Lahini, author R. Islam, and author M. Greiner, title title Strongly correlated quantum walks in optical lattices, https://doi.org/10.1126/science.1260364 journal journal Science volume 347, pages 1229 (year 2015)NoStop [Roos et al.(2017)Roos, Alberti, Meschede, Hauke, and Häffner]Roos-RQ-2017 author author C. F. Roos, author A. Alberti, author D. Meschede, author P. Hauke, and author H. Häffner, title title Revealing quantum statistics with a pair of distant atoms, https://doi.org/10.1103/PhysRevLett.119.160401 journal journal Physical Review Letters volume 119, pages 160401 (year 2017)NoStop [Münzberg et al.(2021a)Münzberg, Dittel, Lebugle, Buchleitner, Szameit, Weihs, and Keil]Muenzberg-SA-2021 author author J. Münzberg, author C. Dittel, author M. Lebugle, author A. Buchleitner, author A. Szameit, author G. Weihs, and author R. Keil, title title Symmetry allows for distinguishability in totally destructive many-particle interference, https://doi.org/10.1103/PRXQuantum.2.020326 journal journal PRX Quantum volume 2, pages 020326 (year 2021a)NoStop [Minke et al.(2021)Minke, Buchleitner, and Dittel]Minke-CF-2021 author author A. M. Minke, author A. Buchleitner, and author C. Dittel, title title Characterizing four-body indistinguishability via symmetries, https://doi.org/10.1088/1367-2630/ac0fb1 journal journal New Journal of Physics volume 23, pages 073028 (year 2021)NoStop [Shchesnovich(2014)]SC-Shchesnovich-2014 author author V. S. 
Shchesnovich, title title Sufficient condition for the mode mismatch of single photons for scalability of the boson-sampling computer, https://doi.org/10.1103/PhysRevA.89.022333 journal journal Physical Review A volume 89, pages 022333 (year 2014)NoStop [Marshall(2022)]Marshall-DI-2022 author author J. Marshall, title title Distillation of indistinguishable photons, https://doi.org/10.1103/PhysRevLett.129.213601 journal journal Physical Review Letters volume 129, pages 213601 (year 2022)NoStop [Note1()]Note1 note Supplemental MaterialNoStop [Note2()]Note2 note Note that for a bosonic state ρ _E, tight lower bounds of Tr (Π _Sρ _E ) can be efficiently measured <cit.>.Stop [Note3()]Note3 note d-fold degeneracy of λ _max instead leads to 𝒲_C∝ (N+1)^d-1λ _max^N (Sec. IV of <cit.>)NoStop [Brunner et al.(2022)Brunner, Buchleitner, and Dufour]Brunner-MC-2022 author author E. Brunner, author A. Buchleitner, and author G. Dufour, title title Many-body coherence and entanglement probed by randomized correlation measurements, https://doi.org/10.1103/PhysRevResearch.4.043101 journal journal Physical Review Research volume 4, pages 043101 (year 2022)NoStop [Ra et al.(2013)Ra, Tichy, Lim, Kwon, Mintert, Buchleitner, and Kim]Ra-NQ-2013 author author Y.-S. Ra, author M. C. Tichy, author H.-T. Lim, author O. Kwon, author F. Mintert, author A. Buchleitner, and author Y.-H. Kim, title title Nonmonotonic quantum-to-classical transition in multiparticle interference, https://doi.org/10.1073/pnas.1206910110 journal journal Proceedings of the National Academy of Sciences volume 110, pages 1227 (year 2013)NoStop [Münzberg et al.(2021b)Münzberg, Dittel, Lebugle, Buchleitner, Szameit, Weihs, and Keil]Muenzberg-WP-2019 author author J. Münzberg, author C. Dittel, author M. Lebugle, author A. Buchleitner, author A. Szameit, author G. Weihs, and author R. Keil, title title Symmetry allows for distinguishability in totally destructive many-particle interference, https://doi.org/10.1103/PRXQuantum.2.020326 journal journal PRX Quantum volume 2, pages 020326 (year 2021b)NoStop [Englbrecht et al.(2024)Englbrecht, Kraft, Dittel, Buchleitner, Giedke, and Kraus]Englbrecht-II-2024 author author M. Englbrecht, author T. Kraft, author C. Dittel, author A. Buchleitner, author G. Giedke, and author B. Kraus, title title Indistinguishability of identical bosons from a quantum information theory perspective, https://doi.org/10.1103/PhysRevLett.132.050201 journal journal Physical Review Letters volume 132, pages 050201 (year 2024)NoStop Supplemental Material: Distinguishability-induced many-body decoherence Andreas Buchleitner September 9, 2024 ======================================================================= § SIMPLIFIED EXPRESSIONS OF 𝒲_C In the following we provide a simplified expression for 𝒲_C from Eq. (<ref>) of the main text, assuming many-body internal states of the form ρ=ρ_1p⊗⋯⊗ρ_1p. To this end, we plug the matrix elements from Eq. (<ref>) of the main text into the definition of the many-body coherence from Eq. (<ref>) and use the cyclic property of the trace, 𝒲_C =1/N!(N!-1)∑_π,π'∈N π≠π'Π_πρΠ^†_π' =1/N!-1( 1/N!∑_π,π'∈NΠ^†_π'Π_πρ -1). Since N forms a group, the summation can be reduced to 𝒲_C =1/N!-1( ∑_π∈NΠ_πρ -1). As detailed in the main text, we now assume that all particles are in the same internal single-particle state ρ_1p such that ρ=ρ_1p⊗⋯⊗ρ_1p. Following our considerations from the main text, we assume that ρ_1p has a discrete spectrum with m eigenvalues λ_j ≥ 0 and corresponding eigenvectors |j⟩. 
Hence, its eigendecomposition reads ρ_1p=∑_j=1^m λ_j |j⟩⟨j| and the many-body internal state ρ becomes ρ=∑_ℐ⃗∈{1,…,m}^Nλ_ℐ⃗ |ℐ⃗⟩⟨ℐ⃗|, where ℐ⃗=(ℐ_1,…,ℐ_N) is the internal assignment list with N elements ℐ_α∈{1,…,m}, |ℐ⃗⟩=|ℐ_1⟩⊗⋯⊗|ℐ_N⟩, and λ_ℐ⃗=∏_α=1^N λ_ℐ_α. Using this, the traces in Eq. (<ref>) become Π_πρ =∑_ℐ⃗∈{1,…,m}^Nλ_ℐ⃗ Π_π|ℐ⃗⟩⟨ℐ⃗| =∑_ℐ⃗∈{1,…,m}^Nλ_ℐ⃗ ℐ⃗ℐ⃗_π, where Π_π|ℐ⃗⟩=|ℐ⃗_π⟩=|ℐ_π(1)⟩⊗…⊗|ℐ_π(N)⟩. Thus, since ℐ⃗ℐ⃗_π =∏_α=1^N δ_ℐ_α,ℐ_π(α)≥ 0 and λ_ℐ⃗≥ 0, we have Π_πρ≥ 0. Accordingly, we can drop the modulus in (<ref>), yielding the simplified expression 𝒲_C =1/N!-1( ∑_π∈NΠ_πρ -1). With the help of the projector Π_S=1/N! ∑_π∈S_NΠ_π onto the symmetric N-particle subspace, this can further be written as 𝒲_C =N!/N!-1Π_Sρ -1/N!-1, which, in the limit N!≫1, simplifies to 𝒲_C≈Π_Sρ. § RELATION BETWEEN 𝒲_C AND THE EXPECTATION VALUES Π_S(A)Ρ_E AND Π_SΡ To calculate the expectation value of the projector Π_S(A)=1/N! ∑_τ∈S_N (-1)^τ _B(F)Π_τ onto the (anti)symmetric N-particle subspace with respect to the particles' external degrees of freedom, we use the matrix elements of ρ_E as provided in Eq. (<ref>) of the main text, Π_S(A)ρ_E =1/N!^2∑_τ,π,π' ∈S_N (-1)^τππ'_B(F)Π_πρΠ_π'^†⟨E⃗_π'|Π_τ|E⃗_π⟩ =1/N!^2∑_τ,π,π' ∈S_N (-1)^τππ'_B(F)Π_π(π')^-1ρE⃗E⃗_πτ (π')^-1 =1/N!^2∑_π,π' ∈S_NΠ_π(π')^-1ρ =1/N!∑_π∈S_NΠ_πρ =Π_Sρ. That is, we just showed that Π_S(A)ρ_E=Π_Sρ. Using this in Eq. (<ref>) finally yields 𝒲_C≈Π_S(A)=Π_Sρ, as stated in the main text. § PROOF OF EQ. (<REF>) In the following we prove Eq. (<ref>) of the main text. We start wit plugging Eq. (<ref>) into Eq. (<ref>), 𝒲_C =1/N!-1( ∑_ℐ⃗∈{1,…,m}^Nλ_ℐ⃗∑_π∈Nℐ⃗ℐ⃗_π -1). Now, let us introduce the internal occupation list J⃗=(J_1,…,J_m), with J_j the number of particles in the internal state |j⟩ such that ∑_j=1^m J_j=N. Note that an internal occupation J⃗ can give rise to several assignment lists ℐ⃗ [see below Eq. (<ref>)], which differ by permutations of their elements. In particular, for the internal occupation J⃗, let I⃗ be the corresponding assignment list whose elements are listed in ascending order. Using the notation |I⃗_π⟩=|I_π(1)⟩⊗⋯⊗|I_π(N)⟩ for π∈N, we see that |I⃗_ξ⟩=|I⃗⟩ if and only if ξ∈J⃗ = J_1⊗⋯⊗J_m, which is a Young subgroup of N. Hence, for π∈N all permutations of the right coset J⃗π={ξπ|ξ∈J⃗} result in the same state, i.e., |I⃗_π⟩=|I⃗_π'⟩ for all π'∈J⃗π. Therefore, let us construct the transversal Σ(J⃗) of the set of right cosets of J⃗ in N containing one permutation of each distinct right coset such that I⃗_⃗μ⃗I⃗_⃗ν⃗=δ_μ,ν for μ,ν∈Σ(J⃗) <cit.>. Note that Σ(J⃗) has cardinality J≡ |Σ(J⃗)| = N!/∏_j=1^m J_j!. Using this, we can rewrite the sum over all assignment lists ℐ⃗ in Eq. (<ref>), resulting in 𝒲_C =1/N!-1( ∑_J⃗∑_μ∈Σ(J⃗)λ_I⃗_μ∑_π∈NI⃗_μI⃗_μπ -1). Using ∑_π∈NI⃗_μI⃗_μπ=N!/J, λ_I⃗_μ=λ_I⃗≡λ_J⃗, with λ_J⃗ =∏_j=1^m λ_j^J_j, this simplifies to 𝒲_C =1/N!-1( ∑_J⃗∑_μ∈Σ(J⃗)N!/Jλ_J⃗ -1) =1/N!-1( N! ∑_J⃗λ_J⃗ -1) =N!/N!-1∑_J⃗λ_J⃗ - 1/N!-1. Accordingly, with the help of Eq. (<ref>) we find Π_Sρ= ∑_J⃗λ_J⃗ such that for N!≫ 1, we get 𝒲_C ≈∑_J⃗λ_J⃗. Using λ_J⃗ =∏_j=1^m λ_j^J_j and ∑_j=1^m J_j=N, this can also be written as 𝒲_C ≈∑_J_1+J_2+⋯+J_m=Nλ_1^J_1λ_2^J_2⋯λ_m^J_m, which coincides with Eq. (<ref>) of the main text. § PROOF OF EQ. (<REF>) We start our proof of Eq. (<ref>) of the main text by considering 𝒲_C from Eq. (<ref>) of the main text [see also Eq. (<ref>)]. Without loss of generality we can assume that the eigenvalues of ρ_1p satisfy λ_1 ≤λ_2 ≤⋯≤λ_m. That is, λ_max=λ_m is the maximal eigenvalue of ρ_1p. 
We further suppose that λ_m is d-fold degenerate, i.e., λ_m=λ_m-1=⋯=λ_m+1-d. With this in mind, let us rewrite 𝒲_C from Eq. (<ref>), 𝒲_C ≈∑_J_1+J_2+⋯+J_m=Nλ_1^J_1λ_2^J_2⋯λ_m^J_m = ∑_J_1=0^N λ_1^J_1∑_J_2=0^N λ_2^J_2⋯∑_J_m-1=0^N λ_m-1^J_m-1 λ_m^N-J_1-J_2-… -J_m-1 Θ(N-J_1-J_2-…-J_m-1) = λ_m^N ∑_J_1=0^N ( λ_1/λ_m)^J_1∑_J_2=0^N ( λ_2/λ_m)^J_2⋯∑_J_m-1=0^N ( λ_m-1/λ_m)^J_m-1 Θ(N-J_1-J_2-…-J_m-1), where Θ(N-J_1-J_2-…-J_m-1)= 1 for J_1+J_2+…+ J_m-1≤ N 0 otherwise is the Heaviside function. By the degeneracy of the maximum eigenvalue, λ_m=λ_m-1=⋯=λ_m+1-d, this becomes 𝒲_C ≈λ_m^N ∑_J_1=0^N ( λ_1/λ_m)^J_1⋯∑_J_m-d=0^N ( λ_m-d/λ_m)^J_m-d∑_J_m+1-d=0^N ⋯∑_J_m-1=0^N Θ(N-J_1-J_2-…-J_m-1). Now note that in the limit N→∞, the Heaviside function (<ref>) can be approximated by unity. Hence, in the limit N→∞ the many-body coherence is well approximated by 𝒲_C ≈λ_m^N ∑_J_1=0^N ( λ_1/λ_m)^J_1⋯∑_J_m-d=0^N ( λ_m-d/λ_m)^J_m-d∑_J_m+1-d=0^N ⋯∑_J_m-1=0^N Next, we use the geometric series ∑_J_j=0^N (λ_j/λ_m)^J_j=(1-λ_j/λ_m)^-1- (λ_j/λ_m)^N+1(1-λ_j/λ_m)^-1 such that 𝒲_C from Eq. (<ref>) becomes 𝒲_C ≈ (N+1)^d-1 λ_m^N ∏_j=1^m-d[(1-λ_j/λ_m)^-1 - (λ_j/λ_m)^N+1(1-λ_j/λ_m)^-1]. After performing the product, we see that there are factors of the form λ_m^N, λ_m^N (λ_j/λ_m)^N+1, λ_m^N (λ_j/λ_m)^N+1 (λ_k/λ_m)^N+1, etc. However, since λ_j/λ_m<1 for all j=1,…,m-d, in the limit N→∞ the dominant factors are those of the form λ_m^N. That is, the approximation of 𝒲_C simplifies to 𝒲_C ≈ (N+1)^d-1 λ_m^N ∏_j=1^m-d(1-λ_j/λ_m)^-1. Note that one arrives at the same result faster by setting the upper limit of the first m-d sums in (<ref>) to infinity. Equation (<ref>) describes the behaviour of 𝒲_C in the thermodynamic limit. In the case of a non-degenerate maximum eigenvalue λ_m, i.e., d=1, the approximation simplifies to 𝒲_C ≈λ_m^N ∏_j=1^m-1(1-λ_j/λ_m)^-1 as stated in Eq. (<ref>) of the main text. § PROOF OF EQ. (<REF>) In the following we prove Eq. (<ref>) of the main text. As stated in the main text, in the case of small deviations from perfectly indistinguishable particles, ρ_1p can be written as ρ_1p =(1-ϵ)|ϕ⟩⟨ϕ| + ϵρ̃_1p, with ϵ≪ 1/2. Note that ρ̃_1p is Hermitian and has unit trace. Using this decomposition of ρ_1p, the many-body internal state ρ becomes ρ =[(1-ϵ)|ϕ⟩⟨ϕ| + ϵρ̃_1p] ⊗⋯⊗[(1-ϵ)|ϕ⟩⟨ϕ| + ϵρ̃_1p] =(1-ϵ)^N |ϕ⟩⟨ϕ|⊗…⊗|ϕ⟩⟨ϕ| +(1-ϵ)^N-1ϵ( ρ̃_1p⊗|ϕ⟩⟨ϕ|⊗…⊗|ϕ⟩⟨ϕ| +… + |ϕ⟩⟨ϕ|⊗…⊗|ϕ⟩⟨ϕ|⊗ρ̃_1p) +(1-ϵ)^N-2ϵ^2 ( ρ̃_1p⊗ρ̃_1p⊗|ϕ⟩⟨ϕ|⊗…⊗|ϕ⟩⟨ϕ| +… + |ϕ⟩⟨ϕ|⊗…⊗|ϕ⟩⟨ϕ|⊗ρ̃_1p⊗ρ̃_1p) +… Now note that for ϵ≪ 1/2, we have (1-ϵ)^N-jϵ^j ≫ (1-ϵ)^N-j-1ϵ^j+1. Accordingly, the first term of the sum in (<ref>) dominates. Therefore we can approximate 𝒲_C from Eq. (<ref>) as 𝒲_C ≈1/N!-1( (1-ϵ)^N ∑_π∈NΠ_π|ϕ⟩⟨ϕ|⊗…⊗|ϕ⟩⟨ϕ| -1) = N!/N!-1 (1-ϵ)^N -1/N!-1. In the limit N!≫ 1 this simplifies to 𝒲_C≈ (1-ϵ)^N, as stated in Eq. (<ref>) in the main text. § LOWER ORDER COHERENCES Let us consider the reduced external state [see Eq. (<ref>) in the main text] ρ_E= ∑_π,π' ∈N [ρ_E ]_π,π'|E⃗_π⟩⟨E⃗_π'| with [ρ_E ]_π,π'=(-1)_B(F)^ππ'1/N!Π_πρΠ^†_π'. By tracing out a particle, we obtain the reduced external N-1-particle state ρ_E^(N-1) = ∑_π,π' ∈N [ρ_E ]_π,π'E_π'(N)E_π(N) |E⃗^(N-1)_π⟩⟨E⃗^(N-1)_π'| =∑_π,π' ∈N π(N)=π'(N) [ρ_E ]_π,π' |E⃗^(N-1)_π⟩⟨E⃗^(N-1)_π'|, where |E⃗^(N-1)_π⟩ = |E_π(1)⟩⊗…⊗|E_π(N-1)⟩. 
With the help of the Young subgroup S_N-1;α=S_{1,…,N}∖{α}⊗S_{α}, this can be written as ρ_E^(N-1) =∑_α=1^N ∑_π,π' ∈S_N-1;α [ρ_E ]_π,π' |E⃗^(N-1)_π⟩⟨E⃗^(N-1)_π'| =1/N∑_α=1^N ∑_π,π' ∈S_N-1;α [ρ_E^(N-1) ]_π,π' |E⃗^(N-1)_π⟩⟨E⃗^(N-1)_π'| =1/N∑_α=1^N ρ_E^(N-1;α), where [ρ_E^(N-1)]_π,π'=N [ρ_E]_π,π', and ρ_E^(N-1;α)=∑_π,π' ∈S_N-1;α [ρ_E^(N-1) ]_π,π' |E⃗^(N-1)_π⟩⟨E⃗^(N-1)_π'|. We recognize that ρ_E^(N-1;α) corresponds to the reduced external state with the αth particle excluded. Since we consider the internal product state ρ=ρ_1p⊗…⊗ρ_1p in Eq. (<ref>), for different α, the states ρ_E^(N-1;α) only differ by the labeling of the external states, and, thus, must have equal many-body coherences. Accordingly, by Eq. (<ref>), and the linearity of 𝒲_C^(N-1) with respect to ρ_E^(N-1), the states ρ_E^(N-1) and ρ_E^(N-1;N) have equal many-body coherences. Thus, since ρ_E^(N-1;N)=∑_π,π' ∈S_N-1 [ρ_E^(N-1) ]_π,π' |E⃗^(N-1)_π⟩⟨E⃗^(N-1)_π'| coincides with the reduced external state ρ_E from Eq. (<ref>) with N-1 instead of N particles, we can conclude that the normalized many-body coherence 𝒲_C^(N-1) of the reduced N-1-particle state ρ_E^(N-1) coincides with the normalized many-body coherence 𝒲_C [see Eq. (<ref>) in the main text] in the case of N-1 instead of N particles. Similarly, by tracing out further particles, the same reasoning lets us conclude that the many-body coherence 𝒲_C^(k) of the reduced k-particle state ρ_E^(k)=N-kρ_E coincides with 𝒲_C in the case of k instead of N particles. The same conclusion can be drawn faster by considering the expression of 𝒲_C from Eq. (<ref>): Since each term of the reduced k-particle state ρ_E^(k) is associated with the unsymmetrized internal state ρ^(k)=ρ_1p^⊗ k, by Eq. (<ref>), we must have 𝒲_C^(k) =k!/k!-1Π_S^(k)ρ^(k) -1/k!-1, with Π_S^(k) = 1/k! ∑_π∈S_kΠ_π.
§ ATOMS IN THE SMALL TEMPERATURE LIMIT
In the small temperature limit T ≪Δ E/ln(2) the atoms are with high probability (e^-β E_1 Z(β)^-1) in the internal ground state |1⟩. Hence, by rewriting the single-particle internal state ρ_1p= ∑_j=1^m e^-β E_jZ(β)^-1|j⟩⟨j| as ρ_1p =e^-β E_1/Z(β)|1⟩⟨1|+ ∑_j=2^m e^-β E_j/Z(β)|j⟩⟨j| =(1-ϵ) |1⟩⟨1|+ϵ ρ̃_1p, we can identify 1-ϵ=e^-β E_1/Z(β)≈e^-β E_1/e^-β E_1+e^-β E_2 = 1/1+e^-βΔ E≈ 1- e^-βΔ E, i.e., we have ϵ≈ e^-βΔ E. For particle numbers N! ≫ 1 we can then apply Eq. (<ref>) [see also Eq. (<ref>)], resulting in 𝒲_C≈ (1- e^-βΔ E)^N as stated in the main text.
§ FAINT DISTINGUISHABILITIES OF PHOTONS WITH RANDOM ARRIVAL TIMES
Let us consider the single-particle internal state ρ_1p=∫dt P(t) |t⟩⟨t| for any probability distribution P(t), i.e., P(t) is not necessarily a normal distribution. Without loss of generality we suppose that ⟨t⟩=∫dt t P(t) =0 such that, in the case of faint distinguishabilities σΔ≪ 1/√(2), the photons arrive at time t≈⟨t⟩=0 with high probability. Hence, we can write the single-particle internal state as ρ_1p = ⟨0|ρ_1p|0⟩|0⟩⟨0| + (1-⟨0|ρ_1p|0⟩) ρ̃_1p =(1-ϵ) |0⟩⟨0| + ϵ ρ̃_1p, and identify 1-ϵ =⟨0|ρ_1p|0⟩. Calculating the expectation value yields ⟨0|ρ_1p|0⟩ =∫_-∞^∞dt P(t) |⟨0|t⟩|^2 =∫_-∞^∞dt P(t) e^-Δ^2 t^2, where we used the single-photon wave packet |t⟩ introduced in the main text to obtain the overlap |⟨0|t⟩|^2=e^-Δ^2 t^2. Since P(t) in (<ref>) must be small for times |t|>1/Δ [recall that we consider faint distinguishabilities σΔ≪ 1/√(2), i.e., σ≪ 1/√(2)Δ], we can expand the exponential function in (<ref>), resulting in ⟨0|ρ_1p|0⟩ ≈∫_-∞^∞dt P(t) ( 1-Δ^2 t^2) =1-Δ^2 σ^2, where we used ⟨t^2⟩=σ^2, since ⟨t⟩=0.
Hence, we find 1-ϵ≈ 1-Δ^2 σ^2 (compare to the average mutual fidelity in <cit.>). By Eq. (<ref>) [see also Eq. (<ref>)], for N!≫ 1, we then have 𝒲_C≈ (1-Δ^2 σ^2)^N as provided in the main text.
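As an independent brute-force check of the identities derived in this Supplemental Material (a numerical sketch of ours, with an assumed two-level internal spectrum, not part of the original), one can build the symmetrization projector explicitly for a small particle number and confirm that its expectation value in ρ_1p^{⊗N} equals the sum over internal occupations of products of eigenvalues:

```python
import numpy as np
from itertools import permutations, combinations_with_replacement, product
from math import factorial, prod

def symmetrizer(m, N):
    """Projector onto the N-particle symmetric subspace of (C^m)^{tensor N}."""
    dim = m ** N
    Pi_S = np.zeros((dim, dim))
    basis = list(product(range(m), repeat=N))             # multi-indices (i_1,...,i_N)
    index = {b: k for k, b in enumerate(basis)}
    for perm in permutations(range(N)):
        P = np.zeros((dim, dim))
        for k, b in enumerate(basis):
            P[index[tuple(b[p] for p in perm)], k] = 1.0  # permute the tensor factors
        Pi_S += P
    return Pi_S / factorial(N)

def occupation_sum(eigs, N):
    """Sum over internal occupations of products of eigenvalues (see the main text)."""
    return sum(prod(eigs[i] for i in idx)
               for idx in combinations_with_replacement(range(len(eigs)), N))

m, N = 2, 3
eigs = np.array([0.8, 0.2])         # assumed spectrum of rho_1p; the trace below only
rho_1p = np.diag(eigs)              # depends on the spectrum, so a diagonal choice suffices
rho = rho_1p
for _ in range(N - 1):
    rho = np.kron(rho, rho_1p)      # rho_1p^{tensor N}

print(np.trace(symmetrizer(m, N) @ rho))   # expectation value of the symmetrizer
print(occupation_sum(eigs, N))             # same number: 0.8^3 + 0.8^2*0.2 + 0.8*0.2^2 + 0.2^3
```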
http://arxiv.org/abs/2409.03118v1
20240904230227
Generative artificial intelligence for computational chemistry: a roadmap to predicting emergent phenomena
[ "Pratyush Tiwary", "Lukas Herron", "Richard John", "Suemin Lee", "Disha Sanwal", "Ruiyu Wang" ]
cond-mat.stat-mech
[ "cond-mat.stat-mech", "cond-mat.dis-nn", "cs.LG", "physics.chem-ph" ]
Pratyush Tiwary^a,b,1, Lukas Herron^b,c,2, Richard John^d,2, Suemin Lee^b,c,2, Disha Sanwal^a,2, and Ruiyu Wang^a,2 [a]Department of Chemistry and Biochemistry and Institute for Physical Science and Technology, University of Maryland, College Park 20742, USA. [b]University of Maryland Institute for Health Computing, Bethesda 20852, USA. [c]Biophysics Program and Institute for Physical Science and Technology, University of Maryland, College Park 20742, USA. [d]Department of Physics and Institute for Physical Science and Technology, University of Maryland, College Park 20742, USA. Author contributions: P.T., L.H., R.J., S.L., D.S., and R.W. wrote the paper. ^2Authors two, three, four, five and six contributed equally to this work. ^1To whom correspondence should be addressed. E-mail: [email protected] § ABSTRACT The recent surge in Generative Artificial Intelligence (AI) has introduced exciting possibilities for computational chemistry. Generative AI methods have made significant progress in sampling molecular structures across chemical species, developing force fields, and speeding up simulations. This Perspective offers a structured overview, beginning with the fundamental theoretical concepts in both Generative AI and computational chemistry. It then covers widely used Generative AI methods, including autoencoders, generative adversarial networks, reinforcement learning, flow models and language models, and highlights their selected applications in diverse areas including force field development and protein/RNA structure prediction. A key focus is on the challenges these methods face before they become truly predictive, particularly in predicting emergent chemical phenomena. We believe that the ultimate goal of a simulation method or theory is to predict phenomena not seen before, and that Generative AI should be subject to these same standards before it is deemed useful for chemistry. We suggest that to overcome these challenges, future AI models need to integrate core chemical principles, especially from statistical mechanics. This manuscript was compiled on September 9, 2024. The last few years have seen a surge of excitement and an explosion of Generative Artificial Intelligence (AI) methods across scientific fields, with computational chemistry being no exception. Pioneering efforts include sampling structures and thermal distributions of complex molecular systems, developing transferable force fields, and performing accelerated simulations<cit.>. With numerous tools emerging, a clear Perspective is now essential to highlight progress and critically examine pitfalls. While the scope of Generative AI's impact in chemistry is broad, this Perspective will focus exclusively on molecular simulation-driven computational chemistry. Molecular simulations offer an efficient platform for validating and iterating on new Generative AI techniques, continually improving them using force field-based approximations of reality. Additionally, applying Generative AI to molecular simulation data can open up chemical and physical spaces that are otherwise difficult to access. This Perspective is structured as follows.
We begin with The Theoretical Minimum, summarizing the essential theoretical concepts and terminology of both Generative AI and computational chemistry. Next, in Generative AI Methods for Computational Chemistry, we provide an overview of widely used Generative AI methods (Fig. <ref>), including autoencoders and their derivatives, generative adversarial networks (GANs), reinforcement learning, flow-based methods, and recurrent neural networks and language models. The Perspective then highlights a few Selected Applications out of very many, focusing on ab initio quantum chemistry, coarse-grained force fields, protein structure prediction, and RNA structure prediction. Following this, the section Desirables from Generative AI for Chemistry explores common themes and characteristics of Generative AI tools that are particularly desirable for chemistry applications. Of particular note here is predicting emergent phenomena, which lies at the heart of chemistry and all science. Emergent phenomena occur when new properties arise even in systems with simple underlying interactions if they are large enough and/or studied long enough, as Phil Anderson discussed in his classic essay "More Is Different"<cit.>. Current AI approaches struggle with capturing these emergent behaviors. Recent literature on some of the most powerful Generative AI frameworks such as large language models<cit.> and diffusion models<cit.> has highlighted and quantified the limitations of current AI tools in capturing any emergent behavior, often showing that these tools primarily excel at feats that look impressive but are essentially memorization and interpolation. We conclude with a brief Critical Assessment and Outlook, providing an honest evaluation of the progress so far and the challenges that need to be addressed before Generative AI becomes a reliable member of the molecular simulator's toolbox for predicting emergent phenomena. § THE THEORETICAL MINIMUM In this section we will summarize the theoretical concepts and terms key to this Perspective. We do this for computational chemistry and for Generative AI. We recommend Ref. <cit.> for a deeper understanding of computational chemistry concepts. For Generative AI there is no one book that can stay up-to-date with the dizzying pace of development, though Ref. <cit.> covers several key underlying concepts. §.§ Computational chemistry * Potential energy surface: The potential energy surface (PES) is a multidimensional surface representing the energy of a molecular system as a function of its atomic positions. The minima on the PES correspond to stable molecular structures, while the pathways connecting these minima represent possible reaction mechanisms, ignoring entropic effects that become relevant at nonzero temperatures. * Force fields: Force fields are mathematical models used to describe the PES. They consist of a set of parameters and equations that define the interactions between atoms, including bond stretching, angle bending, torsional angles, and non-bonded interactions (e.g., van der Waals forces, electrostatic interactions). The choice of force field significantly influences the accuracy of molecular simulations, as it dictates how well the model can replicate real physical behaviors. Popular force fields include AMBER, CHARMM, and OPLS, and, in recent years, also machine learning force fields (MLFFs)<cit.>.
* Thermodynamic ensemble: A thermodynamic ensemble is a statistical representation of a system in which all possible microstates are considered according to specific environmental constraints like temperature, pressure, and volume. The choice of ensemble is crucial for accurately modeling real-world conditions and predicting system behavior from molecular simulation or Generative AI. * Collective variables and reaction coordinate: Collective variables (CVs) simplify molecular analysis by reducing dimensionality and capturing essential degrees of freedom. They are key in enhanced sampling techniques for exploring rare events. Choosing the right CVs, which approximate the reaction coordinate (RC), is vital for capturing system dynamics. The RC tracks a system’s progress along a reaction pathway, often identifying the transition state. Often the committor is considered the ideal RC<cit.>, as it quantifies the probability of a system evolving toward a specific product state, accounting for both energetic and entropic effects. * Free energy surface: Free energy surfaces (FES) extend the concept of potential energy surfaces. They quantify the probability of observing the system as a function of one or more CVs by marginalizing out all other degrees of freedom. Depending on how closely the chosen CVs approximate the true RC, the FES can unfortunately be mechanistically quite misleading, as it could be masking out true barriers. * Molecular simulations: Molecular Dynamics (MD) and Monte Carlo (MC) are essential for simulating molecular systems. MD solves Newton's equations to model atomic trajectories, while MC uses random sampling to explore configurational space. Ab initio MD combines quantum mechanical calculations like Density Functional Theory (DFT) with MD for greater accuracy but at higher computational cost. §.§ Generative AI * Latent Variables: Latent variables are hidden factors that capture underlying structures in data. In models like autoencoders and pre-deep learning methods such as Principal Component Analyses, these variables represent the reduced-dimensional space that captures essential features of the data, facilitating generation and reconstruction processes. In general, latent variables are entangled, meaning they mix multiple underlying factors. Disentangling them improves interpretability and control, enabling precise manipulation of control parameters. * Prior: The prior is a distribution over the latent variables that encodes initial beliefs about their values before observing data. The use of priors can help achieve models that are more intuitive and less prone to overfitting. * Loss Function: The loss function quantifies the difference between generated samples and the true data distribution, with different metrics used depending on the method. Root Mean Squared (RMS) error measures prediction accuracy, cross-entropy evaluates how similar predicted and actual distributions are, Kullback-Leibler (KL) divergence assesses how much one distribution diverges from a reference, and Wasserstein distance, while more computationally expensive, provides stability and clearer interpretation by assessing the cost of transforming one distribution into another. * Training, Testing, and Validation: Training fits the model to data, validation tunes it on unseen data to avoid overfitting, and testing evaluates its generalization on a separate dataset. Ideally, the test data should be entirely unseen, but in practice, overlap often occurs, especially in chemistry.
Thus, careful data curation is crucial for reliable Generative AI in computational chemistry<cit.>. * Regularization and Mode Collapse: Regularization methods, such as dropout, weight decay, and early stopping, are used to prevent overfitting by penalizing overly complex models and ensuring they generalize well to new data. Especially in generative models, care must be taken to avoid mode collapse, where the model generates limited, repetitive outputs, missing the diversity in the data. * Embedding: Embeddings are dense, low-dimensional representations that map discrete data, like words or items, into continuous vector spaces where similar items are closer together. While latent space emphasizes compressing data and extracting essential features, embedding space prioritizes capturing relationships and semantic meaning. * Attention: Attention mechanisms allow models to focus on different parts of the input data when generating outputs. This is critical in transformer models and helps improve performance in tasks like natural language processing by enabling the model to weigh different input components. § GENERATIVE AI METHODS FOR COMPUTATIONAL CHEMISTRY Here we provide an overview of popular Generative AI methods relevant to computational chemistry. We describe the central ideas and highlight what makes these methods appealing, their limitations and new research directions towards improving them. §.§ Autoencoders and derived methods Autoencoders have become increasingly visible in the field of computational chemistry due to their powerful ability to learn and represent complex high-dimensional molecular data as points in a low-dimensional latent space. Generally speaking, an autoencoder is a type of neural network that compresses input data into a lower-dimensional latent space and reconstructs it as accurately as possible. By sampling from this latent space, new high-dimensional molecular data can be generated, potentially discovering novel molecular configurations. Points close in the latent space correspond to similar molecules, allowing the autoencoder to explore chemical diversity (by sampling near the edges of the latent space) and classify molecular similarity (on the basis of proximity within the latent space). This strategy has been applied to classify and explore chemical space, improving similarity searches and clustering of compounds from experiments or calculations like DFT. Autoencoders are also used in reaction coordinate discovery and enhanced molecular dynamics, enabling visualization and exploration of complex, high-dimensional landscapes by reducing dimensionality while retaining key features <cit.>. Autoencoders are popular in computational chemistry for their easy-to-visualize latent variables, but they are prone to misuse. A central issue is the temptation of assuming Euclidean geometry in the latent space, which can lead to uncontrolled mapping of distances between the latent and high-dimensional spaces. Additionally, latent variables may be correlated or lack physical relevance. Understanding these problems requires a deeper look into autoencoder construction, where traditionally an encoder maps input data to the latent space, and a decoder maps it back, with the goal of minimizing reconstruction loss. With this generic recipe, one can construct different types of autoencoder-inspired methods by varying the following: * Prior for the latent variable: Autoencoders can minimize training loss significantly with expressive encoders and decoders, risking overfitting. 
To prevent this, a prior distribution is imposed on the latent variable, adding a regularization loss that keeps the latent variable close to the prior while maintaining low reconstruction loss. The choice of prior, such as a Gaussian distribution in variational autoencoders (VAEs) or a mixture of Gaussians<cit.>, is an active research area. * Additional loss terms to enforce physics: Physically meaningful latent representations can be obtained by using disentanglement-based loss terms, like in the β-VAE approach <cit.>, or through dynamics-based priors <cit.>. In chemistry, meaningful latents can also be obtained by adding loss terms that maximize specific physical attributes, as shown in Ref. <cit.>, where the two dimensions of the latent space correspond to entropic and enthalpic degrees of freedom. * Generalizing output task: A traditional autoencoder can be extended to predict other quantities, not just reconstruct inputs. For example, using the information bottleneck framework <cit.>, it can predict which metastable state a molecular system will visit after a time delay. This approach <cit.> closely mirrors desirable attributes of the committor function. §.§ Generative adversarial networks (GANs) Generative Adversarial Networks (GANs)<cit.> have gained particular attention for their ability to produce high-quality realistic images, audio, video, and chemical molecules. GANs possess two unique features: the discriminator and the generator (Fig. <ref>), which compete against each other in a zero-sum game. Here, the generator synthesizes new data while the discriminator tries to distinguish between the synthetic data and real data from a training set. Through continuous feedback, the generator progressively improves its ability to generate realistic data until the discriminator can no longer reliably distinguish the generator's newly synthesized data from real data. This central idea underlying GANs has been refined through various variants. Conditional GANs (cGANs)<cit.>, for instance, generate data conditioned on specific attributes, which is especially useful in chemistry for creating molecules or materials with desired properties. Wasserstein GANs (WGANs)<cit.> enhance training stability and mitigate issues like mode collapse by optimizing the Wasserstein distance between synthetic and real data. GANs have also demonstrated great potential for chemistry through integration with other neural networks. For molecular discovery, You et al.<cit.> have shown that combining graph neural networks with GANs, as in the Graph Convolutional Policy Network (GCPN), can effectively generate novel molecular structures by optimizing desired properties with Reinforcement Learning (Sec. <ref>). In addition, the idpGAN <cit.> model combines GANs with transformer architectures (Sec. <ref>) to generate sequence-dependent protein conformational ensembles, which can capture protein dynamics and interactions. Sidky et al. <cit.> proposed latent space simulators that integrate the VAMPnet model <cit.> with WGANs. This allows generating long synthetic MD trajectories that can accurately reproduce atomistic structures and kinetics observed in training trajectories, at a much lower computational cost. Despite these forays in chemistry, GANs have several limitations that restrict their applicability. These limitations include training instability, mode collapse, and a heavy dependence on large datasets.
Training instability arises due to the delicate balance between the generator and discriminator, which leads to non-convergence and contributes to the issue of mode collapse. Reliance on training data makes it challenging to generate data that is out-of-distribution (OOD)<cit.>. As a result, GANs are gradually going out of fashion for chemical applications, and the field is shifting toward new state-of-the-art methods, such as diffusion models and reinforcement learning-based approaches, which offer solutions to some of the inherent limitations of GANs <cit.>. §.§ Reinforcement learning In Reinforcement learning (RL), a proverbial agent learns to make optimal decisions by interacting with an environment to maximize cumulative rewards. RL is usually modeled as a Markov decision process, comprising states, actions, an environment, and rewards; the agent receives rewards based on its actions and the current state of the environment, with the reward signal guiding subsequent actions and state transitions. The agent employs trial and error to explore various strategies, which helps it improve its actions over time. One of the most widely used applications of RL in the pharmaceutical industry is in the context of computationally driven chemistry, through the method REINVENT<cit.> and many others that have followed<cit.>, which use RL, often combined with other DL approaches, to generate optimized molecules consistent with user-defined properties. In computational chemistry it has been used for the learning of transition states<cit.>, diffusive dynamics <cit.> and sampling protein conformational dynamics<cit.>. In spite of its promise, RL for chemistry continues to be plagued with a few issues fundamental to molecular systems. These include: * Curse of dimensionality: Molecular systems exist in high-dimensional spaces, which RL algorithms can struggle to explore and learn efficiently. Often this is dealt with through an adaptive use of RL, where RL is trained on some existing data set and the trained surrogate model is used to perform further exploration and data generation<cit.>. * Data scarcity problem: Important molecular events, like chemical reactions or conformational changes, occur infrequently and are hard to capture, resulting in incomplete datasets. Consequently, RL models trained on limited data may be biased towards common states and miss rare but crucial phenomena. * Novelty problem: Molecular systems often have multimodal data corresponding to different metastable states. RL models often fail to generate a diverse set of outputs, leading to the mode collapse problem (Sec. <ref>). For instance, in Ref. <cit.>, the most potent design had a high structural similarity to an existing drug molecule, a problem somewhat common in the use of RL for discovery of chemical matter. We conclude by highlighting how concepts from chemistry, particularly statistical mechanics, are enhancing RL. In Maximum Entropy RL (MaxEnt RL) <cit.>, the goal is to maximize both expected reward and policy entropy, promoting stochasticity in actions. This approach improves RL algorithms' adaptability to real-world complexities. Recent work on Maximum Diffusion RL <cit.>, based on the Principle of Maximum Caliber <cit.>, has shown efficiency gains over traditional MaxEnt RL. The GFlowNet framework <cit.> treats state-action trajectories as network flows, ensuring robust sampling with detailed balance and importance sampling, outperforming Markov Chain Monte Carlo methods in certain cases.
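To make the MaxEnt RL idea concrete, the toy sketch below runs soft value iteration on a small, randomly generated tabular MDP: the hard max over actions in the Bellman backup is replaced by a temperature-weighted log-sum-exp, so the resulting optimal policy stays stochastic. This is a generic illustration of the principle, not the algorithm of any of the cited works; the transition probabilities, rewards, and temperature α are invented for the example.

# Toy sketch of maximum-entropy RL: soft value iteration on a random tabular MDP.
import numpy as np

n_states, n_actions, gamma, alpha = 3, 2, 0.9, 0.5   # alpha = entropy temperature
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, s']
R = rng.normal(size=(n_states, n_actions))                        # R[s, a]

V = np.zeros(n_states)
for _ in range(200):
    Q = R + gamma * np.einsum("sap,p->sa", P, V)         # soft Q-values
    V = alpha * np.logaddexp.reduce(Q / alpha, axis=1)   # log-sum-exp replaces max

pi = np.exp((Q - V[:, None]) / alpha)    # MaxEnt-optimal policy, pi(a|s) ~ exp(Q/alpha)
print(np.round(pi, 3))

As the temperature α approaches zero, the log-sum-exp collapses to the usual max and the policy becomes deterministic; larger α keeps more entropy in the policy, which is the adaptability argument made above.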
The intersection of statistical mechanics and RL continues to be a promising area for developing innovative RL methods. §.§ Flow based methods Having discussed recent RL variants inspired by statistical physics in the last subsection, we now turn to models rooted in these principles from their inception <cit.>. Flow models aim to sample from an inaccessible target probability distribution using limited available samples – an empirical dataset 𝒟. Physics-inspired methods like simulated tempering <cit.> and annealed importance sampling <cit.> transform samples from the simple prior into samples from the target by constructing a bridge between the distributions. However, constructing the bridge relies on modifying the energy functions of the target and prior distributions. Generative flow models use neural networks to bridge distributions without requiring access to the energy function. Flow-based models transform a simple prior distribution q(𝐳) into a more complex target distribution p(𝐱) through a series of learnable mappings (Fig. <ref>). Normalizing flows are a popular framework for sampling from p(𝐱). A normalizing flow model parameterizes an invertible transformation ℳ. The transformation acts as a change of variables between the target and the prior, 𝐳 = ℳ(𝐱), so that sampling 𝐳 from the prior and applying the inverse ℳ^-1 produces a sample from the target distribution. The change in probability associated with the change of coordinates is given by the identity log p(𝐱) = log q(𝐳) + log| det J_ℳ(𝐱) |. The identity in Eq. <ref> reweights samples between the prior and target distributions via the Jacobian determinant of ℳ. The optimal change of variables ℳ maximizes the likelihood for samples in 𝒟 and can be obtained by training a neural network ℳ_θ to maximize log p(𝐱). However, computing the determinant is a (usually) prohibitively expensive operation that scales as 𝒪(d^3) in the general case, where d is the number of components of 𝐱. Normalizing flows address this by employing architectures that result in structured Jacobians (for example, alternatingly upper- and lower-triangular<cit.>) for which the determinant computation is faster. However, the computationally tractable determinant results in reduced expressivity, and much recent effort has been devoted to addressing this tradeoff <cit.>. Diffusion models<cit.> are generative algorithms that do not require access to the Jacobian determinant during training. Diffusion models learn to invert an Ornstein-Uhlenbeck (OU) diffusion process. The diffusion process generates a probability flow p(𝐱,t) that transports the target distribution p(𝐱,t=0) to the prior q(𝐳) ≡ p(𝐱,t=1)<cit.>. Diffusion models are made possible by a theorem stating that the drifts of the forward- and reverse-time diffusion processes differ only by the score ∇_𝐱log p(𝐱,t) <cit.>. Similar to normalizing flows, a neural network is trained to estimate the score from realizations of the forward process. Once parameterized, the score network can be used to simulate the time-reversed diffusion process, thereby generating samples from the target distribution. The ideas behind diffusion models are deeply rooted in statistical physics. The OU diffusion process is naturally studied using techniques from non-equilibrium thermodynamics<cit.>, and parametrizing the gradient of a potential (the score) rather than the potential itself is similar in spirit to force-matching in coarse-grained models <cit.>.
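Returning briefly to normalizing flows, the sketch below illustrates why structured Jacobians make the change-of-variables identity tractable: in a single affine coupling transform 𝐳 = ℳ(𝐱), half of the coordinates pass through unchanged and condition a scale-and-shift of the other half, so the Jacobian is triangular and log|det J_ℳ(𝐱)| reduces to a sum of log-scales. The tiny conditioner network and its random weights are placeholders for illustration, not any particular published architecture.

# Minimal sketch of an affine coupling transform z = M(x) with a cheap log-det.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 16)) * 0.1, np.zeros(16)   # toy conditioner network
W2, b2 = rng.normal(size=(16, 4)) * 0.1, np.zeros(4)    # outputs 2 log-scales + 2 shifts

def conditioner(x_a):
    h = np.tanh(x_a @ W1 + b1)
    out = h @ W2 + b2
    return out[:, :2], out[:, 2:]                        # log_s, t

def forward(x):
    # Map data x -> latent z and return log|det J_M(x)| per sample.
    x_a, x_b = x[:, :2], x[:, 2:]
    log_s, t = conditioner(x_a)
    z = np.concatenate([x_a, x_b * np.exp(log_s) + t], axis=1)
    return z, log_s.sum(axis=1)                          # triangular Jacobian

def log_prob(x):
    # log p(x) = log q(z) + log|det J_M(x)| with a standard-normal prior q.
    z, log_det = forward(x)
    log_q = -0.5 * (z ** 2).sum(axis=1) - 0.5 * z.shape[1] * np.log(2 * np.pi)
    return log_q + log_det

print(log_prob(rng.normal(size=(5, 4))))

In a real flow, several such couplings are stacked (with the roles of the two halves alternating) and the conditioner weights are trained to maximize log p(𝐱) over the dataset 𝒟.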
Looking forward, tools from statistical physics will be central to understanding how diffusion models operate. For example, using techniques from spin-glass theory it was found that the dynamics of the diffusion processes can be divided into distinct regimes distinguishing generalization and memorization of the training data <cit.>. Recent exciting advancements such as flow-matching<cit.> and stochastic interpolants<cit.> further introduce ideas from optimal transport to model arbitrary diffusion processes. §.§ Recurrent neural networks and large language models Recurrent neural networks (RNNs), like Long Short-Term Memory (LSTM) networks, and transformer-based architectures <cit.>, have made significant strides in natural language processing, speech recognition, and computational chemistry, notably in protein structure prediction with AlphaFold2. These models excel in handling sequential data, whether predicting the next value in a time series, generating sequences, or classifying entire sequences. LSTM networks use memory cells and gating mechanisms to maintain long-term dependencies, while transformers, such as those in large language models (LLMs), use self-attention mechanisms to capture complex relationships, processing sequences more efficiently and effectively. A typical LLM can contain an encoder and a decoder (Fig. <ref>) both composed of multiple transformer layers, each leveraging self-attention to understand the input context. The attention mechanism is key in aligning and focusing on the most relevant parts of the sequence during both encoding and decoding, enabling the model to generate coherent and contextually appropriate outputs. These models are particularly valuable for chemistry, where many processes are non-Markovian and require long-term dependencies for accurate predictions, such as in reaction prediction and molecular dynamics simulations. However, the success of LLMs in other domains doesn't easily transfer to chemistry due to their limitations in extrapolating beyond the training data and not accounting for hidden biases <cit.>. This is critical in a field where novel molecules and reactions often lie outside previously explored chemical spaces. To address this, specialized approaches are needed, such as the one proposed in <cit.>, which integrates statistical physics into LSTM training using a path sampling approach based on the Principle of Maximum Caliber <cit.>. § SELECTED APPLICATIONS §.§ Ab initio quantum chemistry and coarse-grained force fields In quantum chemistry, deep neural networks are used to achieve high quantum-level accuracy while reducing computational costs. For instance, AI algorithms have been developed to solve electronic Schrödinger equations for ground and excited states, reducing the computational complexity from O(N^7) to O(N^4) <cit.>. As for MD simulations, machine learning force fields allow ab initio quantum-quality calculations to approach the speed of classical MD simulations <cit.>. Another approach directly using Generative AI is applying coarse-grained models for macromolecules, which effectively reduce the number of atoms in MD simulations <cit.>. Ab initio MD is computationally expensive, but MLFFs can estimate energies and forces from atomic configurations without electron calculations <cit.>, enabling quantum-quality simulations of hundreds of atoms over nanoseconds. 
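The essential structure behind an MLFF can be conveyed with a deliberately oversimplified sketch: a learned energy built from pairwise distances, with forces obtained analytically as -dE/dR. Production MLFFs use far richer many-body descriptors and actually trained parameters; the radial basis, the random "weights", and the configuration below are placeholders for illustration only.

# Toy sketch of the MLFF idea: learned pairwise energy E(R), forces = -dE/dR.
import numpy as np

centers = np.linspace(1.0, 5.0, 8)                           # radial basis centers
width = 0.5
weights = np.random.default_rng(0).normal(size=8) * 0.01     # stand-in "trained" parameters

def basis(r):
    return np.exp(-((r - centers) ** 2) / (2 * width ** 2))

def basis_grad(r):
    return basis(r) * (-(r - centers) / width ** 2)

def energy_and_forces(R):
    # R: (n_atoms, 3) positions. Returns total energy and (n_atoms, 3) forces.
    n = len(R)
    E, F = 0.0, np.zeros_like(R)
    for i in range(n):
        for j in range(i + 1, n):
            d = R[i] - R[j]
            r = np.linalg.norm(d)
            E += weights @ basis(r)
            dEdr = weights @ basis_grad(r)
            F[i] -= dEdr * d / r                             # force = -dE/dR_i
            F[j] += dEdr * d / r
    return E, F

E, F = energy_and_forces(np.random.default_rng(1).normal(size=(5, 3)) * 2.0)
print(E, F.shape)

The key point is that the forces are exact gradients of the learned energy, so the resulting dynamics is driven by a conservative force field, just as with a classical potential.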
MLFFs have been used to study liquid dielectric constants <cit.>, phase behaviors of water <cit.>, proton transfer for energy materials <cit.>, and prebiotic chemical reactions <cit.>. Beyond MLFFs, diffusion models can generate molecular structures by sampling the Boltzmann distribution, optimizing geometry without force and energy calculations <cit.>, speeding up ground-state searches. Additionally, ML can model charge density from structures <cit.>, which, while not always needed for MD, provides the missing charge density in MLFFs, expanding AI applications to complex quantum chemistry problems like predicting vibrational spectra <cit.>. Developing MLFFs that can generalize beyond training data remains an area of concern and active research interest <cit.>. While MLFFs maintain quantum-level accuracy in systems typically studied with classical MD, CG models simplify simulations by representing larger systems with reduced atomic detail. An open challenge in CG models is backmapping to all-atom configurations. One approach uses an auto-encoder architecture, where the encoder learns the CG model, and the decoder backmaps it to an ensemble's average structure <cit.>. Denoising Diffusion Probabilistic Models can also generate atom-resolution structures <cit.>. Additionally, learning the score term of a denoising diffusion process can approximate CG force fields and generate equilibrium Boltzmann distributions around local energy minima <cit.>. §.§ Protein structure and conformation prediction Understanding a protein's structure, both in native and non-native states, is crucial for determining its function, stability, and interactions. Traditionally, structure prediction relied on experimental techniques like X-ray diffraction (XRD), nuclear magnetic resonance (NMR), and cryo-electron microscopy (Cryo-EM) <cit.>. The success of AI-driven approaches like AlphaFold2 (AF2) and RoseTTAFold, which can now predict crystal-like protein structures, is largely due to the availability of such high-quality experimental data deposited in the Protein Data Bank (PDB) <cit.>. AF2 uses co-evolutionary information from multiple sequence alignments (MSAs) to predict structures from amino acid sequences, while RoseTTAFold enhances accuracy by integrating evolutionary information with 3D coordinate refinement <cit.>. Recently, AF3 and RFDiffusion have further advanced predictions by incorporating diffusion-based models capable of handling complex structures <cit.>. While AI methods have transformed biomolecular structure prediction, they remain limited by the quality of their training datasets and struggle to predict metastable non-native structures or the effects of point mutations <cit.>. To explore hidden conformations, approaches like MSA perturbation have been developed, including reduced-MSA <cit.>, AF2Cluster <cit.>, and SPEECH-AF <cit.>, all providing structural diversity. However, proteins are dynamic systems with fluctuations that depend precisely on thermodynamic parameters such as temperature, pressure, and chemical potential <cit.>. To capture this conformational diversity, methods like AF2RAVE <cit.> and AF2 integrated with MSM <cit.> generate hypothetical structures through MSA perturbation and rank them according to their Boltzmann weight via MD simulations. AF2RAVE, for example, uses reduced-MSA for initial predictions, clusters them with the State Predictive Information Bottleneck approach <cit.>, and ranks them using enhanced sampling MD simulations.
This method has been effective in capturing conformational changes, such as DFG-in and -out conformations of tyrosine-protein kinases, improving accuracy in downstream docking <cit.>. In some cases, MD simulations are used as training data for AI-driven methods. The Boltzmann generator <cit.> combines MD simulations with normalizing flows to generate Boltzmann-weighted molecular conformations, potentially applicable to proteins. AlphaFlow <cit.> integrates AlphaFold with flow matching <cit.> to produce diverse protein conformations, training on MD simulations from the ATLAS dataset <cit.> to enhance diversity and capture fast mode dynamics. However, it struggles with slow, emergent properties, limiting its ability to model long-term protein behavior. Extending training with longer MD simulations could address this but raises questions about the added value of Generative AI. Two recent approaches, Distributional Graphormer (DiG) <cit.> and NeuralPLexer <cit.>, combine Generative AI with MD to predict conformational distributions. DiG uses a diffusion algorithm within a graphformer architecture to generate protein conformations from a primary sequence, with MD simulations as training data, while NeuralPLexer employs a diffusion-based model to predict state-specific protein-ligand complex structures, capturing dynamic conformational changes by integrating biophysical constraints. As demonstrated in these examples, Generative AI and its integration with MD simulations have proven effective in accurately predicting protein structures and exploring protein conformations, bringing out the best of both AI and MD. We expect this will remain an area of active research with the continued development of more robust and accurate methods. That said, we urge caution in training Generative AI on synthetic MD data. It's essential that these models respect physical laws, as deviations could lead to non-physical predictions. Like with DALL-E and ChatGPT, deepfakes could proliferate, leading to unreliable outcomes and unreal chemistry. §.§ RNA structure prediction RNAs are an emerging frontier in medicinal chemistry<cit.>, yet, despite the growing interest in them, experimentally solved RNA structures remain scarce<cit.>. As a result, computational methods have become indispensable for modeling RNA tertiary structures. Among all methods, physics-based computational methods – like minimum free energy secondary structure prediction algorithms<cit.> and energy-based simulations<cit.> – are especially prevalent. These methods model energetic and entropic contributions to RNA structure formation and are a starting point for successful generative approaches. An intuitive approach to integrating AI with physics-based methods is using a neural network to predict geometries that are later refined with an energy function <cit.>. While these methods have the potential to generalize well across diverse RNA sequences, their accuracy currently lags behind methods that make use of large databases<cit.> of RNA sequences. Future advancements in this space might include improved energy<cit.> or scoring functions<cit.> and conformational sampling algorithms <cit.>. Language models for RNA were developed following their widespread success in machine translation and text generation<cit.>. These models parameterize embeddings that can serve as inputs for other networks fine-tuned for specific tasks like structure or function prediction.
AF3's MSA module and Pairformer<cit.> learn embeddings from multiple sequence alignments, while foundation models like RNA-FM<cit.> or ATOM-1<cit.> learn from single sequences or chemical mapping data. The premier application of RNA language models to structure prediction is as a module within a larger multi-component AI system. Recent years have seen the emergence of large-scale pre-trained Generative AI models for RNA structure prediction<cit.>. These are neural networks composed of different “blocks” or “modules” that learn specific aspects of biomolecular structure. Together, the modules predict tertiary structures from sequence and template information. AF3<cit.> and RoseTTAFold-2NA<cit.> (RF2NA) are state-of-the-art in RNA tertiary structure prediction, though the accuracy of predicted RNA structures does not yet rival that of protein structures<cit.>. Still, both methods aim to predict one structure per sequence – a modeling paradigm increasingly challenged by advances in structural biology. The field of structural biology is undergoing a paradigm shift, moving away from the historically prevalent native-centric view of biomolecular structure<cit.>. A single structure may be insufficient to account for biomolecular function; the appropriate description is instead an ensemble – a set of representative Boltzmann-weighted conformers. Currently, the goal of most Generative AI methods for RNA structure prediction is to predict one native structure per sequence<cit.>. To fully embrace this paradigm shift, future generative methods should aim to predict Boltzmann-weighted structural ensembles. To complicate matters further, the ensemble is not static: it responds to environmental factors like temperature<cit.>, ions<cit.>, or small molecules<cit.>. While AF3 can model the environment explicitly to some extent, it is perhaps more feasible to model the environment implicitly. Thermodynamic Maps<cit.> (TMs) follow a generative framework that infers the structural ensemble's dependence on environmental conditions. TMs are diffusion models where thermodynamic parameters (e.g. temperature) are implicitly represented within the Langevin dynamics of the diffusion process. The central idea is to map molecular conformations onto those of a simple, idealized system where the effect of the environment is straightforward to account for. Though still in their early stages, TMs have demonstrated the ability to predict the temperature dependence of RNA structural ensembles. § DESIRABLES FROM GENERATIVE AI FOR CHEMISTRY We believe the ultimate predictive power of any tool—whether theory, molecular simulations, or Generative AI—lies in starting from chemical identity and accurately predicting function while rigorously accounting for environmental conditions. However, function is not an inherent property of a given sequence or a chemical formula; rather, it is an emergent property that arises from the dynamic interactions and feedback loops across multiple scales. Achieving this requires navigating through increasing complexity—structure, thermodynamic ensemble, and environment (Fig. <ref>)—where modeling challenges intensify due to intricate fluctuations governed by equilibrium and non-equilibrium statistical mechanics. While current Generative AI methods like AlphaFold2 predict the most stable structure given chemical identity, much more remains to be desired. This section outlines key attributes for advancing Generative AI in this direction.
* Chemistry and AI, Not AI for Chemistry: It is indubitable that AI's ability to handle large, complex datasets and uncover hidden structures makes it especially valuable in computational chemistry. At the same time, integrating chemistry, particularly statistical mechanics, quantum mechanics, and thermodynamics, with AI creates a powerful synergy. These fields provide essential priors, frameworks for hypothesis testing, and tools for extrapolation that enhance Generative AI's effectiveness in chemistry. * Interpretability and Reliability Testing: The interpretability and reliability of AI models are paramount in chemistry. Current internal confidence measures, such as AlphaFold’s pLDDT scores, have shown limitations in providing reliable assessments of model predictions<cit.>. A focus on developing more robust interpretability frameworks and reliability testing is necessary to ensure that AI models not only generate accurate predictions but also provide meaningful insights into their confidence levels and potential errors. * Out-of-Distribution Generalization and Efficacy in Data-Sparse Regimes: AI excels at smart interpolation within well-covered data regions, but the real challenge is generalizing to out-of-distribution data to reduce hallucinations and spurious predictions. In chemistry, where data is often sparse, Generative AI must be effective with limited data, transferring learned knowledge across diverse chemical types to ensure models remain robust, versatile, and reliable in scientific discovery. * Rethinking Data - More data is not always better data: In chemistry, the typical scaling laws seen in large language models—where more data improves test loss—may not hold true. More data doesn't always enhance performance and can sometimes obscure meaningful insights. For instance, in an MD trajectory trapped in metastable states, adding more data might amplify noise rather than provide useful information about rare transitions. This underscores the need to rethink data handling in AI for chemistry, ensuring that additional data is genuinely useful and doesn't obscure key physical insights. Moreover, traditional AI methods of splitting data into training, testing, and validation sets can be problematic due to data leakage and challenges in quantifying overlap. * Emergent Phenomena and correct coupling to environmental variables: The power and thrill of MD and computational chemistry lie in predicting new chemistry and physics that emerge naturally when simulating a large system for a long time—phenomena that couldn't be guessed from a simple force field or Hamiltonian. Emergent behavior, sensitive to control parameters and environmental variables, often arises in such simulations, closely aligning with theoretical predictions. However, Generative AI often struggles to produce novel physics or chemistry beyond its training data, with emergent phenomena sometimes being artifacts. Most AI models also fail to account for environmental conditions, limiting their predictive power in new scenarios. Hybrid approaches like AF2RAVE <cit.> and Thermodynamic Maps <cit.> show promise in this context by integrating AI with physical principles. § CRITICAL ASSESSMENT AND OUTLOOK Generative AI has made impressive strides in computational chemistry, particularly in force field development, structure prediction, and accelerated molecular simulations, showing its potential to tackle complex chemical challenges. 
However, significant obstacles remain before AI can fully integrate into the molecular simulation toolbox. The ultimate goal of any simulation method or theory is to reliably predict chemical function directly from chemical identity—a dream yet unrealized. We argue that the same aspirations should be applied to Generative AI for physical sciences. By integrating chemistry, particularly statistical mechanics, into AI models, considering the roles of structural and dynamical ensembles with precise fluctuations, and accounting for environmental influences, this goal may be achievable. While chemistry has much to gain from Generative AI, it also has much to teach it. Grounding AI in chemistry's principles can create more accurate, adaptable, and interpretable models. This integration could transform AI into a powerful tool for predicting novel emergent phenomena, driving discoveries, and deepening our understanding of chemical processes. This work was supported by NIH/NIGMS under award number R35GM142719. We thank UMD HPC’s Zaratan and NSF ACCESS (project CHE180027P) for computational resources. P.T. is an investigator at the University of Maryland-Institute for Health Computing, which is supported by funding from Montgomery County, Maryland and The University of Maryland Strategic Partnership: MPowering the State, a formal collaboration between the University of Maryland, College Park and the University of Maryland, Baltimore. 100 anstine2023generative DM Anstine, O Isayev, Generative models as an emerging paradigm in the chemical sciences. Journal of the American Chemical Society 145, 8736–8750 (2023). du2024machine Y Du, et al., Machine learning-aided generative molecular design. Nature Machine Intelligence pp. 1–16 (2024). rotskoff2024sampling GM Rotskoff, Sampling thermodynamic ensembles of molecular systems with generative neural networks: Will integrating physics-based models close the generalization gap? Current Opinion in Solid State and Materials Science 30, 101158 (2024). mehdi2024enhanced S Mehdi, Z Smith, L Herron, Z Zou, P Tiwary, Enhanced sampling with machine learning. Annual Review of Physical Chemistry 75 (2024). anderson1972more PW Anderson, More is different: Broken symmetry and the nature of the hierarchical structure of science. Science 177, 393–396 (1972). schaeffer2024emergent R Schaeffer, B Miranda, S Koyejo, Are emergent abilities of large language models a mirage? Advances in Neural Information Processing Systems 36 (2024). biroli2023generative G Biroli, M Mezard, Generative diffusion in very large dimensions. Journal of Statistical Mechanics: Theory and Experiment 2023, 093402 (2023). frenkel2023understanding D Frenkel, B Smit, Understanding molecular simulation: from algorithms to applications. (Elsevier), (2023). white2021deep AD White, Deep learning for molecules and materials. Living Journal of Computational Molecular Science 3, 1499 (2021). nerenberg2018new PS Nerenberg, T Head-Gordon, New developments in force fields for biomolecular simulations. Current opinion in structural biology 49, 129–138 (2018). best2005reaction RB Best, G Hummer, Reaction coordinates and rates from transition paths. Proceedings of the National Academy of Sciences 102, 6732–6737 (2005). heid2023characterizing E Heid, CJ McGill, FH Vermeire, WH Green, Characterizing uncertainty in machine learning for chemistry. Journal of Chemical Information and Modeling 63, 4012–4029 (2023). 
tomczak2018vae J Tomczak, M Welling, Vae with a vampprior in International conference on artificial intelligence and statistics. (PMLR), pp. 1214–1223 (2018). higgins2017beta I Higgins, et al., beta-vae: Learning basic visual concepts with a constrained variational framework. ICLR (Poster) 3 (2017). wang2024latent D Wang, Y Wang, L Evans, P Tiwary, From latent dynamics to meaningful representations. Journal of Chemical Theory and Computation 20, 3503–3513 (2024). beyerle2024thermodynamically ER Beyerle, P Tiwary, Thermodynamically optimized machine-learned reaction coordinates for hydrophobic ligand dissociation. The Journal of Physical Chemistry B 128, 755–767 (2024). wang2021state D Wang, P Tiwary, State predictive information bottleneck. The Journal of Chemical Physics 154 (2021). beyerle2022quantifying ER Beyerle, S Mehdi, P Tiwary, Quantifying energetic and entropic pathways in molecular systems. The Journal of Physical Chemistry B 126, 3950–3960 (2022). zhao2023quantifying R Zhao, Z Zou, JD Weeks, P Tiwary, Quantifying the relevance of long-range forces for crystal nucleation in water. Journal of Chemical Theory and Computation 19, 9093–9101 (2023). Goodfellow2014Generative I Goodfellow, et al., Generative adversarial nets in Advances in Neural Information Processing Systems, eds. Z Ghahramani, M Welling, C Cortes, N Lawrence, K Weinberger. (Curran Associates, Inc.), Vol. 27, (2014). Mirza2014ConditionalGA M Mirza, S Osindero, Conditional generative adversarial nets. ArXiv abs/1411.1784 (2014). Arjovsky2017Wasserstein M Arjovsky, S Chintala, L Bottou, Wasserstein generative adversarial networks in Proceedings of the 34th International Conference on Machine Learning, Proceedings of Machine Learning Research, eds. D Precup, YW Teh. (PMLR), Vol. 70, pp. 214–223 (2017). You2018Graph J You, B Liu, Z Ying, V Pande, J Leskovec, Graph convolutional policy network for goal-directed molecular graph generation in Advances in Neural Information Processing Systems, eds. S Bengio, et al. (Curran Associates, Inc.), Vol. 31, (2018). Janson2023Direct G Janson, G Valdes-Garcia, L Heo, M Feig, Direct generation of protein conformational ensembles via machine learning. Nature Communications 14, 774 (2023). Sidky2020Molecular H Sidky, W Chen, AL Ferguson, Molecular latent space simulators. Chem. Sci. 11, 9459–9467 (2020). VAMPnets2018Andreas A Mardt, L Pasquali, H Wu, F Noe, Vampnets for deep learning of molecular kinetics. Nature Communications 9, 5 (2018). Gui2023Review J Gui, Z Sun, Y Wen, D Tao, J Ye, A review on generative adversarial networks: Algorithms, theory, and applications. IEEE Transactions on Knowledge and Data Engineering 35, 3313–3332 (2023). Hoang2020Catastrophic H Thanh-Tung, T Tran, Catastrophic forgetting and mode collapse in gans. 2020 International Joint Conference on Neural Networks (IJCNN) pp. 1–10 (2020). dhariwal2021diffusion P Dhariwal, AQ Nichol, Diffusion models beat GANs on image synthesis in Advances in Neural Information Processing Systems, eds. A Beygelzimer, Y Dauphin, P Liang, JW Vaughan. (2021). blaschke2020reinvent T Blaschke, et al., Reinvent 2.0: an ai tool for de novo drug design. Journal of chemical information and modeling 60, 5918–5922 (2020). zhavoronkov2019deep A Zhavoronkov, et al., Deep learning enables rapid identification of potent ddr1 kinase inhibitors. Nature biotechnology 37, 1038–1040 (2019). zhang2021deep J Zhang, et al., Deep reinforcement learning of transition states. Physical Chemistry Chemical Physics 23, 6888–6895 (2021). 
das2021reinforcement A Das, DC Rose, JP Garrahan, DT Limmer, Reinforcement learning of rare diffusive dynamics. The Journal of Chemical Physics 155 (2021). kleiman2022multiagent DE Kleiman, D Shukla, Multiagent reinforcement learning-based adaptive sampling for conformational dynamics of proteins. Journal of Chemical Theory and Computation 18, 5422–5434 (2022). ziebart2008maximum BD Ziebart, AL Maas, JA Bagnell, AK Dey, , et al., Maximum entropy inverse reinforcement learning. in Aaai. (Chicago, IL, USA), Vol. 8, pp. 1433–1438 (2008). berrueta2024maximum TA Berrueta, A Pinosky, TD Murphey, Maximum diffusion reinforcement learning. Nature Machine Intelligence pp. 1–11 (2024). ghosh2020maximum K Ghosh, PD Dixit, L Agozzino, KA Dill, The maximum caliber variational principle for nonequilibria. Annual review of physical chemistry 71, 213–238 (2020). bengio2023gflownet Y Bengio, et al., Gflownet foundations. The Journal of Machine Learning Research 24, 10006–10060 (2023). sohl2015deep J Sohl-Dickstein, E Weiss, N Maheswaranathan, S Ganguli, Deep unsupervised learning using nonequilibrium thermodynamics in International conference on machine learning. (PMLR), pp. 2256–2265 (2015). SimulatedTempering E Marinari, G Parisi, Simulated tempering: A new monte carlo scheme. Europhysics Letters (EPL) 19, 451?458 (1992). AnnealedImportanceSampling RM Neal, Annealed importace sampling. Statistics and Computing 11, 125?139 (2001). RealNVP L Dinh, J Sohl-Dickstein, S Bengio, Density estimation using real nvp (2016). neural-splines C Durkan, A Bekasov, I Murray, G Papamakarios, Neural spline flows. Advances in neural information processing systems 32 (2019). stochastic-normalizing-flows H Wu, J Kohler, F Noe, Stochastic normalizing flows. Advances in Neural Information Processing Systems 33, 5933–5944 (2020). ddpm J Ho, A Jain, P Abbeel, Denoising diffusion probabilistic models. Advances in neural information processing systems 33, 6840–6851 (2020). sgm-sde Y Song, et al., Score-based generative modeling through stochastic differential equations. arXiv preprint arXiv:2011.13456 (2020). Anderson1982 BD Anderson, Reverse-time diffusion equation models. Stochastic Processes and their Applications 12, 313?326 (1982). Jin2022 J Jin, AJ Pak, AEP Durumeric, TD Loose, GA Voth, Bottom-up coarse-graining: Principles and perspectives. Journal of Chemical Theory and Computation 18, 5759?5791 (2022). dynamical-regimes-mezard G Biroli, T Bonnaire, V deBortoli, M Mezard, Dynamical regimes of diffusion models (2024). flow-matching Y Lipman, RT Chen, H Ben-Hamu, M Nickel, M Le, Flow matching for generative modeling. arXiv preprint arXiv:2210.02747 (2022). CFM Y Lipman, RTQ Chen, H Ben-Hamu, M Nickel, M Le, Flow matching for generative modeling (2022). stochastic-interpolants MS Albergo, NM Boffi, E Vanden-Eijnden, Stochastic interpolants: A unifying framework for flows and diffusions. arXiv preprint arXiv:2303.08797 (2023). hochreiter1997long S Hochreiter, J Schmidhuber, Long short-term memory. Neural computation 9, 1735–1780 (1997). vaswani2017attention A Vaswani, Attention is all you need. arXiv preprint arXiv:1706.03762 (2017). bender2021dangers EM Bender, T Gebru, A McMillan-Major, S Shmitchell, On the dangers of stochastic parrots: Can language models be too big? in Proceedings of the 2021 ACM conference on fairness, accountability, and transparency. pp. 610–623 (2021). tsai2022path ST Tsai, E Fields, Y Xu, EJ Kuo, P Tiwary, Path sampling of recurrent neural networks by incorporating known physics. 
http://arxiv.org/abs/2409.03742v1
20240905175509
Convex decomposition spaces and Crapo complementation formula
[ "Imma Gálvez-Carrillo", "Joachim Kock", "Andrew Tonks" ]
math.CT
[ "math.CT", "math.CO", "05A19, 18N50" ]
Convex decomposition spaces and Crapo complementation formula
Imma Gálvez-Carrillo, Joachim Kock, and Andrew Tonks
===============================================================
§ ABSTRACT
We establish a Crapo complementation formula for the Möbius function μ^X in a general decomposition space X in terms of a convex subspace K and its complement: μ^X ≃ μ^X∖K + μ^X * ζ^K * μ^X. We work at the objective level, meaning that the formula is an explicit homotopy equivalence of ∞-groupoids. Almost all arguments are formulated in terms of (homotopy) pullbacks. Under suitable finiteness conditions on X, one can take homotopy cardinality to obtain a formula in the incidence algebra at the level of ℚ-algebras. When X is the nerve of a locally finite poset, this recovers the Björner–Walker formula, which in turn specialises to the original Crapo complementation formula when the poset is a finite lattice. A substantial part of the work is to introduce and develop the notion of convexity for decomposition spaces, which in turn requires some general preparation in decomposition-space theory, notably some results on reduced covers and ikeo and semi-ikeo maps. These results may be of wider interest. Once this is set up, the objective proof of the Crapo formula is quite similar to that of Björner–Walker.
This work has received funding from the European Union's Horizon 2020 research and innovation programme under Marie Skłodowska-Curie grant agreement No. 101028099 and a Spanish university requalification and mobility grant (UP2021-034, UNI/551/2021) with NextGenerationEU funds. More indirectly it was supported by research grants PID2019-103849GB-I00, PID2020-116481GB-I00, and PID2020-117971GB-C22 (AEI/FEDER, UE) of Spain, grants 2021-SGR-0603 and 2021-SGR-1015 of Catalonia, and was also supported through the Severo Ochoa and María de Maeztu Program for Centers and Units of Excellence in R&D grant number CEX2020-001084-M.
§ INTRODUCTION
The theory of incidence algebras and Möbius inversion for locally finite posets was developed by Rota <cit.> (see also Joni–Rota <cit.>). Leroux <cit.>, <cit.> showed how the theory can be generalised from locally finite posets to certain locally finite categories called Möbius categories. However, beyond the basic constructions, the theory did not develop much for some decades. An important development, independent of Leroux theory, was the simplicial viewpoint taken by Dür <cit.>. Next, an important step was the objective viewpoint of Lawvere and Menni <cit.>, upgrading algebraic identities to bijections of sets and equivalences of groupoids. The present authors <cit.>, <cit.> (see also <cit.>) introduced the notion of decomposition space (the same thing as the 2-Segal spaces of Dyckerhoff and Kapranov <cit.>) as a general framework for incidence algebras and Möbius inversion. Following the direction set out by Leroux <cit.>, <cit.>, the theory is categorical, and in fact ∞-categorical.
A benefit of this homotopical viewpoint is that symmetries are built in, which is useful even in classical combinatorial situations that do not have any ∞-category appearance. Following the direction of Dür <cit.> the theory is simplicial and covers a class of simplicial ∞-groupoids which are not Segal spaces. This allows many combinatorial co-, bi- and Hopf algebras to be realised as incidence coalgebras of decomposition spaces while they are not incidence coalgebras of posets or categories. Finally following the direction set out by Lawvere and Menni <cit.>, the theory is objective (with the link to ordinary algebra over given by homotopy cardinality). In particular, remarkably many arguments can be formulated in terms of (homotopy) pullbacks. One benefit of the objective approach is that many formulae can be established without imposing finiteness conditions: they are still valid homotopy equivalences of ∞-groupoids. The finiteness conditions are required only to be able to take cardinality. With the new toolbox at hand it is now an overall programme to upgrade the classical theory from posets to decomposition spaces, and investigate new applications. Beyond the general theory, an important extension of Rota's original contribution was the formula of Carlier <cit.> for the relationship between the Möbius function of two decomposition spaces related by an ∞-adjunction, which generalises Rota's formula for a Galois correspondence of posets. In the present paper we give a generalisation to the decomposition-space setting of another classical formula, namely Crapo's complementation formula, originally formulated in the setting of lattices <cit.> but generalised to arbitrary posets by Björner and Walker <cit.>. To do so, we first have to develop some general theory on convex subspaces of a decomposition space, and some general results about ikeo and semi-ikeo maps. Functoriality is an important aspect of the objective approach to incidence algebras. Culf maps between decomposition spaces induce algebra homomorphisms contravariantly on incidence algebras, whereas ikeo maps induce algebra homomorphisms covariantly on incidence algebras. Culf maps have been exploited a lot already both in the original series of papers <cit.> and in later works (see notably <cit.>). Ikeo maps have not yet received the same attention, and our first task is to develop some basic theory about them needed for the Crapo formula. While the culf condition interacts very nicely with the original characterisation of decomposition spaces in terms of active-inert pullbacks, the ikeo condition interacts better with an alternative characterisation of decomposition spaces in terms of pullbacks with inert covers (to be made precise below), so we take the opportunity to develop that viewpoint (cf. Theorem <ref>). A subtle issue is the preservation of units for the convolution product in the incidence algebras. While for the decomposition-space axioms unitality has turned out to be automatic <cit.> (the incidence algebra of a simplicial set is automatically unital if just it is associative), and while the contravariant functoriality in simplicial maps preserves units automatically if it preserves the convolution product, the same is not true for the covariant functoriality: there are simplicial maps that are not quite ikeo, which preserve the convolution product without preserving the unit. Reluctantly we call them semi-ikeo. We show that full inclusions are such maps. 
We show that if a simplicial space is semi-ikeo over a decomposition space then it is itself a decomposition space (Lemma <ref>). A full inclusion of simplicial spaces is called convex when it is furthermore culf. A convex subspace of a decomposition space is thus again a decomposition space, and its complement is a decomposition space too (although of course not generally convex). With these preparations we are ready to state and prove the Crapo complementation formula for decomposition spaces: for an arbitrary decomposition space X and a convex subspace K, we have the following formula (Theorem <ref>) relating the Möbius function of X with that of K and its complement: μ^X = μ^X K + μ^X *ζ^K *μ^X . The statement here involves formal differences, since each Möbius function is an alternating sum, but after moving all negative terms to the other side of the equation, the formula is established as an explicit homotopy equivalence of ∞-groupoids. The formula determines μ^X from μ^X K and ζ^K by a well-founded recursion expressed by the convolution product. § DECOMPOSITION SPACES The main contribution of this section is the characterisation of decomposition spaces in terms of squares of reduced covers against active injections (Conditions (3) and (4) in Theorem <ref> below). This condition plays well together with semi-ikeo maps, as we shall see in Section <ref>. Active and inert maps. The simplex category (whose objects are the nonempty finite ordinals [n] and whose morphisms are the monotone maps) has an active-inert factorisation system. An arrow in is active, written a: [m] [n], when it preserves end-points, a(0)=0 and a(m)=n; it is inert, written a: [m] [n], if it is distance preserving, a(i+1)=a(i)+1 for 0≤ i≤ m-1. The active maps are generated by the codegeneracy maps s^i : [n+1] [n] and by the inner coface maps d^i : [n-1] [n], 0 < i < n, while the inert maps are generated by the outer coface maps d^ := d^0 and d^⊤:= d^n. Every morphism in factors uniquely as an active map followed by an inert map. Furthermore, it is a basic fact <cit.> that active and inert maps in admit pushouts along each other, and the resulting maps are again active and inert. Decomposition spaces <cit.>. A simplicial space X: → is called a decomposition space when it takes active-inert pushouts to pullbacks. It has turned out <cit.> that the degeneracy maps are not required among the active maps to state the condition, so to check the decomposition-space axioms, it is enough to check the following squares for all 0<i<n: [column sep=4.5em,between origins, row sep=3.5em,between origins] X_1+n[d, -act, "d_1+i"'] [r, inertto, "d_"] X_n [d, -act, "d_i"] X_n [r, inertto, "d_"'] X_n-1 [column sep=4.5em,between origins, row sep=3.5em,between origins] X_n+1[d, -act, "d_i"'] [r, inertto, "d_⊤"] X_n [d, -act, "d_i"] X_n [r, inertto, "d_⊤"'] X_n-1 As is custom, we use the words (and symbols) `active' and `inert' also for their images in under a functor X: →. Since the decomposition-space axiom is formulated in terms of pullbacks — as are the notions of culf, ikeo, semi-ikeo, fully faithful, convex, and convolution product featured in this work — the following simple lemma becomes an indispensable tool (used a dozen times in this paper): In a prism diagram [cramped] ·[r] [d] ·[r] [d] ·[d] ·[r] ·[r] · the left-hand square is a pullback if and only if the whole rectangle is a pullback. 
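As a concrete illustration of the active-inert factorisation system on the simplex category described above (an editorial sketch, not part of the original text), the following Python fragment encodes a monotone map [m] → [n] as the list of its values and computes its unique active-inert factorisation; the function names are ours.

# A monotone map f: [m] -> [n] is stored as the list [f(0), ..., f(m)].
# Its active-inert factorisation [m] --act--> [k] >--inert--> [n] takes [k]
# to be the interval from f(0) to f(m) inside [n]: the active part is f with
# its values shifted to start at 0 (so endpoints are preserved), and the
# inert part is the distance-preserving inclusion of that interval.

def is_monotone(f):
    return all(f[i] <= f[i + 1] for i in range(len(f) - 1))

def active_inert_factorisation(f, n):
    """Return (active, inert) with f = inert o active."""
    assert is_monotone(f) and 0 <= f[0] and f[-1] <= n
    lo, hi = f[0], f[-1]
    active = [v - lo for v in f]        # endpoint-preserving map [m] -> [hi - lo]
    inert = list(range(lo, hi + 1))     # distance-preserving map [hi - lo] -> [n]
    return active, inert

# Example: the map [2] -> [4] with values (1, 1, 3) factors as the active map
# (0, 0, 2) on [2] -> [2], followed by the inert inclusion of {1, 2, 3} into [4].
print(active_inert_factorisation([1, 1, 3], 4))   # ([0, 0, 2], [1, 2, 3])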
§.§ Decomposition spaces from the inert viewpoint We work towards an alternative characterisation of decomposition spaces, but first we need to set up some terminology. For each [k]∈ there are k inert maps ρ_i : [1] [k] i = 1,…,k , namely picking out the principal edge (i-1,i). For k=0 there are zero such maps. Special reduced-cover squares. For an active map α: [k] [n], write [n_i] for the ordinal [α(i)-α(i-1)] appearing in the active-inert factorisation of α∘ρ_i: [1][r, inertto, "ρ_i"] [d, dotted, -act, "α_i"'] [k][d, -act, "α"] [n_i][r, inertto, dotted, "γ^α_i"'] [n] . If k>0, the maps γ^α_i together constitute a cover of [n], meaning that they are jointly surjective. A cover is called reduced if no edges are hit twice (for (γ^α_i) this is clear) and if there are no copies of [0] involved (which is the case when α is injective). The notions of cover and reduced cover in the inert part of were first studied by Berger <cit.>, including the important characterisation of categories: a simplicial set is a category if and only if it is a sheaf for this notion of cover. The maps α_i : [1] [n_i] together constitute the unique join decomposition of α into active maps with domain [1]: we have α = α_1 ∨⋯∨α_k . The k-tuple of maps γ^α_i (and the k-tuple of squares) thus define for any simplicial space X a diagram SRCS[column sep=10em,between origins] X_1 ×⋯× X_1 [l, "(ρ_1,…,ρ_k)"'] X_k X_n_1×⋯× X_n_k[u, -act, "(α_1×⋯×α_k)"] X_n . [u, -act, "α = (α_1∨⋯∨α_k)"'] [l, "(γ^α_1,…,γ^α_k)"] We refer to these squares as special reduced-cover squares. Note that the vertical maps are active, or products of active maps, while the components of the horizontal maps are inert. Here and in the text below we use notation such as (α_1×⋯×α_k) and (ρ_1,…,ρ_k) for α_1×⋯×α_k and (ρ_1,…,ρ_k) respectively. General reduced-cover squares. More generally, instead of starting with the reduced cover of [k] consisting of the k maps ρ_i: [1] [k], we can start with an arbitrary reduced cover of [k], namely m inert maps τ_i : [k_i] [k] with ∑_i k_i = k and such that they are jointly surjective and k_i ≠ 0. With this data, just as before, we write [n_i] for the ordinal appearing in the active-inert factorisation of α∘τ_i: [k_i][r, inertto, "τ_i"] [d, dotted, -act, "α_i"'] [k][d, -act, "α"] [n_i][r, inertto, dotted, "γ^α,τ_i"'] [n] . Again, if k>0, the maps γ^α,τ_i: [n_i] [n] together constitute a cover of [n], which is reduced if α is injective. Note also that we have α = α_1 ∨⋯∨α_m. For convenience we assume the cover is in the canonical order, that is, τ_i(0)<τ_i+1(0) for 1≤ i≤ m-1, and write β:[m][k] for the active map with β(i)=τ_i+1(0). The squares together define for any simplicial space X a diagram GRCS[column sep=10em,between origins] X_k_1×⋯× X_k_m [l, "(τ_1,…,τ_m)"'] X_k X_n_1×⋯× X_n_m[u, -act, "(α_1 ×⋯×α_m)"] X_n . [u, -act, "α = (α_1∨⋯∨α_k)"'] [l, "(γ^α,τ_1,…,γ^α,τ_m)"] We refer to these squares as general reduced-cover squares. Note again that the vertical maps are active and the components of the horizontal maps are inert. For any simplicial space X, the following are equivalent. * Active-inert squares are pullbacks (i.e. X is a decomposition space). * Squares formed by inert maps and active injections are pullbacks. * For every active injection α: [k] [n] with k≠ 0, the special reduced-cover square (<ref>) is a pullback. * For every reduced cover ( τ_i : [k_i] [k]) _1≤ i ≤ m and every every active injection α: [k] [n] with k≠ 0, the general reduced-cover square (<ref>) is a pullback. 
The equivalence of (1) and (2) is the content of the theorem of Feller et al. <cit.> (that is, the statement that every 2-Segal space is unital). The special reduced-cover squares (<ref>) are special cases of the general reduced-cover squares (<ref>), so it is clear that (4) implies (3). Conversely, (3) implies (4) by an easy prism-lemma argument. Write an arbitrary general reduced-cover square as the bottom square: [column sep=10em,between origins] X_1 ×⋯× X_1 [l, "(ρ_1,…,ρ_m)"'] X_m X_k_1×⋯× X_k_m[u, -act] [l, "(τ_1,…,τ_m)"'] X_k [u, -act, "β"'] X_n_1×⋯× X_n_m[u, -act, "(α_1 ×⋯×α_m)"] X_n [u, -act, "α"'] [l, "(γ^α,τ_1,…,γ^α,τ_m)"] and complete it by pasting a special reduced-cover square on top of it. Assuming Condition (3), both the upper square and the whole rectangle are pullbacks, so by the prism lemma also the lower square is a pullback, which means that (4) holds. It is not difficult to show that (4) implies (2): we want to establish that the square X_n[d, -act, "d_i"'] [l, inertto, "d_⊤"'] X_n+1[d, -act, "d_i"] X_n-1 X_n [l, inertto, "d_⊤"] is a pullback (for 0 < i <n). (We should of course similarly deal with the analogous squares with bottom face maps.) Decompose the square as X_n[d, -act, "d_i"'] [l, "pr_1"'] X_n × X_1 [d, -act, "d_i×𝕀"'] [l, "(d_⊤,d_^n)"'] X_n+1[d, -act, "d_i"] X_n-1 [l, "pr_1"] X_n-1× X_1 [l, "(d_⊤,d_^n)"] X_n . Now the left-hand square is a pullback since it projects away an identity, and the right-hand square is a pullback since it is a general reduced-cover square as in Condition (4) The most interesting part is to show that (2) implies (4). So we assume that all the squares in Condition (2) are pullbacks, and aim to show that a general reduced-cover square (<ref>) is a pullback. For ease of exposition we describe explicitly the case where there are only m=2 charts in the cover τ. This means that the square has the form [column sep=9em,between origins, row sep=5em,between origins] X_k_1× X_k_2 [l, "(d_⊤^k_2,d_^k_1)"'] X_k X_n_1× X_n_2[u, -act, "(α_1×α_2)"] X_n . [u, -act, "α =(α_1 ∨α_2)"'] [l, "(d_⊤^n_2,d_^n_1)"] Such a square we can decompose into two (or m, in the general case) smaller squares vertically like the solid part of this diagram: [column sep=9em,between origins, row sep=2.4em,between origins] X_k_2 X_k_1× X_k_2[lu, dotted, "pr_2"'] [l, "(d_⊤^k_2,d_^k_1)"'] X_k_1+k_2 X_n_2[uu, -act, dotted, "α_2"] X_k_1× X_n_2[ld, dotted, "pr_1"] [lu, dotted, "pr_2"'] [uu, -act, "(𝕀×α_2)"'] X_k_1+n_2[uu, -act, "(𝕀∨α_2)"'] [l, "(d_⊤^n_2,d_^k_1)"] X_k_1 X_n_1× X_n_2[ld, dotted, "pr_1"] [uu, -act, "(α_1×𝕀)"'] X_n_1+n_2 . [uu, -act, "(α_1 ∨𝕀)"'] [l, "(d_⊤^n_2,d_^n_1)"] X_n_1[uu, -act, dotted, "α_1"] The dotted projection squares, which are pullbacks, serve to show that the two (respectively m) solid squares are pullbacks. Indeed, the horizontal rectangles are pullbacks because they are active-inert pullbacks (under Condition (2)), so by the prism lemma the solid squares are pullbacks, and therefore the vertical solid rectangle is a pullback, as we wanted to show. The equivalences involving Conditions (3) and (4) are new in this generality, as far as we know. A version of (1)⇔(4) but with all active maps instead of only active injections (hence a weaker statement) was given in <cit.> via a detour into the twisted arrow category of the category of active maps. The full strength of Condition (3) is important in the following, because it is the one that immediately interacts with the notion of semi-ikeo map, which we come to next. 
(In particular, Condition (3) is the key to Proposition <ref>.) §.§ Convolution and Möbius function A combinatorial coalgebra is generally the vector space spanned by the iso-classes of certain combinatorial objects (classically intervals in a given poset), and the comultiplication is given in terms of decomposition of those objects. Linear functionals on such a coalgebra, such as the zeta and Möbius functions, form the convolution algebra. Homotopy linear algebra <cit.> gives a rather systematic way of lifting such constructions to the objective level and transforming algebraic proofs into bijective ones. Instead of the vector space spanned by iso-classes of combinatorial objects, one considers the slice category over the groupoid (or ∞-groupoid) I of the combinatorial objects themselves, with linear functors between such slices. Linear functors are given by spans I ← M → J, and instead of algebraic identities one looks for homotopy equivalences between spans. The reason why this works so well is that the slice category over I is the homotopy-sum completion of I, just as a vector space is the linear-combination completion, and that linear functor means homotopy-sum preserving, just like linear map means linear-combination preserving. Furthermore the span representation of a linear functor corresponds to the matrix representation of a linear map. Thus the standard algebraic identities can be recovered from these homotopy equivalences by taking homotopy cardinality, under certain finiteness conditions. Specifically, all spans must be of finite type meaning that the left leg I ← M must have (homotopy) finite fibres. But it is usually the case that the homotopy equivalences can be established even without the finiteness conditions. Let us briefly see how this procedure looks in the case of interest, Möbius functions <cit.>. Recall that for any decomposition space X, the incidence coalgebra is the ∞-category _/X_1 equipped with the comultiplication Δ and counit ϵ given by the spans X_1 d_1⟵ X_2 (d_2,d_0)⟶ X_1 × X_1 X_1 s_0⟵ X_0 ⟶ 1 . The incidence algebra is the convolution algebra Lin(_/X_1, ) ≃^X_1. Its objects are linear functionals, that is, given by spans X_1 ← F → 1, with the standard convolution product * given by the pullback formula [column sep=6em,between origins, row sep=5em,between origins] X_1 X_2 [u, "d_1"] [d, "(d_2,d_0)"'] F*G [lu] [l] [d] [rd] X_1× X_1 F× G [l] [r] 1, and unit ϵ. The incidence algebra at the level of -vector spaces is obtained by taking homotopy cardinality, provided certain finiteness conditions hold, cf. Subsection <ref> below. The relevance of the `inert' characterisation of decomposition spaces is that it shows to what extent one can compose. Composition in the sense of arrows in a category is not possible, but the convolution product provides an alternative. In a Segal space, given a p-simplex whose last vertex coincides with the zeroth simplex of a q-simplex, one can compose to get an (p+q)-simplex. This is provided by the equivalence X_p ×_X_0 X_q ≃ X_p+q. This is not generally possible in a decomposition space, but it is possible in case the p-simplex and the q-simplex already `sit on a 2-simplex': if the long edges of the two simplices form the short edges of a 2-simplex, then the 2-simplex serves as a mould for the gluing. 
This is precisely what the convolution product allows, thanks to the decomposition-space axiom, which naturally appears in the `inert' form: to convolve the linear functionals X_1 ← X_p→ 1 and X_1 ← X_q→ 1 (where the left-hand maps send a simplex to its long edge), we follow the pullback formula above to get [column sep=6em,between origins, row sep=5em,between origins] X_1 X_2 [u, "d_1"] [d, "(d_2,d_0)"'] X_p+q[lu] [-act,l] [d] [rd] X_1× X_1 X_p× X_q [-act,l] [r] 1, That X_p+q appears as the pullback is precisely one of the basic instances of the decomposition space axiom, inert version (Theorem <ref>). Completeness. A decomposition space is called complete <cit.> when s_0 : X_0 → X_1 is mono. The complement is then denoted X_1, the space of nondegenerate edges, so as to be able to write X_1 = X_0 + X_1. Since in a decomposition space all degeneracy maps are pullbacks of this first s_0 (cf. <cit.>), it follows that they are all mono, and there is a well-defined space X_n ⊂ X_n of nondegenerate n-simplices. An n-simplex of a complete decomposition space is nondegenerate if and only if each of its principal edges is nondegenerate. Phi functors. For each n, we define Φ_n to be the linear functional given by the span X_1 ← X_n → 1 . The left-hand map sends an n-simplex to its long edge. The Φ-notation goes back to Leroux <cit.> (his éléments remarquables), and was preserved by Lawvere and Menni <cit.>. The convolution formula X_p * X_q = X_p+q from above restricts to nondegenerate simplices to give the following fundamental formula. For any complete decomposition space we have Φ_p * Φ_q = Φ_p+q . Möbius function. The importance of the Phi functors is that the Möbius function can be described as μ = - = ∑_n∈ (-1)^n Φ_n . More precisely, it is the linear functional _/X_1→ given by the span X_1 ←∑_n∈ (-1)^n Φ_n → 1 . The minus signs does not immediately make sense at the objective level, but the equation that the Möbius function is required to satisfy, μ * ζ = ϵ can be rewritten by spelling out in terms of Phi functors and then moving the negative terms to the other side of the equation. The resulting formula * ζ = ϵ + * ζ makes sense at the objective level, and it can be established as an explicit homotopy equivalence of ∞-groupoids <cit.>. § IKEO AND SEMI-IKEO MAPS A simplicial map f:Y → X defines a linear map on incidence algebras f : ^Y_1→^X_1 by sending a linear functional Y_1 ← F → 1 to the linear functional X_1 ← Y_1 ← F → 1. If f: Y → X is ikeo, then this linear map will preserve the convolution product and the unit ϵ so as to define an algebra map ^Y_1→^X_1. In the situation of this paper, f will not be ikeo, but it will still be semi-ikeo (cf. below). This condition is enough to ensure that f preserves the convolution product (although it will not preserve the algebra unit ϵ). §.§ Ikeo maps A simplicial map Y→ X is called ikeo when for every active map α: [k] [n] the square [column sep=10em,between origins] Y_n_1×⋯× Y_n_k[d] Y_n [d] [l, "(γ^α_1,…,γ^α_k)"'] X_n_1×⋯× X_n_k X_n [l, "(γ^α_1,…,γ^α_k)"] is a pullback. The following two more economical criteria are useful. For a general simplicial map Y → X, the ikeo condition is equivalent to demanding that for each n≥ 0 the square [column sep=10em,between origins] Y_1 ×⋯× Y_1 [d] Y_n [d] [l, "( ρ_1,…,ρ_n )"'] X_1 ×⋯× X_1 X_n [l, "( ρ_1,…,ρ_n )"] is a pullback. Since square (<ref>) is a special case of square (<ref>) where α is the identity map, it is clear that ikeo implies the condition of the lemma. 
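At the level of homotopy cardinality, and for the nerve of a finite poset, the Phi functors and the Möbius function above reduce to familiar chain counts. The following Python sketch (our illustration, with made-up helper names, not part of the paper) computes Φ_n(x,y) as the number of nondegenerate n-chains from x to y in the divisor poset of 12, forms μ = Σ (-1)^n Φ_n, and checks the defining equation μ * ζ = ϵ.

from itertools import combinations

# Poset: divisors of 12 under divisibility, listed in a linear extension of the order.
P = [1, 2, 3, 4, 6, 12]
leq = lambda a, b: b % a == 0

def conv(f, g):
    """Convolution product in the incidence algebra of (P, leq)."""
    return lambda x, y: sum(f(x, z) * g(z, y) for z in P if leq(x, z) and leq(z, y))

zeta = lambda x, y: 1 if leq(x, y) else 0
eps  = lambda x, y: 1 if x == y else 0

def phi(n):
    """phi(n)(x, y): number of strict chains x < z_1 < ... < z_{n-1} < y,
    i.e. nondegenerate n-simplices of the nerve with long edge (x, y)."""
    def count(x, y):
        if n == 0:
            return 1 if x == y else 0
        if x == y or not leq(x, y):
            return 0
        inner = [z for z in P if leq(x, z) and leq(z, y) and z not in (x, y)]
        # P is listed in a linear extension, so a subset is a chain iff
        # consecutive chosen elements are comparable.
        return sum(1 for c in combinations(inner, n - 1)
                   if all(leq(c[i], c[i + 1]) for i in range(len(c) - 1)))
    return count

def mu(x, y):
    """Moebius function as the alternating sum of the Phi's."""
    return sum((-1) ** n * phi(n)(x, y) for n in range(len(P) + 1))

assert all(conv(mu, zeta)(x, y) == eps(x, y) for x in P for y in P)
print(mu(1, 12))   # 0, since 12 is not squarefree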
Conversely suppose the condition of the lemma is satisfied, and consider a general square, as on the right in this diagram: [column sep=5.5em] (Y_1 ×⋯× Y_1) ×⋯× (Y_1 ×⋯× Y_1) [d] [l] Y_n_1×⋯× Y_n_k[d] Y_n [d] [l, "(γ^α_1,…,γ^α_k)"'] (X_1 ×⋯× X_1) ×⋯× (X_1 ×⋯× X_1) [l] X_n_1×⋯× X_n_k X_n . [l, "(γ^α_1,…,γ^α_k)"] The outer rectangle is the n-instance of square (<ref>), so it is a pullback. The left-hand square is the product of k squares, which are the n_i-instances of (<ref>), so it is a pullback too. Therefore the right-hand-square is a pullback, by the prism lemma. To check that a simplicial map Y → X is ikeo, it is enough to check it for active maps [0] [0] and [2] [n]. In other words, it is enough to check that the squares 1 [d] Y_0 [d] [l] 1 X_0 [l] [column sep=6em,between origins] Y_n_1× Y_n_2[d] Y_n [d] [l] X_n_1× X_n_2 X_n [l] are pullbacks for all n=n_1+n_2. Assuming the indicated pullback squares for k=0 and k=2, we need to consider the corresponding square for a general active map α: [k] [n]. For k=1 the square is a pullback since its horizontal maps are identities. For k≥ 2, the square can be decomposed as the pasting of squares [column sep=9em,between origins] Y_n_1×⋯× Y_n_k[d] xxx⋯xxx[l] [l] Y_n_1× Y_n_2+⋯+n_k[d] Y_n_1+⋯+n_k[d] [l] X_n_1×⋯× X_n_k xxx⋯xxx[l] X_n_1× X_n_2+⋯+n_k[l] X_n_1+⋯+n_k[l] Here the rightmost square is a (k=2)-instance, and the remaining squares to the left are products of (k=1)-instances with a (k=2)-instance. (The case k=0 is not covered by this argument, which is why it has to be listed separately in the lemma.) Note that the identity map [0] [0] gives the square 1 [d] Y_0 [d] [l] 1 X_0 [l] which is a pullback if and only if Y_0 → X_0 is an equivalence, that is, if the simplicial map is an `equivalence on objects'. Note also that the identity map [2] [2] gives the square Y_1× Y_1 [d] Y_2 [d] [l, "(d_2,d_0)"'] X_1 × X_1 X_2 . [l, "(d_2,d_0)"] These two squares are common to both the previous lemmas, and in fact we have: If X and Y are decomposition spaces, then to check that a simplicial map Y→ X is ikeo, it is enough to check the two squares (<ref>) and (<ref>). By Lemma <ref> it is enough to establish for each α:[2] [n] (the join of α_1:[1] [n_1] and α_2:[1] [n_2]) that the following back face is a pullback: [column sep=6em,between origins] Y_n_1× Y_n_2[d] Y_n [rrd, pos=0.2, "α"] [d] [l] X_n_1× X_n_2[rrd, pos=0.3, "(α_1×α_2)"'] X_n [rrd, pos=0.2, "α"] [l] Y_1 × Y_1 Y_2 [l] [d] X_1 × X_1 X_2 [l] [l] [from=1-1, to=2-3, crossing over, pos=0.3, "(α_1×α_2)"'] [from=2-3, to=3-3, crossing over] But this follows by a prism-lemma argument from the fact that the front face is a pullback by assumption. Indeed, the top and bottom faces are pullbacks since Y and X are decomposition spaces (by Condition (3) in Theorem <ref>). The word ikeo is an acronym standing for `inner Kan and equivalence on objects', but these two notions have a meaning individually, and it is actually a lemma that the notions match up. Recall that a simplicial map is called inner Kan (or relatively Segal) if for each n≥ 2 the square Y_1 ×_Y_0⋯×_Y_0 Y_1 [d] Y_n [d] [l] X_1 ×_X_0⋯×_X_0 X_1 X_n [l] is a pullback. Note that if both X and Y are Segal spaces, then every simplicial map Y → X is relatively Segal. A map is ikeo if and only if it is inner Kan and an equivalence on objects. The n=0 case of the ikeo condition says that the map is an equivalence on objects. 
We show that the n=2 instance of (<ref>) is a pullback if and only if the n=2 instance of (<ref>) is a pullback, and leave the rest to the reader. In the prism diagram Y_1 × Y_1 [d] Y_1 ×_Y_0 Y_1 [l] [d] Y_2 [d] [l] X_1 × X_1 X_1 ×_X_0 X_1 [l] X_2 [l] the left-hand square is a pullback because Y_0 → X_0 is mono, by Lemma <ref> below. By the prism lemma the right-hand square is a pullback if and only if the outer rectangle is a pullback. In fact the key argument in the proof gives more generally: Let Y → X be a simplicial map such that Y_0 → X_0 is mono, then Y_1 ×_Y_0 Y_1 [d] Y_2 [d] [l] X_1 ×_X_0 X_1 X_2 [l] ⇔ Y_1 × Y_1 [d] Y_2 [d] [l] X_1 × X_1 X_2 , [l] and similarly for all n≥ 2. §.§ Semi-ikeo maps The importance of ikeo maps is that they induce algebra homomorphisms at the level of incidence algebras. In our situation we will not have ikeo maps but only something weaker, where the convolution product is preserved but the convolution unit ϵ is not. Provisionally we call a simplicial map f: Y → X semi-ikeo when for every active injection α: [k] [n] between nonzero ordinals, the square (<ref>) is a pullback. Observe that there are semi-ikeo versions of Lemma <ref>, characterising semi-ikeo maps in terms of n≥ 1, of Lemma <ref>, referring only to active injections [2] [n], and of Lemma <ref>, saying that if both X and Y are already known to be decomposition spaces then the semi-ikeo condition can be checked on the single square Y_1× Y_1 [d] Y_2 [d] [l, "(d_2,d_0)"'] X_1 × X_1 X_2 . [l, "(d_2,d_0)"] Given a semi-ikeo simplicial map between simplicial spaces Y → X, if X is a decomposition space, then also Y is a decomposition space. By Theorem <ref>, it is enough to establish that the special reduced-cover square [column sep=10em,between origins] Y_1 ×⋯× Y_1 [l] Y_k Y_n_1×⋯× Y_n_k[u] Y_n [u, "α"'] [l, "(γ^α_1,…,γ^α_k)"] is a pullback for every active injection α : [k] [n] with k≠ 0. We have [column sep=9em,between origins] X_1 ×⋯× X_1 [l] X_k Y_1 ×⋯× Y_1 [u] [l] Y_k [u] Y_n_1×⋯× Y_n_k[u] Y_n [u, "α"'] [l, "(γ^α_1,…,γ^α_k)"] = [column sep=9em,between origins] X_1 ×⋯× X_1 [l] X_k X_n_1×⋯× X_n_k[u] [l, "(γ^α_1,…,γ^α_k)"] X_n [u, "α"'] Y_n_1×⋯× Y_n_k[u] Y_n . [u] [l, "(γ^α_1,…,γ^α_k)"] On the right, the top square is a pullback since X is a decomposition space, and the bottom square is a pullback since Y→ X is semi-ikeo. So the outer rectangle (either on the left or on the right) is a pullback. But on the left, the top square is a pullback since Y → X is semi-ikeo. So it follows from the prism lemma that also the bottom square is a pullback, which is what we needed to prove. Note also that Lemma <ref> actually establishes the following result. If Y → X is mono on objects, then semi-ikeo is equivalent to relatively Segal. Without the mono condition, it is not true that relatively Segal implies semi-ikeo. For example, any simplicial map between Segal spaces is relatively Segal. Now take a map from a Segal space Y to the terminal simplicial set, then the semi-ikeo condition says that Y_1 × Y_1 ← Y_2 is an equivalence, or equivalently Y_1× Y_1 ← Y_1 ×_Y_0 Y_1 is an equivalence. Of course this is not generally true (but is clearly true if Y_0=1). A morphism of posets f:Y → X is ikeo if and only if it is a bijection on objects. (It does not have to be an isomorphism: for example, Y could be the discrete poset of objects of Y.) To be semi-ikeo, it is enough to be injective on objects. 
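For nerves of finite posets, the difference between ikeo and semi-ikeo can be seen very concretely in the incidence algebras: pushing linear functionals forward along a full subposet inclusion preserves the convolution product, but it sends the unit ϵ of the subposet to a functional supported on the subposet, which is not the unit of the ambient incidence algebra unless the inclusion is bijective on objects. Here is a minimal numerical check of both points (an editorial sketch; the names and the choice of poset are ours).

# Ambient poset P: divisors of 12 under divisibility; K: the full subposet {2, 4, 12}.
# (Any full subposet works for this point; convexity is not needed.)
P = [1, 2, 3, 4, 6, 12]
K = [2, 4, 12]
leq = lambda a, b: b % a == 0

def conv(S):
    """Convolution in the incidence algebra of the poset S with the induced order."""
    return lambda f, g: (lambda x, y: sum(f(x, z) * g(z, y)
                                          for z in S if leq(x, z) and leq(z, y)))

def push(f):
    """Pushforward of a functional on K to a functional on P (extension by zero)."""
    return lambda x, y: f(x, y) if x in K and y in K else 0

zeta_K = lambda x, y: 1 if leq(x, y) else 0   # zeta function of K
eps_K  = lambda x, y: 1 if x == y else 0      # convolution unit of K
eps_P  = lambda x, y: 1 if x == y else 0      # convolution unit of P

conv_P, conv_K = conv(P), conv(K)

# The pushforward is multiplicative (checked here on zeta_K * zeta_K) ...
assert all(conv_P(push(zeta_K), push(zeta_K))(x, y) == push(conv_K(zeta_K, zeta_K))(x, y)
           for x in P for y in P)
# ... but it does not preserve the unit: push(eps_K) vanishes at (1, 1), for instance.
assert any(push(eps_K)(x, y) != eps_P(x, y) for x in P for y in P)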
§ FULL INCLUSIONS AND CONVEXITY §.§ A few standard facts about monomorphisms of spaces Recall that a map of spaces f:T → S is called a monomorphism (or just mono, for short) when it is (-1)-truncated. That is, its fibres are (-1)-truncated, meaning they are each either contractible or empty. We denote monomorphisms by . Alternatively, f is a mono when T [r, "="] [d, "="'] T [d, "f"] T [r, "f"'] S is a pullback. This last characterisation is just a reformulation of the standard fact that f:T → S is mono if and only if the diagonal map T → T ×_S T is an equivalence. This in turn is a special case of the general fact that a map f:T → S is n-truncated if and only if its diagonal map T → T×_S T is (n-1)-truncated (see Lurie <cit.>). A map of spaces f:T → S is mono if and only if the square T × T [d, "f× f"'] T [l, " diag"'] [d, "f"] S × S S [l, " diag"] is a pullback. The square is a pullback if and only if, for each s∈ S, the induced map on fibres (f× f)^-1(s,s) ← f^-1(s) is an equivalence. But this map is the diagonal of f^-1(s)→ 1, so it is an equivalence if and only if f^-1(s)→ 1 is mono, by Lemma <ref>. This condition for each s∈ S is the condition for f to be mono. The following easy lemma is standard; we state it since it is used several times. (We include the proof because it is pleasant.) In the situation Y [d] X [r] T [rd, into, "f"] S , when f is mono, then the canonical map X ×_T Y → X×_S Y is an equivalence. In the diagram P [d] [r] Y [d] [r, "="] Y [d] X [d, "="'] [r] T [d, "="'] [r, "="] T [d, into, "f"] X [r] T [r, into, "f"'] S we see that P is both the pullbacks. If a simplicial map Y → X is mono on objects, then the square Y_1 × Y_1 [d] Y_1 ×_Y_0 Y_1 [d] [l] X_1 × X_1 X_1 ×_X_0 X_1 [l] is a pullback. We can use Y_1 ×_X_0 Y_1 instead of Y_1 ×_Y_0 Y_1, by Lemma <ref>. Now write the prism Y_1 × Y_1 [d] Y_1 ×_X_0 Y_1 [d] [l] X_1 × X_1 [d] X_1 ×_X_0 X_1 [l] [d] X_0 × X_0 X_0 . [l, "diag"] Here both the bottom square and the outer rectangle are pullbacks, so it follows (by the prism lemma) that the top square is a pullback. §.§ Full inclusions A simplicial map Y → X is called fully faithful when for each n≥ 0 the square Y_0 ×⋯× Y_0 [d] Y_n [l] [d] X_0 ×⋯× X_0 X_n [l] is a pullback. The horizontal maps send an n-simplex to the (n+1)-tuple of vertices. Note that in the case where X and Y are Segal spaces, this condition is equivalent to the n=2 case, so for Segal spaces the definition agrees with the usual definition of fully faithful. A full inclusion of simplicial sets is by definition a fully faithful simplicial map which is furthermore a monomorphism in simplicial degree 0. Recall (from <cit.>) that a simplicial map is called conservative if it is cartesian on all degeneracy maps. (Note that for simplicial maps between decomposition spaces, this can be measured on the first degeneracy map s_0 : X_0 → X_1 alone.) A full inclusion is conservative. Let f: Y → X be a full inclusion. In the cube diagram (for 0 ≤ i < n) Y_n [ldd][d] Y_n-1[d][l, pos=0.7, "s_i"'] X_n [ldd] X_n-1[ldd] [l,pos=0.7, "s_i"'] Y_0 ⋯ Y_0 [d] Y_0 ⋯ Y_0 [d] [l,crossing over] X_0 ⋯ X_0 X_0 ⋯ X_0 [l] [from=1-3,to=3-2,crossing over] the sides are pullbacks since f is fully faithful. In the front square, there are n+1 factors on the left and n factors on the right, and the horizontal maps are given by a diagonal in position i. So this square is a pullback by Lemma <ref> since f is mono on objects. 
Therefore by the prism lemma, the back square is a pullback, and since this holds for all 0 ≤ i < n, this is precisely to say that f is conservative. If f:Y→ X is a full inclusion and if X is complete, then also Y is complete. This follows since clearly conservative over complete is complete. A full inclusion f:Y → X is relatively Segal (inner Kan), and so semi-ikeo. We do the n=2 case. We need to show that the square Y_1 ×_Y_0 Y_1 [d] Y_2 [d] [l] X_1 ×_X_0 X_1 X_2 [l] is a pullback. Consider the prism diagram Y_0 × Y_0 × Y_0 [d] [r, phantom, "=" description] [d] (Y_0 × Y_0) ×_Y_0 (Y_0 × Y_0) [l] Y_1 ×_Y_0 Y_1 [d] Y_2 [d] [l] X_0 × X_0 × X_0 [r, phantom, "=" description] (X_0 × X_0) ×_X_0 (X_0 × X_0) [l] X_1 ×_X_0 X_1 X_2 . [l] The middle square is a pullback since it is the fibre product over X_0 of two copies of the pullback square Y_0 × Y_0 [d] Y_1 [l] [d] X_0 × X_0 X_1 [l] expressing that f is fully faithful. Note that this is where we use that Y_0 → X_0 is mono, so that pullbacks over Y_0 can be computed over X_0 (cf. Lemma <ref>). The outer rectangle is a pullback since Y → X is fully faithful. The prism lemma now tells us that the right-hand square is a pullback. Full hull. Generally, for a subset T of points of X_0, let Y_0 denote the full subspace of X_0 spanned by T, to get a monomorphism Y_0 → X_0. Now consider all simplices of X that have vertices in Y_0. Formally define Y_n to be the pullback Y_0 ×⋯× Y_0 [d] Y_n[l] [d] X_0 ×⋯× X_0 X_n . [l] The Y_n assemble into a simplicial space, where the face and degeneracy maps are induced from those of X. §.§ Convexity A simplicial map Y → X is called convex if it is a full inclusion which is also culf. Recall that culf means cartesian on active maps. For decomposition spaces, this can be measured on the single square Y_1 [d] Y_2 [l, "d_1"'] [d] X_1 X_2 . [l, "d_1"] Non-example. The full inclusion of simplicial spaces Δ^{0,2}⊂Δ^2 is not convex, as it is not culf. Non-example. The inclusion of simplicial spaces {0,1}⊂Δ^1 is culf but not convex as it is not full. Convex hull. Let X be a decomposition space. Any collection of points (subset S⊂π_0 X_0) defines a unique convex hull Y⊂ X. To form it, first consider all simplices whose zeroth and last vertex belong to S, and add all their vertices to the collection. This gives us S. Now take the full hull of S. This defines a simplicial space Y, and we claim it is convex in X. It is thanks to the decomposition-space axiom that the convex-hull construction stabilises after one step: if we start with points x and z, and a new point y is introduced between them, then one could ask if there is a new simplex from x to y which will then introduce further points between x and y. This does not happen because these points would have been introduced already in the first step: indeed, if there is a simplex from x to y, and since x and y already form the short edge of a simplex in X, there is also a simplex obtained by gluing these two simplices. So anything between x and y will have been introduced already in the first step. Suppose K ⊂ X is convex. If σ∈ X_n is an n-simplex whose last vertex belongs to K, then there is a unique index 0≤ j ≤ n such that vertex j belongs to K, every face after j belongs to K and no face before j belongs to K. Denote by x_0,x_1,…,x_n the vertices of σ. Let j be minimal such that x_j belongs to K. Since both x_j and x_n belong to K, it follows from fullness that the 1-simplex x_j x_n belongs to K. 
But x_j x_n is the long edge of an (n-j)-simplex, and this whole (n-j)-simplex must belong to K since the inclusion is culf. By minimality of j, no earlier faces can belong to K. § CRAPO COMPLEMENTATION FORMULA Let X be a complete decomposition space, and let K ⊂ X be a convex subspace. In particular, the full inclusion map f: K → X is semi-ikeo (by Proposition <ref>), and therefore K is again a complete decomposition space. Observe that the complement X K is the full hull on the complement X_0 K_0. So the inclusion map g: X K → X is also semi-ikeo, so the complement X K is again a decomposition space, by Proposition <ref>, and it is complete by Corollary <ref>. With these arguments, we have everything prepared, and the symbols in the following all make sense. (Culfness of the inclusion K → X is not required for the statement, but it will be crucial for the proof.) Recall from <ref> that the Möbius function is defined as the formal difference μ = -, and that its defining equation μ*ζ = ϵ = ζ*μ should be interpreted as * ζ = ϵ + * ζ (for the left-hand equation), which is now an explicit homotopy equivalence (established in <cit.>). §.§ Symbolic version In this subsection we state the Crapo formula in symbolic form, meaning that we employ the symbol μ for the Möbius function. This is shorthand for something that is not exactly a linear functional but only a formal difference of linear functionals, and therefore it does not directly have an objective meaning. To interpret it, we should first expand each μ-symbol in terms of Φ-symbols, and then move all negative terms to the other side of the equation. Once this is done we have an equation that we can aspire to establish as an explicit homotopy equivalence. This expansion procedure is a bit cumbersome, but quite routine. Once we have the objective statement, involving various - and -functionals, we can further break it down to an equivalence involving only individual Φ_n-functionals. These we finally establish as explicit homotopy equivalences; the global ones are then obtained by summing over all n in a suitable way. We shall do all that in the next subsection, but it is enlightening first to see the symbolic proof. Here is the formula, symbolic version: For K ⊂ X convex we have μ^X = μ^X K + μ^X *ζ^K *μ^X . First of all, the equation takes place for linear functionals on X. When we write ζ^K, we mean f (ζ^K). Here it should be stressed that since f:K→ X is semi-ikeo, f preserves the convolution product *, but is not unital since f is not an equivalence on objects. One of the ingredients in the proof is deduced from a formula in K, so this is where we need that f preserves *. Intuitively, the formula says that the nondegenerate simplices in X are either nondegenerate simplices that do not meet K (the summand μ_X K) or they are nondegenerate simplices that meet K — that is the summand μ^X *ζ^K *μ^X, which is less obvious. We can derive the formula from four auxiliary propositions, which we list next. Each of these propositions will be proved (in Subsection <ref>) by expanding the μ symbols into Φ symbols, and then sorting by sign. The `nicknames' listed for these propositions serve to stress the correspondence between them and the lemmas of the next subsection: there will be, in each case, a lemma explaining the homotopy equivalence for a fixed Φ_n. First we have a proposition only about K (not about X): μ^K = μ^K * ζ^K * μ^K . All the following lemmas amount to analysing how a simplex of X lies with respect to K. 
For example, the following `meet proposition' says that if a simplex of X has a vertex in K, then by convexity a whole middle part of the simplex must lie in K, and altogether the simplex must be composed of three parts: a first part with edges outside (before) K, then a middle part wholly inside K, and finally a part with edges outside (after) K. The convolutions are the formal expression of these descriptions. Define μ^∉ K to be the space of nondegenerate n-simplices of X for any n≥ 0 (with sign (-1)^n) such that no edges belong to K. (Note that a vertex is allowed to belong to K.) Define μ^∩ K to be the space of nondegenerate n-simplices of X for any n≥ 0 such that at least one vertex belongs to K. μ^∩ K = μ^∉ K * μ^K * μ^∉ K . μ^∉ K * μ^K = μ * Φ_0^K . μ^K * μ^∉ K = Φ_0^K * μ . Note here that Φ_0^K is the convolution unit for the decomposition space K, but since f is does not preserve the unit (f is not an equivalence on objects, semi-ikeo, not ikeo) the pushforwarded linear functional f (Φ_0^K) is not the convolution unit in X. It is the linear functional X_1 ← K_0 → 1 . Convolving with it from the right (resp. from the left) has the effect of imposing the condition that the last (resp. the zeroth) vertex is in K. We clearly have μ = μ^X K + μ^∩ K: in terms of nondegenerate simplices, either it does or it doesn't have a vertex in K. Now apply Proposition <ref> (the meet prop) to get = μ^X K + μ^∉ K * μ^K * μ^∉ K. Now apply Proposition <ref> (the K-proposition) to the middle factor μ^K to get = μ^X K + μ^∉ K * μ^K * ζ^K * μ^K * μ^∉ K . Now apply Proposition <ref> and Proposition <ref> to get μ^X K + μ * ζ^K * μ (where we suppressed two instances of Φ_0^K, since they are next to ζ^K anyway, and within K, the linear functional Φ_0^K is the neutral element for convolution). §.§ Explicit homotopy equivalences Here are the individual pieces. For any complete decomposition space K, and for each m≥ 0, we have Φ_m + ∑_j=0^m-1Φ_j * Φ_1 * Φ_m-(j+1) = ∑_k=0^m Φ_k * Φ_m-k . There are m+1 terms on each side, and they match up precisely, once we identify Φ_j * Φ_1 = Φ_j+1. In detail, the separate term Φ_m on the LHS is the k=0 term on the RHS, Φ_0 * Φ_m; the remaining terms on the LHS correspond to the terms on the RHS by sending the jth term to the term indexed by k:=j+1: indeed Φ_j * Φ_1 * Φ_m-(j+1)≃Φ_j+1 * Φ_m-(j+1) = Φ_k * Φ_m-k by Lemma <ref>. Variant. There is another equivalence, where Φ_m on the LHS is matched with the last summand on the RHS instead of the zeroth. For any complete decomposition space K, we have + Φ_1 + Φ_1 = Φ_0 + Φ_0 , and + Φ_1 + Φ_1 = Φ_0 + Φ_0 . This is just to add up instances of Lemma <ref> for all m even and for all m odd. In Proposition <ref> we stated the following: μ^K = μ^K * ζ^K * μ^K . This is shorthand for an explicit homotopy equivalence of ∞-groupoids. To expand, use first μ= -: - = ( - ) * (Φ_0+Φ_1) * ( - ) , and then expand and move all minus signs to the other side of the equation to obtain finally the sign-free meaning of the proposition: [ + Φ_1 + Φ_1; + Φ_0 + Φ_0 ] = [ + Φ_0 + Φ_0; + Φ_1 + Φ_1 . ] This is the explicit homotopy equivalence we establish. The equation has been arranged so that the first line of the equation is the even equation in Corollary <ref> and the second line is the odd equation in Corollary <ref>. This is the objective proof of Proposition <ref>. Let now f: K → X be a convex inclusion. 
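Taking homotopy cardinality, for the nerve of a finite poset P and an order-convex full subposet K the symbolic formula above reads, interval by interval, μ_P(x,y) = μ_{P∖K}(x,y) + Σ μ_P(x,a) μ_P(b,y), where the sum is over a ≤ b with a, b ∈ K, and μ_{P∖K} is extended by zero outside (P∖K) × (P∖K); this is the shape in which the Björner–Walker formula mentioned in the introduction appears here. The following Python fragment is our own verification script, not part of the paper; it checks the identity on the divisor poset of 12 with K the interval [2, 6].

def moebius(S, leq):
    """Moebius function of the finite poset S, by the usual recursion."""
    def mu(x, y):
        if x == y:
            return 1
        if not leq(x, y):
            return 0
        return -sum(mu(x, z) for z in S if leq(x, z) and leq(z, y) and z != y)
    return mu

P = [1, 2, 3, 4, 6, 12]                  # divisors of 12 under divisibility
leq = lambda a, b: b % a == 0
K = [2, 6]                               # the interval [2, 6]: an order-convex full subposet
PK = [x for x in P if x not in K]        # its complement P minus K

mu_P, mu_PK = moebius(P, leq), moebius(PK, leq)

def rhs(x, y):
    first = mu_PK(x, y) if x in PK and y in PK else 0
    second = sum(mu_P(x, a) * mu_P(b, y)
                 for a in K for b in K
                 if leq(x, a) and leq(a, b) and leq(b, y))   # zeta^K(a, b)
    return first + second

assert all(mu_P(x, y) == rhs(x, y) for x in P for y in P if leq(x, y))
print("Crapo complementation formula verified on the divisors of 12 with K = [2, 6]")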
All linear functionals pertaining to K are decorated with a superscript K (such as in ζ^K, μ^K, Φ_n^K), but we use those symbols also for their pushforth along f, so that the symbols occurring really stand for f(ζ^K), f(μ^K), f(Φ_n^K), and so on. By multiplicativity of f (the fact that f is semi-ikeo), the equation for K of Lemma <ref> holds also in X. We use the same convention for the full inclusion g: X K → X. Finally we shall use two more decorations, ∉K and ∩K, such as in Φ_n^∩ K and Φ_n^∉ K. These linear functionals on X are not a pushforth, and the symbols will be defined formally along the way. Define X^∉ K_r to be the space of nondegenerate r-simplices of X such that no edges belong to K. (Note that a vertex is allowed to belong to K.) Formally, this is defined as a pullback: X_r^∉ K[d] [r] ( X_1 K_1) ×⋯× ( X_1 K_1) [d] X_r [r] X_1 ×⋯× X_1 . (On the right-hand side there are r factors.) Now Φ^∉ K_r is defined to be the linear functional given by the span X_1 ← X_r^∉ K→ 1 . (Note that the r=0 case is Φ_0^∉ K = Φ_0.) Denote by Φ^∩ K_n the space of nondegenerate n-simplices of X for which there exists a vertex in K. Φ^∩ K_n = ∑_p+m+q=nΦ^∉ K_p * Φ^K_m * Φ^∉ K_q . If an n-simplex σ∈ X_n has some vertex in K, then there is a minimal vertex x_p in K and a maximal vertex x_p+m in K. (They might coincide, which would be the case m=0.) Since K → X is full, the edge from x_p to x_p+m is contained in K, and since K→ X is also culf, all intermediate vertices and faces belong to K too. So the simplex σ necessarily has first p edges not belonging to K, then m edges that belong to K, and finally q edges not belonging to K. So far we have referred to arbitrary simplices, but we know from Lemma <ref> that σ is nondegenerate if and only if its three parts are. Now we get the formula at the level of the Φ-functionals from the fundamental equivalence Φ_p+q = Φ_p * Φ_q (Lemma <ref>). Φ_s * Φ_0^K = ∑_p+i=sΦ^∉ K_p * Φ^K_i . Note again that convolution from the right with Φ_0^K serves to impose the condition that the last vertex belongs to K. So intuitively the equation says that an s-simplex whose last vertex is in K must have p edges outside K and then i edges inside K. (It is because of convexity that there are no other possibilities.) Note also that specifying an s-simplex by imposing conditions on specific edges like this is precisely what the convolution product expresses. Φ_0^K * Φ_t = ∑_j+q=tΦ^K_j * Φ^∉ K_q . This is the same, but for t-simplices whose zeroth vertex is in K. §.§ Crapo formula as a homotopy equivalence For K ⊂ X convex we have μ^X = μ^X K + μ^X *ζ^K *μ^X . What it really means is ^X - ^X = (^X K - ^X K) + (^X - ^X) * (Φ_0^K+Φ^K_1) * (^X - ^X) , and then expand and move all minus signs to the other side of the equation to obtain finally the sign-free meaning of the theorem: [ ^X + ^X Φ^K_1 ^X + ^XΦ^K_1^X; ^X K + ^XΦ^K_0^X + ^XΦ^K_0^X ] = [ ^X K + ^XΦ^K_0^X + ^XΦ^K_0^X; ^X + ^XΦ^K_1^X + ^XΦ^K_1^X . ] This is the explicit homotopy equivalence we establish. Scholium. Φ^X_n = Φ^X K_n + Φ^∩ K_n . From the viewpoint of X, this is clear: a nondegenerate n-simplex in X either has a vertex in K or it does not have a vertex in K. A simplex in X without a vertex in K is the same thing as a simplex in X K. We can therefore interpret the symbol as g (Φ^X K_n). Φ^∩ K_n + ∑_s+1+t=nΦ^X_s * Φ^K_1 * Φ^X_t = ∑_s+t=nΦ^X_s * Φ^K_0 * Φ^X_t . This has the same overall shape as the K-lemma, but note that unlike in the K-lemma, the terms on the LHS do not simply identify with those on the RHS. 
We expand the term Φ^∩ K_n using the meet lemma; we expand the s-indexed terms using the S-lemma; we expand the t-indexed terms using the T-lemma. The claim thus becomes ∑_p+m+q=n Φ^∉ K_p * Φ^K_m * Φ^∉ K_q + ∑_p+i+1+j+q=n Φ^∉ K_p * Φ^K_i * Φ^K_1 * Φ^K_j * Φ^∉ K_q = ∑_p+i+j+q=n Φ^∉ K_p * Φ^K_i * Φ^K_0 * Φ^K_j * Φ^∉ K_q . But this equation is precisely the K-lemma convolved with Φ^∉ K_p from the left and with Φ^∉ K_q from the right. [Figure omitted: it shows the n edges of a simplex divided into consecutive blocks of sizes p, i, 1 (or 0), j, q, so that s = p + i, t = j + q, and m = i + 1 + j (or i + j).] Here is an alternative proof: Start with the K-lemma: Φ_m + ∑_i+1+j=m Φ_i * Φ_1 * Φ_j = ∑_i+j=m Φ_i * Φ_j . Now convolve with Φ^∉ K_p from the left and with Φ^∉ K_q from the right, and sum over p + m + q = n to obtain ∑_p+m+q=n Φ^∉ K_p * Φ^K_m * Φ^∉ K_q + ∑_p+i+1+j+q=n Φ^∉ K_p * Φ^K_i * Φ^K_1 * Φ^K_j * Φ^∉ K_q = ∑_p+i+j+q=n Φ^∉ K_p * Φ^K_i * Φ^K_0 * Φ^K_j * Φ^∉ K_q . The first sum on the left gives Φ^∩ K_n by the meet lemma. In the other sums, apply the S-lemma to the s-indexed terms and apply the T-lemma to the t-indexed terms. Altogether we arrive at Φ^∩ K_n + ∑_s+1+t=n Φ_s * Φ^K_1 * Φ_t = ∑_s+t=n Φ_s * Φ^K_0 * Φ_t , which is what we wanted to prove.
§.§ Finiteness conditions and cardinality
In order to take homotopy cardinality to deduce results at the level of ℚ-algebras, some finiteness conditions must be imposed. First of all, for the incidence (co)algebra of X to admit a cardinality, X should be locally finite, meaning that all active maps are finite. Second, for the general Möbius inversion formula to admit a cardinality, we must ask that for each 1-simplex f, there are only finitely many non-degenerate n-simplices (any n) with long edge f. This is the Möbius condition for decomposition spaces <cit.>. We remark that if X is a Möbius decomposition space and K ⊂ X is a convex subspace, then also K is Möbius (this follows since anything culf over a Möbius decomposition space is Möbius again). It also has to be checked that for any full inclusion Y → X with X Möbius, the simplicial space Y is Möbius again; this is established in the results below. Note that for an ikeo map to admit a cardinality (which will then be an algebra homomorphism) it must be a finite map. In the present case this is OK since the maps are even mono. Recall that a simplicial space Y is locally finite if all active maps are finite. (Note that in <cit.> it was also demanded that Y_1 be locally finite, but this has turned out not to be necessary. The following results remain true with this extra condition, though.) If F: Y → X is a full inclusion of simplicial spaces, and if X is locally finite, then also Y is locally finite. Let g: Y_n → Y_1 be the unique active map. We need to show that the fibre over any a∈ Y_1 is finite. In the cube diagram [column sep=6em,between origins, row sep=3.5em,between origins] 1 [ldd][d, "a"] (Y_n)_a [d][l] Y_1 [ldd] Y_n [ldd] [l, pos=0.7, "g"'] 1 [d, "F(a)"'] (X_n)_F(a)[d] [l,crossing over] X_1 [l] X_n [l, "g"] [from=1-3,to=3-2,crossing over] the back and front faces are pullbacks by definition of the fibres we are interested in.
The left-hand face is a pullback since Y_1 → X_1 is mono. By the prism lemma it now follows that also the right-hand face is a pullback. Finally we see that (Y_n)_a → (X_n)_F(a) is mono because it is a pullback of Y_n → X_n, which is mono since F is a full inclusion. Since (X_n)_F(a) is finite, it follows that (Y_n)_a is finite. Recall (from <cit.>) that the length of a 1-simplex a∈ Y_1 is defined as the dimension of the biggest effective n-simplex σ with long edge a. Effective means that all principal edges are nondegenerate. For decomposition spaces, and more generally for so-called stiff simplicial spaces <cit.>), this is equivalent to σ being nondegenerate. Let Y→ X be a conservative simplicial map between locally finite simplicial spaces. If X is (complete and) of locally finite length, then also Y is (complete and) of locally finite length. Note first that if X is complete, then so is Y, since the map is conservative. If Y were not of locally finite length, that would mean there is a 1-simplex a∈ Y_1 for which ( Y_n)_a is nonempty for all n. (This is not the definition of locally finite length, but for locally finite simplicial spaces this is equivalent.) But each σ∈ ( Y_n)_a witnessing this nonemptiness is sent to fσ∈ ( X_n)_fa witnessing also infinite length of fa. Note that effective simplices are preserved, as a consequence of being conservative. If Y → X is a full inclusion of simplicial spaces, and if X is a Möbius decomposition space, then also Y is a Möbius decomposition space. With these preparations we see that in the Crapo formula, if just the ambient decomposition space X is Möbius, then also K and X K are Möbius, so that all the objects in the formula admit a cardinality. The formula therefore holds at the level of -vector spaces. 10 Berger:Adv2002 Clemens Berger. A cellular nerve for higher categories. Adv. Math. 169 (2002), 118–175. Bjoerner-Walker Anders Björner and James W. Walker. A homotopy complementation formula for partially ordered sets. European J. Combin. 4 (1983), 11–19. Carlier:1801.07504 Louis Carlier. Incidence bicomodules, Möbius inversion and a Rota formula for infinity adjunctions. Algebr. Geom. Topol. 20 (2020), 169–213. ArXiv:1801.07504. Content-Lemay-Leroux Mireille Content, François Lemay, and Pierre Leroux. Catégories de Möbius et fonctorialités: un cadre général pour l'inversion de Möbius. J. Combin. Theory Ser. A 28 (1980), 169–190. Crapo Henry H. Crapo. The Möbius function of a lattice. J. Combin. Theory 1 (1966), 126–131. Dur:1986 Arne Dür. Möbius functions, incidence algebras and power series representations, vol. 1202 of Lecture Notes in Mathematics. Springer-Verlag, Berlin, 1986. Dyckerhoff-Kapranov:1212.3563 Tobias Dyckerhoff and Mikhail Kapranov. Higher Segal spaces, vol. 2244 of Lecture Notes in Mathematics. Springer, Cham, 2019. ArXiv:1212.3563. Feller-Garner-Kock-Proulx-Weber:1905.09580 Matthew Feller, Richard Garner, Joachim Kock, May U. Proulx, and Mark Weber. Every 2-Segal space is unital. Commun. Contemp. Math. 23 (2021), 2050055. ArXiv:1905.09580. Galvez-Kock-Tonks:1612.09225 Imma Gálvez-Carrillo, Joachim Kock, and Andrew Tonks. Decomposition spaces in combinatorics. Preprint, arXiv:1612.09225. Galvez-Kock-Tonks:1512.07573 Imma Gálvez-Carrillo, Joachim Kock, and Andrew Tonks. Decomposition spaces, incidence algebras and Möbius inversion I: Basic theory. Adv. Math. 331 (2018), 952–1015. ArXiv:1512.07573. Galvez-Kock-Tonks:1512.07577 Imma Gálvez-Carrillo, Joachim Kock, and Andrew Tonks. 
Decomposition spaces, incidence algebras and Möbius inversion II: Completeness, length filtration, and finiteness. Adv. Math. 333 (2018), 1242–1292. ArXiv:1512.07577. Galvez-Kock-Tonks:1512.07580 Imma Gálvez-Carrillo, Joachim Kock, and Andrew Tonks. Decomposition spaces, incidence algebras and Möbius inversion III: The decomposition space of Möbius intervals. Adv. Math. 334 (2018), 544–584. ArXiv:1512.07580. Galvez-Kock-Tonks:1602.05082 Imma Gálvez-Carrillo, Joachim Kock, and Andrew Tonks. Homotopy linear algebra. Proc. Royal Soc. Edinburgh A 148 (2018), 293–325. ArXiv:1602.05082. Hackney-Kock:2210.11191 Philip Hackney and Joachim Kock. Culf maps and edgewise subdivision. With an appendix coauthored with Jan Steinebrunner. Preprint, arXiv:2210.11191. JoniRotaMR544721 Saj-nicole A. Joni and Gian-Carlo Rota. Coalgebras and bialgebras in combinatorics. Stud. Appl. Math. 61 (1979), 93–139. LawvereMenniMR2720184 F. William Lawvere and Matías Menni. The Hopf algebra of Möbius intervals. Theory Appl. Categ. 24 (2010), No. 10, 221–265. Leroux:1976 Pierre Leroux. Les catégories de Möbius. Cahiers Topologie Géom. Différentielle 16 (1976), 280–282. Lurie:HTT Jacob Lurie. Higher topos theory, vol. 170 of Annals of Mathematics Studies. Princeton University Press, Princeton, NJ, 2009. ArXiv:math/0608040. Rota:Moebius Gian-Carlo Rota. On the foundations of combinatorial theory. I. Theory of Möbius functions. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete 2 (1964), 340–368. Universidad de Málaga, IMTECH-UPC, and Centre de Recerca Matemàtica E-mail address: University of Copenhagen, Universitat Autònoma de Barcelona, and Centre de Recerca Matemàtica E-mail address: Universidad de Málaga E-mail address:
http://arxiv.org/abs/2409.02306v1
20240903213223
Limits and Periodicity of Metamour $2$-Distance Graphs
[ "William Q. Erickson", "Daniel Herden", "Jonathan Meddaugh", "Mark R. Sepanski", "Mitchell Minyard", "Kyle Rosengartner" ]
math.CO
[ "math.CO", "Primary: 05C12, 05C76, Secondary: 05C38" ]
2020 Mathematics Subject Classification: Primary 05C12, 05C76; Secondary 05C38.

§ ABSTRACT Given a finite simple graph G, let (G) denote its 2-distance graph, in which two vertices are adjacent if and only if they have distance 2 in G. In this paper, we consider the periodic behavior of the sequence G, (G), ^2(G), ^3(G), … obtained by iterating the 2-distance operation. In particular, we classify the connected graphs with period 3, and we partially characterize those with period 2. We then study two families of graphs whose 2-distance sequence is eventually periodic: namely, generalized Petersen graphs and complete m-ary trees. For each family, we show that the eventual period is 2, and we determine the pre-period and the two limit graphs of the sequence.

Limits and Periodicity of Metamour 2-Distance Graphs
William Q. Erickson, Daniel Herden, Jonathan Meddaugh, Mark R. Sepanski, Mitchell Minyard, Kyle Rosengartner
============================================================================================

§ INTRODUCTION The notion of an n-distance graph was introduced by Harary–Hoede–Kadlecek <cit.>, and defined as follows: given a finite simple graph G, its n-distance graph is obtained by placing an edge between two vertices if and only if those vertices have distance n in the original graph G. Since then, the study of n-distance graphs (in particular, the special case n=2) has developed in several directions, and has garnered increased interest within the last decade. This recent interest includes work on the connectivity of 2-distance graphs <cit.>, their regularity <cit.>, their diameter <cit.>, 2-distance graphs which have certain maximum degree <cit.> or are isomorphic to the original graph <cit.>, and general characterizations of certain 2-distance graphs <cit.>. Another topic of interest is the periodicity of graphs with respect to the 2-distance operation. This question was first addressed in the note <cit.> in 2000 (see also <cit.>), and is the subject of the present paper.

Throughout the paper, we adopt the term metamour graph, which was recently introduced in <cit.> as a synonym for the 2-distance graph. Starting with some finite simple graph G, we consider the sequence G, (G), ^2(G), ^3(G), …, obtained by repeatedly taking metamour graphs. A natural problem is to describe the periodic behavior of this metamour sequence, in as much generality as possible. Our paper solves this problem in several special cases, which we highlight below:
* In Section <ref>, we discuss graphs G with the property M(G)=G≅ G. In particular, we provide two proofs that every graph is an induced subgraph of some graph G with M(G)=G≅ G, see Theorems <ref> and <ref>.
* In Theorem <ref>, we determine the periodic behavior of the metamour sequence in the case where G is formed by joining an arbitrary number of graphs together in a cycle. (This generalizes Proposition 2 in Zelinka <cit.>.)
* Theorem <ref> states that for any even positive integer k, every graph is an induced subgraph of a graph whose metamour sequence is k-periodic. (This theorem strengthens the main theorem in Zelinka <cit.>, which merely asserted that k-periodic graphs exist for each even k.)
* In Theorem <ref>, we show that if G has metamour period 2, then G and (G) have the same diameter, which is at most 3.
* In fact, when this diameter equals 3, we identify a certain subgraph C_5 (see Theorem <ref>) which must be contained in G.
* In Theorem <ref>, we prove that the only connected graphs with metamour period 3 are the two cycle graphs C_7 and C_9.
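Small instances of this iteration are easy to generate by computer. The following minimal sketch (in Python with the networkx library; the helper names metamour, edge_set, and metamour_period are introduced here purely for illustration and are not part of the paper) computes the 2-distance graph, iterates it, and recovers, for instance, the metamour periods of C_5 and C_7 that appear in the results above.

import networkx as nx
from itertools import combinations

def metamour(G):
    # 2-distance (metamour) graph: same vertex set, with an edge uv exactly when d_G(u, v) = 2.
    dist = dict(nx.all_pairs_shortest_path_length(G, cutoff=2))
    M = nx.Graph()
    M.add_nodes_from(G.nodes)
    M.add_edges_from((u, v) for u, v in combinations(G.nodes, 2)
                     if dist[u].get(v) == 2)
    return M

def edge_set(G):
    return {frozenset(e) for e in G.edges}

def metamour_period(G, limit=20):
    # Smallest k >= 1 with M^k(G) = G as a labelled graph, or None if none is found below `limit`.
    H = G
    for k in range(1, limit + 1):
        H = metamour(H)
        if edge_set(H) == edge_set(G):
            return k
    return None

print(metamour_period(nx.cycle_graph(5)))                                # 2
print(metamour_period(nx.cycle_graph(7)))                                # 3
print(nx.is_isomorphic(metamour(nx.cycle_graph(5)), nx.cycle_graph(5)))  # True

The same helpers are reused in the computational sketches accompanying later sections.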
In the final two sections of the paper, we consider graphs whose metamour sequence is eventually periodic. In particular, we study two fundamental families in graph theory: the generalized Petersen graphs G(m,2), and the complete m-ary trees. In each case (see Theorems <ref> and <ref>), we show that the eventual period is 2, we determine where in the sequence G, (G), ^2(G), ^3(G), … this periodicity begins, and we explicitly describe the two graphs that are the limits of this metamour sequence. In addition to our main results summarized above, we also answer (in Theorem <ref>) a question posed in <cit.>, regarding the classification of metamour graphs. Moreover, the work in this paper led us to Questions <ref> and <ref>, and to Conjecture <ref>, regarding the parity of metamour periods: roughly speaking, graphs with even metamour periods are “common” (as demonstrated in Theorem <ref>) and those with odd metamour periods are “rare” (conjecturally, finitely many). Hence a natural direction for further research is the problem of proving Conjecture <ref>, and classifying those graphs whose metamour period is even (respectively, odd). § NOTATION We write for the nonnegative integers and ^+ for the positive integers. Throughout the paper, we use standard graph theoretical notation and terminology, which we summarize as follows. We write G=(V,E) to denote a simple graph with vertex set V and edge set E. We also use V(G) and E(G) to denote the vertex and edge sets of G. We abbreviate the edge {x,y} by writing xy. Let d_G:V× V→∪{∞} be the distance function on G. Recall that the diameter of a connected graph G, denoted by (G), is the maximum value attained by d_G. A disconnected graph has infinite diameter. We write G_1 ∪ G_2 for the union of two graphs, in which the vertex set is the disjoint union V(G_1) ∪ V(G_2) and the edge set is the disjoint union E(G_1) ∪ E(G_2). We write G_1 ∇ G_2 for the join of two graphs, which is G_1 ∪ G_2 together with all edges connecting V(G_1) and V(G_2). We write G for the complement of G, where V(G)=V(G), and xy ∈ E(G) if and only if xy ∉E(G). We write K_n for the complete graph on n vertices, and C_n for the cycle graph on n vertices. The edgeless graph on n vertices is the graph with no edges, which we denote by K_n. We write G_1 ⊆ G_2 to express that G_1 is a subgraph of G_2. Given a subset S ⊆ V(G), the induced subgraph is the graph with vertex set S, whose edges are all the edges in E(G) with both endpoints in S. We write G_1 = G_2 to denote equality as graphs of distinguishable vertices, and we write G_1 ≅ G_2 to express that two graphs are isomorphic (i.e., equal up to forgetting vertex labels). A graph G is called self-complementary if G≅G. The focus of this paper is the metamour graph of G (also known as the 2-distance graph in the literature): The metamour graph of G, written as (G), has vertex set V(G), and an edge between x,y∈ V(G) if and only if d_G(x,y) = 2. More generally, we write ^0(G) G and inductively define ^k+1(G) (^k(G)), k ∈. Let k∈^+. We say that G has metamour period k if ^k(G) = G and k is minimal with this property. We say that G has pseudo-metamour period k if ^k(G) ≅ G and k is minimal with this property. We say that G is metamour-complementary if (G) = G. Whereas Definition <ref> above pertains to periodic sequences of metamour graphs, the following definition concerns those metamour sequences which are eventually periodic: Let k∈^+. 
We say that G has metamour limit period k if there exists N ∈ such that, for all integers i ≥ N, we have ^k+i(G) = ^i(G), and k is minimal with this property. In this case, we write lim(G) {^i(G) | i ≥ N} = {^N(G), ^N+1(G), …, ^N+k-1(G) } for the metamour limit set of G. Note that for finite graphs, the metamour limit period always exists, and so the metamour limit set is also finite. § GRAPHS THAT ARE METAMOUR GRAPHS In this section, we collect some basic results on metamour graphs and metamour-complementary graphs. Our main goal is a characterization and discussion of those graphs G such that G=(G') for some graph G', i.e., of graphs which are metamour graphs (see Theorem <ref>). We start by noting the following relation between (G) and G. For any graph G, we have (G)⊆G. If there is a graph G' on V(G) such that (G')=G, then G'⊆G. This follows immediately from Definition <ref>. In particular, if xy ∈ E(G), then d_G(x,y) = 1, thus xy ∉ E(G). Metamour-complementary graphs are defined as the graphs G where equality is achieved in (G)⊆G. We provide an alternative characterization of metamour-complementary graphs. A graph G is metamour-complementary if and only if (G) ≤ 2. Suppose (G) = G. If x,y ∈ V with d_G(x,y) ≥ 2, then xy ∈ E(G) = E((G)) and so d_G(x,y) = 2. Now suppose (G) ≤ 2. Then for xy ∈ E(G), we have d_G(x,y) = 2 and so xy ∈ E((G)). Since (G) ⊆G by Lemma <ref>, we are done. Theorem <ref> can be expanded to provide an answer to <cit.> by characterizing those graphs G which are metamour graphs. Let G=(V,E) be a graph. The following are equivalent: * There exists a graph G' on V such that (G')=G. * G is metamour-complementary, i.e., (G)=G. * For every xy ∈ E, there exists z ∈ V∖{x,y} with xz,yz ∉ E. * (G) ≤ 2. We first show that (1) and (2) are equivalent. It is immediate that (2) implies (1). To show that (1) implies (2), suppose (G) ≠ G. By Definition <ref>, there exists xy ∈ E(G) such that there is no z ∈ V ∖{x,y} with xz,yz ∈ E(G). Therefore, for any G'⊆G, we have xy∉(G'). Since (by Lemma <ref>) any G' satisfying (G')=G satisfies G'⊆G, we are done. The equivalence of (2) and (3) follow immediately from Definition <ref>, and the equivalence of (2) and (4) from Theorem <ref>. We also note the following nice sufficient characterization. Let Δ(G) denote the maximum degree of G=(V,E). If 2 Δ(G) < |V|, then there exists a graph G' such that (G')=G. Suppose xy ∈ E. Since (x) + (y) < |V|, there is some z ∈ V with xz,yz ∉ E. Theorem <ref> finishes the result. If G is j-regular, then Corollary <ref> says that a sufficient condition for the existence of a G' such that (G')=G is 2j < |V|. Since |V| = 2|E|/j in this case, the condition can be rewritten as j^2 < |E|. Theorem <ref> and Corollary <ref> allow us to quickly deduce that many families of graphs have the property that each graph is the metamour graph of its complement: * generalized Petersen graphs P(n,k) for all n≥ 4, * cycle graphs C_n for all n≥ 5, * path graphs P_n for all n≥ 5, * disconnected graphs, * trees T with (T)≥ 4, and * undirected Cayley graphs Γ(G,S) with 2|S| < |G|. On the other hand, graphs of the following types are not the metamour graphs of any graphs: * complete k-partite graphs for all k≥ 2, * complements of generalized Petersen graphs P(n,k) for n≥ 4, * complements of cycle graphs C_n for all n≥ 6, and * complements of path graphs P_n for all n≥ 4. § GRAPHS WITH PSEUDO-METAMOUR PERIOD 1 In this section, we will discuss graphs with metamour period 1 and graphs with pseudo-metamour period 1. 
We start with the observation that there are only trivial graphs with metamour period 1. A graph G has metamour period 1 if and only if G is edgeless. With Lemma <ref>, M(G)=G implies G = M(G) ⊆G, and G=K_n for some n∈^+ follows. The converse is trivial. In contrast, there exist many nontrivial examples of graphs with pseudo-metamour period 1. In the following, we will provide two different families of graphs G' with M(G')=G'≅ G', i.e., of metamour-complementary self-complementary graphs G'. In both cases we are going to see that any graph G can be embedded as an induced subgraph into some metamour-complementary self-complementary graph. Thus, the class of metamour-complementary self-complementary graphs is large and of a complex structure. Our first example is based on properties of Paley graphs. Let q be a prime power such that q≡ 1 4. Then the Paley graph QR(q) has as vertex set the elements of the finite field 𝔽_q, with two vertices being adjacent if and only if their difference is a nonzero square in 𝔽_q. Every Paley graph is metamour-complementary and self-complementary. Moreover, for any finite graph G there exists a prime power q≡ 1 4 such that G embeds as an induced subgraph into QR(q). Let q be a prime power such that q≡ 1 4. Then QR(q) is self-complementary <cit.>, and such that every pair of distinct nonadjacent vertices shares q-1/4 common neighbors <cit.>. In particular, (G)=2, and G is metamour-complementary with Theorem <ref>. The embedding property follows as Paley graphs are quasi-random <cit.>. An easier and more instructive example of metamour-complementary self-complementary graphs can be given with the help of the following general graph operation. Some similar constructions will be used in the next section. Let G = (V,E) be a graph, and let 𝒢={G_v | v∈ V} be a collection of graphs indexed by V. We define the join of 𝒢 along G to be the graph constructed as follows. Begin with ⋃_v∈ V G_v. For every vw∈ E, include all possible edges between G_v and G_w. Thus, for each vw∈ E, the join G_v ∇ G_w is a subgraph of the join of 𝒢 along G. See Figure <ref> for the visualization of Definition <ref> in the case where G = C_5. (This particular case was constructed by Zelinka <cit.>, just before Proposition 2.) In the figure, an edge between G_i and G_j represents all of the edges in G_i ∇ G_j. Every graph G embeds as an induced subgraph into a metamour-complementary self-complementary graph G' with |V(G')| = 4 |V(G)|+1. Let G' denote the join of the family of graphs {G_1, …, G_5} along C_5; see Figure <ref>. It is easy to verify that G' is metamour-complementary with M(G')=G' as shown in Figure <ref>. Note that this graph M(G')=G' is (isomorphic to) the join of the family of graphs {G_1, G_3, G_5, G_2, G_4} along C_5. In case of the specific choice G_1 := K_1 trivial, G_2 :=G_5 :=G, and G_3 := G_4 := G, any choice of isomorphisms φ_1: G_1 →G_1, φ_2: G_2 →G_3, φ_3: G_3 →G_5, φ_4: G_4 →G_2, and φ_5: G_5 →G_4 extends to an isomorphism φ: G' →G'. Note that any nontrivial metamour-complementary self-comple­men­tary graph has metamour period 2. With Theorem <ref>, this includes Paley graphs. We end this section with a general characterization of metamour-complementary graphs with metamour period 2. A graph G is metamour-complementary with metamour period 2 if and only if (G) = (G) = 2. This follows immediately from Theorem <ref>. (See also <cit.>.) We will have more to say about graphs with metamour period 2 in Section <ref>. 
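The construction of Definition <ref> and the embedding theorem above are easy to check directly. The sketch below (reusing the metamour and edge_set helpers from the sketch in the introduction; join_along_cycle is a helper name introduced here) builds the join of {G_1, …, G_5} along C_5 under one consistent reading of the specific choice in the theorem, namely G_1 = K_1, G_2 = G_5 = G, and G_3 = G_4 the complement of G, and verifies that the resulting graph G' is metamour-complementary (equivalently, has diameter at most 2) and self-complementary.

import networkx as nx

def join_along_cycle(parts):
    # Join of the graphs in `parts` along a cycle: keep each part's own edges and add
    # every edge between two parts whenever they sit on adjacent positions of the cycle.
    G = nx.Graph()
    for i, P in enumerate(parts):
        G.add_nodes_from((i, v) for v in P.nodes)
        G.add_edges_from(((i, u), (i, v)) for u, v in P.edges)
    for i in range(len(parts)):
        j = (i + 1) % len(parts)
        G.add_edges_from(((i, u), (j, v)) for u in parts[i].nodes for v in parts[j].nodes)
    return G

G = nx.path_graph(3)                   # any small test graph G
parts = [nx.empty_graph(1), G, nx.complement(G), nx.complement(G), G]
Gp = join_along_cycle(parts)           # join of {G_1, ..., G_5} along C_5

print(Gp.number_of_nodes())                                   # 13 = 4|V(G)| + 1
print(nx.diameter(Gp) <= 2)                                   # True: Gp is metamour-complementary
print(edge_set(metamour(Gp)) == edge_set(nx.complement(Gp)))  # True: M(Gp) equals the complement of Gp
print(nx.is_isomorphic(Gp, nx.complement(Gp)))                # True: Gp is self-complementary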
§ GRAPHS WITH EVEN METAMOUR PERIOD In this section, we will see that for all even k≥ 2, the class of graphs with metamour period k is large and of a complex structure. This is again evidenced by showing that every graph G can be embedded as an induced subgraph into some graph with metamour period k (see Theorem <ref>). We will also show that the same is true for graphs with pseudo-metamour period k. We start by introducing an auxiliary sequence of integers. Let n be an odd positive integer. We define μ(n) min{k∈^+ | 2^k≡±1n}. For example, starting with n=3, the first few values of μ(n) are 1, 2, 3, 3, 5, 6, 4, 4, 9, 6, 11, 10, 9, 14, 5, 5, 12, 18, 12, 10, 7, 12, 23, 21, 8, 26, 20, 9, 29, 30, 6, 6, 33, 22, 35, 9, 20, 30, 39, 27, 41, 8, 28, 11, … This sequence can be found as entry A003558 in the OEIS <cit.>. The following result on odd cycle graphs serves as a good comparison of metamour period versus pseudo-metamour period (see Definition <ref> above). Note that we exclude the case n=3 below, due to the fact that C_3 = K_3 and (K_3)=K_3. Let n ≥ 5 be odd, and let k,ℓ∈. We have the following: * ^k(C_n)≅ C_n. * ^k(C_n)=C_n if and only if μ(n) | k. * ^k(C_n)=^ℓ(C_n) if and only if k≡ℓμ(n). Write the vertices of C_n as v_i, i∈/n, with edges v_i v_i+1. Then it is straightforward to verify that the edges of ^k(C_n) are of the form v_i v_i+2^k. The statements of the theorem follow. Theorem <ref> takes also care of even cycle graphs C_n. In particular, given any n∈^+, we can write n=2^i u with i∈ and some odd u∈^+. Then ^i(C_n) is the disjoint union of 2^i cycle graphs C_u, where we set C_1 := K_1. In the following, we describe a few more constructions based on our Definition <ref>. We start with a characterization of isomorphic graphs. Let n ≥ 7 be odd, and let 𝒢={G_i | 1≤ i ≤ n} and 𝒢'={G'_i | 1≤ i ≤ n} be two collections of graphs, where we interpret all indices as elements of /n. Let G denote the join of 𝒢 along C_n and G' the join of 𝒢' along C_n, respectively. Then G≅ G' if and only if there exists some j∈/n such that either G'_i≅ G_j+i or G'_i≅ G_j-i for all 1≤ i ≤ n. Note that μ(n) ≥ 3 as n≥ 7. It is straightforward to verify that ^2(G) is the join of 𝒢 along ^2(C_n). Thus G ∩^2(G) =⋃_1≤ i ≤ n G_i, and similarly (G) ∩^3(G) =⋃_1≤ i ≤ nG_i. In particular, E(G ∩^2(G))∪ E((G) ∩^3(G))= ⋃_1≤ i ≤ n{xy | x,y ∈ V(G_i), x y}, and we recover V(G_i) (up to the index). With this information, we can rediscover {G_i | 1≤ i ≤ n} from G up to an index shift/flip. We now can generalize Theorem <ref> as follows: Let n ≥ 5 be odd, and let 𝒢={G_i | 1≤ i ≤ n} be a collection of graphs with at least one G_i nontrivial. Let G be the join of 𝒢 along C_n. Similarly, let 𝒢={G_i | 1≤ i ≤ n} and let G^0 be the join of 𝒢 along C_n. Then we have the following: * ^k(G)=G if and only if μ(n) | k and k is even. * ^k(G)=G^0 if and only if μ(n) | k and k is odd. * If n≥ 7 and G_i G_i = G_j for all i,j, then ^k(G) ≅^ℓ(G) if and only if k≡ℓ 2. It is straightforward to verify that ^k(G) is the join of either 𝒢 (when k is even), or 𝒢 (when k is odd), along ^k(C_n). The first two statements of the theorem follow from this and Theorem <ref>. The last statement follows from Theorem <ref>. As an example of Theorem <ref>, let 𝒢={G_i | 1≤ i ≤ 7} where every G_i = K_2, and let G be the join of 𝒢 along C_7, see Figure <ref>. Similarly, let G^0 be the join of {G_i = K_2 | 1≤ i ≤ 7} along C_7. Since μ(7)=3, we have the following three results from Theorem <ref>: * ^k(G)=G if and only if 6|k. 
* ^k(G)=G^0 if and only if k is an odd multiple of 3. * ^k(G)≅^ℓ(G) if and only if k≡ℓ 2. We continue with a construction of graphs with even metamour period k. Let G be a graph and k∈^+ even. Then G embeds as an induced subgraph into a graph G' with metamour period k. In addition, we can achieve |V(G')|=|V(G)|+4 for k=2 and |V(G')|=|V(G)|+2^k-2 for k≥ 4. Let n=5 for k=2 and n=2^k-1 for k≥ 4. Observe that n≥ 5 with μ(n)=k. Let 𝒢={G_i | 1≤ i ≤ n} with G_1=G and G_i=K_1 for 2≤ i ≤ n. Let G' be the join of 𝒢 along C_n so that G embeds as an induced subgraph into G'. By Theorem <ref> (or Theorem <ref> if G is trivial), G' has metamour period k. In particular, for any even k, there are infinitely many connected graphs whose metamour period is k. This result is in stark contrast to our upcoming Conjecture <ref> (concerning odd metamour periods). We close this section with a variation of Theorem <ref> which provides a construction of graphs with even pseudo-metamour period k. Let G be a graph and k∈^+ even. Then G embeds as an induced subgraph into a graph G' with metamour period k and G'^i(G') for all 1≤ i< k. In particular, G' has pseudo-metamour period k. Let n=2^k+1. Then n≥ 5 with μ(n)=k. Choose any collection of graphs 𝒢={G_i | 1≤ i ≤ n} such that G_1=G and all |V(G_i)| are distinct. Let G' be the join of 𝒢 along C_n, and use Theorem <ref> for the result. § MORE ON GRAPHS WITH METAMOUR PERIOD 2 In this section and the next, we study graphs with metamour period 2 and 3, respectively. In particular, in this section, we will be focusing on the question whether there exist any graphs with metamour period 2 that do not result from joining graphs along C_5. We will aim towards a more complete characterization of graphs with metamour period 2. It is straightforward to verify that ^2(C_5)=C_5. More generally, by adjoining vertices to C_5 as depicted in Figure <ref>, we can construct additional examples of graphs with metamour period 2. These examples lead to a general construction of an infinite family of graphs with metamour period 2, constructed via Definition <ref>. An important element of this construction will be the graph defined below: We write C_5 to denote the graph obtained from C_5 by adding an additional vertex that is connected to two adjacent vertices of C_5. (The notation is meant to suggest C_5 with a “hat.” This is the graph in the center of Figure <ref>.) The graph C_5 already provides a simple example of a graph with metamour period 2 that is not the result of joining graphs along C_5. Can we tell any more about the structure of graphs with metamour period 2? We start with a very general observation. Let G be a graph with metamour period 2, and let G' denote the join of a collection of graphs 𝒢 along G. Then G' has again metamour period 2. In particular, this holds for any join of graphs {G_1, …, G_5} along C_5, see Figure <ref>, and any join of graphs {G_1, …, G_6} along C_5, see Figure <ref>. Let 𝒢= { G_i | i∈ I }, and denote 𝒢= {G_i | i∈ I }. Using Definition <ref>, it is straightforward to check that (G') is the join of 𝒢 along (G). From this, it is straightforward to verify that . The graph on the right side of Figure <ref> is the join of {K_1,K_1,K_1,K_1,K_1,K_2} along C_5. Note also that every nontrivial metamour-complementary self-com­ple­mentary graph G has metamour period 2 and thus qualifies for Theorem <ref>. This includes all Paley graphs, see Theorem <ref>. Our next result can be viewed as a generalization of Theorem <ref>. 
Suppose G=(V,E) is a connected graph with metamour period 2. Then (G) is connected and either (G) = ((G)) = 2 (G)=G, or (G) = ((G)) = 3 (G)⊊G. Moreover, for all distinct x,y∈ V, we have d_G(x,y)=1 d_(G)(x,y) = 2, d_G(x,y)=2 d_(G)(x,y) = 1, d_G(x,y)=3 d_(G)(x,y) = 3. Since ^2(G) = G is connected, (G) is connected. It follows that (G) ≥ 2 since (K_n)=K_n is disconnected for n≥ 2 while K_1 has metamour period 1. Moreover, if there existed x,y∈ V with d_G(x,y)=4, then there would exist z∈ V so that d_G(x,z)=d_G(z,y)=2. This would force xz,yz∈ E((G)) and xy∉E((G)). By metamour periodicity, this would show that xy∈ E, a contradiction. As (G) also has metamour period 2, the roles of G and (G) are symmetrical and we see that 2 ≤(G), ((G)) ≤ 3. Now, if (G)=2, then for distinct x,y∈ V, either xy∈ E or xy ∈(G). From this, it follows that (G) = G. By periodicity and Theorem <ref>, (G)=2 as well. By symmetry, we see that (G) =2 ((G)) = 2. From this, we also conclude that (G) =3 ((G)) = 3. In this case, the existence of distance-three vertices implies that (G)≠G. Now let x,y∈ V be distinct. By periodicity, d_G(x,y)=1 implies d_(G)(x,y) = 2. Moreover, by Definition <ref>, d_G(x,y)=2 implies d_(G)(x,y) = 1. By symmetry, we thus obtain both equivalences involving distance 1 and 2. In turn, this yields the equivalence involving distance 3. Suppose G is a connected graph with metamour period 2 and (G) = 3. Then G contains a subgraph isomorphic to C_5 (shown in the middle of Figure <ref>). Let x,y ∈ V(G) with d_G(x,y) = 3. As d_(G)(x,y) = 3, let (w_0,w_1,w_2,w_3) be a minimal path from x to y in (G). For 0≤ i ≤ 2, since w_i w_i+1∈ E((G)), there is some u'_i ∈ V(G) with w_iu'_i,u'_iw_i+1∈ E(G). Define a walk (u_0,u_1,…,u_6) by u_i = w_i/2, for i even, u'_(i-1)/2, otherwise. After possibly relabeling (see Figure <ref>), it is straightforward to verify that there are two possibilities: * All of the vertices u_i are distinct. * The only vertices u_i that are not distinct are u_3 = u_5. By minimality, we have u_0u_4,u_2u_6∈ E(G), giving us the two possible configurations depicted in Figure <ref>. If the vertices are all distinct, then, after possibly relabeling, it is straightforward to check that u_0 u_3 ∈ E((G)) and u_3 u_6 ∈ E(G). As a result, in either possibility from above, we have a copy of C_5 (see Figure <ref>). In light of Theorem <ref>, it seems reasonable to ask for a full characterization of diameter-3 graphs with metamour period 2 (see Theorem <ref>). In particular, we have the following question: Is every connected diameter-3 graph with metamour period 2 the join of some graphs along C_5 (as in Figure <ref>)? A negative answer to Question <ref> may result in the discovery of a whole new family of diameter-3 graphs with metamour period 2 similar to Paley graphs as diameter-2 graphs with metamour period 2. In this context, one may also ask the following question: Is every connected diameter-2 graph with metamour period 2 the join of some graphs along a Paley graph? § GRAPHS WITH METAMOUR PERIOD 3 In contrast to Section <ref>, we will provide a full characterization of graphs with metamour period 3 in Theorem <ref> and Corollary <ref>. En route to these results, we begin with a slate of lemmas. Let G be a connected graph with metamour period 3. * Then E(G), E((G)), and E(^2(G)) are pairwise disjoint. * If v_1v_2,v_2v_3 ∈ E(G') for some G' ∈{G,(G),^2(G)}, then v_1v_3 ∉ E(^2(G')). * If v_1v_2,v_2v_3 ∈ E(G') and v_1v_2',v_2'v_3 ∈ E((G')) for some G' ∈{G,(G),^2(G)}, then v_1v_3 ∈ E((G')). 
The first part follows from Lemma <ref> applied to each of the pairs of graphs ^i(G) and ^i+1(G) for 0≤ i ≤ 2 combined with metamour period 3. For the second part, Definition <ref> shows that v_1v_2,v_2v_3 ∈ E(G') implies v_1 v_3 is in either E(G') or E((G')). Combined with part one, we are done. For the third part, v_1v_2',v_2'v_3 ∈ E((G')) implies v_1 v_3 is in either E((G')) or E(^2(G')). Combined with part two, we are done. Suppose G=(V,E) is a connected graph with metamour period 3. For v_1v_2 ∈ E, there is a walk (w_0,w_1,…,w_8) in G such that w_0=v_1, w_8 = v_2, w_0w_2, w_2w_4, w_4w_6, w_6w_8 ∈ E((G)), and w_0w_4, w_4w_8 ∈ E(^2(G)), see Figure <ref>. Furthermore, one of the following must hold: * All vertices w_i are distinct. * The only vertices w_i that are not distinct are w_0 = w_7 and w_1 = w_8. Begin with w_0=v_1 and w_8=v_2 so that w_0 w_8 ∈ E. By metamour period 3, there exists w_4 ∈ V so that w_0 w_4, w_4 w_8 ∈ E(^2(G)). From this it follows that there exist w_2, w_6 ∈ V so that w_0w_2, w_2w_4, w_4w_6, w_6w_8 ∈ E((G)). Finally, this shows that there exist w_1, w_3, w_5, w_7 ∈ V so that (w_0,w_1,…,w_8) in a walk in G. By construction, any pair of adjacent vertices in Figure <ref> must be distinct in G (solid lines), respectively (G) (dotted lines), and ^2(G) (dashed lines). Furthermore, Lemma <ref> shows that w_0 ∉{w_3,w_5,w_6}, w_1 ∉{w_3,w_4,w_5,w_6}, and w_2 ∉{w_5,w_6,w_7,w_8}. Next we show that w_3 and w_5 are distinct. By way of contradiction, suppose w_3 = w_5. From Lemma <ref>(3), we would have w_2w_6 ∈ E((G)). Therefore, either w_0w_6 ∈ E((G)), which implies w_0w_8 ∉ E, or w_0w_6 ∈ E(^2(G)), which implies w_4w_6 ∉ E((G)). Contradiction. By similar arguments, w_1 ≠ w_7 and w_1w_3,w_5w_7 ∈ E((G)). By symmetry, it only remains to show that assuming w_0 = w_7 and w_1 ≠ w_8 leads to a contradiction, see Figure <ref>. Case 1: w_1w_8 ∈ E. Since w_0w_8∈ E, we get w_8w_2 ∉ E((G)) and hence w_8w_2 ∈ E. Thus, either w_8w_3 ∈ E, which implies w_8w_4 ∉ E(^2(G)), or w_8w_3 ∈ E((G)), which implies w_1w_8 ∉ E. Contradiction. Case 2: w_1w_8 ∈ E((G)). By Lemma <ref>, we have w_1w_6 ∈ E((G)). Since w_1w_6,w_6w_4 ∈ E((G)), we get either w_1w_4 ∈ E((G)), which implies w_1w_2 ∉ E, or w_1w_4 ∈ E(^2(G)), which implies w_8w_1 ∉ E((G)). Contradiction. Suppose G is a connected graph with metamour period 3. Then G contains an induced copy of either C_7 or C_9. Begin with the walk from Lemma <ref>, (w_0,…,w_8). If w_0 = w_7 and w_1 = w_8, we will see that we get an induced C_7. When all w_i are distinct, we will get an induced C_9. As the arguments are similar and overlap, we give details here only for the first case. Suppose that w_0 = w_7, w_1 = w_8, and all other w_i are distinct. By Lemma <ref>(1), we see that w_5w_0, w_0w_2, w_2w_4, w_4w_6, w_6w_1, w_1w_3, w_1w_4, w_4w_0 ∉ E(G). Since w_6w_4,w_4w_2 ∈ E((G)), we have w_6w_2 ∉ E(G) from Lemma <ref>(2). Similar arguments show w_2w_5, w_3w_6 ∉ E(G). Since w_4w_0 ∈ E(^2(G)), we have w_0w_3 ∉ E(G). Similarly, w_5w_1 ∉ E(G). Finally, suppose w_3w_5 ∈ E(G). Then w_2w_5 ∈ E((G)). Since w_4w_2,w_2w_5 ∈ E((G)), we get w_4w_5 ∉ E(G). Contradiction. Suppose G is a connected graph with metamour period 3. Then G contains an induced copy of C_n, n∈{7,9}, on vertices w_i, 0≤ i ≤ n-1, so that w_i w_i± 4∈ E(^2(G)) for all i, with indices interpreted n. Continue the notation and arguments from Lemmas <ref> and <ref>. We only give details in the case where all vertices w_i are distinct since the case of w_0 = w_7 and w_1 = w_8 is similar and straightforward. 
Note that w_i w_i± 2∈ E((G)) for all i. Thus, w_0w_2,w_2w_4 ∈ E((G)), and either w_0w_4 ∈ E((G)) or w_0w_4 ∈ E(^2(G)). Suppose w_0w_4 ∈ E((G)). Then either w_0w_6 ∈ E((G)), which implies w_0w_8 ∉ E(G), or w_0w_6 ∈ E(^2(G)), which together with w_0w_4 ∈ E(^2(G)) implies w_4w_6 ∉ E((G)). Contradiction. The others statements follow by similar arguments. We arrive at the main theorem of this section. G=(V,E) is a connected graph with metamour period 3 if and only if it is isomorphic to either C_7 or C_9. Let G be a connected graph with metamour period 3. Begin with the induced copy of C_n, n∈{7,9}, on w_0,…,w_n-1 from Lemma <ref>. It remains to show that G has no additional vertices. By way of contradiction, suppose there exists v∈ V ∖{w_0,…,w_n-1}. After relabeling, suppose vw_0 ∈ E. Then either vw_1 ∈ E or vw_1 ∈ E((G)). By way of contradiction, suppose vw_1 ∈ E. As a result, either v w_2 ∈ E or v w_2 ∈ E((G)). In the later case, combining with w_0 w_2 ∈ E((G)), we would get v w_0 ∉ E. Contradiction. Therefore, v w_2 ∈ E. We can similarly conclude vw_3,vw_4 ∈ E. As then w_0v,vw_4 ∈ E, it follows that w_0 w_4 ∉ E(^2(G)). Contradiction. Thus, vw_1 ∈ E((G)). Similarly, vw_n-1∈ E((G)). Since vw_1, w_1w_3 ∈ E((G)), we get either v w_3 ∈ E((G)) or v w_3 ∈ E(^2(G)). If vw_3 ∈ E(^2(G)), then vw_3,w_3w_n-1∈ E(^2(G)) by Lemma <ref>, and so vw_n-1∉ E((G)). Contradiction. Thus vw_3 ∈ E((G)), and either vw_5 ∈ E((G)) or vw_5 ∈ E(^2(G)). If vw_5 ∈ E(^2(G)), then w_1w_5 ∈ E(^2(G)) implies the contradiction vw_1 ∉ E((G)). Thus, vw_5 ∈ E((G)). Case 1: w_0 = w_7 and w_1 = w_8 with n=7. With vw_5, vw_n-1 = vw_6 ∈ E((G)), we have the final contradiction w_5w_6 ∉ E. Case 2: The vertices are all distinct with n=9. With vw_5 ∈ E((G)) we also have vw_4 ∈ E((G)) by symmetry. Now vw_4, vw_5 ∈ E((G)) implies w_4w_5 ∉ E. Contradiction. The following result is now immediate. A nontrivial graph has metamour period 3 if and only if it is the disjoint union of some copies of C_7 and C_9. In general, we conjecture that connected graphs with odd metamour period are rare. If true, it would be especially interesting to classify them. Note the conjectured difference to graphs with even metamour period, Theorem <ref>. For each odd k∈^+, there exist only finitely many connected graphs with metamour period k. § METAMOURS OF GENERALIZED PETERSEN GRAPHS To help with digestion of Definition <ref> below, we will begin with a walk of 2^k + 1 vertices and 2^k associated edges. For each i, 0 ≤ i ≤ k-1, this walk will be broken up into 2^k-i-1 smaller walks of 2^i+1 + 1 consecutive vertices and 2^i+1 edges. For the jth such smaller walk, 0 ≤ j ≤ 2^k-i-1-1, we will look at its first, middle, and last vertex. In particular, there will be 2^i edges each between the middle vertex and the vertices at either end of this small walk. Fix G=(E,V), u,v ∈ V distinct vertices, and k∈^+. A 2-walk of length k from u to v is a walk π = (w_0,w_1,…,w_2^k) in G with u=w_0 and v=w_2^k such that: * For all 0 ≤ i ≤ k-1 and 0 ≤ j ≤ 2^k-i-1-1, w_2j·2^i, w_(2j+1)·2^i, and w_2(j+1)·2^i are distinct. * For all 0 ≤ j ≤ 2^k-1-1, w_2jw_2(j+1)∉ E. Note that restriction of π to (w_2j· 2^i, …, w_2(j+1)· 2^i) gives a 2-walk of length i+1 from w_j· 2^i+1=w_2j· 2^i to w_(j+1)· 2^i+1 =w_2(j+1)· 2^i. We will say that π is fully minimal if, for all 1 ≤ i ≤ k-1 and 0 ≤ j ≤ 2^k-i-1-1, there is no 2-walk from w_j· 2^i+1 to w_(j+1)· 2^i+1 of length i. 
Again, restricting a fully minimal 2-walk π to (w_j· 2^i+1, …, w_(j+1)· 2^i+1) gives a fully minimal 2-walk of length i+1 from w_j· 2^i+1 to w_(j+1)· 2^i+1. Continue the notation from Definition <ref>. We will say that d_2(u,v) = i if the minimal length of a 2-walk between u and v is i. With this notation, if a 2-walk π = (w_0,w_1,…,w_2^k) of length k from u to v satisfies d_2(w_j· 2^i+1, w_(j+1)· 2^i+1)=i+1 for all 1 ≤ i ≤ k-1 and 0 ≤ j ≤ 2^k-i-1-1, then π will trivially be fully minimal. However, the converse of this statement is not true. Also, neither is it true that every minimal length 2-walk is fully minimal nor that every fully minimal 2-walk is of minimal length. In the case of k=1 in Definition <ref>, the existence of a (fully minimal) 2-walk of length k from u to v is equivalent to uv∈ E(^1(G)). However, this is no longer true when k≥ 2. To that end, from Definition <ref>, we immediately get the following condition to have an edge in ^k(G). Let G=(V,E), u,v ∈ V distinct vertices, and k∈^+. Then uv ∈ E(^k(G)) if and only if there is a 2-walk (w_0,w_1,…,w_2^k) of length k from u to v in G such that, for 0 ≤ i ≤ k and 0 ≤ j ≤ 2^k-i - 1, w_j· 2^iw_(j+1)·2^i∈ E(^i(G)). Lemma <ref> gives a necessary and sufficient condition to have an edge in ^k(G), but it is very hard to verify. Theorem <ref> below gives a sufficient condition that is easier to check if it is satisfied. Let G=(V,E) with distinct vertices u,v ∈ V. If there is a fully minimal 2-walk of length k from u to v in G, then uv ∈ E(^k(G)). Let (w_0,w_1,…,w_2^k) be a fully minimal 2-walk of length k from u to v. As w_2jw_2j+1,w_2j+1w_2(j+1)∈ E for 0 ≤ j ≤ 2^k-1-1 and w_2jw_2(j+1)∉ E, we get w_2jw_2(j+1)∈ E(^1(G)). If uv∉E(^k(G)), then, noting Lemma <ref>, choose the smallest i, 1 ≤ i ≤ k-1, such that there is some j, 0 ≤ j ≤ 2^k-i-1 - 1, so that w_j· 2^i+1w_(j+1)·2^i+1∉ E(^i+1(G)). However, minimality of i gives w_2j· 2^iw_(2j+1)·2^i, w_(2j+1)·2^iw_2(j+1)·2^i∈ E(^i(G)). Thus, we must have w_j· 2^i+1w_(j+1)·2^i+1=w_2j· 2^i w_2(j+1)·2^i∈ E(^i(G)) as w_j· 2^i+1w_(j+1)·2^i+1∉ E(^i+1(G)). Lemma <ref> now shows that there exists a 2-walk of length i from w_j· 2^i+1 to w_(j+1)·2^i+1, which is a contradiction. The next lemma will allow us to bootstrap up metamour orders by expanding 2-walks. Let uv ∈ E(^k(G)) and π =( w_0,w_1,…,w_2^k) a 2-walk from u to v in G such that, for 0 ≤ i ≤ k and 0 ≤ j ≤ 2^k-i - 1, w_j·2^iw_(j+1)·2^i∈ E(^i(G)). Suppose that, for 0 ≤ a ≤ 2^k - 1, there is a fully minimal 2-walk of length 2 from w_a to w_a+1. Then uv ∈ E(^k+2(G)) as well. Construct a 2-walk, π' = (w_0',w_1',…,w_2^k+2'), of length k+2 from π by replacing each edge w_a w_a+1 in π by its corresponding fully minimal 2-walk of length 2 from w_a to w_a+1. Observe that w'_4a = w_a. By construction, note that, for 0 ≤ j ≤ 2^k+1-1, we have w'_2jw'_2(j+1)∈ E(^1(G)). Arguing as in Lemma <ref>, suppose uv ∉E(^k+2(G)). Using Lemma <ref>, choose the smallest i, 1 ≤ i ≤ k+1, such that there is some j, 0 ≤ j ≤ 2^k-i-1 - 1, so that w_j· 2^i+1' w_(j+1)·2^i+1' ∉ E(^i+1(G)). By full minimality of the added 2-walks, we see that i≥ 2. By minimality of i, w'_2j·2^iw'_(2j+1)·2^i,w'_(2j+1)·2^iw'_2(j+1)·2^i∈ E(^i(G)) which forces w_j· 2^i+1' w_(j+1)·2^i+1' ∈ E(^i(G)). Thus w_j·2^i-1 w_(j+1)·2^i-1∈ E(^i(G)), which violates its membership in E(^i-1(G)). Write G(m,j) for the generalized Petersen graph where m,j∈^+ with m≥ 5 and 1≤ j < m/2. 
We will use {v_i, u_i | 0≤ i < m} as vertex set with edges v_i v_i+1, v_i u_i, and u_i u_i+j for all 0≤ i<m, where indices are to be read modulo m. We may refer to the {v_i} as the exterior vertices and the {u_i} as the interior vertices. Observe that the interior vertices break up into (m,j) cycles of size m/(m,j) each, where (m,j) denotes the greatest common divisor of m and j. Our first main result, Theorem <ref>, will calculate the metamour limit period and metamour limit set of G(m,2). With an eye towards applying Lemma <ref> in the context of certain generalized Petersen graphs, we prove the following lemma. Let m∈^+ with m≥ 5. If uv ∈ E(G(m,2)), there exists a fully minimal 2-walk of length 2 from u to v. It will be sufficient to show that every edge of G(m,2) lies in an induced subgraph isomorphic to C_5. For this, look at the cycle given by (v_i,v_i+1,v_i+2,u_i+2,u_i,v_i), see Figure <ref>. The next theorem shows that metamour edges persist in G(m,2) and are sorted only by parity. Let m,n,ℓ,ℓ_1,ℓ_2∈ with m≥ 5. Then E(^n(G(m,2))) ⊆ E(^n+2ℓ(G(m,2))) and E(^ℓ_1(G(m,2))) ∩ E(^ℓ_2(G(m,2))) = ∅ if ℓ_1 and ℓ_2 have opposite parities. Lemmas <ref> and <ref> show that E(^n(G(m,2))) ⊆ E(^n+2ℓ(G(m,2))). From this, we see that E(^ℓ_1(G(m,2))) ∩ E(^ℓ_2(G(m,2))) is empty if ℓ_1 and ℓ_2 have opposite parities since the two sets can be embedded into adjacent metamour powers of G(m,2). In order to calculate the metamour limit set for G(m,2), we continue developing our notations from Definition <ref>. Let k∈^+. Define the fully minimal k-set, _k(G(m,2)) ⊆ E(G(m,2)) ∪ E(G(m,2)), as the set of all edges uv, with distinct u,v∈ V(G(m,2)), for which there exists a fully minimal 2-walk of length k from u to v. Let _ev(G(m,2)) = ⋃_k even_k(G(m,2)), _od(G(m,2)) = ⋃_k odd_k(G(m,2)). We say that _ev(G(m,2)) stabilizes by N∈^+ if _ev(G(m,2)) = ⋃_k even k ≤ N_k(G(m,2)). Similarly, _od(G(m,2)) stabilizes by N∈^+ if _od(G(m,2)) = ⋃_k odd k ≤ N_k(G(m,2)). Note that N is not unique. By Theorem <ref>, we see that _k(G(m,2)) ⊆^k(G(m,2)). Next we see that the fully minimal k-sets eventually capture all possible edges uv, u,v∈ V(G(m,2)). Let m∈^+ with m≥ 5 and distinct u,v∈ V(G(m,2)). Then there exists k∈^+ such that uv∈_k(G(m,2)). The special case m=6 can easily be verified by direct inspection. For m 6, the proof examines, by symmetry, the three possible options for u and v to be either exterior or interior vertices. As the cases are similar, we give details only for the case where u and v are both exterior vertices. First choose α∈^+, α≥ 3, so that 2^α-1 < m ≤ 2^α. After possibly relabeling, and also conflating _m with when convenient, we may assume u=v_a and v=v_b with 0≤ a<b and 1≤ b-a ≤m/2. For b-a =1, we have uv ∈_2(G(m,2)) with Lemma <ref>, and for b-a =2, one easily checks uv ∈_1(G(m,2)). Thus, we can restrict ourselves to the case 3≤ b-a ≤m/2. Choose β∈^+, β≥ 2 so that 2^β-1 < b-a ≤ 2^β. Note that 2^β < 2(b-a)≤ m ≤ 2^α, thus β≤α -1. We will look at walks π from v_a to v_b of the form (v_a, v_a+1, …, v_a+x, u_a+x, u_a+x+2, …, u_a+x+2y, v_a+x+2y, v_a+x+2y-1, …, v_b) with x,y∈ and a+x+2y ≥ b. Such a walk has length x+1+y+ 1+ (a+x+2y-b) = 2x +3y +2 -(b-a). We will require that 2x +3y +2 -(b-a) = 2^β. Note that it can be verified that x+2y<m so that, in fact, π is always a path. Write c=2^β-2+ b-a≥ 2. If c≡ 0 3, use x=0 and y=c/3 and observe that this choice indeed satisfies a+x+2y ≥ b. In this case, we have π = (v_a,u_a, u_a+2, …, u_a+2y, v_a+2y, v_a+2y-1, …, v_b), and we claim that π is fully minimal. 
First, as 2^β < 2(b-a)≤ m, we have c=2^β-2+ b-a < 3/2 m and 2y= 2/3 c< m, and π is indeed a path. Moreover, for m 6, condition (2) of Definition <ref> is automatically satisfied, too, and π is a 2-walk of length β. If we relabel π as (w_0,w_1,…,w_2^β), it now suffices to show that there is no 2-walk from w_j· 2^i+1 to w_(j+1)· 2^i+1 of length i for all 1 ≤ i ≤β-1 and 0 ≤ j ≤ 2^β-i-1-1. The argument breaks into four cases depending on the location of w_j· 2^i+1 and w_(j+1)· 2^i+1 with respect to exterior and interior vertices. As the arguments are similar, we only give details here for two representative cases. For the first case considered here, suppose that w_j· 2^i+1 and w_(j+1)· 2^i+1 are both exterior vertices, neither equal to w_0. The exterior path between these two vertices has 2^i+1 edges with 2^i+1≤ 2y-(b-a). As y=1/3(2^β-2+(b-a)), it follows that 2^i+1≤2/3(2^β-2-1/2(b-a)) < 2^β, and i ≤β -2 ≤α -3. However, the shortest possible path between w_j· 2^i+1 and w_(j+1)· 2^i+1 would have at least either 2^i+2 edges going along interior vertices or m-2^i+1/2+2 edges by going in the opposite direction. The first possibility is too large to admit a 2-walk of length 2^i. Turning to the second possibility, the existence of a 2-walk of length 2^i would require m-2^i+1/2+2 ≤ 2^i so that m ≤ 2^i+2-4. Thus m < 2^i+2 and α≤ i+2, which is a contradiction. For the second case considered here, suppose w_j· 2^i+1 and w_(j+1)· 2^i+1 are both interior vertices. Then j≥ 1 and 2^i+2≤ (j+1)· 2^i+1≤ y+1 = 1/3(2^β-2+(b-a))+1 ≤1/3(2^β+1+1) < 2^β so that i≤β -3 ≤α -4. However, the shortest possible path between w_j· 2^i+1 and w_(j+1)· 2^i+1 has either 2^i+1 or m/2-2^i+1 edges. The first possibility is too large to allow a 2-walk of length 2^i. For the second possibility to work, we would need m/2-2^i+1≤ 2^i so that m≤ 2^i+2+2^i+1<2^i+3. Then α≤ i+3, which is a contradiction. Finally, the cases of c≡ 1 3 and c≡ 2 3 are done using x=2 and x=1, respectively. The details are similar and omitted. Finally, we can calculate the metamour limit period and metamour limit set of G(m,2). Let m∈^+ with m≥ 5. Then G(m,2) has metamour limit period 2. The metamour limit set consists of (V(G(m,2)),_ev(G(m,2))) and (V(G(m,2)),_od(G(m,2))), where _ev(G(m,2)) and _od(G(m,2)) stabilize by 2 ⌊ m/2 ⌋ +m-8. Moreover, _ev(G(m,2)) = E(^2ℓ(G(m,2))) and _od(G(m,2)) = E(^2ℓ+1(G(m,2))) for all sufficiently large ℓ∈. Finally, _ev(G(m,2)) ∪_od(G(m,2)) = E(G(m,2)) ∪ E(G(m,2)). By Theorem <ref>, we see that _k(G(m,2)) ⊆ E(^k(G(m,2))). By Theorem <ref> and the fact that G(m,2) is finite, we see that _ev(G(m,2)) ⊆ E(^2ℓ(G(m,2))) and _od(G(m,2)) ⊆ E(^2ℓ+1(G(m,2))) for all sufficiently large ℓ∈. From Theorem <ref> and Lemma <ref>, we see that E(^2ℓ(G(m,2))) ∩ E(^2ℓ+1(G(m,2))) = ∅ and _ev(G(m,2)) ∪_od(G(m,2)) = E(G(m,2)) ∪ E(G(m,2)) so that, in fact, _ev(G(m,2)) = E(^2ℓ(G(m,2))) and _od(G(m,2)) = E(^2ℓ+1(G(m,2))) for all sufficiently large ℓ∈. For the statement on stability, first observe that G(m,2) ∪G(m,2) has 2m2 = m(2m-1) edges, which group up into 2 ⌊ m/2 ⌋ +m equivalence classes with respect to index shift modulo m. For m 5, E(G(m,2)) and E((G(m,2))) consist of 3 and 6 of these classes, respectively. Moreover, by Theorem <ref>, for growing even and odd k, respectively, E(^k(G(m,2))) either stays the same or grows by adding one or several of these equivalence classes. The metamour iterates will stabilize if E(^k(G(m,2)))=E(^k+2(G(m,2))), which will happen for some k≥ 2 ⌊ m/2 ⌋ +m-8. 
We end with a characterization for the connectedness of (G(m,j)). From Definition <ref>, observe that (G) is connected if and only if for each distinct u,v∈ V(G), there exists n∈^+ and a walk in G, (w_0,w_1,…,w_2n), with w_2i,w_2i+1,w_2(i+1) distinct and w_2iw_2(i+1)∉E(V) for all 0≤ i ≤ n-1. Let m,j ∈^+ with m ≥ 5 and j < m/2. Then (G(m,j)) is connected if and only if either * m is odd or * m and j are even. Otherwise, (G(m,j)) has two connected components. Let u,v∈ G(m,j) be distinct. Consider first the case of odd m. If both u,v are exterior vertices, then moving either in a clockwise or counterclockwise manner around the exterior vertices will furnish a path of even length showing that u and v are connected in (G(m,j)). It remains to show that each interior vertex is connected to an exterior vertex in (G(m,j)). For this, use the path (u_i,u_i+j,v_i+j) for i∈_m. Now consider the case of m,j even. Following an argument similar to the one in the previous paragraph, it is immediate that the sets of vertices {v_i, u_i | i∈ 2_m} and {v_i, u_i | i∈ 2_m+1} are both connected in (G(m,j)). The path (v_1,v_2,u_2) finishes this case. Finally, consider the case of m even and j odd. Following again an argument similar to the one in the first paragraph, it is immediate that the sets of vertices {v_i, u_i+1 | i∈ 2_m} and {v_i, u_i+1 | i∈ 2_m+1} are both connected in (G(m,j)). However, as these two sets of vertices provide a 2-coloring of G(m,j), (G(m,j)) cannot be connected. § METAMOUR GRAPHS OF COMPLETE M-ARY TREES In this section, we give a full description of the periodic behavior and limit set of the complete m-ary tree under the metamour operation. From now on, for h,m ∈^+ with m≥ 2, we let T T(h,m) denote the complete m-ary tree with height h, where the height is the number of levels below the root vertex of T. Recall that the depth of a vertex is its distance from the root. It turns out that the central role in our analysis is played by ^2(T). We thus begin with some auxiliary lemmas relating to this graph. Let x,y ∈ V(T). Then xy ∈ E(^2(T)) if and only if d_T(x,y) = 4. Suppose that xy ∈ E(^2(T)). Then by Lemma <ref>, there exists a 2-walk π = (x, u, v, w, y) in T, in the sense of Definition <ref>; in particular, the vertices x, v and y are all distinct. Since T contains no cycles, it follows that xy ∉ E(T) and therefore u ≠ y and x ≠ w. Moreover, we must have u ≠ w because otherwise d_T(x,y) = 2, which means xy ∈ E((T)) and thus xy ∉ E(^2(T)), contradicting our supposition. Therefore π is a path. Since T contains no cycles, there is no shorter path from x to y, and thus d_T(x,y) = 4. Conversely, suppose that d_T(x,y) = 4. Then there is a fully minimal 2-walk of length 2 from x to y (in the sense of Definition <ref>), and so by Theorem <ref> we have xy ∈ E(^2(T)). In order to describe the relative positions of vertex pairs, we introduce the following shorthand. Given vertices x,y ∈ V(T), there is a unique minimal path from x to y, consisting of a sequence of p upward steps followed by a sequence of q downward steps, with p,q ∈ such that p+q = d_T(x,y). We abbreviate this minimal path as π_T(x,y) = (x, ↑^p, ↓^q, y). For example, x and y are first cousins if and only if we have π_T(x,y) = (x, ↑^2, ↓^2, y); as another example, x is the great-grandchild of y if and only if π_T(x,y) = (x,↑^3,↓^0, y). Let T = T(h,m) with h ≥ 5, and let xy ∈ E(^2(T)). Then there exists z ∈ V(T) such that both xz and yz are in E(^3(T)). We need to find z such that d_^2(T)(x,z) = d_^2(T)(y,z) = 2. 
Equivalently, z must be distinct from x and y, and must be connected to x (and separately to y) by concatenating two length-4 paths in T; moreover, we must have d_T(x,z) ≠ 4 and d_T(y,z) ≠ 4. Since by hypothesis we have xy ∈ E(^2(T)), Lemma <ref> implies that d_T(x,y)=4. Therefore, there are three cases (up to symmetry) which must be checked: * If π_T(x,y) = (x,↑^4,↓^0,y), then we can take z such that π_T(x,z) = (x,↑^4,↓^2,z), as in Figure <ref>(A). * If π_T(x,y) = (x, ↑^3, ↓^1,y), then we can take z such that π_T(x,z) = (x,↑^4, ↓^4, z) as depicted in Figure <ref>(B), as long as x has depth ≥ 4. We can also take z such that π_T(x,z) = (x, ↑^2, ↓^4, z) as depicted in Figure <ref>(C), as long as x has depth ≤ h-2. The tree T(h,m) admits at least one of these two possibilities if and only if h ≥ 5. * If π_T(x,y) = (x, ↑^2, ↓^2, y), then we can take z such that π_T(x,z) = (x, ↑^4, ↓^2, z) as in Figure <ref>(D), as long as x and y have depth ≥ 4. We can also take z such that π_T(x,z) = (x, ↑^2, ↓^0, z) as in Figure <ref>(E), as long as x and y have depth ≤ h-2. The tree T(h,m) admits at least one of these possibilities if and only if h ≥ 5. Let T = T(h,m), where h ≥ 5. Suppose d_^2(T)(x,y) = 3. Then there exists z ∈ V(T) such that both xz and yz are in E(^3(T)) unless x and y both have depth 1 in T with m = 2 and h ∈{5,6}. We need to find a vertex z with the same properties described in the proof of Lemma <ref>. Up to symmetry, there are 15 relative positions for x and y such that d_^2(T)(x,y) = 3, and such that x and y do not both have depth 1 in T. (To determine these 15 positions, one need only check vertex pairs whose distance is a positive even integer which is at most 3 × 4 = 12.) In Table <ref>, we exhibit a choice of the desired vertex z for each of these 15 positions, using the shorthand in (<ref>). (Note that the vertex z described in the table may not be unique; rather, any z that satisfies the location given in the table has the desired property.) In each case, it is straightforward to verify (via diagrams like those in Figure <ref>) that both xz and yz belong to E(^3(T)). Suppose now that x and y both have depth 1 in T. If m > 2, then we observe that d_^2(T)(x,y) = 2. If m = 2 and h>6, then we can take z such that π_T(x,z) = (x,↑^1, ↓^7, z). If, however, m=2 and h ∈{5,6}, then d_^2(T)(x,y) = 3, and it is straightforward to verify that there is no vertex z with the desired property. Let T = T(h,m) with h ≥ 5. Then ^2(T) has two connected components, namely the vertices with even depth and the vertices with odd depth. The maximum of the diameters of these two connected components is ⌈ h/2 ⌉. It is clear that if d_T(x,y) = 4, then the depths of x and y have the same parity. Thus by Lemma <ref>, if xy ∈ E(^2(T)) then the depth of x and the depth of y have the same parity. Conversely, we claim that any two vertices x and y, whose depths have the same parity, are connected in ^2(T). To see this, observe that because T is a tree, there is a unique path in T between x and y, which necessarily has even length. If this length is divisible by 4, then we are done by Lemma <ref>. Thus, let us assume that the length is 4ℓ+2 for some ℓ∈, and that the path is given by (x,w_1,w_2,…,w_4ℓ+1,y). For ℓ>0, if w_4ℓ-1 is distinct from the root of the tree T, let z denote any neighbor of w_4ℓ-1 distinct from both w_4ℓ-2 and w_4ℓ. 
Then, (x,w_1,w_2,…,w_4ℓ-2,w_4ℓ-1,z,w_4ℓ-1,w_4ℓ,w_4ℓ+1,y) is a walk in T of length 4ℓ for which, starting with the vertex x, every fourth vertex is distance 4 from the previous vertex. Thus, the path (x,w_4,w_8,…,w_4ℓ-4,z,y) connects x and y in ^2(T). If w_3 is distinct from the root of the tree T, we can make a similar argument to show that x and y are connected in ^2(T). This leaves us to consider the following special cases: * Suppose that ℓ=1, and that w_3 is the root of the tree T. In this situation, x and y have both depth 3 and are connected in ^2(T) as depicted in Figure <ref>(A). * Suppose that ℓ=0, and that x and y have distinct depths. Without loss of generality, let us assume that x has the larger depth. Then, as long as x has depth ≥ 4, x and y are connected in ^2(T) as depicted in Figure <ref>(B). If x has depth ≤ h-2, an alternative connecting path is shown in Figure <ref>(C). * Suppose that ℓ=0, and that x and y have the same depth. Then, as long as x has depth ≥ 2, x and y are connected in ^2(T) as depicted in Figure <ref>(D). If x has depth ≤ h-4, an alternative connecting path is shown in Figure <ref>(E). Hence ^2(T) has exactly two connected components. The argument above also shows that if x and y are in the same connected component with d_T(x,y)≥ 8, then d_^2(T)(x,y) = ⌈ d_T(x,y)/4 ⌉. Therefore, since diam(T) = 2h, the maximum distance between connected vertices in ^2(T) is ⌈ h/2 ⌉. To streamline the statement of our results, we partition the positive integers into segments S_i whose endpoints are consecutive powers of 2: S_i (2^i-1, 2^i], i = 0, 1, 2, …, where we use the standard interval notation restricted to the integers. In other words, we have S_i = { s ∈^+ |⌈log_2 s ⌉ = i}. Note that S_0 = {1} and S_1 = {2}. Let i∈^+ with i ≠ 2. If i is even (resp., odd), then every element of S_i can be written as the sum of two (not necessarily distinct) elements in the union of S_1 (resp., S_0) and S_i-1. The statement is obviously true for i=1. Thus let i ∈^+ with i≥ 3. We have S_i = [2^i-1 + 1, 2^i] and S_i-1 = [2^i-2+1, 2^i-1]. Since the sumset of S_i-1 is {s+t | s,t ∈ S_i-1} = [2min S_i-1, 2max S_i-1] = [2^i-1 + 2, 2^i], we have proved the lemma for all elements of S_i except for 2^i-1+1. Now, if i is even, we complete the proof by writing 2^i-1 + 1 as the sum of 2 ∈ S_1 and 2^i-1-1 ∈ S_i-1. Likewise, if i is odd, we complete the proof by writing 2^i-1 + 1 as the sum of 1 ∈ S_0 and 2^i-1∈ S_i-1. Let ℓ∈^+. Then every element of ⋃_0 ≤ i ≤ ℓ i ≡ ℓ ( mod 2) S_i, except for 1 and 3 (which appear only when ℓ is even), can be written as the sum of two (not necessarily distinct) elements of ⋃_0 ≤ i ≤ ℓ-1 i ≡ ℓ-1 ( mod 2) S_i. This follows immediately from Lemma <ref>. Note that the sets S_0={1} and S_2={3,4} are excluded from Lemma <ref> which leads to the exceptional cases. It turns out that the distances in ^2(T) are sufficient to completely describe the edges in every subsequent metamour graph of T. The key to the proof of the following lemma is Corollary <ref> above, which will allow us always to split a path in ^2(T) into two subpaths of desired lengths. Let T = T(h,m) be the complete m-ary tree with height h ≥ 5. Let x,y ∈ V(T), with the exception of the case depth x = depth y = 1, m=2, h ∈{5,6}. For k ≥ 2, we have xy ∈ E(^k(T)) ⟺ d_^2(T)(x,y) ∈ ⋃_0 ≤ i ≤ k-2 i ≡ k 2 S_i , where the sets S_i are defined in (<ref>). In the case (<ref>), the edge xy does not occur in ^k(T) for any k ≥ 2. We use induction on k. 
In the base cases k=2 and k=3, the theorem is true by definition, since S_0 = {1} and S_1 = {2}. Note that in the exceptional case (<ref>), we have d_^2(T)(x,y) = 3 (see the proof of Lemma <ref>), and indeed 3 does not belong to S_0 or S_1. As our induction hypothesis, assume that the theorem holds up to some value of k; we now show that it also holds for k+1. We first prove the “⟹” direction of the biconditional in the theorem. For ease of notation, in the rest of this proof we abbreviate d d_^2(T)(x,y). Let xy ∈ E(^k+1(T)). Then xy ∉ E(^k(T)). Thus, by our induction hypothesis, d∉⋃_0 ≤ i ≤ k-2, i ≡ k 2 S_i. Hence, we must have either d∈⋃_0 ≤ i ≤ k-3, i ≡ k+1 2 S_i or d∈⋃_i≥ k-1 S_i. In the latter case, we claim that actually d ∈ S_k-1. To see this, recall that xy ∈ E(^k+1(T)) implies xz,yz ∈ E(^k(T)) for some z∈ V(T). We have d_^2(T)(x,z), d_^2(T)(z,y) ≤max S_k-2 = 2^k-2 by induction hypothesis. Hence, d≤ d_^2(T)(x,z)+d_^2(T)(z,y)≤ 2^k-1 by triangle inequality, which is the largest element of S_k-1. Thus, xy ∈ E(^k+1(T)) implies d∈⋃_0 ≤ i ≤ k-1, i ≡ k+1 2 S_i, which proves the “⟹” direction in the theorem. To prove the converse, suppose that d∈⋃_0 ≤ i ≤ k-1, i ≡ k+1 2 S_i. Then automatically xy ∉E(^k(T)) by induction hypothesis, and we must show that xy ∈ E(^k+1(T)). Assume for now that d ≠ 1,3. Then, by Corollary <ref>, where ℓ = k-1, the number d can be written as the sum of two (possibly equal) elements b,c ∈⋃_i ≤ k-2 i ≡ k mod 2 S_i. Since b+c = d, there exists some z ∈ V(T) such that d_M^2(T)(x,z) = b and d_M^2(T)(z,y) = c. Moreover, by (<ref>) and the induction hypothesis, both xz and yz lie in E(^k(T)). Since xy is not an edge in ^k(T), we conclude that xy ∈ E(^k+1(T)), thereby proving the “⟸” direction in the theorem (as long as d ≠ 1, 3). It remains to treat the cases where d∈{1,3}. In either case, note that k is odd. By Lemmas <ref> and <ref> (for d=1 and d=3, respectively), there exists a vertex z ∈ V(T) such that both xz and yz are in E(^3(T)), except in the exceptional case (<ref>). Hence, outside of (<ref>), since k is odd, it follows that both xz and yz are in E(^k(T)). Therefore we have xy ∈ E(^k+1(T)), which completes the proof. In the case (<ref>), by Lemma <ref> there is no such vertex z, and so the edge xy does not occur in ^4(T), nor in any successive metamour graph as can easily be verified by direct inspection. We now give the main result of this section, showing that T(h,m) has metamour limit period 2, with a pre-period of ⌈log_2 h⌉: Let T = T(h,m) be the complete m-ary tree with height h ≥ 5. Then we have ^k(T) = ^k+2(T) if and only if k ≥⌈log_2 h ⌉. In this range, the two graphs in the metamour limit set of T are given by the edge criterion xy ∈ E(^k(T)) ⟺⌈log_2 d_^2(T)(x,y) ⌉≡ k mod 2 for all x,y ∈ V(T), with the exception of the case where depth x = depth y = 1 with m=2 and h ∈{5,6}; in that case, xy is not an edge in either metamour limit graph. It is clear from Lemma <ref> that if k ≥ 2, then xy ∈ E(^k(T)) implies xy ∈ E(^k+2(T)). By Lemma <ref>, the maximum distance between two connected vertices in ^2(T) is ⌈ h/2 ⌉. By (<ref>), the smallest integer i such that ⌈ h/2 ⌉∈ S_i is given by ⌈log_2 ⌈ h/2 ⌉⌉ = ⌈log_2 h ⌉ - 1. Therefore, if d_M^2(T)(x,y) = ⌈ h/2 ⌉, then by Lemma <ref>, the quantity in (<ref>) is the unique value of k such that xy ∉E(^k(T)) but xy ∈ E(^k+2(T)). Hence we have ^k(T) = ^k+2(T) if and only if k is strictly greater than (<ref>). Finally, for the sake of completeness, we describe the metamour graphs in the cases where h < 5. 
The following behavior can be verified directly by drawing the first few metamour graphs, and so we leave the details to the reader: * Let T = T(1,m). Then (T) = K_m ∪ K_1, and for all k ≥ 2, we have ^k(T) = K_m+1. * Let T = T(2,m). Then (T) = K_m ∪ Wd(m,m), where Wd(m,m) is the windmill graph obtained by taking m copies of K_m with a common vertex. We then have ^2(T) = (K_m)^∇ m∪K_m+1, followed by ^3(T) = (K_m)^∪ m∪K_m+1. (The notation G^∇ m denotes the join of m copies of G.) For all k ≥ 4, we have ^k(T) = K_m^2 + m + 1. * Let T = T(3,m). We have ^5(T) = (K_m)^∪ m^2∪K_m^2 + m + 1, and so ^k(T) = K_m^3+m^2+m+1 for all k ≥ 6. * Let T = T(4,m). Then T has metamour limit period 2, with the metamour limit set as given in Theorem <ref>. If m=2, then this periodic behavior begins at k = 4. If m ≥ 3, then this periodic behavior begins at k = 6.
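The limiting behaviour described above is straightforward to probe numerically. The sketch below is not part of the original argument; it assumes the usual convention that two vertices are metamours when they are distinct, non-adjacent, and share at least one common neighbour, builds T(h,m) with networkx, and reports the smallest k for which ^k(T) = ^k+2(T) next to the predicted pre-period ⌈log_2 h⌉.

```python
# Minimal sketch (illustration only): iterate the metamour construction on
# complete m-ary trees and locate the onset of the period-2 limit behaviour.
# Assumed convention: x and y are metamours iff they are distinct,
# non-adjacent, and have at least one common neighbour.
import math
import numpy as np
import networkx as nx

def metamour(A):
    # adjacency matrix of the metamour graph of the graph with adjacency A
    n = A.shape[0]
    common = (A.astype(int) @ A.astype(int)) > 0     # share a neighbour
    return common & ~A & ~np.eye(n, dtype=bool)      # non-adjacent, distinct

def limit_preperiod(h, m, kmax=10):
    T = nx.balanced_tree(m, h)                       # complete m-ary tree T(h, m)
    seq = [nx.to_numpy_array(T, dtype=bool)]
    for _ in range(kmax + 2):
        seq.append(metamour(seq[-1]))
    for k in range(kmax):
        if np.array_equal(seq[k], seq[k + 2]):       # M^k(T) == M^{k+2}(T)
            return k
    return None

for h, m in [(5, 2), (5, 3), (6, 2), (7, 2)]:
    print((h, m), limit_preperiod(h, m), math.ceil(math.log2(h)))
```

For each of these heights the reported onset should coincide with ⌈log_2 h⌉ = 3, in line with the theorem above.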
http://arxiv.org/abs/2409.02500v1
20240904075555
Geometry of temporal chiral structures
[ "Andres F. Ordonez", "Aycke Roos", "Pablo M. Maier", "David Ayuso", "Olga Smirnova" ]
quant-ph
[ "quant-ph" ]
^1Department of Physics, Imperial College London, SW7 2BW London, United Kingdom ^2Department of Chemistry, Queen Mary University of London, E1 4NS London, United Kingdom ^3Max-Born-Institut, Max-Born-Str. 2A, 12489 Berlin, Germany ^4Technische Universität Berlin, Straße des 17. Juni 135, 10623 Berlin, Germany § ABSTRACT In non-relativistic physics the concepts of geometry and topology are usually applied to characterise spatial structures, or structures in momentum space. We introduce the concept of temporal geometry, which encompasses the geometric and topological properties of temporal shapes, i.e. trajectories traced by a tip of a time-dependent vector in vector space. We apply it to vectors of ultrafast electron current or induced polarization in chiral molecules. The central concepts of temporal geometry – curvature and connection – emerge as ubiquitous features of photoexcited, non-equilibrium, chiral electron dynamics. We demonstrate that curvature and connection (i) rely on the interplay of molecular chirality and the polarization properties of light pulses, (ii) can be introduced for multiphoton processes, and (iii) control enantio-sensitive geometric observables via non-equilibrium electronic dynamics excited by tailored laser fields. Our findings may open a way to ultrafast, topologically non-trivial, and enantio-sensitive chemical dynamics. Geometry of temporal chiral structures Andres F. Ordonez^1,2, Aycke Roos ^3, Pablo M. Maier ^3, David Ayuso^1,2,3 and Olga Smirnova^3,4 September 9, 2024 ==================================================================================================== § 1. INTRODUCTION Synthetic chiral light <cit.> is an example of a locally chiral object: the Lissajous figure of its electric field vector draws a chiral three-dimensional trajectory in time at every point in space. Since its chirality is local, it can induce chiral interactions with electrons in molecules in the electric dipole approximation, promising improvement of the chiral response by orders of magnitude compared to standard methods <cit.> based on inefficient interactions of electrons with the magnetic field component of light. Recently, three-color locally chiral electromagnetic fields <cit.> have been used to harvest the electric-dipole chiral response in the microwave region <cit.>. If the locally chiral electromagnetic field draws the same chiral Lissajous figure at every point in space in the interaction region, then the field is also globally chiral. Since different frequencies of the multicolor light combine at different points in space with different phases, such global chirality is not guaranteed a priory. While various set-ups can be constructed to create locally and globally chiral light <cit.>, the spatial variation of the relative phases of non-collinear multicolor fields constituting locally chiral light can also be used to shape the light's handedness in space in the near field from one-dimensional patterns of alternating handedness in space <cit.> to two-dimensional patterns forming a chiral vortex <cit.>. Such shaping of local handedness allows one to encode the handedness of a molecular medium in the far field emission patterns. Locally chiral temporal structures do not have to be limited to the one traced by the tip of the light polarization vector. Electronic polarization or electronic currents induced in chiral molecules <cit.> can also trace temporal chiral structures. 
That is, the respective vectors – current or induced polarization – trace a chiral trajectory in vector space as these vectors evolve in time. The chiral shape outlined by this trajectory is local because it is encoded in the temporal evolution of the vector, rather than in its spatial evolution. The specific chiral shape is defined by mutual orientations of the molecule and light's polarization vector: each orientation will give rise to a different temporal chiral shape. Thus, the temporally chiral shape is defined in the configuration space of molecular orientations. If several such mutual orientations are included into the measured response, such as e.g. photoionization from the current carrying superposition of states, the set of different temporal shapes corresponding to the underlying orientations may also combine into a non-trivial global structure. This poses several pertinent questions: (i) how is the temporal chiral geometry connected for two different orientations, and (ii) how does this connection quantify a global response for a full set of orientations? Here, we address these questions by identifying the connection and curvature associated with the temporal geometry of attosecond electronic response. The realization of this program requires the implementation of geometric/topological concepts into the ultrafast electronic response in chiral molecules. While topological aspects of nuclear dynamics at conical intersections in molecules are well-known <cit.>, the connection between topology and chirality in the light-induced electronic response of gas phase chiral molecules is an emergent topic <cit.>. In these pioneering works, the respective configuration or parameter spaces range from inverse lattice vectors (in case of periodic chiral arrangement of atoms)<cit.> to parameters of electromagnetic waves triggering the response <cit.> to propagation vectors of non-linear response <cit.>. The desire to invoke the geometric concepts such as curvature, connection, and geometric phase is hardly surprising as it often enables one to identify novel properties of matter. Examples range from the emergence of the Fermi anti-symmetrization principle from the topology of configuration space <cit.>, to topological materials <cit.>, anyons and topological quantum computing <cit.>, twisted light <cit.>, topological photonics and topological lasers <cit.>. The topological aspects of the electronic response have so far been mainly harvested in condensed matter systems and have yet to find their room in the gas and liquid phases. Since topology has an attractive property of robustness to external perturbations and noise, one may be able to create efficient and robust new enantio-sensitive observables by combining chirality and topology in the ultrafast electronic and optical response of chiral gases and liquids. Relevant experimental imperfections in gas phase experiments involve fluctuations of laser parameters, sensitivity and instrument response of photoelectron detectors, challenges in sample preparation, dilute samples and small or fluctuating enantiomeric excess in a sample. In the combination of efficiency and robustness that we are aiming to achieve, the efficiency comes from driving an ultrafast non-linear response enabling chiral detection via electric-dipole interactions <cit.>, whereas robustness comes from topological concepts represented locally via curvature and connection in relevant configuration or parameter spaces. 
Among the first steps in this direction <cit.> is a new manifestation of the Berry curvature in the photoionization of chiral molecules <cit.>. Surprisingly, it enables a new class of extremely efficient enantio-sensitive photoionization observables <cit.>, which rely on excitation of ultrafast electronic or vibronic currents in chiral molecules, and serve as unique messengers of charge-directed reactivity <cit.>. One such observable predicts and quantifies molecular orientation dichroism in photoionization (PI-MOCD) <cit.>. In this case, the curvature emerges due to the geometry of the photoelectron continuum states. However, the Berry curvature derived in Ref. <cit.> is limited to adiabatic evolutions in the parameter space and one-photon ionization from a state with a stationary current. Here we generalize our approach to extend it to (i) multiphoton processes, (ii) excitations in bound states of chiral molecules and (iii) non-adiabatic evolution. The first enables the opportunity to use tailored fields, the second opens important applications in photoexcited non-equilibrium chiral dynamics, the third removes the restriction to adiabatic evolutions of molecular degrees of freedom. We identify and describe new enantio-sensitive observables enabled by these genralizations. As a corollary of our approach, we establish the geometric origin of the photo-excitation circular dichroism (PXCD)<cit.> and its detection using photoionization by linearly polarized light (PXECD)<cit.> . Our findings establish geometrical concepts, such as the curvature [Since the term Berry curvature is reserved for adiabatic evolution in parameter space, we shall use generic notation curvature and connection for the respective geometric concepts.] and the geometric phase, as the key quantities underlying the temporally chiral electronic response. It means that (i) there is a well-defined functional dependence of the response on these quantities, (ii) the understanding of these quantities leads to a deeply illuminating understanding of the response, (iii) those quantities can be used as effective means of detection of molecular chirality. This may open a way to generate a topologically non-trivial electronic response in photoexcited bound states of chiral molecules via tailored light fields. The paper is organized as follows. In Section 2 we outline the concept of temporal geometry in the interaction of molecules with laser pulses and present expressions for connection, curvature and vectorial and scalar observables relying on the curvature. In Section 3 we use these expressions to provide a new general (i.e. using the connection and curvature from the outset) derivation of the geometric effect – molecular orientation by photoionization (PI-MOCD)<cit.>. We show that the direction of the curvature defines molecular orientation and is a direct experimental observable. This approach shows how geometric quantities may appear in multiphoton processes, verifies our new method by comparison with the earlier results <cit.> and establishes the path that we follow to derive new results identifying the curvature in bound states in Section 4 and curvature in "mixed" bound and continuum states Section 4. Last but not least, the efficient enantio-sensitive observables generated by temporal geometry, such as enantio-sensitive molecular orientation, can be controlled via excitation of non-equilibrium dynamics. 
In Sections 2, 3, 6, using the chiral molecule propylene oxide as an example, we demonstrate that simple polarization control of laser pulses allows one to switch the direction of curvature from its bound to its continuum value and thus switch the direction of enantio-sensitive orientation in PI-MOCD. Both the demonstration of the control and its fundamental understanding are new. Superscripts M and L denote that respective vectors belong to molecular or laboratory frame, respectively. Vectors contributing to scalar products will not be marked, because the pair of such vectors can be associated with any frame. § 2. TEMPORAL GEOMETRY IN CHIRAL MOLECULES Light fields E⃗(t) interacting with molecules primarily couple to electrons and trigger their attosecond response. In the laboratory frame [To define the laboratory frame in the electric dipole approximation, we can not rely on the propagation vector of the laser field. Instead, we can define the laboratory frame using the two orthogonal polarization components of the laser field together with their cross product, necessitating circularly or elliptically polarized fields ] circularly polarized light interacts with randomly oriented molecules (Fig. <ref>(a)). Since the strength of such interaction depends on the orientation of a molecule with respect to the laser field, it inevitably couples electronic and rotational degrees of freedom. It means that the electronic current excited in molecules oriented differently will actually be different. It is easier to visualize it by switching to the molecular frame, in which the laser field would have all possible orientations (Fig. <ref>(b)). The coupling term now explicitly depends on the Euler angles, which characterize the orientation of the laser field in the molecular frame. To illustrate the geometric origin of such coupling, it is convenient to associate the normal to the light polarization plane in the molecular frame with a unit radius vector on a sphere characterized by angles (θ, ϕ). The polarization plane is tangent to this sphere. Therefore, the light's polarization vector in the molecular frame E⃗(t,θ, ϕ, α) depends on three angles, where α describes the orientation of the polarization ellipse [In case of circularly polarized field the angle α is redundant. We omit α in the following.] in the tangent plane (Fig. 1). Any change in the molecular orientation will "transport" the polarization vector along the surface of the sphere, while the electronic wave-function describing excited electrons will depend on this transport. At any point on the sphere, the laser field excites a geometrically different temporally chiral electronic response, because the geometry of light polarization vectors as seen by the molecule (Fig. 1(b,c)) changes from one point on the sphere to another. Is there a global geometric property associated with these chiral currents for different molecular orientations? Solving the time-dependent Schrödinger equation (TDSE) locally for each orientation characterized by a point ρ_0 = (θ_0, ϕ_0) ^3 on the sphere of Euler angles in the electric dipole approximation: id/dtψ_el^0(r⃗,ρ_0,t)=[H_el+r⃗·E⃗(ρ_0,t)]ψ_el^0(r⃗,ρ_0,t), we obtain an electron wave-function ψ_el^0(t,E⃗(ρ_0),r⃗) [For simplicity we consider a single electron problem.], which is "agnostic" about other light field geometries. The temporally chiral shapes are generated by induced polarization P⃗(ρ,t) or current j⃗(ρ,t)=-∂P⃗/∂ t vectors: P⃗(ρ,t)≡ -⟨ψ_el^0(t,E⃗(ρ),r⃗)|r̂|ψ_el^0(t,E⃗(ρ),r⃗)⟩. 
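To make the notion of a temporally chiral shape concrete, the short sketch below integrates the fixed-orientation TDSE above for a toy three-level system driven by a circularly polarized pulse and records P⃗(t) = -⟨ψ|r̂|ψ⟩, whose tip traces the temporal shape for that particular orientation. All energies and transition dipoles in it are invented for illustration (they correspond to no particular molecule), and only transition dipoles are kept in r̂; changing these assumed vectors, or the orientation of the field, changes the traced shape, as discussed above.

```python
# Minimal sketch (toy model, invented parameters in atomic units): propagate
# i d|psi>/dt = [H0 + r.E(t)] |psi> at a fixed orientation and record the
# induced polarization P(t) = -<psi| r |psi>, whose tip traces the temporal shape.
import numpy as np
from scipy.linalg import expm

H0 = np.diag([0.0, 0.35, 0.41])                       # assumed level energies

# assumed complex transition-dipole vectors <0|r|1>, <0|r|2>, <1|r|2>
dips = {(0, 1): np.array([0.30, 0.10 + 0.20j, 0.05]),
        (0, 2): np.array([0.10, 0.25, 0.30 - 0.10j]),
        (1, 2): np.array([0.05, 0.10, 0.20])}
r = np.zeros((3, 3, 3), dtype=complex)                # r[c]: Cartesian component c
for (i, j), d in dips.items():
    r[:, i, j] = d
    r[:, j, i] = np.conj(d)

def field(t, w0=0.38, E0=0.02, sigma=1):              # circularly polarized pulse
    env = E0 * np.exp(-((t - 200.0) / 100.0) ** 2)
    return env * np.array([np.cos(w0 * t), sigma * np.sin(w0 * t), 0.0])

dt, steps = 0.1, 4000
psi = np.array([1.0, 0.0, 0.0], dtype=complex)
P = []
for n in range(steps):
    H = H0 + np.tensordot(field(n * dt), r, axes=1)   # H0 + r.E(t)
    psi = expm(-1j * H * dt) @ psi
    P.append([-np.real(np.conj(psi) @ r[c] @ psi) for c in range(3)])
P = np.array(P)                                       # the temporal shape P(t)
print(P.shape, abs(np.vdot(psi, psi)))                # should remain ~1 (unitarity)
```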
Thus the question about the connection between the two currents at two different orientations reduces to the question about the connection between the two wave functions for two different ρ. Can these "local" solutions be connected in any way? Is there a curvature in the configuration space of molecular orientations, e.g. for randomly oriented molecules? The comparison between the electron wave-function ψ_el^0(t,E⃗(ρ),r⃗) describing the temporal geometry of chiral currents excited by the laser field fixed at a given point on the sphere ρ_0 = (θ_0, ϕ_0) with its "global" counterpart ψ_el(t,E⃗(ρ),r⃗), "informed" about all different laser geometries, encapsulates the essence of the "connection" between different local geometries. The TDSE for the electron wave-function ψ_el^0(t,E⃗(ρ_1),r⃗) at some other point on the sphere has the same general form, but the laser field, of course, has a different geometry leading to a different temporally chiral response. The issue arises due to the fact that these two TDSEs are completely independent, and thus each "local" electron wave-function ψ_el^0(t,E⃗,r⃗) is defined up to an arbitrary phase. It means that one may need to find a consistent procedure to "connect" all such solutions along a given cyclic path in the configuration space by introducing a common phase connecting all local solutions in a well-defined way and encoded in a single, consistent "full" solution ψ_ el(t): |ψ_ el(t)⟩ = e^-iα_ D(t) + iS(t)|ψ_ el^0(t)⟩ . In general, the phase has a dynamical α_ D(t) α_ D(t) = ∫_0^t dτ⟨ψ_ el(τ)|H(τ)|ψ_ el(τ)⟩ , and a geometric S(t) S(t) = i∫_0^t dτ⟨ψ_ el^0(τ)|τ|ψ_ el^0(τ)⟩ components. Our "local" solution ψ_ el^0(t) is known as a closed lift of a state vector defined as |ψ_ el(t)⟩⟨ψ_ el(t)| (see <cit.> and Appendix A). In Appendix B we show that, in the case of a periodic evolution of E⃗(t,ρ) in the space of Euler angles, the connection is a vector potential A⃗(E⃗)=i⟨ψ_el^0(E⃗)|∇⃗_E⃗ψ_el^0(E⃗)⟩, quantifying the change in the local wave-function in response to the change in the laser field geometry. It specifies the geometric phase S accumulated after a full cycle of evolution of E⃗(t,ρ) in the space of Euler angles ρ as detailed in Appendix B: S=∮ i⟨ψ_ el^0(E⃗)|∇⃗_E⃗ψ_ el^0(E⃗)⟩ dE⃗=∮A⃗(E⃗)· dE⃗. This connection leads to the geometric magnetic field (the curvature): Ω⃗=∇⃗_E⃗×A⃗= i⟨∇⃗_E⃗ψ_el^0(E⃗)|× |∇⃗_E⃗ψ_el^0(E⃗)⟩. Eqs. (<ref>,<ref>) encapsulate the origin of the geometric magnetism, emerging due to the necessity to "connect" temporal geometry at different points in configuration space. Eqs. (<ref>,<ref>) emphasize the dynamical, laser-driven origin of these geometric quantities. As derived, Eqs. (<ref>,<ref>) appear general and applicable to any light-driven process. In the next sections, we test the generality of Eq. (<ref>) by applying it to three different types of two-photon processes involving linearly and circularly polarized pump and probe pulses. First, we generalize the relationship between the curvature (Eq. (<ref>)) and the vectorial observables expressing the light-induced enantio-sensitive molecular orientation by ionization of an initially randomly oriented ensemble of molecules, extending our earlier work <cit.>, where we derived the curvature for one-photon ionization from a state with a stationary current. Second, we identify new observables describing enantio-sensitive orientation by excitation, opening new opportunities for controlling the direction of orientation. 
Third, we derive the relationship between the photoionization yield and the curvature Eq. (<ref>) in two-photon ionization by circularly polarized field. We show that it is similar to the one in one-photon ionization from a state with a stationary current <cit.>, confirming the generality of the link between the enantio-sensitive photoionization yield and the geometric phase. Specifically, we show that enantio-sensitive molecular orientation can be expressed using Eq. (<ref>) as follows (superscripts L and M stand for laboratory and molecular frame correspondingly): ⟨e⃗_Ω^L⟩= σ⟨(Ω⃗^L·ẑ^L)e⃗_Ω^L⟩=σR^(n,p)⟨Ω⃗^M·e⃗_Ω^M⟩ẑ^L. Here ⟨e⃗_Ω^L⟩ is the orientationally averaged value of a unit vector directed along the curvature in the molecular frame (e⃗_Ω^M∥Ω⃗^M), σ=±1 is the direction of rotation of the light field, ⟨Ω⃗^M·e⃗_Ω^M⟩ is the orientationally averaged value of a scalar product Ω⃗^M·e⃗_Ω^M, R^(n,p) is a factor associated with the partial alignment of the molecular ensemble due to excitation or ionization, which depends on the number of absorbed photons n. Superscript p in R^(n,p) indicates that it also depends on polarizations of absorbed photons, which may or may not break the cylindrical symmetry of the set-up. We establish the factor R^(n,p) by explicitly evaluating ⟨(Ω⃗^L·ẑ^L)e⃗_Ω^L⟩ and ⟨Ω⃗^M·e⃗_Ω^M⟩ using Eq. (<ref>). Eq. (<ref>) shows that (i) the vector ⟨e⃗_Ω^L⟩ orients itself along the direction of photon spin ẑ^L, and (ii) the curvature controls the direction and the degree of molecular orientation. To demonstrate the generality of our approach, we use Eqs. (<ref>,<ref>) to evaluate enantio-sensitive molecular orientation for a range of two-photon processes and compare the resulting expressions with the direct evaluation, where the curvature is not employed. We show that the results are equivalent. Our approach not only allows us to express enantio-sensitive molecular orientation in a compact way (Eq. (<ref>)), but also uncovers its fundamental origin, appearing to be invariant for different multiphoton processes. Moreover, we also show that the enantio-sensitive and dichroic part of photoionization yield in a two photon process for an oriented molecule is proportional to the projection of the curvature on the light propagation axis: W_σ(ρ)-W_-σ(ρ) =σ(Ω⃗^L_σ(ρ)+Ω⃗^L_-σ(ρ)) ·ẑ^L. This expression is essentially the same as in the one-photon case <cit.>, leading to a similar relationship between the total, i.e. orientationally averaged yield, and the geometric phase in the multiphoton regime for circularly polarized pulses: W_σ-W_-σ =∫ dρ W_σ(ρ)-W_-σ(ρ)=σ/4π∫(Ω⃗^M_σ(k,θ,ϕ)+Ω⃗^M_-σ(k,θ,ϕ)) · dS⃗^M= σ/4π(Φ_σ^ES+Φ_-σ^ES). Here ∫ dS⃗^M=e⃗^M_r∫_0^πsinθ dθ∫_0^2π dϕ, where e⃗^M_r is a unit radial vector in the molecular frame, corresponding to the orientation of the photon spin in the molecular frame, the third Euler angle α (Fig. 1(d)) is not relevant for circularly polarized pulses. First, Eq. (<ref>) shows that the enantio-sensitive yield is proportional to the orientationally averaged radial component of the curvatures ⟨Ω_r, σ^M(k)⟩ +⟨Ω_r, -σ^M(k)⟩. Second, Eq. (<ref>) expresses the flux of the curvature (for a given σ) through the parameter space of field orientations (Fig. 1(d)), which by definition is equal to the geometric phase Φ_σ^ES. The superscript ES emphasizes the enantio-sensitive nature of the geometric phase. 
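The statement that the enantio-sensitive part of the yield is a flux of the curvature through the closed sphere of field orientations, i.e. a geometric phase, has a simple numerical counterpart. The sketch below is a generic two-level (spin-1/2) illustration rather than the photoionization curvature Ω⃗ introduced above: it discretizes the curvature of the lower eigenstate on a parameter sphere and sums it plaquette by plaquette, returning a total flux of 2π times an integer.

```python
# Minimal sketch (generic illustration, not the molecular curvature): sum the
# discretized geometric curvature of the lower eigenstate of
# H(theta, phi) = n(theta, phi).sigma over a closed parameter sphere; the total
# flux is 2*pi times an integer (here approximately +/-2*pi).
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def lower_state(theta, phi):
    n = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])
    _, vecs = np.linalg.eigh(n[0] * sx + n[1] * sy + n[2] * sz)
    return vecs[:, 0]

N = 120
thetas = np.linspace(1e-3, np.pi - 1e-3, N)
phis = np.linspace(0.0, 2.0 * np.pi, N)
psi = np.array([[lower_state(t, p) for p in phis] for t in thetas])

flux = 0.0
for i in range(N - 1):
    for j in range(N - 1):
        # gauge-invariant geometric phase around one plaquette
        loop = (np.vdot(psi[i, j], psi[i + 1, j])
                * np.vdot(psi[i + 1, j], psi[i + 1, j + 1])
                * np.vdot(psi[i + 1, j + 1], psi[i, j + 1])
                * np.vdot(psi[i, j + 1], psi[i, j]))
        flux += np.angle(loop)

print(flux / (2.0 * np.pi))   # close to an integer (here +/-1)
```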
Thus the physical meaning of the curvature is the geometric magnetic field generated by electron current as it is "transported" in the configuration space of rotational degrees of freedom. Importantly, the transport is enabled and mediated by the laser field. This statement becomes evident as we rewrite the TDSE for the global wave-function for any value of E⃗(ρ(t),t) in the form emphasising the contribution of the geometric field via its vector potential and its coupling to the change of the orientation of the laser field in the molecular frame: i∂/∂ tψ_el(r⃗,E⃗,t)=[H_el+r⃗·E⃗(ρ(t),t)-A⃗(E⃗)·Ė⃗̇]ψ_el(r⃗,E⃗,t). § 3. TEMPORAL GEOMETRY IN TWO-PHOTON IONIZATION We can use Eq. (<ref>) to calculate the curvature for a two-photon process, in which a linearly polarized pulse excites a superposition of states in a molecule and a circularly polarized pulse photoionizes this superposition. Since only spin carrying light fields induce temporal geometry, we use a linearly polarized pulse to excite the molecule to separate the curvature due to the continuum states from the curvature due to the bound states (see Section 4). If excitation and ionization are performed with the same circularly polarized pulse, then the contributions of the bound and continuum states to the curvature are mixed (see Section 5). We use perturbation theory to calculate |ψ(E⃗)⟩ explicitly: ψ =ψ_0+a_1ψ_1+a_2ψ_2+∫dΘ_ka_k⃗ψ_k⃗. Here dΘ_k=sinθ_kdθ_kdϕ_k and θ_k, ϕ_k are angles characterizing the direction of the photoelectron momentum k⃗ for fixed k=|k⃗|, a_i and a_k⃗ are the amplitudes of bound (i=1,2) and continuum states: a_i=i∫_0^t dt' d⃗_i·ℰ⃗_i(t')e^iω_i t'=id⃗_i·ℰ⃗_i, a_k⃗=-(D⃗_1·E⃗_k⃗1)(d⃗_1·ℰ⃗_1)e^-iω_1τ-(D⃗_2·E⃗_k⃗2)(d⃗_2·ℰ⃗_2)e^-iω_2τ. Here d⃗_i are bound transition dipoles, and D⃗_i are photoionization dipoles from bound states (i=1,2) to the continuum state k⃗. Writing E⃗_k⃗i=|E_ω_k⃗i|E⃗, with E⃗=1/√(2)(x̂+iσŷ) and taking the space spanned by the probe polarization E⃗ as the parameter space as suggested by Eq. (<ref>), the curvature (Eq. (<ref>)) in the molecular frame yield Ω⃗_ion^M(k,ρ) =-|E_ω_k⃗1||E_ω_k⃗2|Im{(d⃗_1·ℰ⃗_1)^*(d⃗_2·ℰ⃗_2) e^iω_12τ∫dΘ_kD⃗_1^*M×D⃗_2^M} where the subscript 'ion' emphasizes that the curvature Ω⃗_ion^M(k,ρ) relies on the continuum states and manifests itself in photoionization. The superscript M emphasizes that vectors are expressed in the molecular frame. The direction of the curvature in the molecular frame is defined by the direction of the vector 𝖯⃗^+M_12(k), where 𝖯⃗^+M_12(k)≡1/2Im{ i∫dΘ_kD⃗_1^*M×D⃗_2^M}, superscript "+" indicates that only k⃗-even part of the vector product contributes to the integrated quantity (see also <cit.>). We now introduce the polar unit vector e⃗_Ω^M along the direction of the curvature Ω⃗_ion^M(k,ρ) (i.e. along 𝖯⃗^+M_12(k)) and obtain the averaged value of ⟨Ω⃗_ion·e⃗_Ω⟩ [Here and in the following we omit indices M, L in scalar products, since the scalar products are rotationally invariant and can be associated with any frame. ] (see Appendix D): ⟨Ω⃗_ion(k)·e⃗_Ω⟩ =-1/3C(d⃗_1·d⃗_2)(𝖯⃗^+_12(k)·e⃗_Ω)sin(ω_12τ)=-1/3υ C(d⃗_1·d⃗_2)|𝖯⃗^+_12(k)|sin(ω_12τ), where C≡|E_ω_k⃗1||ℰ_ω_1||E_ω_k⃗2||ℰ_ω_2|. Note that 𝖯⃗^+_12(k)·e⃗_Ω=υ|𝖯⃗^+_12(k)|, where υ=±1 is molecular pseudoscalar. Using Eq. 
(<ref>), we obtain the dichroic part of the photoionization yield for a fixed molecular orientation ρ: W^↕,↺(k,ρ)=1/2σ(Ω⃗_ion^L(k,ρ)·ẑ^L), Here the direction of rotation of E⃗ is characterized by σ=± 1, and ẑ^𝖫 denotes the respective axis of the laboratory frame, the superscript "↕,↺" emphasises the polarization sequence of the pulses: linearly polarized pump, circularly polarized probe. It is convenient to separate the curvature into two parts using the Binet-Cauchy identity: (d⃗_1·ℰ⃗_1^*)(d⃗_2·ℰ⃗_2)=(ℰ⃗_1^*·ℰ⃗_2)(d⃗_1·d⃗_2)-(d⃗_1×ℰ⃗_2)·(d⃗_2×ℰ⃗_1^*). The first term represents the part corresponding to the purely isotropic excitation and the second term is associated with the alignment of randomly oriented molecular ensemble after the absorption of a photon. The isotropic component of the curvature is also rotationally invariant and thus represents its isotropic component in the molecular frame: Ω⃗^𝖬_cont,iso(k,ρ)=-2|E_ω_k1||E_ω_k2|(ℰ⃗_1^*·ℰ⃗_2)(d⃗_1·d⃗_2)𝖯⃗^+_12(k)sin(ω_12τ). The contribution of Ω⃗^M_cont,iso to the orientation-averaged dichroic part of the photoelectron yield can be written as the flux of the curvature through the sphere in molecular frame uniting the points defined by the Euler angles θ,ϕ [The third Euler angle γ describing the rotation around the radial vector falls out for the isotropic component, see Eqs. (21-25) in Methods in Ref. <cit.>]: W^↕,↺_iso(k) =∫ W^↕,↺_iso(k,ρ)dρ=σ/8π∫Ω⃗^𝖬_cont,iso(k,ρ) · dS⃗^𝖬=σ/8πΦ_B^ion, where ρ≡(ϕθγ) is the molecular orientation and (∫ dρ≡1/8π^2∫_0^πsinθ dθ∫_0^2π dϕ∫_0^2πdγ). The enantio-sensitive molecular orientation by ionization (PI-MOCD) for the linear pump - circular probe excitation is associated with the preferential photoionization of the molecules with the curvature oriented along the photon spin. Using Eq. (<ref>) together with Eq. (<ref>) we can calculate (see Appendix D) the orientation averaged value of ⟨e⃗_Ω^L⟩: ⟨e⃗_Ω^L⟩^↕,↺=σ⟨(Ω⃗_ion^L·ẑ^L)e⃗_Ω^L⟩=σR^(2,↕,↺)⟨Ω⃗_ion·e⃗_Ω⟩ẑ^L, where R^(2,↕,↺)≡2/5[1 - 1/2(d⃗_1·E⃗_)(d⃗_2·𝖾⃗_Ω)/(d⃗_1·d⃗_2)]. Up to notations Eq. (<ref>) reproduces our earlier result for PI-MOCD <cit.>, validating the general approach (i.e. calculating ⟨e⃗_Ω^L⟩ using Eqs. (<ref> and <ref>) developed here. Vector ⟨e⃗_Ω^L⟩ can be used to find the expectation value of cosβ, where β is the angle between the unit vector along the curvature e⃗_Ω^L and the direction of photon spin. The value of cosβ (see Fig. <ref>(c)) is given by the normalized (to the total photoionization yield averaged over the pump-probe delay) magnitude of ⟨e⃗_Ω^L⟩. The dichroic and enantio-sensitive component of the photoionization yield W≡∫ |a_k⃗|^2 dΘ_k can be expressed as the flux of the curvature through the sphere in the space of light orientations in the molecular frame (see Appendix E): W^ES_↕,↺ =σ/4π∫Ω⃗_ion(k,θ,ϕ) · dS⃗^𝖬=σ/4π⟨Ω^r_ion(k)⟩= σ/4πΦ_^↕,↺^ES. § 4. TEMPORAL GEOMETRY IN PHOTOEXCITATION The curvature can also be associated with the excitation of bound states. Importantly, the general definition Eq. (<ref>) describes both cases, even though the specific expressions are different. This surprising result demonstrates the importance of laser pulse polarization geometry: applying a circularly polarized pulse either to photoionize or to excite the molecule, one can address these two different "faces" of the curvature. Let a circularly polarized pump pulse excite a superposition of states in a chiral molecule. 
We now show that the subsequent photoionization by a linearly polarized probe pulse will lead to enantio-sensitive orientation of ions (PI-MOCD). Exchanging E⃗_ki and ℰ⃗_i in Eq. (<ref>), using Eq. (<ref>) and following the same steps as in the previous section we obtain the analog of Eq. (<ref>): Ω⃗_exc^M(k,ρ) =-|E_ω_1||E_ω_2|Im{∫dΘ_k(D⃗_1·ℰ⃗_k⃗1)^*(D⃗_2·ℰ⃗_k⃗2) e^iω_12τ}( d⃗_1^M×d⃗_2^M). Here the subscript 'bound' emphasizes the nature of the curvature, which now relies on bound states. The orientationally averaged expression for the scalar product Ω⃗·e⃗_Ω, where e⃗_Ω is oriented along the direction of ( d⃗_1×d⃗_2) in the molecular frame, can be derived in the same way as in the previous section: ⟨Ω⃗_exc(k)·e⃗_Ω⟩=-1/3C Re{∫dΘ_kD⃗_1^*·D⃗_2} ( d⃗_1×d⃗_2)·e⃗_Ωsin(ω_12τ), where C≡|ℰ_ω_k⃗1||E_ω_1||ℰ_ω_k⃗2||E_ω_2| and we took into account that for the time-reversal- invariant excited states ψ_1 and ψ_2, the time-reversal symmetry implies that (see Appendix C): Im{∫dΘ_kD⃗_1^*·D⃗_2} = 0. Note that ( d⃗_1×d⃗_2)·e⃗_Ω=υ| d⃗_1×d⃗_2|e⃗_Ω, meaning that Eq. (<ref>) can also be rewritten as ⟨Ω⃗_exc(k)·e⃗_Ω⟩=-1/3υ C Re{∫dΘ_kD⃗_1^*·D⃗_2} | d⃗_1×d⃗_2|sin(ω_12τ). Using Eq. (<ref>) together with Eq. (<ref>) we can calculate (see Appendix D) the orientation averaged value of ⟨e⃗_Ω^L⟩^↺,↕=σ⟨(Ω⃗_exc^L·ẑ^L)e⃗_Ω^L⟩=σR^(2,↺,↕)⟨Ω⃗_exc·e⃗_Ω⟩ẑ^L, where R^(2,↺,↕)≡2/5[1 - 1/2Re{∫ d Θ_k(D⃗^*_1·E⃗_)(D⃗_2·𝖾⃗_Ω)}/Re{∫ d Θ_k(D⃗^*_1·D⃗_2)}]. Evaluating it explicitly ⟨e⃗_Ω^L⟩^↺,↕ =∫dϱ W^↕,↺(k,ρ)e⃗^L_Ω, we get the expression equivalent to Eq. (<ref>), up to rearrangement of terms: ⟨e⃗_Ω^L⟩^↺,↕ =iσ/60C∫dΘ_k[4(D⃗_2^*·D⃗_1)e⃗_Ω^M-(D⃗_2^*·e⃗_Ω)D⃗_1^M-(D⃗_1·e⃗_Ω)D⃗_2^*M](d⃗_2^M×d⃗_1^M)e^iω_21tẑ+c.c. Eq. (<ref>) together with Eq. (<ref>) show that PI-MOCD can also emerge due to curvature in bound states: molecules, in which the curvature is oriented along the spin of the laser light, are excited preferentially. Thus, photoionization following photoexcitation creates enantio-sensitive orientation of both neutrals and cations. The expectation value of cosβ, where β is the angle between the unit vector along the curvature e⃗_Ω^L and the direction of photon spin, is given by the normalized (to the total photoionization yield averaged over the pump-probe delay) magnitude of ⟨e⃗_Ω^L⟩. The dichroic and enantio-sensitive component of the photoionization yield W≡∫ |a_k⃗|^2 dΘ_k can be expressed as the flux of the curvature through the sphere in the space of light orientations in the molecular frame (see Appendix D): W^ES_↺,↕ =σ/4π∫Ω⃗_exc(k,θ,ϕ) · dS⃗^𝖬=σ/4π⟨Ω^r_exc(k)⟩= σ/4πΦ_^↺,↕^ES. § 5. INTERPLAY OF BOUND AND CONTINUUM TEMPORAL GEOMETRIES In the previous two sections, we considered photoexcitation followed by photoionization induced by pairs of pump-probe pulses with linear - circular or circular-linear polarizations correspondingly. In the first case, the curvature has emerged due to the temporal geometry in photoelectron continuum states, in the second case it has emerged due to the temporal geometry in bound states. What happens if both the pump and the probe pulses are circularly polarized? To answer this question, we need to apply Eqs. (<ref>) to equations (<ref>-<ref>), where the Fourier components of linearly polarized fields ℰ⃗_2 and ℰ⃗_1 are substituted by the Fourier components of circularly polarized fields: ℰ⃗_2=>E⃗_2 and ℰ⃗_1=>E⃗_1. Note that there is a difference in using Eq. (<ref>) in this case. As detailed in Appendix B, the gradients ∇⃗_E⃗ in Eq. 
(<ref>) should be taken only with respect to fields carrying spin angular momentum, because only these fields uniquely define the orientation of the laboratory frame with respect to the molecular frame. That is why in sections 3,4 the gradients ∇⃗_E⃗ were taken only with respect to the circularly polarised Fourier components of the field (i.e. probe field in Section 3 and pump field in Section 4). Here, we have to evaluate the gradients with respect to both the pump field components and the probe field components. It leads to the following expression for the curvature (see Appendix F): Ω⃗^M(k,ρ) =Ω⃗_exc^M(k,ρ)+Ω⃗^M_ion(k,ρ)+Ω⃗^M_mix(k,ρ) where Ω⃗^M_exc(k,ρ) =-2CIm{∫dΘ_k(D⃗_1·E⃗__k⃗1)^*(D⃗_2·E⃗__k⃗2) e^iω_12τ}( d⃗_1^M×d⃗_2^M), Ω⃗^M_ion(k,ρ) =-2CIm{(d⃗_1·E⃗__1)^*(d⃗_2·E⃗__2) e^iω_12τ∫dΘ_kD⃗_1^*M×D⃗_2^M}. Ω⃗^M_mix(k,ρ)= -2CIm{∫dΘ_k(d⃗_1·E⃗__1)(D⃗_2·E⃗__k⃗2)^*( D⃗_1^*M×d⃗_2^M)e^iω_12τ} -2CIm{∫dΘ_k(D⃗_1·E⃗__k⃗1)^*(d⃗_2·E⃗__2) ( d⃗_1^M×D⃗_2^M)e^iω_12τ}, Here C≡|E_ω_k⃗1||E_ω_1||E_ω_k⃗2||E_ω_2|. Eq. (<ref>) combines bound and continuum curvatures as detailed in Eqs. (<ref>, <ref>), and also contains "mixed" terms, relying on both bound and continuum curvatures. The orientationally averaged scalar product of the curvatures and a unit vector e⃗_Ω are: ⟨Ω⃗_exc(k,ρ)·e⃗_Ω⟩ =-1/3C Re{∫dΘ_k(D⃗_1^*·D⃗_2) }( d⃗_1×d⃗_2)·e⃗_Ωsin(ω_12τ), ⟨Ω⃗_ion(k,ρ)·e⃗_Ω⟩ =-1/3C(d⃗_1·d⃗_2)𝖯⃗^+_12(k)·e⃗_Ωsin(ω_12τ). ⟨Ω⃗_mix(k,ρ)·e⃗_Ω⟩= -1/3CIm{∫dΘ_k(d⃗_1·D⃗_2^*) ( D⃗^*_1×d⃗_2)·e⃗_Ωe^iω_12τ} -1/3CIm{∫dΘ_k(D⃗_1^*·d⃗_2) ( d⃗_1×D⃗_2)·e⃗_Ωe^iω_12τ}, One can verify that using the general expression for expectation value of ⟨e⃗_Ω^L⟩^↺,↺: ⟨e⃗_Ω^L⟩^↺,↺=σ⟨(Ω⃗^L·ẑ^L)e⃗_Ω^L⟩ one obtains the same results as by evaluating ⟨e⃗_Ω^L⟩^↺ explicitly without involving the curvature: ⟨e⃗_Ω^L⟩^↺ =iσ/30C∫dΘ_k{(d⃗_2·d⃗_1)[e⃗_Ω·(D⃗_2^*×D⃗_1)]+(D⃗_2^*·D⃗_1)[e⃗_Ω·(d⃗_2×d⃗_1)] +(d⃗_2·D⃗_1)[e⃗_Ω·(D⃗_2^*×d⃗_1)]+(D⃗_2^*·d⃗_1)[e⃗_Ω·(d⃗_2×D⃗_1)]} e^iω_21tẑ+c.c. Comparing this result to ⟨Ω⃗_mix·e⃗_Ω⟩ we find that the factor R^(2,↺,↺)=1/10 is purely numerical due to preserved cylindrical symmetry: ⟨e⃗_Ω^L⟩^↺,↺=σ⟨(Ω⃗^L·ẑ^L)e⃗_Ω^L⟩=σ R^(2,↺,↺)⟨Ω⃗·e⃗_Ω⟩ẑ^L. Finally, one can verify that the expression for the enantio-sensitive and dichroic yield W^ES_↺,↺-W^ES_↻,↻ via geometric phase also remains invariant, because the sum of the curvatures (and the sum of the geometric phases) only contains terms independent of σ, just like the curvature and the geometric phase in the one-photon case: W^ES_↺,↺-W^ES_↻,↻ =σ/4π∫(Ω⃗^M_↺,↺(k,θ,ϕ)+Ω⃗^M_↻,↻(k,θ,ϕ)) · dS⃗^𝖬= σ/4π(Φ_↺,↺^ES+Φ_↻,↻^ES). § 6. OPPORTUNITIES FOR CONTROL OF GEOMETRIC OBSERVABLES PI-MOCD is the first geometric observable that is directly proportional to the curvature, relying on the geometry of either bound or continuum states. Importantly, the curvature is a dynamical property, as it relies on the excitation of currents and reflects their temporal geometry. Dynamics offers several opportunities for the control. The direction of enantio-sensitive molecular orientation is defined by the direction of the curvature vector in the molecular frame. The curvature vectors relying on the vector product of bound or continuum dipoles naturally have different directions: Ω⃗_exc∥d⃗_1 ×d⃗_2 and Ω⃗_ion∥Re{∫dΘ_kD⃗_1^*×D⃗_2}. Moreover, both orientations are state-specific (see Figs. <ref>,<ref> (b) for propylene oxide). It presents several opportunities for controlling the enantio-sensitive direction of molecular orientation. 
First, by changing the frequency of the pump pulse, one can address different bound states and thus control both the magnitude and direction of the curvature. Second, controlling the polarization of the pump-probe pulse sequence also allows us to control the magnitude and the direction of the enantio-sensitive molecular orientation by switching the curvature between its form defined by the geometry of the excited bound states to the one defined by the geometry of the continuum states. Figs. <ref> and <ref> illustrate how the direction of the curvature and respective molecular orientation changes if one switches polarizations of pump and probe pulses from circular (linear) to linear (circular). Third, by changing the frequency of the probe pulse in case of linear-circular pulse sequence one can change the direction of the curvature as shown in Fig. <ref>(b). For a given set of intermediate bound states excited by a pump pulse and continuum states populated by the probe pulse, the magnitudes of Ω⃗_ion and Ω⃗_exc differ only in their dipole factors. The bound transition dipole contributions to the magnitudes of Ω⃗_ion and Ω⃗_exc are d⃗_⃗1⃗·d⃗_⃗2⃗=-7.3× 10^-2 a.u. and |d⃗_1 ×d⃗_2|=4.9× 10^-2 a.u., respectively. The continuum dipole contributions Re{∫dΘ_kD⃗_1^*·D⃗_2} and |Re{∫dΘ_kD⃗_1^*×D⃗_2}| change as a function of the photoelectron energy and are shown in Fig. <ref>. These two parameters determine the k-dependence of expectation value of the degree of orientation in Figs. <ref>,<ref>(c). § OUTLOOK Geometric magnetism in chiral molecules appears to be a ubiquitous phenomenon, which can be controlled by tailoring light pulses and by addressing various molecular states. The linear-circular and circular-linear pump-probe sequences have been used experimentally to observe time-resolved photoelectron circular dichroism (TR-PECD) and photoexcitation circular dichroism (PXCD) in chiral molecules via detection of angular distribution of photoelectrons. We show that in both cases, the detection of molecular fragments leads to a complementary geometric observable, directly proportional to the curvature in continuum or bound states. Our results show that the temporal evolution of angular distribution of fragments (Eqs. (<ref>,<ref>)) and angular distribution of photoelectrons produced after inducing the photo-excitation circular dichroism (circular-linear pump-probe sequence)<cit.> and controlled by the term ∫dΘ_kk⃗(D⃗_1^*·D⃗_2) have fundamentally different properties. The former is proportional to the current in the bound states ∝sin(ω_12τ) and thus vanishes at zero pump-probe delay τ=0, while the latter also includes terms ∝cos(ω_12τ) and does not vanish at τ=0. This property establishes PI-MOCD as a direct probe of charge-directed reactivity: PI-MOCD is proportional to the curvature (either in bound or in continuum states) and as such relies on time-reversal symmetry breaking, which can be introduced by exciting current prior to photoionization. The direction of current at the moment of ionization defines the orientation of molecular cations and therefore the direction of molecular fragmentation, highlighting the ability of the ultrafast electron dynamics to control subsequent dynamics of the nuclei. Simulations <cit.> show that enantio-sensitive fragment asymmetry resulting from excitation of electronic wave-packet in Rydberg states in methyl lactate molecule in linear-circular polarization sequence reaches 20%, confirming strong enantio-sensitivity of curvature-driven geometric observables. 
The control over the directions of enantio-sensitive orientations of molecular ions suggests new opportunities. For example, in linear-circular sequence of the pulses the tip of the curvature vector traces the orange curve in space (Fig. <ref>(b)) as a function of photoelectron energy. This orange curve illustrates the sequence of orientations of the curvature vector achieved for different final photoelectron momenta from 0.25 to 2 a.u. Since enantio-sensitive molecular orientation follows the orientation of the curvature vector, polarization controlled pulses at free electron lasers (FELs), such as chirped circularly polarized pulses, can be used to force molecular ions to draw the trajectory shown in Fig. <ref>(b) (see orange trajectory) within a single experiment. Alternately, the set of experiments in which the frequency of the probe circularly polarized light is tuned within several tens of eV will produce a variety of well controlled molecular orientations. The geometric magnetism in bound states suggests new recipes for merging topological and enantio-sensitive response in chiral molecules. In particular, the flux of mixed curvature in the configurations space of laser field orientations emerging in the excitation by circularly polarized pump and probe pulses is proportional to the enantio-sensitive and dichroic part of the photoionization yield (Eqs. <ref>, <ref>). In analogy to the so-called Thouless charge pump, one would expect to realize a quantized rate of enantio-sensitive charge transfer via diabolic points in rotational states. § ACKNOWLEDGMENTS We gratefully acknowledge many enlightening discussions with Prof. Misha Ivanov. We gratefully acknowledge Prof. Vladimir Chernyak for discussions and lectures on the topic. A.F.O. and D.A. acknowledge funding from the Royal Society (URF/R1/201333). O.S., A.R. P.M.M. gratefully acknowledge ERC-2021-AdG project ULISSES, grant agreement No 101054696. § APPENDICES §.§ Appendix A: Closed lift for a state vector |ψ_ el(t)⟩⟨ψ_ el(t)| The closed lift is a concept from fiber bundle theory that allows one to identify the geometric phase outside the approximation of an adiabatic evolution. In the Aharonov-Anandan fiber bundle theory, one introduces the base space in which all pure state vectors |ψ_ el(t)⟩⟨ψ_ el(t)| live. By definition, the state vectors do not possess any phase. Thus, when the system evolves along the closed contour in the base space, the state vector remains the same after this cyclic evolution along the contour γ : |ψ_ el(t)⟩⟨ψ_ el(t)|=|ψ_ el(t+T)⟩⟨ψ_ el(t+T)|. Wave-functions corresponding to state vectors |ψ_ el(t)⟩⟨ψ_ el(t)| at every point in base space |ψ_ el(t)⟩e^iϕ "live" on a fiber, which "grows vertically" from the base space while the phase ϕ is taking all possible values along this vertical direction (the fiber). These wave-functions are said to constitute "lifts" of the state vector |ψ_ el(t)⟩⟨ψ_ el(t)| from the base space into the fiber space (at every point on the contour γ in the base space). The wave-function that does not accumulate a phase as a result of cyclic evolution of a state vector on the contour γ such that |ψ^0_ el(t)⟩=|ψ^0_ el(t+T)⟩ is called a closed lift. We can show that the wave-function ψ_el^0(t,E⃗(ρ),r⃗) defined by Eq. (<ref>) does not accumulate a phase as a result of cyclic evolution of rotational degrees of freedom and therefore presents the closed lift of the state vector |ψ_ el(t)⟩⟨ψ_ el(t)|. 
In our case, one cycle of evolution corresponds to one full rotation of the molecular frame ψ_el^0(t,E⃗(ρ),r⃗) from some initial orientation characterised by Euler angles ρ_0={θ_0,ϕ_0,α_0} with respect to the lab frame to the exact same orientation ρ_0+2π={θ_0,α_0+2π,ϕ_0+2π}. The wave-functions ψ_el^0(t,E⃗(ρ_0),r⃗) and ψ_el^0(t,E⃗(ρ_0+2π),r⃗) can be obtained from the same lab-frame wave-function using an appropriate sequence of unitary transformations, such as e.g. rotation around n̂-axes by the angle ξ: ψ_el^0(t,E⃗(ρ_0+2π),r⃗)=e^-iξ(n̂·L̂)ψ_el^0(t,r⃗)=ψ_el^0(t,E⃗(ρ_0),r⃗). Since these unitary transformations do not impart any relative phases on the transformed wave-functions ψ_el^0(t,E⃗(ρ_0+2π),r⃗) and ψ_el^0(t,E⃗(ρ_0),r⃗), and the same arguments remain valid for every point ρ_0, the wave-function ψ_el^0(t,E⃗(ρ),r⃗) presents a closed lift. §.§ Appendix B: Derivation of Eq. (<ref>) Previously, we have derived Eqs. (<ref>, <ref>) using the concept of adiabatic evolution of the rotational degree of freedom, introducing two different time scales for the evolution of electronic and rotational degrees of freedom. We now extend our derivation to a general case, including non-adiabatic evolution. We follow the arguments of Ref. <cit.> and introduce a rotational Hamiltonian H_rot(t), responsible for the cyclic non-adiabatic evolution of the rotational wave-function ψ_rot(ρ,t): id/dtψ_rot(ρ,t)=H_rot(t)ψ_rot(ρ,t). For example, H_rot(t) could encapsulate a sequence of microwave fields driving resonant transitions in three rotational levels of a chiral molecule. Since our focus is laser-induced coupling between electronic and rotational degrees of freedom, which arises due to the sensitivity of the electronic dynamics to the mutual orientation between the molecule and the laser field, we omit the standard Coriolis coupling between electronic and rotational degrees of freedom in the zero approximation. Staring from the TDSE for the full wave-function including the electronic and rotational degrees of freedom: id/dtΨ(r⃗,ρ,t)=[H_el+r⃗·E⃗(ρ,t)+H_rot(t)]Ψ(r⃗,ρ,t) and using the ansatz: Ψ(r⃗,ρ,t)=ψ_el(r⃗,ρ,t)ψ_rot(ρ,t), we first obtain iψ_rot(ρ,t)d/dtψ_el(r⃗,ρ,t)+iψ_el(r⃗,ρ,t)d/dtψ_rot(ρ,t)=[H_el+r⃗·E⃗(ρ,t)+H_rot(t)]ψ_el(r⃗,ρ,t)ψ_rot(ρ,t). Multiplying the TDSE (Eq. (<ref>)) by ψ_rot^*(ρ,t) from the left and integrating over ρ yields the following equation: i∫ dρ|ψ_rot(ρ,t)|^2d/dtψ_el(r⃗,ρ,t)+i∫ dρψ_el(r⃗,ρ,t)ψ_rot^*(ρ,t)d/dtψ_rot(ρ,t) = ∫ dρψ_rot^*(ρ,t)[H_el+r⃗·E⃗(ρ,t)]ψ_el(r⃗,ρ,t)ψ_rot(ρ,t) -i∫ dρd/dt[ψ_rot^*(ρ,t)]ψ_el(r⃗,ρ,t)ψ_rot(ρ,t), where we have used Eq. (<ref>) in the last term applying the rotational Hamiltonian H_rot(t) to the bra-vector on the left. Note, that in doing so we have avoided the assumption of the adiabatic decoupling of the electronic and rotational degrees of freedom. Indeed, the adiabatic decoupling implies that H_rot does not affect the electronic wave-function, while here it clearly does so via the last term in Eq. (<ref>). In the following we shall use the adiabatic decoupling to derive the equation for the closed lift solution (see Eqs. (<ref>-<ref>)). It is instructive to introduce a "global" electron wave-function sensitive to evolution of rotational degrees of freedom, suggested by the structure of Eq. <ref>: Ψ_el(r⃗,T,t)def=∫ dρ |ψ_rot(ρ,T)|^2ψ_el(r⃗,ρ,t). We find that it is instructive to introduce the second time scale T emphasising the evolution of the rotational degree of freedom. 
However, this step is not necessary and in the next section we perform the derivation without introducing the second time scale. Expanding the laser field around some local orientation ρ_0: E_i(ρ,t)=E_i(ρ_0,t)+∇_ρ_0E_i(ρ_0,t)(ρ-ρ_0), and expanding ψ_el(r⃗,ρ,t) in the same way, we shall see that the interaction term linearly depends on ρ. Integrating the interaction term over ρ we will obtain average orientation ρ(T)=∫ dρ |ψ_rot(ρ,T)|^2ρ. Finally choosing ρ_0=ρ(T), we cancel out the gradient terms and obtain TDSE for the global wavefunction Ψ_el(r⃗,T,t): id/dtΨ_el(r⃗,T,t)+id/dTΨ_el(r⃗,T,t)=[H_el+r⃗·E⃗(ρ(T),t)]Ψ_el(r⃗,T,t). The choice of ρ_0 guarantees that in Eq. <ref> we have included the gradients wrt to laser field up to the first order. Note that ρ(T)=∫ dρ |ψ_rot(ρ,T)|^2ρ follows the evolution of the center of the rotational wave-packet and therefore is controlled by H_rot(T). Comparing the TDSE (Eq. <ref>) for the "local" wavefunction ψ_el^0(r⃗,ρ,t) at fixed geometry ρ=ρ(T) and TDSE (Eq. <ref>) for the "global" wavefunction Ψ_el(r⃗,T,t), we find that they only differ by the term id/dTΨ_el(r⃗,T,t). Thus, ignoring id/dTΨ_el(r⃗,T,t) in Eq. <ref> gives us the equation for ψ_el^0(r⃗,ρ(T),t). We can now show that the local and global solutions only differ by phase: Ψ_el(r⃗,T,t)= e^iS(T)ψ_el^0(r⃗,ρ(T),t). Substituting Eq. <ref> into Eq. <ref>, and taking into account that ∂/∂ tS(T)=0 (see derivation below) we find that S(T)=∫ dT ⟨ψ_el^0(r⃗,T,t)|∂/∂ Tψ_el^0(r⃗,T,t)⟩. Indeed, Eq. <ref> takes the following form: ie^iS(T)∂/∂ tψ_el^0(r⃗,ρ(T),t)-e^iS(T)ψ_el^0(r⃗,ρ(T),t)∂/∂ TS(T)+ie^iS(T)∂/∂ Tψ_el^0(r⃗,ρ(T),t)= [H_el+r⃗·E⃗(ρ(T),t)]e^iS(T)ψ_el^0(r⃗,ρ(T),t). Using Eq. <ref>, multiplying Eq. <ref> by ψ_el^0*(r⃗,ρ(T),t) and integrating over r⃗, we obtain: S(T)=i∫ dT⟨ψ_el^0(r⃗,ρ(T),t)|∂/∂ Tψ_el^0(r⃗,ρ(T),t)⟩. Since T enters the local Hamiltonian (Eq. <ref>) only via E⃗(ρ(T),t), S(T) can be expressed in an equivalent form: S(T)=∫ i⟨ψ_el^0(r⃗,E⃗,t)|∇⃗_E⃗ψ_el^0(r⃗,E⃗,t)⟩ dE⃗, emphasising its geometric origin. Since in our derivation of Eq.<ref> we expanded up to the first order around ρ(T), we can also perform such expansion in Eq.<ref>, yielding: Ψ_el(r⃗,T,t)= ψ_el(r⃗,ρ(T),t)+𝒪(ρ^2(T)-ρ^2(T)). Thus, Eq.<ref> connects the full solution for the electron wave-function ψ_el(r⃗,ρ(T),t) (see ansatz <ref>) with its local approximation ψ_el^0(r⃗,ρ(T),t). ψ_el(r⃗,ρ(T),t)= e^iS(T)ψ_el^0(r⃗,ρ(T),t). Note that ∂/∂ tS(T)=0. To prove it we can explicitly calculate the derivative ∂/∂ tS(T)=i∫ dT⟨∂/∂ tψ_el^0(r⃗,ρ(T),t)|∂/∂ Tψ_el^0(r⃗,ρ(T),t)⟩+i∫ dT⟨ψ_el^0(r⃗,ρ(T),t)|∂/∂ T∂/∂ tψ_el^0(r⃗,ρ(T),t)⟩= 2∫ dT Im{⟨ψ_el^0(r⃗,ρ(T),t)|∂/∂ T∂/∂ tψ_el^0(r⃗,ρ(T),t)⟩}=2∫ dT Im{⟨ψ_el^0(r⃗,ρ(T),t)|∂/∂ THψ_el^0(r⃗,ρ(T),t)⟩}, where we introduced a short-hand notation for H≡ H_el+r⃗·E⃗(ρ(T),t) and used Eq. <ref>. The last integral can be evaluated by parts : ∫ dT Im{⟨ψ_el^0(r⃗,ρ(T),t)|∂/∂ THψ_el^0(r⃗,ρ(T),t)⟩}=∫Im{⟨ψ_el^0(r⃗,ρ(T),t)|d [Hψ_el^0(r⃗,ρ(T),t)]⟩}= Im{⟨ψ_el^0(r⃗,ρ(T),t)| Hψ_el^0(r⃗,ρ(T),t)⟩}-∫ dT Im{⟨∂/∂ Tψ_el^0(r⃗,ρ(T),t)| Hψ_el^0(r⃗,ρ(T),t)⟩} The last equation can be simplified to yield: 2∫ dT Im{⟨ψ_el^0(r⃗,ρ(T),t)|∂/∂ THψ_el^0(r⃗,ρ(T),t)⟩}=Im{⟨ψ_el^0(r⃗,ρ(T),t)| Hψ_el^0(r⃗,ρ(T),t)⟩}=0 Note that the last term is equal to zero due to Hermiticity of the Hamiltonian H. It is always zero for cyclic processes. Eq. (<ref>) takes the following form: i∫ dρ|ψ_rot(ρ,t)|^2d/dtψ_el(r⃗,ρ,t)+i∫ dρψ_el(r⃗,ρ,t)d/dt|ψ_rot(ρ,t)|^2 = ∫ dρψ_rot^*(ρ,t)[H_el+r⃗·E⃗(ρ,t)]ψ_el(r⃗,ρ,t)ψ_rot(ρ,t). 
We shall now assume that the rotational wave-packet is narrow. This assumption can be realized in an experiment by preparing a narrow rotational wavepacket with a pump pulse and using a short probe pulse with time-dependent polarization, rotating with respect to the frame established by the pump pulse. Introducing ρ̇(t)≡∫ dρd/dt|ψ_rot(ρ,t)|^2ρ=d/dt∫ dρ |ψ_rot(ρ,t)|^2ρ=d/dtρ(t) and expanding E_i(ρ,t)=E_i(ρ_0,t)+∇_ρ_0E_i(ρ_0,t)(ρ-ρ_0), and ψ_el(r⃗,ρ,t) up to the first order around ρ(t), we obtain for the integral ∫ dρψ_el(r⃗,ρ,t)d/dt|ψ_rot(ρ,t)|^2 the following expansion: ∫ dρψ_el(r⃗,ρ,t)d/dt|ψ_rot(ρ,t)|^2= ∫ dρψ_el(r⃗,ρ(t),t)d/dt|ψ_rot(ρ,t)|^2+∇_ρψ_el(r⃗,ρ(t),t)·ρ̇(t). Here we took into account that ∫ dρ |ψ_rot(ρ,t)|^2=1, which leads to d/dt∫ dρ |ψ_rot(ρ,t)|^2=0. Combining this term with the other terms, we obtain the following TDSE for the "global" wave-function ψ_el(r⃗,ρ(t),t): i∂/∂ tψ_el(r⃗,ρ(t),t)+i∇_ρψ_el(r⃗,ρ(t),t)·ρ̇(t)=[H_el+r⃗·E⃗(ρ(t),t)]ψ_el(r⃗,ρ(t),t). Note that ρ(t)=∫ dρρ |ψ_rot(ρ,t)|^2 follows the evolution of the center of the rotational wave-packet and therefore is controlled by H_rot(t). Let's derive an approximate solution ψ_el^0(r⃗,ρ,t) for the case when the rotational Hamiltonian itself does not affect the electronic dynamics, i.e. H_rot(t)ψ_el^0(r⃗,ρ,t)ψ_rot(ρ,t)=ψ_el^0(r⃗,ρ,t)H_rot(t)ψ_rot(ρ,t). Applying this condition to Eq. (<ref>) yields iψ_rot(ρ,t)d/dtψ_el^0(r⃗,ρ,t)=[H_el+r⃗·E⃗(ρ,t)]ψ_el^0(r⃗,ρ,t)ψ_rot(ρ,t), demonstrating that ψ_el^0(r⃗,ρ,t) solves the "local" TDSE for fixed ρ: id/dtψ_el^0(r⃗,ρ,t)=[H_el+r⃗·E⃗(ρ,t)]ψ_el^0(r⃗,ρ,t). Comparing Eq. (<ref>) with Eq. (<ref>) we find that the "global" and the "local" solutions are related as follows: ψ_el(r⃗,ρ(t),t)= e^iS(ρ(t))ψ_el^0(r⃗,ρ(t),t). Substituting Eq. (<ref>) to Eq. (<ref>), we find: S(ρ(t))=i∫_ρ(t_0)^ρ(t) dρ' ⟨ψ_el^0(r⃗,ρ',t)|∇_ρ'ψ_el^0(r⃗,ρ',t)⟩, Note that, by the virtue of equations Eq. (<ref>) and Eq. (<ref>), the only reason why ψ_el^0(r⃗,ρ,t) depends on ρ is because the laser field in the molecular frame depends on ρ, i.e. ψ_el^0(r⃗,ρ,t)≡ψ_el^0(r⃗,E⃗(ρ),t). Indeed, ψ_el^0(r⃗,ρ,t) satisfies the TDSE for the electronic degrees of freedom Eq.<ref>. This equation, by definition, has no information about the rotational degree of freedom. Thus, the gradient of ψ_el^0(r⃗,ρ,t) with respect to ρ can be rewritten in an equivalent form, ∇_ρψ_el^0(r⃗,ρ,t)=∇_E⃗ψ_el^0(r⃗,ρ,t)∂E⃗/∂ρ, as long as the laser field E⃗ can be used to unambiguously define the laboratory frame with respect to which the molecular orientation ρ is expressed. For example, circularly (elliptically) polarized field E⃗ satisfies this requirement. Hence the derivative should be taken with respect to E⃗. This leads to the following equivalent expression for the geometric phase: S(ρ(t))=i∫_E⃗(t_0)^E⃗(t)⟨ψ_el^0(r⃗,ρ',t)|∇_E⃗ψ_el^0(r⃗,ρ',t)⟩ dE⃗≡∫_E⃗(t_0)^E⃗(t)A⃗(E⃗)· dE⃗, where A⃗(E⃗)=⟨ψ_el^0(r⃗,E⃗,t)|∇_E⃗ψ_el^0(r⃗,E⃗,t)⟩. Finally, since ∇_ρψ_el(r⃗,ρ(t),t)=e^iS(ρ(t))ψ_el^0(r⃗,ρ(t),t)∇_ρS= ψ_el(r⃗,ρ(t),t)A⃗, the Eq.<ref> takes the following form: i∂/∂ tψ_el(r⃗,ρ(t),t)+A⃗(ρ(t))·ρ̇(t)ψ_el(r⃗,ρ(t),t)=[H_el+r⃗·E⃗(ρ(t),t)]ψ_el(r⃗,ρ(t),t). §.§ Geometric phase associated with the path in the space of rotational densities We are now going to consider and alternative mechanism of geometric phase accumulation by the electronic wave-function. In this example, we select a manifold of initial conditions for rotational wave-functions ψ_rot^(n)(t_0), where index n enumerates different initial state vectors. 
We now consider a snapshot of this manifold at some later time t, which corresponds to the manifold of states ψ_rot^(n)(t) evolved from these different initial conditions according to Eq. <ref>. We shall derive the relationship between the local electronic wave-functions, which ignore the presence of multiple rotational wave-packets, and the global electronic wave-function. We start with the ansatz similar to Eq.<ref>: Ψ(r⃗,ρ,t)=ψ_el(r⃗,ρ,t)ψ_rot^(n)(ρ,t). §.§ Appendix C: Time-reversal symmetry To prove Eq. (<ref>) we extend the approach established in <cit.> and introduce an operator Ĝ: ∫dΘ_kD⃗_1^*(k⃗)·D⃗_2(k⃗) =∫dΘ_k⟨ψ_k⃗|r⃗|ψ_1⟩^*·⟨ψ_k⃗|r⃗|ψ_2⟩ =∫dΘ_k⟨ψ_1|r⃗|ψ_k⃗⟩·⟨ψ_k⃗|r⃗|ψ_2⟩ =∫dΘ_k⟨ψ_1|r_iP_k⃗r_i|ψ_2⟩ =⟨ψ_1|r_iP_kr_i|ψ_2⟩ =⟨ψ_1|Ĝ|ψ_2⟩ We can now test the properties of this operator with respect to time-reversal symmetry. Ĝ≡r̂_iP̂_kr̂_i Ĝ^† =r̂_i^†P̂_k^†r̂_i^† =r̂_iP̂_kr̂_i =Ĝ [P̂_k,T̂]=[r̂_i,T̂]=0⇒[Ĝ,T̂]=0 ⟨α|Ĝ|β⟩=⟨α̃|Ĝ|β̃⟩^* For |α⟩=|α̃⟩ and |β⟩=|β̃⟩, we get ⟨α|Ĝ|β⟩=⟨α|Ĝ|β⟩^*, thus ⟨α|Ĝ|β⟩∈ℝ. This results yields Eq. (<ref>). §.§ Appendix D: Orientational averaging, derivation of Eq. (<ref>) and Eq. (<ref>) We shall use the following expressions: ∫ dϱ l_iα l_jβ l_k γ l_l δ = 1/30( [ ijkl ikjl iljk ]) ( [ 4 -1 -1; -1 4 -1; -1 -1 4 ]) ( [ αβγδ; αγβδ; αδβγ ]) = 1/30[ 4ijklαβγδ -ikjlαβγδ -iljkαβγδ -ijklαγβδ +4ikjlαγβδ -iljkαγβδ -ijklαδβγ -ikjlαδβγ +4iljkαδβγ], i.e., ∫ dρ (a⃗^L ·u⃗^L) (b⃗^L ·v⃗^L) (w⃗^L ·c⃗^L) x⃗^L = 1/30[ 4(a⃗^L·b⃗^L)c⃗^L (u⃗^M·v⃗^M)(w⃗^M·x⃗^M) -(a⃗^L·c⃗^L)b⃗^L (u⃗^M·v⃗^M)(w⃗^M·x⃗^M) - (b⃗^L·c⃗^L)a⃗^L (u⃗^M·v⃗^M)(w⃗^M·x⃗^M) -(a⃗^L·b⃗^L)c⃗^L (u⃗^M·w⃗^M)(v⃗^M·x⃗^M) +4(a⃗^L·c⃗^L)b⃗^L (u⃗^M·w⃗^M)(v⃗^M·x⃗^M) - (b⃗^L·c⃗^L)a⃗^L (u⃗^M·w⃗^M)(v⃗^M·x⃗^M) -(a⃗^L·b⃗^L)c⃗^L (u⃗^M·x⃗^M)(v⃗^M·w⃗^M) -(a⃗^L·c⃗^L)b⃗^L (u⃗^M·x⃗^M)(v⃗^M·w⃗^M) +4 (b⃗^L·c⃗^L)a⃗^L (u⃗^M·x⃗^M)(v⃗^M·w⃗^M) ]. We derive Eq. (<ref>), which we list here for completeness: ⟨e⃗_Ω^L⟩^↕,↺=σ⟨(Ω⃗_ion·ẑ^L)e⃗_Ω^L⟩=σRυ⟨Ω⃗_ion·e⃗_Ω⟩ẑ^L, where Ω⃗_ion(k,ρ) =-|E_ω_k⃗1||E_ω_k⃗2|(d⃗_1·ℰ⃗_1)^*(d⃗_2·ℰ⃗_2)𝖯⃗^+_12(k)sin(ω_12τ), (Eq. (<ref>)). Substituting Eq. (<ref>) into Eq. (<ref>) we obtain: ⟨e⃗_Ω^L⟩^↕,↺=-|E_ω_k⃗1||E_ω_k⃗2|sin(ω_12τ)∫ dϱ(d⃗_1^L·ℰ⃗_1^L)^*(d⃗_2^L·ℰ⃗^L_2)(𝖯⃗^+L_12(k)·ẑ^̂L̂)e⃗_Ω^L Applying Eq. (<ref>) to Eq. (<ref>) we obtain ⟨e⃗_Ω⟩^↕,↺ =σ/30C[4(d⃗_2·d⃗_1)e⃗_Ω^M-(d⃗_2·e⃗_Ω^M)d⃗_1-(d⃗_1·e⃗_Ω^M)d⃗_2]·𝖯⃗^+_12(k)sin(ω_12τ)ẑ^L, which is up to notations 𝖯⃗^+_12(k)=υ|𝖯⃗^+_12(k)|e⃗_Ω is equivalent to Eq. (<ref>). Starting from Ω⃗_ion(k,ρ)·e⃗_Ω^M =-|E_ω_k⃗1||E_ω_k⃗2|(d⃗_1·ℰ⃗_1)^*(d⃗_2·ℰ⃗_2)(𝖯⃗^+_12(k)·e⃗_Ω^M)sin(ω_12τ), we shall now evaluate ⟨Ω⃗_ion·e⃗_Ω^M⟩. Note that we only need to average the part that depends on mutual orientations of the pump pulse and molecule: ∫ dϱ(d⃗_1·ℰ⃗_1)^*(d⃗_2·ℰ⃗_2)= 1/3(d⃗_1·d⃗_2)(ℰ⃗_1^*·ℰ⃗_2)=1/3(d⃗_1·d⃗_2)|ℰ_ω_1||ℰ_ω_2|, yielding Eq. (<ref>): ⟨Ω⃗_ion(k)·e⃗_Ω⟩ =-1/3συ C(d⃗_1·d⃗_2)|𝖯⃗^+_12(k)|sin(ω_12τ). §.§ Appendix E: Connection between the curvature and the photoionization yield In this section, we calculate the photoionization yield W≡∫ |a_k⃗|^2 dΘ_k for two-photon processes considered in this work. 
For example, consider a sequence of linear and circularly polarized pulses: ℰ⃗^L_i=ℰ_ω_ix̂^L, E⃗^L_j=E_ω_jk1/√(2)(x̂^L+iσŷ^L) The photoionization amplitude |a_k⃗|^2 can be written as: |a_k⃗|^2=1/2[|E_ω_1k|^2|(d⃗^L_1·ℰ⃗^L_1) |^2[|D⃗_1^L·x̂^L|^2+|D⃗^L_1·ŷ^L|^2+iσ(D⃗_1^*L×D⃗^L_1)·ẑ^L] +1/2[|E_ω_2k|^2|(d⃗^L_2·ℰ⃗^L_2) |^2[|D⃗^L_2·x̂^L|^2+|D⃗^L_2·ŷ^L|^2+iσ(D⃗_2^*L×D⃗^L_2)·ẑ^L] +Re{E_ω_1k^*E_ω_2k(d⃗^L_1·ℰ⃗^L_1)^*·(d⃗^L_2·ℰ⃗^L_2) [(D⃗_1^*L·x̂^L)(D⃗_2^L·x̂^L)+(D⃗_1^*L·ŷ^L)(D⃗_2^L·ŷ^L)+iσ(D⃗_1^*L×D⃗_2^L)·ẑ^L]} The dichroic (i.e. proportional to σ) photoionization yield is ∫|a_k⃗|^2 dΘ_k=σRe{E_ω_1k^*E_ω_2k(d⃗^L_1·ℰ⃗^L)^*·(d⃗^L_2·ℰ⃗^L) [e^iω_12τ∫ i(D⃗_1^*L×D⃗_2^L)dΘ_k]}·ẑ^L=σΩ⃗^L·ẑ^L, where Ω⃗ is given by Eq. (<ref>). Next, consider a sequence of two circularly polarized pulses of the form E⃗^L_j=E_ω_j1/√(2)(x̂^L+iσŷ^L), E⃗^L_k⃗j=E_ω_k⃗j1/√(2)(x̂^L+iσŷ^L) . The photoionization amplitude |a_k⃗|^2 can be written as: |a_k⃗|^2 = |d^L_1 ·E^L_1|^2 |D^L_1 ·E^L_k⃗1|^2 + |d^L_2 ·E^L_2|^2 |D^L_2 ·E^L_k⃗2|^2 + 2Re{(D^L_1 ·E^L_k⃗1)^* (d^L_1 ·E^L_1)^* (D^L_2 ·E^L_k⃗2) (d^L_2 ·E^L_2)} = |d^L_1 ·E^L_1|^2 |D^L_1 ·E^L_k⃗1|^2 + |d^L_2 ·E^L_2|^2 |D^L_2 ·E^L_k⃗2|^2 + 1/2Re{(D^L_1 ·E^L_k⃗1)^* (d^L_1 ·E^L_1)^* (D^L_2 ·E^L_k⃗2) (d^L_2 ·E^L_2)} +1/2Re{(d^L_1 ·E^L_1)^* (D^L_1 ·E^L_k⃗1)^* (d^L_2 ·E^L_2) (D^L_2 ·E^L_k⃗2)} The part of the photoionization yield proportional to σ or σ^2 is W_σ = ∫ dΘ_k C/2{ [(d_1^*L·x̂^L)(d^L_2·x̂^L)+(d_1^*L·ŷ^L)(d^L_2·ŷ^L)]iσ (D_1^*×D^L_2) ·ẑ^Le^iω_12τ + [ (D_1^*·x̂^L)(D^L_2·x̂^L)+(D_1^*·ŷ^L)(D^L_2·ŷ^L)] iσ (d_1^*L×d^L_2) ·ẑ^Le^iω_12τ + [ (d_2^*L·x̂^L)(D^L_1·x̂^L)+(d_2^*L·ŷ^L)(D^L_1·ŷ^L)]iσ (D_2^*L×d^L_1) ·ẑ^Le^iω_12τ + [ (D_2^*L·x̂^L)(d^L_1·x̂^L)+(D_2^*L·ŷ^L)(d^L_1·ŷ^L)]iσ (d_2^*L×D^L_1) ·ẑ^Le^iω_12τ - σ^2[(d_1^*L×d^L_2) ·ẑ^L] [(D_1^*L×D^L_2) ·ẑ^L]e^iω_12τ - σ^2[(D_2^*L×d^L_1) ·ẑ^L] [(d_2^*L×D^L_1) ·ẑ^L]e^iω_12τ }+ c.c. where C≡|E_ω_k⃗1||E_ω_1||E_ω_k⃗2||E_ω_2|. Let us prove Eq. (<ref>) for circular pump and probe fields, using the yield (Eq. (<ref>)) and the curvature (Eq. (<ref>)). To this end, we calculate the expression σΩ^L(ρ)·ẑ^L = C/2∫ dΘ_k{ +[(d^L_1·x̂^L)(d^L_2·x̂^L)+(d^L_1·ŷ^L)(d^L_2·ŷ^L)] iσ(D_1^*L×D^L_2) ·ẑ^L e^iω_12τ +[(D_1^*L·x̂^L)(D^L_2·x̂^L)+(D_1^*L·ŷ^L)(D^L_2·ŷ^L)] iσ(d^L_1×d^L_2) ·ẑ^L e^iω_12τ +[(d^L_1·x̂^L)(D^L_2·x̂^L)+(d^L_1·ŷ^L)(D^L_2·ŷ^L) ]iσ( D⃗_1^*L×d⃗^L_2)·ẑ^L e^iω_12τ +[(D_1^*L·x̂^L)(d^L_2·x̂^L)+(D_1^*L·ŷ^L)(d^L_2·ŷ^L) ]iσ( d⃗^L_1×D⃗^L_2)·ẑ^L e^iω_12τ -2σ^2(d^L_1×d^L_2)·ẑ^L(D_1^*L×D^L_2) ·ẑ^L e^iω_12τ -2σ^2 (D_1^*L×d_2) ·ẑ^L( d⃗_1×D⃗^L_2)·ẑ^L e^iω_12τ }+c.c. The yield in Eq. (<ref>) satisfies the equation W_σ - W_-σ = C∫ dΘ_k { [(d_1^*L·x̂^L)(d^L_2·x̂^L)+(d_1^*L·ŷ^L)(d^L_2·ŷ^L)]iσ (D_1^*L×D^L_2) ·ẑ^Le^iω_12τ + [ (D_1^*L·x̂^L)(D^L_2·x̂^L)+(D_1^*L·ŷ^L)(D^L_2·ŷ^L)] iσ (d_1^*L×d^L_2) ·ẑ^Le^iω_12τ + [ (d_2^*L·x̂^L)(D^L_1·x̂^L)+(d_2^*L·ŷ^L)(D^L_1·ŷ^L)]iσ (D_2^*L×d^L_1) ·ẑ^Le^iω_12τ + [ (D_2^*L·x̂^L)(d^L_1·x̂^L)+(D_2^*L·ŷ^L)(d^L_1·ŷ^L)]iσ (d_2^*L×D^L_1) ·ẑ^Le^iω_12τ. On the other hand we have σΩ_σ^L(ρ)·ẑ^L - (-σ)Ω_-σ^L(ρ)·ẑ^L = C∫ dΘ_k{ +[(d^L_1·x̂^L)(d^L_2·x̂^L)+(d^L_1·ŷ^L)(d^L_2·ŷ^L)] iσ(D_1^*L×D^L_2) ·ẑ^L e^iω_12τ +[(D_1^*L·x̂^L)(D^L_2·x̂^L)+(D_1^*L·ŷ^L)(D^L_2·ŷ^L)] iσ(d^L_1×d^L_2) ·ẑ^L e^iω_12τ +[(d^L_1·x̂^L)(D^L_2·x̂^L)+(d^L_1·ŷ^L)(D^L_2·ŷ^L) ]iσ( D⃗_1^*L×d⃗^L_2)·ẑ^L e^iω_12τ +[(D_1^*L·x̂^L)(d^L_2·x̂^L)+(D_1^*L·ŷ^L)(d^L_2·ŷ^L) ]iσ( d⃗^L_1×D⃗^L_2)·ẑ^L e^iω_12τ }+c.c., such that W_σ - W_-σ = C σΩ_σ^L(ρ)·ẑ^L - (-σ)Ω_-σ^L(ρ)·ẑ^L = σ(Ω_σ^L(ρ) + Ω_-σ^L(ρ))·ẑ^L. 
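The orientational averages entering these expressions, in particular the fourth-rank identity of Appendix D with the 1/30 weight matrix, can be verified numerically by averaging products of direction cosines over uniformly sampled rotations. A minimal sketch (our illustration, assuming SciPy is available; it is not part of the original derivation):

import numpy as np
from scipy.spatial.transform import Rotation

def analytic(i, j, k, l, a, b, c, d):
    # 1/30 * (d_ij d_kl, d_ik d_jl, d_il d_jk) . M . (d_ab d_cd, d_ac d_bd, d_ad d_bc)
    D = lambda x, y: 1.0 if x == y else 0.0
    g = np.array([D(i, j) * D(k, l), D(i, k) * D(j, l), D(i, l) * D(j, k)])
    f = np.array([D(a, b) * D(c, d), D(a, c) * D(b, d), D(a, d) * D(b, c)])
    M = np.array([[4, -1, -1], [-1, 4, -1], [-1, -1, 4]]) / 30.0
    return g @ M @ f

# Uniformly sampled rotation matrices play the role of the direction cosines l_ia.
R = Rotation.random(200_000, random_state=1).as_matrix()

for i, j, k, l, a, b, c, d in [(0, 0, 1, 1, 2, 2, 1, 1), (0, 1, 0, 1, 0, 0, 1, 1)]:
    mc = (R[:, i, a] * R[:, j, b] * R[:, k, c] * R[:, l, d]).mean()
    print(f"MC: {mc:+.4f}   analytic: {analytic(i, j, k, l, a, b, c, d):+.4f}")

Other index combinations can be checked in the same way; the Monte Carlo estimates agree with the analytic values within sampling error.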
§.§ Appendix E: Symmetric contributions of the curvature in bound and continuum states to enantio-sensitive orientation A symmetric contribution of the bound and continuum curvature to the enantio-sensitive orientation of molecular cations emerges in the case where both the excitation and ionization is performed by circularly polarized pulses and involve a single complex intermediate bound state. In this case, an oriented molecular vector v⃗: <v⃗> =|A^(2)|^2∫dΩ_k^M∫dϱ (v⃗^L·ẑ^L)|D⃗^L·F⃗^L|^2|d⃗^L·F⃗^L|^2. Here <...> corresponds to orientational averaging, D stands for photoionization dipole matrix element, while d stands for photoexcitation dipole matrix element. |d⃗^L·F⃗^L|^2 =|(d⃗_r^L+id⃗_i^L)·F⃗^L|^2 =(d⃗_r^L·F⃗^L+id⃗_i^L·F⃗^L)^*(d⃗_r^L·F⃗^L+id⃗_i^L·F⃗^L) =|d⃗_r^L·F⃗^L|^2+|d⃗_i^L·F⃗^L|^2+i(d⃗_r^L·F⃗^L*)(d⃗_i^L·F⃗^L)-i(d⃗_i^L·F⃗^L*)(d⃗_r^L·F⃗^L) =|d⃗_r^L·F⃗^L|^2+|d⃗_i^L·F⃗^L|^2+i(d⃗_r^L×d⃗_i^L)·(F⃗^L*×F⃗^L) =|d⃗_r^L·F⃗^L|^2+|d⃗_i^L·F⃗^L|^2+1/2(d⃗^L*×d⃗^L)·(F⃗^L*×F⃗^L) After orientation averaging the first two terms read as: ∫dϱ (v⃗^L·ẑ^L)|D⃗^L·F⃗^L|^2|d⃗_r^L·F⃗^L|^2 =1/30d_r^2{[2v⃗-(v⃗·d̂_r)d̂_r]·(D⃗^*×D⃗)}{[ẑ·(F⃗^*×F⃗)]|F⃗|^2} ∫dϱ (v⃗^L·ẑ^L)|D⃗^L·F⃗^L|^2|d⃗_i^L·F⃗^L|^2=1/30d_i^2{[2v⃗-(v⃗·d̂_i)d̂_i]·(D⃗^*×D⃗)}{[ẑ·(F⃗^*×F⃗)]|F⃗|^2} Their sum yields ∫dϱ (v⃗^L·ẑ^L)|D⃗^L·F⃗^L|^2|d⃗_r^L·F⃗^L|^2+∫dϱ (v⃗^L·ẑ^L)|D⃗^L·F⃗^L|^2|d⃗_i^L·F⃗^L|^2 =1/30{[2|d⃗|^2v⃗-Re{(v⃗·d⃗^*)d⃗}]·(D⃗^*×D⃗)}{[ẑ·(F⃗^*×F⃗)]|F⃗|^2} Orientation averaging <cit.> of the third term in Eq. (<ref>) yields 1/2∫dϱ (v⃗^L·ẑ^L)|D⃗^L·F⃗^L|^2[(d⃗^L*×d⃗^L)·(F⃗^L*×F⃗^L)]=1/2g⃗^(4)· M^(4)f⃗^(4) where (see Eq. (19) in <cit.>) g⃗^(4)=([ (v⃗·D⃗^*)[D⃗·(d⃗^*×d⃗)]; (v⃗·D⃗)[D⃗^*·(d⃗^*×d⃗)]; [v⃗·(d⃗^*×d⃗)](D⃗^*·D⃗) ]) M^(4)=1/30([ 4 -1 -1; -1 4 -1; -1 -1 4 ]) f⃗^(4)=([ (ẑ·F⃗^*)[F⃗·(F⃗^*×F⃗)]; (ẑ·F⃗)[F⃗^*·(F⃗^*×F⃗)]; [ẑ·(F⃗^*×F⃗)](F⃗^*·F⃗) ])=([ 0; 0; [ẑ·(F⃗^*×F⃗)]|F⃗|^2 ]) g⃗^(4)· M^(4)f⃗^(4) =1/30([ (v⃗·D⃗^*)[D⃗·(d⃗^*×d⃗)]; (v⃗·D⃗)[D⃗^*·(d⃗^*×d⃗)]; [v⃗·(d⃗^*×d⃗)](D⃗^*·D⃗) ])·([ -1; -1; 4 ])[ẑ·(F⃗^*×F⃗)]|F⃗|^2 =1/30{-(v⃗·D⃗^*)[D⃗·(d⃗^*×d⃗)]-(v⃗·D⃗)[D⃗^*·(d⃗^*×d⃗)] +4[v⃗·(d⃗^*×d⃗)](D⃗^*·D⃗)}[ẑ·(F⃗^*×F⃗)]|F⃗|^2 We can factorize either v⃗, as in 1/30{ -[D⃗·(d⃗^*×d⃗)]D⃗^*-[D⃗^*·(d⃗^*×d⃗)]D⃗+4(d⃗^*×d⃗)|D⃗|^2}·v⃗ =1/302{ 2(d⃗^*×d⃗)|D⃗|^2-Im{[D⃗·(d⃗^*×d⃗)]D⃗^*}}·v⃗ or d⃗^*×d⃗, as in 1/30{ -(v⃗·D⃗^*)D⃗-(v⃗·D⃗)D⃗^*+4|D⃗|^2v⃗}·(d⃗^*×d⃗) = 1/302{ 2|D⃗|^2v⃗-Re{(v⃗·D⃗^*)D⃗}}·(d⃗^*×d⃗) The later is analogous to the sum of the first two terms in Eq. (<ref>). Putting everything together we get <v⃗ >=|A^(2)|^2{∫dΩ_kG(k⃗)}{[ẑ·(F⃗^*×F⃗)]|F⃗|^2} G =[2|d⃗|^2v⃗-Re{(v⃗·d⃗^*)d⃗}]·(D⃗^*×D⃗) +[2|D⃗|^2v⃗-Re{(v⃗·D⃗^*)D⃗}]·(d⃗^*×d⃗) The two terms correspond to a linear-circular photon absorption, and circular-linear photon absorption. In both terms, the first term in square brackets represents the isotropic term, while the second term reflects the alignment due to absorption of the first photon. Note that in this expression d⃗ and D⃗ appear on the same footing. §.§ Appendix F: Berry curvature for circular pump and probe fields The wave function expressed in first order perturbation theory is |ψ⟩ = |0⟩ + i ∑_jd⃗_j ·E⃗_j |k⟩ - ∑_j ∫ dΘ_k (D⃗_j ·E⃗_kj)(d⃗_j ·E⃗_j) e^-iω_jτ|k⃗⟩. 
The corresponding curvature is Ω = i ⟨∇_E ψ|×|∇_E ψ⟩ |∇_E ψ⟩ = i ∑_j |E_ω_j|d⃗_j |k⟩ - ∑_j |E_ω_j||E_ω_k⃗j|∫ dΘ_k[D⃗_j (d⃗_j ·E⃗_j) + (D⃗_j ·E⃗_kj)d⃗_j] e^-iω_jτ|k⃗⟩ Using the orthogonality of the basis, we get -iΩ⃗ = ∑_j |E_ω_j|d⃗_j^* ×d⃗_j + ∑_jl |E_ω_j||E_ω_k⃗j||E_ω_l||E_ω_k⃗l| ∫ dΘ_k[D⃗_j^* (d⃗_j ·E⃗_j)^* + (D⃗_j ·E⃗_kj)^*d⃗_j^*] ×[D⃗_l (d⃗_l ·E⃗_l) + (D⃗_l ·E⃗_kl)d⃗_l] e^iω_jlτ Since the dipole moments d⃗_l are real the connection simplifies to -iΩ⃗ = ∑_jl|E_ω_j||E_ω_k⃗j||E_ω_l||E_ω_k⃗l|∫ dΘ_k[D⃗_j^* (d⃗_j ·E⃗_j^*) + (D⃗_j ·E⃗_kj)^*d⃗_j] ×[D⃗_l (d⃗_l ·E⃗_l) + (D⃗_l ·E⃗_kl)d⃗_l] e^iω_jlτ We employ the identity ∫ dΘ_kD⃗_j ×D⃗_j^* = 0 , such that terms satisfying j = l vanish, -iΩ⃗ = ∑_j≠ l|E_ω_j||E_ω_k⃗j||E_ω_l||E_ω_k⃗l| ∫ dΘ_k[ (d⃗_j ·E⃗_j^*)(d⃗_l ·E⃗_l)(D⃗_j^*×D⃗_l) + (d⃗_j ·E⃗_j^*)(D⃗_l ·E⃗_kl)(D⃗_j^*×d⃗_l) + (D⃗_j ·E⃗_kj)^*(d⃗_l ·E⃗_l)(d⃗_j×D⃗_l ) + (D⃗_j ·E⃗_kj)^*(D⃗_l ·E⃗_kl)(d⃗_j×d⃗_l ) ] e^iω_jlτ. Due to the anti-symmetry of the cross product, the curvature can be expressed as Ω⃗ = -2C∫ dΘ_k[ Im{(d⃗_1 ·E⃗_1^*)(d⃗_2 ·E⃗_2)(D⃗_1^*×D⃗_2)e^iω_12τ} + Im{(d⃗_1 ·E⃗_1^*)(D⃗_2 ·E⃗_k2)(D⃗_1^*×d⃗_2)e^iω_12τ} + Im{(D⃗_1 ·E⃗_k1)^*(d⃗_2 ·E⃗_2)(d⃗_1×D⃗_2 )e^iω_12τ} + Im{(D⃗_1 ·E⃗_k1)^*(D⃗_2 ·E⃗_k2)(d⃗_1×d⃗_2 )e^iω_12τ}] , where we defined C = |E_ω_1||E_ω_k⃗1||E_ω_2||E_ω_k⃗2| .
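The expansions in the last two appendices repeatedly use the elementary decomposition |D⃗·E⃗|^2 = 1/2[|D⃗·x̂|^2 + |D⃗·ŷ|^2 + iσ(D⃗^*×D⃗)·ẑ] for a circularly polarized field E⃗ = (x̂+iσŷ)/√2. A quick numerical check with a random complex vector (a sketch added for illustration, not part of the original text):

import numpy as np

rng = np.random.default_rng(0)
sigma = 1                                           # helicity, +1 or -1
e_circ = np.array([1.0, 1j * sigma, 0.0]) / np.sqrt(2.0)

D = rng.normal(size=3) + 1j * rng.normal(size=3)    # random complex "dipole" vector

lhs = abs(np.dot(D, e_circ)) ** 2                   # |D . E|^2, no conjugation of D
rhs = 0.5 * (abs(D[0]) ** 2 + abs(D[1]) ** 2
             + 1j * sigma * np.cross(np.conj(D), D)[2])
print(lhs, rhs.real, abs(rhs.imag) < 1e-12)         # lhs equals rhs.real

The term proportional to σ is exactly the cross-product combination that survives in the dichroic yield and in the curvature above.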
http://arxiv.org/abs/2409.02822v2
20240904154229
Language Understanding as a Constraint on Consensus Size in LLM Societies
[ "Giordano De Marzo", "Claudio Castellano", "David Garcia" ]
physics.soc-ph
[ "physics.soc-ph" ]
[email protected] ^1University of Konstanz, Universitaetstrasse 10, 78457 Konstanz, Germany ^2Centro Ricerche Enrico Fermi, Piazza del Viminale, 1, I-00184 Rome, Italy. ^3Complexity Science Hub, Josefstaedter Strasse 39, 1080, Vienna, Austria ^4Istituto dei Sistemi Complessi (ISC-CNR), Via dei Taurini 19, I-00185 Rome, Italy § ABSTRACT The applications of Large Language Models (LLMs) are going towards collaborative tasks where several agents interact with each other like in an LLM society. In such a setting, large groups of LLMs could reach consensus about arbitrary norms for which there is no information supporting one option over another, regulating their own behavior in a self-organized way. In human societies, the ability to reach consensus without institutions has a limit in the cognitive capacities of humans. To understand if a similar phenomenon characterizes also LLMs, we apply methods from complexity science and principles from behavioral sciences in a new approach of AI anthropology. We find that LLMs are able to reach consensus in groups and that the opinion dynamics of LLMs can be understood with a function parametrized by a majority force coefficient that determines whether consensus is possible. This majority force is stronger for models with higher language understanding capabilities and decreases for larger groups, leading to a critical group size beyond which, for a given LLM, consensus is unfeasible. This critical group size grows exponentially with the language understanding capabilities of models and for the most advanced models, it can reach an order of magnitude beyond the typical size of informal human groups. Language Understanding as a Constraint on Consensus Size in LLM Societies David Garcia^1,3 September 9, 2024 ========================================================================= § INTRODUCTION Large Language Models (LLMs) have proven capabilities for particular applications, such as summarization <cit.> or sentiment analysis <cit.>, but can also be used in group settings where several heterogeneous agents interact with each other to tackle more complex collaborative tasks <cit.>. This goes beyond ensembles of models for one task <cit.>, for example in collaboration setups where multiple LLMs have different roles and tasks, such as AutoGPT [https://github.com/Significant-Gravitas/AutoGPT] and more recently Microsoft's AutoGen <cit.>. In a more organic way, AI-powered devices and assistants, such as Siri [https://openai.com/index/openai-and-apple-announce-partnership/] or the Humane AI Pin, can perform everyday tasks in interaction with each other, for example coordinating events or negotiating prices. As we move towards a society where intelligent machines interact with each other, it becomes important to understand their ability to agree with each other in large groups. This can motivate new applications but also identify risks stemming from undesired collective behavior of machines. For example, trading bots interacting through the stock market can lead to flash crashes <cit.>. Current research on the behavior of LLMs has mostly focused on their behavior in isolation <cit.> and collective behaviors have been explored less <cit.>, so far with a focus on social simulation of network structures <cit.>, opinion and information spreading <cit.> and online interaction <cit.>. 
To prepare for large numbers of interacting LLMs, we need to understand if they can display collective alignment, such as emerging consensus, what determines the abilities of LLMs to coordinate, and at what scale that can happen. Social groups can reach consensus on behavioral norms even when there is no preference or information supporting one option over another. Animals and early human groups develop and sustain those norms when each individual in the group knows the identity and behavior of all other members of the group. This leads to a scaling of group size with brain structure, where human groups reach sizes between 150 and 300 <cit.>. Human societies have built institutions and other ways of decision making to reach higher scales, but the cognitive limit of keeping a scale of about 250 contacts remains even in an online society <cit.>. This insight can be translated to LLM societies, where consensus could emerge in arbitrary norms and where the cognitive abilities of LLM agents could play a role in the size of consensus. These are new questions of AI anthropology where the insights and methods for the previous study of human societies can be applied to study the size and complexity of LLM societies. In this article, we investigate the ability of groups of LLMs to reach consensus about norms for which there is no information supporting one option over another. The emergence of consensus is a foundational aspect of social systems, where individual interactions lead to the formation of a unified agreement or shared understanding without the need for a central authority or structure <cit.>. We develop a framework to test if groups of LLMs can reach consensus and use it to analyze a benchmark of proprietary and open-source models. We apply insights from previous opinion dynamics research to understand the emergence of consensus in LLM societies, which allows us to measure a majority force that enables consensus in groups of LLMs. This majority force is a function of language understanding capabilities of models and group size, where consensus might not emerge beyond a critical size for a given LLM. Furthermore, we test if the most capable LLMs are able to reach consensus at scales that go beyond the spontaneous consensus formation of human groups. § RESULTS §.§ Opinion Dynamics of Large Language Models To investigate the opinion dynamics of large language models, we perform simulations using agents guided by various LLMs, as for instance models from the GPT, Claude, and Llama families. Simulations run as follows. Each agent is assigned an initial opinion randomly chosen from a binary set (e.g., "Opinion A" and "Opinion B"), where one is chose as the norm for reference. At each time step, a single agent is randomly selected to update their opinion. The selected agent receives a list of all other agents and their current opinions and is then prompted to choose their new opinion based on this information. This approach mirrors binary opinion dynamics models such as the voter model or Glauber dynamics <cit.>, where agents update their opinions based on peer interactions only. However, unlike traditional Agent-Based Models (ABMs) with predefined, hard-coded opinion update rules, here agents are allowed to autonomously decide their opinions. More details on this simualtion framework can be found in the Methods section. In order to follow the evolution of the system, we define the average group opinion m as m=1/N∑_is_i=N_1-N_2/N. 
Here s_i is the opinion of agent i and we adopt the convention that the first opinion corresponds to s_i=+1, while the second to s_i=-1 (i.e. in favor and against the norm). We also introduce N_1 and N_2 as the number of agents supporting opinion 1 and 2, respectively, while N is the total number of agents. In these terms we can define the consensus level C=|m| that quantifies the level of agreement among individuals. Full consensus corresponds to C=1, while C=0 means that the system is split in two groups of equal size and different opinion. Three scenarios can happen in the evolution of C(t): * C can converge to 1 (m converges to ±1), meaning that all agents are aligned and consensus is reached; * C can oscillate around a value greater (smaller) than 0 without ever reaching 1. In this case a partial consensus is reached, but not as a collective. * if C keeps fluctuating around zero, consensus is completely absent and the group is constantly in a disordered state. We show in Fig. <ref> the evolution of the consensus level for five different models and a group size of N=50 LLM agents. We also show the boxplot of C(t=10) over 20 realizations. The two most advanced models we considered, Claude 3 Opus and GPT-4 Turbo, reach consensus in all simulations, which corresponds to |m|=1 in the boxplot. Conversely, smaller models (Llama 3 70b, Claude 3 Haiku and GPT-3.5 Turbo) do not reach consensus in any of the simulations and only reach partial levels of agreement with |m|<0.5. We can get a deeper understanding of the underlying opinion dynamic process by looking at the adoption probability P(m), defined as the probability of an agent to support the norm as function of the average group opinion m. The right panel of Fig. <ref> shows P(m) for ten of the most popular LLMs (we consider N=50 and we set the model temperature to T=0.2). The most advanced models (GPT-4 family, Llama 3 70b, Claude 3 Sonnet and Opus) show a stronger tendency to follow the majority, with more pronounced S-curves. The adoption probability is an increasing function of m that saturates to high values (low values) when m=1 (m=-1). On the other hand, smaller models (GPT-3.5, Claude 3 Haiku), have a less pronounced tendency to follow the majority, with GPT-3.5 going against the majority for small values of m. Adoption probabilities can be approximated by the function P(m)=1/2*tanh*β· m+1. The parameter β, that we call majority force, regulates the level of randomness in the system. For β=0 each agent behaves fully randomly (the new opinion is selected by coin-tossing) and no consensus can be reached. For β=∞ agents always align with the global majority and consensus is reached very quickly. The agreement with (<ref>) is made fully evident by fitting the parameter β empirically for each model and plotting P(m^*) as a function of the rescaled average opinion m^*=m·β, where all models except GPT3.5 have a good agreement with the function. As shown in the inset of Fig. <ref>, all adoption probabilities collapse on the same curve. The adoption probability of (<ref>) is analogous to the case of the simplest spin system in physics of complex systems, the Curie-Weiss model <cit.>, where atoms interact and their spins can align in a magnet as opinions can align in consensus. §.§ Language understanding in consensus formation The fit of the function P(m^*), to the opinion dynamics of models highlights that their differences are captured by the majority force parameter β. The left panel of Fig. 
<ref> shows the values of β for N=50 versus the MMLU benchmark of each model, which measures the language understanding and cognitive capabilities of LLMs <cit.>. There is a clear monotonic relationship between MMLU and β, with a correlation coefficient of 0.75 (p-value 0.01). This means that models with higher language understanding capabilities tend to exhibit a stronger tendency towards consensus, but none of the models show consistent behavior against the majority. This is directly related to language understanding and not just context window length as a plain "memory size", as the context window length L has a weaker and less significant correlation with β (correlation 0.49 with p-value 0.15). More details are reported in the Appendix. In animal (human and non-human) societies, group size plays a crucial role, with a progressive loss of norm stability as the number of individuals increases. We hypothesize that a similar phenomenon may occur also in LLMs societies, where agents might have their ability to understand the norm limited by the amount of information they have to process on the rest of the group. The right panel of Fig. <ref> shows the estimated β as a function of N for different LLMs. Given the cost of performing the simulations with proprietary models, we selected a subset of the LLMs we analyzed before to probe the space of possible values of β as a function of N. Independently of the model, we observe a general tendency where the larger the group, the lower the β and the weaker is the majority force. Models with a low MMLU tend to reach low values of β already for N of the order of few tens of agents, while the most advanced models still present a substantial majority force even for N=1000. §.§ Critical Consensus Size As we discussed above, the adoption probability curves of the models are the same as the Curie-Weiss model on a fully connected network. A well known result for this model is that β_c=1 represents a transition point: for β<1 the system shows no sign of consensus, while when β>1 order emerges and the agents can coordinate and reach consensus <cit.>. As a consequence, we expect a change in the behavior of the LLM society depending on the group size and also on the specific LLM driving the agents. Such a result is valid in the limit of very large systems, while here we are dealing with relatively small sizes. However, we can still observe the effects of this transition by inspecting how the consensus time i.e., the average time needed to reach consensus starting from a random state, grows for larger N. If β were infinite, the consensus time would grow logarithmically with N. Instead, as shown on Fig. <ref> using Llama 3 70b as an example, the consensus time shows two different regimes. When N is small the consensus time increases slowly, while for larger N, consensus time grows extremely fast. The transition between the two regimes occurs for a critical size N_c, that we define as β(N_c)=β_c=1. For Llama 3 70b N_c≈50. Note that as we detail in the methods, this value of N_c is actually an upper bound. On the other hand, a more capable model like GPT-4 Turbo does not present any deviation from the linear regime, as shown on Fig. <ref>, with the same scaling pattern as the Curie-Weiss model. By studying the values of β as a function of N we can determine, for each model, the critical consensus size N_c where β crosses the line at β=1 and that determines the size above which a model does not reach consensus. 
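This threshold behaviour can be illustrated without any LLM calls by running Glauber dynamics with the adoption probability of Eq. (<ref>). The following minimal sketch (our illustration, not code from the original implementation; it assumes only NumPy) contrasts a sub-critical and a super-critical majority force for a group of N=50 agents:

import numpy as np

def consensus_time(n_agents, beta, rng, max_sweeps=500):
    """Sweeps until |m| = 1 under the tanh adoption rule; None if the cap is hit."""
    s = rng.choice([-1, 1], size=n_agents)
    for sweep in range(1, max_sweeps + 1):
        for _ in range(n_agents):                   # one sweep ~ one update per agent
            i = rng.integers(n_agents)
            m_others = (s.sum() - s[i]) / (n_agents - 1)   # opinions of the others
            p_plus = 0.5 * (np.tanh(beta * m_others) + 1.0)
            s[i] = 1 if rng.random() < p_plus else -1
        if abs(s.sum()) == n_agents:                # full consensus, |m| = 1
            return sweep
    return None

rng = np.random.default_rng(42)
for beta in (0.5, 2.0):                             # below and above beta_c = 1
    times = [consensus_time(50, beta, rng) for _ in range(10)]
    print(f"beta = {beta}: consensus times = {times}")

Below β_c the cap on sweeps is typically reached without full consensus ever occurring, while above β_c full consensus is reached within a few tens of sweeps, mirroring the two regimes of the consensus time discussed above.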
You can see some examples of this on the right panel of Fig. <ref>. In the case of the most powerful models, provide a lower bound for this quantity as we were not able to find cases with β<1 even for N=1000. Anthropology provides an expectation on how β depends on the language understanding capabilities of models, as primates exhibit a monotonic relationship between neocortex ratio and typical group size <cit.>. We report the values of the critical consensus size N_c on the right panel of Fig. <ref>, where we plot N_c as a function of the MMLU benchmark. We also plot, in the same figure, the critical consensus size for humans as Dunbar's limit N_c≈ 150 <cit.>. LLMs display a similar trend as observed in anthropology, where language understanding capability predicts the limit of consensus size, i.e. the size of groups above which consensus becomes unlikely. Remarkably, the simplest models and humans are well aligned along an exponential growing N_c, suggesting that the capacity to reach consensus in large groups is connected to cognitive capabilities both in humans and in LLMs, when measure as language understanding ability. However, the most modern and advanced LLMs go beyond this exponential scaling, reaching a super-human coordination capability despite having a human-level MMLU performance. For instance both GPT-4 and GPT-4 Turbo are well above β_c also for N=1000, the largest system we considered and substantially beyond Dunbar's number. § DISCUSSION Human societies are characterized by emergent behavior that cannot be understood just by studying individuals in isolation. Consensus is one of such emergent group capabilities that has proven crucial in the development of languages, widely accepted norms and religions. For humans, the Dunbar number N_c≈ 150 gives the maximal number of personal relations we can cultivate and thus also the maximal group size in which consensus, intended as the emergence of common social norms, can exist. Studying humans and other primates, researchers have identified a power-law scaling connecting the neocortex ratio to the average group size <cit.>, thus proving the link between cognitive capabilities and the development of large societies. LLMs are attracting a growing interests in the social sciences for their ability to mimic humans, both at the individual and, as recent studies suggest, at the group level. Indeed, like humans, LLMs show emergent group properties that were not directly coded in their training process. As we argue in this paper, consensus is one of these properties, with LLMs showing striking similarities with primates including humans. As a first result, we showed that all most advanced LLMs are characterized by a majority-following tendency described by a universal function with a single parameter β, the majority force. Remarkably, this function depends on the specific model and is the same describing magnetic spin systems. Different models typically have a different β, with the less sophisticated ones showing a smaller β, i.e., a less pronounced majority force and thus a more stochastic behavior that prevents consensus. The majority force depends not only on the model, but also on the group size: it tends to be larger in smaller groups. This evidence and the equivalence with the Curie-Weiss models allowed us to compute the “Dunbar number” of LLMs, a threshold above which societies composed by these artificial animals can no longer reach a consensus. 
While the less sophisticated models show a human-like scaling, the most sophisticated LLMs are capable to reach consensus in groups of size that go beyond what humans can do without explicit rules, despite having human-like cognitive capabilities in language-based tasks. These results are important due to the relevance of collective behavior and coordination in social contexts. More research is needed to understand other conditions that lead to the emergence of LLM consensus, especially when different models coexist or when some agents have privileged data access. The ability of LLMs to reach consensus can be beneficial, for example when looking to coordinate group activities of LLMs where incentives might not be aligned. When there is no information to guide how to behave, LLM societies could reach their own social norms that regulate their behavior to be predictable by other LLMs despite that lack of objectivity. However, this also poses a threat, as these norms might not be aligned with human values or could pose situations of coordinated behavior that threaten the integrity of a system, such as the case of flash crashes due to trading bots. Future research on AI anthropology can understand better how this kind of norms emerge in actionable scenarios, going beyond the idealized situation we studied here and illustrating both the promise and peril of LLMs agents collaborating within our society. § METHODS §.§ Opinion dynamics simulations In order to simulate an opinion dynamics process we implement a voter model-like process with the only difference being the use of LLMs. * At each infinitesimal time step dt=1/N an agent is randomly selected; * the agent is given the full list of all agents in the system, each identified by a random name, and the opinions they support. Note that the opinion of the agent itself, like in the voter model, is not relevant; * the selected agent is then asked to reply with the opinion it wants to support and its opinion is updated correspondingly; * the process is then iterated till consensus is reached or till the maximal number of updates is performed. Note that in one time step t→ t+1 we thus perform N updates (being dt=1/N), so that, on average, each LLMs is selected at least once. In all our simulations we set the initial collective opinion m to zero, meaning that there are initially the same number of agents supporting the two opinions. In order to practically perform the opinion simulations, following the framework introduced in <cit.>, we exploit the prompt below 8cm Below you can see the list of all your friends together with the opinion they support. You must reply with the opinion you want to support. The opinion must be reported between square brackets. X7v A keY B 91c B gew A 4lO B ... Here A and B are the opinions names, which, in general, do not play any role. In all our simulations we used the opinion names k and z and we tested the effect of using different opinion names, obtaining no major difference. It's important to remark that most LLMs present an opinion bias, tending to prefer an opinion name over the other. This behavior is particularly strong when the opinion names have an intrinsic meaning, like for instance “Yes” and “No”. In this case LLMs have a strong preference toward the more “positive” opinion, tending to strongly prefer “Yes” over “No”. For this reason it is important to use random letters or random combinations of letters as opinion names. Even doing so, small biases are typically always present. 
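For concreteness, one elementary update of this protocol, including the opinion-name randomization discussed next, can be sketched as follows; the function query_llm is a placeholder for whichever chat-completion client is actually used and is not part of the original implementation:

import random

PROMPT = ("Below you can see the list of all your friends together with the opinion "
          "they support. You must reply with the opinion you want to support. "
          "The opinion must be reported between square brackets.\n{friends}")

def query_llm(prompt, temperature=0.2):
    # Placeholder: send `prompt` to the chosen chat model and return its text reply.
    raise NotImplementedError

def update_one_agent(opinions, labels=("k", "z")):
    """One elementary update (dt = 1/N): a randomly selected agent sees everyone
    else's opinion and replies with the opinion it wants to support."""
    if random.random() < 0.5:            # random label swap mitigates residual name bias
        labels = labels[::-1]
    agent = random.choice(list(opinions))
    friends = "\n".join(f"{name} {labels[0] if s == +1 else labels[1]}"
                        for name, s in opinions.items() if name != agent)
    reply = query_llm(PROMPT.format(friends=friends))
    chosen = reply.split("[")[-1].split("]")[0].strip()
    opinions[agent] = +1 if chosen == labels[0] else -1

# Iterating N such updates advances the simulation by one time step t -> t + 1;
# the collective opinion is m = sum(opinions.values()) / len(opinions).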
These residual biases, however, are not as pronounced as in the situations described above, and they can easily be removed. We do so by performing a random shuffling of the opinion names at each iteration. For instance, at t=0 the first opinion may be called k and the second z, while at t=dt these names may swap with probability 0.5, meaning that the first opinion will now be called z and the second k. More details about this procedure are reported in the Appendix. §.§ Details on the LLMs In all the simulations reported in the main text we used a model temperature of T=0.2. As we detail in the Appendix, there are no major differences when using different values of T, but a low model temperature ensures reliability in the output format. We also report in Table <ref> the detailed list and model versions of all the LLMs we used. §.§ Curie-Weiss Model The Curie-Weiss (CW) model is arguably the simplest spin model. It describes a system of N atoms with ferromagnetic interactions, i.e., atoms whose spins tend to align and can take only two states, either s_i=+1 (up) or s_i=-1 (down). Moreover, these spins interact on a fully connected network, meaning that each of them is influenced by all the other spins. The CW model is therefore the mean-field limit of the well-known Ising model. The order parameter of this model is the magnetization m, the equivalent of our collective opinion, defined as the average of the spin values, m=⟨ s_i⟩. The mapping between our LLM-based opinion simulations and the CW model derives from the transition probability defined by Eq. (<ref>). This expression is indeed equivalent to Glauber dynamics <cit.>, which allows the CW model to be simulated by means of a Markov chain Monte Carlo approach. It is relatively easy to compute the equilibrium value of the magnetization in the CW model. This is done by deriving the so-called self-consistency equation, which reads m=tanh(β m). This equation behaves differently depending on the value of the majority force β (which in the CW model is called the inverse temperature). For β<1 it only admits the solution m=0, while for β>1 m=0 ceases to be a stable solution and two new solutions m=± m^* appear. The point β=1 is a critical point characterized by a second-order phase transition. This means that as soon as β>1, m^* starts to grow gradually, approaching m^*=1 for large values of β. It is important to remark that in finite-size systems, the ability to reach consensus depends both on the value of m^* and on the fluctuations around this value. In general, order will emerge as soon as β>1, but the system may still not reach full consensus if the statistical fluctuations are too small. For this reason the condition β(N_c)=1 is actually an upper bound on the maximal group size where consensus can be reached, since for values of the majority force close to one, the fluctuations may still not be large enough for full consensus to be reached. chang2023booookscore Yapei Chang, Kyle Lo, Tanya Goyal, and Mohit Iyyer. Booookscore: A systematic exploration of book-length summarization in the era of LLMs. arXiv preprint arXiv:2310.00785, 2023. miah2024multimodal Md Saef Ullah Miah, Md Mohsin Kabir, Talha Bin Sarwar, Mejdl Safran, Sultan Alfarhood, and MF Mridha. A multimodal approach to cross-lingual sentiment analysis with ensemble of transformer and LLM. Scientific Reports, 14(1):9603, 2024. aroyehun2023leia Segun Taofeek Aroyehun, Lukas Malik, Hannah Metzler, Nikolas Haimerl, Anna Di Natale, and David Garcia.
Leia: Linguistic embeddings for the identification of affect. EPJ Data Science, 12(1):52, 2023. guo2024embodied Xudong Guo, Kaixuan Huang, Jiale Liu, Wenhui Fan, Natalia Vélez, Qingyun Wu, Huazheng Wang, Thomas L Griffiths, and Mengdi Wang. Embodied llm agents learn to cooperate in organized teams. arXiv preprint arXiv:2403.12482, 2024. liu2023dynamic Zijun Liu, Yanzhe Zhang, Peng Li, Yang Liu, and Diyi Yang. Dynamic llm-agent network: An llm-agent collaboration framework with agent team optimization. arXiv preprint arXiv:2310.02170, 2023. shen2024hugginggpt Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. Hugginggpt: Solving ai tasks with chatgpt and its friends in hugging face. Advances in Neural Information Processing Systems, 36, 2024. jiang2023llm Dongfu Jiang, Xiang Ren, and Bill Yuchen Lin. Llm-blender: Ensembling large language models with pairwise ranking and generative fusion. arXiv preprint arXiv:2306.02561, 2023. wu2023autogen Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Shaokun Zhang, Erkang Zhu, Beibin Li, Li Jiang, Xiaoyun Zhang, and Chi Wang. Autogen: Enabling next-gen llm applications via multi-agent conversation framework. arXiv preprint arXiv:2308.08155, 2023. johnson2013abrupt Neil Johnson, Guannan Zhao, Eric Hunsader, Hong Qi, Nicholas Johnson, Jing Meng, and Brian Tivnan. Abrupt rise of new machine ecology beyond human response time. Scientific reports, 3(1):2627, 2013. aher2023using Gati V Aher, Rosa I Arriaga, and Adam Tauman Kalai. Using large language models to simulate multiple humans and replicate human subject studies. In International Conference on Machine Learning, pages 337–371. PMLR, 2023. argyle2023out Lisa P Argyle, Ethan C Busby, Nancy Fulda, Joshua R Gubler, Christopher Rytting, and David Wingate. Out of one, many: Using language models to simulate human samples. Political Analysis, 31(3):337–351, 2023. dentella2023systematic Vittoria Dentella, Fritz Günther, and Evelina Leivada. Systematic testing of three language models reveals low language accuracy, absence of response stability, and a yes-response bias. Proceedings of the National Academy of Sciences, 120(51):e2309583120, 2023. binz2023using Marcel Binz and Eric Schulz. Using cognitive psychology to understand gpt-3. Proceedings of the National Academy of Sciences, 120(6):e2218523120, 2023. pellert2023ai Max Pellert, Clemens M Lechner, Claudia Wagner, Beatrice Rammstedt, and Markus Strohmaier. Ai psychometrics: Assessing the psychological profiles of large language models through psychometric inventories. Perspectives on Psychological Science, page 17456916231214460, 2023. strachan2024testing James WA Strachan, Dalila Albergo, Giulia Borghini, Oriana Pansardi, Eugenio Scaliti, Saurabh Gupta, Krati Saxena, Alessandro Rufo, Stefano Panzeri, Guido Manzi, et al. Testing theory of mind in large language models and humans. Nature Human Behaviour, pages 1–11, 2024. grossmann2023ai Igor Grossmann, Matthew Feinberg, Dawn C Parker, Nicholas A Christakis, Philip E Tetlock, and William A Cunningham. Ai and the transformation of social science research. Science, 380(6650):1108–1109, 2023. bail2024can Christopher A Bail. Can generative ai improve social science? Proceedings of the National Academy of Sciences, 121(21):e2314021121, 2024. de2023emergence Giordano De Marzo, Luciano Pietronero, and David Garcia. Emergence of scale-free networks in social interactions among large language models. arXiv preprint arXiv:2312.06619, 2023. 
papachristou2024network Marios Papachristou and Yuan Yuan. Network formation and dynamics among multi-llms. arXiv preprint arXiv:2402.10659, 2024. chang2024llms Serina Chang, Alicja Chaszczewicz, Emma Wang, Maya Josifovska, Emma Pierson, and Jure Leskovec. Llms generate structurally realistic social networks but overestimate political homophily. arXiv preprint arXiv:2408.16629, 2024. park2023generative Joon Sung Park, Joseph O'Brien, Carrie Jun Cai, Meredith Ringel Morris, Percy Liang, and Michael S Bernstein. Generative agents: Interactive simulacra of human behavior. In Proceedings of the 36th annual acm symposium on user interface software and technology, pages 1–22, 2023. chuang2023simulating Yun-Shiuan Chuang, Agam Goyal, Nikunj Harlalka, Siddharth Suresh, Robert Hawkins, Sijia Yang, Dhavan Shah, Junjie Hu, and Timothy T Rogers. Simulating opinion dynamics with networks of llm-based agents. arXiv preprint arXiv:2311.09618, 2023. tornberg2023simulating Petter Törnberg, Diliara Valeeva, Justus Uitermark, and Christopher Bail. Simulating social media using large language models to evaluate alternative news feed algorithms. arXiv preprint arXiv:2310.05984, 2023. park2022social Joon Sung Park, Lindsay Popowski, Carrie Cai, Meredith Ringel Morris, Percy Liang, and Michael S Bernstein. Social simulacra: Creating populated prototypes for social computing systems. In Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology, pages 1–18, 2022. dunbar1992neocortex Robin IM Dunbar. Neocortex size as a constraint on group size in primates. Journal of human evolution, 22(6):469–493, 1992. gonccalves2011modeling Bruno Gonçalves, Nicola Perra, and Alessandro Vespignani. Modeling users' activity on twitter networks: Validation of dunbar's number. PloS one, 6(8):e22656, 2011. dunbar2016online Robin IM Dunbar. Do online social media cut through the constraints that limit the size of offline social networks? Royal Society Open Science, 3(1):150292, 2016. dyer2009leadership John RG Dyer, Anders Johansson, Dirk Helbing, Iain D Couzin, and Jens Krause. Leadership, consensus decision making and collective behaviour in humans. Philosophical Transactions of the Royal Society B: Biological Sciences, 364(1518):781–789, 2009. baronchelli2018emergence Andrea Baronchelli. The emergence of consensus: a primer. Royal Society open science, 5(2):172189, 2018. castellano2009statistical Claudio Castellano, Santo Fortunato, and Vittorio Loreto. Statistical physics of social dynamics. Reviews of modern physics, 81(2):591–646, 2009. kochmanski2013curie Martin Kochmański, Tadeusz Paszkiewicz, and Sławomir Wolski. Curie–weiss magnet—a simple model of phase transition. European Journal of Physics, 34(6):1555, 2013. hendrycks2020measuring Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300, 2020. glauber1963time Roy J Glauber. Time-dependent statistics of the ising model. Journal of mathematical physics, 4(2):294–307, 1963. § BIAS REMOVAL As we mention in the main text, in order to remove opinion biases we have to shuffle the opinion names at each iteration. However, this only works when the initial bias is not extremely pronounced. We show in Fig. <ref> the adoption probability with and without shuffling for two opinion names combinations: “yes, no” and “k, z”. 
As can be seen, the former has a very strong bias toward “yes”, and as a result the shuffling procedure yields an adoption probability that is not described by a tanh function. On the other hand, “k, z” presents only a very mild bias, and the shuffling procedure removes it without altering the shape of the adoption probability. § ROLE OF OPINION NAMES To test the stability of our results, we investigate the shape of the adoption probability under different opinion names. We report in Fig. <ref> (top row) the results of this procedure for four possible combinations of opinion names and three different LLMs, representative of the three families of models we studied in this work. As can be seen, differences appear only in the case of Llama, and only for one of the opinion-name pairs we considered. In any case, the functional form of the adoption probability is always the same, and therefore the general picture is not affected by these minor variations. Moreover, the most advanced models, GPT-4 Turbo and Claude 3.5 Sonnet, show no differences at all, suggesting that as models become more capable, biases and differences due to the opinion names disappear. § MODEL TEMPERATURE Another aspect we tested is the effect of the model temperature T. This parameter sets the level of creativity or randomness of the LLM. For T=0 the model behaves deterministically, always outputting the token (word) with the highest probability. When T>0, randomness starts to play a role and other tokens can also appear in the output. As shown in Fig. <ref> (bottom row), we considered three different temperatures, T=0.2, 0.6, 1.0, and observed no substantial difference in the adoption probability. § ROLE OF CONTEXT WINDOW To understand whether the majority force parameter β is influenced by the language understanding and cognitive capabilities of the LLMs or rather by their context window length, we repeat the analysis of Fig. <ref> (left panel), but compare the majority force with the context window lengths of the ten models we analyzed. As shown in Fig. <ref>, the correlation is much weaker and less significant (0.49, p-value 0.15) than for the MMLU benchmark, suggesting that the context window length plays only a marginal role.
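Finally, for concreteness, the sketch below illustrates how the majority force is extracted in practice by fitting Eq. (<ref>) to adoption frequencies. Here the data are generated synthetically from a known β so that the fit can be validated; in the main text the frequencies come from the LLM simulations themselves (this code is an illustration we add here, not the original analysis script):

import numpy as np
from scipy.optimize import curve_fit

def adoption_probability(m, beta):
    return 0.5 * (np.tanh(beta * m) + 1.0)

rng = np.random.default_rng(3)
true_beta = 1.8
m_grid = np.linspace(-1.0, 1.0, 21)                 # binned average group opinion
n_trials = 400                                      # decisions observed per bin
adopted = rng.binomial(n_trials, adoption_probability(m_grid, true_beta))
freq = adopted / n_trials                           # empirical adoption frequency

(beta_hat,), cov = curve_fit(adoption_probability, m_grid, freq, p0=[1.0])
print(f"fitted beta = {beta_hat:.2f} (true {true_beta})")

The fitted β(N) curves obtained in this way are then compared with the critical value β_c=1 to locate the critical consensus size N_c.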
http://arxiv.org/abs/2409.02779v1
20240904145659
Governing dual-use technologies: Case studies of international security agreements and lessons for AI governance
[ "Akash R. Wasil", "Peter Barnett", "Michael Gerovitch", "Roman Hauksson", "Tom Reed", "Jack William Miller" ]
cs.CY
[ "cs.CY", "cs.AI" ]
Governing dual-use technologies: Case studies of international security agreements & lessons for AI governance Akash R. Wasil^1,2, Peter Barnett^3, Michael Gerovitch^2, Roman Hauksson^4, Tom Reed^2, Jack William Miller^2 ^1Georgetown University () ^2University of Cambridge, ERA AI Fellowship ^3Independent ^4University of Texas at Dallas § ABSTRACT [t]0.8 International AI governance agreements and institutions may play an important role in reducing global security risks from advanced AI. To inform the design of such agreements and institutions, we conducted case studies of historical and contemporary international security agreements. We focused specifically on those arrangements around dual-use technologies, examining agreements in nuclear security, chemical weapons, biosecurity, and export controls. For each agreement, we examined four key areas: (a) purpose, (b) core powers, (c) governance structure, and (d) instances of non-compliance. From these case studies, we extracted lessons for the design of international AI agreements and governance institutions. We discuss the importance of robust verification methods, strategies for balancing power between nations, mechanisms for adapting to rapid technological change, approaches to managing trade-offs between transparency and security, incentives for participation, and effective enforcement mechanisms. § EXECUTIVE SUMMARY We examine 5 international security agreements focusing on dual-use technologies and extract lessons for potential AI governance efforts. The following table summarizes the 5 case studies: [title=Case Study Summaries] IAEA (International Atomic Energy Agency) * Purpose: Promote peaceful nuclear energy, prevent nuclear proliferation, and verify compliance with the Treaty on the Non-proliferation of Nuclear Weapons. * Core Powers: Inspect facilities, verify non-proliferation, and report violations to the UN Security Council. * Governance: The Board of Governors has 35 members — 13 are pre-selected based on nuclear capability, and 22 are elected by the General Conference for two years. The Director General is chosen by the Board and approved by the General Conference. * Non-Compliance: Iran's illegal nuclear program was discovered in 2002, leading to sanctions and the 2015 Iran nuclear deal. The deal's effectiveness was questioned and undermined when the US withdrew. START Treaties * Purpose: Reduce US and Russian nuclear arsenals. * Core Powers: Allow on-site inspections, satellite monitoring, and data exchange. * Governance: Overseen by the Bilateral Consultative Commission (BCC) with US and Russian officials; decisions are made by consensus. * Non-Compliance: In 2023, Russia stopped allowing inspections. The Organisation for the Prohibition of Chemical Weapons (OPCW) * Purpose: Eliminate the production and use of chemical weapons and verify compliance with the Chemical Weapons Convention (CWC). * Core Powers: The OPCW conducts inspections and oversees the elimination of chemical destruction. * Governance: The Conference of the States Parties elects a 41-member Executive Council (EC) to the OPCW. The Director-General manages daily operations. * Non-Compliance: Syria used chemical weapons and failed to comply with OPCW inspections, leading to suspension from certain rights and privileges afforded by the CWC. Wassenaar Arrangement * Purpose: Promote transparency in the exports of arms and dual-use technologies & generate consensus. * Core Powers: Coordinate arms control policies and share best practices. 
* Governance: Decisions are made by consensus at annual meetings. Discussions are facilitated through working groups for technical matters and policy decisions. * Non-Compliance: Russia has been accused of undermining and violating the Wassenaar Arrangement by unilaterally blocking attempts to update export control lists. Biological Weapons Convention (BWC) * Purpose: Eliminate the production and use of biological weapons. * Core Powers: The BWC relies on commitments by member states and voluntary information-sharing mechanisms between nations (rather than a centralized body like the IAEA or the OPCW). * Governance: Decision-making is decentralized. States conduct Review Conferences to discuss new scientific developments. States address potential violations through consultation and cooperation; there is no formal process for inspections or verification. * Non-Compliance: The Soviet Union secretly developed biological weapons until exposed by defectors. The BWC failed to detect this violation, exposing potential weaknesses in the BWC relating to lack of verification and an over-reliance on self-reporting. Drawing from these case studies, we extracted lessons learned that could inform international AI governance agreements and institutions: [title=Key Findings and Lessons Learned] * Verification mechanisms are essential: Robust inspection and monitoring powers have been vital for institutions like the IAEA and OPCW, while the lack of verification in the BWC has undermined its effectiveness. Given AI's potential strategic value, effective verification will likely be a necessary foundation for any international agreement. * Governance structures attempt to distribute power based on geography and national capabilities: International agreements often provide extra power (e.g., some seats on the IAEA Board of Governors are reserved for nations with advanced nuclear programs) to nations with advanced capabilities while also attempting to ensure geographic representatives (e.g., some seats on the IAEA Board of Governors are reserved for nations from certain geographical regions). Future AI governance structures will need to consider how to appropriately represent global interests while acknowledging the roles of leading AI powers. * Agreements balance transparency and privacy. Greater transparency allows for more robust verification, though greater transparency could expose proprietary information or state secrets. The US rejected a proposed BWC verification method due to concerns about proprietary information. For AI agreements, the level of transparency nations are willing to allow will likely be proportional to the perceived risks. * Agreements must adapt to technological change: The NEW START treaty and the BWC have had to adapt to novel technologies like hypersonic weapons or advances in synthetic biology. Rapid technological advancement in AI may require governance structures that can adapt quickly. This emphasizes the importance of strong technical expertise within governing bodies to track breakthroughs, anticipate risks, and develop appropriate regulatory responses. * Agreements use benefit-sharing to incentivize participation. The OPCW provides member states with support for the peaceful uses of chemistry and the IAEA provides support for the peaceful use of nuclear energy. International AI agreements may incentivize participation by sharing safe and responsible applications of AI technology. 
§.§ Future work These findings underscore the complexity of developing effective international AI governance mechanisms. Some promising areas for future work include: * Verification methods. Developing robust, adaptable verification methods for AI development and deployment. * Core powers and decisions. Identifying key decisions that an international AI governance institution should make and how these decisions would be made. * Governance structures. Designing governance structures that balance global representation with the interests of leading AI nations. * Technical expertise. Identifying ways to incorporate strong technical expertise into governance bodies to keep pace with rapid AI advancements. * Non-compliance. Examining strategies that could be used to prevent non-compliance or react to confirmed instances of non-compliance with AI agreements. By drawing insights from case studies of international security agreements, we may be able to identify ways to make international AI agreements more effective and robust. § INTRODUCTION The development of advanced AI presents important global security risks. AI experts, policy experts, and world governments have acknowledged a variety of ways that advanced AI development could present major security challenges <cit.>. At the UK AI Safety Summit, over 20 nations (including the United States and China) acknowledged risks from the intentional misuse or unintentional misalignment of advanced AI systems. Furthermore, they acknowledged that “[m]any risks arising from AI are inherently international in nature, and so are best addressed through international cooperation” <cit.>. International coordination may have the potential to reduce global security risks. Thus far, international coordination has focused on defining risks and establishing voluntary standards for safe AI development. Many countries have launched AI Safety Institutes designed to measure risks from advanced AI, further the science of AI evaluations and risk management, share research about AI safety, and work toward a common understanding of risks and risk mitigation strategies <cit.>. The US AI Safety Institute announced that it plans to lead an “inclusive, international network on the science of AI safety” <cit.>, and international dialogues have emphasized the need for “coordinated global action on AI safety research and governance” <cit.>. Furthermore, while it is too early to tell how China will react to international coordination proposals, there are early signs that Chinese leaders acknowledge global security risks from advanced AI and could be interested in international approaches to AI governance <cit.>. Previous work has examined proposals for international AI agreements. Such agreements focus on various aims, including consensus-building, enforcement of regulations, emergency preparedness and response, and the shared distribution of benefits from AI <cit.>. Some scholars have described an international approach to advanced AI development, in which certain kinds of advanced AI development take place in a joint safety-focused AI project <cit.>. Another common proposal involves the creation of international institutions that set standards, monitor compliance with standards, detect unauthorized AI development, or certify national regulatory bodies <cit.>. International proposals for AI governance could be informed by existing international agreements and international institutions. 
There are many historical and modern case studies of international agreements or institutions that attempt to minimize global risks <cit.>. In this paper, we review international agreements which focus on dual-use technologies. These agreements are especially relevant to AI given its dual-use potential <cit.>. Our aim is to identify lessons learned that could inform the design of future international agreements or international institutions designed to reduce global risks from advanced AI. The paper is divided into two sections: * Case studies. We cover the following questions for each international agreement: * Purpose. What is the purpose of the agreement? * Core powers. What are the core powers granted to the international body responsible for monitoring and enforcement? * Governance structure. How is the agreement governed? If an institution is established, how is decision-making power allocated between member nations? * Case study of non-compliance. Are there any cases in which a nation was suspected of non-compliance with the international agreement? How did this situation get resolved? * Lessons learned. We summarize lessons learned that could be useful when thinking about international agreements related to the development or deployment of advanced AI. § CASE STUDIES OF INTERNATIONAL AGREEMENTS §.§ IAEA Purpose. The International Atomic Energy Agency (IAEA) was founded in 1957 to promote the peaceful use of nuclear energy and limit its use for military purposes <cit.>. The IAEA exists as an autonomous organization within the United Nations. In practice, one of its main purposes is to verify that states do not build nuclear weapons <cit.>. Core powers. The IAEA can conduct inspections to ensure that states are not secretly building nuclear facilities. Findings from inspections are reported to the IAEA Board of Directors. If the Board of Directors believes that a state is not complying with international agreements, the Board can escalate the issue to the UN Security Council <cit.>. Ultimately, the IAEA does not have the authority to take action directly — it simply provides information to the UN Security Council and member states <cit.>. The UN Security Council has the authority to impose sanctions[After the UN Security Council passes a resolution to issue a sanction, each member state is responsible for implementing the sanctions at the national level. Member states must report back to the UN about their compliance and enforcement measures.] (e.g., trade restrictions, travel bans, freezing assets) or engage in military action. Governance structure. The IAEA is composed of 178 member states <cit.>. The selection of the Board of Governors and the Director General are the most consequential parts of IAEA’s governance structure. The Board currently consists of 35 members; each member represents a different nation <cit.>. 13 of those spots are guaranteed to nations that have advanced atomic energy technology 22 are elected by the General Conference for a two-year term. The election attempts to ensure a balanced geographical representation from each region of the world. The Director General of the IAEA is selected by the Board. The Director General must receive a two-thirds majority from the Board, as well as approval from the General Conference <cit.>. 
The Director General is responsible for setting the IAEA’s strategic direction, overseeing the development and implementation of policies, managing the IAEA’s staff, providing reports to the Board of Generals and General Conference, and directing the IAEA’s emergency response protocols. The Director General appoints many of the senior-level staff of the IAEA and oversees the recruitment and training of IAEA inspectors <cit.>. Non-compliance. In the late 1990s and early 2000s, Iran began secretly investing in its nuclear program via the Amad Plan. In 2002, these efforts were publicly exposed by the National Council of Resistance of Iran (NCRI), a political organization that advocated for the collapse of the Islamic Republic and the establishment of a democratic, secular government <cit.>. After the NCRI revelations, the IAEA conducted inspections and found Iran to be in violation of its obligations under the Nuclear Non-Proliferation Treaty <cit.>. The IAEA reported Iran to the UN Security Council who then recommended the application of diplomatic pressure and economic sanctions <cit.>. This led to a series of sanctions by the UN Security Council and a series of international negotiations with Iran. In 2015, this resulted in the Joint Comprehensive Plan of Action (JCPOA) — an agreement between Iran and the P5+1 (US, UK, France, Russia, China, and Germany) <cit.>. The JCPOA (colloquially referred to as the “Iran Nuclear Deal”) required Iran to limit its investments in its nuclear program and allow extensive monitoring and inspections by the IAEA. In exchange, economic sanctions on Iran would be lifted <cit.>. The JCPOA went into effect in 2016, but the United States withdrew from the agreement in 2018 <cit.>. The Trump Administration argued that the JCPOA did not go far enough in restricting Iran’s nuclear program, did not restrict its ballistic missile program, and did not address its role in regional conflicts in the Middle East <cit.>. There were also concerns about the verification methods, with some critics of the deal arguing that Iran might still be able to secretly invest in its nuclear program despite the increased monitoring. Ultimately, the United States ended up reimposing sanctions on Iran. Iran initially continued to comply with the agreement (as verified by IAEA reports) <cit.>, but as the sanctions from the United States intensified, Iran began to reduce its compliance with the JCPOA commitments <cit.>. IAEA inspections continued to occur <cit.>, and the JCPOA did not fully dissolve– many nations (including Russia and China) have continued to support it, and there are ongoing efforts to bring the United States back into the deal. §.§ START Treaties Purpose. The START treaties, or Strategic Arms Reduction Treaties, were a series of bilateral agreements between the United States and the Soviet Union (later Russia) <cit.>. The primary goal of the various START treaties is to reduce the number of strategic nuclear weapons and their delivery systems possessed by the United States and Russia (previously the Soviet Union). The treaties include various verification and transparency mechanisms to provide assurance to each nation that the other is meeting its commitments. START I entered into force in 1994. START I was intended to be followed by START II, but START II never entered into force due to disagreements between the two nations. SORT (Strategic Offensive Reductions Treaty) successfully entered into force in 2003, and was superseded by New Start in 2011 <cit.>. Core powers. 
The original START I allowed nations to conduct inspections to ensure compliance, these included routine inspections and short-notice inspections of suspect activities <cit.>. The nations also monitored each other primarily using satellites and agreed not to interfere with each other’s monitoring. START I also allowed for continuous monitoring of certain facilities– inspectors from the other nation would maintain a 24/7 presence and inspect all vehicles and containers large enough to hold treaty-limited items. The current New START treaty allows for similar monitoring and verification by each nation. Unlike START I, New START does not include continuous on-site monitoring at missile production facilities, instead relying more on periodic inspections and data exchanges <cit.>. This may be due to continuous monitoring being expensive to maintain, and the greater level of trust between the U.S. and Russia. Governance structure. START I created the Joint Compliance and Inspection Commission (JCIC), which was the primary forum for addressing implementation issues and resolving questions related to compliance <cit.>. The JCIC was composed of representatives from both the US and the USSR (later Russia). These representatives included diplomats, military officials, and technical experts. The JCIC met regularly, typically several times a year, and could convene special sessions at the request of either party. Decisions were made by consensus, and both parties had to agree on resolutions and interpretations. Proceedings were confidential, and agreed outcomes were shared with the relevant agencies of both countries. Both countries also authorized national implementation bodies to oversee the implementation of START I. The US body was the On-Site Inspection Agency (OSIA), which was originally created in 1988 to implement the inspection regime for the Intermediate-Range Nuclear Forces Treaty (INF). The OSIA trained inspectors, organized inspections of Russian facilities, and escorted Russian inspectors around US facilities. In 1998 it was consolidated into the newly formed Defense Threat Reduction Agency, which continues to carry out similar duties for New Start. New START replaced the JCIC with the Bilateral Consultative Commission (BCC), which has a similar function and is composed of representatives from the US and Russia <cit.>. Decisions are again made by consensus. Non-compliance. In January 2023, the U.S. State Department reported to Congress that Russia was in non-compliance with the New START treaty. The report noted that Russia had refused to reschedule inspections after their COVID-19-related pause and had failed to meet for a session of the Bilateral Consultative Commission since October 2021. This marked the first formal U.S. accusation of Russian violation since the treaty's inception in 2011. (Previously, there had been disputes with START I, although these had been successfully resolved via diplomatic channels and the JCIC.) Following this report, in February 2023, Russia announced the suspension of its participation in the New START treaty. This action escalated the existing compliance issues, as Russia had already halted on-site inspections in August 2022. Russian President Vladimir Putin cited Western support for Ukraine as the primary reason for the suspension. The move raised concerns about the future of data exchanges and other verification measures required by the treaty. In response, the US revoked visas to Russian inspectors in June 2023. 
Despite the suspension, both countries stated they would continue to adhere to the numerical limits on nuclear warheads and delivery systems set by the treaty. However, the lack of verification measures complicated the ability to confirm compliance. As of 2024, Russia has rejected further talks, continuing to cite US support of Ukraine. Without further negotiations, New START (along with its verification measures) will expire in 2026. §.§ The Organisation for the Prohibition of Chemical Weapons (OPCW) Purpose. The Organisation for the Prohibition of Chemical Weapons (OPCW) is an autonomous international organization established to implement the Chemical Weapons Convention (CWC), a chemical weapons control treaty that went into force in 1997 <cit.>. The OPCW now oversees 193 member states that have signed the CWC. Under OPCW verification, members of the CWC are prohibited from developing, producing, stockpiling, transferring, or using chemical weapons (except for limited uses such as medical or research purposes) and are required to destroy any of their existing chemical weapons <cit.>. The OPCW was modeled after the International Atomic Energy Agency (IAEA) <cit.>. Core powers. The OPCW has the authority to send inspectors to any member state to search for evidence of the production of banned chemicals and verify compliance with the CWC. OPCW conducts both routine inspections as well as investigations into allegations of CWC violations through Fact-Finding Missions (FFM) and challenge inspections. The organization also oversees and verifies the destruction of chemical weapons stockpiles and production facilities declared by member states. If a Member State believes another state is non-compliant with the CWC, it can request the Executive Council (EC) to launch a challenge inspection. The inspection can be launched at short notice — within 12 hours of notification — and cannot be refused by the Member State. The Director-General formally issues the challenge inspection. If a State Party fails to address compliance issues, the Conference may restrict or suspend its rights under the Convention. The Conference may also recommend collective measures, such as sanctions. For serious violations, the Conference may recommend collective measures to States Parties or bring the issue to the UN Security Council. Another key power of the OPCW is to facilitate cooperation on safe chemistry research. While the CWC restricts the production of ‘dual-use’ chemicals, it encourages the peaceful uses of chemistry in industry, agriculture, and research purposes. The OPCW also offers training to specialists on practical aspects of chemical safety and provides forums to share and discuss best practices among State Parties <cit.>. Governance structure. The Conference of the States Parties (CSP) is the principal organ composed of all OPCW Member States. It meets annually to make key decisions, adopt the budget, and elect and direct the Executive Council (EC), and jointly appoint the Director-General with the EC. Also, the States Parties nominate a group of emergency response experts to be part of the Protection Network, who are called in to assist and protect against chemical weapons. The CWC mandates that each Member State establishes a National Authority to facilitate communication with the OPCW and ensure national compliance through appropriate legislation and enforcement measures. 
The Executive Council (EC), with 41 CSP-elected States Parties, oversees the day-to-day operation of OPCW, implements decisions of the CSP, and prepares recommendations for CSP. Under OPCW rules, the EC must have a fixed number of States Parties across geographic regions; a fixed subset of each region must be the States Parties with the most advanced national chemical industries in their region. Within the EC, approving decisions generally requires a two-thirds majority. The exceptions are questions of procedure (which require a majority vote) and decisions to stop a challenge inspection within a 12-hour review period (which require a three-quarters majority vote.) The Technical Secretariat, appointed and supervised by the Director-General, is responsible for the day-to-day operations of OPCW. If a State Party requests an inspection, the Director-General is responsible for notifying the CSP, including the challenged State Party, as well as the EC. The Director-General is also responsible for selecting the inspection team from a list of qualified experts. The Secretariat offers training for first responders, government experts, and emergency response units to support individual Member States in implementing the Convention <cit.>. Non-compliance. Syria, despite ratifying the Chemical Weapons Convention (CWC) in 2013, was found to continue using chemical weapons and failed to comply with OPCW investigations. In April 2021, the OPCW suspended Syria's rights and privileges in the organization. Yet, Syria's ongoing non-compliance, Russia's diplomatic protection and vetoes at the UN Security Council, and OPCW's limited enforcement mechanisms have resulted in the situation in Syria remaining largely unresolved, with accountability for chemical weapons use still elusive <cit.>. The United States determined that Russian forces had used chemical weapons against Ukrainian troops <cit.>. Russia lost its seat on the OPCW Executive Council during re-election and faced sanctions from the United States <cit.>. While the OPCW commits to providing assistance and protection to Ukraine, there was no official OPCW inspection into Russia’s alleged violations since the evidence presented was “insufficiently substantiated” <cit.>. §.§ Wassenaar Arrangement Purpose. The Wassenaar Arrangement (WA) was established in 1996 as a voluntary export control regime <cit.>. Its primary purpose is to promote transparency in exports of conventional arms (such as tanks and missiles) and dual-use technologies (such as radio equipment and lasers) and to prevent accumulations of these items from destabilizing international security <cit.>. It is the successor to the Coordinating Committee for Multilateral Export Controls (CoCom), a stricter agreement that was established during the Cold War and ceased operation in 1994 <cit.>. Core powers. The Wassenaar Arrangement establishes regular information exchange and policy coordination, but it does not set up formal inspection or enforcement powers <cit.>. During the annual Plenary Meeting, all member states send representatives to a headquarters building in Vienna, and they come to a consensus on two “control lists” – the Munitions List and the List of Dual-Use Goods and Technologies – which define items that they agree should be subject to export controls <cit.>. They also exchange information on transfers of these items and denials of certain export licenses. 
The arrangement also maintains “best practices” documents on topics such as effective legislation on arms brokering and internal compliance programs for dual-use goods and technologies <cit.>. Each state retains its autonomy regarding whether or how it chooses to implement these practices — as a result, each state's implementation of export controls differs in practice. Governance structure. All decisions, including the addition of new members and the election of chairs, are made by consensus <cit.>. The agreement is open to adding new members if they produce or export relevant goods, they maintain membership in other non-proliferation agreements, and they maintain fully effective export controls <cit.>. The Plenary Committee is made up of representatives from all participating states and is headed by the Plenary Chair, who facilitates discussions at the annual Plenary Meeting in the headquarters building in Vienna. During the meeting, representatives provide information on transfers, revise best practices documents and control list entries, establish subsidiary “working groups” to help make recommendations for decisions, and decide on which state should be the Plenary Chair for the following year and who should lead each working group <cit.>. The main working groups are currently the General Working Group (GWG), which deals with policy-related matters, and the Experts Group (EG), which addresses control lists. In addition, the Licensing and Enforcement Officers Meeting (LEOM) is held once per year, and a small group called the Secretariat provides administrative support and maintains the headquarters building. Non-compliance. Since Russia's invasion of Ukraine, some experts on non-proliferation have questioned whether Russia can remain in export control arrangements such as the Wassenaar arrangement <cit.>. While their membership provides the international community some insight into their export activities, Russia has used its position to impede efforts to update control lists <cit.> forcing other nations to implement ad hoc export controls outside the arrangement's framework <cit.>. Moreover, Russia exports parts used in weapons manufacturing to unstable regions, specifically North Korea <cit.>. Some view Russia's involvement in Wassenaar as an opportunity for intelligence gathering instead of a genuine attempt to further the goals of the arrangement <cit.>. Even if there were significant sentiment in favor of removing Russia from the Wassenaar arrangement, this wouldn't be feasible because some countries would oppose it — all decisions are made by consensus — and there is no formal expulsion mechanism. §.§ Biological Weapons Convention Purpose. The Biological Weapons Convention (BWC) prohibits the development, production, acquisition, transfer, stockpiling, and use of biological and toxin weapons <cit.>. It was the first multilateral disarmament treaty to ban an entire category of weapons of mass destruction. The BWC was opened for signature on April 10, 1972, and entered into force on March 26, 1975. It built upon the 1925 Geneva Protocol, which had only prohibited the use of biological weapons in war. The BWC currently has 187 state parties and four signatory states. The treaty consists of 15 articles. Review Conferences are held every five years to assess and strengthen the Convention's implementation. Core powers. The BWC's core powers are primarily based on commitments by member states and information-sharing mechanisms, rather than a strong centralized authority. 
The Convention requires states to implement its provisions through national legislation and regulations. This includes prohibiting the development, production, and stockpiling of biological weapons, as well as destroying or diverting existing stockpiles to peaceful purposes. A key feature of the BWC is its system of Confidence Building Measures (CBMs), introduced in 1987. States parties are required to submit annual CBM reports by April 15th each year. These reports include information on research centers and laboratories, state biodefense programs, outbreaks of infectious diseases, relevant scientific publications, and vaccine production facilities. The CBMs aim to increase transparency and reduce suspicion among member states. The BWC's implementation is supported by regular meetings. Review Conferences are held every five years to assess the Convention's effectiveness and consider new challenges. Since 2002, annual Meetings of States Parties and Meetings of Experts have been held to discuss specific topics related to the BWC's implementation. Importantly, the Convention lacks a formal verification mechanism for compliance. In 1991, an ad-hoc group of experts (VEREX) was established to “identify and examine potential verification measures from a scientific and technical standpoint” <cit.>. Ultimately, however, these efforts were unsuccessful, and the US rejected VEREX’s proposed protocol in 2001. Notably, the US’s rejection was grounded in the concern that the new proposals would not provide sufficient measures to effectively verify compliance with the protocol <cit.>. In the words of US Ambassador Mahley, the proposals would “still permit a potential proliferator to conceal significant efforts in legitimately undeclared facilities” <cit.>. Governance structure. The BWC's governance structure is relatively decentralized, relying primarily on the collective action of its member states rather than a strong central authority. The key elements of its governance include: * States Parties: All countries that have ratified the treaty. They are responsible for implementing the Convention's provisions through national legislation and regulations. * Review Conferences: Held every five years, these conferences assess the operation of the Convention, consider new scientific and technological developments, and make decisions on further measures. The most recent was the Ninth Review Conference in November 2022. * Meetings of States Parties and Meetings of Experts: Since 2002, these annual meetings have been held between Review Conferences to discuss specific topics related to the BWC's implementation. * Implementation Support Unit (ISU): Established in 2006, this small unit of three full-time staff provides administrative support to States Parties, particularly in managing the Confidence Building Measures (CBMs) process. * Depositary Governments: The US, UK, and Russia serve as depositary governments, responsible for certain administrative functions. * United Nations Office for Disarmament Affairs (UNODA): Provides institutional support for the BWC, including hosting the ISU. The BWC does not have a formal international organization to oversee its implementation, unlike some other arms control treaties. Instead, it relies on the collective efforts of States Parties to monitor compliance and address concerns through consultation and cooperation. Non-compliance. The Soviet Union’s Biopreparat program, operating from 1973 to 1991, is the most significant case of non-compliance with the BWC <cit.>. 
Despite being a signatory to the Convention, the Soviet Union established and maintained a large covert biological weapons program. Biopreparat employed over 50,000 people across various research and production facilities. They produced and stockpiled enormous quantities of deadly pathogens, including anthrax bacilli and smallpox virus. Some of these agents were even prepared for deployment via intercontinental ballistic missiles, demonstrating the program's integration with the Soviet strategic weapons complex <cit.>. The Biopreparat program remained secret for many years. Its existence only came to light after the defection of Vladimir Pasechnik to the UK in 1989, followed by Ken Alibek to the US in 1992 <cit.>. These defectors revealed the program's vast scope to Western intelligence agencies. This case highlights the potential for large-scale violations of the BWC to go undetected, especially in the absence of robust verification mechanisms. The BWC failed to detect this large-scale violation because of its lack of any robust verification measures and its reliance on self-reporting <cit.>. § LESSONS LEARNED In this paper, we reviewed international security agreements in an effort to understand how they are governed, what powers they possess, and how they handle issues of non-compliance. We reviewed agreements in nuclear security, chemical weapons security, biological weapons, and export controls. Some of these agreements involved the establishment of international institutions for monitoring and verification, while others relied on voluntary compliance between nations. Below, we discuss a few themes and lessons learned that could be useful for discussions about international AI governance. Verification mechanisms are essential to assess compliance with international agreements. The importance of robust verification mechanisms is highlighted by several case studies. The IAEA’s inspection powers for nuclear facilities, the OPCW’s authority to conduct challenge inspections for chemical weapons, and the on-site inspections or continuous monitoring in START treaties have played crucial roles in their effectiveness. In contrast, the lack of a formal verification protocol in the BWC permitted cases of non-compliance and undermined efforts to strengthen the treaty. International AI agreements will require robust verification methods to detect non-compliance. Strategies like on-site inspections, challenge inspections, and continuous monitoring could help ensure the robustness of verification regimes (see <cit.>). Governance structures attempt to balance power between nations based on geography and geopolitical importance. The international institutions we reviewed often had permanent or fixed seats for geopolitically powerful or technologically advanced nations, while also reserving a certain number of seats for countries from various regions around the world. For example, both the IAEA and the OPCW ensure representation in their membership whilst reserving additional or permanent powers to countries with advanced capabilities in the relevant fields. Bodies that rely strongly on consensus-based decision-making can lead to gridlock and limit an agreement’s adaptability, as in the Wassenaar Arrangement. Bilateral agreements (like the START treaties) can avoid gridlock but may break down in response to geopolitical events (such as Russia’s invasion of Ukraine). 
Furthermore, the willingness of key nations to follow through on commitments can be critical to the success of agreements, as illustrated by the United States' withdrawal from the JCPOA. When considering AI agreements, it is important to consider the role of the United States and China, the world's two leading AI powers. A bilateral agreement could attempt to draw from some of the promising provisions of the START treaties. A broader international agreement could attempt to ensure global representation while still preserving permanent seats or extra decision-making power for nations that lead in AI expertise. Agreements require striking a balance between transparency and privacy. Balancing the need for transparency with protecting legitimate state and commercial interests can be challenging <cit.>. This was evident in the U.S. rejection of the BWC verification protocol due to concerns about proprietary information <cit.> and in the Wassenaar Arrangement's difficulties in agreeing on controlled items. For AI agreements, ensuring compliance with international standards or safety practices may require verification methods that promote high levels of transparency (e.g., on-site inspections, access to code, access to data centers) <cit.>. Depending on the perceived level of danger, nations may be more or less willing to tolerate invasive monitoring and verification methods. Furthermore, nations may justifiably want to secure sensitive or dangerous material, such as the model weights of highly dangerous systems or details about certain kinds of algorithmic insights. It will be important to identify verification methods that balance the need for transparency with other needs, such as security and national interests. Such verification methods could involve some techniques that are already standard for verifying compliance with international agreements, as well as novel approaches in which advanced hardware automatically alerts an international authority if it detects unauthorized code or unauthorized networking patterns <cit.>. International institutions must adapt to rapid technological change. The rapid pace of technological advancement poses challenges for international agreements. Both the New START treaty and the BWC have struggled to deal with developing technologies (hypersonic weapons and advances in synthetic biology, respectively) <cit.>. International AI agreements will also have to adapt to technical breakthroughs. Examples include novel AI capabilities, breakthroughs that make it easier for actors to develop dangerous AI, breakthroughs that make it easier to monitor compliance with international agreements, and advances that could allow AI itself to be incorporated into verification schemes. It will be essential for international AI governance institutions to possess strong technical expertise in order to track and incorporate such breakthroughs. The technical staff could play essential roles, such as interpreting model evaluations <cit.>, evaluating affirmative safety cases <cit.>, conducting interviews with technical experts to predict technical advances and anticipate security risks <cit.>, and identifying novel ways to detect non-compliance with international agreements (e.g., <cit.>).
To acquire and retain such technical talent, international AI governance institutions may need to play more than merely a “policing” function– technical talent may be more attracted to projects that have a positive, inspiring, and innovative vision.[A related point was raised in the Acheson-Lilienthal Report, when the United States was considering establishing an international body to promote nuclear security. “The difficulty of recruiting enforcement officers having only a negative and policing function, one of prohibiting, detecting, and suppressing, is obvious. Such a job lacks any dynamic qualities. It does not appeal to the imagination. Its future opportunities are obviously circumscribed. It might draw the kind of man, let us say, who was attracted to prohibition squads in years past. Compare this type of personnel with those who could be expected to enter a system under which it is clear that the constructive possibilities of atomic energy may also be developed. Atomic energy then becomes a new and creative field in which men may take pride as participants, whatever their particular role” (Acheson-Lilienthal Report, 1946).] Agreements use benefit-sharing to incentivize participation. Agreements often offer clear benefits to participating states in order to give them an incentive to join or remain <cit.>. Examples include the OPCW's support for peaceful uses of chemistry and the IAEA's promotion of peaceful nuclear energy. Future AI governance structures could consider incorporating mechanisms that promote beneficial AI research and development while mitigating risks. Enforcement is challenging and may rely on other national or international institutions. Many international bodies lack direct enforcement powers. The IAEA and OPCW can only report violations to the UN Security Council, which then decides on enforcement actions. Reliance on external bodies for enforcement can lead to political deadlocks, as seen in the case of Syria's chemical weapons use. It will be important to consider what powers ought to be granted to a potential international AI governance institution. Example questions include: (a) what actions should it be allowed to take on its own, (b) to what extent will it rely on the UN Security Council to take actions, and (c) to what extent will it rely on individual member nations.
http://arxiv.org/abs/2409.02773v1
20240904145252
A generalization of K-theory to operator systems
[ "Walter D. van Suijlekom" ]
math.OA
[ "math.OA", "math.FA", "math.KT" ]
Institute for Mathematics, Astrophysics and Particle Physics, Radboud University Nijmegen, Heyendaalseweg 135, 6525 AJ Nijmegen, The Netherlands. [email protected] A generalization of K-theory to operator systems Walter D. van Suijlekom September 4, 2024 ================================================ § ABSTRACT We propose a generalization of K-theory to operator systems. Motivated by spectral truncations of noncommutative spaces described by C^*-algebras and inspired by the realization of the K-theory of a C^*-algebra as the Witt group of hermitian forms, we introduce new operator system invariants indexed by the corresponding matrix size. A direct system is constructed whose direct limit possesses a semigroup structure, and we define the K_0-group as the corresponding Grothendieck group. This is an invariant of unital operator systems, and, more generally, an invariant up to Morita equivalence of operator systems. For C^*-algebras it reduces to the usual definition. We illustrate our invariant by means of the spectral localizer. § INTRODUCTION The last few years have seen many new interactions between noncommutative geometry and operator theory. In particular, the development of noncommutative geometry to describe spectral truncations <cit.>, the corresponding spectral localizer <cit.> (cf. <cit.> and references therein), and spaces up to tolerance relations <cit.> has led to the development of many interesting new structures and applications in operator system theory. These include operator system duality <cit.>, noncommutative convex geometry <cit.>, quantum metric spaces <cit.>, and the notion of Morita equivalence for operator systems <cit.>. A key ingredient for many applications in noncommutative geometry based on C^*-algebras is K-theory. Given the above fruitful interactions based on the replacement of C^*-algebras by operator systems, one is naturally led to the question of whether there is an analogue of K-theory for operator systems. We address this question in the present paper. On our wish list for a candidate K-theory we have put the following requirements: * it should capture the spectral localizer <cit.> for spectral truncations as a pairing with K-theory; * it should capture compressions of the element Y describing quanta of geometry in <cit.>; * it should be invariant under Morita equivalence <cit.>; * it should be a refinement of K-theory for C^*-algebras, taking the operator system structure into account. Some results in the direction of K-theory for operator systems already exist in the literature. For instance, projections in operator systems were defined in <cit.>. However, it is not difficult to see that spectral compressions PpP of a projection p in a C^*-algebra are in general not projections. As a matter of fact, in general they are not even ϵ-projections in the sense of <cit.>, so that quantitative K-theory may also not be suitable for these kinds of applications (instead, quantitative K-theory is more fitting for the examples coming from tolerance relations).
The K-theory for absolute matrix order unit spaces of <cit.> needs an additional structure (the absolute value map) on the operator system which we would like to avoid. The aforementioned applications based on the spectral localizer suggest that one should look for invertible, or non-degenerate, elements instead. In fact, this is nicely aligned with the elegant description of C^*-algebraic K-theory K_0(A) as the Witt group of hermitian forms over A (see <cit.>). Indeed, these are given by invertible self-adjoint elements in M_n(A). It is precisely the matrix structure of the operator system that allows us to formulate such a notion of hermitian forms for operator systems as well. Based on this, we will develop and analyze K-theoretic invariants for operator systems. This paper is structured as follows. After some brief background on operator systems and unital completely positive (ucp) maps, we introduce in Section <ref> a notion of hermitian forms in a unital operator system E. These are given by non-degenerate self-adjoint elements x ∈ M_n(E). A natural notion of homotopy equivalence allows us to introduce sets (E,n) of equivalence classes up to homotopy, which are labeled by the matrix size n. We will show that they are invariants for the operator system structure and compatible with the notion of direct sum. A semigroup of hermitian forms then appears when we introduce maps (E,n) → (E,n+1) and consider the direct limit (E,n). The K_0-group of the operator system is then defined to be the Grothendieck group of this semigroup. We show that it coincides with C^*-algebraic K-theory in case the unital operator system is a unital C^*-algebra, and that it is stable under taking matrix amplifications. It is not to be expected that this notion of K_0-theory behaves nicely with respect to ucp maps, as can be seen already in the simple case of a ucp map given by compression by a projection. Nevertheless, for some maps —including complete order isomorphisms— we have induced maps between the corresponding K_0-groups. In Section <ref> we consider the extension of K-theory to non-unital operator systems in the sense of <cit.>. This allows us to formulate and prove the stability of K_0-theory, which, as a consequence of <cit.>, then shows invariance of K_0 under Morita equivalence. In Section <ref> we return to our initial motivation and formulate the spectral localizer <cit.> in terms of the K-theory groups for the spectral truncations. An illustrative example is given by spectral compressions of projections on the torus. §.§ Acknowledgements I would like to thank Alain Connes, Jens Kaad, Matthew Kennedy, Marc Rieffel, and Steffen Sagave for fruitful discussions and suggestions. I thank Malte Leimbach for a very careful proofreading of a draft of this paper. § BACKGROUND ON OPERATOR SYSTEMS We start by briefly recalling the theory of operator systems, referring to <cit.> for more details. A unital operator system (E,e) is a matrix-ordered *-vector space E, equipped with an Archimedean order unit e. A map ϕ: E → F between operator systems determines a family of maps ϕ^(n) : M_n(E) → M_n(F) given by ϕ^(n)([x_ij]) = [ϕ(x_ij)]. A map ϕ: E → F between unital operator systems is called completely positive if each ϕ^(n) is positive (n ≥ 1). We also abbreviate completely positive by cp, and unital completely positive by ucp. A dilation of a ucp map ϕ: E → B(ℋ) of a unital operator system is a ucp map ψ: E → B(𝒦), where 𝒦 is a Hilbert space containing ℋ, such that P_ℋ ψ(x)|_ℋ = ϕ(x) for all x ∈ E.
The ucp mapϕis called maximal if every dilation ofϕis obtained by attaching a direct summand. A non-zero cp mapϕ:E →B()̋is said to be pure if the only cp maps satisfying0 ≤ψ≤ϕare scalar multiples ofϕ. We may viewEas a concrete operator system in theC^*-algebraC^*(E)it generates; in this case, we say that a ucp mapϕ: E →B()̋has the unique extension property if it has a unique ucp extension toC^*(E)which is a*-representation. If, in addition, the*-representation is irreducible, it is called a boundary representation<cit.>. The following result is well-known in the literature <cit.> but for completeness we include a proof. Let ϕ: E → B()̋ be a ucp map. Then * ϕ is maximal if and only if it has the unique extension property. * ϕ is pure and maximal if and only if it is a boundary representation. (1) is <cit.>. (2) If the extensionϕ̃: C^*(E) →B()̋ofϕis reducible, then there exists a non-trivial projectionP ∈ϕ̃(C^*(E))' . Then the mapψ:E →B()̋defined byψ(x) = P ϕ̃(x)is cp and0 ≤ψ≤ϕ. Butψ(1) = Pwhileϕ(1) = 𝕀_$̋ so that ψ is not a scalar multiple of ϕ; hence ϕ is not pure. For the other implication, suppose that ϕ is a boundary representation and take a cp map ψ such that 0 ≤ψ≤ϕ. By Arveson's extension theorem, we have cp maps ψ̃ and ϕ-ψ from C^*(E) to B()̋. Since ψ̃+ ϕ-ψ extends ϕ it follows from the unique extension property of ϕ that ϕ-ψ =ϕ̃- ψ̃. Consequently, ψ̃≤ϕ̃ so that by <cit.> there exists an operator T ∈ϕ̃(C^*(E))' such that ψ̃(a) = T ϕ̃(a) ; ∀ a∈ C^*(E). Since ϕ̃ is irreducible, it follows that T = t ·𝕀_$̋ for somet ∈[0,1]so thatψ̃= t ϕ̃and, consequently, alsoψ= t ϕ. Henceϕis pure. § K-THEORY FOR UNITAL OPERATOR SYSTEMS Even though in an operator system we cannot speak about invertible elements, we may use the pure and maximal ucp maps to introduce the following notion of nondegeneracy. Let (E,e) be a unital operator system. A self-adjoint element x=x^* ∈ M_n(E) is called a hermitian form if it is non-degenerate in the sense that there exists g>0 such that for all pure and maximal ucp maps ϕ: E → B()̋ we have |ϕ^(n)(x)| ≥ g ·𝕀_^̋⊕ n The smallest real number g>0 such that (<ref>) holds is called the gap of x. We denote the set of hermitian forms contained inM_n(E)byH(E,n). The following result will be of crucial importance to us, as it makes it feasible to check the non-degeneracy condition in concrete cases: A self-adjoint element x ∈ M_n(E) is a hermitian form if and only if _E^(n)(x) is an invertible element in the C^*-envelope C^*_(E). Moreover, x has gap g>0 if and only if |_E^(n)(x)| ≥ g · 1^⊕ n_C^*_(E). In <cit.> theC^*-envelope ofEis constructed as the direct sum of all boundary representations(_̋σ, σ):_E : E →⊕_σ B( _̋σ ).Now ifx ∈Ethen_E(x)is invertible if and only if| _E(x)|≥g ·𝕀_⊕_σ(_̋σ), which holds if and only if|σ(x)| ≥g ·𝕀__̋σfor all boundary representationsσ. A similar statement holds ifx ∈M_n(E). Since by Proposition <ref> a ucp map is a boundary representation if and only if it is pure and maximal, the result follows. A trivial example of a hermitian form is the order unitx = e(with gapg =1). Other examples that in fact motivated the above definition are * (Hermitian forms and Witt groups) The relation between the K-theory and the Witt group for rings <cit.> and (unital) C^*-algebras <cit.> stresses the role played by hermitian forms. In fact, any hermitian form on a finitely generated projective module of the form e A^n is described by a self-adjoint element x ∈ e M_n(A)e. 
The fact that the form is non-degenerate translates to invertibility of the hermitian element h = x+ (1-e) ∈ M_n(A), so that h^2 ≥ g^2 · 1_A for some g>0. But then x^2 = (ehe)^2 = eh^2 e ≥ g^2 · e, since h and e commute. We find that x is a hermitian form in the operator system (eM_n(A)e,e) with gap g (note that since eM_n(A)e is a C^*-algebra, it coincides with its C^*-envelope). This should also explain the above terminology. Note that in this case one may just as well consider the invertible element x+ (1-e) as a hermitian form in M_n(A). * (Almost projections and quantitative K-theory) In <cit.> quantitative (aka controlled) K-theory K_0^ϵ,r(A) of a filtered C^*-algebra A = (A_r) was defined in terms of ϵ-r-projections, i.e. elements p ∈ A_r such that p^2 - p < ϵ where ϵ < 1/4. But then x= 1_A -2p is a hermitian form in the unital operator system (A_r,1_A) with gap g = √(1-4 ϵ) >0 since x^2 = 1_A - 4 (p-p^2) ≥ (1-4 ϵ )· 1_A * (Projections in operator systems) In <cit.> projections in unital operator systems are abstractly defined. These then turn out to be precisely those elements p ∈ E such that _E(p) is a projection in the C^*-envelope. If we set x= e-2p we find that _E(x)^2 =e so that again such projections define hermitian forms (with gap 1). * (The even spectral localizer) The even spectral localizer <cit.> is defined as a spectral compression x = P H P of an invertible self-adjoint element H in a C^*-algebra A ⊆ B()̋ by a projection in $̋. If the spectrum ofHdoes not intersect with the interval[-g,g]then we compute for the operator system(PAP,P)that x^2 = P H P P H P =P H^2P + P H[P,H] P = P H^2P + P [P,H][P,H] P Hence if we setδ = [P,H]we find thatx^2 ≥ (g^2 -δ^2) Pin theC^*-extensionC^*(PAP) ⊆ B()̋. By the universal property of theC^*-envelope, there is a surjective*-homomorphismρ: C^*(PAP) → C^*_(PAP)which ensures that_E(x)^2 ≥ (g^2 -δ^2) Pso thatxis a hermitian form with gap√(g^2- δ^2). * (Spectral truncations of quanta of geometry) As a special case of an invertible self-adjoint element we may also consider the operatorY ∈ Adescribing the so-called quanta of geometry in <cit.>. It satisfiesY^2 = 1so thatY has spectrum contained in{ -1,1}. IfPis a projection, we find that the compressionsPYPdefine hermitian forms with gap√(1-δ^2)providedδ = [P,Y]<1. Let us continue the general treatment of hermitian forms in operator systems, and derive the following rigidity result. Let x be a hermitian form with gap g>0. If y =y^* ∈ M_n(E) with x - y≤ϵ for some ϵ < g^2/2 x, then y is a hermitian form with gap √(g^2 - 2 ϵ x). The norm estimate implies that-2 ϵ x e ≤_E (x)_E (y-x) + _E(y-x)_E( x) ≤ 2 ϵ xeHence by writingy = x + (y-x)and noting that_E (y-x)^2 ≥ 0we find that _E(y)^2 = _E (x)^2 + _E (x)_E (y-x) + _E(y-x)_E( x) + _E (y-x)^2 ≥ ( g^2 - 2 ϵ x )e. We end this subsection by another standard operation of hermitian forms, which is their direct sum. Let x ∈ H(E,n) and x' ∈ H(E,n') be hermitian forms with respective gaps g,g'. Their direct sum is the hermitian form given by x ⊕ x' = [ x 0; 0 x' ]∈ H(E,n+n') which has gap equal to min{ g,g'}. §.§ Homotopy equivalence of hermitian forms One of the defining properties of an operator system is its matrix-order structure. This motivates us to consider homotopies of hermitian forms inH(E,n)for a fixed matrix sizen ≥ 0, as follows: Let x,x' ∈ H(E,n) be hermitian forms. 
We say that x ∼_n x' if there exists a hermitian form x̃∈ H(C([0,1]) ⊗ E),n) such that x̃(0) = x ; x̃(1) = x' We denote the equivalence class of a hermitian form x ∈ H(E,n) by [x]_n, and the set of equivalence classes of hermitian forms in H(E,n) by (E,n), or, equivalently, (E,n) = π_0 (H(E,n)). This should be confronted with the usual notion of homotopy equivalence in, say, K-theory forC^*-algebras, wherenis allowed to vary (see also Section <ref> below). The following is immediate: Let E and F be unital operator systems. If ϕ:E → F is a ucp map for which there exists a *-homomorphism ϕ̃:C^*_(E) → C^*_(F) that makes the following diagram commute, E [r]^ϕ[d]__E F [d]__F C^*_(E) [r]^ϕ̃ C^*_(F) then there is an induced map ϕ^*: (E,n) →(F,n) defined by ϕ^* ([x]_n) = [ϕ^(n)(x)]_n. If E and F are completely order isomorphic then (E,n) ≅(F,n) for all n ≥ 0. Consider E=. Then (,n) is the set of homotopy equivalence classes of invertible hermitian n × n matrices. Any such matrix x can be diagonalized with a unitary matrix, and since the unitary group U(n) is connected, there is a homotopy between x and the corresponding diagonal matrix. In turn, this diagonal matrix is homotopic (in the space of invertible matrices) to the corresponding signature matrix, which is unique up to ordering. In other words, (,n) can be parametrized by the signature s of the hermitian forms , yielding an isomorphism (,n) ≅{ -n , -n+2 …, n-2, n } Note that the direct sum of two hermitian forms translates to the addition of the corresponding signatures: (s,s')↦ s+s'. Anticipating the discussion in Section <ref> there are maps _nm : (,n) →(,m) for n ≤ m given by _nm ([x]_n) = [ x ⊕ e_m-n ]_m. In terms of the signature, we find that _nm (s) = s + m-n. Moreover, there are commuting diagrams for any m ≥ n ≥ 1: (,n) [rd]_ρ_n[rr]^_nm (,m) [ld]^ρ_m where ρ_n([x]) is defined to be the so-called negative index of inertia of x, i.e. the number of negative eigenvalues of the hermitian form x. When expressed in terms of the signature s of x, we have ρ_n(s) =1/2 (n-s) from which commutativity of the diagram follows at once. This suggests that is the direct limit (,n); we will come back to this soon (in Section <ref>). Let E and F be unital operator systems. Then for any n we have (E⊕ F,n) ≅(E,n) ×(F,n). Let(x,y)∈ M_n(E) ⊕ M_n(F)be a hermitian form; this has gapgiff_E ⊕ F(x,y)^2 ≥ g^2 · (e_E, e_F) _E (x)^2 ≥ g^2 e_E and _E (y)^2 ≥ g^2 e_F.Moreover,(x,y) ∼ (x',y')iffx ∼ x'andy ∼ y'. From this the stated isomorphism follows. The following result is a useful computational tool for(E,n). Recall the multiplierC^*-algebraA_Eof the operator systemE:A_E := { a ∈ C^*_(E): a E ⊂ E }Let u be a unitary in M_n(A_E), and let x ∈ M_n(E) be a hermitian form with gap g. Then u x u^* is a hermitian form with gap g, and u x u^* ⊕ e_n is equivalent to x ⊕ e_n as hermitian forms in H(E,2n). Clearly,_E(u x u^*)^2 = u _E (x)^2 u^* ≥ g^2 e. For the homotopy equivalence, recall that by Whitehead's Lemma there exists a continuous path of unitariesw_t ∈ M_2n(A_E)such thatw_0 = [ 1 0; 0 1 ]; w_1 = [ u 0; 0 u^* ].But thenw_t ( x ⊕ e_n) w_t^*is a self-adjoint element inC[0,1]⊗ M_2n(E), also with gapg, establishing an equivalence betweenx ⊕ e_nandu x u^* ⊕ e_n, as desired. Let x,y ∈ H(E,n) be hermitian forms both with gap g for which x - y < ϵ such that ϵ < 4 g^2. Then there is a homotopy x̃ of hermitian forms in C([0,1])⊗ E with gap √(g^2 - ϵ^2/4) such that x̃(0) = x and x̃(1)=y. 
We claim that a homotopy of hermitian forms with the indicated gap is given byx̃(t) = t x + (1-t)y. It follows from x - y < ϵthat (_E(x-y))^2 ≤ϵ^2 e, which implies that_E(x) _E(y)+_E(y) _E(x) ≥_E(x)^2 + _E(y)^2 - ϵ^2 e.We will use this to find the gap ofx̃as follows: _C([0,1])⊗ E( x̃)^2 = t^2 _E( x)^2 + (1-t)^2 _E(y)^2 + t (1-t) (_E(x) _E(y)+_E(y) _E(x)) ≥ (t^2 g^2 + (1-t)^2 g^2 + t(1-t)(2 g^2 - ϵ^2 ))e ≥ (g^2 - ϵ^2/4)e. Let x ∈ H(E,n) be a hermitian form with gap g and suppose that y =y ^*∈ M_n(E) is such that x - y < ϵ for some ϵ >0 such that g^2 > 2 ϵx + ϵ^2/4. Then there is a homotopy between x and y given by hermitian forms with gap √(g^2- 2 ϵ x - ϵ^2/4). From Lemma <ref> it follows that the elementyis a hermitian form with gap√(g^2-2ϵ x )and the same applies tox. By Proposition <ref> it now follows that there is homotopy betweenxandyof hermitian forms with gap√(g^2-2ϵ x - ϵ^2/4). §.§ Semigroup of hermitian forms and K_0-group In the above, we have stressed the role of the matrix sizen ≥ 0for the invariants(E,n). However, in order to prepare for a comparison withK-theory forC^*-algebras we will need the limit object(E) := (E,n)that we will now construct. Given a unital operator system(E,e)we consider the direct system of sets((E,n), _nm)where form ≥ n_nm: (E,n) →(E,m) [x]_n ↦ [x ⊕ e_m-n]_m, We denote the direct limit of the direct system (<ref>) by(E,n). A more explicit description is given as follows: forx ∈ H(E,n)andx' ∈ H(E,n')we writex ∼ x'if there exists ak ≥ n,n'such that x ⊕ e_k-n∼_k x'⊕ e_k-n'inH(E,k). We will write[x]_E, or simply[x], for the equivalence class corresponding tox ∈ H(E,n)and(E) := ⨿_n (E,n)/_∼for the corresponding set of equivalence classes. The following is then clear from the definition of the direct limit. The set (E) is the direct limit (E,n) of the direct system (<ref>). Moreover, it is a semigroup when equipped with the direct sum [x]+ [x'] = [x ⊕ x'] and identity element 0 = [e]. We now arrive at our tentative definition of K-theory for unital operator systems. Let (E,e) be a unital operator system. We define the K-theory group K_0(E) of E to be the Grothendieck group of (E). Before addressing the properties of the associationK_0(E)to an operator system, let us first check the consistency of our definition with the notion ofK_0-groups when the unital operator system is in fact a unitalC^*-algebra. For a unital C^*-algebra A the group K_0(A) is isomorphic to the C^*-algebraic K-theory group of A. Since a hermitian formxin aC^*-algebra is invertible we may define a map between the semigroups (where𝒫(A)denotes the semigroup of projections inM_∞(A)up to homotopy equivalence): Φ: (A) →𝒫(A) [x] ↦[p= 12 (1-x |x|^-1) ] This map is well-defined, since ifx ∼ x'then also the corresponding projectionsp,p'are homotopy equivalent. It also maps the neutral element0 = [e]to0=[0] ∈𝒫(A), whilst being compatible with direct sums. Let us check it is bijective. For injectivity, let[x],[x'] ∈(A)be mapped to[p],[p']and assume that[p]=[p']. The homotopyp_tof projections (t ∈ [0,1]) that implements the equivalence betweenp_0 = p, p_1 = p'can be written asp_t = 1/2 (1-y_t)in terms of a self-adjoint elementy_tsatisfyingy_t^2 = 1, to wity_t = 1-2 p_t. This a continuous family of hermitian forms such thaty_0 = x|x|^-1andy_1 = x' |x'|^-1while alsox |x|^-1∼ xvia the homotopyx |x|^-t(t ∈ [0,1]) of hermitian forms, and the same applies tox' |x'|^-1∼ x'. Hencex ∼ x'which proves thatΦis injective. Surjectivity follows by taking a[p] ∈𝒫(A)and defines a hermitian formx = 1-2p. 
ThenΦ([x]) = [p]as desired. The next result prepares for invariance ofK_0under stable isomorphism (in Section <ref> below). Let E be a unital operator system and let N be a natural number. Then (E) is isomorphic to (M_N(E)) (and so are the corresponding K_0-groups). We will give an explicit proof in the caseN=2. For anyn>0we define a map^E,n : M_n(E)→ M_n(M_2(E))by ^E,n (x)= [ x_11 0 0 e x_12 0 0 0 ⋯ x_1n 0 0 0; x_21 0 0 0 x_22 0 0 e ⋯ ⋮; ⋮ ⋮ ⋱ ⋮; x_n1 0 0 0 ⋯ ⋯ x_nn 0 0 e ] Letu_σbe the permutation matrix corresponding to the permutationσ = [ 1 2 3 ⋯ n n+1 ⋯ 2n; 1 3 5 ⋯ 2n-1 2 ⋯ 2n ]This shuffles the columns and rows in such a way thatu_σ·^E,n(x)· u_σ^* = [ x 0; 0 e_n ]Sinceu_σ∈ M_2n() ⊆ M_2n(A_E)it follows from Lemma <ref> that^E,n(x) ∼ x, relative toE. Similarly, it follows that^E,n+n'(x⊕ x') ∼^E,n(x) ⊕^E,n'(x'), while the map^E,nalso respects non-degeneracy and self-adjointness. Note that also^E,n(e_n) = e_2n, and, in fact,^E,nis unital completely positive. As such, it is contractive, hence continuous so that it maps homotopy equivalences to homotopy equivalences. We conclude from all of this that the induced map^E: (E) ↦(M_2(E)), [x] ↦ [^E,n(x)]is a well-defined morphism of semigroups. Let us show that^Eis surjective: take[y] ∈(M_2(E),m). Then by the above it follows that^M_2(E),m(y) ∼ y, relative toM_2(E). Upon identifyingM_m(M_2(E))withM_2m(E)there is an elementỹ∈ H(E,2m)satisfying^E,2m(ỹ) ∼^M_2(E),m (y). But then^E([ỹ ]_E) = [y]_M_2(E)which shows surjectivity. For injectivity of^Esupposex ∈ H(E,n), x'∈ H(E,n')are such that^E,n(x) ∼^E,n'(x'), i.e. that^E([x]) = ^E([x']). Since^E,n(x) ∼ xand^E,n'(x') ∼ x'by the above, it follows thatx ∼ x'. But then[x] = [x']which shows that^Eis injective. This completes the proof. More generally, we may consider maps ^E,n_NM : M_n(M_N(E)) → M_n(M_M(E)) x ↦ v_σ^* ·[ x 0; 0 e_M-N ]· v_σ, in terms of a suitable permutation matrix v_σ shuffling the rows and columns to identify M_n(M_M(E)) with M_M(M_n(E)). The corresponding maps _NM: (M_N(E)) →(M_M(E)) then yield a direct system, and the above Theorem generalizes to an isomorphism (M_N(E)) ≅(E). §.§ K_0 and maps between operator systems The behavior of hermitian forms with respect to ucp maps as obtained in Proposition <ref> translates into the following: Let E,F be unital operator systems and ϕ:E → F a restriction of a *-homomorphism between the corresponding C^*-envelopes (so that Equation (<ref>) is satisfied). Then the induced map ϕ^* : (E) →(F) of semigroups is well-defined and induces a map between the corresponding K_0-groups (also denoted ϕ^*). We already know from Proposition <ref> that hermitian forms are mapped to hermitian forms. Clearly the mapϕbehaves well with respect to direct sums of hermitian forms, and unitality ofϕimplies that[e_E]is mapped to[e_F]. Finally, as in Proposition <ref> homotopies are mapped to homotopies. Let E,F be unital operator systems. If E and F are completely order isomorphic, then K_0(E) ≅ K_0(F). This follows from the previous Proposition in combination with the fact that a unital complete order isomorphismϕ: E → Fextends to a unital*-isomorphismϕ̃: C^*_(E) → C^*_(F)<cit.>. The following is then immediate. 
* For every unital operator sytem E, 𝕀_E^* = 𝕀_K_0(E); * If E, F,G are unital operator systems, and if ϕ:E → F, ψ:F → G are restrictions of *-homomorphisms between the respective C^*-envelopes, then ψ^* ∘ϕ^*= ψ^* ∘ϕ^*; * K_0({0}) = { 0}; We also record the following behaviour ofK_0with respect to direct sums of operator systems, which is a direct consequence of Lemma <ref>. Let E,F be unital operator systems. Then K_0(E ⊕ F) ≅ K_0(E) × K_0(F). § NON-UNITAL OPERATOR SYSTEMS AND STABILITY OF K-THEORY §.§ Non-unital operator systems Recall <cit.> ( cf.<cit.>) that the partial unitization of a non-unital operator systemEis given by the*-vector spaceE^♯ = E ⊕with matrix order structure:(x,A) ≥ 0 iff A ≥ 0 and ϕ(A_^-1/2 x A_^-1/2 ) ≥ -1for all >0and noncommutative statesϕ∈𝒮_n(E), and whereA_ = 𝕀_n + A. The matrix order unitse^♯_ninM_n(E^♯)are given by the identity matrices𝕀_n ∈ M_n(). This turns(E^♯, e^♯)into a unital operator system. Moreover, given a completely contractive completely positive mapϕ:E → Fwe have that the canonical extensionϕ^♯: E^♯→ F^♯is a ucp map (<cit.>) . We now extend the definition of K-theory to non-unital operator systems as follows: Let E be a non-unital operator systems and E^♯ its unitization. We define the sets (E,n) := π_0 ( { (x,A) ∈ H(E^♯,n) : A ∼_n 𝕀_n }) The Grothendieck group of the direct limit semigroup is denoted by K̃_0(E). Let E and F be operator systems. If E and F are completely isometric, completely order isomorphic, then (E) ≅(F) as semigroups. Consequently, in this case K̃_0(E) ≅K̃_0(F). This follows from the fact that for a completely isometric, complete order isomomorphismϕ:E → Fthe induced mapϕ^♯ :E^♯→ F^♯is a complete order isomorphism which furthermore respects the propertyA ∼_n 𝕀_n. Let us also compare this with with our previous definition of K-theory in the case of unital operator systems. Let E be a unital operator system and let E^♯ be its partial unitization. Then for any n ≥ 1 there are isomorphisms (E,n) ≅(E,n). Consequently, in this case K̃_0(E) ≅ K_0(E). In the unital case, there is a unital complete order isomorphism betweenE^♯andE ⊕given by(x, λ) ↦ (x+λ e, λ), where we have equippedE ⊕with the induced direct sum order structure <cit.>. By Lemma <ref> we have thatH(E ⊕,n) = H(E,n) × H(,n)from which it follows that{ (x,A) ∈ H(E^♯,n) : A ∼_n 𝕀_n }≅ H(E,n) ×{ A ∈ H(,n) : A ∼_n 𝕀_n }This isomorphism clearly respects direct sums and the unit, so that taking homotopy equivalence classes of this set of hermitian forms yields the statement. §.§ Stability of K_0 One of the crucial features ofK-theory forC^*-algebras is that it is Morita invariant, or, equivalently, invariant under stable isomorphism. We will now establish thatK-theory for unital operator systems shares this property, i.e. we will relate theK_0-groups ofEand𝒦⊗ E. Consider a direct system of Hilbert subspaces{ P_N }̋_N ≥ 0in$̋ with P_N ≅̋^N so that we realize = ()̋≅ M_N(). We then obtain the stabilization of E by the following series of completely contractive, completely positive maps defined for M ≥ N: κ_NM : M_N(E) → M_M(E), x ↦[ x 0; 0 0_M-N ], The inductive limit of the sequence (M_N(E),κ_NM) is 𝒦⊗ E with connecting maps κ_N,∞ : M_N(E) →𝒦⊗ E. The maps (κ_NM^♯)_*: (M_N(E),n) →(M_M(E),n) induced by κ_NM make the following diagram commute for any n: (M_N(E),n) [d]_≅[rr]^- (κ_NM^♯)_* (M_M(E),n)[d]_≅ (M_N(E),n) [rr]^-_NM (M_M(E),n) where _NM is defined in Equation (<ref>) and the vertical isomorphisms are the ones from Proposition <ref>. 
The maps κ_NM^♯: M_N(E)^♯→ M_M(E)^♯ and their amplifications are given explicitly by (κ_NM^♯)^(n) : M_n(E^♯) → M_n( M_N(E)^♯) , (x,A) ↦( u_σ[ x 0; 0 0_n(M-N) ] u_σ^* , A ). Here u_σ is a permutation matrix similar to the one appearing in the proof of Theorem <ref>: it identifies M_M(M_n(E)) with M_n(M_M(E)) by a suitable shuffle of rows and columns. When we identify M_N(E)^♯ with the direct sum operator systems M_N(E) ⊕ ( cf. Proposition <ref>), then the map (κ_1N^♯)^(n) becomes (x,A) ∈ M_n(M_N( E)) ⊕ M_n() ↦(u_σ( [ x 0; 0 A e_M-N ]) u_σ^*, A ) ∈ M_n( M_M(E)) ⊕ M_n() If we assume that A ∼_n _n we see that up to homotopy this map coincides with _NM, which completes the proof. For a unital operator system E we have K̃_0(𝒦⊗ E) ≅ K_0(E). The above Lemma, in combination with Theorem <ref> and Proposition <ref>, yields that (M_N(E)) ≅(E). We will show that the universal map u: (M_N(E)) →(⊗ E) for the direct limit in the following diagram is an isomorphism: (M_N(E)) [rd] @/^-2.0pc/[rdd]_(κ_N∞^♯)_*[rr]^(κ_NM^♯)_* (M_M(E)) [ld] @/^2.0pc/[ldd]^(κ_M∞^♯)_* (M_N(E)) @–>[d]_u (⊗ E) For injectivity of u, take [(x,A)] ∈(M_N(E),n) and [(x',A')] ∈(M_N(E),n') for some N and n,n'. Assume that [(κ_N,∞^(n)(x),A)] = [(κ_N,∞^(n')(x),A)] as elements in (⊗ E). In other words, there exists a family (x̃,Ã) ∈ H ( C[0,1] ⊗ (𝒦⊗ E)^♯ ,k) for some k ≥ n,n' such that the following hold: x̃(0) = κ_N,∞ ^(n)(x) ⊕ 0_k-n , Ã(0)=A ⊕_k-n, x̃(1) = κ_N,∞^(n) (x') ⊕ 0_k-n' Ã(1) = A' ⊕_k-n'. Consider the compressions R_M: 𝒦⊗ E → M_M(E) given by y ↦ (P_M ⊗ e)y (P_M ⊗ e). Then for the above x ∈ M_n(M_N(E)) we have κ_NM^(n)(x) = R_M^(n) (κ_N ∞^(n) (x)) ≡ (P_M ⊗ e_n) κ_N ∞^(k)(x) (P_M ⊗ e_n). and similarly for x'. We may also compress the homotopy to give (R_M^♯)^(k)((x̃, Ã)) = (R_M^(k) (x̃),Ã ) with end points: (R_M^♯)^(k)( ( (x̃(0)),Ã (0)) = ( κ_NM^(n)(x) ⊕ 0_k-n, A ⊕_k-n) (R_M^♯)^(k)( ( (x̃(1)),Ã (1)) = ( κ_NM^(n')(x') ⊕ 0_k-n', A' ⊕_k-n') A priori the family (R_M^♯)^(k)((x̃, Ã)) lies in M_k(C[0,1] ⊗ (M_M( E))^♯) so we need to show that it completely lies in ( M_M(E),n), at least for some sufficiently large M. In other words, we need to show that the gap of (R_M^(k) (x̃),Ã) is strictly positive, while Ã(t) ∼𝕀_k for all t. The latter fact is clear from the assumption that both A ∼_n and A' ∼_n'. For the former claim, we identify C[0,1] ⊗ (M_M (E))^♯ with C[0,1] ⊗ (M_M (E) ⊕) which maps (R_M^(k) (x̃),Ã ) to (R_M^(k) (x̃) + P_M ⊗Ã e,Ã) since P_M ⊗ e is the order unit of M_M(E). In order to check that the latter has strictly positive gap, we realize M_k((⊗ E)^♯) as concrete operators in B( ⊗̋( '̋)^⊕ k) for some Hilbert space '̋. We compute (R_M^(k)(x̃ )+ P_M ⊗Ã )^2 = ((P_M ⊗ e_k) (x̃ + 𝕀_⊗̋Ã )(P_M ⊗ e_k) )^2 = ((P_M ⊗ e_k) (x̃ + 𝕀_⊗̋Ã )^2(P_M ⊗ e_k) + (P_M ⊗ e_k )[P_M ⊗ e_k ,x̃ + P_M ⊗Ã ]^2 (P_M ⊗ e ) ≥ g^2(P_M ⊗ e_k) - [P_M ⊗ e_k , x̃] ^2. We have used that (x̃(t),Ã(t)) ∈ H( (⊗ E)^♯,k) has gap given by some g>0 for all t. Moreover, [P_M ⊗ e_k , x̃ ]→ 0 as N →∞ so that it follows that (R_M^(k)(x̃),Ã ) is a hermitian form. We thus find that [(R_M^(k)(x̃)(t),Ã(t) )] ∈( M_M(E),k) for all t, so that [(κ_NM^(n)(x),A)] = [(κ_NM^(n')(x'), A)] in (M_M(E)). Since we also know that κ_NM^♯ induces isomorphisms between (M_N(E)) and (M_M(E)) we conclude that [(x,A)] = [(x', A)] as elements in (M_N(E)). For surjectivity, take an arbitrary [(x,A)] ∈( ⊗ E, n). We may approximate x by a finite rank operator (for instance using the compression R_N^(n)) so that for all ϵ>0 there exists a x_0∈ M_n(M_N(E)) for some N so that x - κ_N,∞^(n)(x_0) < ϵ. 
We then also have (x,A) - (κ_N,∞^♯)^(n) (x_0,A) < ϵ so that if (x,A) has gap g and we choose ϵ small enough so that g^2 - ϵ x - ϵ^2/4 >0, then (x,A) and (κ_N,∞^♯)^(n) (x_0,A) are homotopy equivalent by Corollary <ref>. The element (x_0,A) is the sought-for element in (M_N(E)) ≅(E) that maps to [(x,A)]. Let E and F be Morita equivalent unital operator systems. Then K_0(E) ≅ K_0(F). In <cit.> it is shown that E and F are Morita equivalent whenever ⊗ E ≅⊗ F via a completely isometric, complete order isomorphism. § APPLICATION TO THE SPECTRAL LOCALIZER In <cit.> the spectral localizer was introduced as a powerful tool for computing index pairings. They relate a certain Fredholm index to the signature of a finite-dimensional matrix —the so-called spectral localizer. We will put it in the context of our notion of K-theory, realizing the spectral localizer as a map (E,n) →. the spectral localizer, including its relation to spectral flow, we refer to the excellent textbook <cit.> and references therein. In order to describe the index map on (E,n), we start with an operator system spectral triple <cit.>. A (unital) operator system spectral triple is given by a triple (E,,̋D) where E is a unital operator system realized concretely so that E ⊆ C^*_(E) ⊆ B()̋, and a self-adjoint operator D: (D) →$̋ such that * the commutators [D,x] extend to bounded operators for all x ∈ for a dense *-subspace ⊆ E; * the resolvent (i+D)^-1 is a compact operator. An operator system spectral triple is called even if in addition to the above, there is a grading operatorγon$̋ (so that γ^* = γ, γ^2 = 1_$̋) which commutes with allx ∈ Eand anti-commutes withD. Otherwise, it is called odd. In the even case, we can decompose=̋_̋+ ⊕_̋-according to the eigenvalues ofγ, and decompose accordinglyD = [ 0 D_0; D_0^* 0 ].Given an even operator system spectral triple, a parameterκ >0and a hermitian formx ∈ H(,n)we now define the even spectral localizer<cit.> as L_κ (D,x) = [ x κ (D_0)^⊕ n; κ (D_0^*)^⊕ n - x ]. Let (E,,̋D) be an even finite-dimensional operator system spectral triple. Let x ∈ H(,n) with gap g>0 and let κ_0 = g^2 [D,x] ^-1. Then the index map defined by the signature of the spectral localizer, _D([x]) = 1/2 (L_κ(D,x)), is constant for all κ < κ_0 and invariant under homotopy equivalence. Consequently, it induces a map _D: (,n) →. Without loss of generality we taken=1and compute very similar to <cit.> that L_κ(D,x)^2 = [ κ^2 D_0 D_0^* + x^ 2 κ [D_0,x]; κ [D_0,x]^* κ^2 D_0^* D_0 +x^2 ]≥( g^2 -κ [D,x]) 1_M_2(B()̋). HenceL_κ(D,x)is invertible (and thus has well-defined signature) providedκ< κ_0and, moreover, the signature is constant for allκ <κ_0as no eigenvalues will cross the origin. Also, one may easily check that(L_κ(D,x ⊕ x')) = (L_κ(D,x ))+ (L_κ(D,x' )).In order to see that (L_κ(D,e )) = 0consider an eigensystem{v_λ = (v_λ^+, v^-_λ)}_λin_̋+ ⊕_̋-such thatD_0 v^-_λ = λ v^+_λandγ v_λ^± = ± v_λ^±. We may write the matrix ofL_κin this eigensystem as⟨ v_λ, L_κ(D,x) v_λ'⟩ =δ_λλ'[ 1 λ; λ -1 ]∼δ_λλ'[ √(1+λ^2) 0; 0 -√(1+λ^2) ]and the signature of the latter matrix vanishes. Consider now a homotopyx̃inH(E,n)betweenx = x̃(0)andx' = x̃ (1). We defineκ_0' = g^2 / sup_t [D,x̃(t)]and note that for allκ < κ_0'L_κ(D,x̃)^2 ≥ g^2 - κsup_t [D, x̃(t)]by a computation similar to the one in Eq. (<ref>). So as long asκ < κ_0'we find that L_κ(D,x̃(t))is constant int. We combine this with the fact thatκ < κ_0' < κ_0to conclude that L_κ(D,x̃(0)) = L_κ(D,x̃(1)). This completes the proof. We may now rephrase the main results of <cit.>. 
For an invertible self-adjoint elementx ∈ Aone considers the class[p=1/2 (1-x|x|^-1)]in the K-theory of theC^*-algebraA. Given a spectral triple(A,,̋D)one would like to compute the index pDp. As shown in loc.cit. this index may be computed in terms of a spectrally truncated (operator system) spectral triple(P A P,P ,̋ P D P)for a spectral projectionP(ofD) of sufficiently high rank. Indeed, we then have (pDp)= _PDP ([P x P ]) where the map on the right-hand side is the index map on(PAP,n)that appeared in Proposition <ref>. §.§ Example: spectral localizer on the torus Let us illustrate the spectral localizer for a spectrally truncated two-torus. First, recall from <cit.> the class of projections on the torus ( cf.<cit.>, much inspired by the so-called Powers–Rieffel projections on the noncommutative torus <cit.>: p = [ f g+hU^*; g+ hU 1-f ]∈ M_2(C(^2)), wheref,g,hare real-valued (periodic) functions of the first variablet_1, andUis a unitary depending only on the second variablet_2, sayU(t_2)=e^i m t_2. The projection propertyp^2=ptranslates into the two conditionsgh = 0, g^2 + h^2 = f-f^2.A possible solution of these relations is given by0 ≤ f ≤ 1 such that f(0)=1, f(π) = 0 ,and theng = χ_[0,π]√(f-f^2)andh = χ_[π,2π]√(f-f^2), whereχ_Xis the indicator function for the setX(see Figure <ref>). The Dirac operator on the two-torus is defined on the coreC^∞(^2) ⊗^2byD= [ 0 D_0; D_0^* 0 ]; D_0 = i ∂_t_1 + ∂_t_2.The corresponding spectrum is{±√(n_1^2+n_2^2)}_n_1,n_2∈. Let us now consider a spectral projection corresponding to a (discrete) ball of radiusρ, i.e.P_ρ = χ(|D|≤ρ). We consider the corresponding spectral compression ofY=1-2p, i.e.P_ρ Y P_ρ = [ P_ρ -2 P_ρ fP_ρ -2P_ρ gP_ρ -2P_ρ h U P_ρ; -2P_ρ gP_ρ -2P_ρ h U^*P_ρ -P_ρ +2P_ρ fP_ρ ]∈ M_2(P_ρ C^∞(^2)P_ρ )For suitableP_ρthese are hermitian forms in the truncated operator system, that is,P_ρ YP_ρ∈(P_ρ C(^2)P_ρ ,2). The spectral localizer is now given by the following matrix:L_κ,ρ:= L_κ(P_ρ D P_ρ,P_ρ Y P_ρ) = [ P_ρ YP_ρ κ P_ρ D_0 P_ρ; κ P_ρ D_0^* P_ρ -P_ρ YP_ρ ]The main result of <cit.> ( cf. Eq. (<ref>)) now implies that for suitableρ,κthe index ofpD pcan be expressed in terms of the signature of the above spectral localizer. More precisely, in terms of the above index map on(P_ρ C(^2)P_ρ , 2)we have:(pDp) = _P_ρ DP_ρ([P_ρ YP_ρ ]) ≡1/2Sig L_κ,ρIn Figure <ref> we illustrate the resulting signature by showing the negative and positive eigenvalues ofL_κ,ρ. We find already for lowρthat the signature of the spectral localizer is equal to (twice) the winding numberm, that is to say, the index ofpD p. § OUTLOOK We have proposed a generalization of K-theory forC^*-algebras to operator systems, which in the spirit of Witt was based on hermitian forms. Several aspects are still to be developed, and which we leave for future research. This includes a careful study of the functorial properties ofK_0and, in particular, howK_0behaves with respect to approximations results. For instance for approximations ofC^*-algebra by finite-dimensional algebras in <cit.> where the maps are approximately multiplicative, or approximately order-zero cpc maps. Or in the context of quantum metric spaces where it turns out that small Hausdorff distance between two such spaces —the state spaces of twoC^*-algebrasA,B— allows one to relate projections inAto projections inB<cit.>. Another open problem is to identify the higher K-groups, starting withK_1. Again, one needs to find a analogue of the assumption of being unitary. 
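For concreteness, the truncated computation above can be reproduced numerically. The following Python sketch (not the authors' implementation) assembles P_ρ Y P_ρ and the spectral localizer L_κ,ρ in the Fourier basis of the retained modes and prints half of its signature. It assumes one admissible profile, f(t_1) = (1 + cos t_1)/2, computes Fourier coefficients by FFT, and uses illustrative values of m, ρ, κ; since the identity with the index is only guaranteed for suitable ρ and κ, these parameters may need adjusting.

```python
import numpy as np

# illustrative parameters: winding number of U, truncation radius rho, coupling kappa
m, rho, kappa, L = 2, 3.0, 0.25, 256

nmax = int(rho)
modes = [(n1, n2) for n1 in range(-nmax, nmax + 1) for n2 in range(-nmax, nmax + 1)
         if n1 * n1 + n2 * n2 <= rho * rho]          # Fourier modes kept by P_rho
idx = {nm: k for k, nm in enumerate(modes)}
dim = len(modes)

# one admissible choice of f, g, h (f(0)=1, f(pi)=0, g*h = 0, g^2 + h^2 = f - f^2)
t = 2 * np.pi * np.arange(L) / L
f = (1 + np.cos(t)) / 2
w = np.sqrt(np.clip(f - f * f, 0, None))
g = np.where(t <= np.pi, w, 0.0)
h = np.where(t > np.pi, w, 0.0)
fhat, ghat, hhat = (np.fft.fft(v) / L for v in (f, g, h))

def compress(c, shift2=0):
    """P_rho M_phi P_rho for phi(t1, t2) = (sum_k c_k e^{i k t1}) * e^{i shift2 t2}."""
    M = np.zeros((dim, dim), dtype=complex)
    for (a1, a2), i in idx.items():
        for (b1, b2), j in idx.items():
            if a2 - b2 == shift2:
                M[i, j] = c[(a1 - b1) % L]
    return M

F = compress(fhat)
GHU = compress(ghat) + compress(hhat, shift2=m)       # compression of g + h*U
I = np.eye(dim)
Y = np.block([[I - 2 * F, -2 * GHU.conj().T],         # P_rho (1 - 2p) P_rho
              [-2 * GHU,  -I + 2 * F]])

d0 = np.diag([-n1 + 1j * n2 for (n1, n2) in modes])   # D_0 = i d/dt1 + d/dt2 on the modes
D0 = np.kron(np.eye(2), d0)                           # D_0 taken twice, matching the 2x2 form Y
L_loc = np.block([[Y, kappa * D0], [kappa * D0.conj().T, -Y]])

ev = np.linalg.eigvalsh(L_loc)
print("smallest |eigenvalue|:", np.min(np.abs(ev)))
print("half signature       :", int((np.sum(ev > 0) - np.sum(ev < 0)) // 2),
      "(should equal the index of pDp, i.e. +/- m, for suitable rho and kappa)")
```

The point mirrored in the code is that each entry of P_ρ Y P_ρ is the compression of a multiplication operator, i.e. a two-dimensional Toeplitz-type matrix built from Fourier coefficients, while P_ρ D_0 P_ρ is diagonal in the same basis.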
However, just demanding invertibility appears too weak; one can easily show that the set of all invertible elements in a finite-dimensional operator system is contractible —arguing much as in the case ofGL(n,). Nevertheless, from the odd spectral localizer <cit.> we know that some index-theoretic information is maintained after spectrally truncating a unitary. Again this fact will be leading in the development ofK_1, which we also leave for future research. This extends of course to finding an analogue of Bott periodicity. Dually, one may be interested also in defining K-homology for operator systems. Note here that the notion of a Fredholm module makes perfect sense for unital operator systems realized concretely in Hilbert space, but that this immediately gives rise to a Fredholm module for the pertinentC^*-extension. The task is thus to find the right notion of equivalence. '99AR22 R. Araiza and T. Russell. An abstract characterization for projections in operator systems, arXiv:2006.03094. Arv08 W. Arveson. The noncommutative Choquet boundary. J. Amer. Math. Soc. 21 (2008) 1065–1084. Arv69 W. B. Arveson. Subalgebras of C^∗-algebras. Acta Math. 123 (1969) 141–224. Bal05 P. Balmer. Witt groups. In Handbook of K-theory. Vol. 1, 2, pages 539–576. Springer, Berlin, 2005. Bla06 B. Blackadar. Operator algebras, volume 122 of Encyclopaedia of Mathematical Sciences. Springer-Verlag, Berlin, 2006. Theory of C^*-algebras and von Neumann algebras, Operator Algebras and Non-commutative Geometry, III. BK97 B. Blackadar and E. Kirchberg. Generalized inductive limits of finite-dimensional C^*-algebras. Math. Ann. 307 (1997) 343–380. CCM14 A. H. Chamseddine, A. Connes, and V. Mukhanov. Geometry and the quantum: Basics. JHEP 1412 (2014) 098. CS20 A. Connes and W. D. van Suijlekom. Spectral truncations in noncommutative geometry and operator systems. Comm. Math. Phys. 383 (2021) 2021–2067. CS21 A. Connes and W. D. van Suijlekom. Tolerance relations and operator systems. Acta Sci. Math. (Szeged) 88 (2022) 101–129. CW23 K. Courtney and W. Winter. Nuclearity and CPC*-systems, arXiv:2304.01332. ALL22 F. D'Andrea, G. Landi, and F. Lizzi. Tolerance relations and quantization. Lett. Math. Phys. 112 (2022) Paper No. 65, 28. ALM14 F. D'Andrea, F. Lizzi, and P. Martinetti. Spectral geometry with a cut-off: topological and metric aspects. J. Geom. Phys. 82 (2014) 18–45. DK15 K. R. Davidson and M. Kennedy. The Choquet boundary of an operator system. Duke Math. J. 164 (2015) 2989–3004. DSW23 N. Doll, H. Schulz-Baldes, and N. Waterstraat. Spectral flow—a functional analytic and index-theoretic approach, volume 94 of De Gruyter Studies in Mathematics. De Gruyter, Berlin, [2023] 2023. ER00 E. G. Effros and Z.-J. Ruan. Operator spaces, volume 23 of London Mathematical Society Monographs. New Series. The Clarendon Press, Oxford University Press, New York, 2000. EKT21 G. K. Eleftherakis, E. T. A. Kakariadis, and I. G. Todorov. Morita equivalence for operator systems, arXiv:2109.12031. Far21 D. Farenick. The operator system of Toeplitz matrices. Trans. Amer. Math. Soc. Ser. B 8 (2021) 999–1023. FB23 D. Farenick and M. McBurney. Toeplitz separability, entanglement, and complete positivity using operator system duality. Proc. Amer. Math. Soc. Ser. B 10 (2023) 114–128. GS22 M. Gielen and W. D. van Suijlekom. Operator systems for tolerance relations on finite sets. Indag. Math. (N.S.) 34 (2023) 606–621. Ham79 M. Hamana. Injective envelopes of operator systems. Publ. Res. Inst. Math. Sci. 15 (1979) 773–785. Hek21 E.-M. Hekkelman. 
Truncated geometry on the circle. Lett. Math. Phys. 112 (2022) Paper No. 20, 19. KK21 A. K. Karn and A. Kumar. k_0-group of absolute matrix order unit spaces, 2101.01966. KKM21 M. Kennedy, S.-J. Kim, and N. Manor. Nonunital operator systems and noncommutative convexity. Int. Math. Res. Not. IMRN (2023) 4408–4455. Kle14 C. Kleski. Boundary representations and pure completely positive maps. J. Operator Theory 71 (2014) 45–62. Knu91 M.-A. Knus. Quadratic and Hermitian forms over rings, volume 294 of Grundlehren der mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. Springer-Verlag, Berlin, 1991. With a foreword by I. Bertuccioni. LS23 M. Leimbach and W. D. van Suijlekom. Gromov-Hausdorff convergence of spectral truncations for tori. Adv. Math. 439 (2024) Paper No. 109496, 26. LS18b T. A. Loring and H. Schulz-Baldes. Spectral flow argument localizing an odd index pairing. Canad. Math. Bull. 62 (2019) 373–381. LS18a T. A. Loring and H. Schulz-Baldes. The spectral localizer for even index pairings. J. Noncommut. Geom. 14 (2020) 1–23. Lor86 T. A. Loring. The torus and noncommutative topology. ProQuest LLC, Ann Arbor, MI, 1986. Thesis (Ph.D.)–University of California, Berkeley. MS98 P. S. Muhly and B. Solel. An algebraic characterization of boundary representations. In Nonselfadjoint operator algebras, operator theory, and related topics, volume 104 of Oper. Theory Adv. Appl., pages 189–196. Birkhäuser, Basel, 1998. Ng21 C.-K. Ng. Dual spaces of operator systems. J. Math. Anal. Appl. 508 (2022) Paper No. 125890, 23. OY15 H. Oyono-Oyono and G. Yu. On quantitative operator K-theory. Ann. Inst. Fourier (Grenoble) 65 (2015) 605–674. Pau02 V. Paulsen. Completely bounded maps and operator algebras, volume 78 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge, 2002. Pis03 G. Pisier. Introduction to operator space theory, volume 294 of London Mathematical Society Lecture Note Series. Cambridge University Press, Cambridge, 2003. Rie81 M. A. Rieffel. C*-algebras associated with irrational rotations. Pacific J. Math. 93 (1981) 415–429. Rie10 M. A. Rieffel. Vector bundles and Gromov-Hausdorff distance. J. K-Theory 5 (2010) 39–103. Rie18 M. A. Rieffel. Vector bundles for “matrix algebras converge to the sphere”. J. Geom. Phys. 132 (2018) 181–204. Rie23 M. A. Rieffel. Convergence of Fourier truncations for compact quantum groups and finitely generated groups. J. Geom. Phys. 192 (2023) Paper No. 104921, 13. Ros95 J. Rosenberg. Analytic Novikov for topologists. In Novikov conjectures, index theorems and rigidity, Vol. 1 (Oberwolfach, 1993), volume 226 of London Math. Soc. Lecture Note Ser., pages 338–372. Cambridge Univ. Press, Cambridge, 1995. Sui14 W. D. suijlekomvan Suijlekom. Noncommutative Geometry and Particle Physics. Springer, 2015. Sui21 W. D. van Suijlekom. Gromov-Hausdorff convergence of state spaces for spectral truncations. J. Geom. Phys. 162 (2021) Paper No. 104075, 11. Wer02 W. Werner. Subspaces of L(H) that are *-invariant. J. Funct. Anal. 193 (2002) 207–223.
http://arxiv.org/abs/2409.03363v1
20240905091038
Con-ReCall: Detecting Pre-training Data in LLMs via Contrastive Decoding
[ "Cheng Wang", "Yiwei Wang", "Bryan Hooi", "Yujun Cai", "Nanyun Peng", "Kai-Wei Chang" ]
cs.CL
[ "cs.CL" ]
§ ABSTRACT The training data in large language models is key to their success, but it also presents privacy and security risks, as it may contain sensitive information. Detecting pre-training data is crucial for mitigating these concerns. Existing methods typically analyze target text in isolation or solely with non-member contexts, overlooking potential insights from simultaneously considering both member and non-member contexts. While previous work suggested that member contexts provide little information due to the minor distributional shift they induce, our analysis reveals that these subtle shifts can be effectively leveraged when contrasted with non-member contexts. In this paper, we propose Con-ReCall, a novel approach that leverages the asymmetric distributional shifts induced by member and non-member contexts through contrastive decoding, amplifying subtle differences to enhance membership inference. Extensive empirical evaluations demonstrate that Con-ReCall achieves state-of-the-art performance on the WikiMIA benchmark and is robust against various text manipulation techniques. § INTRODUCTION Large Language Models (LLMs) <cit.> have revolutionized natural language processing by achieving remarkable performance across a wide range of language tasks. These models owe their success to extensive training datasets, often encompassing trillions of tokens. However, the sheer volume of these datasets makes it practically infeasible to meticulously filter out all inappropriate data points. Consequently, LLMs may unintentionally memorize sensitive information, raising significant privacy and security concerns. This memorization can include test data from benchmarks <cit.>, copyrighted materials <cit.>, and personally identifiable information <cit.>, leading to practical issues such as skewed evaluation results, potential legal ramifications, and severe privacy breaches. Therefore, developing effective techniques to detect unintended memorization in LLMs is crucial. Existing methods for detecting pre-training data <cit.> typically analyze target text either in isolation or alongside non-member contexts, while commonly neglecting member contexts. This omission is based on the belief that member contexts induce only minor distributional shifts, offering limited additional value <cit.>. However, our analysis reveals that these subtle shifts in member contexts, though often dismissed, hold valuable information that has been underexploited. The central insight of our work is that information derived from member contexts gains significant importance when contrasted with non-member contexts. This observation led to the development of Con-ReCall, a novel approach that harnesses the contrastive power of prefixing target text with both member and non-member contexts. By exploiting the asymmetric distributional shifts induced by these different prefixes, Con-ReCall provides more nuanced and reliable signals for membership inference. This contrastive strategy not only uncovers previously overlooked information but also enhances the accuracy and robustness of pre-training data detection, offering a more comprehensive solution than existing methods.
To demonstrate the effectiveness of , we conduct extensive empirical evaluations on the method across a variety of models of different sizes. Our experiments show that outperforms the current state-of-the-art method by a significant margin, as shown in Figure <ref>. Notably, only requires a gray-box access to LLMs, i.e., token probabilities, and does not necessitate a reference model, enhancing its applicability in real-world scenarios. We summarize our contributions as follows: 1) We introduce , a novel contrastive decoding approach that effectively utilizes both member and non-member contexts, significantly enhancing the distinction between member and non-member data in LLMs. 2) Through extensive experiments, we demonstrate that achieves substantial improvements over existing baselines, highlighting its effectiveness and resilience in detecting pre-training data. 3) We demonstrate that is robust against text manipulation techniques, including random deletion, synonym substitution, and paraphrasing, maintaining superior performance and resilience to potential evasion strategies. § RELATED WORK Membership inference attack. Membership inference attack (MIA) was first proposed by <cit.>. MIA has been extensively studied, particularly in classification models within the computer vision domain <cit.>. While there is growing attention to MIA in language models, most work has focused on detecting fine-tuning data <cit.>. MIA can serve as a powerful tool for detecting copyrighted materials <cit.>, personally identifiable information <cit.> and test-set contamination <cit.>. Detecting Pre-training Data in LLMs. Although detecting pre-training data is an instance of MIA, it faces greater challenges compared to traditional MIA. Classical MIA <cit.> typically requires training a shadow model using data sampled from the training data distribution. However, for large language models, many developers are reluctant to release the full training data <cit.>, making it impractical to train shadow models. Additionally, due to the sheer volume of training data, LLMs are usually trained for a single epoch, which makes memorization inherently difficult and detection even more challenging <cit.>. To our knowledge, <cit.> was the first to investigate this problem, contributing a baseline method and the WikiMIA benchmark. Their method, Min-K%, despite its simplicity, serves as a powerful baseline. <cit.> enhanced Min-K% by normalizing token log-probabilities. The ReCall method  introduces relative conditional log-likelihoods and achieves current state-of-the-art performance. Contrastive Decoding. Contrastive decoding is primarily a method for text generation. Depending on the elements being contrasted, it serves different purposes. For example, DExperts <cit.> use outputs from a model exposed to toxicity to guide the target model away from undesirable outputs. Context-aware decoding <cit.> contrasts model outputs given a query with and without relevant context. <cit.> further enhance context-aware decoding by providing irrelevant context in addition to relevant context. In this paper, we adapt the idea of contrastive decoding to MIA, where the contrast occurs between target data prefixed with member and non-member contexts. § §.§ Problem Formulation Consider a model ℳ trained on dataset 𝒟. The objective of a membership inference attack is to ascertain whether a data point x belongs to 𝒟 (i.e., x ∈𝒟) or not (i.e., x ∉𝒟). 
Formally, we aim to develop a scoring function s(x, ℳ) →ℝ, where the membership prediction is determined by a threshold τ: x ∈𝒟 if s(x, ℳ) ≥τ x ∉𝒟 if s(x, ℳ) < τ . §.§ Motivation Our key insight is that prefixing target text with contextually similar content increases its log-likelihood, while dissimilar content decreases it. Member prefixes boost log-likelihoods for member data but reduce them for non-member data, with non-member prefixes having the opposite effect. This principle stems from language models' fundamental tendency to generate contextually consistent text. To quantify the impact of different prefixes, we use the Wasserstein distance to measure the distributional shifts these prefixes induce. For discrete probability distributions P and Q defined on a finite set X, the Wasserstein distance W is given by: W(P, Q) = ∑_x∈ X |F_P(x) - F_Q(x)|, where F_P and F_Q are the cumulative distribution functions of P and Q respectively. To capture the directionality of the shift, we introduce a signed variant of this metric: W_signed(P, Q) = sign(𝔼_Q[X] - 𝔼_P[X]) · W(P, Q). Our experiments reveal striking asymmetries in how member and non-member data respond to different prefixes. Figure <ref> illustrates these asymmetries, showing the signed Wasserstein distances between original and prefixed distributions across varying numbers of shots, where shots refer to the number of non-member data points used in the prefix. We observe two key phenomena: * Asymmetric Shift Direction: Member data exhibits minimal shift when prefixed with other member contexts, indicating a degree of distributional stability. However, when prefixed with non-member contexts, it undergoes a significant negative shift. In contrast, non-member data displays a negative shift when prefixed with member contexts and a positive shift with non-member prefixes. * Asymmetric Shift Intensity: Non-member data demonstrated heightened sensitivity to contextual modifications, manifesting as larger magnitude shifts in the probability distribution, regardless of the prefix type. Member data, while generally more stable, still exhibited notable sensitivity, particularly to non-member prefixes. These results corroborate our initial analysis and establish a robust basis for our contrastive approach. The asymmetric shifts in both direction and intensity provide crucial insights for developing a membership inference technique that leverages these distributional differences effectively. §.§ Contrastive Decoding with Member and Non-member Prefixes Building on the insights from our analysis, we propose , a method that exploits the contrastive information between member and non-member prefixes to enhance membership inference through contrastive decoding. Our approach is directly motivated by the two key observations from the previous section: * The asymmetric shift direction suggests that comparing the effects of member and non-member prefixes could provide a strong signal for membership inference. * The asymmetric shift intensity indicates the need for a mechanism to control the relative importance of these effects in the decoding process. These insights lead us to formulate the membership score s(x, M) for a target text x and model M as follows: LL(x|P_non-member) - γ· LL(x|P_member)/LL(x) , where LL(·) denotes the log-likelihood, P_member and P_non-member are prefixes composed of member and non-member contexts respectively, and γ is a parameter controlling the strength of the contrast. 
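To make the score concrete, the following sketch evaluates it with token log-probabilities from a HuggingFace causal language model, reading the displayed expression as the difference LL(x | P_non-member) − γ · LL(x | P_member) divided by LL(x). The use of the mean token log-likelihood (rather than the sum), the Pythia-6.9B checkpoint, and the default γ are illustrative assumptions rather than prescriptions of the method.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
name = "EleutherAI/pythia-6.9b"          # one of the target models evaluated later
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name).to(device).eval()

@torch.no_grad()
def loglik(text: str, prefix: str = "") -> float:
    """Mean token log-likelihood of `text`, optionally conditioned on `prefix`."""
    text_ids = tok(text, return_tensors="pt").input_ids.to(device)
    if prefix:
        prefix_ids = tok(prefix, return_tensors="pt").input_ids.to(device)
        ids = torch.cat([prefix_ids, text_ids], dim=1)
    else:
        ids = text_ids
    logits = model(ids).logits
    logprobs = torch.log_softmax(logits[0, :-1], dim=-1)      # predictions for positions 1..n-1
    targets = ids[0, 1:]
    token_ll = logprobs[torch.arange(len(targets), device=device), targets]
    start = ids.shape[1] - text_ids.shape[1]                  # first position belonging to `text`
    return token_ll[max(start - 1, 0):].mean().item()

def con_recall_score(x: str, p_member: str, p_nonmember: str, gamma: float = 0.5) -> float:
    # ( LL(x | P_non-member) - gamma * LL(x | P_member) ) / LL(x)
    return (loglik(x, p_nonmember) - gamma * loglik(x, p_member)) / loglik(x)
```

In practice the prefixes are formed by concatenating a few known non-member texts and (known or approximated) member texts, and the resulting score is thresholded as in the problem statement above.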
This formulation provides a robust signal for membership inference by leveraging the distributional differences revealed in our analysis. Figure <ref> illustrates how our contrastive approach amplifies the distributional differences Importantly, requires only gray-box access to the model, utilizing solely token probabilities. This characteristic enhances its practical utility in real-world applications where full model access may not be available, making it a versatile tool for detecting pre-training data in large language models. § EXPERIMENTS In this section, we will evaluate the effectiveness of across various experimental settings, demonstrating its superior performance compared to existing methods. §.§ Setup Baselines. In our experiment, we evaluate against seven baseline methods. Loss  directly uses the loss of the input as the membership score. Ref  requires another reference model, which is trained on a dataset with a distribution similar to 𝒟, to calibrate the loss calculated in the Loss method. Zlib  instead calibrates the loss by using the input's Zlib entropy. Neighbor  perturbs the input sequence to generate n neighbor data points, and the loss of x is compared with the average loss of the n neighbors. Min-K%  is based on the intuition that a member sequence should have few outlier words with low probability; hence, the top-k% words having the minimum probability are averaged as the membership score. Min-K%++  is a normalized version of Min-K% with some improvements.   calculates the relative conditional log-likelihood between x and x prefixed with a non-member contexts P_non-member. More details can be found in Table <ref>. Datasets. We primarily use WikiMIA <cit.> as our benchmark. WikiMIA consists of texts from Wikipedia, with members and non-members determined using the knowledge cutoff time, meaning that texts released after the knowledge cutoff time of the model are naturally non-members. WikiMIA is divided into three subsets based on text length, denoted as WikiMIA-32, WikiMIA-64, and WikiMIA-128. Another more challenging benchmark is MIMIR <cit.>, which is derived from the Pile <cit.> dataset. The benchmark is constructed using a train-test split, effectively minimizing the temporal shift present in WikiMIA, thereby ensuring a more similar distribution between members and non-members. More details about these two benchmarks are presented in Appendix <ref>. Models. For the WikiMIA benchmark, we use Mamba-1.4B <cit.>, Pythia-6.9B <cit.>, GPT-NeoX-20B <cit.>, and LLaMA-30B <cit.>, consistent with <cit.>. For the MIMIR benchmark, we use models from the Pythia family, specifically 2.8B, 6.9B, and 12B. Since Ref <cit.> requires a reference model, we use the smallest version of the model from that series as the reference model, for example, Pythia-70M for Pythia models, consistent with previous works <cit.>. Metrics. Following the standard evaluation metrics <cit.>, we report the AUC (area under the ROC curve) to measure the trade-off between the True Positive Rate (TPR) and False Positive Rate (FPR). We also include TPR at low FPRs (TPR@5%FPR) as an additional metrics. Implementation Details. For Min-K% and Min-K%++, we vary the hyperparameter k from 10 to 100 in steps of 10. For , we optimize γ from 0.1 to 1.0 in steps of 0.1. Following <cit.>, we use seven shots for both ReCall and on WikiMIA. For MIMIR, due to its increased difficulty, we vary the number of shots from 1 to 10. In all cases, we report the best performance. For more details, see Appendix <ref>. 
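For reference, the two reported quantities can be computed from raw membership scores as in the short sketch below; the synthetic scores are only there to make the snippet runnable.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def evaluate(scores, labels, fpr_budget=0.05):
    """labels: 1 = member, 0 = non-member; higher score = more member-like."""
    auc = roc_auc_score(labels, scores)
    fpr, tpr, _ = roc_curve(labels, scores)
    tpr_at = tpr[np.searchsorted(fpr, fpr_budget, side="right") - 1]
    return {"AUC": auc, "TPR@5%FPR": tpr_at}

# toy illustration with synthetic, partially separated scores
rng = np.random.default_rng(0)
labels = np.r_[np.ones(1000), np.zeros(1000)]
scores = np.r_[rng.normal(1.0, 1.0, 1000), rng.normal(0.0, 1.0, 1000)]
print(evaluate(scores, labels))
```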
§.§ Results Results on WikiMIA. Table <ref> summarizes the experimental results on WikiMIA, demonstrating 's significant improvements over baseline methods. In terms of AUC performance, our method improved upon ReCall by 7.4%, 6.6%, and 5.7% on WikiMIA-32, -64, and -128 respectively, achieving an average improvement of 6.6% and state-of-the-art performance. For TPR@5%FPR, outperformed the runner-up by even larger margins: 30.0%, 34.8%, and 27.6% on WikiMIA-32, -64, and -128 respectively, with an average improvement of 30.8%. Notably, achieves the best performance across models of different sizes, from Mamba-1.4B to LLaMA-30B, demonstrating its robustness and effectiveness. The consistent performance across varying sequence lengths suggests that effectively identifies membership information in both short and long text samples, underlining its potential as a powerful tool for detecting pre-training data in large language models in diverse scenarios. Results on MIMIR. We summarize the experimental results on MIMIR in Appendix <ref>. The performance of on the MIMIR benchmark demonstrates its competitive edge across various datasets and model sizes. In the 7-gram setting, consistently achieved top-tier results, often outperforming baseline methods. Notably, on several datasets, our method frequently secured the highest scores in both AUC and TPR metrics. In the 13-gram setting, maintained its strong performance, particularly with larger model sizes. While overall performance decreased compared to the 7-gram setting, still held leading positions across multiple datasets. It's worth noting that exhibited superior performance when dealing with larger models, indicating good scalability for more complex and larger language models. Although other methods occasionally showed slight advantages in certain datasets, 's overall robust performance underscores its potential as an effective method for detecting pre-training data in large language models. §.§ Ablation Study We focus on WikiMIA with the Pythia-6.9B model for ablation study. Ablation on γ. In , we introduce a hyperparameter γ, which controls the contrastive strength between member and non-member prefixes. The AUC performance across different γ values for the WikiMIA dataset is depicted in Figure <ref>. The red vertical lines mark the γ = 0 case, where reverts to the baseline ReCall method. The performance of fluctuates as γ varies, meaning that there exist an optimal value for γ for us to get the best performance. However, even without any fine-tuning on γ, our method still outperforms ReCall and other baselines. Ablation on the number of shots. The prefix is derived by concatenating a series of member or non-member strings, i.e., P = p_1⊕ p_2⊕⋯⊕ p_n, and we refer to the number of strings as shots following <cit.>'s convention. In this section, we evaluate the relationship between AUC performance and the number of shots. We vary the number of shots on the WikiMIA dataset using the Pythia-6.9B model, and summarize the results in Figure <ref>. The general trend shows that increasing the number of shots improves the AUC, as more shots provide more information. Both ReCall and exhibit this trend, but significantly enhances the AUC compared to ReCall and outperforms all baseline methods. § ANALYSIS To further evaluate the effectiveness and practicality of , we conducted additional analyses focusing on its robustness and adaptability in real-world scenarios. 
These investigations provide deeper insights into the method's performance under various challenging conditions. §.§ Robustness of As membership inference attacks gain prominence, it is crucial to evaluate the robustness of these methods against potential evasion techniques. In real-world scenarios, data may not always be presented in its original form due to various factors such as text preprocessing, natural language variations, or intentional obfuscation. Therefore, a robust membership inference method should maintain its effectiveness even when faced with altered versions of the target data. To assess the robustness of , we employ three text manipulation techniques. First, we use Random Deletion, where we randomly remove a certain percentage of words from the original text, using deletion rates of 10%, 15%, and 20% in our experiments. Second, we apply Synonym Substitution, replacing a portion of the words in the text with their synonyms. For this technique, we use substitution rates of 10%, 15%, and 20%, utilizing WordNet <cit.> for synonym selection. Lastly, we leverage the WikiMIA-paraphrased dataset <cit.>, which offers paraphrased versions of the original WikiMIA <cit.> texts. This dataset, created using ChatGPT[OpenAI. <https://chat.openai.com/chat>] to rephrase the original text while preserving its meaning, provides a standardized benchmark for evaluating robustness against paraphrasing. We evaluate the effectiveness of baselines and after transforming texts using the above techniques. Our experiments are conducted using Pythia-6.9B <cit.> and LLaMA-30B <cit.> models on the WikiMIA-32 <cit.> dataset. Table <ref> presents the AUC performance for each method under various text manipulation scenarios. The results demonstrate that consistently outperforms baseline methods across all text manipulation techniques, maintaining its superior performance even when faced with altered versions of the target data. Notably, shows particular resilience to synonym substitution and paraphrasing, where it experiences minimal performance degradation compared to other methods. This robustness underscores 's effectiveness in real-world scenarios where data may undergo various transformations. §.§ Approximation of Members In real-world scenarios, access to member data may be limited or even impossible. Therefore, it is crucial to develop methods that can approximate member data effectively. Our approach to approximating members is driven by two primary motivations. First, large language models (LLMs) are likely to retain information about significant events that occurred before their knowledge cutoff date. This retention suggests that LLMs have the potential to recall and replicate crucial aspects of such events when prompted. Second, when presented with incomplete information and tasked with its completion, LLMs can effectively leverage their internalized knowledge to generate contextually appropriate continuations. These two motivations underpin our method, where we first utilize an external LLM to enumerate major historical events. We then truncate these events and prompt the target LLM to complete them, hypothesizing that the generated content can serve as an effective approximation of the original data within the training set. To test this approach, we first employed GPT-4o <cit.> to generate descriptions of seven major events that occurred before 2020 (the knowledge cutoff date for the Pythia models). We then truncated these descriptions and prompted the target model to complete them. 
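A minimal sketch of this truncate-and-complete step is shown below. Greedy decoding is an assumption on our part; the 32-token completion length and the Pythia-6.9B target model follow the settings reported in the appendix.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
name = "EleutherAI/pythia-6.9b"                      # the target model in this experiment
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name).to(device).eval()

truncated_events = [
    "The world witnessed the fall of the Berlin Wall in",
    "The 9/11 attacks in",
    "The 2008 financial crisis led to",
]

@torch.no_grad()
def complete(prompt: str, n_new: int = 32) -> str:   # 32 matches the WikiMIA-32 target length
    ids = tok(prompt, return_tensors="pt").input_ids.to(device)
    out = model.generate(ids, max_new_tokens=n_new, do_sample=False,
                         pad_token_id=tok.eos_token_id)
    return tok.decode(out[0], skip_special_tokens=True)

pseudo_members = [complete(p) for p in truncated_events]
p_member = " ".join(pseudo_members)                  # used in place of a true member prefix
```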
This method allows us to simulate the generation of data resembling the original members without directly accessing the original training set. Details of the prompts and the corresponding responses can be found in Appendix <ref>. We evaluated this method using a fixed number of seven shots for consistency with our previous experiments. The results, summarized in Table <ref>, demonstrate that even without prior knowledge of actual member data, this approximation approach yields competitive results, outperforming several baseline methods. This finding suggests that when direct access to member data is not feasible, leveraging the model's own knowledge to generate member-like content can be an effective alternative. § CONCLUSION In this paper, we introduced , a novel contrastive decoding approach for detecting pre-training data in large language models. By leveraging both member and non-member contexts, CON-RECALL significantly enhances the distinction between member and non-member data. Through extensive experiments on multiple benchmarks, we demonstrated that CON-RECALL achieves substantial improvements over existing baselines, highlighting its effectiveness in detecting pre-training data. Moreover, CON-RECALL showed robustness against various text manipulation techniques, including random deletion, synonym substitution, and paraphrasing, maintaining superior performance and resilience to potential evasion strategies. These results underscore CON-RECALL's potential as a powerful tool for addressing privacy and security concerns in large language models, while also opening new avenues for future research in this critical area. §.§ Limitations The efficacy of is predicated on gray-box access to the language model, permitting its application to open-source models and those providing token probabilities. However, this prerequisite constrains its utility in black-box scenarios, such as API calls or online chat interfaces. Furthermore, the performance of is contingent upon the selection of member and non-member prefixes. The development of robust, automated strategies for optimal prefix selection remains an open research question. While our experiments demonstrate a degree of resilience against basic text manipulations, the method's robustness in the face of more sophisticated adversarial evasion techniques warrants further rigorous investigation. §.§ Ethical Considerations The primary objective in developing is to address privacy and security concerns by advancing detection techniques for pre-training data in large language models. However, it is imperative to acknowledge the potential for misuse by malicious actors who might exploit this technology to reveal sensitive information. Consequently, the deployment of necessitates meticulous consideration of ethical implications and the establishment of stringent safeguards. Future work should focus on developing guidelines for the responsible use of such techniques, balancing the benefits of enhanced model transparency with the imperative of protecting individual privacy and data security. § DATASETS STATISTICS § ADDITIONAL IMPLEMENTATION DETAILS All models are obtained from Huggingface[<https://huggingface.co/>] and deployed with 4 NVIDIA RTX 3090 GPUs. In our evaluation process, we carefully handled the data to ensure fair comparison across all methods. The specifics of our data handling varied between the WikiMIA and MIMIR datasets: For the WikiMIA dataset, we selected 7 samples each from the member and non-member sets to use as prefixes. 
The number of shots was fixed at 7 for all experiments on this dataset. For the MIMIR dataset, we removed 10 samples each from the member and non-member datasets to create our prefix pool. Unlike WikiMIA, we varied the number of shots from 1 to 10 and reported the best-performing configuration. For both datasets, the samples used for prefixes were removed from the evaluation set for all methods, including baselines, ensuring a fair comparison across different methods. § MEMBER APPROXIMATION DETAILS In this section, we detail our method for approximating member data when direct access to the original training set is not feasible. Our approach involves two steps: first, using GPT-4o <cit.> to generate descriptions of significant events, and then using these partially truncated descriptions to prompt our target model. We begin by providing GPT-4o with the following prompt: [title=Prompt for GPT-4o] Provide a concise overview of seven pivotal global events that occurred before 2020. Each event should be described in under 32 words and presented in a narrative format without explicit headings. GPT-4o generated the following response: [title=Response from GPT-4o] The world witnessed the fall of the Berlin Wall in 1989, symbolizing the end of the Cold War. The 9/11 attacks in 2001 reshaped global security dynamics. The 2008 financial crisis led to worldwide economic instability. In 1945, World War II concluded, altering geopolitical landscapes. The 1969 Moon landing marked humanity's leap into space. The 1979 Iranian Revolution transformed Middle Eastern politics. The 1918 Spanish flu pandemic highlighted vulnerabilities in global health. We then truncated these responses to create partial prompts: [title=Truncated Responses] * The world witnessed the fall of the Berlin Wall in * The 9/11 attacks in * The 2008 financial crisis led to * In 1945, World War II concluded, * The 1969 Moon landing marked * The 1979 Iranian Revolution transformed Middle Eastern * The 1918 Spanish flu pandemic highlighted These truncated texts were then used as prompts for our target model to complete, simulating the generation of member-like content. To ensure consistency with our experimental setup, we set the maximum number of new tokens (max_new_tokens) to match the length of the target text. For example, when working with WikiMIA-32, max_new_tokens was set to 32. § MIMIR RESULTS
http://arxiv.org/abs/2409.02915v1
20240904175244
Latent Watermarking of Audio Generative Models
[ "Robin San Roman", "Pierre Fernandez", "Antoine Deleforge", "Yossi Adi", "Romain Serizel" ]
cs.SD
[ "cs.SD", "eess.AS" ]
Latent Watermarking of Audio Generative Models Robin San Roman Meta, FAIR Inria Nancy Pierre Fernandez Meta, FAIR Inria Rennes Antoine Deleforge Inria Nancy Yossi Adi Meta, FAIR Hebrew University of Jerusalem Romain Serizel Inria Nancy Received XXXX; Accepted YYYY =========================================================================================================================================================================================================== § ABSTRACT The advancements in audio generative models have opened up new challenges in their responsible disclosure and the detection of their misuse. In response, we introduce a method to watermark latent generative models by a specific watermarking of their training data. The resulting watermarked models produce latent representations whose decoded outputs are detected with high confidence, regardless of the decoding method used. This approach enables the detection of the generated content without the need for a post-hoc watermarking step. It provides a more secure solution for open-sourced models and facilitates the identification of derivative works that fine-tune or use these models without adhering to their license terms. Our results indicate for instance that generated outputs are detected with an accuracy of more than 75% at a false positive rate of 10-3, even after fine-tuning the latent generative model. watermarking, audio, generative § INTRODUCTION Sophisticated generative models are impacting various audio modalities: environmental sounds <cit.>, music <cit.>, and speech <cit.>. These models produce outputs increasingly indistinguishable from real data <cit.>. Their rapid proliferation and quality raise concerns about misuse (e.g. creation of deepfakes) and respect for intellectual property. These concerns are heightened when models are open-sourced, since they can be easily accessed and used by anyone, including malicious actors. Consequently, regulators suggest watermarking to label and detect generative model outputs (refer to the https://artificialintelligenceact.eu/EU AI Act, https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/White House executive order, and http://www.cac.gov.cn/2023-07/13/c_1690898327029107.htmCAC measures). Watermarking is a technique that slightly alters the audio after its generation, in a way that is inaudible for humans but identifiable by specific detection algorithms. The state-of-the-art methods are based on deep neural networks <cit.> that are trained end-to-end to embed and detect watermarks in audio signals, even after audio compression or editing. Such methods are for instance employed to safeguard APIs for public model demonstrations <cit.>. However, while post-hoc watermarking has proven effective in certain scenarios, it is not as effective for protecting open-sourced models, as malicious users could potentially extract the output before the watermarking stage (for example by commenting out the code responsible for watermark embedding). In the image domain, some methods <cit.> fine-tune decoders to output watermarked images directly, to make it compliant with open-sourcing. However, in the audio domain, it is common and cost-effective to train decoders (also called vocoders) that convert latent representations to waveforms <cit.>. Watermarking can thus be easily bypassed by using non-watermarked vocoders. 
Therefore, in this article, we propose to watermark the latent generative model that creates the latent representations. We focus on MusicGen <cit.> due to its performance and adoption. It consists of an auto-encoder EnCodec <cit.> that compresses audio into discrete representations (tokens) and a single-stage transformer (audio Language Model, LM) that predicts the next tokens and decodes them into a music stream. We train watermark generator/detector models to be robust to EnCodec. Intuitively, this makes both the audio and encoded tokens watermarked. We then train the LM on tokens derived from audios that were preemptively marked. The resulting LM produces tokens whose decoded outputs are watermarked, irrespective of the LM conditioning or decoding algorithm. In other terms, as long as the watermarking algorithm withstands the audio tokenization, the watermark transfers from the training data to the generative model outputs. In short, (1) we introduce a way to watermark audio generative models at the latent representations level, (2) we demonstrate that it makes generations detectable with high confidence while having almost no influence on the model performance, (3) we demonstrate the robustness of the watermark to model-level changes, namely, switching the decoding algorithm and fine-tuning the audio LM. § RELATED WORK Audio generation is a challenging task because audio signals are high-dimensional and have complex temporal dependencies. Early autoregressive deep-learning-based approaches like WaveNet <cit.> were quickly followed by GAN-based models <cit.>. Inspired by progress in text generation <cit.>, audio language models (LM) have recently emerged as state of the art for most audio generative tasks such as text-to-speech <cit.>, music <cit.> or sound <cit.> generation. They make audio modeling more tractable by compressing audio into discrete tokens using models like EnCodec <cit.>, SoundStream <cit.>, or DAC <cit.>. Additional tokens coming from text, melody, phoneme, speaker embedding, etc. may serve as conditioning to generate audio with user-specific characteristics. Then a transformer-based model <cit.> generates audio by predicting the next tokens and decoding them. In parallel to audio LMs, latent diffusion models have also been largely studied in recent works on audio generation. Those models can sample in an non autoregressive way from the training data distribution and have recently shown great generative habilities on different audio modalities such as speech <cit.>, music <cit.> or general audio <cit.>. Invisible audio watermarking has evolved from using domain-specific features in the time/frequency domain of audios <cit.> to deep learning methods that employ encoder/decoder architectures <cit.>. Notably, AudioSeal <cit.> introduces localized audio watermarking with a detector producing time-step-level logits. This method also allows for a watermarking robust to neural compression models which is a necessary element for our work. Generative model watermarking is attracting renewed interest thanks to its potential to improve detection of AI-generated contents. In this context, the aforementioned methods apply watermarking post-hoc after audio generation, unlike more recent methods which do so in-model. Examples include watermarking: image GANs by training a hyper-network model <cit.>, latent diffusion models by a quick fine-tuning of their decoder <cit.>, and HiFi-GAN decoders that take mel-spectrograms and output waveforms <cit.>. 
Unlike the last two approaches, our method operates one step earlier at the latent representation level. It draws inspiration from research demonstrating that watermarks embedded in images or texts may propagate from the training data of generative models to their outputs <cit.>. We apply this concept to audio generative models and target audio language models. § AUDIO MODEL WATERMARKING §.§ Problem statement We consider providers training an audio-generative model on a large proprietary dataset. They aim to release the model publicly but worry about misuse and unauthorized redistribution. To mitigate these concerns, they watermark the model during training to enhance the detection of generated content or unauthorized API usage. We describe this watermarking process below and provide a step by step overview in Figure <ref>. §.§ Audio watermarking We first build an audio watermarking model based on AudioSeal <cit.>. It jointly trains a watermark generator G and a watermark detector D. G takes a signal s ∈ℝ^T and generates an additive watermark δ_w, that is made imperceptible through perceptual losses ℒ_percep(s, s+δ_w). The watermarked audio s+δ_w is augmented into s'. Augmentations include padding the audio with 0, replacing intervals of watermarked audio with non-watermarked audio from the same batch, or dropping the δ_w. s' is fed to the detector, which is trained to output which part of a waveform is watermarked via a localization loss ℒ_loc(D(s'), y'), where y' ∈{0,1}^T indicates the watermarked intervals in s' (1 for watermarked, 0 otherwise). We make the following changes with regard to AudioSeal's recipe. We remove the message encoder to only focus on watermark detection. Furthermore, it is important to remember that the audio LM will be trained on tokens, not directly on audio. Hence, the LM will not retain watermark information if it is absent at the discrete representation level. Therefore, we train the watermark generation/detection to be very robust to the specific EnCodec compression model later used for audio tokenization (see Sec. <ref>). This is done by oversampling this EnCodec augmentation so that 50% of batches go through EnCodec before the detection phase. §.§ Audio language model We select MusicGen <cit.> as the audio LM to watermark. Watermarking. The first step is to watermark the audios with the model of Sec. <ref>. This is done on the fly at loading time (this takes around 10 ms for a 10-second audio). Tokenization. We use the EnCodec compression model to transform the audio signal into a discrete sequence of tokens suitable for language modeling. It uses residual vector quantization (RVQ) <cit.> which compresses an audio signal s ∈ℝ^T into K streams of tokens (u^(j)_i ) _j ∈ [1,K] ; i ∈ [0, T / f_r] (f_r being the frame rate). This model is trained on audio segments sampled at a rate of 32 kHz and f_r = 50 Hz, the number of codebooks is K=4 and the codebook size is 2048 (u^(j)_i ∈ [1,2048]). Overall, this results in a overall bit rate of 2.2 kbit per second Language modeling aims to build a probabilistic model of sequences of discrete tokens. MusicGen implements a delay pattern <cit.> that adds a delay k to the k-th residual. It allows the model to generate the tokens in a coarse to fine order. This way all the streams can be sampled in parallel while assuring that all the previous residuals are fixed when sampling a given token. 
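The paragraph below spells out the resulting transformer input; as a rough illustration of the shifting itself, the following sketch applies and undoes a delay pattern on a dummy K × T token grid. The exact per-codebook offsets and the padding placeholder are assumptions for illustration and not necessarily MusicGen's implementation.

```python
import numpy as np

K, T, PAD = 4, 8, -1                       # codebooks, frames, padding placeholder

def apply_delay(tokens, pad=PAD):
    """tokens: (K, T) grid of codebook indices -> delayed (K, T + K - 1) grid."""
    K_, T_ = tokens.shape
    out = np.full((K_, T_ + K_ - 1), pad, dtype=tokens.dtype)
    for k in range(K_):
        out[k, k:k + T_] = tokens[k]       # stream k starts k steps later
    return out

def undo_delay(delayed):
    K_, total = delayed.shape
    T_ = total - K_ + 1
    return np.stack([delayed[k, k:k + T_] for k in range(K_)])

tokens = np.arange(K * T).reshape(K, T)    # dummy token grid
assert np.array_equal(undo_delay(apply_delay(tokens)), tokens)
print(apply_delay(tokens))
```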
Put differently, the transformer is fed with a sequence of embeddings created from K tokens: s_i = {u^(4)_i-4, u^(3)_i-2, u^(2)_i-1, u^(1)_i}. The embedding of s_i is then the sum of the embedding of each of its constitutive tokens, with additional sinusoidal embeddings. As most current language models <cit.>, training is done with next token prediction and the cross-entropy is computed per codebook. § EXPERIMENTS §.§ Experimental details Watermarking models. We train on an internal music dataset containing 1.5k songs at 32 kHz sample rate, with 1-second audio excerpts. The model is trained for 400k steps with batch size 64. Hyper-parameters (network architectures, optimizer...) are kept the same as in the original work <cit.>. MusicGen models. We use 20k hours of licensed music to train the models with two different model sizes: small (300M parameters) and medium (1.5B parameters). They are similar in quality and diversity to the ones used by Copet et al. <cit.>. We use public implementation and default parameters available on the https://github.com/facebookresearch/audiocraftAudioCraft GitHub page. Each model is trained for 200 epochs with a batch size of 128, with the default optimizer. We use 64 GPUs for the medium model and 32 for the small. Inference. For music generation sampling, we use top-k sampling with k=250 tokens and a temperature of 1.0. §.§ Quality of the audio generative model We first subjectively evaluate how watermarking influences the quality of the generative models. We adhere to the original paper's protocol <cit.>, using (OVL) to assess sample quality and (REL) to evaluate relevance to text prompts. Models are tested on 15-second generation using 40 text prompts from the test set of MusicCaps <cit.>. Every sample is rated by 20 listeners that rate them on a scale from 1 to 100. For every study, we report both mean score and CI95. <ref> shows that the performance difference between a model trained on watermarked data and one trained on normal data is negligible. This holds true for both sizes, with the rating difference falling within the confidence interval. §.§ Detection and localization results Detection. To evaluate the detection performance, we generate 10k positive 15-second samples with the watermarked model and use 10k negative samples from our test set that we compress using the codec model. The watermark detector gives a score per time-step of the audio, which we average to get a score for the whole audio. Audio is flagged as watermarked if this score is higher than a threshold τ. We report in Tab. <ref> the area under the ROC curve, as well as the accuracy for the best τ (and true positive and false positive rates at this τ, TPR, and FPR). The generated output is indeed watermarked as indicated by the detection metrics: the AUC is close to 1 and TPR is higher than 0.95 at FPR around 10^-4. Localization. We then evaluate if the detector still has the property to locally detect watermarked segments. To do so, we generate 15-second samples and replace parts with other non-watermarked audio from our test set. The proportion of signal that is watermarked is 50% on average. We then measure the precision of the detection using the detection accuracy at the sample level together with the intersection over union (IoU) metric. For localization, we use a fixed detection threshold set at 0.5. As shown in Tab. 
<ref>, results are on-par (although a bit lower) to AudioSeal, showing that the detector keeps a good-enough performance on the localization of generated outputs. Robustness. We evaluate the robustness to different audio edits, and compare the performance of our in-model watermarking and of post-hoc watermarking that directly applies the watermark to generated outputs with AudioSeal. The evaluation is made on 10k samples. <ref> shows that while in-model watermarking keeps a decent robustness to common audio edits there is a slight performance decrease compared to the original watermarking model. Therefore, when post-hoc watermarking is feasible, it might be preferable to in-model watermarking. The latter is better suited for scenarios where post-hoc watermarking is not possible, such as when open-sourcing a model. § ATTACKS ON THE MODEL'S WATERMARK We now focus on model-level attacks, i.e., modifications of the model attempting to make its outputs undetectable. §.§ Switching decoder Previous works alter the latent decoder to embed the watermark <cit.>. However, audio vocoders are relatively easy to train and interchange <cit.>. They do not necessitate extensive data or computational power compared to those required for training an audio LM. Therefore, replacing the decoder to use the watermark-free generative model is rather straightforward. In contrast, our work embeds the watermark at the latent stage for robustness against decoder changes. We now evaluate how the change of the decoder influences detection performance. In previous experiments, the default decoder was the codec model from MusicGen. We replace it with Multi-Band Diffusion <cit.> which uses diffusion to map discrete EnCodec tokens into the waveform domain, and a discrete version HiFi-GAN <cit.>, which we trained on tokens-waveform pairs. We use the same experimental setup as in Sec. <ref>, but with different algorithms to decode the tokens. <ref> shows that changing the decoder has very little impact on the detection metrics. Notably, using the diffusion-based decoder reduces the AUC only by around 0.01. §.§ Model fine-tuning One potential attack could be to remove the watermark through “model purification”, which involves fine-tuning the language model on a non-watermarked dataset. To test this we fine-tune with different learning rates the small version of the model using a different internal music dataset D_FT of similar size without watermarks. For each setting, the model is trained for 20 epochs (10% of the total pre-training steps). We then generate 10k samples with the purified models and obtain scores through the watermark detector. For each experiment, we report the accuracy obtained for the best threshold on the detection score as well as the TPR when the threshold is chosen such that the FPR is at 10^-3. We also report the Fréchet Audio Distance (FAD) <cit.> that evaluates the quality of the generative model. We include as a reference the performance of the model before fine-tuning and of a model trained from scratch on D_FT. <ref> suggests that a higher learning rate during fine-tuning makes watermarks more difficult to detect, but it also causes the distribution of generated data to deviate further from the protected model's dataset; at larger learning rates, the distribution has a similar FAD to a model trained from scratch on different data. In other words, since the FAD is almost the same as a model trained from scratch, it may not be worthwhile to start from the watermarked model. 
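For clarity, the aggregation used in the detection and localization evaluations above can be sketched as follows: per-time-step detector outputs are averaged into a whole-audio score for detection, and thresholded at 0.5 for localization, where the IoU is computed against the ground-truth mask. The toy signal and variable names are ours.

```python
import numpy as np

def sample_score(frame_probs):
    """Average the per-time-step detector outputs into one whole-audio score."""
    return float(np.mean(frame_probs))

def localization_iou(frame_probs, truth_mask, threshold=0.5):
    pred = np.asarray(frame_probs) >= threshold
    truth = np.asarray(truth_mask, dtype=bool)
    union = np.logical_or(pred, truth).sum()
    return float(np.logical_and(pred, truth).sum() / union) if union else 1.0

# toy check: the second half of the toy signal is watermarked
rng = np.random.default_rng(0)
truth = np.r_[np.zeros(8000), np.ones(8000)]
probs = np.clip(truth + rng.normal(0.0, 0.1, truth.size), 0.0, 1.0)
print("score:", sample_score(probs), " IoU:", localization_iou(probs, truth))
```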
§ CONCLUSION & DISCUSSION This work introduced a straightforward yet effective approach to watermarking audio language models. This is done via watermarking their training data in a way that is robust to the compression algorithm used to create tokens. It does not require modifications to the model architecture or the training process. Our method is the first to watermark at the latent level and is robust to changes in the decoding process. The main drawback of the current approach is that it requires training the model from scratch, which may be difficult for versioning large models or for adapting already-trained models. While there is a slight decrease in robustness to audio edits compared to post-hoc watermarking, this method allows to keep the watermark in situations for which post-hoc watermarking is not suitable (open sourcing...). In conclusion, watermarking can help trace content origin and support regulatory efforts. It is not a standalone solution and should be complemented with measures like policies, education, or monitoring. IEEEbib
http://arxiv.org/abs/2409.03486v1
20240905125124
Integer Factorization via Continued Fractions and Quadratic Forms
[ "Nadir Murru", "Giulia Salvatori" ]
math.NT
[ "math.NT", "11A51, 11Y05" ]
§ ABSTRACT We propose a novel factorization algorithm that leverages the theory underlying the SQUFOF method, including reduced quadratic forms, infrastructural distance, and Gauss composition. We also present an analysis of our method, which has a computational complexity of O ( exp( 3/√(8)√(ln N lnln N)) ), making it more efficient than the classical SQUFOF and CFRAC algorithms. Additionally, our algorithm is polynomial-time, provided knowledge of a (not too large) multiple of the regulator of ℚ(N). Quantum features of the transport through ion channels in the soft knock-on model @ September 9, 2024 ================================================================================= § INTRODUCTION The integer factorization problem is a fascinating challenge in number theory, with many important theoretical aspects and practical applications (e.g., in cryptography, where the most important public key cryptosystems are based on the hardness of solving this problem for large composite numbers). Indeed, currently, there does not exist a polynomial algorithm for factorizing integers and thus the research in this field is fundamental and active. To date, the most efficient algorithm known for factoring integers larger than 10^100 is the General Number Field Sieve designed by Pomerance <cit.>, with a heuristic running time of exp ( (√(64/9) + o(1) )(ln N)^1/3(lnln N)^2/3 ). However, for smaller numbers other algorithms perform better, such as the Quadratic Sieve and SQUare FOrm Factorization (SQUFOF). SQUFOF algorithm (the best method for numbers between 10^10 and 10^18) was proposed by Shanks in <cit.> and it is based on the properties of square forms and continued fractions. The algorithm is discussed, for example, in <cit.> and <cit.>. A rigorous and complete description of the method and its complexity is provided in the well-regarded paper by Gower and Wagstaff <cit.>, where the details are meticulously presented and the algorithm is examined in depth. Recently, the SQUFOF algorithm has been revisited by Elia <cit.> who proposed an improvement whose complexity is based on the computation of the regulator of a quadratic field. Another improvement, which exploits a sieve inspired by the Quadratic Sieve, can be found in <cit.>. The other main algorithm, which exploits the theory of continued fractions, is CFRAC <cit.> which was implemented and used for factorizing large numbers (such as the 7th Fermat number) by Morrison and Brillhart <cit.>. In this paper, we focus on the underlying theory of SQUFOF and, starting from the work by Elia <cit.>, we improve it, obtaining a novel factorization algorithm whose complexity for factoring the integer N is O ( exp (3/√(8)√(ln N lnln N) ) ), making it more efficient than the classical CFRAC and SQUFOF algorithms. The time complexity is similar to that obtained in <cit.>, although their method uses a different approach. The paper is structured as follows. Section <ref> is devoted to introducing the notation and developing the auxiliary results utilized in the design of the factorization algorithm. Specifically, in Section <ref>, we deal with the theory of continued fractions, focusing on the expansion of square roots and the properties of particular sequences arising from these expansions. Section <ref> discusses when a nontrivial factor of N can be found in the case that the period of the continued fraction expansion of √(N) is even. 
Finally, in Section <ref>, we introduce the tools regarding quadratic forms, including the notion of distance between forms, the reduction operator, and the Gauss composition, focusing on the properties of particular sequences of quadratic forms used in the algorithm. In Section <ref>, we present and discuss all the details of our new algorithm and analyze the time complexity, highlighting also the fundamental role played by the computation of the regulator of ℚ(√(N)). § PRELIMINARIES AND AUXILIARY RESULTS §.§ Continued fractions It is well-known that the continued fraction expansion of quadratic irrationals is periodic and in this case the Lagrange algorithm can be used for obtaining such expansion. Let us consider, without loss of generality, quadratic irrationals α_0 = P_0 + √(N)Q_0, with P_0, Q_0, N ∈ℤ, N > 0 not square, Q_0 ≠0 and Q_0 | N - P_0^2. The continued fractions expansion [a_0, a_1, …] of α_0 can be obtained computing a_m = ⌊α_m ⌋ P_m+1 = a_m Q_m - P_m Q_m+1 = (N-P_m+1^2)/Q_m, where α_m = P_m + √(N)/Q_m, m ≥ 0. We recall that the continued fraction expansion of √(N) is periodic and has the following particular form √(N) = [a_0, a_1, a_2, a_3, …, a_τ -1, 2a_0], where the sequence (a_1, …, a_τ-1) is a palindrome. Kraitchik <cit.> showed that the period τ of the continued fraction expansion of √(N) is upper bounded by 0.72 √(N)ln N, for N > 7. However, the period length has irregular behavior as a function of N: it may assume any value from 1, when N = M^2 + 1, to values greater than √(N)lnln N (see <cit.> and <cit.>, respectively). From now on, we always consider the continued fraction expansion of √(N) as in (<ref>) (i.e., quadratic irrationals with Q_0 = 1 and P_0 = 0). Let {p_n}_n ≥ -1 and {q_n}_n ≥ -1 be the sequences of numerators and denominators of convergents of √(N), defined by p_-1 = 1, p_0 = a_0, q_-1 = 0, q_0 = 1 and p_m = a_m p_m-1 + p_m-2, q_m = a_m q_m-1 + q_m-2, ∀ m ≥ 1. We also recall the following two properties [ a_0 1 1 0 ]⋯[ a_m 1 1 0 ] = [ p_m p_m-1 q_m q_m-1 ] ∀ m ≥ 0, and p_τ -2 = -a_0 p_τ -1 + Nq_τ -1 q_τ -2 = p_τ -1 -a_0 q_τ -1, see, e.g., <cit.>. Next, we examine the sequence {𝔠_n }_n ≥ -1, defined by 𝔠_m := p_m + q_m √(N). The result in the next proposition can also be found in <cit.>. Here, we provide a slightly different proof for completeness. The sequence {𝔠_n }_n ≥ -1 satisfies the relation 𝔠_m + k τ = 𝔠_m𝔠_τ -1^k for all k ∈ℕ and m≥-1. The claimed equality is trivial for k = 0. First, we prove by induction on m the equality for k=1, and then we generalize for k>1. The case m=-1 is trivial, since 𝔠_-1 = 1. Now, we proceed by induction. Using the inductive hypothesis, we consider the following chain of equalities 𝔠_τ +m+1 = a_m+1𝔠_τ +m + 𝔠_τ +m-1 = a_m+1𝔠_m𝔠_τ -1 + 𝔠_m-𝔠_τ -1 = (a_m+1𝔠_m + 𝔠_m-1 )𝔠_τ -1 = 𝔠_m+1𝔠_τ -1 which concludes the proof in the case k=1. For the case k>1 we iterate as follows 𝔠_m + kτ = 𝔠_m + (k-1)τ𝔠_τ - 1 = ⋯ = 𝔠_m𝔠_τ - 1^k. We recall that the minimal positive solution of the Pell Equation X^2 - N Y^2 = 1 is (p_τ-1, q_τ -1) if τ is even, and (p_2τ-1, q_2τ -1) if τ is odd. We denote by R^+(N) the logarithm of the minimal positive solution of the Pell equation, by R(N) the regulator of ℚ(√(N)), and by 𝒩 the field norm of ℚ(√(N)). Moreover, given x + y √(N)∈ℚ(√(N)), we denote by x + y √(N) the element x - y √(N). We have (-1)^m P_m+1 = p_mp_m-1 - Nq_mq_m-1 ∀ m ≥ 0 and (-1)^m+1 Q_m+1 =p_m^2 -Nq_m^2= 𝒩(𝔠_m) ∀ m ≥ -1. The proof is straightforward by induction. 
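As a purely numerical illustration of the recurrences above (not part of the factorization algorithm of Section 3), the following Python sketch expands √(N) via a_m = ⌊α_m⌋, P_m+1 = a_m Q_m - P_m, Q_m+1 = (N - P_m+1^2)/Q_m, accumulates the convergents p_m, q_m, and checks the identity (-1)^m+1 Q_m+1 = p_m^2 - N q_m^2 of the last lemma; all function and variable names are ours.

from math import isqrt

def cf_sqrt(N, nsteps):
    # Expansion of sqrt(N), N > 0 non-square: alpha_m = (P_m + sqrt(N)) / Q_m,
    # starting from P_0 = 0, Q_0 = 1, a_0 = floor(sqrt(N)).
    a0 = isqrt(N)
    P, Q, a = 0, 1, a0
    A, Ps, Qs = [a0], [0], [1]
    p_prev, p = 1, a0                  # p_{-1}, p_0
    q_prev, q = 0, 1                   # q_{-1}, q_0
    ps, qs = [p], [q]
    for m in range(nsteps):
        P = a * Q - P                  # P_{m+1} = a_m Q_m - P_m
        Q = (N - P * P) // Q           # Q_{m+1} = (N - P_{m+1}^2)/Q_m (exact division, Q_{m+1} > 0)
        assert p * p - N * q * q == (-1) ** (m + 1) * Q   # (-1)^{m+1} Q_{m+1} = p_m^2 - N q_m^2
        a = (P + a0) // Q              # a_{m+1} = floor((P_{m+1} + sqrt(N)) / Q_{m+1})
        p, p_prev = a * p + p_prev, p  # p_{m+1} = a_{m+1} p_m + p_{m-1}
        q, q_prev = a * q + q_prev, q
        A.append(a); Ps.append(P); Qs.append(Q); ps.append(p); qs.append(q)
    return A, Ps, Qs, ps, qs

For instance, cf_sqrt(13, 10) returns the partial quotients 3, 1, 1, 1, 1, 6, 1, ... of √(13), whose period is τ = 5.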
Since we will exploit these sequences to factor the integer N, it is computationally important to bound their elements. We have 0< Q_m < 2/a_m√(N) , 0 ≤ P_m < √(N) ∀ m ≥ 0. See <cit.>. The following lemma proves two equalities useful in Theorem <ref>. It holds that p_τ -m-2 = (-1)^m-1p_τ -1p_m + (-1)^mNq_τ -1q_m q_τ -m-2 = (-1)^mp_τ -1q_m + (-1)^m-1q_τ -1p_m ∀ -1 ≤ m ≤τ -1. Using Equation (<ref>) and the fact that (a_1, …, a_τ -1) is palindrome, we obtain, for all 0 ≤ m ≤τ - 2, [ p_τ -1 p_τ -2 q_τ -1 q_τ -2 ] = [ p_τ-m-2 p_τ-m-3 q_τ-m-2 q_τ-m-3 ][ a_τ -m-1 1 1 0 ]⋯[ a_τ -1 1 1 0 ] = [ p_τ-m-2 p_τ-m-3 q_τ-m-2 q_τ-m-3 ][ a_m+1 1 1 0 ]⋯[ a_1 1 1 0 ] = [ p_τ-m-2 p_τ-m-3 q_τ-m-2 q_τ-m-3 ] [ [ a_0 1 1 0 ]^-1[ p_m+1 p_m q_m+1 q_m ] ]^T = [ p_τ-m-2 p_τ-m-3 q_τ-m-2 q_τ-m-3 ][ p_m+1 q_m+1 p_m q_m ][ 0 1 1 -a_0 ]. Multiplying by the inverse of the matrices [ p_m+1 q_m+1 p_m q_m ] and [ 0 1 1 -a_0 ], and using Equation (<ref>), we obtain [ p_τ-m-2 p_τ-m-3 q_τ-m-2 q_τ-m-3 ] = (-1)^m[ p_τ -1 p_τ -2 q_τ -1 q_τ -2 ][ a_0 1 1 0 ][ q_m -q_m+1 -p_m p_m+1 ] =(-1)^m [ Nq_τ-1q_m - p_mp_τ-1 -Nq_τ-1q_m+1 p_τ-1q_m - p_mq_τ-1 -p_τ-1q_m+1+q_τ-1p_m+1 ]. The cases m=-1 and m=τ -1 are straightforward to verify. The transformation defined by (<ref>) is identified by the matrix M_τ -1 = [ -p_τ -1 Nq_τ -1; -q_τ -1 p_τ -1 ] . The results that follow in this subsection are those found by Elia in <cit.>, but they are further extended, approaching also the case of odd periods, and we include more detailed proofs. The sequences {Q_n}_n ≥ 0 and {P_n}_n ≥ 1 are periodic of period τ, where τ is the period of the sequence of partial quotients { a_n}_n ≥ 0 of the continued fraction expansion of √(N). Further, within a period, there exist interesting symmetries. The sequence {Q_n}_n ≥ 0 is periodic with period τ. The elements of the first block { Q_n }_n=0^τ satisfy the symmetry relation Q_m = Q_τ -m, ∀ 0 ≤ m ≤τ. Using Equation (<ref>) and the fact that 𝒩(p_τ -1 + q_τ -1√(N)) = (-1)^τ, the following chain of equalities holds Q_m+τ = |𝒩(𝔠_m-1+ τ) | = | 𝒩(𝔠_m-1𝔠_τ -1) | = | 𝒩(𝔠_m-1) 𝒩( 𝔠_τ -1) | = Q_m, from which we deduce that the period of {Q_n}_n ≥ 0 is τ. The symmetry of the sequence {Q_n}_n ≥ 0 within the τ elements of the first period follows from Equation (<ref>). We have p_τ -m-2^2 -Nq_τ -m-2^2 = (p_τ -1p_m -Nq_τ -1q_m)^2 -N(-p_τ -1q_m + q_τ -1p_m)^2 = (p_m^2 - Nq_m^2)(p_τ -1^2 - Nq_τ -1^2) = (-1)^τ(p_m^2 - Nq_m^2), implying that (-1)^τ -m-1Q_τ -m-1 = (-1)^τ(-1)^m+1 Q_m+1 for all -1 ≤ m ≤τ -1. The sequence {P_n}_n ≥ 1 is periodic with period τ. The elements of the first block { P_m }_m=1^τ -1 satisfy the symmetry relation P_τ -m-1 = P_m, ∀ 1 ≤ m ≤τ -2. The periodicity of the sequence {P_n}_n ≥ 1 follows from the property expressed by Equation (<ref>) and Equation (<ref>), noting that (-1)^m P_m+1 = 1/2(𝔠_m𝔠_m-1 + 𝔠_m𝔠_m-1) =1/2 (𝔠_m+τ/𝔠_τ - 1𝔠_m-1+τ/𝔠_τ - 1 + 𝔠_m + τ/𝔠_τ - 1𝔠_m-1+τ/𝔠_τ - 1 ) = (-1)^τ (-1)^m+τ P_m+1+τ. The next chain of equalities proves the symmetry property (-1)^τ -m-1P_τ -m = p_τ -m-1p_(τ -1)-m-1 - Nq_τ-1-mq_(τ -1) -m-1 = -(p_τ -1p_m - Nq_τ -1q_m)(p_τ -1p_m-1 - Nq_τ -1q_m-1) + N(p_τ -1q_m - q_τ -1p_m)(p_τ -1q_m-1 - p_m -1q_τ-1) = -(p_τ -1^2 -Nq_τ -1^2)(p_mp_m-1 -Nq_mq_m-1) = -(-1)^τ(p_mp_m-1 -Nq_mq_m-1) = (-1)^τ +1 (-1)^m P_m+1, where, in the second-to-last equality, we used Equation (<ref>). 
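These periodicity and symmetry properties are easy to verify numerically with the cf_sqrt helper sketched above (again, only an illustrative check with our own naming). The period τ is read off as the first index m ≥ 1 with Q_m = 1, and we test Q_m = Q_τ-m together with the relation P_τ-m = P_m+1 that the displayed chain of equalities in the last proof yields.

def check_symmetries(N, max_steps=10000):
    # max_steps must exceed the period tau of sqrt(N)
    A, Ps, Qs, ps, qs = cf_sqrt(N, max_steps)
    tau = next(m for m in range(1, len(Qs)) if Qs[m] == 1)    # Q_m = 1 exactly at multiples of tau
    assert all(Qs[m] == Qs[tau - m] for m in range(tau + 1))  # Q_m = Q_{tau - m}
    assert all(Ps[tau - m] == Ps[m + 1] for m in range(tau))  # P_{tau - m} = P_{m + 1}
    return tau

For example, check_symmetries(13) returns the odd period τ = 5, while check_symmetries(14) returns the even period τ = 4, corresponding to the expansions [3; 1,1,1,1,6] and [3; 1,2,1,6].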
Note that M_τ -1^2 = (-1)^τI_2, with I_2 the identity matrix, and, if τ is even, the eigenvalues of M_τ -1 are λ_0 =1 and λ_1 =-1, with eigenvectors 𝐗^(h) = [ p_τ -1 + λ_h/d , q_τ -1/d ]^T for λ_h = (-1)^h, where d = (p_τ -1 + λ_h, q_τ -1) for h ∈{ 0,1 }. If the period τ of the continued fraction expansion of √(N) is even, a factor of 2N is located at positions τ/2 + jτ with j = 0, 1, …, in the sequence {Q_n}_n ≥ 0. It is sufficient to consider j = 0, due to the periodicity of {Q_n}_n ≥ 0. Since τ is even, M_τ -1 is involutory and has eigenvalues λ_0 =1 and λ_1 =-1 with corresponding eigenvectors shown in (<ref>). Considering Equation (<ref>) written as [ p_τ - j-2; q_τ - j-2 ] = (-1)^j-1M_τ -1[ p_j; q_j ], we see that 𝐘^(j) = [ p_j, q_j ]^T is an eigenvector of M_τ -1, of eigenvalue (-1)^j-1, if and only if j satisfies the condition τ -j-2 = j, that is j=τ -2/2= τ_0. From the comparison of 𝐘^(j) and 𝐗^(h), we have p_τ_0 = p_τ -1 + (-1)^τ_0/d q_τ_0 = q_τ -1/d, where the equalities are fully motivated because gcd ( p_τ_0, q_τ_0 )=1, recalling that d = (p_τ -1 + (-1)^τ_0,q_τ -1). Direct computation yield (-1)^τ_0+1 Q_τ_0+1 = (p_τ -1 + (-1)^τ_0 -1)^2 - Nq_τ -1^2/d^2 = 2(-1)^τ_0p_τ -1 +1/d^2, which can be written as p_τ_0^2 - Nq_τ_0^2 = 2(-1)^τ_0p_τ_0/d. Dividing this equality by 2p_τ_0/d we have d p_τ_0/2- N1/2p_τ_0/dq_τ_0^2 = (-1)^τ_0. Noting that gcd ( p_τ_0, q_τ_0 )=1, it follows that 2p_τ_0/d is a divisor of 2N, i.e. Q_τ_0+1 = Q_τ/2| 2N. In the case where τ is odd, we can state the following two results. Let N be a positive integer such that the continued fraction expansion of √(N) has odd period τ. The representation of N = x^2 + y^2 is given by x = Q_(τ +1)/2 and y = P_(τ +1)/2. Since τ is odd, by the anti-symmetry in the sequence { Q_n}_n=0^τ -1, we have Q_(τ +1)/2 = Q_(τ -1)/2, so that the quadratic form Q_(τ -1)/2 X^2 + 2P_(τ +1)/2XY - Q_(τ +1)/2 Y^2 has discriminant 4P_(τ +1)/2^2 - 4Q_(τ +1)/2 Q_(τ -1)/2 = 4N, which shows the assertion. From this, we can deduce a result similar to that in Theorem <ref> for the case of an odd period. Let N>0 be a composite non-square integer such that the continued fraction expansion of √(N) has odd period τ. If -1 is a quadratic nonresidue modulo N, then Q_(τ +1)/2 contains a nontrivial factor of N. Using the previous theorem, N = Q_(τ +1)/2^2 + P_(τ +1)/2^2, and so P_(τ +1)/2^2 ≡ - Q_(τ +1)/2^2 N. If (N, Q_(τ +1)/2) = 1, then Q_(τ +1)/2^-1N exists. Therefore, (Q^-1_(τ +1)/2 P_(τ +1)/2 )^2 ≡ -1 N, and so -1 is a quadratic residue modulo N. The following identity holds √(N) + P_m+1/Q_m+1 = - p_m-1 - q_m-1√(N)/p_m - q_m√(N) ∀ m ≥ 0. The proof is straightforward. The following result will be useful in the next section. If τ is even, defining γ as γ = ∏_m=0^τ -1 (√(N) + P_m+1), we have γ/γ =(p_τ -1 + q_τ -1√(N))^2 = 𝔠_τ -1^2. If τ is odd, defining ω as ω = ∏_m=0^2τ -1 (√(N) + P_m+1), we have ω/ω =(p_2τ -1 + q_2τ -1√(N))^2 = 𝔠_2τ -1^2. We provide a proof for the case τ even; the proof for the odd case follows the same procedure. We have γ/γ = ∏_m=0^τ -1√(N) + P_m+1/-√(N) + P_m+1 = ∏_m=0^τ -1(√(N) + P_m+1)^2/ P_m+1^2 -N = ∏_m=0^τ -1(√(N) + P_m+1)^2/-Q_m+1 Q_m. Noting that ∏_m=0^τ -1 - Q_m Q_m+1 = ∏_m=0^τ -1 - Q_m+1^2 = ∏_m=0^τ -1 Q_m+1^2 due to the periodicity of the sequence {Q_m}_m ≥ 0 and the parity of τ, we deduce that γ/γ is a perfect square. 
Considering the identity of the Lemma <ref>, we have that the base of the square giving γ/γ is ∏_m=0^τ -1√(N) + P_m+1/Q_m+1 = ∏_m=0^τ -1 - p_m-1 - q_m-1√(N)/p_m - q_m√(N) = p_-1 - q_-1√(N)/p_τ -1 - q_τ -1√(N) = p_τ -1 + q_τ -1√(N). Therefore, ∏_m=0^τ -1√(N) + P_m+1/Q_m+1 = p_τ -1 + q_τ -1√(N) = 𝔠_τ -1, and in conclusion γ/γ = 𝔠_τ -1^2. Similarly, if τ is odd, the following equation holds ∏_m=0^2τ -1√(N) + P_m+1/Q_m+1 = p_2τ -1 + q_2τ -1√(N) = 𝔠_2τ -1. §.§ Even period and nontrivial factor In this subsection, we provide sufficient conditions on the integer N to ensure that the period τ is even and that Q_τ/2 2. Theorem <ref> provides a sufficient condition on N for an even period: if N cannot be written as sum of two squares, then the period of the continued fraction expansion of √(N) is even. Using Fermat's theorem on sums of two squares, we derive the following sufficient condition on the factors of N. Let N>0 be a non-square integer. If there exists a prime p ≡ 3 4 that divides N with odd exponent, then τ≡ 0 2. The following theorem, proven by Mollin in <cit.>, provides necessary and sufficient conditions, involving Diophantine equations, to have τ even and Q_τ/2=2. The following statements are equivalent for N > 2. * X^2 - NY^2 = ± 2 is solvable, indicating that at least one of the equations X^2 - NY^2 = 2 and X^2 - NY^2 =-2 has a solution. * τ is even and Q_τ/2=2. This result implies that, if τ≡ 0 2 and the two Diophantine equations X^2 - N Y^2 = 2 and X^2 - N Y^2 = -2 are insoluble, then the central term Q_τ/2 2, and so it contains a proper factor of N. The following result, due to Yokoi, and presented in <cit.>, gives us sufficient conditions for the insolubility of the two equations in (<ref>). For any positive non-square integer N, if the Diophantine equation X^2 - N Y^2 = ± 2 has an integral solution, then N ≡ 2 4 or N ≡ 3 4. Hence, if N has the following form N=p^k_1q^k_2 with p≡ q ≡ 3 4 and k_1 ≡ k_2 ≡ 1 2, then the period τ of the continued fraction expansion of √(N) is even and Q_τ/2 contains a proper factor of N. Integers of that form, also known as Blum integers, have been broadly employed in cryptographic applications. They were first introduced by L. Blum, M. Blum, and Shub in <cit.> in the context of pseudo-random number generators, and later applied in zero-knowledge proofs (see, e.g., <cit.>, <cit.>, <cit.>, <cit.>). §.§ Quadratic forms An overview about binary quadratic forms can be found in <cit.>. A binary quadratic form is a polynomial F(x, y) = ax^2+bxy+cy^2, with a, b, c ∈ℤ. The matrix associated with F is M_F =[ a b/2; b/2 c ]. We abbreviate a binary quadratic form with coefficients a,b,c as (a,b,c). Two quadratic forms F and F' are equivalent if there exists a matrix C ∈ℤ^2 × 2 such that M_F' = C^T M_F C and (C) = ± 1. If (C) = 1, the forms are properly equivalent and we write F ∼ F'. The discriminant Δ of a quadratic form (a,b,c) is Δ = b^2 -4ac. We define 𝔽_Δ as the set of all quadratic forms of discriminant Δ. The discriminant is an invariant for the equivalence relation of quadratic forms ∼: if F ∈𝔽_Δ, and F ∼ F', then F' ∈𝔽_Δ. A quadratic form (a,b,c), with positive discriminant Δ= b^2 -4ac is reduced if | √(Δ) -2 | a | | < b < √(Δ). Given a quadratic form F, it is always possible to find a reduced quadratic form equivalent to F. In the following we are going to prove it giving a reduction algorithm on quadratic forms of positive non-square discriminant. To do so, we first need the next definition. 
For any form F = (a, b, c) with ac 0 of discriminant Δ, a non-square positive integer, we define the standard reduction operator ρ by ρ(a, b, c) = (c, r(-b, c), r(-b, c)^2 - Δ/4c ), where r(-b, c) is defined to be the unique integer r such that r +b ≡ 0 2c and -|c| < r ≤ |c| if √(Δ)< |c|, √(Δ) - 2|c| < r < √(Δ) if |c| < √(Δ). ρ(F) is called the reduction of F. The inverse reduction operator is defined by ρ^-1(a,b,c) = ( r(-b, c)^2 - Δ/4c, r(-b, a), a ). We denote ρ^n(F) the result of n applications of ρ on F. The identities ρ(ρ^-1(F)) = ρ^-1(ρ(F)) = F hold when F is reduced. We point out the fact that (a,b,c)∼ρ(a,b,c) through the transformation given by the matrix [ 0 -1; 1 t ], where r(-b,c)=-b+2ct. The proof of the following, fundamental, proposition can be found in <cit.>. * The number of iterations of ρ which are necessary to reduce a form (a, b, c) is at most 2 + ⌈log_2 ( |c|/ √(Δ) ) ⌉. * If F = (a, b, c) is a reduced form, then ρ(a, b, c) is again a reduced form. If (a, b, c), of discriminant Δ >0, is reduced, then | a |, b and | c | are less than √(Δ), and a and c are of opposite signs (<cit.>). This implies that the number of reduced quadratic forms of discriminant Δ is finite. Two forms F(x, y) = ax^2+bxy+cy^2 and F'(x, y) = a'x^2+b'xy+c'y^2 are adjacent if c = a' and b + b' ≡ 0 2c. Given a reduced quadratic form F, there exists a unique reduced quadratic form equivalent to F and adjacent to F. This form is ρ(F). As we have seen in Remark <ref>, there exists a finite number of reduced quadratic forms of positive discriminant Δ, and so this process eventually repeats, forming a cycle. The important aspect of this is that the cycle is actually all of the reduced forms equivalent to the first form, as proven in <cit.>. We call the principal form the reduced form of discriminant Δ having as first coefficient 1. It is denoted with 1 and the cycle in which it lies is called the principal cycle. Let Υ= {_n }_n ≥ 0 be the sequence of binary quadratic forms defined as _m(x,y) = (-1)^mQ_m x^2 + 2 P_m+1 xy + (-1)^m+1Q_m+1y^2, for m ≥0. If τ is even, then the sequence Υ is periodic of period τ, and if τ is odd Υ is periodic of period 2τ. This is due to Theorem <ref> and Theorem <ref>. Let F=(a_1, b_1, c_1) and G=(a_2, b_2, c_2) two quadratic forms having same discriminant Δ. The Gauss composition of F and G is F ∘ G = (a_3, b_3, c_3) = ( d_0 a_1 a_2/n^2, b_1 + 2 a_1/n ( s(b_2 - b_1)/2 - c_1 v ), b_3^2 - Δ/4 a_3 ), where β = (b_1 + b_2)/2, n = (a_1, a_2, β), s, u, v such that a_1 s + a_2 u + β v = n, and d_0 = (a_1,a_2,β, c_1,c_2,(b_1-b_2)/2). Although the composition is not unique, all compositions of given forms F and G are equivalent. We remark that all quadratic forms in Υ have the same discriminant Δ =4N, where N>0 is the non-square integer we want to factorize. This implies that for all (a,b,c) ∈Υ, we have Δ≡ b^2 ≡ b 2, and so b ≡ 0 2. Therefore, the value β in the Definition <ref> is an integer. A quadratic form F=(a,b,c) is primitive if gcd( a, b, c ) = 1. As proven in the next proposition, the forms in Υ are primitive. The forms _n are primitive for all n≥0. We prove the statement by induction, for n ≥0. Base step (n=0): The base step is proven by the following chain of equalities (Q_0, 2 P_1, Q_1) = (1, 2a_0, a_0^2 - N) =1. Inductive step (n ⇒ n+1): The thesis follows from the next equalities (Q_n, 2 P_n+1, Q_n+1) = (Q_n, 2(a_nQ_n -P_n), (N-P_n+1^2)/Q_n) = (Q_n,-2P_n, Q_n-1 -a_n^2Q_n +2a_nP_n) = (Q_n, -2P_n, Q_n-1) =1. 
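Because the operator ρ is used repeatedly in the sequel, it may help to see its definition spelled out in code. The sketch below is a direct transcription for forms of positive non-square discriminant Δ (it is not the authors' implementation, and the helper names are ours): r(-b, c) is computed as the unique representative of -b modulo 2c lying in the interval prescribed by the definition.

from math import isqrt

def r_value(b, c, Delta):
    # r(-b, c): unique r with r + b ≡ 0 (mod 2c) in the interval of the definition
    m = 2 * abs(c)
    base = (-b) % m                 # some representative of -b modulo 2c
    s = isqrt(Delta)                # floor(sqrt(Delta)); Delta is not a square
    if abs(c) > s:                  # case sqrt(Delta) < |c|: want -|c| < r <= |c|
        return base if base <= abs(c) else base - m
    return s - ((s - base) % m)     # case |c| < sqrt(Delta): want sqrt(Delta) - 2|c| < r < sqrt(Delta)

def rho(form, Delta):
    a, b, c = form
    r = r_value(b, c, Delta)
    return (c, r, (r * r - Delta) // (4 * c))   # exact division since r ≡ -b (mod 2c)

As a check, for N = 13 and Δ = 4N the principal form is _0 = (1, 2a_0, a_0^2 - N) = (1, 6, -4), and rho((1, 6, -4), 52) returns (-4, 2, 3), which is exactly the next form (-Q_1, 2P_2, Q_2) of the sequence Υ.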
This implies that in the Gauss composition of two elements of Υ, the coefficient d_0 is always equal to 1. Using the definition of ρ and Equation (<ref>), it is straightforward to prove that ρ^n(_0)=_n. The form _0 is reduced and equal to 1, so the quadratic forms in Υ are reduced, and Υ is the principal cycle. Moreover, for any pair of forms _n, _m ∈Υ, their Gauss composition _n ∘_m is equivalent to _0. This follows from the property proven by Gauss in Article 237-239 of <cit.>: if F ∼ G, then H ∘ F ∼ H ∘ G, for all quadratic forms F, G, H having same discriminant. In particular, we obtain _n ∘_m ∼_n ∘_0 ∼_n ∼_0, using also the fact that _0 ∘_n ∼_n for all n ≥ 0. Consequently, applying the Gauss composition to any couple of elements of Υ, followed by a sufficient number of applications of ρ to obtain a reduced form, results in an element of Υ. As mentioned previously, we are interested in quickly finding the coefficient Q_τ/2 when τ is even, or Q_(τ + 1)/2 when τ is odd. Consequently, we aim to determine the quadratic form _τ/2 or _(τ - 1)/2 in an efficient manner (i.e. with time complexity O(ln(N)^α), with α constant). The value of τ could be too large (see Remark <ref>), so we need a way to make longer jumps within the principal cycle. As we will see, Gauss composition, followed by the reduction, will allow us to make long jumps in Υ. To estimate the length of these jumps we use the (well-known) infrastructural distance δ. A comprehensive definition and detailed description of distance is provided in <cit.>. Given a quadratic form F=(a,b,c) of discriminant Δ>0, the infrastructural distance δ of F and ρ(F) is δ(F, ρ(F)) = 1/2ln | b + √(Δ)/b - √(Δ) |. Given n>0, the distance δ of F and ρ^n(F) is δ(F,ρ^n(F)) = ∑_i=1^n δ(ρ^i-1(F),ρ^i(F)). We now restrict ourselves to forms in the principal cycle. We then have the following proposition. Let _n and _m be two reduced forms in the principal cycle, and let _0 be the unit form. Then if we define G = _n ∘_m, G may not be reduced, but let _r be a (non unique) form obtained from G by the reduction algorithm, i.e. by successive applications of ρ. Then we have δ(_0,_r) = δ(_0,_n) + δ(_0,_m) + δ(G,_r), and furthermore, | δ(G,_r) | < 2 lnΔ, where Δ is the discriminant of these forms. The above proposition follows from the property that δ is exactly additive under composition (before any reductions are made) (see <cit.>) and the estimation of the bound for | δ(G,_r) | discussed in Section 12 of <cit.>. The next proposition is fundamental for a computational point of view. Indeed, given _i, _j, with j > i, and their distance δ(_i, _j), it gives an estimation of j-i. In particular, if δ(_i, _j) = D, then 2D/ln (4N)<j-i< 2D/ln 2 + 1. Let F ∈𝔽_Δ reduced. The following two bounds hold (see <cit.> for the proofs): * δ(F, ρ(F)) < 1/2lnΔ, * δ( F , ρ^2( F )) > ln 2, and the same holds for ρ^-1. In our case, the discriminant of the forms in Υ is Δ=4N, where N is an odd non-square integer. Theorem <ref> (proven also in <cit.>) and Theorem <ref> show that the distance between quadratic forms is considered modulo R^+(N) = ln(p_τ -1 + q_τ -1√(N))= ln(𝔠_τ -1) if τ≡ 0 2 ln(p_2τ -1 + q_2τ -1√(N))= ln(𝔠_2τ -1) if τ≡ 1 2 . If τ is even, the distance δ(_0, _τ) (the distance of a period) is exactly equal to ln(𝔠_τ -1) and the distance δ(_0,_τ/2) is exactly equal to 1/2δ(_0, _τ). The distance between _τ and _0 is the summation d(_0,_τ) = ∑_i=0^τ -1 d(_i, _i+1) = ∑_i=1^τ1/2ln( √(N) + P_i/√(N) - P_i) = 1/2ln ( ∏_i=1^τ√(N) + P_i/√(N) - P_i ). 
Recalling that N- P_i^2 = Q_i Q_i-1 >0, and taking into account the periodicity of the sequence {Q_n}_n ≥ 0, the last expression can be written with rational denominator as 1/2ln ( ∏_i=1^τ(√(N) + P_i)^2/Q_i Q_i-1 ) = 1/2ln ( ∏_i=1^τ(√(N) +P_i)^2/ Q_i^2 ) = ln ( ∏_i=1^τ√(N) + P_i/ Q_i ). The conclusion follows from Equation (<ref>). The equality d(_0, _τ/2) = 1/2d(_0,_τ) is an immediate consequence of the symmetry of the sequence {P_n}_n ≥ 1 within a period. We now give the same result for the case of odd period. If τ is odd, the distance δ(_0, _2τ) (the distance of a period) is exactly equal to ln(𝔠_2τ -1) and the distance δ(_0, _τ) is equal to ln(𝔠_2τ -1)/2. The distance between _2τ and _0 is the summation d(_0,_2τ) = ∑_i=0^2τ -1 d(_i, _i+1) = ∑_i=1^2τ1/2ln( √(N) + P_i/√(N) - P_i) = 1/2ln ( ∏_i=1^2τ√(N) + P_i/√(N) - P_i ). Recalling that N- P_i^2 = Q_i Q_i-1 >0, and taking into account the periodicity of the sequence {Q_n}_n ≥ 0, the last expression can be written with rational denominator as 1/2ln ( ∏_i=1^2τ(√(N) + P_i)^2/Q_i Q_i-1 ) = 1/2ln ( ∏_i=1^2τ(√(N) +P_i)^2/ Q_i^2 ) = ln ( ∏_i=1^2τ√(N) + P_i/ Q_i ). The conclusion follows from Equation (<ref>) and the periodicity of { P_n }_n ≥ 1. If τ is odd, the distance δ(_0, _(τ -1)/2) (the distance of a target form) is equal to ln(𝔠_2τ -1)/4 + O(ln (N)). From the previous theorem, we know that δ(_0, _τ)=ln(𝔠_2τ -1)/2. Moreover, using the symmetry (<ref>), we obtain the following equality δ(_0,_τ) = 2 δ(_0,_(τ - 5)/2) + 1/2ln( √(N) + P_(τ - 1)/2/√(N) - P_(τ - 1)/2) + 1/2ln( √(N) + P_τ - 1/√(N) - P_τ - 1) + 1/2ln( √(N) + P_τ/√(N) - P_τ). Therefore, δ(_0,_(τ - 5)/2) = 1/4ln(𝔠_2τ -1) - 1/4ln( √(N) + P_(τ - 1)/2/√(N) - P_(τ - 1)/2) - 1/4ln( √(N) + P_τ - 1/√(N) - P_τ - 1) - 1/4ln( √(N) + P_τ/√(N) - P_τ), and so δ(_0,_(τ - 1)/2) = 1/4ln(𝔠_2τ -1) - 1/4ln( √(N) + P_(τ - 1)/2/√(N) - P_(τ - 1)/2) - 1/4ln( √(N) + P_τ - 1/√(N) - P_τ - 1) - 1/4ln( √(N) + P_τ/√(N) - P_τ) + 1/2ln( √(N) + P_(τ - 1)/2/√(N) - P_(τ - 1)/2) + 1/2ln( √(N) + P_(τ + 1)/2/√(N) - P_(τ + 1)/2). Using Proposition <ref>, the upper bound holds 1/4 | ln( √(N) + P_(τ - 1)/2/√(N) - P_(τ - 1)/2) - ln( √(N) + P_τ - 1/√(N) - P_τ - 1) - ln( √(N) + P_τ/√(N) - P_τ) + 2 ln( √(N) + P_(τ + 1)/2/√(N) - P_(τ + 1)/2) | < 5/4ln (4N), from which, we deduce that δ(_0,_(τ - 1)/2) = 1/4ln(𝔠_2τ -1) + O(ln (N)). The following remark is fundamental from a computational point of view, because it provides an upper bound on the distance of a cycle in Υ. We have that R^+(N) = n R(N), with n ≤ 6. Hua, in <cit.>, proves that R(N) ≤√(N) (1/2ln N + 1 ) if N ≡ 1 4 2√(N) (1/2ln (4N) + 1 ) if N ≡ 3 4 and so, R^+(N) = O(√(N)ln N). However, we do not know which is the largest value that R(N) can attain as a function of N. It is conjectured that there exists an infinite set of values of N such that R(N) ≫√(N)lnln N (see <cit.> for large-scale numerical experiments and more details). § THE FACTORIZATION ALGORITHM In this section, we present our factorization method. The integer N>0 to be factorized is odd, non-square and composite. In the first part of this section, we describe the method and provide the pseudocodes (Algorithms <ref> and <ref>). We then prove the correctness of our approach and analyze its computational cost. This method is a modification of the one presented by Elia <cit.>. We assume that R^+(N) has been preliminarily computed. In the final part of this section, we mention a method for computing an integer multiple of R^+(N). To simplify the notation, we introduce the following definition. 
Given two forms _n, _m∈Υ, the giant step of _n and _m is the composition _n∙_m = ρ^r(_n∘_m), realized through the Gauss composition _n∘_m, followed by the minimum number r of reduction operations ρ to obtain a reduced form. The notation _n^t represents t successive applications of the giant step of _n with itself, i.e., _n∙⋯∙_n (repeated t times). We define our method for both the even-period and odd-period cases. To enhance readability, we define the following quantity (N) = R^+(N)/2 if τ≡ 0 2 R^+(N)/4 if τ≡ 1 2 , which represents the distance of the quadratic form we want to reach (or an approximation of it). Indeed, if τ≡ 0 2, then δ(_0, _τ/2) = R^+(N)/2 = (N) and if τ≡ 1 2, then δ(_0, _(τ + 1)/2) = R^+(N)/4 + O(ln(N)) = (N) + O(ln(N)), using Theorem <ref> and Corollary <ref>. We distinguish two cases: R^+(N) ≤ (ln N)^2 and R^+(N) > (ln N)^2. In the first case, we compute _i = ρ^i(_0) until a non-trivial factor of N is found among their coefficients, if such a factor exists. By Proposition <ref>, the number of reduction steps ρ is at most 2 δ(_0, _τ/2)/ln 2 + 1 = R^+(N)/ln 2 + 1 = O((ln N)^2) when the period is even, and 2 δ(_0, _(τ - 1)/2)/ln 2 + 1 ≤R^+(N)/2 ln 2 + 5/2 ln 2ln (4N) + 1 = O((ln N)^2), when the period is odd. If the number of iterations exceeds R^+(N)/ln 2 + 5/2 ln 2ln (4N) + 1 the procedure is stopped: our algorithm cannot find a factor of N. The pseudocode for this method is given in Algorithm <ref>. If R^+(N) > (ln N)^2 we proceed in the following way. * First phase: In this phase we compute an approximation of _τ/2, if τ is even, or _(τ + 1)/2, if τ is odd. Starting from _0, we compute the forms _i in the principal cycle, for i=0, …, ℓ, until δ(_0, _ℓ) ≥ 2ln (4N) +1 and δ(_0, _ℓ) ≤ 4ln N (this is possible using Proposition <ref> and the definition of distance). Then, we compute the quadratic forms _ℓ^2^i, using giant steps, and their exact distance d_i = δ(_0, _ℓ^2^i), using Proposition <ref>, for i=1, …, t, with t such that d_t-1≤(N) < d_t. We point out the fact that d_i+1>d_i for all i ≥ 0. Then, using the forms _ℓ, …, _ℓ^2^t-1, we compute F, an approximation of _τ/2. To do so, first we set F= _ℓ^2^t -1 and d = d_t-1. Then we start by computing d + d_t-2 and we check if it is greater or smaller than (N); in the second case we update F with F∙_ℓ^2^t -2 and d with d + d_t-2. We iterate this procedure for i=t -3, …, 0 by computing d+ d_i, comparing it with (N), and, if it is smaller, updating F with F∙_ℓ^2^i. * Second phase: Starting from F, we iterate the operators ρ and ρ^-1 until a factor of N is found. An upper bound on the number of iterations of ρ and ρ^-1 needed to find a factor (both in the case of even and odd period) is given by: Ψ(R^+(N), N) := 2/ln 2 ( 4 ln (4N) log_2 ( R^+(N)/2 ) + 13/4ln (4N) ) + 1. This bound derives from the results demonstrated later in this section. A priori, we do not know the parity of τ, so we proceed as follows (as outlined in Algorithm <ref>). First, we run the procedure assuming τ≡ 0 2, which implies (N) = R^+(N)/2. If, at the end of the second phase, after Ψ(R^+(N), N) steps, a factor is not found, we then try again assuming τ≡ 1 2, which implies (N) = R^+(N)/4. If, even in this case, no factor is found after Ψ(R^+(N), N) steps during the second phase, the output is -1: our method cannot factor N. 1.5em 1.5em 1.5em 1.5em In the following we present two propositions, which are fundamental to the analysis of our method for the case R^+(N)> (ln N)^2. 
The first shows that t (the number of powers of G_0) is always “small", the second proves that our approximation of _τ/2, if τ even, or _(τ + 1)/2 if τ is odd, is good. The value of t in Algorithm <ref> is at most ⌈log_2(N) ⌉. Using Proposition <ref> and the above notation, we have that δ(_0, G_i) > 2δ(_0, G_i-1) - 2ln (4N) ∀ i >0, and so δ(_0, G_i) > 2^iδ(_0, G_0) - 2 ∑_k=0^i-1 2^k ln(4N) = 2^iδ(_0, G_0) - 2(2^i -1)ln(4N) ≥ 2^i(2 ln(4N) +1) - 2(2^i -1)ln(4N) = 2^i + 2ln(4N). Therefore, for i ≥⌈log_2 (N) ⌉, we have δ(_0, G_i) > (N). This proposition implies that t = O(ln N), thanks to Remark <ref>. Let F be the quadratic form obtained at the end of the second phase of the method (using the notation of Algorithm <ref>). Then, the following holds | (N) - δ(_0, F) | = O((ln N)^2). In the for loop at line 20 of Algorithm <ref>, we have at most t-1 giant steps. Therefore, using the previous proposition and Proposition <ref>, we have that | d - δ(_0, F) | = O((ln N)^2). Now, we prove that | d- (N) | = O((ln N)^2). We define I ⊆{0, …, t-1 } the set of indexes of the distances d_0, …, d_t-1 that appear in the computation of d, i.e. d = ∑_i ∈ I d_i. We distinguish two cases: * Case 0 ∉ I: Then we have d + d_0 > (N), and so d≤(N) < d + d_0, from which we deduce that 0 ≤(N) - d < d_0 ≤ 4 ln N. * Case 0 ∈ I: Let j = min{ i ∈ℕ| i ∉ I }, then 1 ≤ j ≤ t-2. We have that d + d_0 = ∑_i ∈ I∖{ 0 } d_i + 2 d_0 = ∑_i ∈ I∖{ 0 } d_i + d_1 + O(ln N) = ∑_i ∈ I∖{ 0,1 } d_i + 2d_1 + O(ln N) = ∑_i ∈ I∖{ 0,1 } d_i + d_2 + O(ln N) ⋮ = ∑_i ∈ I∖{ 0, …, j-1} d_i + d_j + γ(N) where γ(N)=O((ln N)^2). By construction, we have that ∑_i ∈ I∖{ 0, …, j-1} d_i + d_j > (N), from which d≤(N) ≤d + d_0 - γ(N), and so 0 ≤(N) - d≤ d_0 - γ(N) = O((ln N)^2). Therefore, |(N) - δ(_0, F) | ≤ |(N) - d | + | d - δ(_0, F) | = O((ln N)^2). Therefore, if τ is even, then | δ(_0, _τ /2) - δ(_0, F) | = O((ln N)^2). If τ is odd, we have that | δ(_0, _(τ + 1 )/2) - δ(_0, F) | = | (N) -δ(_0, F) | + O(ln N) = O((ln N)^2). since δ(_0, _(τ + 1 )/2) = (N) + O(ln N) by Corollary <ref>. We analyze the computational cost of this algorithm, assuming that R^+(N)> (ln N)^2 is preliminarily computed. The cost of the computation of the form G_0 such that δ(_0, G_0) ≥ 2 ln(4N)+1 is at most 2(2 ln(4N)+1)/ln 2 + 1, using Proposition <ref>. We then analyze the cost of the while loop at line 9. The number t of the giant steps is at most ⌈log_2(N) ⌉ = O(ln N), proven in Proposition <ref>. Each giant step requires: O(ln N) elementary operations for the extended Euclidean algorithm, used to compute s, u and v in the Gauss composition, and at most O(ln N) applications of ρ. Indeed, if we apply the Gauss composition of two forms in Υ, using the extended Euclidean algorithm, we obtain (a,b,c) such that | c | = O(N^4). This follows from Proposition <ref> and the classical bounds on the solution of Bézout's identity via the extended Euclidean algorithm. Hence, by Proposition <ref>, the number of applications of ρ is O(ln N). Therefore, the cost of the while loop at line 9 is O((ln N)^2). The computation of F requires at most O((ln N)^2) steps: at most O(ln N) giant steps, and at most O(ln N) application of ρ for each giant step. Finally, as proven in Proposition <ref>, the distance between the approximation F and _τ/2, or _(τ +1)/2, is at most O((ln N)^2), and so, using again the fact that, for each reduced form F, δ(F, ρ^2(F))> ln 2, are needed only O((ln N)^2) applications of ρ to reach _τ/2 or _(τ +1)/2. 
In conclusion, the method has a computational complexity of O((ln N)^2). It is remarked that the cost of elementary arithmetic operations (i.e. additions, subtractions, multiplications and divisions of big integers) and logarithm valuations are not counted. Using Proposition <ref>, Proposition <ref>, Corollary <ref>, and Proposition <ref>, it is possible to derive the following upper bound on the number of iterations of ρ and ρ^-1 in the second phase of the method Ψ(R^+(N), N) = 2/ln 2 ( 4 ln (4N) log_2 ( R^+(N)/2 ) + 13/4ln (4N) ) + 1. This bound holds for both the cases when τ is even and τ is odd. We point out that it is not necessary to have R^+(N) precomputed; it is sufficient to have an integer multiple of it: R'(N) = k R^+(N), with k ∈ℕ. For simplicity, we describe how the method is modified in the case of an even period. In this case, running Algorithm <ref> with R'(N) instead of R^+(N) is equivalent to considering the principal cycle with multiplicity k (i.e. k times the principal cycle). Our target form is located in the middle of some period of distance R^+(N)/2, so: * if k is odd, a factor of 2N is found (as coefficient of a form) at the position at distance k R^+(N)/2, from the beginning; * if k is even, the quadratic form _τ-1 is found in a position at distance k R^+(N)/2 (which reveals a posteriori that k is even); in this case, the procedure can be repeated, targeting the form at position at distance k R^+(N)/4 from _0. Again, either a factor of 2N is found, or k is found to be a multiple of 4. Clearly the process can be iterated h times until k R^+(N)/2^h is an odd multiple of R^+(N), and a factor of 2N is found. If k, as a function of N, is O(N^α) with α constant, then the computational cost of the algorithm does not change. Indeed, in this case, the value of t in Algorithm <ref> is at most ⌈log_2 ( k(N) ) ⌉ = O(ln N) (see Proposition <ref>), and the number of iterations of ρ and ρ^-1 is at most Ψ(kR^+(N), N) = O((ln N)^2). The odd-period case follows similarly. As outlined in Remark <ref>, we are focused on studying and researching methods to efficiently calculate or approximate R^+(N), or one of its integer multiples that is “not too large". In particular, we are seeking a method that efficiently computes kR^+(N), where k, as a function of N, is O(N^α), with α constant. Since R^+(N) = n R(N), with n ≤ 6, our problem is equivalent to finding an efficient algorithm that computes (a multiple of) the regulator of ℚ(√(N)). The method due to Vollmer, described in <cit.>, currently has the best known complexity. It is a Monte Carlo algorithm that computes R(N) in time O ( exp ( 3/√(8)√(ln N lnln N) ) ), assuming the GRH. Other methods for computing the regulator are detailed in <cit.>. In conclusion, using this approach for the precomputation of R(N) and the algorithm previously described, we have obtained a factorization method of (conjectured) time complexity O ( exp (3/√(8)√(ln N lnln N) ) ), which is more efficient than CFRAC and SQUFOF. § ACKNOWLEDGMENTS The first author is member of GNSAGA of INdAM and acknowledges support from Ripple’s University Blockchain Research Initiative. The second author is partially supported by project SERICS (PE00000014 - https://serics.eu) under the MUR National Recovery and Resilience Plan funded by European Union - NextGenerationEu.
http://arxiv.org/abs/2409.02860v1
20240904164002
Adaptive and frugal BDDC coarse spaces for virtual element discretizations of a Stokes problem with heterogeneous viscosity
[ "Tommaso Bevilacqua", "Axel Klawonn", "Martin Lanser" ]
math.NA
[ "math.NA", "cs.NA", "65F08, 65N30, 65N55" ]
Adaptive and frugal BDDC coarse spaces for virtual element discretizations of a Stokes problem with heterogeneous viscosity BDDC coarse spaces for virtual element discretizations of Stokes equations Tommaso Bevilacqua^1, Axel Klawonn^1,2, Martin Lanser^1,2 ^1Department of Mathematics and Computer Science, Division of Mathematics, University of Cologne, Weyertal 86-90, 50931 Cologne, Germany, [email protected], [email protected], [email protected], url: <https://www.numerik.uni-koeln.de> ^2Center for Data and Simulation Science, University of Cologne, Germany, url: <https://www.cds.uni-koeln.de> * Tommaso Bevilacqua Axel Klawonn Martin Lanser September 9, 2024 ================================================= The virtual element method (VEM) is a family of numerical methods to discretize partial differential equations on general polygonal or polyhedral computational grids. However, the resulting linear systems are often ill-conditioned and robust preconditioning techniques are necessary for an iterative solution. Here, a balancing domain decomposition by constraints (BDDC) preconditioner is considered. Techniques to enrich the coarse space of BDDC applied to a Stokes problem with heterogeneous viscosity are proposed. In this framework a comparison between two adaptive techniques and a computationally cheaper heuristic approach is carried out. Numerical results computed on a physically realistic model show that the latter approach in combination with the deluxe scaling is a promising alternative. § INTRODUCTION The virtual element method (VEM) <cit.> is a finite element discretization approach for partial differential equations (PDEs) which can deal with very general polygonal/polyhedral computational grids. The linear systems that arise from these discretizations of PDEs are then generally worse conditioned than in the case of standard finite element methods (FEMs) for which recent studies proposed robust BDDC methods <cit.>. In this work we analyze a Stokes problem with high heterogeneity in the viscosity function; thus an adequatly enriched coarse space is needed. In the adaptive framework considered here, this is done by solving a generalized eigenvalue problem on each subdomain edge and by adding the solutions to the coarse space in an appropriate way. In our numerical simulations we make use of two coarse spaces analyzed for standard low order FEM in the paper <cit.>, identifying with "First" the approach present in Section 4.5 and with "Second" the one in Section 5. The first approach, was originally introduced in <cit.> and already successfully used for the VEM in <cit.>. The second one, extensively used in dual-primal finite element tearing and interconnecting (FETI-DP) and BDDC in <cit.>, has been also recently extended to the VEM for diffusion and linear elasticity problems in <cit.>. An alternative approach to enrich the coarse space denoted as frugal, has been introduced in <cit.> and already succesfully used for the VEM for stationary diffusion and linear elasticity in <cit.>. This is a heuristic and cheaper technique, that does not involve the solution of eigenvalue problems. This often allows to construct robust coarse spaces in a computationally efficient way when it is sufficient to approximate the largest, or smallest, eigenvalues depending on the chosen coarse space. 
In the present work we extend the first and second adaptive coarse space approaches as well as the frugal coarse space to the virtual element discretization of a Stokes problem with a heterogeneous viscosity function. § CONTINUOUS PROBLEM AND VIRTUAL ELEMENT DISCRETIZATION Let Ω⊆ℝ^2 be a bounded Lipschitz domain, with Γ = ∂Ω, and consider the stationary Stokes problem with homogeneous Dirichlet boundary conditions: find (𝐮,p) s.t. -ν(𝐱) Δ𝐮 - ∇ p = 𝐟 in Ω div 𝐮 = 0 in Ω 𝐮=0 on Γ. Here 𝐮 and p are respectively the velocity and the pressure fields, 𝐟∈ [H^-1(Ω)]^2 represents the external force and ν∈ L^∞(Ω), ν(𝐱) >0, 𝐱∈Ω, is the heterogeneous viscosity function. Introducing 𝐕:=[H^1_0(Ω)]^2 and Q:=L^2_0(Ω)= { q∈ L^2(Ω) s.t. ∫_Ω q =0 }, the standard variational formulation reads: find (𝐮,p) ∈𝐕× Q s.t. a(𝐮,𝐯)+b(𝐯,p)=(𝐟,𝐯) for all 𝐯∈𝐕, b(𝐮,q)=0 for all q ∈ Q, where a(𝐯,𝐰) := ∫_Ων(𝐱) ∇ 𝐯 : ∇ 𝐰 for all 𝐯,𝐰∈𝐕, b(𝐯,q) := ∫_Ωdiv 𝐯 q for all 𝐯∈𝐕, q ∈ Q and (𝐟,𝐯):=∫_Ω𝐟·𝐯 for all 𝐯∈𝐕. The discretization of problem (<ref>) is based on a virtual element space which is designed to solve a Stokes problem. In the following we present the basic elements of this discretization; we refer to <cit.> for further details. Let {_h}_h be a sequence of triangulations of Ω into general polygonal elements K with h_K:=diameter(K) and h:=sup_K∈_hh_K. We suppose that, for all h, each element K∈_h satisfies, for some γ>0 and c>0, the following assumptions 0em * K is star-shaped with respect to a ball of radius greater or equal than γ h_k, * the distance between any two vertices of K is greater or equal than c h_K, For k ∈ℕ, we then define the spaces: ℙ_k(K) the set of polynomials on K of degree smaller or equal than k, 𝔹_k(K):={v ∈ C^0(∂ K ) s.t. v_|e∈ℙ_k(e) ∀ edge e ∈∂ K }, 𝐺_k(K):=∇(ℙ_k+1(K)) ⊆ [ℙ_k(K)]^2 and its L^2-orthogonal complement 𝐺_k(K)^⊥⊆ [ℙ_k(K)]^2. The local virtual element spaces are defined, for k≥2, on each K ∈𝑇_h as 𝐕_h^K:={𝐯∈ [H^1(K)]^2 | 𝐯_| ∂ K∈ [𝔹_k(∂ K)]^2, -νΔ𝐯 - ∇ s ∈𝐺_k-2(K)^⊥, div 𝐯∈ℙ_k-1(K),-20mu s ∈ L^2(K)}, Q_h^K:=ℙ_k-1(K), and the global virtual element spaces are 𝐕_h:={𝐯∈ [H^1_0(Ω)]^2 | 𝐯_|K∈𝐕^K_h, ∀ K∈_h} and Q_h:={q∈ L^2_0(Ω) | q_|K∈ Q^K_h, ∀ K∈_h}. In the VEM framework the basis function are never explicitly computed since it would be necessary solve the PDE given in 𝐕_h^K for each element. Alternatively, they are defined by using of some polynomial projection operators (see <cit.>) and suitable degrees of freedom (dofs). Given 𝐯∈𝐕_h^K we take the following linear operators 𝐃_𝐕, split into 0em * 𝐃_𝐕^1: the values of 𝐯 at the vertices of the polygon K, * 𝐃_𝐕^2: the values of 𝐯 at k-1 internal points of the (k+1)-Gauss-Lobatto quadrature rule in e∈∂ K, * 𝐃_𝐕^3: the moments of 𝐯: ∫_K𝐯·𝐠^⊥_k-2 for all 𝐠^⊥_k-2∈𝐆_k-2(K)^⊥; * 𝐃_𝐕^4: the moments of div𝐯: ∫_Kdiv 𝐯 q_k-1 for all q_k-1∈ℙ_k-1(K)/ℝ. Furthermore, for the local pressure, given q∈ Q^K_h, the linear operators 𝐃_𝐐: * 𝐃_𝐐: the moments of q: ∫_Kq p_k-1 , for any p_k-1∈ℙ_k-1(K). We note that b(𝐯_h,q_h) for all 𝐯_h ∈𝐕_h and q_h ∈ Q_h can be computed directly from the 𝐃^1_𝐕, 𝐃^2_𝐕 and 𝐃^3_𝐕. While a(𝐯_h,𝐰_h) and (𝐟,𝐯_h) for all 𝐯_h, 𝐰_h ∈𝐕_h can not be exactly computed. It is therefore necessary to introduce approximations a_h(𝐯_h,𝐰_h) and (𝐟_h,𝐯_h) making use of suitable polynomial projections. Further details of the construction of these bilinear forms and their related theoretical estimates can be found in <cit.>. The discrete virtual element problem states: find (𝐮_h,p_h) s.t. 
a_h(𝐮_h,𝐯_h)+b(𝐯_h,p_h)=(𝐟_h,𝐯_h) for all 𝐯_h ∈𝐕_h b(𝐮_h,q_h) = 0 for all q_h ∈ Q_h . § DOMAIN DECOMPOSITION, BDDC PRECONDITIONER, AND COARSE SPACES We decompose 𝒯_h into N non-overlapping subdomains Ω_i with characteristic size H_i as 𝒯̅_h = ⋃_i=1^N Ω̅_i where each Ω_i is the union of different polygons of the tessellation 𝒯_h and we define Γ = ⋃_i≠ j∂Ω_i ∩∂Ω_j as interface among the subdomains. We assume that the decomposition is shape-regular in the sense of <cit.> Section 3. We refer to the edges of the subdomains Ω_i as macro edges and we denote them with ℰ_i, moreover ℰ_ij denote the macro edge shared by the subdomains Ω^i and Ω^j. From now we omit the underscore h since we will always refer to the finite-dimensional space and we write 𝐕× Q instead of 𝐕_h× Q_h. We split the velocity dofs into interface (Γ) and internal (I) dofs. In particular, the 𝐃^1_V and 𝐃^2_V that belongs to a single subdomain Ω_i are classified as internal, while the ones that belong to more than a single subdomain as interface dofs. All the dofs 𝐃^3_V and 𝐃^4_V are classified as internal ones. Following the notations introduced in <cit.> and <cit.>, we decompose the discrete velocity and pressure space 𝐕 and Q into 𝐕 = 𝐕_I ⊕𝐕_Γ and Q = Q_I⊕ Q_0, with Q_0:=∏_i=1^N {q∈Ω_i | q is constant in Ω_i}. 𝐕_Γ is the continuous space of the traces on Γ of functions in 𝐕, 𝐕_I = ⊕_i=1^N𝐕_I^(i) and Q_I = ⊕_i=1^N Q_I^(i) are direct sums of subdomain interior velocity and pressure spaces. We also define the space of interface velocity variables of the subdomain Ω_i by 𝐕_Γ^(i) and the associated product space by 𝐕_Γ = ∏_i=1^N𝐕_Γ^(i). The discrete global saddle-point problem (<ref>) can be written as: find (𝐮_I,p_I,𝐮_Γ,p_0) ∈ (𝐕_I,Q_I,𝐕_Γ,Q_0) s.t. [ [ A_II B_II^T A_Γ I^T 0; B_II 0 B_IΓ 0; A_Γ I B_IΓ^T A_ΓΓ B_0Γ^T; 0 0 B_0Γ^T 0; ]] [ [ 𝐮_I; p_I; 𝐮_Γ; p_0; ]] = [ [ 𝐟_I; 0; 𝐟_Γ; 0; ]], where the blocks · related to the continuous interface velocity are assembled from the corresponding subdomain submatrices. By static condensation one eliminates the interior variables and obtains the global interface saddle point problem Su = [ [ S_Γ B_0Γ^T; B_0Γ 0; ]] [ [ 𝐮_Γ; p_0; ]] = [ [ 𝐠_Γ; 0; ]] = 𝐠, where S_Γ is the Schur complement of the submatrices constituted by the top 3 × 3 block of the left-hand side matrix in (<ref>) and 𝐠 the corrispective right-hand side. We introduce 𝐕_Γ = 𝐕_Π⊕𝐕_Δ = 𝐕_Π⊕( ∏_i=1^N 𝐕_Δ^(i)), a partially assembled interface velocity space where 𝐕_Π is the continuous coarse-level primal velocity space, which dofs are shared by neighboring subdomains and the complementary space 𝐕_Δ, that is the direct sum of the subdomain dual interface velocity spaces 𝐕_Δ^(i). In particular, the primal dofs of our problem are represented by the nodal evaluation of both components of the velocity in the vertices of the subdomain and one extra dof for each subdomain edge to satisfy the no-net-flux condition ∫_∂Ω_i𝐯_Δ^(i)·𝐧 = 0 ∀𝐯_Δ∈𝐕_Δ, where 𝐧 is the outward normal of ∂Ω_i. This last condition is needed to ensure that the operator of the preconditioned system (<ref>) with the BDDC is symmetric and positive definite on some particular subspaces <cit.>, so the preconditioned conjugate gradient (CG) method can be used for solution. In our study we use a generalized transformation of basis approach <cit.> such that each primal basis function corresponds to an explicit dof. Firstly, on each ℰ_ij, we assume that the velocity vector should fulfill N_ij constraints c_ij^l for l = 1,...,N_ij, i.e., c_ij^l^T𝐯_Γ|ℰ^(i) = c_ij^l^T𝐯_Γ|ℰ^(j). 
Then, we compute the orthonormal trasformation with a modified Gram-Schmidt algorithm. Finally, since the transformations are independent of each other, we construct the resulting block diagonal global transformation. The constraints c_ij^l are established via the no-net-flux condition and the techniques to enrich the coarse space. More details can be found in <cit.>. For each Ω_i we introduce the scaling matrices D. These can be chosen in different ways and can be either diagonal or not, but they always must provide a partition of unity, i.e., R_D,Γ^T R_Γ = R_Γ^T R_D,Γ = I, where R_Γ,R_D,Γ :𝐕_Γ→𝐕_Γ are respectively a restriction operator, and its scaled version. We use the standard multiplicity-scaling and a variant of the deluxe-scaling to preserve the normal fluxes <cit.>. We then define the average operator E_D = RR_D^T, which maps 𝐕_Γ× Q_0, with generally discontinuous interface velocities, to elements with continuous interface velocities in the same space. Here R and R_D^T are simply the two previous operators extended by identity to the space of piecewise constant pressures. The preconditioner for solving the global saddle-point problem (<ref>) is then M^-1=R_D^T S^-1R_D, where S is the Schur complement system that arises using the partially assembled velocity interface functions. Theoretical estimates show that the condition number is bounded by the norm of the average operator <cit.>. Adaptive and frugal coarse spaces The coarse spaces is alternatively enriched by two adaptive techniques or a heuristic one. The idea is to detect the largest, or smaller, eigenvalues on each macro edge and then include the corresponing eigenvectors in the coarse space as primal constraints c_ij with the transormation of basis approach we saw before. We define here the frugal coarse space for our model problem. Like in the linear elasticity case, when applying the BDDC method to the Stokes problem in two dimensions, we need three constraints for each edge to control the three (linearized) rigid-body motions (two translations and one rotation). Given two subdomains Ω_l, l=i,j with diameter H_l, we have 𝐫_1:= [ [ 1 0; ]], 𝐫_2:= [ [ 0 1; ]], 𝐫_3:=1/H_l[ [ x_2 - x_2 -x_1 + x_1; ]], where 𝐱∈Ω_l is the center of the rotation. Differently from the approach in <cit.>, we do not rescale the rigid body modes and we define the "approximate" eigenvector v(𝐱)^(m,l)_ℰ_ij:= r(𝐱)_m^(l), 𝐱∈ℰ_ij, 0 , 𝐱∈∂Ω_l ∖ℰ_ij, for m=1,2,3 and v(𝐱)^(m) T_ℰ_ij:=[v(𝐱)^(m,i) T_ℰ_ij, -v(𝐱)^(m,j) T_ℰ_ij]. The three frugal edge constraints are then obtained by c_ij := B_D_ij^T S_ij P_D_ij v(𝐱)^(m) T_ℰ_ij, where S_ij := diag(S_Γ^(i),S_Γ^(j)), B_D_ij is a scaled jump operator, B_D,ℰ_ij its restriction to the edge ℰ_ij, and P_D_ij = B_D,ℰ_ij^T B_ℰ_ij. § NUMERICAL RESULTS We solve a lid-driven cavity benchmark problem on the unit square domain Ω = [0,1]×[0,1], applying Dirichlet boundary conditions on the whole ∂Ω and using a VEM implementation of degree k = 2. The heterogeneity is introduced to physically represent a practical example where drops (or sinkers) of a high viscosity material are spread in the fluid, in particular this is modeled defining ν(𝐱) as a continuous function that exhibits sharp gradients (Figure <ref>) <cit.>. These inclusions of equal size are placed randomly in the unit square domain so that they can overlap and intersect the boundary. For 𝐱∈Ω, the viscosity ν(𝐱) ∈ [ν_min,ν_max], 0<ν_min<ν_max<∞, is defined as ν(𝐱) := (ν_max-ν_min)(1-χ_n(𝐱))+ν_min. 
Here, χ_n(𝐱) ∈ C^∞ is an indicator function χ_n(𝐱)∈ [0,1] that accumulates n sinkers defined as χ_n(𝐱) := ∏_i=1^n 1-exp (-δmax(0,|𝐜_i-𝐱|-Ω/2)^2 ), where 𝐜_i ∈Ω, i = 1,...,n are the centers of the sinkers, Ω≥ 0 is their diameter and δ>0 a parameter that controls the exponential decay. By choosing δ = 2000, Ω = 0.05, λ_min = 10^-3 and λ_max = 10^3, we ensure that the viscosity exibits sharp gradients. The right hand side is defined as 𝐟(𝐱):=(0,β(χ_n(𝐱-1))), with β = 10 to simulate gravity that takes down the high viscosity material. In our experiments we use meshes with a Centroid Voronoi Tassellation (CVT) and Random meshes (RND), while the subdomain partitioning is performed by METIS. We compare the two adaptive coarse spaces, with TOL = 100, and the frugal one by applying the two different type of scaling mentioned before. Our numerical simulation have been performed with MATLAB R2023A therefore no CPU time analysis is provided. tr0.4 < g r a p h i c s > METIS decomposition of a RND mesh. ν = 1e3 on yellow elements, ν = 1e-3 on blue ones. In the following tables, we report the number of iterations to solve the global interface saddle-point problem (<ref>) with the PCG method, accelerated by a BDDC preconditioner, where we set the tolerance for the relative residual error to 10^-6. In the tables we use the following notation: nSub = number of subdomains, nSink = number of sinkers, n_Π = number of primal constraints, it = iteration count (CG), k_2 = condition number, Frugal = frugal coarse space, First = first adaptive technique, Second = second adaptive technique. We consider two different tests. We first set a configuration with nSink = 11 and we increase the number of the subdomains; see Table <ref>. For both the type of the mesh considered the adaptive coarse spaces with the multiplicity scaling respect our expectations, while the frugal approach exibits a high condition number since the heuristic coarse space is not able to catch all the largest eigenvalues. Introducing the deluxe scaling we see that the number of primal constraints is drastically reduced in the adaptive coarse spaces. The frugal approach is then able to control the largest eigenvalues and performs well. In Table <ref> we instead keep fixed the number of subdomains at 4×4 and we increase the number of the inclusions. Again, the adaptive coarse spaces are robust and also when introducing the deluxe scaling the frugal one shows a good improvement presenting itself as a valid alternative. spmpsci
http://arxiv.org/abs/2409.02983v1
20240904180000
Precise and Accurate Mass and Radius Measurements of Fifteen Galactic Red Giants in Detached Eclipsing Binaries
[ "D. M. Rowan", "K. Z. Stanek", "C. S. Kochanek", "Todd A. Thompson", "T. Jayasinghe", "J. Blaum", "B. J. Fulton", "I. Ilyin", "H. Isaacson", "N. LeBaron", "Jessica R. Lu", "David V. Martin" ]
astro-ph.SR
[ "astro-ph.SR" ]
Eclipsing Red Giants]Precise and Accurate Mass and Radius Measurements of Fifteen Galactic Red Giants in Detached Eclipsing Binaries ^1Department of Astronomy, The Ohio State University, 140 West 18th Avenue, Columbus, OH, 43210, USA ^2Center for Cosmology and Astroparticle Physics, The Ohio State University, 191 W. Woodruff Avenue, Columbus, OH, 43210, USA ^3Department of Physics, The Ohio State University, Columbus, Ohio, 43210, USA ^4Independent Researcher, San Jose, California, USA ^5Department of Astronomy, University of California Berkeley, Berkeley CA 94720, USA ^6NASA Exoplanet Science Institute/Caltech-IPAC, Pasadena, CA 91125, USA ^7Leibniz Institute for Astrophysics Potsdam (AIP), An der Sternwarte 16, D-14482 Potsdam, Germany ^8Department of Physics and Astronomy, Tufts University, Medford, MA 02155, USA § ABSTRACT Precise and accurate mass and radius measurements of evolved stars are crucial to calibrating stellar models. Stars in detached eclipsing binaries (EBs) are excellent potential calibrators because their stellar parameters can be measured with fractional uncertainties of a few percent, independent of stellar models. The All-Sky Automated Survey for Supernovae (ASAS-SN) has identified tens of thousands of EBs, >35,000 of which were included in the ASAS-SN eclipsing binaries catalog. Here, we select eight EBs from this sample that contain giants based on their colors and absolute magnitudes. We use LBT/PEPSI, APF, and CHIRON to obtain multi-epoch spectra of these binaries and measure their radial velocities using two-dimensional cross-correlation methods. We simultaneously fit the ASAS-SN light curves and the radial velocities with to derive accurate and precise masses and radii with fractional uncertainties of ≲ 3%. For four systems, we also include Transiting Exoplanet Survey Satellite () light curves in our models, which significantly improves the radius determinations. In seven of our systems, both components have evolved off of the main sequence, and one system has a giant star component with a main sequence, Sun-like companion. Finally, we compare our mass and radius measurements to single-star evolutionary tracks and distinguish between systems that are first ascent red giant branch stars and those that are likely core helium-burning stars. [ D. M. Rowan 0000-0003-2431-981X^1,2, K. Z. Stanek 0009-0001-1470-8400^1,2, C. S. Kochanek ^1,2, Todd A. Thompson 0000-0003-2377-9574^1,2,3, T. Jayasinghe 0000-0002-6244-477X^4 J. Blaum 0000-0003-1142-3095^5, B. J. Fulton 0000-0003-3504-5316^6, I. Ilyin 0000-0002-0551-046X^7, H. Isaacson 0000-0002-0531-1073^5, N. LeBaron 0000-0002-2249-0595^5, Jessica R. Lu 0000-0001-9611-0009^5, David V. Martin 0000-0002-7595-6360^8 Received XXXX; Accepted YYYY ================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== § INTRODUCTION Detached eclipsing binaries (EBs) can be used to obtain the most accurate and precise measurements of stellar masses and radii without the need for stellar models. 
The EB light curve can be used to determine the orbital period, inclination, and the radii of the two stars relative to the orbital semimajor axis. With the addition of radial velocities for both components, the physical radii and masses can be determined with fractional uncertainties of a few percent <cit.>. Detached EBs have long served as powerful observational constraints to develop stellar evolution models, characterize exoplanets, and calibrate other methods of mass estimation. <cit.> compiled a sample of 95 detached eclipsing binaries with mass and radius uncertainties ≲ 3% and used these measurements to derive empirical relations between spectroscopic parameters and masses and radii. Their results have been widely used to make comparisons with stellar models <cit.> and to characterize exoplanets <cit.>. Despite the substantial expansion of photometric and spectroscopic surveys in recent years, only a few hundred detached eclipsing binaries have been fully characterized to precisions of a few percent <cit.>. Furthermore, the distribution of stars with dynamical mass and radius measurements is not uniform across the Hertzsprung-Russell diagram. In particular, only ∼16% of the eclipsing binaries included in the <cit.> catalog and only three stars in the <cit.> catalog are significantly evolved off the main sequence. The vast majority of the evolved binaries with precise mass and radius measurements are in the Magellanic Clouds <cit.>, and these systems serve as distance indicators in addition to being benchmarks for comparing with stellar models at low metallicity. There have been some eclipsing red giants identified in Kepler and the All-Sky Automated Survey <cit.> in the Milky Way field <cit.>, but more systems are needed to make direct comparisons to stellar models at a range of masses and metallicities. There are a number of physical processes in stars where mass and radius measurements can be used to constrain theoretical models. For example, convective overshoot in stars is expected to bring extra hydrogen from convective envelopes into the core, increasing the core mass and extending the main sequence lifetime. Dynamical mass and radius measurements for evolved stars can be used to determine the extent of this extra-mixing <cit.>. More generally, <cit.> compared dynamical mass measurements to predicted masses from stellar models and found larger differences for subgiant and red giant stars than for main sequence stars. Expanding the sample of evolved stars with dynamical mass measurements at a range of masses and metallicities will allow for more comprehensive comparisons to stellar models. Masses derived from eclipsing binaries can also be used to calibrate other mass estimation methods such as abundances <cit.> or asteroseismology <cit.>. The masses of evolved stars determined from eclipsing binaries provide benchmarks for asteroseismology where scaling relations are used to convert the oscillation frequencies into measure masses and radii <cit.>. <cit.> used Kepler photometry to identify oscillations in an eclipsing red giant and spectroscopic follow-up found that scaling relations were in agreement with the dynamical masses and radii <cit.>. <cit.> used a sample of 10 oscillating giants in Kepler eclipsing binaries and found that radii and masses were typically overestimated by ∼5% and ∼15%, respectively, when using asteroseismic scaling relations. Larger samples of oscillating giants in eclipsing binaries are needed to better calibrate these scaling relations. 
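For context, the asteroseismic scaling relations being calibrated here take a simple algebraic form. The minimal Python sketch below evaluates them; the solar reference constants and the example inputs are nominal, illustrative assumptions and not values from this work.

```python
def seismic_mass_radius(nu_max, delta_nu, teff,
                        nu_max_sun=3090.0, delta_nu_sun=135.1, teff_sun=5772.0):
    """Asteroseismic scaling relations (nu_max, delta_nu in muHz; Teff in K).

    M/Msun = (nu_max/nu_max_sun)^3 (delta_nu/delta_nu_sun)^-4 (Teff/Teff_sun)^1.5
    R/Rsun = (nu_max/nu_max_sun)   (delta_nu/delta_nu_sun)^-2 (Teff/Teff_sun)^0.5
    """
    f_nu = nu_max / nu_max_sun
    f_dnu = delta_nu / delta_nu_sun
    f_t = teff / teff_sun
    mass = f_nu**3 * f_dnu**-4 * f_t**1.5
    radius = f_nu * f_dnu**-2 * f_t**0.5
    return mass, radius

# Illustrative red-giant oscillation values (not measurements from this paper):
m, r = seismic_mass_radius(nu_max=30.0, delta_nu=3.5, teff=4800.0)
print(f"M ~ {m:.2f} Msun, R ~ {r:.1f} Rsun")   # roughly 1.5 Msun, 13 Rsun
```

Dynamical masses and radii from eclipsing binaries provide the external check on exactly these relations.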
Large, all-sky surveys can be used as a starting point to considerably expand the sample of detached EBs with precise mass and radius measurements <cit.>. More than 200,000 eclipsing binaries have been identified in light curves from the All-Sky Automated Survey for Supernovae <cit.> using machine learning methods <cit.>. <cit.> focused on the sample of detached eclipsing binaries from <cit.> and used PHysics Of Eclipsing BinariEs <cit.> to model >35,000 of their light curves. The ASAS-SN EB catalog includes parameter measurements such as the orbital eccentricity and ratio of effective temperatures, as well as estimates of the evolutionary state of the photometric primary based on photometry <cit.>, distances from <cit.>, and three-dimensional dust extinction maps from <cit.>. More than 600 systems in the catalog were classified as red giants. Here, we have obtained spectroscopic follow-up of eight of these bright, double-lined spectroscopic (SB2) EBs. Section <ref> describes how the targets were selected and the ASAS-SN and light curves. Section <ref> describes the spectroscopic observations and the radial velocity measurements. We simultaneously fit the ASAS-SN light curves and radial velocity measurements in Section <ref>. In Section <ref> we compare our derived masses and radii to evolutionary tracks, and we discuss the results for each target in <ref>. § TARGET SELECTION AND PHOTOMETRIC OBSERVATIONS The ASAS-SN EB catalog[<https://asas-sn.osu.edu/binaries>] includes parameter estimates for more than 35,000 detached eclipsing binaries. The catalog also reports the evolutionary state of the photometric primary based on the extinction-corrected color-magnitude diagram (CMD). We selected ten detached systems on the giant branch that are bright enough for easy spectroscopic follow-up (V ≲ 13 mag). Figure <ref> shows these targets on the CMD and Table <ref> reports their parameters. Two systems were found to be single-lined spectroscopic binaries (SB1s). Since dynamical masses can only be determined for SB2s, we discuss these two systems in Appendix <ref> and focus the rest of our analysis and discussion on the eight SB2s. We use light curves from the All-Sky Automated Survey for Supernovae <cit.>. These eclipsing binaries were identified and classified in the ASAS-SN Variable Stars Catalog <cit.> and further characterized in . ASAS-SN observed primarily in the V-band from 2012 to mid-2018. At the end of 2017, ASAS-SN switched to the g-band and added three additional telescope units. Here, we use only the ASAS-SN g-band data. The light curves are obtained from SkyPatrol V2 <cit.>. We do an additional inter-camera calibration using a damped random walk Gaussian process for interpolation to optimize the camera offsets <cit.>. Figure <ref> shows ASAS-SN light curves of the eight eclipsing binaries. It is immediately apparent that some of the targets show variability due to star spots in addition to the eclipses. This produces additional scatter in the out-of-eclipse variability in the phase-folded light curve. We detrend the light curves using a biweight filter as implemented in wotan <cit.> with a window matching the orbital period of the binary. The model trends are shown in red in Figure <ref> where they are used. All of our targets have been observed by the Transiting Exoplanet Survey Satellite <cit.>. We use the - Light Curve pipeline <cit.> to extract aperture photometry light curves from the full-frame images. 
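As an illustration of the light-curve detrending step described above, the following minimal sketch uses the wotan package with a biweight filter whose window matches the orbital period. The file name, column layout, and period are placeholder assumptions.

```python
import numpy as np
from wotan import flatten

# Placeholder input: an ASAS-SN g-band light curve with columns (time, flux)
time, flux = np.loadtxt("asassn_g_lc.csv", delimiter=",", unpack=True)
p_orb = 52.6  # orbital period in days (illustrative value)

# Biweight filter with a window equal to the orbital period;
# return_trend=True also returns the trend that is divided out.
flat_flux, trend = flatten(time, flux, method="biweight",
                           window_length=p_orb, return_trend=True)

# For the TESS light curves described below, a shorter 5-day window and the
# 'cosine' method are used instead, with the eclipses masked before fitting.
```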
Since our targets have orbital periods that exceed the length of an individual sector, half of our targets do not have visible eclipses in , or have eclipses that are cut off by the orbit/sector gap. Four of our targets have observations where one or both eclipses are visible in a single sector (Table <ref>). Because of the challenges in combining observations from different sectors, we only use one sector per target even if more than one sector is available. Figure <ref> shows the TGLC light curves for these four targets. We also apply a detrending procedure to the TGLC light curves. Since some of the trends observed in the TGLC light curves are due to systematic instrumental effects, we use a smaller window of 5 days and the cosine method in wotan, which allows us to mask out the eclipses during detrending. Figure <ref> shows the light curves and the trend models from wotan. While this detrending process is fairly effective at flattening the light curve so it can be fit with (Section <ref>), the large time windows masked during detrending, particularly for J1109 and J2236, limit our ability to remove systematic effects on timescales of <5 days. § SPECTROSCOPIC OBSERVATIONS AND RV MEASUREMENTS To measure the radial velocities and other properties of our target stars, we collected 88 spectra of eight binaries using three different spectroscopic instruments. Here we briefly describe the three instruments. We obtained high-resolution (R≈ 43,000) spectra for four targets with the Potsdam Echelle Polarimetric and Spectroscopic Instrument <cit.> on the Large Binocular Telescope. The observations used the 300μm fiber and two cross-dispersers (CDs) covering 4758–5416 Å (CD3) and 6244–7427 Å (CD5). The CD5 data is not used for RV determination because of its significant overlap with telluric features. Exposure times ranged from 200 to 1000 seconds. The 2D echelle spectra are processed following the procedure described in <cit.>. We obtained high-resolution (R≈ 80,000) spectra for four targets with the Automated Planet Finder (APF) Levy spectrograph on the Lick Observatory 2.4m <cit.>. The observations used the 2×3 Decker-T slit. The APF spectra have a wavelength range of 3730–10206Å and the raw 2D echelle spectra are reduced to 1D spectra through the California Planet Survey <cit.> pipeline. The 1D spectra are then de-spiked to remove signals from cosmic rays and blaze corrected by fitting polynomials to the continuum in each order. The APF observations had a typical exposure time of 1700 seconds. We selected 33 orders spanning 4600–7813Å for RV analysis, excluding orders affected by telluric lines. We obtained high-resolution spectra (R≈ 28,000) for four targets using CHIRON on the SMARTS 1.5m telescope <cit.>. Spectra are taken in the fiber mode using 4×4 pixel binning and a Th-Ar comparison lamp. As with the APF observations, the extracted 1D spectra are de-spiked to remove signals from cosmic rays and blaze corrected by fitting polynomials to the continuum in each order. The CHIRON observations used a typical exposure time of 500 seconds. We use 36 orders spanning 4700–7792Å for the RV analysis, avoiding regions containing telluric lines. We measure radial velocities for both binary components using a two-dimensional cross-correlation function <cit.>. The TODCOR method generalizes the one-dimensional cross-correlation function (1D-CCF) by applying two template spectra to compute the correlation function over a two-dimensional grid of velocity shifts. 
With TODCOR, radial velocities for both components can be determined even with small radial velocity differences or when the flux ratio of the two components F_2/F_1 ≪ 1. Many of our binaries have two well-separated velocity components where the 1D-CCF is effective at measuring accurate and precise RVs, but TODCOR is necessary for some systems and at some orbital phases. We therefore use TODCOR uniformly for all of our RV determinations. TODCOR requires two template spectra for the cross-correlation. For the best results, the template spectra should match the spectral types of the binary. We identified the best templates using a combination of spectral disentangling and empirical spectroscopic parameter estimation. First, we derive RVs using Solar-type templates (effective temperature T_eff=6000 K, surface gravity log g=4.0) using ATLAS model atmospheres <cit.> implemented in <cit.> for both stars. We calculate the two-dimensional cross-correlation function, ℛ, over a range of velocity shifts based on the 1D-CCF results. For the APF and CHIRON spectra, we apply TODCOR to each echelle order independently. We combine the TODCOR profiles from each order following the scheme described in <cit.>. We determine the RVs by maximizing ℛ with the Nelder-Mead algorithm as implemented in <cit.>. To determine RV uncertainties, we take slices through the maximum along each axis. We measure RV uncertainties as σ_RV^2 = - [ N (C''(ŝ)/C(ŝ)) (C^2(ŝ)/(1-C^2(ŝ))) ]^-1, following <cit.>, where C is the slice through ℛ, ŝ is the velocity shift at which ℛ is maximized, and N is the number of bins in the spectra. For the APF and CHIRON spectra where ℛ is combined from all the echelle orders, Equation <ref> is modified such that the factor N is replaced by NM, where M is the number of orders. Figure <ref> shows an example of the TODCOR profile for one of the J0611 spectra. After preliminary RVs have been derived using two Solar-type templates, we fit a Keplerian orbit model of the form RV_1(t) = γ + K_1 [cos(ω+f) + e cos ω] and RV_2(t) = γ - K_2 [cos(ω+f) + e cos ω], where γ is the center-of-mass velocity, K_1 and K_2 are the radial velocity semiamplitudes, f is the true anomaly, and ω is the argument of periastron. The true anomaly, f, is related to the eccentric anomaly, E, and the eccentricity, e, by cos f = (cos E - e)/(1 - e cos E), and the eccentric anomaly is given by Kepler's equation, E - e sin E = 2π(t-t_0)/P, where P is the period and t_0 is the time of periastron. We sample over the orbital parameters using Markov Chain Monte Carlo with emcee <cit.>. The orbital period of the binaries is well-constrained from the ASAS-SN light curve, so we set a Gaussian prior on the orbital period with σ=10^-3 P. We also include terms for the stellar jitter, s, of each component in the log-likelihood following TheJoker <cit.>. This term is included to model the effects of intrinsic stellar variability and underestimated radial velocity uncertainties. We re-scale the RV errors based on the measured stellar jitter from our RV orbit model to σ^2 = σ_RV^2 + s^2, where σ_RV is the measured RV uncertainty from Equation <ref>. After the preliminary RV orbit has been derived using Solar-type templates, we use FDBinary <cit.> to disentangle the observed spectra into component spectra. FDBinary can solve for both the RV orbit model and the spectra, but we fix the orbit at the solution from our MCMC model. The flux ratio, α, is needed for the components to be disentangled, and we estimate the flux ratio using the TODCOR profile.
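A short numerical sketch of evaluating the Keplerian orbit model above (solving Kepler's equation by Newton iteration and returning both RV curves) is given below; all parameter values are placeholders rather than fitted values from this work.

```python
import numpy as np

def solve_kepler(mean_anom, e, tol=1e-10, max_iter=100):
    """Solve E - e*sin(E) = M for the eccentric anomaly E via Newton's method."""
    E = np.where(e < 0.8, mean_anom, np.pi * np.ones_like(mean_anom))
    for _ in range(max_iter):
        dE = (E - e * np.sin(E) - mean_anom) / (1.0 - e * np.cos(E))
        E = E - dE
        if np.max(np.abs(dE)) < tol:
            break
    return E

def rv_model(t, P, t0, e, omega, K1, K2, gamma):
    """RV_1 = gamma + K1[cos(omega+f) + e cos(omega)]; RV_2 uses -K2."""
    M = 2.0 * np.pi * (t - t0) / P                      # mean anomaly
    E = solve_kepler(np.mod(M, 2.0 * np.pi), e)
    f = 2.0 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2),
                         np.sqrt(1 - e) * np.cos(E / 2))  # true anomaly
    shape = np.cos(omega + f) + e * np.cos(omega)
    return gamma + K1 * shape, gamma - K2 * shape

# Illustrative parameters (not a fit from this paper):
t = np.linspace(0.0, 60.0, 200)
rv1, rv2 = rv_model(t, P=52.6, t0=0.0, e=0.1, omega=0.5,
                    K1=45.0, K2=46.0, gamma=-10.0)
```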
As described in <cit.>, the cross-correlation functions can be used to estimate the flux ratio, α̂, where the cross-correlation function is maximized. We take the median value of α̂ across all spectra, excluding those taken in eclipse, to be the flux ratio for disentangling. We then run FDBinary separately for each echelle order. Figure <ref> shows an example of the disentangled spectral components of J0611 compared to one of the observed PEPSI spectra. The disentangled spectra have some sinusoidal continuum, which is a known artifact of the Fourier-based disentangling method <cit.>. We use <cit.> to normalize the disentangled components with a second-degree spline and remove this signal. We then use to estimate the effective temperature T_eff, the surface gravity log g, the metallicity [Fe/H], and the projected rotational velocity, v sin i of each disentangled component. The α-element enhancement, microturblent velocity, and macroturbulent velocity are all set using default empirical relations within . We fit the spectra in windows around 5150–5200 Å and 5125–5220 Å for APF/PEPSI and CHIRON targets, respectively. Table <ref> reports the best-fit parameters that we adopt for our templates and Figure <ref> shows an example of the disentangled spectra of J0611 and J1109. We generate synthetic templates using the model corresponding to the atmospheric parameters in Table <ref> over the full wavelength range of the APF/PEPSI/CHIRON spectra. We then repeat the TODCOR RV determination process described above using these templates. Figure <ref> compares the RVs derived with the best-fitting templates and the Solar type templates for two targets. While the difference in RVs is small, we find that the choice of template can introduce systematic effects on the final RV measurements at the ∼ 100 m/s level. Table <ref> reports our RV measurements for all targets. § PHOEBE MODELS We use the Physics Of Eclipsing BinariES <cit.> modeling tool to simultaneously fit the ASAS-SN light curves and the RV observations to measure masses and radii. has been used extensively for eclipsing binaries of various morphologies, including contact binary systems <cit.>, semi-detached binaries <cit.>, detached binaries <cit.>, and ellipsoidal variables <cit.>. We start by using the light curve and radial velocity geometry estimators within to get an initial guess for the orbital and stellar parameters. Then, we use Nelder-Mead optimization method within to optimize the solution and determine the starting point for our MCMC walkers. We sample over orbital parameters (P, t_0, e, i, ω, γ) and stellar parameters (M_1, q, R_1, R_2, T_eff,2/T_eff,1). We also sample over the T_eff,1 to allow the models to account for temperature effects in limb-darkening, but do not report the posteriors on T_eff,1 since the individual temperatures are only well constrained when fitting light curves in multiple filters that cover different wavelength ranges. Similarly, we sample over the passband luminosity, which controls the scaling of the absolute fluxes computed by to the normalized fluxes <cit.>. For targets with both APF and PEPSI observations, we include an RV offset parameter to account for any differences in the RV zeropoint of the two spectrographs. Finally, if there is additional light in the photometry from a physical tertiary companion, blended light from nearby stars, or a poorly estimated sky background, the light curve can be “diluted” and the inclination underestimated. 
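To illustrate the kind of forward model being optimized in this section, the sketch below builds a detached binary in PHOEBE 2 and computes synthetic light and RV curves. The parameter "twigs" follow PHOEBE 2's naming scheme, but the values are placeholders, the exact twig strings may need adjusting for a given PHOEBE version, and this is not the configuration adopted in the paper.

```python
import numpy as np
import phoebe

b = phoebe.default_binary()

# Placeholder system parameters (illustrative only)
b.set_value(qualifier="period", component="binary", context="component", value=52.6)
b.set_value(qualifier="q", component="binary", context="component", value=0.98)
b.set_value(qualifier="ecc", component="binary", context="component", value=0.0)
b.set_value(qualifier="incl", component="binary", context="component", value=88.0)
b.set_value(qualifier="requiv", component="primary", context="component", value=11.0)
b.set_value(qualifier="requiv", component="secondary", context="component", value=10.5)

# Synthetic light curve and RV curves sampled over one orbit
b.add_dataset("lc", times=np.linspace(0.0, 52.6, 300), dataset="lc01")
b.add_dataset("rv", times=np.linspace(0.0, 52.6, 40), dataset="rv01")

b.run_compute(model="fwd")
fluxes = b.get_value(qualifier="fluxes", dataset="lc01", model="fwd", context="model")
rv_primary = b.get_value(qualifier="rvs", component="primary",
                         dataset="rv01", model="fwd", context="model")
```

In the actual analysis, a model of this type is wrapped in the optimizer and MCMC samplers described next.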
To test for the possible impact of this “third light” in the system, we fit two sets of models with and without a fractional third-light parameter, l_3. We discuss possible sources of third light in more detail for each target below. We sample over these n=13–15 parameters, depending on whether l_3 and an RV offset are included, and use 2n walkers. We start by running MCMC for 20,000 iterations with the ellc backend <cit.>. We visually inspect the walker probabilities to select an appropriate burn-in period for each target. Burn-in periods are chosen to be between three and five times the maximum autocorrelation time, which corresponds to ∼5000 iterations. For three targets, we manually remove one or two walkers that have low probability and have not converged. We resample from this MCMC run and run another 5,000 iterations using the PHOEBE backend, which is more accurate but more computationally expensive. Table <ref> reports the median posterior values and 1σ uncertainties for the four binaries without TESS data. The light curve fits are shown in Figure <ref> and we show an example of the MCMC posteriors in Figure <ref>. For the four systems with eclipses in the TESS light curves, we run models that also include the TESS data. We initialize the MCMC walkers continuing from the posteriors obtained with just the ASAS-SN and RV data (Table <ref>). We then add in the detrended TESS light curve, binning the out-of-eclipse observations to reduce the computational cost. We add two additional parameters for the passband luminosity and fractional third light in the T-band. Here we use the ellc backend in PHOEBE, rather than the default PHOEBE backend, since the latter is computationally expensive with the high-cadence TESS light curves. We run the MCMC for 10,000 iterations, but find that the walkers converge quickly since they are starting from the ASAS-SN MCMC solution. Table <ref> reports the MCMC posteriors for the ASAS-SN and ASAS-SN + TESS fits to four EBs and Figure <ref> shows the light curve fits. There are some statistically significant differences between the two models, which we discuss for each individual system below. Unsurprisingly, adding in the TESS light curves generally decreases the uncertainty on the stellar radii, since the eclipse ingress and egress times are very well-constrained with the high-cadence light curves. Figure <ref> shows an example of a corner plot comparing the MCMC posteriors using just the ASAS-SN light curve and after adding in the TESS light curve for J0611. § COMPARISONS TO EVOLUTIONARY TRACKS Unless a binary has formed through interactions in dense stellar environments, we expect the two stellar components to have the same age and metallicity. Here, we verify that this is the case by comparing our measured masses and radii to theoretical evolutionary tracks from MIST <cit.>. We download MIST evolutionary tracks[<https://waps.cfa.harvard.edu/MIST/interp_tracks.html>] for stars corresponding to our measured masses. For each binary component, we draw 1000 mass and radius samples from our posteriors (Tables <ref> and <ref>). Since not all targets have TESS data, and spots and detrending complicate the analysis of the TESS light curves, we adopt the posteriors corresponding to the ASAS-SN+RV model with third light as our final mass and radius measurements. For each sample, we determine the age when a star of mass M will have radius R. If there are multiple ages for a given M and R sample, we randomly select one and construct age posteriors.
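A minimal sketch of this age-sampling step for a single component is shown below. The toy track array stands in for an interpolated MIST track at approximately the measured mass, and the radius samples are placeholders rather than posteriors from this work; column conventions of the real MIST files are not assumed.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for an interpolated evolutionary track: age [Gyr] vs radius [Rsun],
# with a toy RGB-like expansion at late times.
track_age = np.linspace(0.01, 4.0, 5000)
track_radius = np.interp(track_age, [0.01, 3.0, 3.5, 3.8, 4.0],
                         [1.3, 1.6, 3.0, 12.0, 40.0])

# Placeholder radius posterior samples for one component
radius_samples = rng.normal(11.0, 0.3, size=1000)

ages = []
for r in radius_samples:
    # indices where the track radius crosses the sampled radius
    crossing = np.where(np.diff(np.sign(track_radius - r)) != 0)[0]
    if crossing.size == 0:
        continue
    i = rng.choice(crossing)   # if several ages are possible, pick one at random
    frac = (r - track_radius[i]) / (track_radius[i + 1] - track_radius[i])
    ages.append(track_age[i] + frac * (track_age[i + 1] - track_age[i]))

age_samples = np.array(ages)
print(np.percentile(age_samples, [16, 50, 84]))   # age posterior summary [Gyr]
```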
We fit a one- or two-component Gaussian model to the age distribution to estimate the age and uncertainties. We then compare the age posteriors of the two binary components. We do this both for Solar metallicity and the [Fe/H] estimated for the RV templates (Table <ref>). We assume negligible mass loss since the expected mass loss is smaller than our mass uncertainties. Figures <ref> and <ref> show the evolutionary tracks for each target. The smaller panels show the age posteriors for each component. For stars where the measured mass and radius could be consistent with either the first ascent up the red giant branch (RGB) or the core He-burning sequence, there are two possible stellar ages. Although the relative amplitudes of the corresponding peaks in the age posteriors differ, we do not know a priori whether a given star is a first ascent RGB star or a core He-burning star, and both have equal probability. For some systems, we can determine a more specific evolutionary state by combining information on the ages from both components. We discuss each individual system below. For systems that may be in the core He-burning stage, it is important to consider whether they could have interacted in the past, even if the systems are currently observed as detached binaries. We use the Eggleton approximation <cit.> to estimate the Roche-lobe radius, R_Roche/a ≈ 0.49 q^-2/3 / [0.6 q^-2/3 + ln(1+q^-1/3)], where q is the mass ratio and a is the binary semimajor axis. The filling factor f is then f=R/R_Roche, where f>1 indicates that the star has overflowed its Roche lobe. For the systems that could have core He-burning stars, we compute the maximum f the stars reach when ascending the giant branch in the MIST models and estimate how long a star has f>1.0. § DISCUSSION OF INDIVIDUAL TARGETS Figure <ref> shows the mass and radius measurements for the 16 stars compared to the <cit.> catalog. Here we briefly discuss each system. §.§ J0611: GDR3 3328584192518301184 (ASASSN-V J061119.27+082957.4) has components and Despite the small difference in mass between the two components (∼ 0.06 M_⊙), the radius difference is large, ∼ 6.4 R_⊙. However, as compared to the other targets, has shallower eclipses, leading to larger uncertainties on the radii. This target is also the faintest in the sample (Table <ref>), but the relatively poor light curve fit is due to the lower inclination of ∼ 87^∘ rather than photometric uncertainties. The model that includes third light prefers a solution with l_3=0.16±0.10, but both models predict masses and radii that are consistent within their uncertainties. We use the ATLAS All-Sky Reference Catalog <cit.> to search for nearby stars that could contribute to the ASAS-SN flux and dilute the observed light curve. There is a nearby star (GDR3 3328584196815079168), 45 from , that has g=16.8 mag, and there is another g=16.5 mag star (GDR3 3328584231177668736) that is separated by 110. These nearby stars, which are Δ m ≈ 3.8 mag fainter than , are too faint to explain the estimated fractional third light contribution of l_3 = 0.16 in the g-band, but we note that the uncertainties on l_3 are large and consistent with zero at ∼ 1.5σ. also has a TESS light curve (TIC 166929994). The Sector 33 light curve shown in Figure <ref> contains only one eclipse. In general, both primary and secondary eclipses are needed to measure both masses and radii in an EB, but here we simultaneously fit the TESS data with the ASAS-SN light curve, which covers both eclipses.
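Stepping back to the Roche-lobe filling factors used throughout this section: the Eggleton approximation above is straightforward to evaluate once Kepler's third law supplies the semimajor axis. A minimal helper is sketched below; all input values are illustrative rather than measurements from this work, and the Eggleton formula is written with the natural logarithm.

```python
import numpy as np

G = 6.674e-8                      # cgs
MSUN, RSUN = 1.989e33, 6.957e10   # g, cm
DAY = 86400.0                     # s

def roche_radius_over_a(q):
    """Eggleton approximation, with q = M_other / M_star for the star in question."""
    return 0.49 * q**(-2.0 / 3.0) / (0.6 * q**(-2.0 / 3.0) + np.log(1.0 + q**(-1.0 / 3.0)))

def filling_factor(m1, m2, p_days, r1):
    """f = R1 / R_Roche,1 for star 1 (masses, radius in solar units; period in days)."""
    a = (G * (m1 + m2) * MSUN * (p_days * DAY / (2.0 * np.pi))**2)**(1.0 / 3.0)  # Kepler III
    q = m2 / m1
    return (r1 * RSUN) / (roche_radius_over_a(q) * a)

# Illustrative values (not measurements from this paper):
print(filling_factor(m1=2.0, m2=2.0, p_days=52.6, r1=15.0))   # ~0.4, i.e. well detached
```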
The high-cadence light curve gives more precise eclipse times and improves the fit to the shape of the eclipse. This is especially important for this target, which has the shallowest eclipses in the sample. Figure <ref> shows a comparison of the MCMC posteriors between the models with and without the light curve. The addition of the light curve improves the orbital inclination determination, which improves the mass and radius posteriors. Figure <ref> shows the light curve fit, which has residuals consistent with noise. Figure <ref>a shows the evolutionary tracks corresponding to the models of J0611. The evolutionary models are consistent with a system where the primary is on the first ascent of the RGB and the secondary has just evolved off the main sequence. §.§ J0656: GDR3 3157581134781556480 The two components of (ASASSN-V J065618.52+092626.8) have very similar masses, and The radii ( and ) differ by ∼ 0.2 R_⊙, but agree within the uncertainties. The ASAS-SN eclipses are deep (>30%) and have roughly equal depth, so the effective temperature ratio is close to one. This target has a large fractional third light that is significantly greater than zero, l_3 = 0.25±0.02. There is a nearby star, GDR3 3157581139074431488, that is separated by 415 with g=15.8 mag. Even if a star of that magnitude was entirely under the ASAS-SN PSF, it would only contribute ∼ 6% to the total flux. The next nearest star is 88 away and is g=19.6 mag. There is also no evidence for a wide, bound companion in DR3. The renormalized unit weight error is =1.2, and an additional resolved companion was not identified in any observations (=0). There is no published astrometric orbit solution. Additional RV observations could be used to search for evidence of a tertiary companion in the RV orbit model residuals. The model run without a third light component predicts a smaller inclination (88.6^∘ versus 85.4^∘, which increases the masses of both components, but the two sets of mass measurements are consistent within 1σ. Figure <ref>b shows the evolutionary tracks of J0656. This is a twin system, so the evolutionary tracks overlap for the entire evolution. Based on the measured stellar radii, the system could be either on the first ascent of the RGB or a core He-burning star. If the system is a core He-burning star, the radii were previously much larger and mass transfer could have occurred. We use the MIST evolutionary tracks to compute the Roche-lobe filling factor f (Equation <ref>). We find that both components would have filled their Roche-lobes immediately before He burning began with maximum f=1.05. However, this period of mass transfer would be brief, lasting ≲ 0.3 Myr. More detailed binary evolution models would be necessary to determine how mass transfer could alter the evolution of this system. §.§ J1108: GDR3 5388654952421552768 (ASASSN-V J110800.86-440658.9) is one of two eccentric binaries in the sample (e=0.26). The primary and secondary masses are consistent within 1σ, with and . However, the radii differ by more than a factor of two with and . The temperatures of the stars are also different by ∼ 9%, with the secondary being the hotter of the pair. Based on the location on the CMD (Figure <ref>) and the mass-radius figure (Figure <ref>), the binary has just evolved off of the main sequence, so it is not surprising that a small difference between the masses of the two components results in a large difference in radius. 
is also one of the three targets with additional rotational variability in the light curve (Figure <ref>). This target is detected as an X-ray source in eROSITA <cit.> with a separation of 44 and is classified as a coronal emitting source <cit.>. was observed by the RAdial Velocity Experiment <cit.> and found to be chromospherically active based on the CaII triplet <cit.>. We use the geometry estimator to mask out the eclipses and search for periodicity in the non-detrended ASAS-SN light curve. There is periodic variability at P=65.46 d, which is ∼ 2 times the orbital period. This suggests that the stars are tidally synchronized even if the orbit is not tidally circularized. For a q=1 binary at P=32.4 days, the difference between the tidal circularization and tidal synchronization timescales is only ≈ 10^4 years <cit.>. As the giants continue to evolve and R increases, the circularization timescale will decrease further. (TIC 71877648) was observed by in four sectors. Figure <ref> shows the Sector 63 light curve, which includes both eclipses. One of the eclipses occurs close to the end of the Sector, which could introduce systematic effects on the eclipse shape. The light curve (Figure <ref>) shows that the model fits the deeper eclipse well. The flat-bottomed eclipse, indicating a total eclipse, is much more apparent in the light curve than in the ASAS-SN data (Figure <ref>). As with J0611, the light curve improves the inclination constraint, leading to better-constrained masses and radii (Table <ref>). The shallower eclipse shows a clear asymmetry in the light curve, which introduces correlated residuals on either side of the minimum. This is likely due to star spots, which create asymmetric eclipse profiles as the star transits the non-uniform stellar disk. This is consistent with the evidence for chromospheric activity from eROSITA and RAVE. We investigated the other three sectors and find that the eclipse changes shape, which is expected since the spots evolve over time, although it is difficult to disentangle this astrophysical asymmetry origin from systematic effects in the detrending procedure. While it may be possible to simultaneously fit for the spot parameters and improve the fit to this eclipsing feature, we do not do so here because of the degeneracies in modeling the light curves of spotted stars <cit.>. Figure <ref>c shows the evolutionary tracks of . The evolutionary tracks indicate that both stars have only recently evolved off of the main sequence, which is consistent with their CMD position. At [Fe/H]=0.18, the MIST tracks predict an age ≈ 10.5 Gyr. At Solar metallicity, the binary age is predicted to be slightly younger, ≈ 8.9 Gyr. This is one of the two low mass (∼ 1 M_⊙) binaries in our sample, and it is unsurprising that it is older than the higher mass systems despite being relatively less evolved. Based on the dynamical mass and radius, log g=3.15, so the primary has likely just completed first dredge-up <cit.>. §.§ J1109: GDR3 5347923063144824448 (ASASSN-V J110949.25-521017.0) has two stars of similar mass, and in a circular orbit. The radii of the two stars are and . The phase folded light curves show significant scatter even after removing long-term variations in the ASAS-SN light curve (Figure <ref>). Unlike J1108 and J1705, where the wotan trend shows periodic variability on the timescale of the orbital period, the photometry of is dominated by a long-term trend and a sudden jump in brightness around JD=2459650. 
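The spot-modulation periods quoted for the chromospherically active systems (e.g., the 65.46-day signal for J1108 above) come from a periodicity search on the eclipse-masked, non-detrended ASAS-SN photometry. A minimal sketch using astropy's Lomb-Scargle periodogram is shown below; the file handling and the assumption that eclipses are already masked are placeholders.

```python
import numpy as np
from astropy.timeseries import LombScargle

# Placeholder light curve (time, mag, mag_err) with in-eclipse points removed
t, mag, err = np.loadtxt("asassn_masked_lc.dat", unpack=True)

# Search periods between 1 and 200 days
frequency, power = LombScargle(t, mag, err).autopower(
    minimum_frequency=1.0 / 200.0, maximum_frequency=1.0)

best_period = 1.0 / frequency[np.argmax(power)]
print(f"best period: {best_period:.2f} d")
```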
Since this target is g∼11.4 mag, this trend is likely due to systematic effects for stars approaching the ASAS-SN saturation limit (g<11.9 mag, see Figure 3 of ). However, like J1108, this source is listed as a chromospherically active star in RAVE <cit.> and detected as an X-ray coronal source in eROSITA <cit.> with a separation of 04. Whether the scatter is from systematic effects in ASAS-SN or chromospheric activity, the orbital and stellar parameters are well-constrained with fractional uncertainties of ≲ 1%. is the brightest target in our sample and is also the only one to be included in the DR3 catalog of SB2 orbit models <cit.>. The SB2 RV orbit model fits 13 epochs and finds K_1=52.4±1.1 km/s and K_2=44.8±0.8 km/s, so q=0.85. This is significantly less than the q=1 we measure from our RV observations and model. <cit.> used the RV orbit model with the ASAS-SN light curve to measure masses and radii of 61 binaries, including , and consequently reported mass and radius measurements that disagree with those we report here. Unfortunately, only the RV orbit model is included in DR3, and the epoch RV measurements from are unavailable, limiting our ability to make a more direct comparison between the two sets of measurements. However, <cit.> also found that ∼ 50% of the RV orbit models for ASAS-SN eclipsing binaries had incorrect periods or eccentricities, so it may not be surprising that some RV orbit models have inaccurate velocity semi-amplitudes as well. The MCMC posteriors suggest a large fractional third light, l_3 = 0.21. There is a nearby star, GDR3 5347923097493539072 that is separated by 37. This nearby star has g=13.1 mag, which cannot contribute 21% extra flux in the ASAS-SN photometry. There is no evidence for a tertiary companion to in the RUWE or ipd_frac_multi_peak statistics. The model that does not include third light predicts a lower inclination (83.4^∘ versus 86.1^∘), but the masses are only increased slightly and are consistent within their uncertainties. (TIC 81462274) has been observed in five sectors. Figure <ref> shows the Sector 63 light curve where both eclipses were observed. As compared to the fits that only used the ASAS-SN light curve, the combined ASAS-SN and model prefers a higher inclination (88.6^∘ versus 86.1^∘). As a result, both component masses decrease by ∼ 0.03 M_⊙. As compared to J0611 and J1108, the eclipses of this target are wider, which could introduce systematic effects in the detrending process since the eclipses are masked. Figure <ref> shows the light curve and model. We find there are some correlated residual features, but the scale of these residuals is small (<2.5%) relative to the photometric errors, so it is reasonable to conclude that MCMC walkers have converged on this solution. Figure <ref>d shows the evolutionary tracks for . The components have masses consistent with each other within their uncertainties and the evolutionary tracks overlap for the full age range. The evolutionary tracks show that a star of mass ∼ 1.4 M_⊙ does not shrink back to ∼ 11 R_⊙ following the He-flash, so the components of J1109 are probably first ascent RGB stars. 10 §.§ J1329: GDR3 6188279177469245952 (ASASSN-V J132912.67-283324.7) is the only one of our systems where one component is still on the main sequence. This is also the only other target in the sample besides J1108 to have an eccentric orbit with e=0.142. 
The photometric primary has and The secondary is a Sun-like star with and It follows that this binary has the faintest absolute magnitude out of the sample, M_G=2.6 mag (Table <ref>) and is near the boundary between subgiant and giant binaries based on the criteria used in (Figure <ref>). This target was included in the DR3 catalog of single-lined spectroscopic binaries <cit.>. The RV orbit solution is P=37.3±0.03 km/s, e=0.16±0.02, and K_1=38.4±0.4 km/s. The orbital period and eccentricity are consistent with our solution to within 1σ, and the velocity semi-amplitude is consistent to within 2σ. J1329 (Figure <ref>a) has the largest difference between the age estimates of the two components, though they do agree within 1σ. The primary of this system has just evolved off the main sequence and the companion is towards the end of its main sequence lifetime. As a result, the age posterior on the secondary is fairly broad. Figure <ref> shows the evolutionary tracks at super-Solar metallicity [Fe/H]=+0.5, as measured from the iSpec disentangled spectra (Table <ref>). If we instead use evolutionary tracks at Solar metallicity, the age posteriors do not agree within 1σ. Based on the surface gravity of the photometric primary, log g≈ 3.5 from the dynamical mass and radius, the star has likely only just begun first-dredge up <cit.>, so it is unlikely that additional constraints on its age could be determined from surface abundances. As with J1108, this system must be old (>8 Gyr) in order to observe the low mass binary near the start of its first ascent up the RGB. We would naively expect older stars to be metal poor, but RAVE reports a metallicity [Fe/H]=0.24 <cit.>, and uses a super-Solar metallicity template ([Fe/H]=0.25) for their RV measurements, supporting the higher metallicity we measure for the RV templates. §.§ J1705: GDR3 5966976692576953216 (V603 Sco) is a twin system with masses and and radii and . This is also the third and final target to show additional variability in the light curve, though at much lower amplitude than the other two targets (Figure <ref>). We mask out the eclipses in the non-detrended light curve and find a periodic signal at P=51.42 days. This is only slightly less than the orbital period of the binary P=52.62 days, suggesting the binary is nearly tidally synchronized. This target was detected by eROSITA in the 0.2–2.3 keV band with a separation of 26 <cit.>, but it is not included in the eROSITA coronal source catalog. The ASAS-SN binary stars catalog incorrectly reports the orbital period for this system to be ∼ 26.3 days, which is roughly half of the orbital period we report here. The estimates of the other orbital and stellar parameters reported in are also likely unreliable for this target. The evolutionary tracks of J1705 (Figure <ref>b) show that our measurements are inconsistent with both stars being first ascent RGB stars. Instead, both components are more likely core He-burning stars. The MIST evolutionary tracks show that if both stars are core He-burning, the maximum Roche-lobe filling factors of the primary and secondary on the first ascent of the RGB would be f≈0.83 and f≈0.77 for the primary and secondary star, respectively, so mass transfer is unlikely to have occurred. §.§ J2107: GDR3 1969468871480562560 (ASASSN-V J210726.63+421401.3) is the highest mass binary in our sample, with and and it has the brightest absolute G-band magnitude (Figure <ref>). Despite the mass difference of ∼ 0.17 M_⊙, the model prefers two stars with similar radii, and . 
The model also prefers a large fractional third light in the g-band, l_3=0.29^+0.02_-0.03. The nearest star is separated by 66 with g∼18.8 mag, which is Δ g = 5.4 mag fainter than the target. While the RV model has larger residuals for this target compared to other binaries, the ASAS-SN light curve shows deep, sharp eclipses and the model light curve residuals are consistent with noise. If the third light contribution is from a bound tertiary companion, we might expect to see long-term trends in the RV residuals. The PEPSI observations, which were all taken between 120 and 300 days after the APF observations, do all have negative residuals, but additional observations would be needed to model the dynamical effects from a potential third body. The model run without including third light finds a lower inclination and masses that are larger by >2σ. In this model, the radii are no longer equal, with the less massive secondary having a larger radius. We investigate evolutionary tracks for both sets of mass and radius measurements. Figure <ref>c shows the evolutionary tracks for the models including l_3 at [Fe/H]=+0.5. This shows our mass and radius measurements are consistent with a system where the primary star has started core-He burning and the secondary is on the first ascent of the RGB. This is also true when we use the mass and radius measurements from the model that does not include l_3, but the age estimates of the two stars are only consistent at the ∼ 1.5σ level. In both cases, the MIST evolutionary tracks suggest that the primary likely filled its Roche-lobe when it was first expanding on the RGB. The filling factor of the primary reached a maximum f≈1.1 and was f>1.0 for ∼ 0.75 Myr. Finally, was detected by Chandra with an X-ray-optical source separation of 03 <cit.> and it is classified as a high-mass star with classification probability P_class=0.5 in <cit.> based on a combination of X-ray features and optical/near-IR photometry. Unlike the other X-ray sources detected by eROSITA, does not show any large-amplitude, long-term variability in the full ASAS-SN light curve (Figure <ref>). §.§ J2236: GDR3 2002164086682203904 Finally, (AK Lac) is a twin system with nearly equal mass components of and . The radii differ by ∼ 0.4 R_⊙, and the temperature ratio is consistent with unity. This is also the most circular orbit, with e=0.0007^+0.0003_-0.0002. The fractional third light is l_3=0.09±0.01. There are three nearby stars separated by <100, but all have g>18.8 mag and are unlikely to contaminate the ASAS-SN flux, so the third light is likely from the sky background. (TIC 428064231) was observed in six sectors and Figure <ref> shows the TGLC light curve for Sector 76, which includes both eclipses. The detrended light curve (Figure <ref>) shows eclipses with a fractional depth of 0.5 in flux, compared to 0.42 in ASAS-SN (Figure <ref>). As a result, the model that includes the light curve prefers a completely edge-on inclination, i=90.00^∘±0.02^∘. The slightly higher inclination pushes the masses to decrease and both radii to increase. This total eclipse indicated by the light curve also requires a much higher third light in the g-band to effectively dilute the flux and produce the observed eclipse. There are large, correlated residuals in the light curve model of , but we are unable to find a better solution with . 
It is possible that systematic effects in the light curve pipeline or detrending procedure have altered the shape of the eclipse, producing nearly symmetric residuals on each side of the minima. Figure <ref>d shows the evolutionary tracks of J2236. The measured radii are consistent with these stars either being first ascent RGB stars or core He-burning stars. If the stars are core He-burning, they likely filled their Roche-lobes when they expanded up the RGB. The MIST evolutionary tracks indicate that they reached maximum Roche-lobe filling factors of f≈ 1.09 and had f>1 for ∼ 0.77 Myr. Binary stellar evolution models could be used to determine how much mass transfer could have occurred in this system and how the subsequent evolution differs from standard single-star evolution. § DISCUSSION AND CONCLUSIONS Here we have characterized eight eclipsing binary systems on the giant branch selected from the ASAS-SN eclipsing binaries catalog (Figure <ref>; ). We use PEPSI, APF, and CHIRON to obtain multi-epoch spectra of these targets and then use a combination of spectral disentangling (Figure <ref>) and two-dimensional cross-correlations (Figure <ref>) to measure the velocities of both stellar components. We then use to simultaneously fit the ASAS-SN light curves and the radial velocities to measure both masses and radii with fractional uncertainties of ≲ 3%. For four systems, we also model the light curves (Figures <ref> and <ref>), which improves the constraints on the stellar radii by more than a factor of two (Table <ref>). The light curves do introduce additional challenges. For J1108, J1109, and J2236, we find correlated residuals from the light curve model. For the J1108, this could be due to spots which produce an asymmetric eclipse shape. For J1109 and J2236, the eclipses last for ≳ 20% of the Sector, which could introduce systematic errors from the detrending process. We report our final mass and radius measurements for all systems from the ASAS-SN+RV model that includes the fractional third light parameter. Out of our eight systems, six are on circular orbits. Both of the eccentric systems are lower mass (Figure <ref>), which could reflect the difference in tidal circularization timescales between stars with convective and radiative envelopes. Three of our systems (Sections <ref>, <ref>, and <ref>) also show evidence for chromospheric activity in the ASAS-SN light curves (Figure <ref>). All three are detected eROSITA X-ray observations, and two have chromospheric emission line features in RAVE. For J1109, we also see evidence for asymmetry during the eclipse in the light curve, which we attribute to star spots. We report the projected rotational velocity measured from the disentangled spectra in Table <ref>. For the six binaries on circular orbits, we find that the projected rotational velocity of our fit RV templates is consistent with 2π R/P within ∼ 5 km/s, so the binaries are likely tidally locked. For the two eccentric systems, J1108 and J1329, the measured vsin i is ∼ 10–15 km/s larger than 2π R/P, and these systems are therefore neither synchronized nor circularized. Figure <ref> shows our mass and radius measurements compared to the <cit.> catalog. The majority of our systems have evolved substantially off of the main sequence, and only one binary (J1329, Section <ref>) has a component still firmly on the main sequence. Since the release of the catalog in <cit.>, the majority of dynamical measurements of evolved EBs have come from the Magellanic Clouds. 
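As an aside on the tidal-synchronization check described a few sentences above, the comparison reduces to one line of arithmetic between the measured v sin i and the synchronous rotation speed 2πR/P. A small worked example with placeholder values:

```python
import numpy as np

RSUN_KM = 6.957e5   # km
DAY_S = 86400.0     # s

def v_sync(radius_rsun, period_days):
    """Equatorial rotation speed (km/s) for spin synchronized with the orbit."""
    return 2.0 * np.pi * radius_rsun * RSUN_KM / (period_days * DAY_S)

# Illustrative values (not measurements from this paper):
print(f"2*pi*R/P = {v_sync(10.0, 32.0):.1f} km/s")   # ~15.8 km/s
# A measured v sin i well above this, as for the two eccentric systems,
# indicates the star is not tidally synchronized.
```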
Figure <ref> shows our systems compared to the updated sample from <cit.>. The vast majority of the evolved stars are in the LMC/SMC <cit.>. These binaries were primarily targeted to determine precise distances to the LMC/SMC, but are also probes of stellar evolution at lower metallicities. However, measurements of stellar parameters at a range of metallicities are needed to make complete comparisons to stellar theory. Figure <ref> shows how our measurements probe a new part of the parameter space, not only for evolved stars at near-Solar metallicities, but also at smaller stellar radii. The <cit.> catalog also includes Galactic eclipsing red giants identified in Kepler <cit.>, some of which have been asteroseismically characterized as well. The Kepler eclipsing red giants are mostly less massive (∼ 1–1.5 M_⊙) than our sample (Figure <ref>). We compare our mass and radius measurements to MIST evolutionary tracks in Figures <ref> and <ref>. For all systems we find that the ages predicted from evolutionary tracks given our mass and radius measurements agree for both components. In some cases, we can distinguish between systems that are core-helium burning based on the measured radii. Since the exact ages depend strongly on the metallicity of the targets, which we only roughly estimate here for the purpose of RV template determination, we do not report our age posteriors and leave more detailed comparisons to theoretical models to future study. Two of our systems (J1108 and J1329) must be at least ≳ 8 Gyr old, given that they are low mass (∼ 1 M_⊙) and have evolved off the main sequence. We use BanyanΣ <cit.> and the criteria for thin disk, thick disk, and halo membership from <cit.> to compute membership probabilities for J1108 and J1329. Both systems are consistent with being part of the Galactic thin disk with probabilities >99%. The evolutionary tracks of four of our systems (J0656, J1705, J2107, and J2236) show that one or both binary components could be core He-burning stars or first ascent RGB stars. If these stars are core He-burning, their radii were larger earlier in their evolution. Therefore, even if the systems are all observed as detached, non-interacting binaries now, they could have undergone mass transfer in the past. We use the MIST evolutionary tracks and the Eggleton approximation for Roche-lobe radii to determine if these stars could have filled their Roche-lobes and transferred mass. We find that three systems (J0656, J2107, and J2236) likely interacted in the past if they are currently core-He burning stars rather than first ascent RGB stars. We estimate the amount of time the stars had Roche-lobe filling factors f>1 and find it was likely brief (∼ 0.3–0.8 Myr). More detailed binary evolution models would be needed to see how possible mass transfer could have altered the evolutionary pathways of the stars in these systems. Additional age estimates from asteroseismology could also be useful to independently determine whether these stars have started He-burning. For example, <cit.> showed that red clump stars can be discriminated from RGB stars based on the period spacing and frequency spacing measured from asteroseismology. We are continuing to target more ASAS-SN eclipsing binaries to expand the sample of evolved stars with mass and radius measurements. Large spectroscopic surveys can also be used to characterize eclipsing binaries and measure masses and radii on a large scale. 
Spectra from the Apache Point Observatory Galaxy Evolution Explorer <cit.> have been used to identify >7,000 SB2s <cit.>, and with future data releases from Milky Way Mapper <cit.>, we can expect to have enough epochs (>6) for tens to hundreds of eclipsing binaries. DR3 also includes ∼ 5000 RV orbit models for SB2s, and the next data release is expected to include epoch RV measurements, allowing for simultaneous fitting of the RVs and ASAS-SN and photometry. The number of detached EBs with precise mass and radius measurements is small relative to their importance as direct calibrators of stellar models. Large photometric and spectroscopic surveys are a promising path to expand this sample considerably, especially for parts of the parameter space where few previous measurements exist. § ACKNOWLEDGEMENTS We thank Jack Roberts, Casey Lam, and Marc Pinsonneault for helpful discussions. DMR is supported by the OSU Presidential Fellowship. CSK and KZS are supported by NSF grants AST-1907570, 2307385, and 2407206. We thank Las Cumbres Observatory and its staff for their continued support of ASAS-SN. ASAS-SN is funded by Gordon and Betty Moore Foundation grants GBMF5490 and GBMF10501 and the Alfred P. Sloan Foundation grant G-2021-14192. This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA's Science Mission Directorate. This work presents results from the European Space Agency space mission Gaia. Gaia data are being processed by the Gaia Data Processing and Analysis Consortium (DPAC). Funding for the DPAC is provided by national institutions, in particular the institutions participating in the Gaia MultiLateral Agreement. The LBT is an international collaboration among institutions in the United States, Italy, and Germany. LBT Corporation partners are: The University of Arizona on behalf of the Arizona Board of Regents; Istituto Nazionale di Astrofisica, Italy; LBT Beteiligungsgesellschaft, Germany, representing the Max-Planck Society, The Leibniz Institute for Astrophysics Potsdam, and Heidelberg University; The Ohio State University, representing OSU, University of Notre Dame, University of Minnesota, and University of Virginia. PEPSI was made possible by funding through the State of Brandenburg (MWFK) and the German Federal Ministry of Education and Research (BMBF) through their Verbundforschung grants 05AL2BA1/3 and 05A08BAC. mnras § APPENDIX: SINGLE-LINED SPECTROSCOPIC BINARIES In addition to the eight systems identified as SB2s, we observed two systems that were SB1s. In order to measure masses and radii, the velocities of both components must be measured so the mass ratio can be determined directly. Table <ref> reports the parameters of these systems and Figure <ref> shows their RV orbits. We also report the mass functions f(M) = P K^3/2π G(1-e^2)^3/2 = M_2^3 sin^3(i)/(M_1+M_2)^2, where P is the orbital period, K is the velocity semiamplitude, and e is the orbital eccentricity. The mass function represents the minimum mass of the unseen secondary star. If we estimate that both systems have photometric primaries with masses ∼ 1.5 M_⊙ and edge-on orbital inclinations, the companions would have to be ∼ 1.77 and ∼ 1.3 M_⊙ for J0628 and J2201, respectively. Both of these systems are slightly bluer than the SB2s on the color-magnitude diagram (Figure <ref>). The ratio of effective temperatures are =0.42 and =0.68 for J0628 and J2201, respectively , suggesting that the companions are cooler main sequence stars and the flux ratio F_2/F_1 ≪ 1. 
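For the two SB1s, the companion mass implied by the mass function can be obtained by numerically inverting the relation above. A short sketch follows, with the mass-function value, primary mass, and inclination as placeholder inputs rather than the values measured for J0628 or J2201.

```python
import numpy as np
from scipy.optimize import brentq

def companion_mass(f_m, m1, incl_deg=90.0):
    """Solve f(M) = M2^3 sin^3(i) / (M1 + M2)^2 for M2 (all masses in Msun)."""
    sin3i = np.sin(np.radians(incl_deg))**3
    g = lambda m2: m2**3 * sin3i / (m1 + m2)**2 - f_m
    return brentq(g, 1e-4, 100.0)

# Illustrative inputs: an edge-on orbit around a 1.5 Msun photometric primary
print(companion_mass(f_m=0.5, m1=1.5))   # ~1.7 Msun minimum companion mass
```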
While it may be possible to identify the spectral signatures of the companions with more careful disentangling or additional spectra, we focus our analysis here on the clearest SB2 systems.
http://arxiv.org/abs/2409.03022v1
20240904182810
Boundless: Generating Photorealistic Synthetic Data for Object Detection in Urban Streetscapes
[ "Mehmet Kerem Turkcan", "Ian Li", "Chengbo Zang", "Javad Ghaderi", "Gil Zussman", "Zoran Kostic" ]
cs.CV
[ "cs.CV" ]
[ Eya Ben Charrada ==================== § ABSTRACT We introduce Boundless, a photo-realistic synthetic data generation system for enabling highly accurate object detection in dense urban streetscapes. Boundless can replace massive real-world data collection and manual ground-truth object annotation (labeling) with an automated and configurable process. Boundless is based on the Unreal Engine 5 (UE5) City Sample project with improvements enabling accurate collection of 3D bounding boxes across different lighting and scene variability conditions. We evaluate the performance of object detection models trained on the dataset generated by Boundless when used for inference on a real-world dataset acquired from medium-altitude cameras. We compare the performance of the Boundless-trained model against the CARLA-trained model and observe an improvement of 7.8 mAP. The results we achieved support the premise that synthetic data generation is a credible methodology for training/fine-tuning scalable object detection models for urban scenes. § INTRODUCTION Pedestrian safety and traffic management in bustling cities can be enhanced by using video-based AI systems for real-time monitoring and interaction with street objects <cit.>. Autonomous vehicles will face challenges due to irregular street layouts, numerous cohabitants, and unpredictable pedestrian behavior <cit.>. This calls for the use of infrastructure-mounted cameras and sensors to gather real-time video data at locations like traffic intersections, where object detection, tracking, trajectory prediction, and high-level reasoning can be performed on edge servers in real time <cit.>. To scale up video monitoring systems in large cities, deep learning (DL) models need to be trained and fine-tuned for hundreds of intersections, each with multiple cameras at varying micro-locations. Successful training of supervised DL models depends highly on the availability of ground-truth annotated (labeled) data. For traffic intersections, the annotation applies to vehicles, bicycles, pedestrians, and other moving objects, as well as immovable traffic furniture. Real-world image collection is complicated by uncontrollable environmental conditions such as variations in lighting, weather, and the unpredictable behavior of transient objects like pedestrians and vehicles. “Manual” ground-truth annotation of street objects from arbitrary camera angles requires a large time commitment and monetary resources. Data collection in real-world scenarios additionally faces legal issues due to privacy violations. Consequently, existing urban datasets often comprise isolated scenes rather than exhaustive city maps, lacking in the ability to capture the multifaceted nature of cityscapes. These datasets offer limited perspectives, predominantly at eye-level street or high-altitude aerial views, which only partially represent the possible views that one might acquire in the urban deployment of cameras. In this work, we investigate the use of synthetic data/image generators that can automatically create ground-truth annotations for training of object detection models, and therefore avoid the complexity and cost of manual ground-truth annotations. We focus on the generation of synthetic image datasets using Unreal Engine 5[<https://www.unrealengine.com/>] to address the shortcomings of existing object detection datasets in urban environments. 
We incorporate realistic variable lighting, apply post-processing effects to simulate weather conditions and improve render quality, and make adjustments for the level of detail of graphical assets and accurate capture of bounding boxes. The resulting simulator, which we call “Boundless”, can simulate diverse conditions, control environmental factors, and automatically generate high-quality ground-truth annotations. We investigate the suitability of medium-altitude data generated using Boundless for improving object detection performance in a setting for which available real-world training data is scarce. § RELATED WORKS Object Detection in Urban Environments. A large number of datasets focus on low-altitude vehicle and pedestrian detection <cit.>. Meanwhile, many other datasets focus on high-altitude aerial environments, where small object detection becomes an important challenge <cit.>. However, there remains a gap for public mixed-perspective datasets that can adapt to a multitude of different deployment conditions. Real-Time Object Detection. Many models have been proposed for object detection. For real-time applications, in recent years single-stage detectors like SSD and YOLO models or transformer-based DETR architecture variants have achieved significant results for real-time detection <cit.>. In this work, we use the state-of-the-art YOLOv8x model for experiments. Rather than model development, our focus is the quality of data used for training. Synthetic Data Generation. Realistic 3D simulators have been used extensively for various urban computer vision problems. The SYNTHIA dataset provides a collection of images from a simulated city along with pixel-level semantic annotations, to support semantic segmentation and scene understanding tasks <cit.>. CARLA, developed in Unreal Engine 4, is an open-source autonomous driving simulator offering extensive resources, including environments and open digital assets explicitly designed for development, testing, and validating self-driving systems <cit.>. CARLA-generated imagery has been used for object detection and segmentation <cit.>. Extensive work has been conducted on GTA V, using frames collected from this video game for training autonomous driving agents <cit.>. In addition to urban traffic simulators like CARLA, Unreal Engine itself has been considered as a rendering engine for computer vision approaches <cit.>. Recently, MatrixCity adapted the photo-realistic City Sample project as a benchmark for training neural rendering models by designing a plugin for saving frames <cit.>, focusing on pedestrian- and vehicle-free environments and neural rendering applications. The authors in <cit.> used the City Sample project[<https://www.unrealengine.com/marketplace/en-US/product/city-sample>] to detect pedestrians and hand-annotated them for this purpose. In contrast to previous work, here we focus on enhancing the available City Sample project to enable accurate and automated data collection for training performant models for urban deep learning applications. We find that extensive customization is required to enable accurate bounding box annotation collection in Unreal Engine due to a variety of challenges. We further improve the project with live lighting changes and weather and release multiple datasets for medium-altitude object detection, a problem of interest in urban metropolises <cit.>. 
§ BOUNDLESS SIMULATOR DESIGN The freely available City Sample project provides an environment for North American cityscapes, including vehicles, pedestrians, and traffic lights . To facilitate realistic data collection, we created Boundless by making technical changes to the City Sample project to enable realistic data collection for use in AI applications. We show examples of scenes created with Boundless under different weather conditions, including bounding boxes, in Figure <ref> to show the capabilities of the simulator. We detail the technical improvements below. Lighting. We allow lighting conditions to be changed dynamically before each new frame collection, thus allowing the scene to change substantially in a single capture session. We implement four new weather conditions corresponding to rain, snow, dust and heat waves. Implemented using decals projected onto the map, the rain and snow effects allow for a more realistic alternative to image-level augmentations <cit.>. We add particle effects for all these weather conditions. We note that the City Sample comes with a default night-time implementation; however, the stylized and overly dark night weather does not correspond to real-world night-time conditions. Anti-Aliasing. The default temporal anti-aliasing approach in UE5 produces blurred frames. We replace the approach with MSAA and change the camera settings to output 3840x2160 resolution frames. Level of Detail. The City Sample project includes three levels of detail for pedestrian and vehicle actors. For the medium and low levels of detail, the substituted meshes have insufficient resolution and detail quality for facilitating real-world applications. We change the available levels of detail for each vehicle and pedestrian agent, enabling the simulator to capture distant object bounding boxes accurately from medium-altitude and street-level scenes. Updated Bounding Boxes. Due to the design of pedestrian and vehicle actors in the City Sample project which results in large or missing collision boundaries for different 3D meshes, significant changes are needed to correctly capture bounding boxes of objects. We re-compute the level of detail, visibility, and occlusion properties of individual objects in a scene before capturing a frame to make sure annotations of all visible objects are obtained. Export Options. Boundless exports 3D bounding boxes in the KITTI and 2D bounding boxes in the YOLO format. § DATASETS We give an overview of the different datasets we introduce for our experiments in Figure <ref>. We generated two synthetic datasets using Boundless: ∙ Medium-Altitude City Sample Training Set. The City Sample comes with default city maps. We use Boundless to collect an 8,000-frame dataset from a synthetic intersection from the City Sample project with a static camera angle. All bounding boxes are generated automatically by the simulator. Lighting conditions are changed throughout the training set in between every frame. A frame is saved from the simulation every 3 seconds in simulation time. For comparison purposes, we follow the same approach to create a corresponding dataset consisting of 22,000 frames using the CARLA simulator. ∙ Medium-Altitude Digital Twin Training Set. We create a realistic 3D digital twin of a real-world intersection within Boundless. We collect 8,700 frames from this highly accurate scene, replicating the camera angles of the medium-altitude real-world validation dataset. 
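The 2D labels produced for these synthetic sets follow the YOLO text format mentioned under Export Options above: one line per object, consisting of a class index and a normalized center-format box. The snippet below is a minimal sketch of that convention, assuming pixel-coordinate corners at the 3840x2160 capture resolution and an illustrative class mapping (0 = pedestrian, 1 = vehicle) that is not necessarily the simulator's own:

def to_yolo_line(class_id, x_min, y_min, x_max, y_max, img_w, img_h):
    # YOLO format: "class x_center y_center width height", all normalized to [0, 1]
    xc = (x_min + x_max) / 2.0 / img_w
    yc = (y_min + y_max) / 2.0 / img_h
    w = (x_max - x_min) / img_w
    h = (y_max - y_min) / img_h
    return f"{class_id} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"

# e.g., a vehicle box on a frame rendered at the 3840x2160 resolution used above
print(to_yolo_line(1, 1800, 980, 1950, 1060, 3840, 2160))
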
In addition to our synthetically generated datasets, we use the following two real-world image datasets: ∙ Medium-Altitude Real World Validation Set. We create a real-world image dataset collected from a major North American metropolis for one intersection. This validation set consists of 3,084 frames collected on different days with a variety of weather and time-of-day conditions. The dataset contains 12,380 vehicles and 15,225 pedestrian bounding boxes. ∙ VisDrone Dataset. The VisDrone dataset <cit.> contains 7,019 images captured from various perspectives, including top-down and ground-level views, with varying camera angles. Although the dataset originally contains multiple object classes, we adapt it to a two-class object detection problem. We use the 561-image validation split for reporting our results. § EXPERIMENTS Medium-Altitude Object Detection. We used a medium-altitude object detection task to demonstrate the capabilities of Boundless, where the amount of data available is scarce compared to other types of urban data. In this task, we seek to detect pedestrians and vehicles from a static camera at ∼40m height. We wanted to explore how well a model performs on this task when trained on different datasets. To do so, we fine-tuned a COCO-pretrained YOLOv8x model <cit.> on three different datasets: (i) VisDrone, (ii) CARLA, and (iii) Boundless. We evaluated the models on a custom dataset collected from a real-world traffic intersection <cit.>. All models were trained with SGD for 10 epochs using a learning rate of 1e-3. We show the results of this comparison in Table <ref>. Use of Boundless-generated data using the City Sample map yields a model that greatly outperforms the CARLA-generated data on the real-world validation set, and further exceeds VisDrone performance in this dataset. Motivated by the superior performance obtained using Boundless, we further implemented a digital twin of the real-world intersection from the real-world validation set in the form of a map in Unreal Engine. We collected more data using Boundless from this map to see how much the results could be improved. Using a 3D model as a map, we collect an additional 8,700 frames. Repeating the experiment by adding these additional frames further improves the mAP score significantly. § ETHICAL CONSIDERATIONS The real-world dataset collected is deliberately installed at an altitude that makes it impossible to discern the pedestrian faces or to read car license plates. The academic institution at whose facilities the camera is installed provided an IRB waiver for sharing this “high elevation data" with the general public since the data inherently preserves privacy. § CONCLUSION We created and described the Boundless simulation platform, and benchmarked it in a real-world medium-altitude object detection task that demonstrates challenges with the deployment of object detection models to urban streetscapes. Our best-performing model, using Boundless and incorporating a digital twin of the real-world testbed, achieved an mAP of 66.6, demonstrating the ability of the simulator to create imagery with sufficient realism to be deployed in real-world scenarios. With Boundless, we seek to support research on urban object detection for metropolises, where data collection, ground-truth annotation/labeling, and model training face technical and legal challenges. By releasing the simulator along with collected datasets, we aim to facilitate future research and applications in urban computer vision problems. 
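As a point of reference for the Experiments section above, the fine-tuning protocol described there (COCO-pretrained YOLOv8x, SGD, 10 epochs, learning rate 1e-3) can be reproduced with the Ultralytics API along the following lines. The dataset YAML names and the image size are placeholders and assumptions, not values prescribed by this work:

from ultralytics import YOLO

model = YOLO("yolov8x.pt")  # COCO-pretrained checkpoint
model.train(
    data="boundless_city_sample.yaml",  # placeholder: train/val paths and the two classes
    epochs=10,
    optimizer="SGD",
    lr0=1e-3,
    imgsz=1280,  # assumption; the training resolution is not fixed here
)
metrics = model.val(data="real_world_validation.yaml")  # placeholder: real-world validation set
print(metrics.box.map)  # COCO-style mAP50-95
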
§ DATASET AND CODE AVAILABILITY Datasets and code for the experiments in this study are available at <https://github.com/zk2172-columbia/boundless>. § ACKNOWLEDGEMENTS This work was supported in part by NSF grants CNS-1827923 and EEC-2133516, NSF grant CNS-2038984 and corresponding support from the Federal Highway Administration (FHA), NSF grant CNS-2148128 and by funds from federal agency and industry partners as specified in the Resilient & Intelligent NextG Systems (RINGS) program, and ARO grant W911NF2210031.
http://arxiv.org/abs/2409.03438v1
20240905113943
Shuffle Vision Transformer: Lightweight, Fast and Efficient Recognition of Driver Facial Expression
[ "Ibtissam Saadi", "Douglas W. Cunningham", "Taleb-ahmed Abdelmalik", "Abdenour Hadid", "Yassin El Hillali" ]
cs.CV
[ "cs.CV" ]
Shuffle Vision Transformer: Lightweight, Fast and Efficient Recognition of Driver’s Facial Expression Thanks to the EUNICE Alliance for the financial support. Ibtissam Saadi1,2, Douglas W. Cunningham2, Taleb-ahmed Abdelmalik1, Abdenour Hadid3, Yassin El Hillali1 1Laboratory of IEMN, CNRS, Centrale Lille, UMR 8520, Univ. Polytechnique Hauts-de-France, F-59313, France 2Faculty of Graphical Systems, Univ. BTU Cottbus-Senftenberg, Cottbus, Germany 3Sorbonne Center for Artificial Intelligence, Sorbonne University Abu Dhabi, Abu Dhabi, UAE Email: {ibtissam.saadi, abdelmalik.taleb-ahmed, yassin.elhillali}@uphf.fr, [email protected], [email protected] September 9, 2024 ============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== § ABSTRACT Existing methods for driver's facial expression recognition (DFER) are often computationally intensive, rendering them unsuitable for real-time applications. In this work, we introduce a novel transfer learning-based dual architecture, named "ShuffViT-DFER," which elegantly combines computational efficiency and accuracy. This is achieved by harnessing the strengths of two lightweight and efficient models using convolutional neural network (CNN) and vision transformers (ViT). We efficiently fuse the extracted features to enhance the performance of the model in accurately recognizing the facial expressions of the driver. Our experimental results on two benchmarking and public datasets, KMU-FED and KDEF, highlight the validity of our proposed method for real-time application with superior performance when compared to state-of-the-art methods. Driver emotion recognition, Real-time facial expression recognition, Lightweight methods, Vision transformer § INTRODUCTION Human factors are responsible for a significant percentage of traffic road accidents <cit.>. For this reason, there has been an increasing interest on driver's facial expression recognition as a potential solution to improve road safety. Autonomous vehicles and Advanced Driver Assistance Systems (ADAS) both incorporate this feature, which enable recognizing and comprehending the emotional state of the driver. As a result, the systems are able to make well-informed decisions, which help to create a road environment that is safer and more effective. In this context, several research works have focused on the development of techniques for the recognition of driver's facial expressions as for example  <cit.>, <cit.>, <cit.>, and <cit.>. However, most of these attempts were faced with the challenge of operating in real-time while accurately recognizing the driver's emotional state in a real-world driving environment. This challenge usually involves a range of factors, such as non-frontal driver-head position, occlusions, and variation in lighting condition. While some approaches, especially those based on deep learning (e.g. <cit.>, <cit.>) perform better compared to traditional machine learning-based techniques (e.g. 
<cit.>, <cit.>), they require large amounts of data and significant computational resources for model training, making them less attractive in real-time applications. Nevertheless, some recent works have attempted to overcome the computational challenge by utilizing lightweight models designed to efficiently operate and with low latency at the cost of lower performance, as observed in <cit.>, <cit.>. Our work introduces a novel transfer learning-based approach that combines performance and computational efficiency by leveraging lightweight and efficient models, making it suitable for embedded systems and real-time applications. The main contributions of our present work can be summarized as follows: * We introduce ShuffViT-DFER, a novel lightweight, and efficient approach for driver's facial expression recognition. * We leverage the strengths of ShuffleNet V2 <cit.> and EfficientViT <cit.> architectures, exploiting features from both models with a new classification scheme, achieving accurate and fast recognition. * We demonstrate that dual-architecture transfer learning allows to efficiently capture subtle facial cues and expressions with limited data while maintaining real-time processing. * We explore the use of Efficient ViT approach in drivers' facial expression recognition by utilizing Grid Search to find the optimal hyper-parameters. * We conduct extensive experiments and evaluations on two benchmarking and publicly available databases, obtaining interesting performance compared to the state-of-the-art. * To support the principle of reproducible research, we share the code with the research community for comparison and future extensions at: https://github.com/Ibtissam-SAADI/ShuffViT-DFERhttps://github.com/Ibtissam-SAADI/ShuffViT-DFER. § PROPOSED MODEL: SHUFFVIT-DFER The pipeline of our proposed architecture is illustrated in Figure <ref>. The input to our model consists of a cropped face, detected using Multi-Task Cascaded Convolutional Networks (MTCNN) <cit.>. Subsequently, the detected faces undergo data augmentation for enhancing the size of the training set. Features are extracted using two pretrained models namely ShuffleNet V2 <cit.> and EfficientViT-M2 <cit.>. The extracted features are fed to a classifier for accurate expression recognition. §.§ Preprocessing The faces are first detected using MTCNN <cit.>. Then, they are cropped and resized into 224×224 pixels. Then, data augmentation is used by applying Random Horizontal Flip, Random Rotation, ColorJitter, Random Affine, and Gaussian Blur. The goal is to alleviate the overfitting issues related with small FER datasets. Finally, face normalization is performed. §.§ Lightweight and efficient feature extraction We extract features from both ShuffleNet V2 and EfficientViT-M2 models, employing transfer learning to address the limited data constraints and harness the strengths of both models for effective feature extraction, thereby enhancing classification accuracy. ShuffleNet V2, a lightweight CNN architecture, is known for its high computational efficiency and remarkable accuracy. However, it has limitations in capturing complex hierarchical features compared to deeper networks, which are needed for distinguishing subtle facial expressions. The outputs of ShuffleNet V2 model can be written as: X = (X_1, X_2, …, X_d). To enrich the extracted features, we also consider the High-Speed ViT family, specifically EfficientViT-M2. 
EfficientViT-M2 has indeed shown to be effective in encoding both local and global information, including more complex features, while utilizing limited computational resources. Its outputs can be written as: Y = (Y_1, Y_2, …, Y_d). We fuse the features extracted from both models into a single feature vector to enhance recognition accuracy and maintain trade-off between speed and accuracy. As a result we obtain: Z = (Z_1, Z_2, …, Z_k, …, Z_n) which can be expressed as: Z = X ⊕ Y. §.§ Classification For accurate classification, we consider three fully connected layers, facilitating linear transformations to capture additional features. The inclusion of two batch normalization layers serves to stabilize and accelerate the training process. Additionally, two ReLU activation functions are applied to introduce non-linearity and capture complex relationships within the data. To mitigate overfitting, we integrate two dropout layers, randomly disabling a portion of input units during training. § EXPERIMENTAL ANALYSIS Our model was implemented using the open-source PyTorch framework, on an NVIDIA GPU device, specifically the Quadro RTX 5000 with 16 GB of RAM. We conducted the experiments using two publicly available datasets namely KMU-FED and KDEF (See Figure <ref> for some face samples). §.§ Experimental Data KMU-FED To evaluate the efficiency of our approach in real-world driving scenarios, we first used KMU-FED (Keimyung University Facial Expression of Drivers) <cit.> dataset, which is captured in real driving environment. The database has a total of 1106 images from 12 subjects, with labels for the six basic emotions. The dataset includes different lighting variations and partial occlusions caused by hair or sunglasses. For comprehensive evaluation, we considered both 10-fold and 5-fold cross-validation protocol and a train–test ratio of 80%–20%. KDEF In addition to KMU-FED dataset, we also considered the Karolinska Directed Emotional Face (KDEF) dataset <cit.>, comprising 4900 images of human emotional facial expressions captured from 35 male and 35 female subjects at five different angles: -90^∘, -45^∘, 0^∘, 45^∘, and 90^∘, and without any accessories, makeup, or glasses. It includes seven different emotions (afraid, angry, disgust, happy, neutral, sad, and surprise). For a fair comparison with previous works, we divided the KDEF dataset using a train–test ratio of 80%–20%. §.§ Experimental Setup We utilized the Grid Search technique to determine the optimal hyper-parameters on both datasets. For the KMU-FED dataset, we used a batch size of 128 and the model was optimized using the Adaptive Moment Estimation (Adam) optimizer with a fixed learning rate of 0.001. The training process extended over 90 epochs, employing the cross-entropy loss function. For the KDEF dataset, a batch size of 32 was employed, and a learning rate of 0.0001 was utilized. Similar to the KMU-FED dataset, the Adam optimizer and cross-entropy loss function were applied. The training lasted 400 epochs. §.§ Obtained Results The results in terms of confusion matrices on KMU-FED and KDEF datasets are depicted in Figure <ref>, showing detailed overview of the recognition performance of our proposed model across different facial expressions. In the case of the KMU-FED dataset, we conducted experiments with varying data splits to investigate the generalization ability of our model. The results showed good performance for most expression categories across different split strategies. 
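For concreteness, the fusion-and-classification head described in the Proposed Model section above (features X from ShuffleNet V2 and Y from EfficientViT-M2 concatenated into Z = X ⊕ Y, followed by three fully connected layers with two batch-normalization, two ReLU and two dropout layers) can be sketched in PyTorch as below. The hidden widths, dropout rate and feature dimensions are illustrative assumptions; the paper fixes the layer types but not their sizes, and the two backbones are assumed to be loaded separately with their classification heads removed:

import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, dim_cnn, dim_vit, num_classes, hidden=512, p_drop=0.5):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(dim_cnn + dim_vit, hidden),
            nn.BatchNorm1d(hidden),
            nn.ReLU(),
            nn.Dropout(p_drop),
            nn.Linear(hidden, hidden // 2),
            nn.BatchNorm1d(hidden // 2),
            nn.ReLU(),
            nn.Dropout(p_drop),
            nn.Linear(hidden // 2, num_classes),
        )

    def forward(self, x_cnn, x_vit):
        z = torch.cat([x_cnn, x_vit], dim=1)  # Z = X ⊕ Y by concatenation
        return self.head(z)

# e.g., 1024-d ShuffleNet V2 features and 224-d EfficientViT-M2 features (dimensions assumed)
head = FusionClassifier(dim_cnn=1024, dim_vit=224, num_classes=6)
logits = head(torch.randn(8, 1024), torch.randn(8, 224))
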
When 10-fold cross-validation (Figure <ref>a) is used, our model performed very well at identifying surprise, sadness, happiness, and fear. However, the performance decreased at differentiating between the disgust and anger emotions, due to the similarities in their appearance. In 5-fold cross-validation scenario, excellent performances were obtained by our model, successfully classifying the expressions of fear and surprise with no error rate. Moreover, our model showed very good accuracy in categorizing all expressions while using an 80%–20% data split, demonstrating the effectiveness of our proposed model on the KMU-FED dataset. For the KDEF dataset (Figure <ref>b), our model performed well, especially in recognizing neutral and happy expressions. However, a slight decrease in performance was observed for expressions of afraid, indicating that it can be challenging to distinguish between expressions of afraid, sad or surprise. §.§ Ablation Analysis To better gain insight into the performance of our model, we carried out an ablation analysis by comparing the performance of each individual module in our architecture before and after fusion. We considered a variety of factors to ensure a comprehensive comparison including the number of parameters, the processing time of single image, the accuracy, the precision, the recall, and the F1-score. The results are shown in Table <ref>. As expected, our model has slightly more parameters and uses longer processing time than the individual modules, while maintaining a relatively low computational cost and providing the best performances, yielding in an average accuracy of 97% and a processing time of 3.3 ms per a single image after face detection and cropping. The original ShuffleNet V2, either with its original classifier or with our proposed classifier, has less parameters, making it suitable for real-time applications but at the cost of lower performances. When EfficientViT-M2 is considered, the performances are increasing but still are less accurate than our proposed architecture. Based on these results, we can conclude that our proposed method does enhance the performance by combining the strengths of ShuffleNet V2 and EfficientViT-M2 alongside with the proposed classifier. §.§ Comparison with State-of-the-Art We also performed a thorough comparison with state-of-the-art and some recently proposed methods. The results of the comparison are summarized in Tables <ref> and <ref>. Our approach showed an average accuracy of 97.3% on the KMU-FED dataset which is comparable to the results published in <cit.> and consistently outperforming all other approaches across different data splits. Our proposed method also outperformed all other methods on the KDEF dataset with an accuracy of 92.44%. These results assess the validity of our proposed architecture when it comes to the recognition of driver's facial expressions. § CONCLUSION This paper introduced ShuffViT-DFER, an efficient and fast method for recognizing driver's facial expressions. The approach adopted a transfer learning-based technique, utilizing two lightweight and efficient pre-trained models, ShuffleNet V2 and EfficientViT-M2. The method combines the strengths of the two approaches using a specialized classification scheme. Extensive experiments are conducted on two distinct and challenging datasets simulating real-world scenarios. 
The obtained results demonstrated the effectiveness of the proposed approach in improving accuracy while maintaining the low processing time required in real-time driving environments. In future work, we plan to explore the integration of multi-modal information, such as combining facial expressions with audio cues. It is also of interest to consider driver's facial expression recognition from multiple cameras to further enhance the performance and cope with multi-view face angles. § ACKNOWLEDGMENT We wish to convey our deep appreciation to the EUNICE Alliance for the financial support.
http://arxiv.org/abs/2409.02299v1
20240903212621
C-semigroups with its induced order
[ "D. Marín-Aragón", "R. Tapia-Ramos" ]
math.AC
[ "math.AC" ]
§ ABSTRACT Let 𝒞⊂ℕ^p be an integer polyhedral cone. An affine semigroup S⊂𝒞 is a 𝒞-semigroup if |𝒞∖ S|<+∞. This structure has always been studied using a monomial order. The main issue is that the choice of these orders is arbitrary. In the present work we choose the order given by the semigroup itself, which is a more natural order. This allows us to generalise some of the definitions and results known from numerical semigroup theory to 𝒞-semigroups. Keywords: 𝒞-semigroup; Wilf's conjecture; commutative monoids § INTRODUCTION Let ℕ be the set of non-negative integers. We say that S⊂ℕ is a numerical semigroup if it is an additive monoid and its complementary set ℕ∖ S is finite. This structure has been studied broadly in the literature (see for example <cit.>). In <cit.> it is proven that if a_1,…,a_e∈ℕ are coprime, then ⟨ a_1,…,a_e⟩={λ_1 a_1+…+λ_e a_e | λ_1,…,λ_e∈ℕ} is a numerical semigroup. The number of generators, i.e. e, is called the embedding dimension of S, and the number |ℕ∖ S|=g∈ℕ is the genus of S. Other relevant invariants are: the Frobenius number defined as F(S)=max{n∈ℕ | n∉ S}, the conductor defined as c(S)=F(S)+1 and the left elements defined as L(S)=|{x∈ S | x<F(S)}|. Related to these invariants there are still several open problems, such as Bras-Amorós's conjecture (see <cit.>) or Wilf's conjecture (see <cit.>), and, consequently, a lot of papers are published trying to solve them (see <cit.>). The first conjecture, put forward 15 years ago, states that if S_g is the set of all numerical semigroups with genus g then |S_{g+1}|≥|S_g| for all g≥ 0. This conjecture is true for the numerical semigroups with genus less than 67 (see <cit.>) and with genus greater than an unknown g (see <cit.>). On the other hand, Wilf's conjecture was raised in 1978 and establishes that e(S)L(S)≥ c(S). In order to try to solve these conjectures, in <cit.> the concept of numerical semigroup is generalised as follows: let 𝒞⊂ℕ^p be an integer polyhedral cone; then a 𝒞-semigroup is an additive affine semigroup S⊂𝒞 with finite complementary set. Thus, the problem is transferred from ℕ to 𝒞. In this new structure, the classical invariants of commutative monoid theory cited previously have been studied (see for example <cit.> or <cit.>). In all these papers there is a major issue: unlike in ℕ, in 𝒞 there is no canonical total order. Therefore researchers have to choose a total order (such as the graded lexicographical order) in order to define invariants such as the Frobenius number, but it is not always clear what happens when this order is changed. Our main goal is to show that this generalisation can be done in a different way which does not depend on the researcher's choice. In this work, we use the order induced by the semigroup itself in order to do the generalisation, i.e., given a,b∈ S we say that a≤ b if and only if b-a∈ S. This order has already been applied to numerical semigroups (see for example <cit.>) but never to 𝒞-semigroups. With this choice, the order is always clear and does not depend on an arbitrary choice. Thanks to this order, we open a new line for obtaining results which can be applied back to numerical semigroups. In order to provide the examples shown we have used a computer with an Intel Core i7 8th Gen CPU and the code, which is available at <cit.>. 
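To fix ideas, the classical invariants recalled above can be checked by brute force on a small example such as S=⟨3,5⟩, the numerical semigroup that reappears later as the building block of the idemaxial example. The sketch below is only illustrative; the enumeration bound is an ad-hoc choice that safely exceeds the Frobenius number:

def numerical_semigroup(gens, bound):
    # all elements of <gens> that are <= bound
    elems = {0}
    changed = True
    while changed:
        changed = False
        for s in list(elems):
            for g in gens:
                if s + g <= bound and s + g not in elems:
                    elems.add(s + g)
                    changed = True
    return elems

S = numerical_semigroup([3, 5], bound=30)
gaps = [n for n in range(31) if n not in S]   # [1, 2, 4, 7]
F = max(gaps)                                 # Frobenius number: 7
c = F + 1                                     # conductor: 8
L = len([s for s in S if s < F])              # left elements {0, 3, 5, 6}: 4
e = 2                                         # embedding dimension of <3, 5>
print(gaps, F, c, L, e * L >= c)              # Wilf's inequality holds: 8 >= 8
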
The content of this work is organised as follows: In Section <ref>, we give the main definitions and some general results in order to provide background to the reader. We show our generalisation and how apply the changes to the main invariants. Then, in Section <ref> we introduce a new invariant, the quasi-elasticity (based on the concept of elasticity in numerical semigroups, see <cit.>) and show its properties. Section <ref> is devoted to idemaxial semigroups and we show bounds for computing some invariants in this family. Finally, in Section <ref>, we generalize the Wilf's conjecture by means of this partial order. § PRELIMINARIES Let ⊂^p be an integer polyhedral cone, with τ_1,…,τ_q its extremal rays and h_1,…,h_r its supporting hyperplanes. We say that S is a -semigroup if it is an affine semigroup (i.e. S is finitely generated, cancellative, reduced and torsion free), S⊂ and |∖ S|<+∞. The set (S)=∖ S is called the set of gaps, or the gap set, of S. We define the induced order of S as x≤_Sy y-x∈ S. We use the symbol ≤ instead of ≤_S if there is no risk of misunderstanding. Note that if x≤_Sy then x≤_^py. The converse, trivially, is not true. We recall that in a numerical semigroup, the multiplicity is the least element not zero of S and the Frobenius number is the greatest element of which is not in S. We generalise these definitions as follows. We define the set of multiplicities of S as _≤(S) we denote this set as (S). We define the set of Frobenius as (S)=_≤_((S)). As before, we use and when there is no risk of confusion. Since (S) is finite and ⊂(S) then is also finite. Now we prove that verifies this finiteness. Let S be a -semigroup, then the set (S) is finite. For each supporting hyperplane h_i≡ a_1x_1+…+a_px_p=0 we define h'_i(α)≡ a_1x_1+…+a_px_p=α. We pick α_1,…,α_r such that if f∈ and x∈ h'_i(α_i) then fx≤ 0. We define D={d∈| dx≤ 0, ∀ x∈∪_1^r h'_1(α_i)}. Let c∈∖ 3D, then c∈α for some α∈, α>4. Since there exists d∈ 3D∩ such that c-d∈, ⊂ 3D. Therefore is finite. In <cit.>, the authors give an algorithm for computing the gap set of a -semigroup, this allows us to give Algorithm <ref> for computing . Given a -semigroup, it admits a unique minimal system of generators, denoted by (S)={a_1,…,a_e}. That means that S={n_1a_1+…+n_ea_e| n_i∈, a_i∈(S), i=1,…,e} and there is not a proper subset of (S) verifying this condition. The following result proves that the minimal system of generators is in fact the minimals of the S with its induced order. Let S be a -semigroup, then (S)=(S). (S)⊂(S): Let x∈(S) and we assume that x∉(S). Then there exists y∈ S such that y≤ x, i.e. x-y=z∈ S. So x=y+z which contradicts the fact that x∈(S). (S)⊂(S): Let x∈(S) and we assume that x∉(S). Then there exists y,z∈ S such that x=y+z, so x-y=z∈ S which contradicts the fact that x∈(S). In <cit.> the authors define the pseudo-Frobenius number of a C-semigroup as a∈(S) verifying a+(S∖{0}⊂ S. The set of all the pseudo-Frobenius number of S is denoted by PF(S). Our following result relates PF(S) and . If S is a -semigroup then ⊂ PF(S). If f∈ and f∉ PF(S) then f+s=h∈(S) for some s∈ S. But that means f-h∈ S and this is a contradiction. Next example shows that the converse is not true. Let be the cone spanned by (1,0) and (1,1) and S=∖{(1,1), (2,2)}. The element (1,1)∈ PF(S) but since (2,2)-(1,1)∈ we have that, (1,1)∉. In <cit.> the following definition for the Apery set is given. Given b∈ S, Ap(S,b)={a∈ S| a-b∈(S)}. Clearly, this set is finite (|Ap(S,b)|≤ |(S)|≤ +∞) and Ap(S,b)-b∈(S). 
The following question arise: is there a relationship between Ap(S,b)-b and ? Let S be a -semigroup, then ⊂ Ap(S,b)-b for all b∈ S∖{0}. Let f be in , then f+b=s∈ S. Note that if f+b=h∈(S) then h>f and this is a contradiction with f∈. Therefore, s-b∈(S) and f∈ Ap(S,b)-b. In <cit.>, the authors define the Frobenius elements, denoted by F(S), as the gaps such that they are the maximum of the set of gaps for some term order of ^p. We study the relation of this set with . Firstly, by <cit.>, every monomial ordering, ≼, can be considered a weight order for some a=(a_1,…,a_d)∈ℝ^d_≥, i.e. v≼ w if and only if v· a≤ w· a with · the inner product. Note that if these inner products are the same, we can choose another vector ã for tiebreaker. This is the same that saying that there exist a hyperplane such that divides the space in two region, one containing v and the other one containing w. Next example shows that, in general, ≠ F(S). Let be the cone spanned by (1,0) and (1,1) and S=∖{(1,0), (1,1), (2,0),(2,2), (3,0), (3,1),(4,0)}. Then, (3,0)∈ but (3,0)∉F(S). The other inclusion, however, it is true. With the previous notation, F(S)⊂. Let f∈ F(S) then there exists a hyperplane π such that its normal vector has all its coordinates positives and that divides the space in two areas, A_1 and A_2 in such a way that (S)∩ A_1={f}. Therefore, (f+)∩(S)={f} and f∈ F. The set will be studied with more details in the following section. § WEIGHT SETS In <cit.>, it is proven that if τ is an extremal ray of and S is a -semigroup, then τ∩ S is isomorphic to a numerical semigroup. However, the projection of the sum of the coordinates of the elements of S does not verify this property as we show in this section. This projection is not a capricious choice but is based on the one made in the study of factorization lengths in the case of numerical semigroups, see for example <cit.>. Given an element (x_1,…,x_p)∈^p we define its weight as w(x_1,…,x_p)=x_1+…+x_p. We can extend this definition to a set: if A⊂ then w(A)={w(a)| a∈ A}. Let Π_t be the plane define as x_1+…+a_p=t. Given a C-semigroup, S, we associate a set W as follows: x∈ W S∩Π_x≠∅ & S∩Π_x∩=∅. Note that this set has the following properties: * It is unbounded. * Its complementary is finite. * It contains the zero element. * In general, it is not closed by addition as the following example shows. Let be the cone spanned by (1,0) and (1,1) and let S=∖{(1,1),(2,2)}. In this case, (S)={(2,2)} and W=∖{4}. Therefore, W is not a numerical semigroup. We are interested in study w(). In particular, we are want to be able to compute the following invariant which is inpired by the elasticity studied in <cit.>. Given S a -semigroup, we define the quasi-elasticity of w() as ρ(S)=max(w()/min(w()). Our first questions are: Given a fixed cone can be found a -semigroup such that ρ is as big as we want. If not, what value bounds it? Let be a cone then ρ is not bounded. Let S be a -semigroup with ρ(S)=M∈. Let f_1,f_2∈ such that ω(f_1)=min(ω()) and ω(f_2)=max(ω()). Denoting by _i=f_i+, we have two cases: * If f_1≠ f_2, then clearly f_2∉_1. We choose f_3∈ C_2∖_1. If we consider the -semigroup (_1∖ f_1)∪(_3∖ f_3), we obtain the result. * If f_1=f_2, then we only have to choose f_2 and f_3 such that they are not comparable and f_2≠ f_3. Then we built the semigroup (_2∖ f_2)∪(_3∖ f_3) and apply the previous case. Therefore, ρ is unbounded. We have the following corollary. Given a cone we can find a sequence S_k, k∈ of -semigroups such that lim_k→∞ρ(S_k)=∞. 
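Before moving on, the objects introduced in the last two sections can be checked by brute force on the running example, namely the cone 𝒞 spanned by (1,0) and (1,1) and S=𝒞∖{(1,1),(2,2)}. The reading of the garbled notation in the sketch below is a reconstruction and not necessarily the authors' exact convention: maximality of gaps is tested against the order induced by the cone (which reproduces the example in which (1,1) is discarded and (2,2) remains), minimality of nonzero elements of S is tested against the order induced by S (matching the lemma identifying minimals with minimal generators), and ρ(S) is taken as max(w(ℱ(S)))/min(w(ℱ(S))):

GAPS = {(1, 1), (2, 2)}       # S = C \ GAPS for the cone spanned by (1,0) and (1,1)
BOUND = 8                     # ad-hoc search box, large enough for this example

def in_cone(p):
    x, y = p
    return 0 <= y <= x        # integer points lying between the rays (1,0) and (1,1)

def in_S(p):
    return in_cone(p) and p not in GAPS

def leq_cone(a, b):           # order induced by the cone: b - a lies in C
    return in_cone((b[0] - a[0], b[1] - a[1]))

def leq_S(a, b):              # order induced by S: b - a lies in S
    return in_S((b[0] - a[0], b[1] - a[1]))

frobenius = [h for h in GAPS
             if not any(g != h and leq_cone(h, g) for g in GAPS)]
nonzero_S = [(x, y) for x in range(BOUND) for y in range(BOUND)
             if in_S((x, y)) and (x, y) != (0, 0)]
minimals = [m for m in nonzero_S
            if not any(s != m and leq_S(s, m) for s in nonzero_S)]
weights = sorted(p[0] + p[1] for p in frobenius)
rho = max(weights) / min(weights)

print(frobenius)        # [(2, 2)]: (1, 1) is discarded because (2, 2) - (1, 1) lies in the cone
print(sorted(minimals)) # minimal generators of S that fall inside the search box
print(weights, rho)     # w(F(S)) = [4] and quasi-elasticity 1.0 for this small example
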
§ IDEMAXIAL SEMIGROUPS In this section we introduce a new family of -semigroup, the idemaxial semigroups. As we recalled in the previous section, if S if a -semigroup with extremal rays τ_i with 1≤ i≤ k then S∩τ_i is isomorphic to a numerical semigroup. We denote this numerical semigroup by S_i. In this section, we denote by ϕ_i the isomorphism such that ϕ_i(S∩_tau_i)=S_i. We denote by π_j the hyperplane containing the j-th elements of each S∩τ_i, by F_i the Frobenius number of each S_i and by π_F the hyperplane containing ϕ_i^-1(F_i). We say that S is an idemaxial semigroup if S_1≈…≈ S_k and S=(∪_i≥ 1(∩π_i))∪{x∈:x· y > 0, y∈π_F}. A graphical example of this kind of semigroup can be found in Figure <ref>. In this case, S_1 and S_2 are isomorphic to ⟨ 3,5 ⟩. This family is usefull because it has good properties. We can, for example, find a bound where compute their Frobenius and pseudo-Frobenius set. Let S be an idemaxial semigroups such that Φ(S∩τ_i) = S_i for some isomorphism Φ. Let m_i and c_i the multiplicity and the conductor of S_i, respectively. Let a_i be Φ^-1(m_i), b_i be Φ^-1(c_i), π_1 the hyperplane which contains all b_i-a_i and π_2 the hyperplane which contains all b_i. Then ⊂{x∈∖ S:x≥_S π_1, x≤_Sπ_2}. Let f∈. Clearly, f≤_Sπ_2. Since f<π_2, π_1∈ f+a_i≤_Sπ_2∉S and this is a contradiction. We remind the following definition. Let S be an idemaxial semigroup PF_i the set of pseudo-Frobenius numbers of S_i and π_j the hyperplane containing the j-th pseudo-Frobenius number of each S_i. Then the set of pseudo-Frobenius numbers of S is {∪(π_j∩)}⊂ PF. By definition, {∪(π_j∩)}⊂(S). Moreover, if s∈ S and f∈ PF, then f+S∈π with π containing elements of S_i for all i, so f+s∈ S. § WILF'S CONJECTURE We cannot end a work about semigroups without a brief mention of the Wilf's conjecture. Let S be a numerical semigroup, Wilf's conjecture claims that e(S)n(S)≥ c(S) where e(S), n(S) and c(S) are its embedding dimension, the cardinal of its sporadic elements and its conductor (see <cit.>). This conjecture has been generalized in several ways (see <cit.>). In this works the authors generalise this conjecture for -semigroups using monomial total orders. On the other hand, in <cit.> Wilf's conjecture is extended to generalized numerical semigroup using partial orders. In this section we are going to give even a more general conjecture using the induced order of a -semigroup. Let S be a -semigroup. We use the notation in <cit.>: c(S)=|{a∈:a≤ bb∈ H(S)}|, n(S)=|{a∈ S:a≤ bb∈ H(S)}|, where H(S)=∖ S. Therefore we can pose the General Extended Wilf's conjecture: e(S)n(S)≥ pc(S). This conjecture has been checked in a computational way and no counterexamples have been found. § DISCUSSION The traditional approach to study C-semigroups was to set a total order in a completely arbitrary way and then study the desired properties. This approach has the following issue: if the order is changed, the results obtained may not be true. In this paper we replace the total order chosen by the researchers by the partial order induced by the semigroup itself. As this order depends exclusively on the semigroup, it does not depend on the researcher, so the results obtained are less artificial. For example, when classical invariants were generalised in the study of semigroups, such as the Frobenius number or multiplicity, being completely dependent on the order, they changed as the order changed. 
Moreover, since numerical semigroups have only one Frobenius element and only one multiplicity, C-semigroups were forced to have only one of these elements. However, with our method of study, we have a set of elements for each of these invariants, which is more natural since we are in a higher dimension. In addition to this, in this work, we have given a family of C-semigroups with good properties. This work aims to lay the foundations for future work on C-semigroups. 21 otrowilf Bilen, M.; Sakran, N. On generalized Wilf conjectures. 2023 Available online: <https://arxiv.org/abs/2306.05530>. ref2 Bras-Amorós, M. Fibonacci-like behavior of the number of numerical semigroups of a given genus. Semigroup Forum 2008, 76, 379–384. Cox Cox, D.; Little, J.; O'Shea, D. Ideals, Varieties, and Algorithms: An Introduction to Computational Algebraic Geometry and Commutative Algebra, Publisher: Springer, Switzerland, 2015. elasticity Chapman, S.T.; García-García, J.I.; García-Sánchez, P.A.; Rosales, J.C.Computing the elasticity of a Krull monoid. Linear Algebra and its Applications 2001, 336, 191–200. ref4 Chapman, S.T.; García-Sánchez, P.A.; Llena, D.; Marshall, J. Elements in a numerical semigroup with factorizations of the same length. Canadian Mathematical Bulletin 2011, 54, 39–43. wilfgeneralizado Cisto, C.; DiPasquale, M.; Failla, G; Flores, F.; Peterson, C.; Utano, R. A generalization of Wilf’s conjecture for generalized numerical semigroups. Semigroup Forum 2020, 101, 303–325. ordeninducido Delgado, M,; García-Sánchez, P.A.; Robles-Pérez, A.M. Numerical semigroups with a given set of pseudo-Frobenius numbers. LMS Journal of Computation and Mathematics 2016, 19, 186–205. Juande Díaz-Ramírez, J.D., García-García, J.I., Marín-Aragón, D.; Vigneron-Tenorio, A. Characterizing affine C-semigroups. Ricerche di Matematica 2022, 71, 283–296. wilfpalestina Dhayni, M. Wilf's conjecture for numerical semigroups. Palestine Journal of Mathematics 2018, 7(2), 385–396. wilfShalom Eliahou, S.; Marín-Aragón, D. On numerical semigroups with at most 12 left elements. Communications in Algebra 2021, 49(6), 2402-2422. fromentin Fromentin, J.; Hivert, F. Exploring the tree of numerical semigroups. Mathematics of Computation 2016, 85(301), pp. 2553–2568. Adrian García-García, J.I.; Marín-Aragón, D.; Sánchez-Loureiro, A.; Vigneron-Tenorio, A. Some Properties of Affine -semigroups. Results Math 2023, 79(52). codigo García-García, J.I.; Marín-Aragón, D.; Sánchez-Roselly Navarro, A.;Vigneron-Tenorio, A. Commutative monoids 2024. Available online: <https://github.com/D-marina/CommutativeMonoids>. omega García-García, J.I.; Marín-Aragón, D.; Vigneron-Tenorio, A. Asymptotic ω-Primality of Finitely Generated Cancelative Commutative Monoids. Mathematics 2023, 11(4), 790. nuestrowilf García-García, J.I.; Marín-Aragón, D.; Vigneron-Tenorio, A. An extension of Wilf's conjecture to affine semigroup. Semigroup Forum 2018, 96, 396–408. Ojeda García-García, J.I.; Ojeda, I.; Rosales, J.C.; Vigneron-Tenorio, A. On pseudo-Frobenius elements of submonoids of ^d. Collect. Math. 2020, 71, 189–-204. ref3 García-Sánchez, P.A.; Marín-Aragón, D.; Robles-Pérez. The tree of numerical semigroups with low multiplicity. 2018 Available online: <https://arxiv.org/abs/1803.06879>. libroRosales Rosales, J.C.; García-Sánchez, P.A. Numerical Semigroup; Publisher: Springer, New York, 2009. ref1 Rosales, J.C. García-Sánchez, P.A; García-García, J.I. Every positive integer is the Frobenius number of a numerical semigroup with three generators. 
Mathematica Scandinavica 2008, 94, 5–12 wilf Wilf, H.S. A circle-of-lights algortihm for the “money-changing problem”. Am. Math. Monthly 1978, 85(7), 562–565. zhai Zhai, A. Fibonacci-like growth of numerical semigroups of a given genus. Semigroup Forum 2013, 86, 634–662.
http://arxiv.org/abs/2409.02352v1
20240904005240
Upstream Allocation of Bidirectional Load Demand by Power Packetization
[ "Shiu Mochiyama", "Kento Hiwatashi", "Takashi Hikihara" ]
eess.SY
[ "eess.SY", "cs.SY" ]
Upstream Allocation of Bidirectional Load Demand by Power Packetization Shiu Mochiyama, Kento Hiwatashi, and Takashi Hikihara September 9, 2024 ======================================================================= § ABSTRACT The power packet dispatching system has been studied for power management with strict tie to an accompanying information system through power packetization. In the system, integrated units of transfer of power and information, called power packets, are delivered through a network of apparatuses called power packet routers. This paper proposes upstream allocation of a bidirectional load demand represented by a sequence of power packets to power sources. We first develop a scheme of power packet routing for upstream allocation of load demand with full integration of power and information transfer. The routing scheme is then proved to enable packetized management of bidirectional load demand, which is of practical importance for applicability to, e.g., electric drives in motoring and regenerating operations. We present a way of packetizing the bidirectional load demand and realizing the power and information flow under the upstream allocation scheme. The viability of the proposed methods is demonstrated through experiments. Power packet, Routing, Upstream allocation, Bidirectional load § INTRODUCTION As a means of energy management in a system disconnected from a large-scale power source (hereafter, we denote an isolated system), there is a proposal of the power packet dispatching system <cit.>. In the system, the power is packetized by dividing the energy flow into power pulses with information tags attached physically (Fig. <ref> (a)). Packetized power is routed from a source to a load through a network of apparatuses called power packet routers according to the tag (Fig. <ref> (b)). The authors' group has developed a physical realization of a power-packet router and verified its operation in networked routers through experiments<cit.>. The key enablers of the concept are the division of power in the time domain and the attachment of the physical tag that indicates the origin and destination of each power pulse. The simultaneity between physical and cyber quantities is the most essential aspect in such a cyber-physical system <cit.>. For example, in a typical situation in isolated systems with increased harvesting sources, deployment of batteries of enormous capacity is not realistic due to the limited cost, space, and weight. It is not easy to balance instantaneous supply and demand, both of which have unpredictable and varying profiles, without a large buffer. Physical packetization gives complete traceability to power transfer, enabling the routing network to allocate supply and demand between sources and loads with arbitrary proportion counted by the packetized units. This realizes best-effort matching of source capacity and load demand in a decentralized way, like the Internet. From the viewpoint of the loads, the power packet dispatching system delivers power as a collection of discrete units. Thus, the load control at the edge of the network employs a discretized power processing method. This is quite different from the standard way in power electronics, namely pulse width modulation <cit.>. Previous studies<cit.> proposed a method to derive an optimal packet sequence to satisfy a load demand. 
The derived sequence is then shared with the routing network as a demand signal, and for each packet in the sequence, a source that meets the requirement generates a power packet to supply the load. In this sense, the method performs an upstream allocation of the discretized load demand to sources in the routing network <cit.>. However, a remaining issue of the previous studies was that the transfer of power and information was not completely closed in the physical layer. That is, the allocation of load demand was calculated at the edge of the routing network and then delivered to the source-side router not in packetized form but via another channel. Obviously, this brings about a flaw in the concept of integration of power and information transfer. This paper discusses an upstream allocation of load demand that fully integrates the transfer of both power and information as a power packet. The contribution of this paper is threefold. First, we propose a routing method of power packets (Section <ref>) where the load demand information is allocated by an information tag to the demanded power source in the upstream direction. The pivotal factor of the proposal is the elimination of an implicit assumption of the previous studies that the direction of power and information delivery coincides for a particular power packet. The proposed method allows a request from a load (information) and a response from a source (power) to coexist as a single power packet. Second, we present that the proposed routing method also offers a practical advantage in control of loads with bidirectional power flow, such as powering and regenerating operations of electric drives. The previous methods<cit.> only apply to the allocation of the unidirectional load demand. We propose a method for the upstream allocation of bidirectional load demand (Section <ref>), which realizes a seamless handling of a bidirectional power packet flow with completely integrated power and information transfer. Lastly, we demonstrate the viability of the proposed method in terms of the two aspects mentioned above through experiments (Section <ref>). § PROPOSED METHOD §.§ Description of Target System Fig. <ref> depicts the power packet dispatching system we consider in this paper. We focus on a part of a whole routing network that includes two power sources that supply a particular load. Note that the power sources in this paper include ones with sink operation, such as batteries, to consider the bidirectional management of the packetized power supply. The two-source setup is minimal but generic for discussing an upstream allocation. One can extend it to systems with more than three sources without essential changes. Furthermore, the proposed method can also be applied to multiple loads in the network since it works on a load-by-load basis. We introduce a power packet configuration protocol that is shared by the whole network. Fig. <ref> (a) presents the bit assignment of a power packet. In this paper, we fix the time duration of one bit and the bit length of the information tag and payload. We set a fixed sequence for the three bits from the beginning of the information tag, namely , to declare the start of a power packet. The remaining three-bit signal expresses the index of the demanded source. Of course, the bit length and their assignment are arbitrary and can be modified according to the system requirements. The transfer of a power packet is assumed to be synchronized throughout the routing network. 
Based on the assumption, we define an index of time slot k (k=1,2,3,…) as a representation of the (continuous) time interval [(k-1)T_ packet, kT_ packet), where T_ packet coincides with the time duration of a power packet. Thus, one power packet is transferred in one time slot. §.§ Routing Method for Upstream Allocation With the above setups, we develop a method to allocate load demand by upstream dispatching of power packets. Here, we focus on the way of power and information transfer as a power packet and ignore how the source of origin is selected for each power packet. The source selection algorithm will be discussed in Section <ref>. For the k-th time slot, the router connected to the load and the routers connected to the power sources exchange power and information in a packetized form. Fig. <ref> presents the overview of router operation throughout a power packet transfer. The details will be explained step by step below. First, the header tag is sent from the load-side router to the source-side router. The sender-receiver relationship is fixed regardless of the direction of the power supply in the bidirectional load case. The direction of tag transfer is determined by hardware setups; the load-side router is equipped with a signal generation module, and the source-side routers are equipped with a signal reading module. The signal generation module is made up of a voltage source dedicated to signal generation and two switches controlled in a complementary manner<cit.>. That is, ( S_high, S_low)=( ON, OFF) generates a voltage for high logic and ( S_high, S_low)=( OFF, ON) pulls down the potential to zero for low logic. The source-side routers observe the potential on the transmission line at every bit through the signal reading module comprising a potential divider and a galvanic-isolation comparator<cit.>. The module converts the potential signal to a logic sequence and passes it to the router controller. Second, power transfer is controlled according to the tag information. After the load-side router finishes sending the header, it turns on the switch for power transfer, S_R1. At the same time, one of the switches of the source-side router, S_R2 or S_R3, specified in the tag, also turns on its switch for power transfer. The conduction path between the specified power source and the load is formed in this way. Third, the footer follows the payload transfer to indicate the end of one power packet. After a predetermined duration of payload passes, the switches of both the load-side and source-side routers are turned off. §.§ Algorithm for Packetization of Bidirectional Load Demand In this subsection, we explain how we determine the power source of origin for each time slot. We apply a technique for signal quantization<cit.> to our discrete power processing, based on previous studies<cit.>. Fig. <ref> shows the overview of the load control we consider. Below, we give descriptions for each signal and block depicted in the figure. The plant P is composed of two elements: P_circ, the electric circuit depicted in Fig. <ref>, and P_comm, the communication between the routers through the information tag. The circuit includes transistors, and the input of P_circ is a vector of their switching states. Since the load control algorithm only concerns the selection of the source, the input is defined as s(k) := [S_R2,S_R3]^𝖳. Then, for the circuit depicted in Fig. 
<ref>, applying an appropriate discretization to the circuit equation with a time interval equal to the packet duration T_ packet yields the following discrete-time system P_circ: x(k+1) = A x(k) + B V^𝖳s(k) y(k) = C x(k) , where x∈ℝ and y∈ℝ are the state and output, respectively, A∈ℝ, B∈ℝ, and C∈ℝ are the constant coefficients including the circuit parameters, and V is a vector of possible voltage levels, namely V:=[E_1,E_2]^𝖳. For the setup of Fig. <ref>, the state and the output are both the voltage of the load, and thus C=1. The product V^𝖳s(k) represents the voltage of the source that is connected to the load at the time interval k. The switching state s(k) is determined at the source-side router by reading the information tag. This operation maps the selection of the voltage level v∈{E_1,E_2} to s(k), namely P_comm: s(k) = [1,0]^𝖳 if v=E_1 [0,1]^𝖳 if v=E_2 . The series conncetion P_comm and P_circ is called P, the plant in the broad sense. The selection of the power source in each time slot is determined by the quantizer Q, the algorithm implemented in the load-side router's controller. To understand the operation of Q, we first consider a reference system P', which is a replacement of Vs(k) by a continuous-valued input u(k)∈ℝ in the original plant P. u(k) can be recognized as an ideal input, and the quantizer Q seeks to approximate it with a discretized input in the form of power packets. The quantizer generates a sequence {v(k)}:={v(0),v(1),v(2),… v(k)} that makes the output sequence {y(k)} as close as possible to the output sequence of the reference system P' under an input sequence {u(k)}. To provide such a quantizer, we consider a dynamic quantizer in the following form Q: ξ(k+1) = A_Q ξ(k) + B_Q (v(k)-u(k)) v(k) = Vs(k) = q(C_Q ξ(k) + u(k)) , where ξ∈ℝ is the state of the quantizer, and A_Q∈ℝ, B_Q∈ℝ, and C_Q∈ℝ are the design parameters. The function q(·) is a static quantizer that maps its argument to a set of available voltage levels of the system. As recognized in (<ref>), the quantizer's state stores the weighted sum of past errors between the ideal input u(k) and the actual input v(k). The output equation of Q then takes this error into account in the argument of the static quantizer at the second equation of (<ref>). In this way, Q generates {v(k)} that dynamically compensates for the quantization error. The previous study<cit.> provides the analytical design of Q for optimal compensation: A_Q = A, B_Q = B, C_Q = -A/B. The bidirectionality of power delivery is a result of the selection of possible voltage levels of the power sources. Specifically, setting both higher and lower voltage levels than the nominal value of u(k) leads to a bidirectional current flow. For example, when u(k) is in a decreasing trend, the lower voltage source is selected more to draw stored energy from the load, and when it is in an increasing trend, the higher voltage source is selected more to supply energy to the load. Lastly, we explain the connection between the proposed routing method and the load demand packetization. The output of Q can be assigned to one of the switching states of the plant circuit. At the beginning of each power-packet generation, the load-side router calculates v(k). The router then assigns the index corresponding to the selected power source to the header. § EXPERIMENTAL VERIFICATION §.§ Setups We built the routing network depicted in Fig. <ref>. The voltage values of the sources are set at E_1 = 12 V and E_2 = 3.6 V. 
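As a concrete illustration of the source-selection loop formed by the dynamic quantizer Q and the discretized plant above, the following sketch simulates the closed loop using these two voltage levels together with the quantizer design reported later in this section (A_Q = 0.9593, B_Q = 0.03060, C_Q = -31.34) and a triangular reference. The exact shape assumed for tri(·) and the simulation length are assumptions, and the code is an illustrative reconstruction rather than the authors' experimental implementation:

E = (12.0, 3.6)                           # source voltages E_1, E_2 [V]
A_Q, B_Q, C_Q = 0.9593, 0.03060, -31.34   # reported optimal design (A_Q = A, B_Q = B, C_Q = -A/B)
A, B = A_Q, B_Q                           # plant coefficients coincide with the gains by design

def q(value):                             # static quantizer: nearest available voltage level
    return min(E, key=lambda e: abs(e - value))

def u(k):                                 # triangular reference, a = 4 V, b = 8 V, K = 125
    a, b, K = 4.0, 8.0, 125
    phase = (k % K) / K
    return b + a * (1.0 - abs(2.0 * phase - 1.0))   # assumed shape of tri(2*pi*k/K)

x = x_ref = xi = 0.0                      # load voltage, reference response, quantizer state
n_high = 0
for k in range(500):
    v = q(C_Q * xi + u(k))                # v(k): source chosen for packet k, written into the header
    xi = A_Q * xi + B_Q * (v - u(k))      # quantizer state update
    x = A * x + B * v                     # load voltage under the packetized supply
    x_ref = A * x_ref + B * u(k)          # response of the reference system P' to the ideal input
    n_high += (v == E[0])
print(n_high / 500.0, round(abs(x - x_ref), 3))   # share of packets drawn from source 1; tracking error
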
We assume that source 1 is a pure power source and that source 2 is a battery that accepts a bidirectional current flow. The load consists of a resistor of R_L = 10 Ω and a capacitor of C_L = 9.9 mF. The equivalent resistors, which represent the sum of the on-resistance across the routers other than the explicitly modeled ones, are set at R_eq1 = R_eq2 = 3.3 Ω. The time duration of one bit of a power packet is set at 4.0 μs. The bit length of a payload is set at 240 bit, leading to the duration of 960 μs. Taking into account the 6-bit header and the 4-bit footer, the total bit length of a power packet is 250 bit, and the corresponding time duration is 1.0 ms. Based on these parameters, we derive the optimal design of Q following the optimization procedure in <cit.> as A_Q = 0.9593, B_Q = 0.03060, C_Q = -31.34. We set the reference input {u(k)} as a triangular waveform u(k) = b + a ·tri(2π k/K) where a = 4.0 V, b = 8.0 V, K = 125, and tri(θ) is a triangular wave function of period 2π that takes 0 when θ = 2nπ (n=0,1,2,…) and 1 when θ = mπ/2 (m=1,3,5,…). We adopt the reference waveform that incorporates both increasing and decreasing trends, which creates bidirectional power flow by charging and discharging stored energy of the load. §.§ Results fig:overview shows the result of load voltage regulation. fig:overview (a) depicts the load voltage with and without quantization. The load voltage without quantization was calculated numerically for reference with the model and parameters set above in the load-side router's controller and output through the controller's analog output port. The good agreement of the two lines indicates that the packetized power supply achieved successful regulation of load voltage in the sense of comparison with the continuous counterpart for reference. fig:overview (b) depicts the measured current, where the direction of inflow to the load is defined as positive. The bidirectional power flow is confirmed by observing the current of both positive and negative values. In particular, by observing the current waveform together with changes in load voltage, it can be seen that the direction of power transmission is determined by changes in the stored energy of the load. When the load voltage increases, the higher voltage source is selected more often, resulting in positive power transmission on average. When the load voltage decreases, the power source of lower voltage (battery) is selected more often, resulting in the negative (in regenerative direction) power transmission on avarage. Then, we observe the details of the signal transfer by information tags and the resulting power source selection. Since the time span presentation in Fig. <ref> is not suitable to observe the details per packet, we refer to the enlarged view of the results presented in Fig. <ref>. Fig. <ref> (a) presents the the enlarged view of the load current in t∈[119.5 ms,121.5 ms], where the header tags of two consecutive packets are depicted. The sign of the current indicates that the power packet beginning at around t=120 ms is supplied by source 1, and that the power packet beginning at around t=121 ms is regenerated to source 2. Indeed, Fig. <ref> (b) and (c) indicate that the headers of the former and latter power packets are and , respectively. These results ensure that the router selected the designated power source according to the information tag. § DISCUSSIONS In this paper, we proposed the method for the upstream allocation of the packetized bidirectional power demand of a dynamical load. 
The proposed method realized the seamless connection in the flow of both information and power and integrated them as power packets. We demonstrated the operation through experiments, presenting the successful allocation of load demand to both the pure source and the battery of source and sink capability. In the remainder of this section, we provide some discussions on the comparison of the proposed method with related works. The comparison highlights the standpoint and novelty of the present paper among related works. The concept of power packetization was first proposed in <cit.> in the 1990s and has been studied by independent groups around the world to date, e.g. <cit.>. Most of them involve information and power processing in separate layers and employ virtual tagging. However, in situations where there are not enough adjustable power sources, the discrepancy of the information from physics at each instance can lead to serious accidents. This is the reason why we adopt the packetization based on physical tagging. For the hardware setups of this paper, we have exploited the previously proposed method of circuit implementation. The original (unidirectional) power packet router was introduced in <cit.>, and then was extended to accommodate bidirectional power packet flow in <cit.>. The original router included the signal reading module, and the signal generation module was first introduced in <cit.>. The bidirectional router proposed in <cit.> employed bidirectional switches, which enabled the bidirectional current flow, although the work did not discuss bidirectional load demand fulfillment by a packetized form of information and power transfer. Based on these hardware setups, the present work proposes a unified system solution for the upstream allocation of a bidirectional load demand. The concept of upstream load flow was introduced in a traditional power system analysis as a means to trace the load flow in terms of the contribution of the sources<cit.>. Active allocation of upstream demand in a packetized form was first discussed in <cit.>. The work was limited to a static (pure resistive) and thus unidirectional load with power demand expressed on a time-average basis. On the other hand, this paper proposes a method that can handle a dynamic load whose demand is expressed as an instantaneous value. Furthermore, the previous work <cit.> analyzed the system using simplified numerical simulations and omitted the details of the routing operation on the physical layer. The proposed method presents how to deal with power and information processing in the physical layer. Regarding the load-demand representation in a packetized form, the present study extends our previous proposal in <cit.>. First, as mentioned in the Introduction, the previous works did not integrate the communication for this representation with the power packet transfer. The present work eliminates the limitation and realizes the full integration by the routing method developed in Section <ref>. Furthermore, the previous works focused on a case where one source supplies one load, and thus the load regulation therein was recognized as a chopping operation of one power source with only two possible states of the circuit: whether the source is connected to the load or not. This setup implicitly limited the flow of current to unidirectional. 
The framework presented in Section <ref> overcomes the limitations above by extending the selection of power sources to different voltage levels, one for source operation and the other for sink operation.
http://arxiv.org/abs/2409.02252v1
20240903192548
A virtual element method for a convective Brinkman-Forchheimer problem coupled with a heat equation
[ "Danilo Amigo", "Felipe Lepe", "Enrique Otarola", "Gonzalo Rivera" ]
math.NA
[ "math.NA", "cs.NA" ]
A virtual element method for a convective Brinkman-Forchheimer problem coupled with a heat equation
Danilo Amigo, Felipe Lepe, Enrique Otarola, Gonzalo Rivera
========================================================
§ ABSTRACT We develop a virtual element method to solve a convective Brinkman-Forchheimer problem coupled with a heat equation. This coupled model allows both the thermal diffusion coefficient and the viscosity to depend on the temperature. Under standard discretization assumptions, we prove the well-posedness of the proposed numerical scheme. We also derive optimal error estimates under appropriate regularity assumptions for the solution. We conclude with a series of numerical tests performed with different mesh families that complement our theoretical findings. non-isothermal flows, nonlinear equations, a convective Brinkman-Forchheimer problem, a heat equation, virtual element methods, stability, a priori error bounds. 35Q30, 35Q35, 65N12, 65N15, 65N30.
§ INTRODUCTION Let Ø⊂ℝ^2 be an open and bounded domain with Lipschitz boundary ∂Ω. In this work, we are interested in designing and analyzing a divergence-free virtual element method (VEM) for the temperature distribution of a fluid modeled by a convection-diffusion equation coupled with a convective Brinkman–Forchheimer problem. This model can be described by the following nonlinear system of partial differential equations (PDEs): {[ -÷(ν(T)∇ u)+(u·∇)u + u + |u|^r-2u + ∇ p = f in Ω,; ÷(u) = 0 in Ω,; -÷(κ(T)∇ T)+ u·∇ T = g in Ω, ]. together with the Dirichlet boundary conditions u = 0 and T=0 on ∂Ω. The unknowns of the system are the velocity field u, the pressure p, and the temperature T of the fluid. The data are the external force f, the external heat source g, the viscosity coefficient ν, and the thermal diffusion coefficient κ. We note that ν(·) and κ(·) are coefficients that can depend nonlinearly on the temperature. Finally, the parameter r is chosen such that r ∈ [3, 4]. For a discussion of this parameter r, the Forchheimer term in (<ref>), and its physical implications, we refer the reader to <cit.> and the references therein. Beginning with the pioneering works <cit.>, great efforts have been made to develop and analyze VEMs. These methods are a relatively new family of solution techniques that allow general polytopal meshes and arbitrary polynomial degrees while retaining conforming ^̋1-approximations. The peculiarity of VEMs is that the discrete spaces consist of functions that are not known pointwise, but about which a limited amount of information is available. This limited information is sufficient to construct stiffness matrices and right-hand sides. On the other hand, VEMs allow great flexibility with regard to the shape of the elements; for example, convex and non-convex elements are permitted. This is an advantage over the classical finite element methods (FEMs), since it makes it possible to deal with domains that are difficult to discretize with triangles or quadrilaterals. For these reasons, the VEM is generally considered a generalization of the FEM. The analysis of VEMs has been successfully developed for a large number of problems. We refer the interested reader to <cit.> for a recent review and discussion. As for the development of VEMs for various linear and nonlinear flow problems, we refer the reader to the non-exhaustive list <cit.>. An important feature of some methods for the treatment of incompressible fluids is that the divergence-free condition is preserved at the discrete level.
The aim of the present work is to develop and analyze a VEM to approximate the velocity, pressure, and temperature variables that solve the system (<ref>). Our analysis is inspired by the recent work <cit.>, in which the authors propose a divergence-free VEM to solve the coupling between the Navier–Stokes equations and a suitable heat equation. Here, the viscosity coefficient depends on the temperature variable. In our work, we complement and extend the results in <cit.> in the following two directions: First, the model we consider allows the dependence of the thermal diffusion coefficient κ(·) on the temperature variable. Second, we consider the so-called convective Brinkman–Forchheimer equations. In contrast to <cit.>, our model also considers the term + ||^r-2, with r ∈ [3,4]. The consideration of this term was suggested by Forchheimer <cit.>, who realized that Darcy’s law is not adequate for moderate Reynolds numbers. Indeed, Forchheimer found that the relationship between the Darcy velocity and the pressure gradient was nonlinear and that this nonlinearity appeared to be quadratic (r=3) for a variety of experimental data <cit.>. This leads to a modification of the Darcy equations, usually referred to as the Darcy–Forchheimer equations <cit.>. In <cit.>, Forchheimer also noted that some data sets could not be described by the quadratic correction so he also postulated that the correction of Darcy’s law could allow a polynomial expression for u, e.g., u + |u|u + |u|^2u and u + |u|^r-2u; see <cit.>, <cit.>, and <cit.>. In practice, r takes the value 3 and 4 in several applications <cit.> and also fractional values such as r = 7/2 <cit.>. This is the reason why we consider r ∈ [3,4]. In view of these considerations, (<ref>) can be seen as an extension of the model in <cit.>. §.§ Contributions To the best of our knowledge, this is the first paper that analyzes a VEM for the nonlinear coupled problem (<ref>). Since several sources of nonlinearity are involved, analyzing a solution technique is far from trivial. In the following, we list what we consider to be the main contributions of our work: ∙ A VEM: Inspired by the scheme proposed in <cit.> and the discrete spaces proposed and analyzed in <cit.>, we propose the VEM (<ref>). ∙ Existence and uniqueness of discrete solutions: We derive an existence result for the discrete problem (<ref>) without restriction on the problem data by using a fixed point strategy; see Theorem <ref>. Moreover, we obtain a global uniqueness result when the problem data is suitably restricted; see Theorem <ref>. ∙ Optimal error estimates: Assuming that the continuous and discrete problems admit unique solutions and under suitable regularity assumptions for the continuous solution, we derive optimal error estimates for the proposed VEM; see Theorem <ref>. The analysis borrows ideas and components from <cit.>, <cit.>, and <cit.>. §.§ Outline The paper is structured as follows: We begin with section <ref>, where we introduce notations and basic assumptions that we will use in our work. In sections <ref> and <ref>, we summarize some results related to the convective Brinkmann–Forchheimer problem and a suitable heat equation, respectively. Section <ref> is devoted to review existence, uniqueness, and stability results for the coupled problem (<ref>). The core of our work begins in section <ref>, where we first introduce the standard techniques for analyzing VEMs and propose our discrete scheme (<ref>). 
In section <ref>, we prove the existence of discrete solutions without restriction on the problem data, as well as a global uniqueness result when the problem data is appropriately restricted. Section <ref> is devoted to the development of a rigorous analysis of error estimates. Finally, in section <ref>, we report a series of numerical tests in which we evaluate the performance of the proposed method for different configurations and polygonal meshes. § NOTATION AND PRELIMINARY REMARKS Let us establish the notation and the framework within which we will work. §.§ Notation In this paper, Ω is an open and bounded polygonal domain of ℝ^2 with Lipschitz boundary ∂Ω. If 𝒳 and 𝒴 are normed vector spaces, we write 𝒳↪𝒴 to denote that 𝒳 is continuously embedded in 𝒴. We denote by 𝒳' and ·_𝒳 the dual and the norm of 𝒳, respectively. For p ∈ (1,∞), we denote by q ∈ (1,∞) its Hölder conjugate, which is such that 1/p + 1/q = 1. The relation ≲ means that ≤ C, with a positive constant C that is independent of , , and the discretization parameters. The value of C can change at each occurrence. We use the standard notation for Lebesgue and Sobolev spaces. The spaces of vector-valued functions and the vector-valued functions themselves are denoted by bold letters. In particular, we use the following notation: :=_̋0^1(Ø), :=[_̋0^1(Ø)]^2, and :=Ł_0^2(Ø). As usual, the dual of _̋0^1(Ø) is denoted by ^̋-1(Ø). We also introduce the space := {∈ : ÷ = 0}. We conclude this section with the classical and well–known Poincaré inequality: For v ∈, there exists = (Ω) such that v _1,Ø≤| v |_1,Ø ∀ v ∈. For the sake of simplicity, we assume that the vector-valued couterpart of (<ref>), i.e., 𝐯_1,Ø≤| 𝐯 |_1,Ø for all 𝐯∈𝐕, holds with the same constant . §.§ Data assumptions We make the following assumptions on the viscocity ν(·) and the diffusion coefficient κ(·). A0) κ(·) is extrictly positive, bounded, and Lipschitz continuous, i.e., there exist constants κ_*, κ^*, κ_lip > 0 such that 0 < κ_* ≤κ(r) ≤κ^*, |κ(r_1) - κ(r_2)| ≤κ_lip|r_1 - r_2| ∀ r,r_1,r_2 ∈ℝ. A1) ν(·) is extrictly positive, bounded, and Lipschitz continuous, i.e., there exist constants ν_*, ν^*, ν_lip > 0 such that 0<ν_* ≤ν(r) ≤ν^*, |ν(r_1) - ν(r_2)| ≤ν_lip|r_1 - r_2| ∀ r,r_1,r_2 ∈ℝ. § A CONVECTIVE BRINKMAN–FORCHHEIMER PROBLEM We present existence and uniqueness results for the following weak formulation of a convective Brinkman–Forchheimer problem: Given f∈[^̋-1(Ø)]^2, find (,)∈× such that a_L(,)+c_N(;,)+c_F(;,)+ d(,) + b(,) = ⟨f,⟩, b(,) = 0, for all (,)∈×. Here, ⟨· ,·⟩ denotes the duality pairing between [^̋-1(Ø)]^2 and 𝐕. The forms that occur in (<ref>) are defined as follows: a_L:×→ℝ, c_N, c_F :××→ℝ, d:×→ℝ, and b:×→ℝ are such that a_L(,) := ∫_Øν∇ : ∇, c_N(;,) :=∫_Ø (·∇)·, c_F(;,) :=∫_Ø ||^r-2·, d(,) :=∫_Ø·, and b(,):= -∫_Ø÷(). We list some of the most important properties that these forms satisfy: * a_L(·,·) is a coercive and continuous bilinear form: For every ,∈, we have a_L(,) ≥ν_*||_1,Ø^2, a_L(,) ≤ν^*||_1,Ø||_1,Ø. * c_N(·;·,·) is skew-symmetric: For every ∈ and ,∈, we have c_N(;,) + c_N(;,)= 0, c_N(;,) = 0. In addition, c_N(·;·,·) is continuous: For every , ,∈, we have c_N(;,) ≤ C_N||_1,Ø||_1,Ø||_1,Ø. * c_F(·;·,·) is continuous: For every , ,∈, we have c_F(;,) ≤ C_F||_1,Ø^r-2||_1,Ø||_1,Ø. * b(·,·) is a continuous bilinear form: For every ∈ and ∈, we have b(,) ≤ ||_1,Ø_0,Ø. Moreover, b(·,·) satisfies an inf-sup condition: There exists a constant β>0 such that ∈sup b(,)||_1,Ø≥β_0,Ø ∀∈. 
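The skew-symmetry property of c_N(·;·,·) is what yields c_N(u;v,v) = 0 and drives the energy arguments below. As a quick numerical sanity check (not part of the original analysis), the following sketch evaluates the trilinear form for an arbitrarily chosen divergence-free u and a v vanishing on ∂Ω over the unit square; the specific fields are illustrative assumptions.

import numpy as np
import sympy as sp

x, y = sp.symbols('x y')

# divergence-free u = curl(psi), and a test field v vanishing on the boundary of (0,1)^2
psi = (sp.sin(sp.pi * x) * sp.sin(sp.pi * y))**2
u = (sp.diff(psi, y), -sp.diff(psi, x))                                   # div u = 0 by construction
v = (x * (1 - x) * y * (1 - y), sp.sin(sp.pi * x) * sp.sin(sp.pi * y))    # v = 0 on the boundary

# integrand ((u . grad) v) . v
integrand = sum(u[j] * sp.diff(v[i], (x, y)[j]) * v[i] for i in range(2) for j in range(2))
f = sp.lambdify((x, y), integrand, 'numpy')

# tensor-product Gauss-Legendre quadrature on (0,1)^2
nodes, weights = np.polynomial.legendre.leggauss(30)
nodes = 0.5 * (nodes + 1.0)
weights = 0.5 * weights
X, Y = np.meshgrid(nodes, nodes)
W = np.outer(weights, weights)
print("c_N(u; v, v) approx", float(np.sum(W * f(X, Y))))   # about 1e-16, i.e. zero up to round-off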
§.§ Existence, stability, and uniqueness results We present existence, uniqueness, and stability results for solutions of (<ref>). In particular, we present the existence of solutions without restriction on the data and a global uniqueness result when the data is suitably restricted <cit.>. Let Ø⊂ℝ^2 be an open and bounded domain with Lipschitz boundary ∂Ø, and let f∈[^̋-1(Ø)]^2. Then, there exists at least one solution (,)∈× for problem (<ref>) which satisfies ||_1,Ø≤ν_*^-1f_-1, _0,Ø≤β^-1Λ(f)f_-1, where Λ(f) := 1 + ν^*ν_*^-1 + C_Nν_*^-2f_-1+^2ν_*^-1 + C_Fν_*^1-rf_-1^r-2. If, in addition, C_Nf_-1 < ν_*^2, then there is a unique pair (,)∈× that solves (<ref>). § A NONLINEAR HEAT EQUATION We consider the following nonlinear heat equation in weak form: Given g∈^̋-1(Ø) and ∈, find T∈ such that 𝔞(T;T,S)+𝔠(;T,S) = ⟨ g,S⟩ ∀ S∈. Here, ⟨·,·⟩ denotes the duality pairing between ^̋-1(Ø) and ^̋1(Ø). The forms 𝔞 and 𝔟 are defined as follows: 𝔞:××→ℝ, 𝔞(X;R,S) := ∫_Øκ(X)∇ R ·∇ S, 𝔠:××→ℝ, 𝔠(;R,S) := ∫_Ø (·∇ R)S. We list some of the most important properties that these forms satisfy: * Given X ∈, the form 𝔞(X;·,·) is coercive and continuous: 𝔞(X;S,S) ≥κ_*|S|_1,Ø^2, 𝔞(X;R,S) ≤κ^*|R|_1,Ø|S|_1,Ø ∀ R,S ∈. * 𝔠(·;·,·) is skew-symmetric: For every ∈ and R,S ∈, we have 𝔠(;R,S) + 𝔠(;S,R) = 0, 𝔠(;S,S) = 0. In addition, 𝔠(·;·,·) is continuous: For every ∈ and R,S ∈, we have 𝔠(;R,S) ≤ℭ||_1,Ø|R|_1,Ø|S|_1,Ø. §.§ Existence, stability, and uniqueness results We present existence, stability, and uniqueness results for problem (<ref>) <cit.>. Let Ω⊂ℝ^2 be an open and bounded domain with Lipschitz boundary ∂Ω, and let g∈^̋-1(Ø). Then, there exists at least one solution T∈_̋0^1(Ø) of problem (<ref>) which satisfies |T|_1,Ø≤κ_*^-1g_-1. If, in addition, (<ref>) has a solution T_1 ∈_𝔮^1(Ø) ∩_̋0^1(Ω) such that κ_*^-1κ_lipC_𝔭↪ 2|T|__𝔮^1(Ø) < 1, then problem (<ref>) has no other solution T_2 ∈_̋0^1(Ω). Here, C_𝔭↪ 2>0 is the best constant in _̋0^1(Ø)↪Ł^𝔭(Ø), where 𝔭 < ∞, and 𝔮 is chosen such that 1/𝔭+1/𝔮=1/2. § THE COUPLED PROBLEM We now introduce a weak formulation for the system (<ref>) and review existence and uniqueness results. The weak formulation is as follows: Given f∈[^̋-1(Ø)]^2 and g∈^̋-1(Ø), find (,,T)∈×× such that {[ a(T;,) + c_N(;,)+c_F(;,)+ d(,) + b(,) = ⟨f,⟩,; b(,) = 0,; 𝔞(T;T,S)+𝔠(;T,S) = ⟨ g,S⟩, ]. for all (,,S) ∈××. The forms c_N(·; ·,·), c_F(·; ·,·), d(·,·), and b(·,·) are defined in Section <ref> while 𝔞(·; ·,·) and 𝔠(·; ·,·) are defined in Section <ref>. The form a is defined as follows: a:××→ℝ, a(X;,):=∫_Øν(X)∇ : ∇. Since ν satisfies the properties in A1), it is immediate that, given X ∈, a(X;,) ≥ν_*||_1,Ø^2, a(X;,) ≤ν^*||_1,Ø||_1,Ø ∀,∈. Let ∈. Define the forms c_N^skew( ; ·, ·): ×→ℝ and 𝔠^skew( ; ·, ·): ×→ℝ by c_N^skew(;,) := 1/2 [c_N(;,)-c_N(;,)] and 𝔠^skew(;R,S) := 1/2[𝔠(;R,S)-𝔠(;S,R)], respectively. As discussed in Sections <ref> and <ref>, c_N and 𝔠 are skew-symmetric. As a result, c_N(;,) = c_N^skew(;,) for every ,∈ and 𝔠(;R,S) = 𝔠^skew(;R,S) for every R,S ∈. This observation will be important for the development of a virtual element numerical scheme. §.§ Existence, stability, and uniqueness results We review the existence of solutions for the system (<ref>) without restriction on the data and a global uniqueness result when the data is suitably restricted and the solution is slightly smoother <cit.>. Let Ø⊂ℝ^2 be an open and bounded domain with Lipschitz boundary ∂Ø, let f∈[^̋-1(Ø)]^2, and let g ∈^̋-1(Ω). Let ν and κ be as in Section <ref>. 
Then, the nonlinear system (<ref>) has at least one solution (,,T) ∈××. Moreover, we have ||_1,Ø≤ν_*^-1f_-1, _0,Ø≤β^-1Λ(f)f_-1, |T|_1,Ø≤κ_*^-1g_-1, where Λ(f) is defined in the statement of Proposition <ref>. Furthermore, if the system (<ref>) has a solution (_1,_1,T_1) ∈ (∩_2+ε^1(Ø))××(∩_2+ε^1(Ø)) for some ε > 0 such that C_Nν_*^2f_-1 + ν_lip𝒞_ε𝒞^2_4 → 2g_-1|_1|__2+ε^1(Ø)ν_*κ_*(κ_*-κ_lipC_ϵ|T_1|__2+ε^1(Ø)) < 1, κ_lip𝒞_ϵ|T_1|__2+ε^1(Ø) < κ_*, then the system (<ref>) has no other solution (_2,_2,T_2)∈××. Here, 𝒞_4↪ 2 and 𝒞_ε denote the best constant in ↪Ł^4(Ø) and ↪Ł^2(2+ε)/ε(Ø) respectively. § A VIRTUAL ELEMENT APPROXIMATION In this section, we introduce a virtual element approximation for the nonlinear system (<ref>) and derive error estimates. For this purpose, we first introduce some notions and basic ingredients <cit.>. From now on, we assume that Ω is a polygonal and bounded domain with Lipschitz boundary. Let {_h}_h>0 be a sequence of decompositions of Ω into general polygonal elements E. Here, h:=max{ h_E: E∈_h } and h_E denotes the the diameter of the element E. We denote by |E| the area of the element E ∈_h. §.§ Mesh regularity We make the following assumptions on the sequence {_h}_h>0 <cit.>: There exists ϱ > 0 such that for all h and every E in _h, A2) E is star-shaped with respect to a ball B_ with radius ≥ϱ h_E <cit.> and A3) the distance between any two vertices of E is ℓ≥ϱ h_E <cit.>. §.§ Basic ingredients Let k ∈ℕ, t ∈ℝ^+, and let p ∈ [1,+∞]. Let A be an open and bounded domain in ℝ^2. We introduce some basic spaces that will be useful later. First, we introduce ℙ_k(A) — the set of polynomials on A of degree less than or equal to k — with the convention that ℙ_-1(A) = { 0 }. Secondly, ℙ_k(𝒯_h) = {q ∈Ł^2(Ø) : q|_E∈ℙ_k(E) ∀ E ∈𝒯_h}. Finally, we introduce the space ^t_p(𝒯_h) = { v ∈Ł^2(Ø) : v|_E∈^t_p(E) ∀ E ∈𝒯_h}. We equip the space ^t_p(𝒯_h) with the broken norm ·__p^t(_h) and the broken seminorm | · |__p^t(_h), which are defined by [ v__p^t(_h) := [ ∑_E∈_hv__p^t(E)^p]^1/p, |v|__p^t(_h) := [ ∑_E∈_h |v|__p^t(E)^p]^1/p, p ∈ [1, +∞), ] respectively. If p = + ∞, then v__∞^t(_h) :=max{v__∞^t(E): E∈_h }. We denote the vector-valued counterparts of ℙ_k(A), ℙ_k(𝒯_h), ^t_p(𝒯_h) by [ℙ_k(A)]^2, [ℙ_k(_h)]^2, and ^t_p(𝒯_h), respectively. For an element E∈_h, we denote by x_E the centroid of E. For a multi-index α = (α_1,α_2) ∈ℕ^2, we define |α| = α_1 + α_2. Let n ∈ℕ. A natural basis associated to the space ℙ_n(E) is the set of normalized monomials 𝕄_n(E) := {𝐦_α: α∈ℕ^2, |α| ≤ n }, 𝐦_α(𝐱) := (𝐱-x_Eh_E)^α. Let m ∈ℕ be such that m ≤ n. We also introduce ℙ_n\ m(E) := span{𝐦_α: α∈ℕ^2, m + 1 ≤ |α| ≤ n }. §.§ Projections In this section we introduce some appropriate projections that will be useful for our analysis. Let E ∈_h, and let n ∈ℕ∪{0}. We introduce the Ł^2(E)-orthogonal projection as follows: Π^0,E_n: Ł^2(E) →ℙ_n(E): (q_n, v - Π^0,E_nv )_Ł^2(E) = 0 ∀ q_n∈ℙ_n(E). We denote by Π^0,E_n the vector-valued couterpart of Π^0,E_n. We also introduce the ^̋1(E)-seminorm projection, which is defined as follows: Π^∇,E_n: ^̋1(E) →ℙ_n(E): (∇ q_n, ∇(v - Π^∇,E_nv) )_Ł^2(E) = 0 ∀ q_n∈ℙ_n(E) and ∫_∂ E (v - Π^∇,E_nv) = 0. We denote by Π^∇,E_n the vector-valued couterpart of Π^0,E_n. §.§ Virtual element spaces for the velocity variable Following <cit.>, we introduce a finite-dimensional local virtual space for E ∈_h: (E):= {_h ∈ [^̋1(E)]^2∩ [𝒞^0(∂ E)]^2 -Δ_h - ∇∈𝐱^⊥ℙ_k-1(E), for some ∈Ł_0^2(E), ÷_h ∈ℙ_k-1(E), _h|_e∈ [ℙ_k(e)]^2 ∀ e ⊂∂ E}, 𝐱^⊥ := (x_2,-x_1). 
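To indicate how the scaled monomial basis 𝕄_k(E) and the projection Π^0,E_k introduced above are computed in practice on a polygonal element, the following sketch (an illustrative implementation, not the authors' code) assembles the monomial mass matrix by a fan sub-triangulation with a degree-2 exact quadrature rule, which is sufficient for k = 1; larger k would require a more accurate rule.

import numpy as np

def polygon_geometry(V):
    # area, centroid x_E and diameter h_E of a polygon with CCW vertices V (n x 2)
    x, y = V[:, 0], V[:, 1]
    xs, ys = np.roll(x, -1), np.roll(y, -1)
    cross = x * ys - xs * y
    area = 0.5 * cross.sum()
    centroid = np.array([((x + xs) * cross).sum(), ((y + ys) * cross).sum()]) / (6.0 * area)
    diam = max(np.linalg.norm(V[i] - V[j]) for i in range(len(V)) for j in range(len(V)))
    return area, centroid, diam

def polygon_quadrature(V, centroid):
    # fan sub-triangulation from the centroid; 3-point edge-midpoint rule (exact for degree 2)
    pts, wts = [], []
    n = len(V)
    for i in range(n):
        T = np.array([centroid, V[i], V[(i + 1) % n]])
        e1, e2 = T[1] - T[0], T[2] - T[0]
        area = 0.5 * abs(e1[0] * e2[1] - e1[1] * e2[0])
        pts.append(0.5 * (T + np.roll(T, -1, axis=0)))   # the three edge midpoints
        wts.append(np.full(3, area / 3.0))
    return np.vstack(pts), np.concatenate(wts)

def scaled_monomials(pts, centroid, h, k):
    # evaluate m_alpha(x) = ((x - x_E) / h_E)^alpha for |alpha| <= k at the points pts
    X = (pts - centroid) / h
    exps = [(a, d - a) for d in range(k + 1) for a in range(d + 1)]
    return np.column_stack([X[:, 0]**a * X[:, 1]**b for a, b in exps])

def l2_projection(V, f, k=1):
    # coefficients of Pi^{0,E}_k f in the scaled monomial basis on the polygon V
    _, xE, hE = polygon_geometry(V)
    pts, w = polygon_quadrature(V, xE)
    M = scaled_monomials(pts, xE, hE, k)
    G = M.T @ (w[:, None] * M)        # monomial mass matrix (m_alpha, m_beta)_E
    rhs = M.T @ (w * f(pts))          # moments (f, m_alpha)_E
    return np.linalg.solve(G, rhs)

# example: a linear function is reproduced exactly by its L2 projection with k = 1
V = np.array([[0.0, 0.0], [1.0, 0.0], [1.2, 0.7], [0.5, 1.1], [-0.1, 0.6]])
print(l2_projection(V, lambda p: p[:, 0] + 2.0 * p[:, 1], k=1))

With these computational ingredients in mind, we return to the construction of the discrete velocity space.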
With this space at hand, we introduce (E) as a restriction of the space (E): (E) := {_h ∈(E): (_h - ^∇,E_k_h,𝐱^⊥q_k-1)_𝐋^2(E) = 0 ∀q_k-1∈ℙ_k-1\ k-3(E)}. A similar definition of _h(E) can be found in <cit.>; compare with <cit.>. We recall some properties of the virtual space (E) <cit.>, which are based on the description in <cit.> and <cit.>. (P1) Polynomial inclusion: [ℙ_k(E)]^2 ⊆(E); (P2) Degrees of freedom (DoFs): the following linear operators 𝐃_𝐕 constitute a set of DoFs for the virtual element space (E): 𝐃_𝐕1 the values of 𝐯_h at the vertices of E, 𝐃_𝐕2 the values of _h at k-1 distinct points of every edge e ∈∂ E, 𝐃_𝐕3 the moments of _h 1|E|∫_E_h·𝐦^⊥𝐦_α ∀𝐦_α∈𝕄_k-3(E), 𝐦^⊥ = 𝐱^⊥-𝐱_E^⊥h_E. 𝐃_𝐕4 the moments of ÷_h h_E|E|∫_E÷_h 𝐦_α ∀𝐦_α∈𝕄_k-1(E), |α| = α_1+α_2 > 0. As a final ingredient, we introduce _k-1^0,E: ∇(E) → [ℙ_k-1(E)]^2 × 2 such that (q_k-1,∇_h - ^∇,E_k-1∇_h)_𝐋^2(E) = 0 ∀q_k-1∈ [ℙ_k-1(E)]^2 × 2. The following remark is in order. The DoFs 𝐃_𝐕 allow an exact computability of ^0,E_k: (E) → [ℙ_k(E)]^2, _k-1^0,E: ∇(E) → [ℙ_k-1(E)]^2 × 2 in the following sense: Given 𝐯_h ∈(E), ^0,E_k 𝐯_h and _k-1^0,E∇𝐯_h can be computed using only, as unique information, the DoFs values 𝐃_𝐕 of 𝐯_h <cit.>. Finally, we define the global velocity space as follows <cit.>: := {_h ∈ : _h|_E∈(E) ∀ E ∈_h}. §.§ Finite element space for the pressure variable We introduce the following finite element space for approximating a pressure variable: _h := {_h ∈ _h|_E∈ℙ_k-1(E) ∀ E ∈_h}. It is important to note that the pair (,_h) satisfies the following discrete inf-sup condition: there exists a constant β̃>0 independent of h such that _h ∈sup b(_h,_h)|_h|_1,Ø≥β̃_h_0,Ø ∀_h ∈_h; <cit.>. Let us now introduce the discrete kernel := {_h ∈: b(_h,_h) = 0 ∀_h ∈_h}. The following observation is important: Let _h ∈. Given that ÷_h|_E ∈ℙ_k-1(E), we deduce that 𝐙_h ⊆𝐙: the functions in the discrete kernel are exactly divergence-free. §.§ Virtual element spaces for the temperature variable To introduce a virtual element space for approximating a temperature variable, we first introduce, for each E ∈_h, the finite-dimensional local virtual space <cit.> (E):= {S_h ∈^̋1(E) ∩𝒞^0(∂ E) Δ S_h ∈ℙ_k(E), S_h|_e∈ℙ_k(e) ∀ e ∈∂ E}. With this space at hand, on each E ∈_h we introduce <cit.> (E) := {S_h ∈(E): (S_h - Π^∇,E_k S_h, q_k)_Ł^2(E) = 0 ∀ q_k ∈ℙ_k∖ k-2(E) }. We recall some properties of the space (E) based on the presentation in <cit.> and <cit.>: (P3) Polynomial inclusion: ℙ_k(E) ⊆(E); (P4) Degrees of freedom: the following linear operators D_V constitute a set of DoFs for the virtual element space (E): D_V1 the values of S_h at the vertices of E, D_V2 the values of S_h at k-1 distinct points of every edge e ∈∂ E, D_V3 the moments of S_h 1|E|∫_E S_h m_α ∀m_α∈𝕄_k-2(E). The DoFs D_V allow an exact computability of Π^0,E_k: _h(E) →ℙ_k(E), ^0,E_k-1: ∇_h(E) → [ℙ_k-1(E)]^2, <cit.>; compare with Remark <ref>. Finally, we introduce a global virtual space to approximate a temperature variable := {S_h ∈: S_h|_E∈(E) ∀ E ∈_h}. §.§ Virtual element forms With the discrete spaces , _h, and in hand, and following <cit.> we now introduce discrete versions of the continuous forms involved in the weak problem (<ref>). * The discrete counterpart of a(·;·,·) (cf. (<ref>)) is defined by a_h(·;·,·):××→ℝ, a_h(X_h;_h,_h) := ∑_E∈_h a_h^E(X_h;_h,_h), where a_h^E(·;·,·):(E)×(E)×(E) →ℝ is given by a_h^E(X_h;_h,_h) := ∫_E ν(Π_k^0,EX_h)^0,E_k-1∇_h:^0,E_k-1∇_h + ν(Π^0,E_0X_h)S_V^E((𝐈-^0,E_k)_h,(𝐈-^0,E_k)_h). 
Here, S_V^E(·,·):(E)×(E) →ℝ is a computable symmetric form that satisfies |_h|_1,E^2 ≲ S_V^E(_h,_h) ≲ |_h|_1,E^2 ∀_h ∈(E) ∩(^0,E_k). * The discrete counterpart of 𝔞(·;·,·) (cf. (<ref>)) is defined by 𝔞_h(·;·,·):××→ℝ, 𝔞_h(X_h;R_h,S_h) := ∑_E∈_h𝔞_h^E(X_h;R_h,S_h), where 𝔞_h^E(·;·,·):(E)×(E)×(E) →ℝ is given by 𝔞_h^E(X_h;R_h,S_h) := ∫_E κ(Π^0,E_k X_h)^0,E_k-1∇ R_h·^0,E_k-1∇ S_h + κ(Π^0,E_0 X_h)S_T^E((-Π^0,E_k)R_h,(-Π^0,E_k)S_h). Here, S_T^E(·,·):(E)×(E) →ℝ is a computable symmetric form that satisfies |X_h|_1,E^2 ≲ S_T^E(X_h,X_h) ≲ |X_h|_1,E^2 ∀ X_h ∈(E) ∩(Π^0,E_k). * The discrete counterpart of c_N(·;·,·) (cf. (<ref>)) is defined by c_N,h(·;·,·):××→ℝ, c_N,h(_h;_h,_h) := ∑_E∈_h c_N,h^E(_h;_h,_h), where c_N,h^E(·;·,·):(E)×(E)×(E) →ℝ is given by c_N,h^E(_h;_h,_h) := ∫_E [^0,E_k-1(∇_h) ^0,E_k _h]·^0,E_k _h. Following <cit.> and <cit.>, we define c_N,h^skew,E(_h;_h,_h):=12(c_N,h^E(_h;_h,_h)-c_N,h^E(_h;_h,_h)), and c_N,h^skew(_h;_h,_h) := ∑_E∈_h c_N,h^skew,E(_h;_h,_h). * The discrete counterpart of c_F(·;·,·) (cf. (<ref>)) is defined by c_F,h(·;·,·):××→ℝ, c_F,h(_h;_h,_h) := ∑_E∈_h c_F,h^E(_h;_h,_h), where c_F,h^E(·;·,·):(E)×(E)×(E) →ℝ is given by c_F,h^E(_h;_h,_h) := c_F^E(^0,E_k_h;^0,E_k_h,^0,E_k_h). * The discrete counterpart of 𝔠(·;·,·) (cf. (<ref>)) is defined by 𝔠_h(·;·,·):××→ℝ, 𝔠_h(_h;R_h,S_h) := ∑_E∈_h𝔠_h^E(_h;R_h,S_h), where 𝔠_h^E(·;·,·):(E)×(E)×(E) →ℝ is given by 𝔠_h^E(_h;R_h,S_h) := ∫_E (^0,E_k_h ·^0,E_k-1∇ R_h)Π^0,E_kS_h. Following <cit.>, we define 𝔠_h^skew,E(_h;R_h,S_h):=12(𝔠_h^E(_h;R_h,S_h)-𝔠_h^E(_h;S_h,R_h)), and 𝔠_h^skew(_h;R_h,S_h) := ∑_E∈_h𝔠_h^skew,E(_h;R_h,S_h). * The discrete counterpart of d(·,·) (cf. (<ref>)) is defined by d_h(·,·) : ×→ℝ, d_h(_h,_h) := ∑_E∈_h d_h^E(_h,_h), where d_h^E(·,·):(E)×(E) →ℝ is given by d_h^E(_h,_h) := ∫_E ^0,E_k_h ·^0,E_k_h. We note that c_N,h^skew and 𝔠_h^skew are skew-symmetric by construction, i.e., for _h ∈_h, c_N,h^skew,E(_h;_h,_h) = 0, 𝔠_h^skew(_h;R_h,R_h) = 0, for every _h ∈_h and every R_h ∈_h. As usual, this property simplifies the analysis of the corresponding discrete problem <cit.>. As mentioned in <cit.>, we do not introduce any approximation of the bilinear form b. We note that b(_h,_h) for _h ∈_h and _h∈_h is computable from the DoFs 𝐃_𝐕1, 𝐃_𝐕2, 𝐃_𝐕4 because _h is a polynomial on each element E ∈_h. §.§ Virtual element forcing From now on we will assume that f∈𝐋^2(Ω) and g ∈Ł^2(Ω). Within this framework, we define the discrete sources f_h ∈ [ℙ_k(_h)]^2 and g_h ∈ℙ_k(_h) to be such that, for each E ∈_h, f_h|_E := ^0,E_kf, g_h|_E := Π^0,E_k g. §.§ The virtual element method With all the ingredients and definitions introduced in the previous sections, we finally design a virtual element discretization for the nonlinear system (<ref>): Find (_h,_h,T_h)∈×_h× such that {[ a_h(T_h;_h,_h) + c_N,h^skew(_h;_h,_h) + c_F,h(_h;_h,_h); + d_h(_h,_h) + b(_h,_h) = (f_h,_h)_0,Ø,; b(_h,_h) = 0,; 𝔞_h(T_h;T_h,S_h) + 𝔠_h^skew(_h;T_h,S_h) = (g_h,S_h)_0,Ø, ]. for all (_h,_h,S_h) ∈×_h×. Using the space defined in (<ref>), the problem (<ref>) can be reformulated as follows: Find (_h,T_h)∈× such that {[ a_h(T_h;_h,_h)+c_N,h^skew(_h;_h,_h)+c_F,h(_h;_h,_h); + d_h(_h,_h) = (f_h,_h)_0,Ø,; 𝔞_h(T_h;T_h,S_h)+𝔠_h^skew(_h;T_h,S_h) = (g_h,S_h)_0,Ø, ]. for all (_h,S_h) ∈×. 
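In matrix form, each local contribution such as a_h^E combines a consistency part, computed from the polynomial projection of the virtual functions, with a stabilization part acting only on the non-polynomial remainder. The following is a minimal sketch of this assembly under the assumption that the projector matrix, the DoF matrix of the scaled monomials, and the (viscosity-weighted) polynomial stiffness matrix have already been computed from the DoFs; taking S as the identity on the local DoFs corresponds to the dofi–dofi stabilization used later in the numerical section.

import numpy as np

def local_vem_matrix(Pi_star, D, G_tilde, nu_bar):
    # Local VEM matrix = consistency + stabilization (a sketch, not the authors' code).
    #   Pi_star : (n_poly x n_dof) projector onto polynomials, expressed in the monomial basis
    #   D       : (n_dof x n_poly) DoFs applied to the scaled monomials
    #   G_tilde : (n_poly x n_poly) polynomial stiffness matrix (assumed already viscosity-weighted)
    #   nu_bar  : scalar weight, e.g. the viscosity evaluated at the projected temperature Pi^0_0 T_h
    consistency = Pi_star.T @ G_tilde @ Pi_star
    I = np.eye(D.shape[0])
    remainder = I - D @ Pi_star                # (I - Pi) expressed on the local DoFs
    stabilization = remainder.T @ remainder    # dofi-dofi choice: S = identity on the local DoFs
    return consistency + nu_bar * stabilization

The spectral equivalences required of S_V^E and S_T^E guarantee that this algebraic stabilization preserves the coercivity and continuity established next.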
To provide an analysis for the discrete problem (<ref>), we list some properties that the discrete forms satisfy: * For any X_h ∈_h, a_h(X_h;·,·) is a coercive and continuous bilinear form: There exist α_*>0 and α^*>0 such that, for every _h,_h ∈ we have a_h(X_h;_h,_h) ≤α^*ν^*|_h|_1,Ø|_h|_1,Ø, a_h(X_h;_h,_h) ≥α_*ν_*|_h|_1,Ø^2. These inequalities result from standard properties of Π^0,E_n and _k-1^0,E in conjunction with basic inequalities and the properties A1 and (<ref>) that ν and S_V satisfy. * For any X_h ∈_h, 𝔞_h(X_h;·,·) is a coercive and continuous bilinear form: There exist β_*>0 and β^*>0 such that, for every R_h, S_h ∈, we have 𝔞_h(X_h;R_h,S_h) ≤β^*κ^*|R_h|_1,Ø|S_h|_1,Ø, 𝔞_h(X_h;S_h,S_h) ≥β_*κ_*|S_h|_1,Ø^2. These inequalities follow from very similar arguments to those that lead to (<ref>). * c_N,h^skew(·;·,·) is continuous: For every _h,_h,_h ∈, we have c_N,h^skew(_h;_h,_h) ≤C_N|_h|_1,Ø|_h|_1,Ø|_h|_1,Ø. A proof of this estimate is essentially contained in <cit.>. * c_F,h(·;·,·) is continuous: For every _h,_h,_h ∈, we have c_F,h(_h;_h,_h) ≤C_F|_h|_1,Ø^r-2|_h|_1,Ø|_h|_1,Ø. A proof of this estimate can be found in <cit.>. * 𝔠_h^skew(·;·,·) is continuous: For every _h ∈_h and R_h,S_h ∈_h, we have 𝔠_h^skew(_h;R_h,S_h) ≤ℭ|_h|_1,Ø |R_h |_1,Ø|S_h|_1,Ø, This bound can be obtained with arguments similar to those in the proof of <cit.>. * d_h(·,·) is continuous: For every _h, _h ∈_h, we have d_h(_h,_h) ≤_h_0,Ø_h_0,Ø. This bound follows directly from the Cauchy-Schwarz inequality in 𝐋^2(E), the 𝐋^2-stability of _k^0,E, and the Cauchy-Schwarz inequality in ℝ^#_h. * b(·,·) is continuous: For every _h ∈_h and _h ∈_h, we have b(_h,_h) ≤ |_h|_1,Ø_h_0,Ø. § EXISTENCE AND UNIQUENESS OF SOLUTIONS FOR OUR VEM In this section, we derive an existence result for the discrete problem (<ref>) without restriction on the problem data by using a fixed point strategy. Moreover, we obtain a global uniqueness result when the problem data is suitably restricted. Under the data assumptions A0) and A1), the nonlinear system (<ref>) admits at least one solution (_h,_h,T_h) ∈×_h×. Moreover, the following estimates hold uniformly in h: |_h|_1,Ø ≤α_*^-1ν_*^-1f_h_0,Ø, _h_0,Ø≤β̃^-1Γf_h_0,Ø, |T_h|_1,Ø ≤β_*^-1κ_*^-1g_h_0,Ø, Here, Γ := 1 + (α_*ν_*)^-1( α^*ν^* + C_Nα_*^-1ν_*^-1f_h_0,Ø + ^r-2C_Fα_*^2-rν_*^2-rf_h_0,Ø^r-2 + ^2). We proceed in two steps. Step 1. Existence of solutions: Let us first analyze the existence of solutions to the reduced problem (<ref>). To this purpose, we define A_(_h,T_h): ×→ℝ, for (_h,T_h) ∈×, as follows: A_(_h,T_h)(_h,S_h) := a_h(T_h;_h,_h)+c_N,h^skew(_h;_h,_h) + c_F,h(_h;_h,_h) + d_h(_h,_h) - (f_h,_h)_0,Ω + 𝔞_h(T_h;T_h,S_h) + 𝔠_h^skew(_h;T_h,S_h) - (g_h,S_h)_0,Ω. Set (_h,S_h) = (_h,T_h) and use the skew-symmetry of c_N,h^skew(·;·,·) and 𝔠_h^skew(·;·,·) and the coercivity properties in (<ref>) and (<ref>) to obtain A_(_h,T_h)(_h,T_h) ≥α_*ν_*|_h|_1,Ø^2 + β_*κ_*|T_h|_1,Ø^2 + c_F,h(_h;_h,_h) + d_h(_h,_h) - f_h_0,Ø_h_0,Ø-g_h_0,ØT_h_0,Ø, Since d_h(_h,_h)≥ 0 and c_F,h(_h;_h,_h) ≥ 0, we thus obtain that A_(_h,T_h)(_h,T_h) ≥α_*ν_*|_h|_1,Ø^2 + β_*κ_*|T_h|_1,Ø^2 - ( f_h_0,Ø|_h|_1,Ø - g_h_0,Ø|T_h|_1,Ø) ≥ (min{α_*,β_*}min{ν_*,κ_*}) |(_h,T_h)|_1,Ø^2 - (f_h,g_h)_0,Ø |(_h,T_h)|_1,Ø, where |(_h,S_h)|_1,Ø := (|_h|_1,Ø^2 + |S_h|_1,Ø^2)^1/2 and (f,g)_0,Ø := (f_0,Ø^2 + g_0,Ø^2)^1/2 for every (_h,S_h) ∈_h ×_h and (f,g) ∈𝐋^2(Ω) ף^2(Ω). Hence, we obtain that A_(_h,T_h)(_h,T_h) ≥ 0 for every (_h,T_h) ∈× such that |(_h,T_h)|_1,Ø ^2 =^2(f_h,g_h)_0,Ø^2(min{α_*,β_*}min{ν_*,κ_*})^2 =: μ^2. 
With reference to <cit.>, there thus exists (_h,T_h) ∈_h ×_h such that A_(_h,T_h)(_h,S_h) = 0 for all (_h,S_h) ∈_h ×_h and |(_h,T_h)|_1,Ø≤μ≲(f_h,g_h)_0,Ω; i.e., the nonlinear system (<ref>) has at least one solution (_h,T_h)∈×. Finally, the existence of a solution for the system (<ref>) follows immediately from the inf-sup condition (<ref>). Step 2. Stability bounds: Let us first derive the bound for the velocity field _h in (<ref>). To to this, we set (_h,_h)=(_h,0) in (<ref>) and use the coercivity property in (<ref>), the skew-symmetry of c_N,h^skew(·;·,·), d(_h,_h)≥ 0, and c_F,h(_h;_h,_h)≥ 0 to obtain the following bounds: α_*ν_*|_h|_1,Ø^2 ≤ a_h(T_h;_h,_h)+c_F,h(_h;_h,_h)+d_h(_h,_h) = ( f_h,_h)_0,Ø≤f_h _0,Ω |_h|_1,Ø. This immediately yields the the desire bound for the velocity field. The estimate for the temperature follows similar arguments. Finally, the estimate for the pressure can be obtained using (<ref>), (<ref>), (<ref>), (<ref>), and (<ref>). This concludes the proof. In the following, we present two elementary results that are useful to show the uniqueness of the solutions of system (<ref>). Let X_h, S_h ∈, and let _h ∈ such that _h ∈_q^1(_h) for some q>2. Then, there exists C_V>0 depending on Ø, the polynomial degree k, and the shape regularity constant ϱ such that, for any _h ∈, | a_h(X_h;_h,_h) - a_h(S_h;_h,_h)|≤ C_Vν_lip|X_h-S_h|_1,Ø|_h|__q^1(_h)|_h|_1,Ø. See <cit.>. Let X_h, S_h ∈, and let R_h ∈ such that R_h ∈_q^1(_h) for some q>2. Then, there exists C_T>0 depending on Ø, the polynomial degree k, and the shape regularity constant ϱ such that, for any P_h ∈, |𝔞_h(X_h;R_h,P_h) - 𝔞_h(S_h;R_h,P_h)|≤ C_Tκ_lip|X_h-S_h|_1,Ø|R_h|__q^1(_h)|P_h|_1,Ø. The proof is similar to that of Lemma <ref>, so we skip the details. We are now in a position to show the uniqueness of solutions for system (<ref>). Let us assume that assumptions A0) and A1) hold. If C_Vν_lipℭg_h_0,Ø |_h|__q^1(_h)α_*ν_*β_*κ_*(β_*κ_*-C_Tκ_lip|T_h|__q^1(_h)) + C_Nα_*^2ν_*^2f_h_0,Ø < 1, C_Tκ_lip|T_h|__q^1(_h) < β_*κ_*, and the nonlinear system (<ref>) admits a solution (_h,_h,T_h) ∈×_h ×_h such that _h ∈_q^1(_h) and T_h ∈_q^1(_h) for some q>2, then this solution is unique. We begin the proof with the assumption that there is another solution (_h,_h,T_h) ∈×_h× of system (<ref>). Define _h := _h-_h ∈, T_h := T_h-T_h ∈, _h := _h-_h ∈_h. Step 1. A bound for |T_h|_1,Ø. We begin by noting that it follows directly from the definition of 𝔠_h^skew(·;·,·) that 𝔠_h^skew(_h;T_h,T_h) = 0 𝔠_h^skew(_h;T_h,T_h) = 𝔠_h^skew(_h;T_h,T_h). We now use the fact that (_h,T_h) and (_h,T_h) solve the system (<ref>) to set _h = 0 and S_h = T_h ∈_h in the corresponding systems and obtain 𝔞_h(T_h;T_h,T_h) + 𝔠_h^skew(_h;T_h,T_h) = 𝔞_h(T_h;T_h,T_h) + 𝔠_h^skew(_h;T_h,T_h). We add and subtract the term 𝔞_h(T_h;T_h,T_h) and use (<ref>) to obtain 𝔞_h(T_h;T_h,T_h) = [𝔞_h(T_h;T_h,T_h)-𝔞_h(T_h;T_h,T_h)] - 𝔠_h^skew(_h;T_h,T_h). We then use the coercivity property in (<ref>), the bound (<ref>), and the estimate of Lemma <ref> to obtain β_*κ_*|T_h|_1,Ø^2≤ C_Tκ_lip|T_h|_1,Ø^2|T_h|__q^1(_h) + ℭ|_h|_1,Ø|T_h|_1,Ø|T_h|_1,Ø. This yields (β_*κ_* - C_Tκ_lip|T_h|__q^1(_h))|T_h|_1,Ø≤ℭ|_h|_1,Ø|T_h|_1,Ø. Note that β_*κ_* - C_Tκ_lip|T_h|__q^1(_h)>0. We now invoke the stability bound for T_h in (<ref>) to obtain |T_h|_1,Ø≤ℭg_h_0,Ωβ_*κ_*(β_*κ_* - C_Tκ_lip|T_h|__q^1(_h))|_h|_1,Ø. Step 2. We now obtain a bound for _h. 
To do this, we first note that, due to the skew-symmetry of c_N,h^skew(·;·,·), we have the following c_N,h^skew(_h;_h,_h) = 0 c_N,h^skew(_h;_h,_h) = c_N,h^skew(_h;_h,_h). We now use the fact that (_h,T_h) and (_h,T_h) solve system (<ref>) to set _h = _h ∈_h in the first equation of the corresponding systems and obtain a_h(T_h;_h,_h) + c_N,h^skew(_h;_h,_h) + c_F,h(_h;_h,_h) + d_h(_h,_h) = a_h(T_h;_h,_h) + c_N,h^skew(_h;_h,_h) + c_F,h(_h;_h,_h) + d_h(_h,_h). We add and subtract the term a_h(T_h;_h,_h) and use (<ref>) to obtain a_h(T_h;_h,_h) + [c_F,h(_h;_h,_h) - c_F,h(_h;_h,_h)] + d_h(_h,_h) = [a_h(T_h;_h,_h) - a_h(T_h;_h,_h)] - c_N,h^skew(_h;_h,_h). We then use the left-hand side bound in (<ref>), <cit.>, the bound in Lemma <ref>, (<ref>), and the fact that d_h(_h,_h)≥ 0 to obtain α_*ν_*|_h|_1,Ø^2≤ C_Vν_lip|T_h|_1,Ø|_h|__q^1(_h)|_h|_1,Ø + C_N|_h|_1,Ø^2|_h|_1,Ø. We now use the bound for _h in Theorem <ref> and (<ref>) to arrive at α_*ν_*|_h|^2_1,Ø ≤C_Vν_lipℭg_h_0,Ø|_h|__q^1(_h)β_*κ_*(β_*κ_*-C_Tκ_lip|T_h|__q^1(_h))|_h|^2_1,Ø + C_Nα_*ν_*f_h_0,Ø|_h|^2_1,Ø. The previous inequality allows us to conclude that (1-C_Vν_lipℭg_h_0,Ø |_h|__q^1(_h)α_*ν_*β_*κ_*(β_*κ_*-C_Tκ_lip|T_h|__q^1(_h)) - C_Nα_*^2ν_*^2f_h_0,Ø)|_h|^2_1,Ø≤ 0. Step 3. In view of (<ref>), we immediately conclude that _h = 0. We now invoke (<ref>) to obtain that T_h = 0. Finally, the inf-sup condition (<ref>) and the arguments developed in Step 2 show that _h = 0. This concludes the proof. § ERROR ESTIMATES FOR OUR VEM In the following, we derive error bounds for the formulation with virtual elements (<ref>). For this purpose, we will make the following regularity assumptions: A4) The solutions (,,T) ∈×× and the data f,g,κ,ν of system (<ref>) satisfy the following regularity properties for some 0<s≤ k: i) ∈𝐇^s+1(Ø), ∈^̋s(Ø), and T∈^̋s+1(Ø). ii) f∈𝐇^s+1(Ø) and g ∈^̋s+1(Ø). iii) ν(T) and κ(T) belong to ^s_∞(Ø). Before deriving a priori error estimates, we recall some preliminary approximation properties. Assume that A2) and A3) hold. Let ∈∩𝐇^s+1(Ø). Then, there exists _I∈ such that - _I_0,E + h| - _I|_1,E≲ h_E^s+1||_s+1,D(E). for all E ∈_h. Here, D(E) denote the union of the polygons in _h intersecting E and 0<s≤ k. Moreover, if ∈, then _I ∈. A proof of the error bound follows from the arguments in the proof of <cit.> in combination with the error bound of <cit.>. Assume that A2) and A3) hold. Let S ∈∩^̋s+1(Ø). Then, there exists S_I∈ such that S - S_I_0,E + h|S - S_I|_1,E≲ h_E^s+1|S|_s+1,D(E). for all E ∈_h. Here, D(E) denote the union of the polygons in _h intersecting E and 0<s≤ k. A proof of this error bound can be found in <cit.>. We will also make use of a Bramble–Hilbert Lemma <cit.>: Let 0 ≤ t ≤ s ≤ℓ+1 and 1 ≤ q,p ≤∞ such that s-2/p > t-2/q. Then, for any E∈_h, q_ℓ∈ℙ_ℓ(E) inf |S - q_ℓ |_W_q^t(E)≲ h_E^s-t+2/q-2/p|S|_W_p^s(E) ∀ S ∈ W_p^s(E). To simplify the presentation of the material, we will write the continuous and discrete stability bounds for (,T) and (_h,T_h) as in <cit.>. This is ||^2_1,Ø+|T|^2_1,Ø≤ C^2_estC^-2_data, |_h|^2_1,Ø+|T_h|^2_1,Ø≤C^2_estC^-2_data. As in <cit.>, we will also use the following notation here: We denote by 𝔄(·), 𝔅(·), ℭ(·), 𝔇(·), etc. generic constants that are independent of h, but may depend on appropriate norms of , p, T, ν, κ, f, g, Ω, or the polynomial degree k. We are now ready to state and prove the main result of this section. Let us assume that the assumptions A0), A1), A2), A3), and A4) hold. Let us also assume that the smallness assumptions of Proposition <ref> and Theorem <ref> hold. 
Let (,,T) ∈×× and (_h,_h,T_h) ∈_h×_h×_h be the unique solutions of problems (<ref>) and (<ref>), respectively. If C_final,T := CC_est/C_data( β_*κ_* - C_lip(T)/C_sol(T) )^-1 > 0, C_final,^-1:= ( α_*ν_* - C_N C_est/C_data - C_for - ^2 - C_lip()/C_sol()C_final,T) > 0, then the following a priori error estimates hold |-_h|_1,Ø+|T-T_h|_1,Ø ≤𝔐(ν,κ,,T)h^s + 𝔑(f,g)h^s+2 - _h_0,Ø ≤𝔒(ν,κ,,T,p)h^s + 𝔑(f,g)h^s+2. We follow the proof of <cit.> and proceed in several steps. Step 1. Interpolation error estimates. We introduce _I ∈_h and T_I ∈_h as the interpolants of ∈ and T ∈ given by Lemmas <ref> and <ref>, respectively. We also introduce _I ∈_h as follows: for each E ∈_h, _I|_E := Π^0,E_k-1. As a direct consequence of Lemmas <ref> and <ref> and the Bramble-Hilbert bound (<ref>), we obtain |-_I|_1,Ø≲ h^s||_s+1,Ø, |T-T_I|_1,Ø≲ h^s|T|_s+1,Ø, - _I_0,Ø≲ h^s| |_s,Ø. Define e_h := _I-_h, E_h := T_I-T_h, and _h := _I- _h. Since ∈, we have that _I ∈_h and thus that e_h ∈_h. In the following, we bound e_h, E_h, and _h. Step 2. An estimate for E_h. We start with the coercivity bound in (<ref>), the definition of E_h, namely E_h := T_I-T_h, and add and subtract 𝔞(T;T,E_h) to obtain β_*κ_*|E_h|_1,Ø^2 ≤𝔞_h(T_h;E_h,E_h) = 𝔞_h(T_h;T_I,E_h) - 𝔞_h(T_h;T_h,E_h) = 𝔞_h(T_h;T_I,E_h) - 𝔞(T;T,E_h) + 𝔞(T;T,E_h) - 𝔞_h(T_h;T_h,E_h). We now use the third equations of the continuous and discrete systems, (<ref>) and (<ref>), respectively, to arrive at the following estimate: β_*κ_*|E_h|_1,Ø^2≤[𝔞_h(T_h;T_I,E_h) - 𝔞(T;T,E_h)] + [ 𝔠_h^skew(_h;T_h,E_h) - 𝔠(;T,E_h)] + ( g-g_h,E_h)_0,Ω =: + + . Step 2.1. We estimate . To do so, we first rewrite using the definitions of 𝔞 and 𝔞_h, given in (<ref>) and (<ref>), respectively, and a localization argument and obtain = ∑_E∈_h{∫_Eκ(Π^0,E_k T_h) (^0,E_k-1∇ T_I - ∇ T) ·^0,E_k-1∇ E_h - ∫_Eκ(T)∇ T·∇ E_h + ∫_Eκ(Π^0,E_kT_h)∇ T ·^0,E_k-1∇ E_h + κ(Π^0,E_0 T_h) S_T^E((-Π^0,E_k)T_I,(-Π^0,E_k) E_h)}. We construct further differences as follows: = ∑_E∈_h{∫_Eκ(Π^0,E_k T_h)(^0,E_k-1∇ T_I - ∇ T)·^0,E_k-1∇ E_h + ∫_E ( κ(Π^0,E_k T_h) - κ(T) )∇ T ·^0,E_k-1∇ E_h - ∫_E κ(T)∇ T · ( ∇ E_h - ^0,E_k-1∇ E_h) + κ(Π^0,E_0 T_h) S_T^E((-Π^0,E_k)T_I,(-Π^0,E_k) E_h) } =: ∑_E∈_h ( _1^E + _2^E - _3^E + _4^E). Using the definition of ^0,E_k-1, _3^E can be rewritten as _3^E = ∫_E (𝐈 - ^0,E_k-1)(κ(T)∇ T) ·∇ E_h. As in <cit.>, the terms _1^E, _3^E, and _4^E can be controlled simultaneously under the assumption A0) and (<ref>). In fact, we have ∑_E∈_h (_1^E - _3^E + _4^E) ≤ C ∑_E∈_h( κ^*^0,E_k-1∇ T_I - ∇ T _0,E. . + (𝐈 - ^0,E_k-1)(κ(T)∇ T) _0,E + κ^*∇ ( - Π^0,E_k)T_I _0,E) ∇ E_h _0,E, where we used ^0,E_k-1∇ E_h _0,E≤∇ E_h _0,E and ∇ ( - Π^0,E_k) E_h _0,E≲∇ E_h _0,E. We now use the triangle inequality, Lemma <ref>, and a basic error bound for the 𝐋^2(E)-projection (see, e.g. <cit.>) to obtain ^0,E_k-1∇ T_I - ∇ T _0,E≤∇ (T - T_I) _0,E + (𝐈 - ^0,E_k-1) ∇ T _0,E≲ h_E^s |T|_1+s,D(E). An estimate for ∇ ( - Π^0,E_k)T_I _0,E can be derived in view of similar arguments: ∇ ( - Π^0,E_k)T_I _0,E≤∇ ( - Π^0,E_k) (T_I-T) _0,E + ∇ ( - Π^0,E_k) T _0,E ≲∇ (T_I-T) _0,E + ∇ ( - Π^0,E_k) T _0,E≲ h_E^s |T|_1+s,D(E). Finally, we bound (𝐈 - ^0,E_k-1)(κ(T)∇ T) _0,E as follows: (𝐈 - ^0,E_k-1)(κ(T)∇ T) _0,E≲ h_E^s | κ(T)∇ T |_s,E≲ h_E^s κ(T) _W_∞^s(E) | T |_s+1,E. If we substitute the estimates (<ref>), (<ref>), and (<ref>) into (<ref>) we arrive at the bound ∑_E∈_h (_1^E - _3^E + _4^E) ≤𝔄(κ,T) h^s ∇ E_h _0,Ω, where 𝔄 = 𝔄(κ,T) is a constant that depends on κ and T. We now turn to the derivation of a bound for _2^E. 
For this purpose, we invoke Hölder's inequality (1/p + 1/q +1/2 =1, where q = 2 + ε >2 is given by the uniqueness assumptions of Proposition <ref>) and the stability of the Ł^2(E)-projection in Ł^2(E) and Ł^p(E) to obtain _2^E ≤κ_lipT-Π^0,E_kT_h_Ł^p(E)|T|__q^1(E)∇ E_h_0,E≤κ_lip ( T- Π^0,E_kT _Ł^p(E) + Π^0,E_k(T - T_h) _Ł^p(E))|T|__q^1(E)∇ E_h_0,E≲κ_lip ( T- Π^0,E_kT _Ł^p(E) + T - T_h _Ł^p(E))|T|__q^1(E)∇ E_h_0,E. A standard error bound yields T- Π^0,E_kT _Ł^p(E)≲ h_E^s |T|__s^p(E). On the other hand, we have T - T_h _Ł^p(E)≤ T - T_I _Ł^p(E) + T_I - T_h _Ł^p(E). Substituting these estimates into (<ref>), we obtain _2^E ≲κ_lip ( h_E^s |T|__s^p(E) + T - T_I _Ł^p(E) + E_h _Ł^p(E) )|T|__q^1(E)∇ E_h_0,E, where we also used that E_h = T_I - T_h. If we sum over all the elements in E in _h and apply Hölder's inequality (1/p + 1/q +1/2 =1), we obtain _2 := ∑_E∈_h_2^E ≲κ_lip ( h^s |T|__s^p(Ω) + h^s| T |_s+1,Ω+ E_h _Ł^p(Ω) )|T|__q^1(Ω)∇ E_h_0,Ω where we have used that T - T_I _Ł^p(Ω)≲∇(T - T_I) _0,Ω≲ h^s | T |_s+1,Ω, which follows from a basic Sobolev embedding and the error bound in Lemma <ref>. As a result, _2 ≤C_lip(T)/C_sol(T)∇ E_h^2_0,Ω + 𝔄(κ,T)h^s∇ E_h_0,Ω. C_sol(T) comes from Prop. <ref>: |T|__q^1(Ω) < C_sol(T)^-1 := (κ_lip𝒞_q/κ_*)^-1, and C_lip(T)>0 is a suitable constant. In a final step, we combine (<ref>) with (<ref>) and obtain = ∑_E∈_h (_1^E + _2^E - _3^E + _4^E) ≤C_lip(T)/C_sol(T)∇ E_h^2_0,Ω + 𝔄(κ,T)h^s∇ E_h_0,Ω. Step 2.2. Let us now control in (<ref>). As a first step, we note that according to Remark <ref>, 𝔠(,T;E_h) = 𝔠^skew(,T;E_h) because ∈. We use this property, the fact that 𝔠_h^skew(_h;T_h,E_h) = 𝔠_h^skew(_h;T_h -T_I,E_h) + 𝔠_h^skew(_h;T_I,E_h) = 𝔠_h^skew(_h;T_I,E_h), and add subtract 𝔠_h^skew(;T,E_h) to rewrite as follows: = 𝔠_h^skew(_h;T_I - T,E_h) + 𝔠_h^skew(_h - ;T,E_h) + 𝔠_h^skew(;T,E_h) - 𝔠^skew(;T,E_h). Define _1:= 𝔠_h^skew(_h;T_I - T,E_h) + 𝔠_h^skew(_h - ;T,E_h). To bound _1, we first use the estimate (<ref>) and obtain II_1 ≤ℭ( |_h|_1,Ø|T-T_I|_1,Ø + |-_h|_1,Ø|T|_1,Ø) |E_h|_1,Ø. We now use the interpolation error bounds of Lemmas <ref> and <ref> to arrive at II_1 ≤ℭ C_est/C_data |e_h|_1,Ø|E_h|_1,Ø + 𝔅(u,T)h^s|E_h|_1,Ø, where we also used (<ref>). Define _2^E:= 𝔠_h^skew,E(;T,E_h)-𝔠^skew,E(;T,E_h), for E ∈_h, and _2:= 𝔠_h^skew(;T,E_h) - 𝔠^skew(;T,E_h). We note that _2^E = 1/2[ ∫_E ( ^0,E_k ·^0,E_k-1∇ T ) Π^0,E_k E_h - ∫_E ( ·∇ T ) E_h ] - 1/2[ ∫_E ( ^0,E_k ·^0,E_k-1∇ E_h ) Π^0,E_k T - ∫_E ( ·∇ E_h ) T ] =: 1/2_2,a^E - 1/2_2,b^E. The control of _2,a^E and _2,b^E follow from the arguments given in the proof of <cit.>. If we sum the obtained bounds over all elements in E in _h and apply a suitable Hölder's inequality, we arrive at II_2 ≤𝔅(,T)h^s|E_h|_1,Ø. A collection of the bounds (<ref>) and (<ref>) shows that II≤ℭ C_est/C_data |e_h|_1,Ø|E_h|_1,Ø + 𝔅(u,T)h^s|E_h|_1,Ø Step 2.3. We have now arrived at the estimation of term . The bound for this term follows from the arguments in <cit.>: ≲ 𝔊(g) h^s+2|E_h|_1,Ø, Step 2.4. Finally, if we substitute the bounds for , , and obtained in (<ref>), (<ref>) and (<ref>), respectively, into (<ref>) we obtain |E_h|_1,Ø≤ C_final,T |e_h|_1,Ø + ℭ(κ,,T)h^s + 𝔊(g) h^s+2. Step 3. An estimate for e_h. We start with the coercivity bound in (<ref>) for a_h(·;·,·), the definition of e_h, namely e_h = u_I - u_h, and add and subtract the term a(T;,e_h) to obtain α_*ν_*|e_h|_1,Ø^2 ≤ a_h(T_h;e_h,e_h) = a_h(T_h;_I,e_h)-a_h(T_h;_h,e_h) = a_h(T_h;_I,e_h) - a(T;,e_h) + a(T;,e_h) - a_h(T_h;_h,e_h). 
We now use the first equations of the continuous and discrete systems, (<ref>) and (<ref>), respectively, to obtain α_*ν_*|e_h|_1,Ø^2 ≤ [ a_h(T_h;_I,e_h) - a(T;,e_h) ] + [ c_N,h^skew(_h;_h,e_h) - c_N(;,e_h) ] + [ c_F,h(_h;_h,e_h) - c_F(;,e_h) ] + [d_h(_h,e_h)-d(,e_h)] + (f-f_h,e_h)_0,Ω, where we have also used that e_h = u_I - u_h ∈_h ⊆𝐙; see Remark <ref>. Step 3.1. Define ℑ_1 := [ a_h(T_h;_I,e_h) - a(T;,e_h) ]. The control of ℑ_1 follows from <cit.>: ℑ_1 ≤C_lip()/C_sol() |E_h|_1,Ø|e_h|_1,Ø + 𝔇(ν,,T)h^s|e_h|_1,Ø. Here, the constant C_sol() comes from Prop. <ref>: C_sol()|u|_𝐖_q^1(Ω) < 1, and C_lip() denotes a suitable positive constant. Step 3.2. Define ℑ_2:= c_N,h^skew(_h;_h,e_h) - c_N(;,e_h). Note that c^skew_N(;,e_h) = c_N(;,e_h) because ∈; see Remark <ref>. So we rewrite the term ℑ_2 as follows: ℑ_2 = [c_N,h^skew(_h;_h,e_h) - c_N,h^skew(;,e_h)] + [c_N,h^skew(;,e_h) - c^skew_N(;,e_h)]. Since ∈∩𝐇^s+1(Ω) and e_h ∈, we can apply <cit.> to obtain ℑ_2,b := c_N,h^skew(;,e_h) - c^skew_N(;,e_h) ≲𝔈()h^s|e_h|_1,Ø. Define ℑ_2,a:= c_N,h^skew(_h;_h,e_h) - c_N,h^skew(;,e_h). Apply <cit.> to obtain ℑ_2,a≤C_N C_est/C_data|e_h|^2_1,Ø + C_N ( C_est/C_data + C_est/C_data) 𝔈()h^s|e_h|_1,Ø. If we combine the previously derived bounds, we arrive at ℑ_2 ≤C_N C_est/C_data |e_h|^2_1,Ø + 𝔈()h^s|e_h|_1,Ø. Step 3.3. Define ℑ_3:= c_F,h(_h;_h,e_h) - c_F(;,e_h). We add and subtract c_F,h(;,e_h) to obtain ℑ_3 = [c_F,h(_h;_h,e_h)- c_F,h(;,e_h)] + [c_F,h(;,e_h) - c_F(;,e_h)] =: ℑ_3,a + ℑ_3,b. We bound ℑ_3,a with the help of <cit.>: ℑ_3,a≤ C_for(𝔈()h^s|e_h|_1,Ø + |e_h|^2_1,Ø), where C_for = C [ (C_est/C_data)^r-2 + (C_est/C_data)^r-2] and C is the constant in <cit.>. Since u∈∩𝐇^s+1(Ω), a bound for ℑ_3,b follows from <cit.>: ℑ_3,b≤ Ch^s (|u|_s+1,Ω + |u|_s,Ω) |u|^r-2_1,Ø|e_h|_1,Ø≤𝔈()h^s|e_h|_1,Ø. If we combine the bounds (<ref>) and (<ref>), we obtain ℑ_3≤ C_for |e_h|^2_1,Ø + 𝔈()h^s|e_h|_1,Ø. Step 3.4. Define ℑ_4 := d_h(_h,e_h)-d(,e_h). The control of ℑ_4 is standard. To derive an estimate, we first analyze the local term ℑ_4^E := d_h^E(_h,e_h)-d^E(,e_h). In view of (^0,E_k_h, ^0,E_ke_h - e_h)_0,E = 0, we rewrite ℑ_4^E as follows: ℑ_4^E = (^0,E_k_h - , ^0,E_ke_h)_0,E + (, ^0,E_ke_h - e_h)_0,E = (^0,E_k_h - , ^0,E_ke_h)_0,E + ( - ^0,E_k_h, ^0,E_ke_h - e_h)_0,E = (^0,E_k_h - ,e_h)_0,E. As a result, ℑ_4^E ≤^0,E_k_h - _0,Ee_h _0,E. We control ^0,E_k_h - _0,E as follows: ^0,E_k_h - _0,E≤_h - _0,E + ^0,E_k - _0,E ≤ - _I _0,E + e_h _0,E + ^0,E_k - _0,E. A basic bound for ^0,E_k - _0,E and an application of Lemma <ref> thus show that ℑ_4^E ≤e_h ^2_0,E + Ch_E^s+1 |u |_s+1,D(E)e_h _0,E. If we sum over all the elements in E in _h and apply Hölder's inequality for sums, we obtain an estimate for ℑ_4: ℑ_4 ≤^2 |e_h|_1,Ω^2 + 𝔈()h^s+1|e_h|_1,Ω, where we have also used the Poincaré inequality (<ref>). Step 3.5. Define ℑ_5 := (f-f_h,e_h)_0,Ω. An estimate for ℑ_5 is direct: ℑ_5 ≲𝔉(f) h^s+2 | e_h |_1,Ω. Step 3.6. A final estimate for |e_h|_1,Ω. Replace the estimates (<ref>), (<ref>), (<ref>), (<ref>), and (<ref>) into the bounded derived for |e_h|_1,Ø^2 to obtain ( α_*ν_* - C_N C_est/C_data - C_for - ^2 ) |e_h|^2_1,Ø≤C_lip()/C_sol() |E_h|_1,Ø|e_h|_1,Ø + 𝔇(ν,,T) h^s|e_h|_1,Ø + 𝔉(f)h^s+2 |e_h|_1,Ø, We now replace the bound (<ref>) for |E_h|_1,Ø into the previous estimate to obtain |e_h|_1,Ø≤ C_final,( 𝔐(ν,κ,,T) h^s + 𝔑(f,g)h^s+2). Step 4. A final estimate for |E_h|_1,Ω. We replace (<ref>) into the bound (<ref>) to finally arrive at |E_h|_1,Ω≤ C_final,TC_final,u( 𝔐(ν,κ,,T) h^s + 𝔑(f,g)h^s+2) + ℭ(κ,,T)h^s + 𝔊(g) h^s+2≤𝔐(ν,κ,,T) h^s + 𝔑(f,g)h^s+2. 
Step 5. An estimate for _h. In this last step, we derive an error estimate for the pressure error _h_L^2(Ω). To do so, we let _h ∈_h and use the definition of _h, namely _h = _I - _h and the first equations of the continuous and discrete systems (<ref>) and (<ref>), respectively, to obtain b(_h,_h)=b(_h, _I)-b(_h, _h) = b(_h, _I -) + b(_h, )-b(_h, _h) = b(_h, _I- ) + [a_h(T_h;_h,_h)-a(T;,_h)] + [c_N,h^skew(_h;_h,_h)-c_N^skew(;,_h)] + [c_F,h(_h;_h,_h)-c_F(;,_h)] + [d_h(_h,_h)-d(,_h)] + (f-f_h),_h)_0,Ø. Step 5.1. Define 𝔎_1:=[a_h(T_h;_h,_h)-a(T;,_h)]. The control of the term 𝔎_1 can be found in <cit.>: 𝔎_1 ≲ |-_h|_1,Ø|_h|_1,Ø + |T-T_h|_1,Ø|_h|_1,Ø + 𝔇(ν,u,T)h^s|_h|_1,Ø ≤𝔐(ν,κ,,T)h^s|_h|_1,Ø + 𝔑(f,g)h^s+2|_h|_1,Ø, where we have used (<ref>) and (<ref>) to obtain the last bound. Step 5.2. Define 𝔎_2:= c_N,h^skew(_h;_h,_h)-c_N^skew(;,_h). An estimate for the term 𝔎_2 can be obtained as follows. First, we rewrite 𝔎_2 as 𝔎_2 = [ c_N,h^skew(_h;_h,_h) - c_N,h^skew(;,_h) ] + [ c_N,h^skew(;,_h) - c_N^skew(;,_h)] = [ c_N,h^skew(_h - ;_h,_h) - c_N,h^skew(; - _h,_h) ] + [ c_N,h^skew(;,_h) - c_N^skew(;,_h)]. Define 𝔎_2,a:= c_N,h^skew(_h - ;_h,_h) - c_N,h^skew(; - _h,_h). A bound for 𝔎_2,a follows from the use of the bound (<ref>) and the error estimate (<ref>). In fact, we have 𝔎_2,a ≤C_N( |_h|_1,Ø + ||_1,Ø ) | - _h|_1,Ø |_h|_1,Ø ≤𝔐(ν,κ,,T)h^s|_h|_1,Ø + 𝔑(f,g)h^s+2|_h|_1,Ø. It thus suffices to bound 𝔎_2,b:= c_N,h^skew(;,_h) - c_N^skew(;,_h). Since ∈𝐇^s+1(Ω) ∩, we can use the bound in <cit.> and obtain 𝔎_2,b≤𝔈()h^s|_h|_1,Ø. This bound and (<ref>) yield the control of 𝔎_2: 𝔎_2≤𝔐(ν,κ,,T)h^s|_h|_1,Ø + 𝔑(f,g)h^s+2|_h|_1,Ø. Step 5.3. Define 𝔎_3:= c_F,h(_h;_h,_h)-c_F(;,_h). We add and subtract the term c_F,h(;,_h) and rewrite 𝔎_3 as follows: 𝔎_3 = [c_F,h(_h;_h,_h)-c_F,h(;,_h)] + [c_F,h(;,_h)-c_F(;,_h)]. Define 𝔎_3,a = c_F,h(_h;_h,_h)-c_F,h(;,_h). A direct application of the bound in <cit.> shows that 𝔎_3,a≤ C(|_h|_1,Ø^r-2 + ||_1,Ø^r-2)|-_h|_1,Ø|_h|_1,Ø≤ C_for|-_h|_1,Ø|_h|_1,Ø ≤𝔐(ν,κ,,T)h^s|_h|_1,Ø + 𝔑(f,g)h^s+2|_h|_1,Ø. Define 𝔎_3,b :=c_F,h(;,_h)-c_F(;,_h). Since ∈∩𝐇^s+1(Ω), we can directly apply the bound from <cit.> to obtain 𝔎_3,b≤ C h^s(||_s,Ø+||_1+s,Ø)||_1,Ø^r-2|_h|_1,Ø≤𝔈() h^s |_h|_1,Ø. The two estimates derived above allow us to control the term 𝔎_3: 𝔎_3 ≤𝔐(ν,κ,,T)h^s|_h|_1,Ø + 𝔑(f,g)h^s+2|_h|_1,Ø. Step 5.4. Define 𝔎_4 := d_h(_h,_h)-d(,_h) and 𝔎_4^E := d_h^E(_h,_h)-d^E(,_h) for E ∈_h. Following the arguments that lead to (<ref>), we deduce that 𝔎_4^E = (^0,E_k_h - ,v_h)_0,E ≤^0,E_k_h - _0,E |v_h|_1,E ≤ ( _h - _0,E + ^0,E_k - _0,E ) |v_h|_1,E. If we sum over all the elements E ∈_h and apply Hölder's inequality, we obtain 𝔎_4 ≲ ( | _h - |_1,Ω + ^0,E_k - _0,Ω ) |v_h|_1,Ω ≤𝔐(ν,κ,,T)h^s|_h|_1,Ø + 𝔑(f,g)h^s+2|_h|_1,Ø. Step 5.5. Define 𝔎_5 := (f-f_h,_h)_0,Ø and 𝔎_6 := b(_h, _I - ). The term 𝔎_5 was already estimated in (<ref>). A bound for the term 𝔎_6 follows from the definition of _I and the Bramble–Hilbert bound (<ref>): 𝔎_6 ≲𝔓() h^s |_h|_1,Ø. Step 5.6. In view of the discrete inf-sup condition (<ref>), the bounds (<ref>), (<ref>), (<ref>), and (<ref>), we conclude - _h_0,Ø≤𝔒(ν,κ,,T,)h^s + 𝔑(f,g)h^s+2. This concludes the proof. § NUMERICAL EXPERIMENTS In this section, we report some numerical experiments to evaluate the performance of the proposed VEM. Our goal is to compute the experimental convergence rates in the norms used in the theoretical analysis. 
The results of this section were obtained using a MATLAB code with k=2, where the nonlinear problem (<ref>) was solved using the fixed-point iteration described in Algorithm 1. The refinement parameter N used to characterize each mesh is the number of elements on each edge of Ω. In Figure <ref> we present examples of the meshes we will use for our tests. §.§ Fixed-point iteration Let us describe the fixed-point iteration used to solve the coupled problem (<ref>). Algorithm 1: Fixed-point iteration 13cm0.4pt Input: Initial mesh _h, initial guess (_h^0,_h^0,T_h^0)∈_h ×_h×_h, the coefficients ν(·) and κ(·), f_h ∈ [Ł^2(Ø)]^2, g_h∈Ł^2(Ø), r∈[3,4], and =10^-6. 1: For n≥0, find (_h^n+1,_h^n+1)∈_h×_h such that a_h(T_h^n;_h^n+1,_h) + c_N,h^skew(_h^n;_h^n+1,_h) + c_F,h(_h^n;_h^n+1,_h) + d_h(_h^n+1,_h) + b(_h,_h^n+1) = (f_h,_h)_0,Ø, b(_h^n+1,_h) = 0, for every (_h,_h)∈_h×_h. Then, T_h^n+1∈_h is found as the solution of 𝔞_h(T_h^n;T_h^n+1,S_h)+𝔠_h^skew(_h^n+1;T_h^n+1,S_h)=(g_h,S_h)_0,Ø, for every S_h∈_h. 2: If |(_h^n+1,_h^n+1,T_h^n+1)-(_h^n,_h^n,T_h^n)|>, set n← n+1 and go to step 1. Otherwise, return (_h,_h,T_h)=(_h^n+1,_h^n+1,T_h^n+1). Here, |·| denotes the Euclidean norm. 13cm0.4pt To complete the proposed VEM (<ref>), we need to describe the bilinear forms S_V^E(·,·) and S_T^E(·,·) that satisfy (<ref>) and (<ref>), respectively. For this purpose, we proceed as in <cit.> and use the so-called dofi–dofi stabilization, which is defined as follows: for each E, we denote by _h, _h and T⃗_h, R⃗_h, the real-valued vectors containg the values of the local degrees of freedom associated with _h, _h in and with T_h, R_h in _h, respectively. Then, we set S_V^E(_h,_h):=_h·_⃗h⃗, S_T^E(T_h,R_h):=T⃗_⃗h⃗·R⃗_⃗h⃗. To calculate the error between the exact velocity component of the solution and the corresponding approximation obtained with our VEM, namely, _h, we will use the following computable quantity |e_|_1,Ø:=(∑_E∈_h|-_2^0,E_h|_1,E^2)^1/2. A similar expression is used to calculate the error between T and T_h. Finally, to calculate the pressure error we consider e__0,Ø:=-_h_0,Ø. §.§ Unit square Let us first test our method on the unit square Ø=(0,1)^2. The viscosity coefficient ν, the thermal diffusivity coefficient κ, and the parameter r are as follows <cit.>: * Test 1: ν(T)=1+T, κ(T)=1+sin(T), and r=3. * Test 2: ν(T)=1+e^-T, κ(T)=2+sin(T), and r=4. The data f and g are chosen so that the exact solution to problem (<ref>) is (x_1,x_2)=(-x_1^2(x_1-1)^2x_2(x_2-1)(2x_2-1),x_2^2(x_2-1)^2x_1(x_1-1)(2x_1-1)), (x_1,x_2)=x_1x_2(1-x_1)(1-x_2)-1/36, and T(x_1,x_2)=x_1^2x_2^2(1-x_1)^2(1-x_2)^2.¨ In Figures <ref> and <ref>, we present the experimental convergence rates in the semi-norm for the velocity error and the temperature error and in the Ł^2-norm for the pressure error for tests 1 and 2. We have calculated these errors for the six families of meshes shown in Figure <ref> with different levels of refinement. The plots also include a reference line with slope -2, which indicates the optimal convergence rate of the method. Figures <ref> and <ref> show that the theoretical predictions of Theorem <ref> are confirmed. We also note that optimal experimental convergence rates are obtained for each of the mesh families considered. This is computational evidence of the robustness of the VEM with respect to the geometry of the mesh. Figure <ref> shows the solution obtained with our method using a particular family of polygonal meshes. The next goal is to prove computationally that our VEM is divergence-free. 
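Since the exact solution of Test 1 is given in closed form, the data f and g that balance the system (<ref>) can be generated symbolically; the following sketch (not the MATLAB code used for the experiments) derives them for Test 1 and also confirms that the prescribed velocity is divergence-free, which is the property examined next at the discrete level.

import sympy as sp

x1, x2 = sp.symbols('x1 x2')

# exact solution of Test 1 on the unit square
u = sp.Matrix([-x1**2 * (x1 - 1)**2 * x2 * (x2 - 1) * (2 * x2 - 1),
                x2**2 * (x2 - 1)**2 * x1 * (x1 - 1) * (2 * x1 - 1)])
p = x1 * x2 * (1 - x1) * (1 - x2) - sp.Rational(1, 36)
T = x1**2 * x2**2 * (1 - x1)**2 * (1 - x2)**2
nu, kappa, r = 1 + T, 1 + sp.sin(T), 3          # Test 1 coefficients

grad = lambda s: sp.Matrix([sp.diff(s, x1), sp.diff(s, x2)])
grad_u = sp.Matrix([[sp.diff(u[i], var) for var in (x1, x2)] for i in range(2)])

# the prescribed velocity is divergence-free
assert sp.expand(sp.diff(u[0], x1) + sp.diff(u[1], x2)) == 0

# momentum forcing: f = -div(nu grad u) + (u . grad)u + u + |u|^{r-2} u + grad p
visc = sp.Matrix([sum(sp.diff(nu * grad_u[i, j], (x1, x2)[j]) for j in range(2)) for i in range(2)])
f = -visc + grad_u * u + u + sp.sqrt(u.dot(u))**(r - 2) * u + grad(p)

# heat forcing: g = -div(kappa grad T) + u . grad T
g = -sum(sp.diff(kappa * sp.diff(T, var), var) for var in (x1, x2)) + u.dot(grad(T))

# f and g can now be lambdified and sampled at the quadrature points of each element
f_num = sp.lambdify((x1, x2), list(f), 'numpy')
g_num = sp.lambdify((x1, x2), g, 'numpy')
print(g_num(0.5, 0.5))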
To accomplish this task, we calculate the value of the Ł^2-norm of the discrete divergence of the velocity field u_h. In Table <ref>, we present the values of ‖÷ u_h‖_0,Ø that we computed for five mesh families at different refinement levels. As the results in Table <ref> show, the method is indeed divergence-free, and this again is independent of the polygonal mesh used.
§.§ Influence of the viscosity In this test, we investigate the influence of the viscosity coefficient ν(·) on the calculation of the error in the approximation of the velocity, pressure, and temperature variables. It is well known that most standard approximation techniques are not robust with respect to this coefficient. To perform this study, we choose different values of ν and investigate the experimental convergence rates on different polygonal meshes. We use a configuration similar to that of the previous tests: Ø=(0,1)^2, s=3, and the solution (u, p, T) as previously described. For simplicity, we assume that κ=1 and consider constant values for the viscosity parameter: ν=10^-1, 10^-4, 10^-8. Our results are shown in Figures <ref>, <ref> and <ref>. In Figure <ref>, we observe that the optimal experimental convergence rate for the velocity error is no longer achieved when the viscosity coefficient becomes smaller. This phenomenon occurs for different meshes, which leads to the conclusion that the loss of convergence for the velocity does not depend on the geometry of the mesh, but only on the physical parameter. A different behavior is observed for the temperature error and the pressure error: the optimal convergence rate is attained even when the viscosity becomes smaller. We would like to note that this is independent of the considered polygonal mesh. The results obtained for the velocity error and the pressure error are in agreement with the numerical tests reported in <cit.>. Our results also show that the method behaves in the same way for the temperature error as for the pressure error. We emphasize that the derivation of an optimal error estimate that is robust with respect to the coefficients ν(·) and κ(·) is not analyzed in our work and is a valuable direction for future extensions of the study presented here.
Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Data availability Data will be made available on request.
http://arxiv.org/abs/2409.02545v1
20240904090201
UniTT-Stereo: Unified Training of Transformer for Enhanced Stereo Matching
[ "Soomin Kim", "Hyesong Choi", "Jihye Ahn", "Dongbo Min" ]
cs.CV
[ "cs.CV" ]
§ ABSTRACT Unlike other vision tasks where Transformer-based approaches are becoming increasingly common, stereo depth estimation is still dominated by convolution-based approaches. This is mainly due to the limited availability of real-world ground truth for stereo matching, which is a limiting factor in improving the performance of Transformer-based stereo approaches. In this paper, we propose UniTT-Stereo, a method to maximize the potential of Transformer-based stereo architectures by unifying self-supervised learning used for pre-training with a stereo matching framework based on supervised learning. To be specific, we explore the effectiveness of reconstructing features of masked portions in an input image and at the same time predicting corresponding points in another image from the perspective of locality inductive bias, which is crucial in training models with limited training data. Moreover, to address these challenging tasks of reconstruction-and-prediction, we present a new strategy to vary the masking ratio when training the stereo model with stereo-tailored losses. State-of-the-art performance of UniTT-Stereo is validated on various benchmarks such as the ETH3D, KITTI 2012, and KITTI 2015 datasets. Lastly, to investigate the advantages of the proposed approach, we provide a frequency analysis of feature maps and an analysis of locality inductive bias based on attention maps. § INTRODUCTION Stereo matching remains fundamental for various computer vision applications, including autonomous driving, 3D reconstruction, and the recognition of objects <cit.>. The goal is to estimate a pixel-wise disparity map from two (or more) images capturing the same scene from distinct viewpoints, typically achieved by computing disparity from corresponding pixels. The process of stereo matching is divided into two main parts: (1) feature matching and (2) disparity refinement. The key is to calculate the matching cost from an image pair for feature matching and refine it accurately to obtain a reliable disparity map, considering challenges such as low-texture areas and occlusions. While most approaches <cit.> adopt convolutional neural networks (CNNs) for extracting stereo features and aggregating the cost volume, recent studies <cit.> have attempted to utilize the Transformer architecture, which is known to have superior representation capabilities and larger receptive fields compared to traditional CNNs. It is reported that attention mechanisms within the Transformer framework can effectively replace traditional cost volume approaches. This enables dense computation of the correlation between two high-resolution feature maps without being constrained by a pre-defined disparity search range, unlike cost volume approaches. Nevertheless, the performance of Transformer-based stereo approaches is at best comparable to, or even inferior to, that of convolution-based approaches, which means that the Transformer architecture is not yet fully utilized in the context of stereo matching. To maximize the advantages of the Transformer while addressing its under-utilization in stereo matching, the characteristics of the Transformer and of the stereo task need to be examined thoroughly.
Recent research on the Transformer <cit.> suggests that it demands more training data for ensuring convergence due to the lack of inductive bias compared to CNNs, which benefit from structural characteristics like local receptive fields. In contrast, a deep-seated challenge of the stereo task is the limited availability of real-world ground truth, primarily due to the requirements of specialized equipment such as active range sensors (e.g., LiDAR), leading to increased cost and complexity of collecting large-scale labeled training data. Thus, it is crucial to resolve the inductive bias deficit and effectively use stereo information from limited stereo training data when utilizing the Transformer in the stereo matching task. In this context, we propose a novel approach, UniTT-Stereo, which stands for Unified Training of Transformer for enhanced stereo matching. Our model unifies self-supervised learning methods <cit.>, traditionally used for pre-training, with stereo matching framework based on supervised learning, enabling an effective learning tailored to Transformer based stereo matching. Our approach partially masks a left image and uses its remaining portions along with a right image to simultaneously reconstruct the original left image and estimate depth values for all pixels. It is important to note that our experiments revealed that simply introducing the masking-and-reconstruction methodology does not necessarily improve performance. Therefore, our approach leverages a Variable Masking Ratio within the unified network, enabling the model to learn richer and more diverse information. Higher masking ratios facilitate the learning process during reconstruction, while lower masking ratios are advantageous for precise and detailed depth prediction. This balance ensures that our model can effectively capture both broad and fine-grained details, further enhancing its performance. We also adapt stereo-tailored losses to fully utilize the limited stereo information as mentioned earlier. We employ three synergistic loss functions at the final output level, RGB level, and feature level. Remarkably, we achieve improvements without the need for additional parameters or frameworks, relying solely on these well-designed approaches and stereo-tailored losses. We also provided detailed analysis on the proposed approach at various aspects. First, the locality inductive bias of our approach using the reconstruction-and-prediction task is examined and compared with existing Transformer based stereo matching methods by computing attention distances (Fig. <ref>). This will be further validated by visualizing the attention maps of cross attention module between left and right features. Additionally, the ability to capture local patterns effectively may be related to exploiting high-frequency spatial information, which offers advantages in stereo tasks by improving the accuracy of depth estimation, particularly at object boundaries and fine details. To investigate how our method effectively amplifies high-frequency information, we perform Fourier analysis on the decoder's feature map used in the disparity head, in Fig. <ref>. Detailed explanations are provided in Analysis section. We achieve state-of-the-art results on ETH3D <cit.>, KITTI 2012 <cit.> and KITTI 2015 <cit.> datasets, demonstrating the effectiveness of our method. 
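As a concrete handle on the frequency analysis mentioned above, the relative amount of high-frequency content in a decoder feature map can be probed with a standard 2D Fourier transform. The sketch below is only a generic recipe along these lines, not the exact procedure used for the figures; the cutoff value is an arbitrary illustration.

```python
import torch

def high_frequency_ratio(feature_map, cutoff=0.25):
    """Fraction of spectral energy above a normalized radial frequency cutoff.

    feature_map: tensor of shape (C, H, W), e.g. one decoder feature map.
    Returns a value in [0, 1]; larger values mean more high-frequency content.
    """
    _, H, W = feature_map.shape
    spectrum = torch.fft.fftshift(torch.fft.fft2(feature_map), dim=(-2, -1))
    energy = spectrum.abs() ** 2
    # Normalized radial frequency of every spatial position after fftshift.
    fy = torch.linspace(-0.5, 0.5, H).view(H, 1).expand(H, W)
    fx = torch.linspace(-0.5, 0.5, W).view(1, W).expand(H, W)
    radius = torch.sqrt(fx ** 2 + fy ** 2)
    high = energy[..., radius > cutoff].sum()
    return (high / energy.sum()).item()
```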
Our key contributions are as follows: * We examine the impact of the reconstruction-and-prediction approach on stereo depth estimation and propose a unified training approach based on these insights. * We enhanced performance by introducing a stereo-tailored combination of loss functions from multiple perspectives: feature, RGB, and disparity. * Through extensive analyses and experiments, we validate that our model effectively leverages Transformer for stereo matching. § RELATED WORKS §.§ Dense prediction with Transformer Fully convolutional networks <cit.> serve as the backbone for dense prediction, with various adaptations proposed over time. These architectures commonly depend on convolution and downsampling as fundamental components for acquiring multiscale representations, enabling the incorporation of a substantial contextual understanding. However, the low resolution in the deeper layer causes difficulty in dense prediction, so there have been many researches to maintain high resolution. Transformers <cit.>, based on the self-attention mechanism, demonstrate success with high-capacity architectures trained on extensive datasets. Since the Vision Transformer <cit.> adapts this mechanism to the image domain successfully but not in dense prediction, two main approaches have appeared. One is to design a specialized Transformer fitted to the dense prediction task <cit.>, and the other is to use a plain Vision Transformer and the customized decoder for dense prediction. Dense Prediction Transformer <cit.> used the latter method and achieved state-of-the-art performance in 2021. We propose an approach to fully utilize transformer-based architecture for stereo depth estimation. §.§ Masked image modeling Masked image modeling (MIM) is a technique for self-supervised representation learning <cit.> using images that have masked parts. In this approach, some of the tokenized input sequence is replaced with trainable mask tokens, and the model is trained to predict the missing context based solely on the visible context. This approach, which does not require labels, is widely used for pre-training. SimMIM <cit.> and MAE <cit.> suggest that random masking with a higher mask ratio (e.g. 90%) or size can perform well for self-supervised pretraining from image data. Recently, MTO <cit.> has improved pre-training efficiency by optimizing masked tokens, while SBAM <cit.> has introduced a dynamic approach to the process with a saliency-based adaptive masking strategy that adjusts masking ratios according to the salience of the tokens. CroCo <cit.> and CroCo v2 <cit.> introduced a novel self-supervised pretraining approach exclusively designed for 3D tasks, reconstructing the masked image using the reference image. One of the advantages of Transformers is the abundance of these well-pretrained models available for use. Several studies <cit.> have investigated the effects and what the model learns from MIM as a pretraining method compared to other approaches like contrastive learning. Meanwhile, through experimentation, we have identified how MIM can impact stereo tasks and the strategies to actively leverage MIM for the specific task of stereo depth estimation. §.§ Stereo depth estimation Stereo depth estimation is extensively used in fields such as autonomous driving <cit.>, robotics <cit.>, where accurate depth data is essential for navigation and object detection, and it is also increasingly employed as ground truth labels in monocular depth estimation tasks <cit.>. 
Stereo depth estimation requires predicting a pixel-wise dense disparity map, capturing detailed and fine information, especially for boundary regions. In traditional deep stereo matching methods, the primary steps involve four components: feature extraction, cost volume creation, feature matching, and disparity regression. To enhance either accuracy or speed, researchers have proposed several strategies to improve these four components. 3D correlation cost volume <cit.> or 4D concatenation cost volume <cit.> can be constructed to measure the similarity between two views. Several studies <cit.> have adopted iterative methods to construct disparity maps, and concurrent work <cit.> has also improved performance using this approach. Recent studies <cit.> have utilized cross-attention mechanisms to enable the exchange of information between different images instead of cost volume. We improve the performance by applying optimized approaches from an analytical perspective on the compatibility between Transformer architecture and stereo depth estimation task. § PROPOSED METHOD We introduce MIM for effective learning by utilizing pairs of a masked left image and an unmasked right image. Unlike pre-training that focuses solely on reconstruction, our goal is to improve the performance of specific downstream tasks, and thus we consider the need for a more suitable masking method. To this end, we introduce variable ratio masking through a truncated normal distribution. After both image tokens pass through the Transformer encoder, we use cross-attention modules for inter-image information exchange. Finally, the model outputs a disparity map and a reconstructed image through respective heads. Our training process involves three losses: feature consistency loss, image reconstruction loss, and disparity loss. Fig. <ref> shows the overall architecture of the proposed method. Additionally, we provide an analysis of how our approach impacts the stereo task and enhances performance using attention distance, attention map, and Fourier Transform. §.§ Architecture Given left and right images I_l and I_r, each of which captures the same scene from different viewpoints, both are split into N non-overlapping patches, denoted as p_l = {p_l^1, ..., p_l^N} and p_r = {p_r^1, ..., p_r^N}. n= ⌊ rN ⌋ tokens are randomly masked only in the left image, where r∈ [0,1] is a selected masking ratio. Siamese encoders deal with visible tokens from a left image and whole tokens from a right image independently. The encoder consists of 12 blocks with a dimension of 768 for ViT-Base and 24 blocks with a dimension of 1024 for ViT-Large. The left image tokens from the encoder are padded with masked tokens, resulting in F_l with N tokens, which is the same number as the tokens from the right image feature F_r. The encoded left feature is then utilized by a decoder, conditional on the encoded right feature. The model constructs the query, key, and value in a self-attention block from the left token sequence in order to compute attention scores and identify relationships between tokens in the same sequence. In contrast, the model generates a cross-attention block by using the left token sequence to set up the query and the right token sequence to generate the key and value in order to find correspondences between the two images. It is composed of 12 blocks, each with a dimension of 768. 
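As a concrete illustration of the input preparation described above, the following sketch shows one way to draw a masking ratio from a truncated normal distribution and mask n = ⌊rN⌋ patch tokens of the left image only. The default bounds, mean, standard deviation, and rounding follow the values reported in the next subsection; replacing masked tokens by zeros (rather than a learnable mask token) and the tensor shapes are simplifications for illustration.

```python
import torch

def sample_mask_ratio(mean=0.25, std=1.0, low=0.0, high=0.5):
    """Draw a masking ratio from a truncated normal (by rejection) and round it."""
    while True:
        r = torch.randn(1).item() * std + mean
        if low <= r <= high:
            return round(r, 1)  # e.g. 0.32 -> 0.3

def mask_left_tokens(left_tokens, ratio):
    """Randomly mask n = floor(r * N) tokens of the left image only.

    left_tokens: (B, N, D) patch embeddings of the left view.
    Returns the tokens with masked positions zeroed out and the boolean mask.
    """
    B, N, _ = left_tokens.shape
    n = int(ratio * N)
    mask = torch.zeros(B, N, dtype=torch.bool)
    for b in range(B):
        mask[b, torch.randperm(N)[:n]] = True
    visible = left_tokens.masked_fill(mask.unsqueeze(-1), 0.0)
    return visible, mask
```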
For generating pixel-wise depth predictions, RefineNet-based fusion module is adapted to reshape and merge four features from different transformer decoder blocks. We utilize features from {2, 5, 8, 12}^th layers in the decoder. A linear head is used to get the reconstructed image output. §.§ Variable Ratio Masking Inspired from MIM pre-trained models, we leverage masked input to capture local and high-frequency patterns effectively. However, masking too much information can make it excessively difficult for the model to directly predict disparity maps, potentially hindering the model's ability to learn from raw RGB images. Conversely, when masking with a low ratio, there is no significant change in performance. To address this, we introduce a variable mask ratio, as shown in Algorithm <ref>, to ensure the model learns effectively across a range of information scales. We use random masking with mask size 16, similar to SimMIM or MAE <cit.> with variable ratio. The masking ratio is determined from a truncated normal distribution with specified upper and lower bounds, which is generated by the given mean and standard deviation. A new masking ratio is then randomly sampled from this distribution for each batch and it is rounded to one decimal place. For example, 0.32 is rounded to 0.3, resulting in 30% masking. We confirmed the effectiveness of variable ratio masking through experiments in various settings. By default, we use a truncated normal probability distribution with a lower bound of 0.0, an upper bound of 0.5, a mean of 0.25, and a standard deviation of 1.0. §.§ Losses §.§.§ Feature Consistency Loss To the output feature maps from each encoder, we introduce the consistency loss which aims to enhance the alignment between two corresponding features from a stereo pair. This makes the model improve matching performance at a feature level. This is achieved by warping the feature from the right image to the feature from the left image, using the ground truth disparity information. F̃_̃l̃ means reconstructed left feature from the right feature with disparity. L_consist=∑_i|F_l,i-F̃_̃l̃,̃ĩ| §.§.§ Disparity Loss The disparity loss is common and plain, but the most powerful matching loss which can be supervised by ground truth. We minimize negative log-likelihood with a Laplacian distribution to train the proposed model, following <cit.>: L_disp=∑_i[|D _i-D̅_̅i̅|/s_i-2log s_i] where D_i and D̅_̅i̅ are an estimated disparity and the ground truth disparity at pixel i, respectively. The scale parameter s_i is also outputted from a model. It can be understood as an uncertainty score for predictions. §.§.§ Image Reconstruction Loss The reconstruction loss evaluates reconstruction accuracy by the Mean Squared Error (MSE) only for masked patches p_l∖p̃_̃l̃ where p_l denotes a set of patches from the first image, p̃_̃l̃ is a set of visible patches from the first image, and p̂_̂l̂ is the reconstructed first image. Notably, the left image undergoes reconstruction through image completion, utilizing corresponding information from the right image. Consequently, this loss can be considered as a form of matching loss from an RGB perspective. L_recon=1/|p_l∖p̃_̃l̃|∑_p_l,i∈ p_l∖p̃_̃l̃p̂_̂l̂,̂î-p_l,i^2 §.§.§ Total Loss We supervise the model with a linear combination of three synergistic losses which are disparity loss from final output, reconstruction loss from reconstructed image, and consistency loss from feature map as L_total=λ*L_disp+L_recon+L_consist where λ is set to 3. 
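The three training terms can be sketched as follows, assuming (B, H, W)-shaped disparity maps, (B, N, D)-shaped patch tensors, and a boolean mask marking the masked left-image patches. The disparity term is written in the standard Laplacian negative log-likelihood form (with a positive log-scale term so that the loss is bounded below); all shapes and reductions are illustrative assumptions rather than the exact implementation.

```python
import torch

def disparity_loss(pred_disp, pred_scale, gt_disp):
    # Laplacian negative log-likelihood; pred_scale (s_i) acts as an uncertainty score.
    # Written with a +2*log(s) term, the standard form that admits a finite minimum.
    return (torch.abs(pred_disp - gt_disp) / pred_scale
            + 2.0 * torch.log(pred_scale)).mean()

def reconstruction_loss(pred_patches, target_patches, mask):
    # MSE restricted to the masked left-image patches (mask: True = masked).
    return ((pred_patches - target_patches) ** 2)[mask].mean()

def consistency_loss(left_feat, right_feat_warped):
    # L1 distance between the left feature and the right feature warped to the
    # left view using the ground-truth disparity.
    return torch.abs(left_feat - right_feat_warped).mean()

def total_loss(l_disp, l_recon, l_consist, lam=3.0):
    return lam * l_disp + l_recon + l_consist
```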
Since our objective is to attain improved performance on stereo depth estimation, the disparity loss takes on more weight than other losses. Moreover, to mitigate the potential learning of erroneous information during the early stages of training due to masked images, we calculate an uncertainty score using the reconstruction loss, allowing us to assess how effectively the model has adjusted to masked images. We used a fixed value τ =0.4 as a threshold for generalized reconstruction error ϕ (L_recon) = tanh(L_recon) to decide whether to impose the disparity loss or not. As estimated disparity from unsteady reconstructed feature makes the model unstable, if ϕ>τ, only L_recon + L_consist is used. §.§ Analysis Locality Inductive Bias: Fig. <ref> (a) illustrates the attention distances calculated from the attention scores obtained after passing the entire KITTI 2015 <cit.> test dataset through the cross attention module. Each point represents the average attention distance across 12 heads for each layer. Our method encourages the model to focus on these local patterns to reconstruct the masked parts using locality inductive bias via MIM. This harmonizes well with Transformers, which adept at learning global information. To further investigate the effect of the locality inductive bias in our method, we visualize the attention map in Figure. <ref> (b). It visualizes cross attention maps for an example query patch from an left image, divided into a set of 16×16 patches, in KITTI 2015 training dataset. Ideally, the attention score should be highest at the location of the corresponding patch in the left image, as identified by the ground truth disparity. Our approach, which tends to focus more on local information, helps prevent incorrect attention values, when compared with the plain method that has a higher risk of occasionally concentrating attention on patches located far away, as shown in the right part of the example map. Fourier Analysis: In Fig. <ref> (a), we additionally conducted Fourier analysis on feature maps of blocks in the decoder of our model in Fig. <ref>, following <cit.>. This experiment conducted on ETH3D dataset <cit.> reveals that our approach tends to generate feature maps with a higher proportion of high-frequency components and indicates that the model is capturing detailed and sharp features, which can be advantageous for tasks like stereo depth estimation. High frequency components of the decoder features used in the depth estimation head enable for more precise and detailed representations of object edges and fine structures in the disparity map. As illustrated in Fig. <ref> (b), by effectively utilizing high-frequency information, the proposed method achieved sharper boundaries on the disparity maps. Attention Map by Varying Losses: To demonstrate the synergy when the losses are used together, Fig. <ref> visualizes the attention maps when the model is separately trained using each loss. Masking was applied only when the model was trained with the reconstruction loss or the total loss. We averaged the attention scores from every head and self-attention layer in the left encoder for self-attention visualization, and applied the same approach using cross-attention layer in the decoder to cross-attention visualization. In the self-attention map, the score should be highest at the location of the query patch, whereas the score should peak at the location of the corresponding patch in the cross-attention map. As shown in Fig. 
<ref> (b), when trained with each loss individually, incorrect attention values appear at various locations in each case. Even the disparity loss case, which is supervised with ground truth, has its limitations. However, when these losses are combined (L_total), they can correct each other's errors and emphasize common attention patterns, working synergistically to guide the model towards more accurate attention placement. This mutual reinforcement allows the model to learn more effectively, facilitating improved stereo matching performance, as also validated in the ablation study of Table <ref>. § EXPERIMENTS §.§.§ Implementation Details We train our UniTT-Stereo for 32 epochs using batches of 6 pairs initializing the encoder and decoder from the pre-trained weights by CroCov2 <cit.>. For optimization, we employ the AdamW optimizer <cit.> with a weight decay of 0.05. The learning rate of 3×10^-5 follows a cosine schedule with a single warm-up epoch. We utilize SceneFlow <cit.>, CREStereo <cit.>, ETH3D <cit.>, Booster <cit.>, Middlebury (2005, 2006, 2014, 2021 and v3) <cit.> with crop size of 704 × 352 to train UniTT-Stereo. Afterward, we trained our model on KITTI 2012 <cit.> and KITTI 2015 <cit.> with crop size of 1216 × 352 for 100 epochs using effective batches of 6 pairs. We use a learning rate of 3×10^-5. For inference, we use a tiling-based strategy in which we sample overlapping tiles with the same size as the training crops, following <cit.>. §.§ Stereo Depth Estimation Performance We evaluate UniTT-Stereo on representative stereo datasets with their metrics and compare with the published state-of-the-art methods. §.§.§ ETH3D UniTT-Stereo sets a new state-of-the-art on ETH3D. Table <ref> compares UniTT-Stereo with HITNet <cit.>, RAFT-Stereo <cit.>, GMStereo <cit.>, IGEV-Stereo <cit.>, CREStereo <cit.>, and CroCo-Stereo <cit.>. §.§.§ KITTI 2012 & 2015 We also achieve state-of-the-art performance compared to other published methods, aside from concurrent work, on both KITTI 2012 and 2015. Table <ref> compares UniTT-Stereo with HITNet <cit.>, PCWNet <cit.>, ACVNet <cit.>, LEAStereo <cit.>, CREStereo <cit.>, IGEV-Stereo <cit.>, and CroCo-Stereo <cit.>. The qualitative comparison is shown in Fig. <ref>. §.§.§ Middlebury We also conducted performance evaluations on the Middlebury evaluation dataset. Table <ref> compares UniTT-Stereo with LeaStereo <cit.>, HITNet <cit.>, RAFT-Stereo <cit.>, CREStereo <cit.>, GMStereo <cit.>, and CroCo-Stereo <cit.>. As shown in Fig. <ref>, our method delivered high performance on data that requires precise estimation. §.§.§ Limitations While our method achieved comparable results to other methods in Middlebury, there was a limitation in handling certain large maximum disparities leading to bad performance on several sequences. This is likely due to the constraints of tiling-based inference, which can restrict the ability to capture long-range correspondences. To address this, it may be necessary to use larger tile sizes or increase the overlap ratio. §.§ Zero-shot Generalization Generalizing from synthetic to real data is crucial due to the challenge of gathering real-world datasets. The result suggests that our approach help the model learn invariant features across different domains. Table <ref> compares UniTT-Stereo with GANet <cit.>, RAFT-Stereo <cit.>, and DSMNet <cit.>. §.§ Ablation Study §.§.§ Key Components We conducted an ablation study on the effectiveness of each key component, including masking and three loss functions. 
As listed in Table <ref>, introducing each additional loss function led to performance improvement in a sequential order. Optimal performance was achieved when employing every key component. §.§.§ Masking Ratio We also evaluate the effectiveness of the variable ratio masking. As shown in Fig. <ref>, high ratio fixed masking and variable ratio masking effectively amplifies the high frequency information. But interestingly, the experimental results suggest that a high ratio may hinder learning, as performance actually deteriorated, while the use of variable ratio masking resulted in a significant improvement. The low ratio did not lead to any dramatic changes in performance. Table <ref> shows that masking with a modest level of r_max proved beneficial for performance by imparting inductive bias without compromising depth information. § CONCLUSION We proposed UniTT-Stereo to maximize the strengths of Transformer-based architecture, which have traditionally lagged behind in stereo matching task. We enhance performance in a simple yet effective manner by employing reconstruction-and-prediction strategy and a combination of losses specifically designed to learn stereo information. Our approach achieves state-of-the-art performance on prominent stereo datasets and demonstrates strong zero-shot generalization capabilities. Throughout this process, we have analyzed the specific advantages our approach brings to stereo depth estimation.
http://arxiv.org/abs/2409.03573v1
20240905142645
Indication of rapid magnetic field decay in X-ray Dim Isolated Neutron Star RX J0720.4-3125
[ "Andrei P. Igoshev", "Sergei B. Popov" ]
astro-ph.HE
[ "astro-ph.HE" ]
§ ABSTRACT The magnetic field evolution of neutron stars is a subject of long-standing debate. The rate of magnetic field decay for isolated, non-accreting neutron stars can be quantified by measuring the negative second derivative of the spin period. Alternatively, this rate can be estimated by observing an excess of thermal emission with respect to the standard cooling without additional heating mechanisms involved. One of the nearby cooling isolated neutron stars – RX J0720.4-3125 – offers a unique opportunity to probe the field decay, as for this source there are independent measurements of the surface X-ray luminosity, the second spin period derivative, and the magnetic field. We demonstrate that the evolution rate of the spin period derivative is in correspondence with the rate of dissipation of magnetic energy of the dipolar field if a significant part of the released energy is emitted in X-rays. The instantaneous time scale for the magnetic field decay is ∼ 10^4 years. stars: neutron – magnetic fields – X-rays: individual: RX J0720.4-3125 § INTRODUCTION X-ray Dim Isolated Neutron Stars (also known as the Magnificent Seven, hereafter M7) are a group of nearby thermally emitting neutron stars (NSs) observed in X-rays and optics with no detected radio emission (for recent reviews see ). These objects do not demonstrate transient behaviour. Their thermal X-ray spectra show no robust evidence of a power-law tail. These features set M7 apart from Galactic magnetars, which have similar spin periods, and severely limit the possible role of the magnetosphere in producing X-ray emission and timing irregularities in the case of M7. These sources have strong dipolar magnetic fields with typical values ≳ 10^13 G. Surface temperatures of M7 are higher than expected based on their ages without additional heating (e.g., ). Their surface thermal luminosities are also higher than their spin-down luminosities. Thus, it is often assumed that they are heated due to magnetic field decay and represent descendants of magnetars (e.g. ). Recently, <cit.> provided the first measurement of the second derivative of the spin period, P̈=-4.1× 10^-25 s s^-2, for one of the M7 sources – RX J0720.4-3125 (RX0720 hereafter). This allows the authors to derive the braking index n ≡ 2 - P̈ P / (Ṗ)^2 ≈ 680. This is much larger than the value n=3 expected for a magnetic dipole decelerating in vacuum. <cit.> propose that this large value is due to irregularities of the spin behaviour. Here we study an alternative explanation: that this large braking index is due to a rapid magnetic field decay in this NS, similar to our earlier analysis of large braking indices of isolated radio pulsars <cit.>. This letter aims to show, based on independent observational evidence, that the external dipolar magnetic field of RX0720 presently decays on a time scale ≈ 10^4 years. Following the same logic, we predict that the remaining M7 objects have braking indices of around a few hundred. § TIMESCALES OF MAGNETIC FIELD DECAY In this section, we first provide an estimate for the instantaneous magnetic field decay based on the measured braking index.
Second, we estimate the required magnetic field decay timescale to support thermal X-ray emission via the Ohmic heating of the crust. Then, we compare these scales. §.§ Decay as evidenced by the braking index For our purposes, the magneto-rotational evolution of NSs can be described following a simplified version of the magneto-dipole equation, see e.g., <cit.>: Iωω̇∝ B^2 ω^4. Here ω=2π/P=2πν is the cyclic frequency, B is the surface magnetic field of a NS, and I – its moment of inertia. It is convenient to introduce the braking index as a characteristic of spin evolution. It is defined as: n=νν̈/ν̇^2. For the spin-down with a constant magnetic field and other parameters of the NS, from eqs. (1,2) one obtains n=3. A decaying magnetic field results in n>3. If the magnetic field has a non-zero first derivative but other parameters (the moment of inertia and magnetic inclination) are constant, after a simple algebra, one obtains: ν̇^2/ν^4(n-3)∝ - BḂ. Large positive braking index means negative magnetic field derivative thus translating into decay of magnetic field. In the case of RX0720 we are in the limit of n≫ 3, thus nν̇^2/ν^4∝ - BḂ. In order to get an estimate for magnetic field decay timescale, we assume that the field decays exponentially: B=B_0 exp(-t/τ), where B_0 is the initial field, and τ is some characteristic time scale of decay. Note, that τ by itself may depend on B or other parameters, but we are interested in an instantaneous value of τ. Then we derive: Ḃ= - B/τ. With eqs.(<ref>, <ref>) we have: τ = - 2 ν/nν̇≈ 10^4 n_1000ν_-1ν̇_-15^-1 yrs. Here and below we use the convention A_x=A/10^x. §.§ Ohmic heating and X-ray radiation Now let us assume that all the magnetic energy released due to decay is via the Ohmic heating of the crust and is consequently emitted in X-rays to produce the luminosity L_X. This is a simplification as some heat can be transported inwards to the core and then emitted by neutrinos (see e.g. ). For relatively low energy release, expected for M7 sources, the fraction of the energy emitted from the surface is relatively high if magnetic energy is released not very deep in the crust and direct URCA processes are not activated in the core. The existence of superfluidity in the crust helps to increase the surface temperature for the given energy release and depth <cit.>. In addition, there might be some luminosity contribution from the remaining heat emitted from the surface. The magnetic energy can be roughly estimated as: E_mag=(B^2/8π) (4/3π R^3 ) = B^2R^3/6. Here R is the NS radius. For our estimates, we assume R_6=R/10^6 cm = 1. If we assume that the magnetic field is confined in the crust with volume of V≈ 4 π h R^2 and thickness h ∼ 0.3 km then the estimate of the total magnetic energy is reduced by an order of magnitude. On the other hand, we neglect here contributions from non-dipolar (and non-poloidal) field components. Note, that for RX0720 there is evidence for a strong non-dipolar external magnetic field <cit.>. Then the energy release is: Ė_mag=BḂ R^3/3. Let us assume that Ė_mag≡ L_X and similarly use eq. (<ref>): L_X = B^2 R^3/3τ_B which results in time scale estimate: τ_B=B^2R^3/3L_X≈ 10^5 B_13^2 L_X31^-1 yrs. It is worth noting that the magnetic field in the case of M7 objects can be estimated by two different methods. First, based on spin period and period derivative. Second, from the proton cyclotron line properties. Both estimates are in good correspondence with each other (e.g., ). 
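The two timescales can be compared directly. The sketch below simply evaluates the braking-index estimate and the Ohmic-heating estimate above in CGS units; the fiducial numbers are the normalizations used in the scaling relations, not new measurements.

```python
YEAR = 3.156e7  # seconds per year

def tau_braking(nu, nu_dot, n):
    """Instantaneous decay timescale from the braking index: tau = -2*nu / (n*nu_dot)."""
    return -2.0 * nu / (n * nu_dot) / YEAR

def tau_ohmic(B, L_x, R=1.0e6):
    """Timescale needed to power L_x by dissipating the dipole energy: tau_B = B^2 R^3 / (3 L_x)."""
    return B ** 2 * R ** 3 / (3.0 * L_x) / YEAR

# Fiducial normalizations of the scaling relations above:
print(tau_braking(nu=0.1, nu_dot=-1.0e-15, n=1000))  # ~6e3 yr, i.e. of order 10^4 yr
print(tau_ohmic(B=1.0e13, L_x=1.0e31))               # ~1e5 yr
```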
For our estimates below we use the field values derived from the spin-down rate. § THE CASE OF RX J0720.4-3125 Measurement of the second derivative of the spin period for an NS which is supposed to be powered by magnetic field decay and for which there is an independent measurement of the magnetic field via spectral data, provides a unique opportunity to probe if its spin evolution and thermal surface emission are in correspondence with each other. RX0720 has B_13=2.5 and ν̇_-15 = -1.02. The value of the magnetic field is derived from the magneto-dipole formula and is in good correspondence with the value B_13≈ 5 obtained from the spectral feature <cit.> interpreted as caused by cyclotron resonance scattering of protons by magnetic field. For RX0720 from eq. (<ref>) we obtain τ_0720=-(2 ν)/(n ν̇)=1.1× 10^4 yrs. If we use this value to calculate the expected luminosity via eq. (<ref>) then we obtain L_X≈ 6× 10^32 erg s^-1 or L_X≈ 0.6× 10^32 erg s^-1 if magnetic field is present in the crust only. Surprisingly, this is very close to the luminosity of RX0720 which is equal to ∼ (1-3)× 10^32 erg s^-1 <cit.>. This means that the time scale derived from the spin period evolution is in good correspondence with the scale τ_B from eq. (<ref>). If we account for the fact that in this estimate we use the total volume of the NS and part of the energy can be emitted by neutrinos, then the correspondence is still good as the observed luminosity is smaller than the one derived from eq. (<ref>) assuming τ_0720=τ_B. § DISCUSSION The time scale derived for RX0720 is much shorter than the kinematic age of RX0720 found to be ∼ 0.4 - 0.5 Myr <cit.>. Also, this scale is not in correspondence with the Hall one τ_Hall∼(10^3-10^4) B_15^-1 yrs <cit.> if we substitute the external dipolar field values derived for the M7 objects. Numerical simulations of the magneto-thermal evolution of NSs predict that the M7 sources have large braking indices (see their nearly vertical paths in P – Ṗ diagram Fig. 10 by ). Reading numerical values for Ṗ and ages for track with initial magnetic field B = 3× 10^14 G from that plot we estimate the P̈≈ 4× 10^-26 s s^-2 which translates to braking index of n≈ 100. To explain values n∼ 1000 it seems to be necessary to assume an episode of a faster magnetic energy dissipation, maybe due to some instability. §.§ Predictions for other M7 sources Recently, new timing data was obtained for another of the M7 objects — RX J0806.4-4123 <cit.>. In this case, we can obtain τ_B≈ 10^5 yrs. We can use it to predict the values of n and ν̈ assuming τ=τ_B. We obtain n_0806≈ 1000 and ν̈_0806≈ 6× 10^-29 Hz s^-2. A similar procedure can be done for other M7 sources using recent data from <cit.>: n=6ν L_X/B^2 R^3ν̇= 60 ν_-1 L_X 31 B_13^-2 R_6^-3ν̇_-15^-1. Thus, we predict that for the rest of the M7 sources braking indices are about a few hundreds. Similarly, we can predict values of ν̈: ν̈= -6L_Xν̇/B^2R^3= 6× 10^-28 L_X31ν̇_-15 B_13^-2 Hz s^-2. Results are presented in Table 1. §.§ Excluding the fallback disk model <cit.> proposed that the M7 objects are neutron stars surrounded by a fallback disk. In particular, these authors provide a model describing spin period, period derivative, and associated X-ray luminosities. The measurement of the second spin period derivative provides an opportunity to check the model by Ertan et al. In Fig. 3 of their paper, <cit.> presented plots for the evolution of spin period and period derivative. Unfortunately, their code is not publicly available. 
Thus, we cannot obtain the exact numbers. Instead, we extract the numerical values from their Fig. 3 and model time evolution of Ṗ assuming that it approximately follows a power-law at the moment when their model can reproduce the RX0720 timing properties: Ṗ = C t^α. Here the time is measured in seconds and we estimate α = -3.8 and C = 8× 10^34 to be compatible with Fig. 3 of <cit.>. For this evolution of the period derivative we obtain: P̈ = α C t^α - 1. Similarly we can compute the braking index as: n = 2 - P̈ P/(Ṗ)^2. Substituting numerical values suggested by <cit.> for the age of RX0720 which is taken to be equal to 1.45× 10^5 years (2-3 times smaller than the kinematic age mentioned above), we obtain n≈ 80. Although it has a correct sign, the braking index estimate is nearly an order of magnitude below the actual value presented by <cit.>. Finally, we note that the values of the magnetic field proposed for the M7 objects by <cit.> are incompatible with the values estimated using the spectral features following the assumption of proton cyclotron lines. §.§ Possible physical mechanisms for delayed fast decay of magnetic field If we accept the evidence presented in the previous sections, it is worth discussing possible physical mechanisms that could cause magnetic field evolution on such an extraordinarily short time scale of 10^4 years after hundreds of kyr of slower evolution. Known estimates for the magnetic field decay due to the crust resistivity vary between 0.1-1 Myr for pasta layer in magnetars <cit.> and ≳ 10 Myr <cit.> for normal radio pulsars with some indications of even ≈ 30 Myr scale <cit.>. The Hall time scale for the crust-confined magnetic field is τ_Hall∼ (10^5-10^6) yrs B_13^-1 for magnetic field comparable to dipole estimates. These mechanisms for magnetic field evolution in the crust do not allow for an episode of delayed field decay. Therefore, we must look elsewhere to identify a source of this evolution. The NS core has long been suggested as a potential site of delayed magnetic field evolution. For example, it is long known that many magnetic field configurations are not stable in the core, see e.g. <cit.> and references therein. It was shown that a purely poloidal magnetic field is unstable under the influence of single fluid ambipolar diffusion <cit.>. This configuration induces new electric currents in the crust and could thus release thermal energy on time scales comparable to the ambipolar diffusion timescale. This time scale is sensitive to temperature and magnetic field strength. The ambipolar diffusion requires time for NS to cool down and instabilities to grow which might explain the late onset of the decay. The decay due to the ambipolar diffusion was already briefly discussed exactly for RX0720 by <cit.>. However, these authors considered a different time scale of evolution with just a mild decay. An alternative could be a fast evolution due to the core transition to superconductor/superfluid state <cit.>. A very recent research by <cit.> suggests that the internal crustal magnetic fields could be amplified by orders of magnitude during the flux expulsion which should inevitably lead to enhanced thermal emission and accelerated evolution. Superconductor transition is sensitive to the temperature which can explain the late onset. However, detailed studies of the magnetic field evolution in the NS core are still in their early phase with few robust results. 
Therefore, we cannot conclusively prove that the core evolution is responsible for the enhanced X-ray luminosity of M7. § CONCLUSIONS In the case of one of the Magnificent Seven objects, RX J0720.4-3125, we demonstrate a peculiar coincidence between the time scales of magnetic field dissipation obtained from the braking index and from the X-ray luminosity. In both derivations, we assume that the field decay is the main process responsible for the anomalous spin evolution and the observed surface emission. We suggest that the observed thermal X-ray luminosity can be an indicator for the dipole field evolution. Thus, we make predictions for the braking indices and ν̈ of other M7 sources. Braking indices are expected to be ∼ a few hundred and ν̈∼ 10^-28-10^-27 Hz s^-2. § DATA AVAILABILITY No new data are generated in the article besides the numbers presented in Table 1. § ACKNOWLEDGEMENTS The authors thank the anonymous referee for their comments, which helped to improve the manuscript. The work of A.I. was supported by STFC grant no. ST/W000873/1. S.P. thanks the Simons Foundation for the opportunity to work at ICTP.
http://arxiv.org/abs/2409.02239v2
20240903191115
Temporal Order Preserved Optimal Transport-based Cross-modal Knowledge Transfer Learning for ASR
[ "Xugang Lu", "Peng Shen", "Yu Tsao", "Hisashi Kawai" ]
cs.SD
[ "cs.SD", "cs.AI", "cs.CL", "eess.AS" ]
§ ABSTRACT Transferring linguistic knowledge from a pretrained language model (PLM) to an acoustic model has been shown to greatly improve the performance of automatic speech recognition (ASR). However, due to the heterogeneous feature distributions across modalities, designing an effective model for feature alignment and knowledge transfer between linguistic and acoustic sequences remains a challenging task. Optimal transport (OT), which efficiently measures probability distribution discrepancies, holds great potential for aligning and transferring knowledge between acoustic and linguistic modalities. Nonetheless, the original OT treats acoustic and linguistic feature sequences as two unordered sets in alignment and neglects temporal order information during OT coupling estimation. Consequently, a time-consuming pretraining stage is required to learn a good alignment between the acoustic and linguistic representations. In this paper, we propose a Temporal Order Preserved OT (TOT)-based Cross-modal Alignment and Knowledge Transfer (CAKT) method (TOT-CAKT) for ASR. In the TOT-CAKT, local neighboring frames of acoustic sequences are smoothly mapped to neighboring regions of linguistic sequences, preserving their temporal order relationship in feature alignment and matching. With the TOT-CAKT model framework, we conduct Mandarin ASR experiments with a pretrained Chinese PLM for linguistic knowledge transfer. Our results demonstrate that the proposed TOT-CAKT significantly improves ASR performance compared to several state-of-the-art models employing linguistic knowledge transfer, and addresses the weaknesses of the original OT-based method in sequential feature alignment for ASR. Optimal transport, Cross-modal knowledge transfer, automatic speech recognition § INTRODUCTION The combination of a pretrained language model (PLM) with an end-to-end (E2E)-based acoustic model for automatic speech recognition (ASR) has made significant progress in recent years <cit.>. The advantage of incorporating a PLM in ASR lies in the availability of large unpaired text corpora for training the PLM. Moreover, the linguistic knowledge encoded in the PLM can be utilized in ASR decoding. In most studies, the PLM is employed as an external language model (LM) for post-processing tasks such as beam search or rescoring in ASR <cit.>. However, using an external LM for post-processing compromises the speed and sometimes the parallel decoding capability of ASR. Addressing how to transfer linguistic knowledge to acoustic encoding during model training, and subsequently conducting speech recognition without relying on any external LM after training, is an intriguing research topic. In this study, our focus is on transferring linguistic knowledge from a PLM to a connectionist temporal classification (CTC)-based ASR <cit.>.
While there are several advanced end-to-end (E2E)-based ASR approaches that incorporate linguistic knowledge in acoustic model learning <cit.>, using a PLM, such as bidirectional encoder representation from transformers (BERT) <cit.>), facilitates linguistic knowledge transfer in ASR <cit.>, This knowledge transfer can also occur with a pretrained acoustic encoder, such as wav2vec2 <cit.>, for both linguistic and acoustic knowledge transfer <cit.>. However, due to the heterogeneous feature distributions in acoustic and linguistic spaces, it remains a challenging task to efficiently align feature representations between linguistic and acoustic modalities to facilitate knowledge transfer. In most studies, a cross-attention module is designed to integrate acoustic and text representations within a transformer decoder framework for combining acoustic and linguistic knowledge in ASR <cit.>. Yet, in the decoding stage, true text representations are unavailable, leading to the adoption of predicted text representations in decoding. This mismatch between training and testing phases weakens the benefits of linguistic information in ASR. For efficient alignment and matching, an effective distance metric is needed to measure the difference between acoustic and linguistic feature representations. Considering this requirement, optimal transport (OT) emerges as a suitable tool for cross-modal alignment and linguistic knowledge transfer. OT, originally proposed for optimal allocating resources and later as a measure of discrepancies between probability distributions <cit.>, has found widespread applications in machine learning, particularly in domain adaptation <cit.>. In the field of speech, it has been employed for cross-domain spoken language recognition and speech enhancement <cit.>, as well as in speech translation and understanding <cit.>. The OT has been initially proposed for linguistic knowledge transfer learning for ASR in <cit.>. While OT is applicable for cross-domain alignment, its use in speech encounters a limitation. The original OT treats acoustic and linguistic feature sequences as two unordered sets in alignment and neglects temporal order information during OT coupling estimation. While acoustic and text speech exhibit a strong temporal order structure, requiring preservation of their temporal order relationship during alignment between an acoustic sequence and a linguistic sequence. Therefore, in <cit.>, a well pretrained acoustic model was applied in order to efficiently explore matched acoustic features to those linguistic features during cross-modal learning. And the performance was strongly depended on the goodness of the pretrained acoustic model. In this paper, we propose a Temporal Order Preserved OT (TOT)-based Cross-modal Alignment and Knowledge Transfer (CAKT) model (TOT-CAKT) for CTC-based ASR. With the TOT-CAKT, the temporal order relationship is explicitly maintained during feature alignment and matching. It is hypothesized that through this alignment and matching, linguistic knowledge can be efficiently transferred to acoustic encoding, thereby enhancing ASR performance. The rest of this paper is organized as follows: the proposed method is introduced in Section <ref>, where a cross-modal alignment module based on TOT and a neural adapter module for efficient linguistic feature transfer are designed. 
In Section <ref>, we conduct experiments to evaluate TOT-CAKT, comparing the results with several knowledge transfer learning algorithms for ASR, and provide a visualization of the learned transport coupling in OT. Finally, the conclusion is presented in Section <ref>. § PROPOSED METHOD The model framework of the proposed TOT-CAKT method is illustrated in Fig. <ref>. This model framework is modified based on a conformer-CTC-based ASR model, incorporating two key modifications. First, an `Adapter' module is added as shown in the gray blocks in Fig. <ref>. Second, an temporal order preserved OT-based cross-modal matching module is introduced in the right branch of Fig. <ref>. Both the acoustic features extracted from the conformer encoder and linguistic features derived from a PLM (with BERT utilized in this paper) are involved in the cross-modal matching process. Further details are provided in the following sections. §.§ Acoustic and linguistic feature representations The acoustic feature is extracted from the acoustic encoder where a conformer-based encoder <cit.>) is adopted. The process in the `Subsampling' module involves a two-layer convolution process with a downsampling operation (a downsampling rate of 4 was used in this paper). By incorporating a positional encoding from PE_ A, the initial input to conformer blocks is obtained as H_0. The output of the conformer encoder is represented as an acoustic representation H_ ca. [ H_0 = Subsampling( X) + PE_ A; H_ ca = Conformer( H_0 ) ∈ R^l_a × d_a ,; ] where l_a and d_a are temporal length and dimension of the acoustic feature vectors, respectively. Before engaging in cross-modal feature alignment, a linear projection termed `FC2' in the `Adapter' module is utilized to perform a feature dimension matching transform: H_ A = FC_ 2( H_ ca) ∈ R^l_a × d_t In this equation, d_t corresponds to linguistic feature dimension. In the right branch of Fig. <ref>, the context-dependent linguistic feature representation is explored from a pretrained BERT model. The process is formulated as: [ y_ token = BERTTokenizer( y); Z_0 = [ CLS, y_ token , SEP]; Z_i = BERT_i ( Z_i - 1),; ] where `BERT_i' is the i-th transformer encoder layer of BERT model, i takes values from 1 to L, with L representing the total number of BERT encoder layers. `BERTTokenizer' is a process to convert standard text to word piece based tokens <cit.>. Token symbols `CLS' and `SEP' represent the start and end of an input sequence. Z_L ∈ℝ^l_t × d_t is the final text representation which encodes context dependent linguistic information, l_t denotes the sequence length, and d_t represents feature dimension of text encoding representation. §.§ Sinkhorn algorithm for cross-modal alignment The original OT was formulated to transform from one probability distribution to another with minimum transport cost <cit.>. In this study, we applied OT for feature alignment on two sets. Given acoustic and linguistic feature sequences H_A and Z_L respectively, as: [ H_A = [ h_1 , h_2 ,..., h_i ,..., h_l_a ]; Z_L = [ z_1 , z_2 ,..., z_j ,..., z_l_t ],; ] where l_a and l_t are lengths of the two sequences. Suppose the two sequences in Eq. (<ref>) are sampled from two probability distributions with weight vectors a = [ a_1 ,a_2 ,...,a_i ,...,a_l_a ] and b = [ b_1 ,b_2 ,...,b_j ,...,b_l_t ]. (a_i = 1 / . -l_a ,b_j = 1 / . -l_t as uniform distributions if no prior information is available). 
The OT distance between the two sequences is defined as: L_ OT = ^Δmin_γ∈∏( H_A , Z_L )⟨γ , C⟩, where γ is a transport coupling set defined as: ∏( H_A , Z_L ) = ^Δ{γ∈ R_ + ^l_a × l_t | γ 1_l_t = a,γ ^T 1_l_a = b.} In Eq. (<ref>), 1_l_a and 1_l_t are vectors of ones with dimensions l_a and l_t, respectively. In Eq. (<ref>), C is a distance matrix (or ground metric) with element c_i,j defined as the pair-wise cosine distance: c_i,j = C( h_i , z_j ) = ^Δ 1 - cos( h_i , z_j ) A fast estimation of OT has been introduced through the celebrated entropy-regularized OT (EOT) <cit.>, where the EOT loss is defined as: L_ EOT( H_A , Z_L ) = ^Δmin_γ∈∏( H_A , Z_L )⟨γ , C⟩ - α _1 H( γ), where α _1 is a regularization coefficient, and H( γ) is the entropy of the coupling matrix defined as: H( γ) = - ∑_i,jγ _i,jlogγ _i,j . The solution of Eq. (<ref>) can be implemented with the Sinkhorn algorithm as <cit.>: γ _α _1 = diag( u_1 )* G*diag( u_2 ) where G = exp( - C/α _1 ), and u_1 and u_2 are two scaling (or re-normalization) vectors. §.§ Temporal order preserved OT In the original estimation of OT in Eq. (<ref>), the two sequences in Eq. (<ref>) are treated as two sets without considering their temporal order relationship. In speech, temporal order information is crucial in OT coupling during cross-modal alignment, meaning that neighboring frames in an acoustic sequence should be progressively coupled with the neighboring tokens in a linguistic sequence. Therefore, as shown in Fig. <ref>, the temporal order information is input to the OT matching block. For the sake of clarity, the two sequences in Eq. (<ref>) can be further represented with temporal order information as: [ H_A = [ ( h_1 ,1),( h_2 ,2),...,( h_i ,i),...,( h_l_a ,l_a )]; Z_L = [ ( z_1 ,1),( z_2 ,2),...,( z_j ,j),...,( z_l_t ,l_t )]; ] During the alignment of the two sequences for knowledge transfer, it is crucial to consider that elements with large cross-temporal distances are unlikely to be coupled. In other words, the coupling pairs with high probabilities between the two sequences should be distributed along the diagonal line of the temporally coherent positions. Based on this consideration, the temporal coupling prior can be defined as a two-dimensional Gaussian distribution <cit.>. The fundamental concept is that the coupled pairs should not deviate significantly from the diagonal line of temporally coherent positions between the two sequences, which can be defined as: p_i,j = ^Δ1/σ√(2π)exp( - d_i,j^2 /2σ ^2 ), where σ is a variance parameter controlling the impact of the cross-temporal distance d_i,j as defined in Eq. (<ref>): d_i,j = | i/l_a - j/l_t |/√(1/l_a^2 + 1/l_t^2) In Eq. (<ref>), the cross-temporal distance is defined on the normalized sequence lengths in the acoustic and linguistic spaces. In this definition, it is evident that the farther a paired position lies from the temporal diagonal line, the lower the possibility of its correspondence in transport coupling. By incorporating this temporal coherence prior as regularization, the new OT is defined as: L_TOT ( H_A , Z_L ) = ^Δmin_γ∈∏( H_A , Z_L ) < γ , C > - α _1 H(γ ) + α _2 KL(γ ||P), where α _1 and α _2 are two trade-off parameters. In Eq. (<ref>), KL(γ||P) is the Kullback-Leibler (KL) divergence between the transport coupling matrix γ and the temporal prior correspondence matrix P with elements defined in Eq. (<ref>).
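Before the simplification given below, the effect of folding the Gaussian temporal prior into the Sinkhorn iterations can be illustrated with a small sketch. Up to constants, taking the logarithm of p_i,j turns the prior into an additive penalty proportional to d_i,j^2 on the ground cost; the uniform marginals, the single regularization weight, and the iteration count are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def temporal_prior_cost(C, beta=0.5):
    """Add the squared normalized cross-temporal distance to the cosine ground cost."""
    la, lt = C.shape
    i = (np.arange(la)[:, None] + 1) / la
    j = (np.arange(lt)[None, :] + 1) / lt
    d = np.abs(i - j) / np.sqrt(1.0 / la ** 2 + 1.0 / lt ** 2)
    return C + beta * d ** 2

def sinkhorn(C, alpha=0.1, n_iters=50):
    """Entropy-regularized OT coupling via Sinkhorn scaling with uniform marginals."""
    la, lt = C.shape
    a, b = np.full(la, 1.0 / la), np.full(lt, 1.0 / lt)
    G = np.exp(-C / alpha)
    u = np.ones(la)
    for _ in range(n_iters):
        v = b / (G.T @ u)
        u = a / (G @ v)
    return u[:, None] * G * v[None, :]

# gamma = sinkhorn(temporal_prior_cost(cosine_cost_matrix), alpha=0.1)
```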
Building upon the definitions of KL-divergence and entropy, Eq. (<ref>) can be further expressed to: L_TOT ( H_A , Z_L ) = ^Δmin_γ∈∏( H_A , Z_L ) < γ ,C̃ > - α̃H(γ ), where α̃= α _1 + α _2, and combined ground cost matrix as C̃ = C - α _2 log P Following the procedures outlined in <cit.>, the solution of Eq. (<ref>) is obtained using the Sinkhorn algorithm as: γ _α̃ = diag( u_1 )*G̃*diag( u_2 ), where G̃ = exp( - C̃/α̃). Substituting variables in Eq. (<ref>) to G̃, we can obtain: G̃ = P^α _2/α_1 + α_2 exp ( - C/α_1 + α_2 ) From this equation, we can see that the transport coupling between the two sequences is further constrained by their temporal order correspondence. TOT involves several hyper-parameters that can be challenging to control. For the sake of simplification, we consolidate their effects into a reduced number of hyper-parameters. For example, considering Eq. (<ref>), the impact of variation σ in Eq. (<ref>) and α_2 in Eq. (<ref>) can be combined into a single control parameter β, defined as: C̃ = C + β d_i,j^2 And the Sinkhorn algorithm is applied on the cost function matrix C̃ for OT in real implementations. §.§ Loss function The proposed TOT-CAKT involves two loss functions: the cross-modal alignment and matching loss (in the right branch of Fig. <ref>) and the CTC loss (in the left branch of Fig. <ref>). In cross-modal alignment, the acoustic feature can be projected onto the linguistic space using OT as: [ Z̃_L = ^Δ OT( H_A→ Z_L) = γ ^* × H_A∈ R^l_t × d_t , ] where γ ^* is the optimal transport coupling based on OT. Subsequently, the alignment loss is defined as: L_ align = ∑_j = 2^l_t - 11 - cos( z̃_L^j , z_L^j ), where z̃_L^j and z_L^j are row vectors of feature matrices Z̃_L and Z_L (matching on temporal dimensions), respectively. In Eq. (<ref>), the usage of indices from 2 to l_t -1 is for handling special symbols `CLS' and `SEP'. For efficient linguistic knowledge transfer to acoustic encoding, the following transforms are designed as indicated in Fig. <ref>: [ Ĥ_ ca = FC_ 3( LN( H_A )) ∈ R^l_a × d_a; H^a,t = H_ ca + s · LN(Ĥ_ ca ),; ] where s is a scaling parameter to adjust the importance of transferring linguistic projected feature. Based on this new representation H^a,t which is intended to encode both acoustic and linguistic information, the final probability prediction for ASR is formulated as: P̃ = Softmax( FC1( H^a,t)), where `FC1' is a linear full-connected transform. The total loss in model learning is defined as: L = ^Δλ .L_ CTC (P̃, y_ token ) + (1 - λ ).w.(L_ align + L_ TOT ), where L_ CTC (P̃, y_ token ) is CTC loss, L_ align and L_ TOT are cross-modality alignment loss and TOT loss, respectively. After the model is trained, only the left branch of Fig. <ref> is retained for ASR inference. § EXPERIMENTS ASR experiments were conducted on the open-source Mandarin speech corpus AISHELL-1 <cit.> to evaluate the proposed algorithm. The data corpus comprises three datasets: a training set with 340 speakers (150 hours), a development (or validation) set with 40 speakers (10 hours), and a test set with 20 speakers (5 hours). Data augmentation as used in <cit.> was applied. Given the tonal nature of the Mandarin language in the ASR task, in addition to using 80-dimensional log Mel-filter bank features, three extra acoustic features related to fundamental frequency, i.e., F0, delta F0 and delta delta F0, were utilized as raw input features. These features were extracted with a 25ms window size and a 10ms shift. §.§ Model architecture In Fig. 
<ref>, the `Subsampling' module consists of two CNN blocks, each with 256 channels, kernel size 3, stride 2, and a ReLU activation function. The acoustic encoder is formed by stacking 16 conformer blocks <cit.>, each with a kernel size of 15, an attention dimension d_a=256, 4 attention heads, and a 2048-dimensional FFN layer. The `bert-base-chinese' model from huggingface is used as the pretrained PLM for linguistic knowledge transfer <cit.>. In this Chinese BERT model, 12 transformer encoders are applied, the token (or vocabulary) size is 21128, and the dimension of the linguistic feature representation is d_t=768.
§.§ Hyper-parameters in model learning
Several hyper-parameters are associated with the proposed model, and they may have a joint (or correlated) effect on the efficiency of linguistic knowledge transfer. In our preliminary experiments, for ease of implementation, they were fixed as β=0.5 in Eq. (<ref>), alignment trade-off parameter λ=0.3, and scale parameter w=1.0 in Eq. (<ref>). α̃ in Eq. (<ref>) and s in Eq. (<ref>) were varied in the experiments. For optimization, the Adam optimizer <cit.> was used with a learning-rate schedule (initial value 0.001) and 20,000 warm-up steps. The model with cross-modal transfer was trained for 130 epochs, and the final model used for evaluation was obtained by averaging the models from the last 10 epochs (the original conformer-CTC model without the adapter module was either pretrained or trained from scratch, depending on the experimental setting, before being jointly combined with cross-modal learning). Performance was evaluated based on the character error rate (CER).
§.§ Results
In the inference stage, only the left branch (blocks in the dashed red box in Fig. <ref>) is utilized, keeping the decoding speed similar to that of CTC-based decoding. In our experiments, only CTC greedy-search decoding was employed, and the results are presented in table <ref>. The results of the baseline system and of several state-of-the-art systems that integrate BERT for linguistic knowledge transfer are also provided for comparison. In this table, `Conformer+CTC' is the baseline system, trained without linguistic knowledge transfer. `Conformer+CTC/AED' denotes a hybrid CTC/AED ASR system <cit.> which used a transformer decoder with attention to the text representation during model training. `NAR-BERT-ASR', `KT-RL-ATT', and `Wav2vec-BERT' are all based on integrating acoustic and linguistic features from BERT for ASR <cit.>, and even used a pretrained acoustic model (from wav2vec2.0 <cit.>) together with the PLM for knowledge transfer. In the OT-based cross-modal learning, two experimental conditions were examined. In the first, the models with cross-modal learning were trained from scratch, i.e., the without-pretraining condition. In the second, the models were initialized with a pretrained acoustic model and then further trained with cross-modal learning, i.e., the with-pretraining condition. Correspondingly, in table <ref>, `OT-BERT(w/o)' and `TOT-CAKT(w/o)' denote the results of cross-modal linguistic knowledge transfer based on the OT method of <cit.> and on the proposed TOT-CAKT, respectively, for the without-pretraining condition, while `OT-BERT(w/)' and `TOT-CAKT(w/)' are the corresponding results with pretraining. From this table, we can observe that linguistic knowledge significantly enhances the ASR performance (not in a conventional way like LM rescoring).
From the results of `OT-BERT(w/o)' and `TOT-CAKT(w/o)', both the method in <cit.> and the proposed cross-modal learning efficiently transfer linguistic knowledge into acoustic encoding, yielding promising results. Moreover, comparing `OT-BERT(w/o)' with `TOT-CAKT(w/o)', a significant performance improvement is observed when all models with cross-modal learning are trained from scratch (the without-pretraining condition). Furthermore, the performance of the proposed TOT-CAKT, even without pretraining, reaches a level comparable to that of the original OT-based method, which requires a time-consuming pretraining stage (as in `OT-BERT(w/)'). Finally, we observed that when pretraining was used before cross-modal learning, the improvement of our proposed method (`TOT-CAKT(w/)' in table <ref>) over the `OT-BERT(w/)' of <cit.> was reduced. This suggests that the pretraining stage can implicitly provide temporal order information for cross-modal feature alignment and knowledge transfer. In comparison, our proposed method explicitly incorporates temporal order information in the mathematical modeling, with flexible parameters that can be controlled in experiments. Based on our formulation, our future work will focus on finding optimal temporal order parameter settings.
§.§ Visualization of transport coupling
In the proposed TOT-CAKT, the coupled pairs between the acoustic and linguistic feature sequences are explicitly designed to correspond to their temporal coherence, i.e., acoustic segments should match their linguistic tokens sequentially. Two examples of the coupling matrices are shown in Fig. <ref>. In Fig. <ref>-a, the coupling matrix is learned based on OT without the temporal order constraint, and Fig. <ref>-b shows the coupling matrix learned with the temporal order constraint. From this figure, it is evident that clear temporal correspondences exist between the acoustic feature sequence and the linguistic token sequence in both transport couplings. Moreover, several positions with incorrect couplings in Fig. <ref>-a were corrected in Fig. <ref>-b by our proposed method, which explicitly adds a temporal order constraint.
§ CONCLUSION
Acoustic and linguistic features belong to different modalities, which makes feature alignment a crucial step in transferring linguistic knowledge from a PLM to acoustic encoding. In this paper, we proposed a novel TOT-CAKT. In the TOT-CAKT, a transport coupling (or mapping) preserving temporal order information between the acoustic sequence and the linguistic sequence was first estimated. Subsequently, acoustic features were mapped to the linguistic space based on the transport coupling, allowing the mapped features to be directly compared with the information encoded in the PLM. Our ASR experimental results confirmed the effectiveness of the proposed TOT-CAKT. Additionally, based on the visualization of the transport coupling, we verified that TOT can eliminate unreasonable matches between the acoustic and linguistic sequences. In the proposed TOT-CAKT, several hyper-parameters involved in model learning are challenging to control. Additionally, some of these hyper-parameters are sensitive in implementation and may lead to stability problems in the Sinkhorn algorithm. In the current paper, the combined effects of these hyper-parameters have not been fully explored, and a clear understanding of how they interact has not been established, which may hinder the identification of optimal solutions for improving ASR performance.
Figuring out a set of optimal hyper-parameters in the proposed TOT-CAKT remains as our future work. IEEEtran 1 Li2022 J. Li, “Recent advances in end-to-end automatic speech recognition," APSIPA Transactions on Signal and Information Processing, DOI 10.1561/116.00000050, 2022. Chan2016 W. Chan, N. Jaitly, Q. Le and O. Vinyals, “Listen, attend and spell: A neural network for large vocabulary conversational speech recognition," in Proc. of ICASSP, 2016, pp. 4960-4964. Kim2017 S. Kim, T. Hori, and S. Watanabe, “Joint CTC-attention based end-to-end speech recognition using multi-task learning," in Proc. of ICASSP, 2017, pp. 4835–4839. Hori2017 T. Hori, S. Watanabe, and J. R. Hershey, “Joint ctc/attention decoding for end-to-end speech recognition," in Proc. of ACL, 2017, vol. 1, pp. 518–529. Watanabe2017 S. Watanabe, T. Hori, S. Kim, J. R. Hershey and T. Hayashi, “Hybrid CTC/Attention Architecture for End-to-End Speech Recognition," IEEE Journal of Selected Topics in Signal Processing, vol. 11, no. 8, pp. 1240-1253, 2017. RNNTASR A. Graves, “Sequence transduction with recurrent neural networks," arXiv preprint, arXiv:1211.3711, 2012. BERTScore J. Shin, Y. Lee, and K. Jung, “Effective sentence scoring method using BERT for speech recognition," in Proc. of ACML, 2019, pp. 1081-1093. MLMScore J. Salazar, D. Liang, T. Nguyen, K. Kirchhoff, “Masked Language Model Scoring," in Proc. of ACL, 2020, pp. 2699-2712. CTCASR A. Graves, and N. Jaitly, “Towards end to-end speech recognition with recurrent neural networks," in Proc. ICML, 2014, pp. 1764–1772. HierarchicalCTC Higuchi, K. Karube, T. Ogawa, et al., “Hierarchical conditional end-to-end asr with ctc and multi-granular subword units," in Proc. of ICASSP, 2022, pp. 7797-7801. intermediateCTC Y. Fujita, T. Komatsu, and Y. Kida, “Multi-sequence intermediate conditioning for ctc-based asr," arXiv preprint, arXiv:2204.00175, 2022. BERT J. Devlin, M. Chang, K. Lee, and K. Toutanova, “Bert: Pretraining of deep bidirectional transformers for language understanding," arXiv preprint, arXiv:1810.04805, 2018. FNAR-BERT Y. Bai, J. Yi, J. Tao, Z. Tian, Z. Wen and S. Zhang, "Fast End-to-End Speech Recognition Via Non-Autoregressive Models and Cross-Modal Knowledge Transferring From BERT," IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 29, pp. 1897-1911, 2021. NARBERT F. Yu, K. Chen, and K. Lu, “Non-autoregressive ASR Modeling using Pre-trained Language Models for Chinese Speech Recognition," IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 30, pp. 1474-1482, 2022 KuboICASSP2022 Y. Kubo, S. Karita, M. Bacchiani, “Knowledge Transfer from Large-Scale Pretrained Language Models to End-To-End Speech Recognizers," in Proc. of ICASSP, 2022, pp. 8512-8516. Choi2022 K. Choi, H. Park, “Distilling a Pretrained Language Model to a Multilingual ASR Model," in Proc. of INTERSPEECH, 2022, pp. 2203-2207. wav2vec2.0 A. Baevski, Y. Zhou, A. Mohamed, and M. Auli, “Wav2vec 2.0: A framework for self-supervised learning of speech representations," in Proc. of NeurIPS, 2020. Futami2022 H. Futami, H. Inaguma, M. Mimura, S. Sakai, T. Kawahara, “Distilling the Knowledge of BERT for CTC-based ASR," CoRR abs/2209.02030, 2022. Higuchi2023 Y. Higuchi, T. Ogawa, T. Kobayashi, S. Watanabe, “BECTRA: Transducer-Based End-To-End ASR with Bert-Enhanced Encoder," in Proc. of ICASSP, 2023, pp. 1-5. CIFBERT1 M. Han, F. Chen, J. Shi, S. Xu, B. 
Xu, “Knowledge Transfer from Pre-trained Language Models to Cif-based Speech Recognizers via Hierarchical Distillation," arXiv preprint, arXiv:2301.13003, 2023. wav2vecBERTSLT2022 K. Lu and K. Chen, “A Context-aware Knowledge Transferring Strategy for CTC-based ASR," in Proc. of SLT, 2022, pp. 60-67. CTCBERT1 K. Deng, S. Cao, Y. Zhang, L. Ma, G. Cheng, J. Xu, P. Zhang, “Improving CTC-Based Speech Recognition Via Knowledge Transferring from Pre-Trained Language Models," in Proc. of ICASSP, 2022, pp. 8517-8521. CTCBERT2 K. Deng, Z. Yang, S. Watanabe, Y. Higuchi, G. Cheng, P. Zhang, “Improving Non-Autoregressive End-to-End Speech Recognition with Pre-Trained Acoustic and Language Models," in Proc. of ICASSP, 2022, pp. 8522-8526. DengICASSP2024 K. Deng, Z, P. Woodland, “FastInject: Injecting Unpaired Text Data into CTC-Based ASR Training," in Proc. of ICASSP, 2024, pp. 11836-11840. CIFBERT2 M. Han, L. Dong, Z. Liang, M. Cai, S. Zhou, Z. Ma, B. Xu, “Improving End-to-End Contextual Speech Recognition with Fine-Grained Contextual Knowledge Selection," in Proc. of ICASSP, 2022, pp. 8532-8536. Transformer A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. Gomez, L. Kaiser, and I. Polosukhin, “Attention is all you need," in Proc. of NIPS, 2017, pp. 5998-6008. VillanoBook C. Villani, Optimal transport: old and new, volume 338. Springer, 2009 CourtyNIPS2017 N. Courty, R. Flamary, A. Habrard, A. Rakotomamonjy, “Joint distribution optimal transportation for domain adaptation," in Proc. of NIPS, 2017, pp. 3733-3742. LuICASSP2021 X. Lu, P. Shen, Y. Tsao, H. Kawai, “Unsupervised Neural Adaptation Model Based on Optimal Transport for Spoken Language Identification," in Proc. of ICASSP, 2021, pp. 7213-7217. Lin2021 H. Lin, H. Tseng, X. Lu, Y. Tsao, “Unsupervised Noise Adaptive Speech Enhancement by Discriminator-Constrained Optimal Transport," in Proc. of NeurIPS, 2021, pp. 19935-19946. ICLR2023 H. Tseng, H. Lin, H. Hsuan, and Y. Tsao, “Interpretations of Domain Adaptations via Layer Variational Analysis," arXiv preprint, CoRR abs/2302.01798, 2023. Cho2020 W. Cho, D. Kwak, J. Yoon, N. Kim, “Speech to Text Adaptation: Towards an Efficient Cross-Modal Distillation," in Proc. of INTERSPEECH, 2020, pp. 896-900. Cross2021 W. Wang, S. Ren, Y. Qian, S. Liu, Y. Shi, Y. Qian, M. Zeng, “Optimizing Alignment of Speech and Language Latent Spaces for End-To-End Speech Recognition and Understanding," in Proc. of ICASSP, 2021, pp. 7802-7806. ACL2023 Y. Zhou, Q. Fang, Y. Feng, “CMOT: Cross-modal Mixup via Optimal Transport for Speech Translation," arXiv preprint, arXiv:2305.14635, 2023. ICML2023 P. Le, H. Gong, C. Wang, J. Pino, B. Lecouteux, D. Schwab, “Pre-training for Speech Translation: CTC Meets Optimal Transport," arXiv preprint, CoRR abs/2301.11716, 2023. ASRU2023Lu X. Lu, P. Shen, Y. Tsao, H. Kawai, “Cross-Modal Alignment with Optimal Transport for CTC-Based ASR," IEEE-ASRU, 2023, Dec.16-20,Taipei, Taiwan. conformer2020 A. Gulati, J. Qin, C. Chiu, et al., “Conformer: Convolution augmented transformer for speech recognition," arXiv preprint, arXiv:2005.08100, 2020 Cuturi2013 M. Cuturi, “Sinkhorn distances: Lightspeed computation of optimal transport," in Proc. of NIPS, 2013, vol. 26. Su2017 B. Su, G. Hua, “Order-Preserving Wasserstein Distance for Sequence Matching," in Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017. 
AISHELL1 Hui Bu, Jiayu Du, Xingyu Na, Bengu Wu, and Hao Zheng, “AIShell-1: An open-source mandarin speech corpus and a speech recognition baseline,” in Proc. of COCOSDA, 2017, pp. 1-5. Huggingface https://huggingface.co/ Adam Diederik P. Kingma, Jimmy Ba, “Adam: A Method for Stochastic Optimization," in Proc. of ICLR, 2015. wenet2.0 B. Zhang, D. Wu, Z. Peng, X. Song, Z. Yao, H. Lv, L. Xie, C. Yang, F. Pan, J. Niu, “WeNet 2.0: More Productive End-to-End Speech Recognition Toolkit," in Proc. of INTERSPEECH, 2022, pp. 1661-1665.
http://arxiv.org/abs/2409.03411v1
20240905105413
Mid-order wavefront control for exoplanet imaging: preliminary characterization of the segmented deformable mirror and Zernike wavefront sensor on HiCAT
[ "B. Buralli", "M. N'Diaye", "R. Pourcelot", "M. Carbillet", "E. H. Por", "I. Laginja", "L. Canas", "S. Steiger", "P. Petrone", "M. M. Nguyen", "B. Nickson", "S. F. Redmond", "A. Sahoo", "L. Pueyo", "M. D. Perrin", "R. Soummer" ]
astro-ph.IM
[ "astro-ph.IM" ]
Mid-order wavefront control for exoplanet imaging: preliminary characterization of the segmented deformable mirror and Zernike wavefront sensor on HiCAT B. Buralli, M. N'Diaye, R. Pourcelot, M. Carbillet, E. H. Por, I. Laginja, L. Canas, S. Steiger, P. Petrone, M. M. Nguyen, B. Nickson, S. F. Redmond, A. Sahoo, L. Pueyo, M. D. Perrin, R. Soummer
======================================================================================================================================================
§ ABSTRACT
We study a mid-order wavefront sensor (MOWFS) to address fine cophasing errors in exoplanet imaging with future large segmented aperture space telescopes. Observing Earth analogs around Sun-like stars requires contrasts down to 10^-10 in visible light. One promising solution consists of producing a high-contrast dark zone in the image of an observed star. In a space observatory, this dark region will be altered by several effects, among them the small misalignments of the telescope mirror segments due to fine thermo-mechanical drifts. To correct for these errors in real time, we investigate a wavefront control loop based on a MOWFS with a Zernike sensor. Such a MOWFS was installed on the high-contrast imager for complex aperture telescopes (HiCAT) testbed in Baltimore in June 2023. The bench uses a 37-segment Iris-AO deformable mirror to mimic telescope segmentation, together with wavefront control strategies to produce a dark zone with such an aperture. In this contribution, we first use the MOWFS to characterize the Iris-AO segment discretization steps. For the central segment, we find a minimal step of 125 ±31 pm. This result will help us assess the contribution of the Iris-AO DM to the contrast in HiCAT. We then determine the detection limits of the MOWFS, estimating wavefront error amplitudes of 119 and 102 pm for 10 s and 1 min exposure times with a SNR of 3. These values inform us about the measurement capabilities of our wavefront sensor on the testbed. These preliminary results will be useful to provide insights on metrology and stability for exo-Earth observations with the Habitable Worlds Observatory.
§ INTRODUCTION
High-contrast imaging is a promising method to gather information about the physical and chemical properties of planetary companions, i.e. their luminosity, distance to the host star, radius, orbital period, mass, and atmospheric features. On current facilities, this technique allows us to observe young or massive gaseous exoplanets with a contrast down to 10^-6 at separations down to 200 mas from their host star in the near infrared. With future observatories, the community aims to image mature or light rocky planets with a contrast down to 10^-10 at angular separations shorter than 50 mas in the visible. To achieve such contrast levels, one exciting solution consists in using a large space telescope with high-contrast capabilities. Missions with large segmented primary mirrors, such as the James Webb Space Telescope (JWST), allow us to obtain the sensitivity and resolution required to observe Earth-like planets. High-contrast capabilities, such as the combination of coronagraphy and wavefront control strategies, are expected to achieve and stabilize the required contrast. In the past few years, several mission concepts have been studied in the community to enable the observation of terrestrial planets in visible and near-infrared light, such as LUVOIR <cit.> and HabEx <cit.>. The 2020 NASA Decadal Survey <cit.> has proposed the study of the Habitable Worlds Observatory (HWO) as a future general astrophysics observatory with high-contrast capabilities to observe a large sample of Earth twins.
Such an observatory will see its exoplanet imaging capabilities quickly degraded by thermo-mechanical drifts. These drifts will induce different effects, among them fine misalignments of the telescope primary mirror. The resulting segment cophasing errors will lead to wavefront aberrations and therefore to contrast degradation. To address these telescope segment misalignments in real time, one encouraging solution consists in implementing a dedicated wavefront sensing and control loop. In this paper, we consider wavefront control with a mid-order wavefront sensor (MOWFS) to address the aberrations with mid-order spatial frequency content due to segment misalignments. We first present the MOWFS using a Zernike wavefront sensor and we detail its implementation on HiCAT, the high-contrast testbed for segmented aperture telescopes in Baltimore. This bench is equipped with an Iris-AO segmented deformable mirror to mimic the segmentation of the telescope primary mirror. We perform experimental tests with the MOWFS to characterize the Iris-AO. Finally, the capabilities of the MOWFS are assessed to determine its detection limit in wavefront error amplitude.
§ MID-ORDER WAVEFRONT SENSOR ON HICAT TESTBED
§.§ Zernike wavefront sensor
For the MOWFS, we consider the Zernike wavefront sensor (ZWFS), since we are interested in controlling small phase aberrations with segmented aperture telescopes. This concept is based on the phase contrast method developed by F. Zernike <cit.> for microscopy. Its interest in astronomy is much more recent <cit.>. Since then, its use has widely spread in the astronomy community <cit.> for adaptive optics, high-contrast imaging, and picometric metrology. Similarly to the pyramid wavefront sensor (PWFS) <cit.>, the ZWFS is a Fourier-filtering wavefront sensor (FFWFS) <cit.>, a class of sensors known for their very high sensitivity. This wavefront sensor uses a focal plane mask with a phase shift of θ and a diameter of about one resolution element λ/D, in which λ and D denote the wavelength of observation and the diameter of the telescope aperture. Recent studies have shown that the mask diameter can be adjusted to increase the sensor sensitivity <cit.>. We briefly recall the principle of this sensor. A point-like source is observed with a telescope aperture located in the entrance pupil plane A. We assume an electric field in this plane with a phase error φ. In the following focal plane B, the source image is formed and we introduce the Zernike phase mask. The light going through the mask is phase-shifted and interferes with the light surrounding the mask. This leads to a pupil intensity I_C in the re-imaged pupil plane C that is directly related to φ. From the literature <cit.>, the terms φ and I_C are related as follows:
φ = arcsin( (I_C - P_A^2 - 2b^2 (1 - cosθ)) / (4 P_A b sin(θ/2)) ) + θ/2,
with b the wave diffracted by the mask in the re-imaged pupil plane and P_A the amplitude of the electric field in the entrance pupil plane. In the regime of very small aberrations (φ ≪ 1 rad) and with a typical phase shift θ = π/2, the previous equation leads to a linear relation between φ and I_C:
φ = I_C / (2 P_A b) - P_A / (2b) + 1 - b / P_A.
With the ZWFS, we consider two methods for the phase reconstruction: the analytical method and the interaction matrix method. The analytical method uses the equations described above. The interaction matrix method is based on the procedure used in adaptive optics.
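Before detailing the interaction matrix approach, the analytical reconstruction above can be illustrated with a minimal NumPy sketch. It is not the HiCAT pipeline; the array names, the clipping of the arcsin argument, and the way b and P_A would be obtained (for instance from a model or a clear-pupil calibration frame) are assumptions.

```python
import numpy as np

def zwfs_phase_analytical(I_C, P_A, b, theta=np.pi / 2, linear=True):
    """Pixel-wise ZWFS phase estimate from the re-imaged pupil intensity I_C.

    I_C : measured pupil-plane intensity (2D array)
    P_A : entrance-pupil amplitude (2D array, same shape)
    b   : amplitude diffracted by the Zernike mask (2D array or scalar)
    """
    if linear:
        # Small-aberration approximation with theta = pi/2.
        return I_C / (2.0 * P_A * b) - P_A / (2.0 * b) + 1.0 - b / P_A
    # General relation, clipped to keep arcsin within its domain.
    num = I_C - P_A**2 - 2.0 * b**2 * (1.0 - np.cos(theta))
    den = 4.0 * P_A * b * np.sin(theta / 2.0)
    return np.arcsin(np.clip(num / den, -1.0, 1.0)) + theta / 2.0
```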
For a segmented pupil, we introduce a set of piston, tip, and tilt modes for each segment, and we measure the response in the re-imaged pupil plane. We gather the resulting intensities for all the modes and all the segments into an interaction matrix. Using a singular value decomposition, we pseudo-invert this matrix to obtain the command matrix. This matrix is then used to reconstruct the phase error from a given intensity measured on the Zernike detector. During preliminary simulations, we tested both methods and obtained similar and consistent results. In the following, we consider the phase reconstruction based on the interaction matrix method for our tests with the MOWFS on the HiCAT testbed.
§.§ High-contrast imager for Complex Aperture Telescopes testbed
The high-contrast imager for Complex Aperture Telescopes (HiCAT) testbed is an optical bench located at STScI in Baltimore <cit.>. It aims to develop high-contrast coronagraphic techniques for segmented telescopes, providing an integrated solution for wavefront control and starlight suppression on segmented aperture geometries. The testbed can operate in different coronagraphic modes: Classical Lyot Coronagraph (CLC), Apodized Lyot Coronagraph (APLC) <cit.>, or Phase-Apodized-Pupil Lyot Coronagraph (PAPLC) <cit.>. HiCAT also includes an Iris-AO DM to mimic the pupil segmentation and two continuous kilo-DMs from Boston Micromachines for wavefront control. Different algorithms are available to produce a high-contrast region in the coronagraphic image of the light source. There is also a low-order wavefront control loop with a ZWFS that uses the light going through the reflective focal plane mask to control low-order aberrations <cit.>. The latest results show contrast levels down to a few 10^-8 in narrowband and broadband <cit.>. In this contribution, we briefly recall the main features of the testbed that are relevant to our study with the MOWFS. The star is simulated with a laser going through an optical fiber at λ= 638 nm. The primary mirror is represented using an outline frame defining the edge of the telescope and a 37-segment PTT111L deformable mirror (DM) from Iris-AO. The latter has an inscribed circular diameter of 7 mm with segments of 1.4 mm. Each segment is controlled with 3 actuators to generate piston, tip, and tilt (PTT) modes. The maximum amplitude of the actuators is around 5 μm, and the system is driven with 14-bit electronics. On HiCAT, in the current setup, a pick-off mirror has been installed between the Iris-AO DM and the apodizer to send the light either to the science path or to the MOWFS. In the MOWFS configuration, the beam goes through a combination of lenses to form the source image on the ZWFS and the sensor signal on a CCD camera, see Figure <ref>. The MOWFS uses a Zernike mask with a diameter of 41 μm, which corresponds to 2.5 λ f / D with f = 500 mm and D = 19.2 mm. The CCD camera is a ZWO ASI178MM with 824x824 pixels. A digital twin of HiCAT using the CATKit2 library <cit.> is available to numerically reproduce the testbed behaviour and the light propagation through the different elements of the bench. This simulator also enables the development of experiment scripts before their implementation and exploitation on hardware. In the case of the MOWFS, we have implemented the different parts to drive the pick-off mirror, the Zernike mask alignment, and the camera for image acquisition.
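The interaction-matrix reconstruction described at the beginning of this section can be summarized with a short sketch. This is a schematic outline only, not the HiCAT/CATKit2 code; the poke amplitude, the SVD truncation, and the helper measure_zwfs_response() are hypothetical.

```python
import numpy as np

def build_command_matrix(measure_zwfs_response, n_segments=37, poke=1e-9, n_trunc=None):
    """Build the ZWFS command matrix for piston/tip/tilt (PTT) of each segment.

    measure_zwfs_response(segment, mode, amplitude) -> 1D differential intensity
    (measurement minus reference); a hypothetical interface to the testbed or
    its digital twin.
    """
    columns = []
    for seg in range(n_segments):
        for mode in ("piston", "tip", "tilt"):
            columns.append(measure_zwfs_response(seg, mode, poke) / poke)
    IM = np.column_stack(columns)                  # pixels x (3 * n_segments)
    U, s, Vt = np.linalg.svd(IM, full_matrices=False)
    if n_trunc is not None:                        # drop poorly sensed modes
        U, s, Vt = U[:, :n_trunc], s[:n_trunc], Vt[:n_trunc]
    return Vt.T @ np.diag(1.0 / s) @ U.T           # command matrix (pseudo-inverse)

# Reconstruction: PTT coefficients = command_matrix @ differential_intensity
```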
§ MOWFS SENSITIVITY STUDY
§.§ Preliminary characterization of the Iris-AO segmented DM
Our main objective is to develop a wavefront control loop to stabilize a dark zone in the presence of drifts on the segmented primary mirror. Such a study requires the characterization of the Iris-AO DM on the HiCAT testbed. In this section, we present the preliminary tests and results to measure the quantization of the discretization steps of the DM actuators. As a first experiment, we start with a flat map applied to the DM to measure a reference signal on the MOWFS. We then apply a global piston with a given value to all the segments of the DM and we determine the response of the MOWFS by performing a differential intensity measurement. In our procedure, we take measurements by switching between the reference image and the image with the global piston in an alternating way, for two reasons. First, we want to increase the signal-to-noise ratio (SNR) by stacking the differential images over 10 iterations. The second reason is related to a mechanical instability of the mount of the pick-off mirror and some turbulence associated with the motor of this mount. Figure <ref> shows the differential intensity for a global piston of 0.2 and 0.6 nm RMS. For each global piston, the intensity map shows a different response for each of the segments. This behaviour might be related to differences in actuator response for a given command. Qualitatively, the overall shape is not the same for the two global piston commands, suggesting possibly different dynamics from one segment to another. It is therefore important to characterize the response of each actuator accurately. As a preliminary calibration, we analyze the MOWFS response in PTT for a single mode introduced on the central segment, using a ramp of tip from 0 to 1500 pm with a sampling of 1.5 pm, see Figure <ref>. The acquisition procedure is the same as previously, i.e., a differential image between a reference image and an image with the introduced mode. The integration time for each data point is 35 ms. At first sight, we notice two things. First, the response curves exhibit steps, instead of the expected slope. This shape might be due to the 14-bit voltage resolution of the electronics, leading to a discretization of the DM command. Secondly, for an introduced tip, we measure a combination of PTT modes. This effect is most likely related to the non-uniform response of the actuators for a given mode. These effects can represent a limiting factor for the introduction and correction of PTT errors in the context of dark zone stability in the presence of drifts on the telescope segments. For a given tip with an amplitude ranging from 250 to 630 pm and from 680 to 1129 pm, the measured piston and tilt are approximately null. In this configuration, the trend of the measured tip does not follow the unitary slope. Work is in progress to understand the origin of this discrepancy. Possible explanations are: calibration errors due to an inaccurate alignment of the ZWFS mask with respect to the source position at focus, the presence of turbulence due to the heating coming from the motor mount of the pick-off mirror, and the non-uniform response of the DM actuators. Some information can still be extracted from these steps, for instance the noise level and the SNR for the characterization of the Iris-AO DM response.
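Since every plateau of the ramp repeats the same commanded value, its mean, RMS, and SNR can be extracted with a few lines. The sketch below is only illustrative; the array names and the plateau bounds are placeholders, not the actual analysis script.

```python
import numpy as np

def step_statistics(measured_tip, introduced_tip, step_lo, step_hi):
    """Mean, RMS (noise), and SNR of one measured discretization step.

    measured_tip, introduced_tip : 1D arrays from the ramp acquisition (pm)
    step_lo, step_hi             : introduced-amplitude bounds of one plateau (pm)
    """
    on_step = (introduced_tip >= step_lo) & (introduced_tip < step_hi)
    values = measured_tip[on_step]
    mean, rms = values.mean(), values.std()
    return mean, rms, mean / rms   # e.g. about 125 pm, 31 pm, SNR of 4 for the first step
```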
As the values of the steps are very stable, we can associate them with the temporal evolution of a segment mode with a specific amplitude. We can then derive the mean value, but also the RMS of the steps, which corresponds to the noise level of the measurements. This information allows us to determine the SNR at which the discretization steps are measured with the MOWFS, see Figure <ref>. The left plot shows that the MOWFS allows us to measure a minimal step of 125±31 pm. For this discretization step, the right plot shows that we reach a SNR = 4, showing the ability of our MOWFS to characterize the Iris-AO DM discretization steps at a sub-nanometric level with a high SNR. In the same plot, we also notice an increase of the SNR with the introduced amplitude. This result is consistent with our expectations as we are working in a high source flux regime and we are not limited by photon noise or read-out noise. §.§ Characterization of the MOWFS capabilities The ability to measure wavefront errors down to a few tens of picometers is a key aspect for exo-Earth observations with future large space observatories. To probe wavefront errors at different spatial frequencies, HiCAT uses several wavefront sensors such as the MOWFS. This section aims to characterize the wavefront error detection limit of this sensor on the testbed in air for a source signal with a given SNR. To reach this goal, we rely on the smallest Iris-AO discretization steps that have been measured by the MOWFS in the previous section. For the steps with an amplitude of 125 and 292 pm, the MOWFS shows a measurement RMS of 31 and 39 pm. Figure <ref> shows the wavefront detection limit for a SNR of 3 with the previous discretization steps. For the first and second discretization steps and assuming a SNR of 3, our MOWFS is able to measure a wavefront error with an amplitude down to 93 pm and 117 pm RMS. Since the second step shows a larger sequence of data points than the first step, we consider these data to determine the temporal evolution of the MOWFS detection limit. We recall that the exposure time for each data point is 35 ms. We determine the wavefront error detectability of the MOWFS for different integration times. The wavefront error amplitude is computed for an associated number of exposures and scaled with a given SNR to achieve the associated detection limit. In our study we set the SNR to 3. Figure <ref> shows the temporal evolution of the MOWFS detection limit. The results show a decrease in the mean value of the wavefront error amplitude as we increase the exposure time. We fit the data points with a power law, assuming that we are mostly limited by photon noise. Our resulting model is 87/ √(t)+91 in which t denotes the exposure time. Using this model, we can have an averaged estimate of the wavefront error amplitude at longer exposure time and determine the MOWFS detection limit. For an exposure time of 10 s and 1 min, the MOWFS reaches a wavefront error amplitude of 119 and 102 pm. Obviously, these results are not direct measurements and present their own limits. However, they provide a good estimate of the MOWFS detection limit on HiCAT in air and in the presence of several calibration errors, as mentioned in section <ref>. They will be useful to model and extrapolate the metrology capabilities of the wavefront sensors in the context of ultra-stable space telescopes for exo-Earth observations. 
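The temporal extrapolation above can be reproduced directly from the fitted model. The short sketch below only evaluates the power law quoted in the text (87/√t + 91, in pm, with t in seconds) at 10 s and 1 min; the function name is ours, and a fit to new data could use, e.g., scipy.optimize.curve_fit.

```python
import numpy as np

def detection_limit(t, a=87.0, c=91.0):
    """Wavefront-error detection limit in pm versus exposure time t in seconds.

    The default coefficients are the photon-noise-limited fit reported in the text.
    """
    return a / np.sqrt(t) + c

for t in (10.0, 60.0):
    print(f"t = {t:5.1f} s  ->  {detection_limit(t):.0f} pm")   # ~119 pm and ~102 pm
```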
§ CONCLUSION
In the context of a segmented aperture space telescope with high-contrast capabilities, a control loop based on a MOWFS is a promising option to ensure image contrast stability in the presence of fine segment cophasing errors. In June 2023, we installed the MOWFS setup on HiCAT and implemented its numerical version in the digital twin of the bench to run experiments in simulation and on hardware. Our work has led to preliminary studies on the segmented mirror of HiCAT and on the MOWFS itself. In our first study, we characterized the Iris-AO segmented DM on HiCAT, assessing the amplitude of the DM discretization steps at sub-nanometric levels. Our preliminary study on the central segment yielded an estimate of the minimal step of the Iris-AO DM of 125±31 pm with a SNR=4. In forthcoming studies, we will extend our work to all the segments of the Iris-AO DM. Such an analysis will allow us to estimate the minimal level of segment drifts that we could emulate on HiCAT and its impact on the image contrast. In a second study, we determined the detection limit of the MOWFS by analyzing and scaling the measurements of the DM discretization steps. Our results with the data from the second smallest discretization step showed that our MOWFS should be able to detect aberrations down to 119 pm and 102 pm for exposure times of 10 s and 1 min at SNR=3 on the HiCAT testbed in air. These first results on the sensor capabilities prove very promising for investigating our ability to control the wavefront errors on HiCAT and therefore determine the achievable contrast levels with our tools. Work is in progress to address some calibration issues related to the Zernike mask misalignment, the turbulence induced by the motor of a pick-off mirror on the beam, and the DM actuator responses. The expected improvements will allow for better quality, accuracy, and reproducibility of the results for the DM characterization and the MOWFS capabilities. One of the possible contrast limitations on HiCAT might be related to the Iris-AO behavior. We plan to study the DM stability by setting the component to a flat position and performing long-duration runs of MOWFS measurements. This will help us determine the contrast level due to the Iris-AO and investigate our ability to dig a dark hole with a contrast better than 10^-8 on HiCAT. Such a sensitivity analysis will help us further assess the stability and metrology requirements for HWO with its segmented primary mirror for exo-Earth observations. This work was supported by the Action Spécifique Haute Résolution Angulaire (ASHRA) of CNRS/INSU co-funded by CNES. B.B. acknowledges PhD scholarship funding from Région Provence-Alpes-Côte d'Azur and Thales Alenia Space. B.B. also acknowledges support from Laboratoire Lagrange through the 2024 BQR Lagrange program and from the Lagrange MPO team for the mission to the SPIE conference in Japan. The HiCAT testbed has been developed over the past 10 years and benefited from the work of an extended collaboration of over 50 people. This work was supported in part by the National Aeronautics and Space Administration under Grant 80NSSC19K0120 issued through the Strategic Astrophysics Technology/Technology Demonstration for Exo-planet Missions Program (SAT-TDEM; PI: R. Soummer), and under Grant 80NSSC22K0372 issued through the Astrophysics Research and Analysis Program (APRA; PI: L. Pueyo). E.H.P.
was supported in part by the NASA Hubble Fellowship grant HST-HF2-51467.001-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555. Sarah Steiger acknowledges support from an STScI Postdoctoral Fellowship, and Iva Laginja acknowledges partial support from a postdoctoral fellowship issued by the Centre National d'Etudes Spatiales (CNES) in France.
http://arxiv.org/abs/2409.03185v1
20240905022332
DasAtom: A Divide-and-Shuttle Atom Approach to Quantum Circuit Transformation
[ "Yunqi Huang", "Dingchao Gao", "Shenggang Ying", "Sanjiang Li" ]
quant-ph
[ "quant-ph", "cs.ET" ]
DasAtom: A Divide-and-Shuttle Atom Approach to Quantum Circuit Transformation Yunqi Huang, Dingchao Gao, Shenggang Ying, and Sanjiang Li^* Yunqi Huang and Sanjiang Li are with Centre for Quantum Software and Information (QSI), Faculty of Engineering and Information Technology, University of Technology Sydney, NSW 2007, Australia. Dingchao Gao and Shenggang Ying are with Institute of Software, Chinese Academy of Sciences, and University of Chinese Academy of Sciences, Beijing, China Corresponding author (E-mail: [email protected]) September 5, 2024 ============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================ § ABSTRACT Neutral atom (NA) quantum systems are emerging as a leading platform for quantum computation, offering superior or competitive qubit count and gate fidelity compared to superconducting circuits and ion traps. However, the unique features of NA devices, such as long-range interactions, long qubit coherence time, and the ability to physically move qubits, present distinct challenges for quantum circuit compilation. In this paper, we introduce DasAtom, a novel divide-and-shuttle atom approach designed to optimise quantum circuit transformation for NA devices by leveraging these capabilities. DasAtom partitions circuits into subcircuits, each associated with a qubit mapping that allows all gates within the subcircuit to be directly executed. The algorithm then shuttles atoms to transition seamlessly from one mapping to the next, enhancing both execution efficiency and overall fidelity. For a 30-qubit Quantum Fourier Transform (QFT), DasAtom achieves a 414x improvement in fidelity over the move-based algorithm Enola and a 10.6x improvement over the SWAP-based algorithm Tetris. Notably, this improvement is expected to increase exponentially with the number of qubits, positioning DasAtom as a highly promising solution for scaling quantum computation on NA platforms. Keywords: neutral atom quantum computing, quantum circuit transformation, divide-and-conquer (DAC), subgraph isomorphism § INTRODUCTION Quantum computing has the potential to revolutionise various fields, including cryptography <cit.>, chemistry <cit.>, and machine learning <cit.>. Among the diverse quantum hardware platforms under development, neutral atom (NA) quantum systems have garnered significant attention due to their inherent advantages in scalability, qubit connectivity, and gate fidelity <cit.>. Unlike other quantum platforms such as superconducting circuits, NA systems can leverage long-range interactions and native multi-qubit gates, which allow for the execution of complex quantum operations with fewer resources. Additionally, the ability to physically move qubits within NA systems introduces a new dimension in quantum circuit optimisation that is not present in more rigid architectures <cit.>. Despite these advantages, the unique characteristics of NA devices present novel challenges in quantum circuit compilation. Specifically, the need to efficiently map qubits and execute operations while improving overall fidelity is a non-trivial problem. 
Existing quantum circuit compilation methods, such as SWAP-based <cit.> and move-based <cit.> algorithms, are not fully optimised for the capabilities of NA systems, often leading to suboptimal performance in terms of overall fidelity. The SWAP-based method, exemplified by Tetris <cit.>, employs a fixed-atom array and transforms quantum circuits in a manner similar to compilers for superconducting devices <cit.>. This approach leverages the long-range interactions of NA systems to achieve denser qubit connectivity, which contrasts with state-of-the-art IBM superconducting quantum computers that typically have sparse qubit connections, with an average degree between 2 and 3. Tetris operates as a heuristic greedy algorithm, addressing both qubit connection constraints and parallel execution constraints. On the other hand, Enola <cit.>, the most recent and advanced move-based compiler, assumes sufficient separation between atoms to eliminate parallel execution constraints. Like <cit.>, Enola exploits gate commutativity in QAOA <cit.> circuits, aiming to schedule the gates of a commutation group in a near-optimal number of Rydberg stages. For generic quantum circuits like the Quantum Fourier Transform (QFT), CZ gates are often blocked by single-qubit gates (cf. Fig. <ref>). Enola executes each layer of CZ gates collectively. Empirical results on QFT circuits indicate that the slow atom movement (partially due to the large atom distance) is the primary factor contributing to overall fidelity loss. However, even these advanced methods do not fully capitalise on the strengths of the NA platform, particularly the synergistic use of long-range interactions and atom shuttling, which can significantly enhance the efficiency and fidelity of quantum circuit execution. This gap highlights the need for a more comprehensive approach, leading to the development of DasAtom, a divide-and-shuttle atom algorithm specifically designed for NA systems. DasAtom combines the advantages of both Tetris and Enola while avoiding their shortcomings. Tetris effectively uses long-range interactions to enable dense qubit connectivity, but it does not utilise the ability to move atoms, limiting its flexibility. On the other hand, Enola leverages atom shuttling to adapt qubit mappings dynamically, yet it cannot take advantage of long-range interactions as the atoms are far apart. DasAtom integrates both of these key capabilities: it partitions circuits into subcircuits, assigns an optimal qubit mapping to each subcircuit, and then shuttles atoms to smoothly transition between mappings. By doing so, it ensures that every gate in a subcircuit is directly executable, thereby enhancing overall fidelity and efficiency. We conducted empirical comparisons of DasAtom's performance against Tetris and Enola using the benchmark circuits from <cit.>. This set of 33 benchmark circuits includes RevLib circuits as well as Bernstein–Vazirani (BV), Quantum Volume (QV), and QFT circuits, ranging from 5 to 16 qubits with up to 3,089 CZ gates. In addition, we evaluated Deutsch-Jozsa (DJ), 3-regular MaxCut QAOA, Greenberger–Horne–Zeilinger (GHZ), QFT, QV, two-local ansatz, and W-state circuits with qubit counts ranging from 5 to 50. Our experiments demonstrate that DasAtom consistently delivers significant performance gains in both overall fidelity and compiler runtime.
For instance, DasAtom achieves a 414x improvement in fidelity over Enola and a 10.6x improvement over Tetris on a 30-qubit QFT, while the runtimes of Enola and Tetris are 9851x and 384x longer than that of DasAtom. Moreover, these improvements are expected to scale exponentially with the number of qubits, making DasAtom a highly promising solution for quantum computation on NA platforms. The remainder of this paper is organised as follows: Section II provides a brief background and discusses related work. Sections III and IV offer a detailed description of the DasAtom algorithm, including its implementation and a comprehensive performance analysis across various quantum circuits. In Section V, we explain why DasAtom outperforms other algorithms and explore potential avenues for optimisation. The last section concludes the paper.
§ BACKGROUND AND RELATED WORK
§.§ Neutral atom quantum hardware
In neutral atom quantum hardware, neutral atoms are trapped in arrays of optical tweezers <cit.> and the computational states |0⟩ and |1⟩ are encoded in the hyperfine ground states of an alkali or alkaline-earth-like atom. These atoms can be arranged in one, two, or even three-dimensional configurations <cit.>. In this work, we focus on a b× b regular grid G(b,b) with constant distance d>0. Fig. <ref> shows a 3× 3 grid G(3,3). In the following, we write q,q',q_i for program qubits in a circuit, and write p,p',p_i for nodes or their coordinates in G(b,b). Single-qubit gates are implemented through individual or global optical addressing of the atoms, and two-qubit gates are realised by exciting the atoms into a Rydberg state using laser beams. The excitation to Rydberg states induces a strong dipole-dipole interaction between the atoms <cit.>. This interaction is governed by an interaction radius R_int, within which a CZ gate on two atoms p_i, p_j can be performed if the distance D(p_i, p_j) ≤ R_int, where D represents the Euclidean distance. One significant advantage of the NA platform is its capability for long-range interaction. Two qubits can interact even if they are not neighbours in the grid. For instance, the interaction radius R_int can be r_int× d for 1≤ r_int≤ 3. Qubit connectivity of a quantum architecture is often captured by its architecture graph AG = (V, E), where the nodes in V correspond to the physical qubits (i.e., trapped atoms here), while the edges indicate the qubits capable of interacting with each other. For the NA platform, each physical qubit p in V is assigned the trap coordinates (x,y), and the edge set E can be defined as: E = {(p_i, p_j) | p_i, p_j ∈ V, D(p_i, p_j) ≤ R_int}. In the 3× 3 grid shown in Fig. <ref>, the black, red dotted, and blue edges are, respectively, those with distances d, √(2)d, and 2d. To minimise crosstalk between gates, parallel gate execution is feasible only if the qubits of different CZ gates maintain a distance of at least the restriction radius R_restr≥ R_int from all qubits involved in other simultaneously executed two-qubit gates. Specifically, for two CZ gates g on p_i, p_j and g' on p_a, p_b to be executed in parallel, the condition D(p_u, p_v) > R_restr must be satisfied for any u∈{i,j} and any v∈{a,b}. For example, suppose R_int=R_restr=1× d in Fig. <ref>. Let p_i,j be the physical qubit located at (i,j). Then CZ(p_0,2,p_1,2) can be executed in parallel with CZ(p_0,0,p_1,0), but not in parallel with CZ(p_1,1,p_1,0). The ability to move qubits is the most distinctive advantage of NA platforms <cit.>. In NA systems, qubits are captured in two types of traps.
A spatial light modulator (SLM) generates an array of static traps, while a 2D acousto-optic deflector (AOD) creates mobile traps that can move within the plane. The AOD traps are formed at the intersections of a set of rows and columns. Each row/column coordinate can be activated, moved, and then deactivated, allowing for arbitrary rearrangements of the atoms, subject to the constraint that different columns (rows) must not cross each other. Atom movement is a high-fidelity operation, and an atom can traverse a region hosting 2,000 qubits while consuming only 0.1% of the coherence time <cit.>.
§.§ Quantum circuit compilation
In quantum computing, a quantum circuit serves as a model that represents the flow of information and operations within a quantum algorithm. A quantum circuit is composed of qubits and quantum gates that manipulate the states of these qubits. In this work, we denote a quantum circuit by C, which consists of a sequence of quantum gates {g_1, …,g_m} acting on qubits in Q={q_1,…,q_n}. Fig. <ref> shows the well-known QFT circuit on five qubits.[For clarity, we have omitted the SWAP gates at the end of the circuit.] Circuits like QFT-5 in Fig. <ref> often contain gates that are not native to a target quantum device. This means we need to decompose the non-native gates in C into native gates. For example, let 𝒢={R_x, R_y, R_z, CZ} be the set of native gates of the target quantum device. For QFT-5 in Fig. <ref>, we need to decompose the H gates and the controlled-phase gates CP(θ) into gates in 𝒢. The result is shown in Fig. <ref>. Due to the limited qubit connectivity, it is not often that we can execute the synthesised circuit directly: some CZ gates may act on two far-away device qubits. For example, suppose the target quantum device has the architecture graph shown in Fig. <ref> but with all blue edges removed. Assume furthermore that we initially map the program qubits q_i (0≤ i≤ 4) as in Fig. <ref>(a). Then all but the last four CZ gates are directly executable, because they act on physical qubits that are connected by a black or red-dashed edge in Fig. <ref>. Note that the fourth-to-last CZ gate in C acts on q_0,q_2, which are mapped to p_0,2 and p_0,0. Because p_0,2 is not connected to p_0,0 in the architecture graph, we cannot execute this CZ gate. We next outline the general procedure for quantum circuit transformation. To execute a decomposed circuit C on an NA device, each program qubit q in C is first mapped to a physical atom, represented by a location p_i,j in the SLM layer of the device. With this initial mapping, as illustrated in Fig. <ref>(a), not all CZ gates are directly executable. If a CZ gate g is not directly executable, we need to bring the two qubits of g close together. In NA platforms, there are two methods for achieving this: SWAP gate insertion and atom shuttling (cf. <cit.>). One approach is to swap q_0 and q_4, or swap q_2 and q_4, by applying the corresponding SWAP gates. Alternatively, we can move the atom carrying q_0 from p_0,2 to one of the three neighbours of p_0,0. If a neighbouring position is already occupied, the occupying atom must be moved away.
§.§ Related work
Quantum circuit transformation (QCT) is the process of converting a program circuit into a form that is executable on a target quantum device, whether it be an IBM superconducting device or an NA quantum device. QCT is a crucial component of quantum circuit compilation. Since IBM launched its cloud-based quantum computing services, numerous QCT algorithms have been proposed in the literature, including <cit.>.
The unique features of NA devices introduce distinct challenges for quantum circuit compilation. Early efforts sought to leverage these features within fixed atom arrays. Baker et al. proposed the first compiler for NA devices, which accounted for long-range interaction <cit.>. Multi-qubit gate support was subsequently incorporated in <cit.>. Inspired by the popular block puzzle game, Li et al. <cit.> proposed the heuristic algorithm Tetris, which effectively reduces qubit idle time while exploiting the rich qubit connections of NA devices and adhering to parallel execution constraints. In contrast to the above SWAP-gate-based methods, Tan et al. <cit.> developed an SMT-solver-based compiler, called OLSQ-DPQA, which utilises atom shuttling. Nottingham et al. <cit.> also explored the potential of using atom movement as an alternative to costly SWAP gates. Subsequently, several follow-up compilers were introduced to address the scalability challenge of OLSQ-DPQA. These include Atomique <cit.>, Q-Pilot <cit.>, and Enola <cit.>, all of which consider the dynamically field-programmable qubit array (DPQA) architecture <cit.>. In Enola, the compilation process is divided into scheduling, placement, and routing. It schedules a commutation group of CZ gates in a near-optimal number of Rydberg stages and, for a generic quantum circuit, it schedules each layer of parallel CZ gates as a Rydberg stage. Enola offers two placement methods: dynamic placement, which generates a new qubit mapping for each Rydberg stage, and static placement, which uses the same mapping throughout. Compared with OLSQ-DPQA, Atomique, and Q-Pilot, Enola demonstrates superior fidelity improvement <cit.>. Combining the two methods described above is natural. Brandhofer et al. <cit.> proposed a compiler that integrates SWAP gates with a special atom movement technique called one-dimensional displacements, aimed at reducing circuit depth and improving circuit fidelity. Another hybrid compiler was proposed in <cit.>, where the NA hardware is grid-based and an atom can be moved from one grid point to another only if the target grid point is unoccupied. Experiments in <cit.> show that, for shuttling-favoured NA devices, the hybrid approach performs essentially the same as the atom shuttling method. Interested readers may consult the recent review paper <cit.> for more information. Divide-and-conquer (DAC) is a natural approach for quantum circuit transformation, and several researchers have proposed DAC-based QCT algorithms, such as those in <cit.>. Siraichi et al. <cit.> introduced the BMT algorithm, which transforms circuits by combining subgraph isomorphism with token swapping. The algorithm partitions the gate list into maximal isomorphic sublists, constructs multiple embeddings for each sublist, and then uses token swapping to combine embeddings of consecutive sublists. The optimal transformation path is found using dynamic programming. Similarly, Wu et al. <cit.> proposed a DAC approach that uses an SMT-based checker to verify whether a sublist can be transformed without SWAP gate insertion. Their method partitions the circuit into sublists with a bounded number of 2-qubit gates, generates multiple embeddings for each sublist, and links pairs of embeddings with SWAP gates. The optimal transformation is selected by minimising a heuristic distance cost. However, these algorithms, which rely on exhaustive search, struggle with circuits involving 20 or more qubits.
§ DASATOM TRANSFORMATION FRAMEWORK In this section, we give a detailed description of our algorithm DasAtom. §.§ Overview Let 𝒢 be an NA-native gate set, including the two-qubit CZ gate and local addressing rotation gates R_z, as well as R_x, R_y (which can be locally or globally addressed). Given an input circuit C, we first synthesise C using gates from 𝒢. In this work, we assume that single-qubit gates and CZ gates are executed in distinct stages; in each stage, either single-qubit gates or CZ gates are executed, but not both. Fig. <ref> outlines the general compilation flow on NA devices. Note that whenever there is a single-qubit gate in the front layer, we execute (and remove) it and generate the new front layer. This implies that 2Q gates can, in principle, be executed layerwise. Given an input circuit C on n qubits, an NA-native gate set 𝒢, an interaction radius R_int≥ d, and a restriction radius R_restr≥ R_int, where d is the distance between atoms, DasAtom operates as follows: * Synthesise C using gates from 𝒢 and remove single-qubit gates. The resulting circuit consists of only CZ gates. We still write C for this CZ circuit. * Partition the remaining circuit into layers L_1,…,L_m, where each L_i contains only CZ gates. * Divide the CZ circuit C into subcircuits such that (i) each layer L_i is fully contained within a subcircuit; (ii) the interaction graph of each subcircuit is embeddable in Grid(b,b), where b = √(n). The precise meaning of the interaction graph and embeddable graph will be defined in the next subsection. * For each subcircuit C_i, construct a mapping f_i that embeds the interaction graph IG(C_i) to Grid(b,b). For each CZ gate g=CZ(q,q') in C_i, we execute g by interacting f_i(q) and f_i(q'). Since D(f_i(q),f_i(q')) ≤ R_int, the gate can be executed directly. However, due to parallel execution constraints, not all gates in the same layer can be executed simultaneously. * Perform routing based on atom shuttling (instead of inserting SWAP gates). This can be achieved by modifying the routing algorithm provided in <cit.> or that in <cit.> or <cit.>. This process transforms f_i into f_i+1. The flowchart of DasAtom is shown in Fig. <ref>. In addition, we assume that CZ gates are executed by applying Rydberg laser individually. Note that to exploit long-range interaction, we always assume in DasAtom that the atom distance d ≤ R_int. This is different from DPQA <cit.>, where the atom distance is 2.5× R_int. In the following, we discuss the implementation of the above procedures in detail. §.§ Circuit division Qubit interaction in a quantum circuit C can be simply represented by a graph. For a quantum circuit C with qubit set Q, its interaction graph, denoted as IG(C), is an undirected graph (Q,E_int), where: * each node represents a qubit of C. * Two nodes q,q'∈ Q are connected if there is a two-qubit gate in C acting on q,q'. If the interaction graph of C matches well with the architecture graph AG=(V,E), there is no need to route the qubits: what we need is an embedding f from IG(C) to AG. Here a 1-1 mapping f:Q→ V is an embedding if (f(q),f(q')) is an edge in AG for any edge (q,q') in IG(C). In this case, f is also called a subgraph isomorphism. After decomposing gates in the circuit C into NA-native gates and removing single-qubit gates, we partition C into CZ layers L_1,⋯,L_m. It is straightforward to see that each CZ layer corresponds to an executable front layer (i.e., the `Execute 2Q gates' node) in Fig. <ref>. 
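To make the interaction-graph and embeddability notions concrete, the following sketch builds IG(C) from a CZ-gate list and checks whether it embeds into the √(n) × √(n) architecture graph. It is a simplified stand-in for DasAtom's Rustworkx-based VF2 check, written here with NetworkX; the function names, the toy gate list, and the non-induced monomorphism test are our assumptions.

```python
import math
import itertools
import networkx as nx
from networkx.algorithms.isomorphism import GraphMatcher

def architecture_graph(n_qubits, r_int=2.0, d=1.0):
    """Grid G(b,b) with b = ceil(sqrt(n)); edges connect atoms within R_int = r_int * d."""
    b = math.ceil(math.sqrt(n_qubits))
    ag = nx.Graph()
    nodes = [(x, y) for x in range(b) for y in range(b)]
    ag.add_nodes_from(nodes)
    for p, q in itertools.combinations(nodes, 2):
        if math.dist((p[0] * d, p[1] * d), (q[0] * d, q[1] * d)) <= r_int * d:
            ag.add_edge(p, q)
    return ag

def interaction_graph(cz_gates, n_qubits):
    """IG(C): one node per program qubit, one edge per interacting qubit pair."""
    ig = nx.Graph()
    ig.add_nodes_from(range(n_qubits))
    ig.add_edges_from(cz_gates)
    return ig

def is_embeddable(cz_gates, n_qubits, r_int=2.0):
    """True if IG(C) maps into the architecture graph as a (non-induced) subgraph."""
    matcher = GraphMatcher(architecture_graph(n_qubits, r_int),
                           interaction_graph(cz_gates, n_qubits))
    return matcher.subgraph_is_monomorphic()

# Example with a small toy gate list on 5 qubits and R_int = sqrt(2) * d.
print(is_embeddable([(0, 1), (1, 2), (2, 3), (3, 4), (0, 2)], 5, r_int=math.sqrt(2)))
```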
Furthermore, we divide C into subcircuits C_1,…,C_k so that each C_i is composed of consecutive CZ layers of C and the interaction graph of C_i is embeddable in AG with some embedding f_i. For convenience, we write C_i = C[ℓ_i : m_i] if C_i is composed of layers L_ℓ_i,…,L_m_i. Clearly, we have 1=ℓ_1≤ m_1<ℓ_2≤ m_2 < … < ℓ_k ≤ m_k=m. Unlike the methods in <cit.> and <cit.>, our division scheme is coarse-grained, requiring significantly fewer calls to the costly subgraph isomorphism check. For circuit division, we employ the Rustworkx implementation of VF2 <cit.>. When partitioning the circuit into layers, the number of VF2 calls is at most O(depth), where depth is the number of CZ layers in C. For our running example, if we set R_int=√(2)d, then the QFT-5 circuit can be partitioned into two subcircuits, see Fig. <ref>. The interaction graphs of these two subcircuits and their embeddings into AG=G(3,3) are shown in Fig. <ref>. To minimise qubit idling time, we aim to execute as many CZ gates in parallel as possible, ensuring they satisfy the previously mentioned restriction constraint. In our example, R_restr = 2× R_int = 2√(2)d. The first sub-circuit includes, for instance, a CZ layer consisting of gates CZ(q_1, q_3) and CZ(q_0, q_4), with the qubit mapping f shown in Fig. <ref>(a). While these gates can be executed in parallel on superconducting devices (albeit with potential crosstalk noise), on an NA device, D(f(q_3), f(q_4)) = d < R_restr, meaning CZ(q_1, q_3) and CZ(q_0, q_4) cannot be executed in parallel due to the restriction constraint. §.§ Atom shuttling Suppose we have divided the CZ circuit C into subcircuits C_1,…,C_k with corresponding embeddings f_1,…,f_k. How do we transition from one mapping to the next? In superconducting quantum devices, this is typically done by inserting SWAP gates. As we have seen in Section <ref>, atom shuttling is more desirable in NA quantum devices. Let Q={q_1,…,q_n} be the set of logical qubits in the input circuit. Suppose f,f' are two 1-1 mappings from Q to grid points in G(b,b) (b = √(n)). Our task is to transition the old mapping f into the new mapping f'. To this end, we need to move each q_i∈ Q from its current position f(q_i)=(x_i,y_i) in G(b,b) to a new position f'(q_i)=(x'_i,y'_i), also in G(b,b).[In contrast, Enola only requires moving the atom at (x_i,y_i) closer to the atom at (x'_i,y'_i), allowing for bidirectional movement.] Let M={m_i = (x_i,y_i,x'_i,y'_i) | 1≤ i≤ n}, where each m_i represents a target movement. Two movements m_i,m_j are said to conflict if the following condition is violated: * (x_i * x_j) ⟺ (x'_i * x'_j) and (y_i * y_j) ⟺ (y'_i * y'_j) for all * in <,=,>. If m_i and m_j do not conflict, they are considered compatible. A set of movements is compatible if every pair within the set is compatible. We aim to find a sequence of parallel and compatible movements (not necessarily all from M) that move each (x_i,y_i) to (x'_i,y'_i) efficiently, minimising the total movement time. Several existing routing algorithms can be used for this purpose, such as those in <cit.>. For ease of comparison, we adapt the routing algorithm from Tan et al. <cit.> for our needs. We construct a conflict graph for M, where each node represents a movement in M, and two nodes are connected if their movements are incompatible. This reduces the problem to finding a maximum independent set (MIS) in the conflict graph, as described in <cit.>. 
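To make the compatibility condition concrete, the sketch below (an assumed helper of our own, not code from DasAtom or Enola) builds the conflict graph for a set of target movements; any independent set of nodes in this graph is a batch of movements that can be executed in parallel.

import itertools
import networkx as nx

def _order(a, b):
    """Return -1, 0, or +1 according to whether a <, =, > b."""
    return (a > b) - (a < b)

def compatible(mi, mj):
    """Movements m_i = (x, y, x', y') are compatible iff the relative order
    of the two atoms is preserved in both coordinates."""
    (xi, yi, xpi, ypi), (xj, yj, xpj, ypj) = mi, mj
    return (_order(xi, xj) == _order(xpi, xpj)
            and _order(yi, yj) == _order(ypi, ypj))

def conflict_graph(movements):
    g = nx.Graph()
    g.add_nodes_from(range(len(movements)))
    for i, j in itertools.combinations(range(len(movements)), 2):
        if not compatible(movements[i], movements[j]):
            g.add_edge(i, j)
    return g

# Example: the first two movements swap their x-order and hence conflict.
moves = [(0, 0, 1, 0), (1, 0, 0, 0), (2, 2, 2, 1)]
print(sorted(conflict_graph(moves).edges()))   # [(0, 1)]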
Instead of moving two qubits close together, we move each atom from its current position (as specified by the old mapping f) to its next position as specified by the new mapping f', meaning no dual movements exist in our conflict graph. Since the MIS problem is NP-hard, we use the same greedy algorithms implemented in Enola to approximate a solution. § EVALUATION We implemented our proposed algorithm in Python. Note that we adapted the routing algorithm introduced in <cit.> for transitioning between two consecutive mappings. All experiments for DasAtom and Enola, which were programmed in Python, were conducted on a MacBook Pro featuring a 2.3 GHz Intel Core i5 processor and 16 GB memory, while all experiments for Tetris, which was programmed in C++, were conducted on an Ubuntu 20.04 server with 40 cores of Intel Xeon Gold 5215 @ 2.50GHz and 512 GB of RAM. Our source code will be made publicly available on the authors' GitHub repository. §.§ Parameters setting In our experiments, we assume the following NA hardware parameters. * Atom distance d: 3 μm * CZ gate duration: 0.2 μs * CZ gate fidelity: 0.995 * Coherence time T_2: 1.5s * Moving speed: 0.55μm/μs * Trap swapping duration: 20μs * Atom transfer fidelity: 1 * Coupling graph size: √(n)×√(n) §.§.§ Comparing different interaction and restriction radii We begin by comparing the performance of DasAtom with various interaction radii. As can be observed from Fig. <ref>, setting R_int=2d provides slightly better results than other configurations. In the following, we adopt the settings below, which are also suggested in <cit.>: * R_int = 2d, R_restr = 4d. Note that a larger restriction radius always has a negative impact on performance. §.§.§ Assumptions for Tetris and Enola For both Tetris and Enola, we assume that the input circuit is decomposed in the same manner, with all single-qubit gates removed before applying the algorithms. For Tetris, we assume that CZ gates are executed by individually applying the Rydberg laser, while for Enola, CZ gates are executed using a global Rydberg laser without exploiting gate commutativity. Consequently, the atom distance is set to d=3μm for Tetris and d=7.5μm for Enola. It is worth noting that Tetris adopts a variable restriction area, which depends on the inter-qubit distance of the gate. §.§ Fidelity For ease of comparison among algorithms across different quantum computing platforms, and following <cit.>, we use the approximate success probability as a proxy for circuit fidelity. Let {g_i | 0≤ i≤ m} be the set of gates in the transformed circuit, and F_g_i the fidelity of gate g_i. The approximate success probability of the compiled circuit is P(C) = exp(-T_idle/T_2) ∏_i=0^m F_g_i, where T_2 is the atom dephasing time, T_idle=nT-∑_i=0^m t(g_i), n is the number of qubits in the circuit, T is the total circuit execution time, and t(g_i) is the execution time of gate g_i. Given our hardware parameter settings (cf. Section <ref>), the cost of inserting a SWAP gate can be approximated by F_g^3=0.995^3=0.985, where F_g is the fidelity of a CZ gate. In contrast, the cost of moving an atom at a speed of 0.55μm/μs over a distance of x μm can be roughly calculated as exp(-(2× T_trans+x/0.55)/T_2), where 2 · T_trans represents the time taken for the atom to transfer between SLM and AOD traps at both the initial and target positions. A simple calculation shows that the fidelity drops to 0.985 when x≈ 12384 μm. 
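The success-probability proxy and the SWAP-versus-movement comparison reduce to a few lines of arithmetic. The sketch below is our own illustration using the hardware parameters listed above; it reproduces the quoted break-even distance of about 12384 μm.

import math

T2 = 1.5e6       # coherence time T_2 in microseconds (1.5 s)
F_CZ = 0.995     # CZ gate fidelity
V_MOVE = 0.55    # moving speed in micrometres per microsecond
T_TRANS = 20.0   # SLM <-> AOD trap transfer duration in microseconds

def success_probability(gate_fidelities, idle_time_us):
    """P(C) = exp(-T_idle / T_2) * prod_i F_{g_i}."""
    p = math.exp(-idle_time_us / T2)
    for f in gate_fidelities:
        p *= f
    return p

def swap_cost():
    """One SWAP decomposes into three CZ gates."""
    return F_CZ ** 3

def move_cost(distance_um):
    """Dephasing cost of one pick-up, move, and drop-off sequence."""
    return math.exp(-(2 * T_TRANS + distance_um / V_MOVE) / T2)

print(round(swap_cost(), 3))        # 0.985
print(round(move_cost(100.0), 5))   # ~0.99985: a 100 um move is far cheaper
# Break-even distance at which a single move costs as much as one SWAP:
x = (-math.log(swap_cost()) * T2 - 2 * T_TRANS) * V_MOVE
print(round(x))                     # ~12384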
Additionally, if we want to swap the positions of two atoms, the distance can be as large as 6202 μm for the fidelity to remain at 0.985. This illustrates why SWAP gates are costly in terms of fidelity. However, it is important to note that SWAP gates could be 10x or more faster than atom movement. §.§ Overall fidelity comparison with Tetris and Enola §.§.§ Results on benchmark circuits used in <cit.> In <cit.>, the authors evaluated, among others, 33 benchmark circuits from RevLib and IBM Qiskit. After proper decomposition, these circuits include qubit counts ranging from 5 to 16 and up to 3,089 CZ gates. We empirically evaluated the performance of the three algorithms on these benchmarks, and the results are summarised in Table <ref>. During compilation, we assume that each SWAP gate inserted by Tetris is decomposed into three CZ gates and several single-qubit gates. It is important to note that, in our calculation of Tetris's overall fidelity, we completely disregard the presence of additional single-qubit gates in the compiled circuit. As a result, the calculated fidelity is slightly higher than the actual value. Recall that Enola operates in both static and dynamic modes. For these benchmark circuits, the two modes produce similar overall fidelity, with the dynamic mode performing slightly better. However, this improvement comes at the cost of a significantly increased runtime. So we set a time out of 3600 seconds if the dynamic method does not complete within this time frame, which is the case for all circuits with CZ depth ≥ 30. From Table <ref>, we observe the following: * DasAtom consistently outperforms Tetris and Enola in both fidelity and runtime. For the 15-qubit `square_root_7' circuit, DasAtom achieves a fidelity that is 27x higher than Tetris's, while Tetris's runtime is 178x longer than DasAtom's. For the 16-qubit `ising_model_16', the fidelity of DasAtom is 1.6x of that of Enola (dynamic), with Enola's runtime being 17,490x longer. * The fidelity of Tetris (Enola) deteriorates rapidly with the increasing number of SWAP gates (CZ-depth). * The runtime of Enola (dynamic) is highly related to the CZ-depth of the circuit. It takes around one hour to transpile a circuit with CZ-depth 30 and becomes extremely slow for circuits with CZ-depth ≥ 300. Indeed, it takes (dynamic) Enola 68 hours to finish the 6-qubit `sf_276' circuit, which has CZ depth 300. §.§.§ Results on practical quantum circuits with varying qubit number and topologies The benchmark circuits evaluated above involve a relatively small number of qubits. To assess the scalability of our algorithm, we also evaluated the performance of the three algorithms on larger circuits, extracted from MQTBench[https://www.cda.cit.tum.de/mqtbench/] and <cit.>. These circuits include DJ, GHZ, QFT, W-state, QV, two-local random circuits, and 3-regular MaxCut QAOA circuits, which have very different interaction graphs. Since Enola (dynamic) performs exponentially better than Enola (static) as the number of qubits increases, we focus solely on Enola (dynamic) in these experiments. Table <ref> summarises circuit information and performance details of the three algorithms on seven 20-qubit circuits. From the table we can see that DasAtom consistently outperforms Tetris and Enola in overall fidelity. In addition, the fidelity of DasAtom has strong relation with the number of CZ gates in the circuit. QFT circuits These circuits have complete interaction graphs, see Fig. <ref>(b). In Fig. 
<ref>(a), the y-axis denotes the ratio of DasAtom's fidelity compared to that of Tetris and Enola, plotted on a logarithmic scale. The results clearly demonstrate that the ratio increases exponentially with the number of qubits, ranging from 5 to 50. In terms of runtime, Fig. <ref>(c) shows that Enola and Tetris are three and two orders of magnitude slower than DasAtom, respectively. Two-local random circuits These circuits have complete interaction graphs. Fig. <ref> shows that the performance ratios of DasAtom's fidelity against that of Tetris and Enola also increase exponentially with the number of qubits, ranging from 5 to 30. Note that here we only show results of circuits with up to 30 qubits, as Enola is very slow. Quantum Volume circuits These circuits have (nearly) complete interaction graphs. Fig. <ref> shows that the performance ratios of DasAtom's fidelity against that of Tetris and Enola also increase exponentially with the number of qubits. 3-Regular MaxCut QAOA circuits These circuits have 3-regular interaction graphs. Fig. <ref> shows that the fidelity decreases as the number of qubits increases for all three algorithms, and DasAtom significantly outperforms the other two. Deutsch-Jozsa circuits These circuits have star-like interaction graphs. Fig. <ref> shows how the fidelity decreases as the number of qubits increases for the three algorithms. DasAtom outperforms both Tetris and Enola by a large margin. GHZ circuits and W-state circuits These circuits have linear interaction graphs. Figs. <ref> and <ref> show how the fidelity decreases as the number of qubits increases for the three algorithms. DasAtom outperforms both Tetris and Enola by a large margin. § FURTHER DISCUSSIONS The evaluation above demonstrates that the DAC scheme, despite its simplicity, is highly effective. This effectiveness can be attributed to two key factors: first, long-range interaction results in dense qubit connections; second, atom shuttling is significantly less costly than SWAP gate insertion (cf. Section <ref>). However, it is important to note that our implementation is not yet fully optimised. Specifically, the mappings for each subcircuit could be designed to enable smoother transitions, making the process more efficient. For example, the mapping of the second part of QFT-5 shown in Fig. <ref>(c) is not optimal. A better approach would be to place q_4 at (0,2) and q_1 at (1,0), requiring only an exchange between the positions of q_0 and q_4. Although our algorithm relies on subgraph isomorphism checks, which are theoretically NP-hard, this has not proven to be a significant obstacle in most cases. By leveraging the Rustworkx implementation of VF2, we successfully compiled the 500-qubit QFT in about one hour. However, this implementation does not perform as well on certain circuits, like QV, especially when the number of qubits exceeds 25. To handle circuits with thousands of qubits, more efficient (though potentially approximate) subgraph isomorphism algorithms will be necessary. Given that our target architecture is grid-based, this should not pose a major challenge. However, as shown in the fidelity ratio curves in Fig. <ref>(a), the primary concern for compilation on NA hardware should be low fidelity rather than scalability. Furthermore, in our implementation of DasAtom, we adopted Enola's atom movement routing algorithm, which requires dropping the atom to the SLM layer to execute two-qubit gates. 
However, this step is not strictly necessary, as two-qubit gates can be executed between qubits across the SLM and AOD layers <cit.>, potentially reducing the number of atom transfers and movements required and hence further improving the overall fidelity. § CONCLUSION In this paper, we introduced DasAtom, a novel algorithm designed to optimise the execution of quantum circuits on neutral atom platforms by exploiting long-range interactions and atom shuttling. Unlike previous methods such as SWAP-based Tetris and atom move-based Enola, DasAtom achieves significant fidelity improvements through partitioning circuits into subcircuits and dynamically adjusting qubit mappings. Our results demonstrate that for a 30-qubit QFT, DasAtom outperforms Enola by 414x and Tetris by 10.6x, illustrating the substantial gains in fidelity that can be achieved with this method. The ability of DasAtom to scale its performance with the increasing number of qubits highlights its potential as a crucial tool in the future of neutral atom quantum computing. As quantum circuits grow in complexity and size, the exponential improvement in fidelity offered by DasAtom will become increasingly valuable. This work not only provides a pathway to more efficient quantum circuit execution on neutral atom platforms but also sets the stage for further research and development in the field. Future work will focus on expanding the capabilities of DasAtom to handle even larger and more complex quantum circuits. IEEEtran
http://arxiv.org/abs/2409.02498v1
20240904074842
Noise-induced order in high dimensions
[ "Huayan Chen", "Yuzuru Sato" ]
nlin.CD
[ "nlin.CD", "math.DS" ]
Department of Mathematics, Hokkaido University, N12 W7 Kita-ku, Sapporo, 0600812 Hokkaido, Japan [email protected] Department of Mathematics, Hokkaido University, N12 W7 Kita-ku, Sapporo, 0600812 Hokkaido, Japan RIES, Hokkaido University, N12 W7 Kita-ku, Sapporo, 0600812 Hokkaido, Japan London Mathematical Laboratory, 14 Buckingham Street, London WC2N 6DF, United Kingdom § ABSTRACT Noise-induced phenomena in high-dimensional dynamical systems are investigated from the point of view of random dynamical systems theory. In a class of generalized Hénon maps, which are randomly perturbed delayed logistic maps, with monotonically increasing noise levels, we observed (i) an increase in the number of positive Lyapunov exponents from 4 to 5, together with the emergence of characteristic periods, and (ii) a decrease in the number of positive Lyapunov exponents from 4 to 3, together with an increase in Kolmogorov–Sinai entropy. Our results imply that simple concepts of noise-induced phenomena, such as noise-induced chaos and/or noise-induced order, may not describe their analogues in high-dimensional dynamical systems, owing to the coexistence of noise-induced chaos and noise-induced order. Noise-induced phenomena are caused by interactions between deterministic dynamics and external noise. When a transition occurs owing to small noise, the stationary distribution of the deterministic dynamical system is substantially altered, and the unobservable structure of the original dynamics becomes observable. In such cases, nonlinear phenomena, which qualitatively differ from the deterministic dynamics, emerge in the noised dynamics. Typical examples of such noise-induced phenomena include stochastic resonance <cit.>, noise-induced synchronization <cit.>, noise-induced chaos (NIC) <cit.> and noise-induced order (NIO) <cit.>. These low-dimensional noise-induced phenomena can be characterized by the Lyapunov spectrum, Kolmogorov–Sinai entropy, and power spectra. In NIC, small noise from a periodic attractor induces chaotic behavior, resulting in the top Lyapunov exponent becoming positive, increasing the Kolmogorov–Sinai entropy, and causing the power spectrum to become broadband <cit.>. In contrast, in NIO, small noise from a chaotic attractor induces ordered behavior, resulting in the top Lyapunov exponent becoming negative, reducing the Kolmogorov–Sinai entropy, and causing characteristic periods to emerge. Despite the importance of noise-induced transitions in scientific applications, explicit studies on noise-induced phenomena in high-dimensional dynamical systems are not well developed, except for a few works <cit.>. Tractable models for high-dimensional dynamical systems have been developed for coupled dynamical systems <cit.> and delayed dynamics <cit.>. Rössler studied high-dimensional chaotic attractors that have more than one expanding direction <cit.>, which can be characterized by the number of positive Lyapunov exponents, introduced here as κ. In low-dimensional NIC, the number of positive Lyapunov exponents, that is, the number of expanding directions on average, changes from κ=0 to κ=1, and in low-dimensional NIO, from κ=1 to κ=0. 
Indeed, we found multiple noise-induced transitions in random generalized Hénon maps, a class of high-dimensional dynamical systems <cit.>: as the additive noise level increases monotonically, κ changes from 4 to 3, back to 4, and then to 5. The generalized Hénon map is a delayed logistic map with an (N-1)-step time delay, given by 𝐱_t+1 = 𝐅(𝐱_t),  𝐅(𝐱_t) = ( a-(x_t^(N-1))^2-bx_t^(N),  x_t^(1), …, x_t^(N-1) )^T, where 𝐱_t=(x^(1)_t,x^(2)_t,…,x^(N)_t), t = 0, 1, 2, … is the iteration time, and a,b are the parameters. The standard Hénon map is recovered with (a,b)=(1.4,-0.3) and N = 2 <cit.>. Generalized Hénon maps may have up to N-1 positive Lyapunov exponents and show chaotic behavior <cit.> (see supplemental materials on the bifurcations in generalized Hénon maps). We investigated NIC with increasing κ, and NIO with decreasing κ, using the recently developed random dynamical systems theory. A random generalized Hénon map is given by 𝐱_t+1=𝐅(𝐱_t)+εΞ_t,  Ξ_t=(ξ_t,0,…,0)^T, where ξ_t ∈ [-1, 1] follows an i.i.d. uniform distribution, and ε is the noise-level parameter. We adopt the parameters N = 6, a = 1.51, and b = 0.08, and use ε∈ [0, 0.1] as a control parameter in this paper. We present a phase diagram of the random generalized Hénon map with parameters (a,b)∈ [1.45, 1.55]× [0.075, 0.085] by computing the Lyapunov spectra and observing the number of positive Lyapunov exponents κ for the noise levels ε=0, 0.02, 0.08. In the case of (a,b)=(1.51, 0.08), the noiseless map with ε=0 has 4 positive Lyapunov exponents. For ε=0.02, we have κ = 3, and for ε=0.08, we have κ= 5. Because the Lyapunov exponents of random dynamical systems are typically continuous functions of the noise level, we initially observe decreasing κ and then increasing κ as the noise level increases. Such multiple transitions have been reported in a class of one-dimensional maps, and their existence has been rigorously established by computer-assisted proofs <cit.>. Positive measure regions were observed near (a,b)=(1.51,0.08) in the phase diagram FIG. <ref> for all noise ranges, confirming that the phenomenon of changing κ is robustly observed. In Fig. <ref>(a), the Lyapunov spectra (λ_1, λ_2, …,λ_6) are shown. The range of the Lyapunov spectrum is shifted down and then shifted up. In Fig. <ref>(b), the Kolmogorov–Sinai entropy H_KS is plotted as a function of the noise level ε. The Kolmogorov–Sinai entropy decreases first, and then increases. We introduce four different phases, divided by the critical noise levels ε =ε_1, ε_3, ε_4. The critical noise levels are given by ε_1 ≈ 0.007, ε_2 ≈ 0.0392, ε_3 ≈ 0.0456, ε_4 ≈ 0.0658. Phase 0 is the deterministic limit with κ=κ_0=4 and H_KS = h_0 ≈ 0.0369. In Phase 1 (0<ε <ε_1), κ remains at κ_0; however, H_KS < h_0, resulting in weakly ordered behavior. Phase 2 is divided into two sub-phases: 2a (ε_1 < ε <ε_2) and 2b (ε_2 < ε < ε_3). In Phase 2a, κ=3 < κ_0 and H_KS < h_0, showing typical NIO behavior. However, in Phase 2b, H_KS > h_0, implying stronger chaoticity than in Phase 2a, even though κ remains below κ_0. In Phase 3 (ε_3< ε < ε_4), κ = κ_0 and H_KS > h_0, which shows that the chaoticity is strengthened again. Finally, in Phase 4 (ε_4 < ε < ε_5), κ=5 > κ_0 and H_KS > h_0, showing typical NIC behavior. 
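A minimal sketch of how such a Lyapunov spectrum can be estimated numerically is given below. This is our own illustration, not the authors' code: it iterates the random generalized Hénon map for N = 6 and re-orthonormalises the tangent vectors with a QR decomposition at every step; the initial condition is assumed to lie in the basin of the attractor, and the Kolmogorov–Sinai entropy is estimated via Pesin's identity (the sum of the positive exponents), which is our assumption about the estimator.

import numpy as np

N, a, b = 6, 1.51, 0.08

def step(x, noise):
    """One iteration of the (random) generalized Henon map."""
    new = np.empty(N)
    new[0] = a - x[N - 2] ** 2 - b * x[N - 1] + noise
    new[1:] = x[:-1]
    return new

def jacobian(x):
    J = np.zeros((N, N))
    J[0, N - 2] = -2.0 * x[N - 2]
    J[0, N - 1] = -b
    J[1:, :-1] = np.eye(N - 1)
    return J

def lyapunov_spectrum(eps, T=200_000, T_warm=10_000, seed=0):
    rng = np.random.default_rng(seed)
    x = np.full(N, 0.1)            # assumed to lie in the basin of attraction
    Q = np.eye(N)
    sums = np.zeros(N)
    for t in range(T + T_warm):
        Q, R = np.linalg.qr(jacobian(x) @ Q)
        x = step(x, eps * rng.uniform(-1.0, 1.0))
        if t >= T_warm:
            sums += np.log(np.abs(np.diag(R)))
    return sums / T

for eps in (0.0, 0.02, 0.08):
    spec = lyapunov_spectrum(eps)
    h_ks = spec[spec > 0].sum()    # Pesin-identity estimate (our assumption)
    print(eps, int((spec > 0).sum()), np.round(spec, 3), round(h_ks, 4))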
Next, to discuss the attractor geometry and periodicity, we focus on a period-2 unstable periodic orbit (UPO) P_2: { q_1, q_2} and a period-4 UPO P_4: { p_1, p_2, p_3, p_4} in the deterministic generalized Hénon map, where 𝐪_1≈ (-0.257, 1.337, -0.257, 1.337, -0.257, 1.337), 𝐪_2≈ ( 1.337, -0.257, 1.337, -0.257, 1.337, -0.257), and 𝐩_1≈ (-0.456, 1.398, 0.128, 1.191, -0.456, 1.398), 𝐩_2≈ ( 1.398, 0.128, 1.191, -0.456, 1.398, 0.128), 𝐩_3≈ ( 0.128, 1.191, -0.456, 1.398, 0.128, 1.191), 𝐩_4≈ ( 1.191, -0.456, 1.398, 0.128, 1.191, -0.456), which are computed by Newton's method. We expected that, with the addition of noise, these UPOs would attract orbits and enhance the stability and periodicity of the noised dynamics. Fig. <ref>(a) illustrates the power spectrum and the characteristic peak indicated by a red arrow. Fig. <ref>(b) shows attractors projected onto a 3-dimensional sub-space (x^(1),x^(5),x^(6)) (top), and the corresponding invariant densities ρ(x^(1)) projected onto a 1-dimensional sub-space x^(1) (bottom). The UPO P_2 is shown as red points, and its projection is indicated by red dashed lines. In Fig. <ref>(a) (left), no prominent peaks are observed in the power spectrum of the deterministic time series; however, in Fig. <ref>(a) (right), we observe a clear characteristic peak at the frequency f ≈ 1/2, implying that the characteristic period is 2. Fig. <ref>(b) (left) shows a deterministic attractor with κ_0=4 positive Lyapunov exponents. This attractor is a 4-band chaotic attractor, and the UPO P_2 is located outside of it. In contrast, the random attractor presented in Fig. <ref>(b) (right) is a single-band chaotic attractor that includes the UPO P_2. Its invariant density concentrates near P_2, implying that the noised orbits concentrate around the UPO P_2. Simultaneously, the attractor exhibits a global stretching-and-folding structure, which contributes to the increase to κ=5>κ_0. Table 1 summarizes our observations in the numerical experiments for the number of positive Lyapunov exponents κ, the Kolmogorov–Sinai entropy H_KS, the concentration of the invariant density, the peaks in the power spectrum, the number of major clusters (effectively the number of observable bands), and the stretching-and-folding structure. According to these statistics and geometric structures, we expect that there are mixed states with coexistence of NIC and NIO. For instance, with ε = 0.04 in Phase 2, κ decreases from 4 to 3, which indicates stabilization; however, the Kolmogorov–Sinai entropy increases, indicating an increase in the uncertainty of the dynamics. Similarly, in Phase 4, κ increases from 4 to 5, which is also reflected in the emergence of a global stretching-and-folding structure, indicating stronger chaoticity, while the peak at f=1/2 in the power spectrum indicates the newly emerging periodicity of the noised dynamics. Similar phenomena with the UPO P_4 are observed in the case of weak noise (see supplemental materials on the periodicity with the UPO P_4). In these cases, it is thought that UPOs such as P_2 and P_4 attract the noised orbits, peaks in the invariant density emerge, and weak periodicities are observed. 
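The UPO coordinates quoted above can be reproduced with a straightforward Newton iteration on the period-p return map. The sketch below is our own illustration (reusing the step and jacobian helpers from the previous snippet, with zero noise), not the authors' code.

import numpy as np

def newton_upo(x0, p, tol=1e-12, max_iter=100):
    """Find a fixed point of F^p by Newton's method on G(x) = F^p(x) - x."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        y, J = x.copy(), np.eye(len(x))
        for _ in range(p):                 # compose F p times, tracking DF^p
            J = jacobian(y) @ J
            y = step(y, 0.0)
        g = y - x
        if np.linalg.norm(g) < tol:
            return x
        x = x + np.linalg.solve(np.eye(len(x)) - J, g)
    raise RuntimeError("Newton iteration did not converge")

# A rough guess near the orbit should converge to
# q_1 ~ (-0.257, 1.337, -0.257, 1.337, -0.257, 1.337).
q1 = newton_upo([-0.3, 1.3, -0.3, 1.3, -0.3, 1.3], p=2)
print(np.round(q1, 3))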
This paper reports on noise-induced phenomena in 6-dimensional random generalized Hénon maps, which are randomly perturbed delayed logistic maps. At monotonically increasing noise levels, we observed (i) an increase in the number of positive Lyapunov exponents from 4 to 5 and, at the same time, the emergence of characteristic periods, and (ii) a decrease in the number of positive Lyapunov exponents from 4 to 3 and, at the same time, an increase in the Kolmogorov–Sinai entropy. Our results imply that simple concepts of noise-induced phenomena, such as NIC and NIO, may not describe their analogues in high dimensions, owing to mixed states in which NIC and NIO coexist. In the presented examples of random generalized Hénon maps, NIC with enhanced regularity and NIO with enhanced uncertainty are observed. A further random dynamical systems analysis of these heterogeneous noise-induced transitions will be presented elsewhere. Noise-induced phenomena in continuous flows, which are described by stochastic differential equations (see supplemental materials on NIC in extended Rössler systems), are a promising direction for future work. Y.S. was supported by JSPS Grant-in-Aid for Scientific Research (B), JP No. 21H01002.
http://arxiv.org/abs/2409.02535v1
20240904085420
Spatial polarization gating of high-harmonic generation in solids
[ "Pieter J. van Essen", "Brian de Keijzer", "Tanya van Horen", "Eduardo B. Molinero", "Álvaro Jiménez Galán", "Rui E. F. Silva", "Peter M. Kraus" ]
physics.optics
[ "physics.optics" ]
APS/123-QED [email protected] Advanced Research Center for Nanolithography, Science Park 106, 1098 XG, Amsterdam, The Netherlands Advanced Research Center for Nanolithography, Science Park 106, 1098 XG, Amsterdam, The Netherlands Advanced Research Center for Nanolithography, Science Park 106, 1098 XG, Amsterdam, The Netherlands Instituto de Ciencia de Materiales de Madrid, Consejo Superior de Investigaciones Científicas (ICMM-CSIC), E-28049 Madrid, Spain Instituto de Ciencia de Materiales de Madrid, Consejo Superior de Investigaciones Científicas (ICMM-CSIC), E-28049 Madrid, Spain Max-Born-Institute for Nonlinear Optics and Short Pulse Spectroscopy, Max-Born-Strasse 2A, D-12489 Berlin, Germany Instituto de Ciencia de Materiales de Madrid, Consejo Superior de Investigaciones Científicas (ICMM-CSIC), E-28049 Madrid, Spain Max-Born-Institute for Nonlinear Optics and Short Pulse Spectroscopy, Max-Born-Strasse 2A, D-12489 Berlin, Germany [email protected] Advanced Research Center for Nanolithography, Science Park 106, 1098 XG, Amsterdam, The Netherlands Department of Physics and Astronomy, and LaserLaB, Vrije Universiteit, De Boelelaan 1105, 1081 HV Amsterdam, The Netherlands § ABSTRACT High-harmonic generation from solids can be utilized as probe of ultrafast dynamics, but thus far only over extended sample areas, since its spatial resolution is diffraction-limited. Here we propose spatial polarization gating, that is using a spatially varying ellipticity of a driving laser pulse to reduce the spatial profile of high-harmonic emission below the diffraction limit and hence increase spatial resolution. We show experimentally and by numerical simulations that our method is generally applicable as suppressing high harmonics in elliptical fields is a common response in all solids. We also briefly explore the possibility of applying this technique to widefield imaging, specifically to nonlinear structured illumination microscopy. Our findings indicate that spatial polarization gating can enable all-optical femto-to-attosecond label-free imaging beyond the Abbe limit. Spatial polarization gating of high-harmonic generation in solids Peter M. Kraus September 9, 2024 ================================================================= A plethora of recent works have demonstrated high-harmonic generation (HHG) as a highly sensitive probe for ultra-fast phenomena in solids <cit.>, such as the detection of electron dynamics <cit.>, coherent phonons <cit.>, and material phase transitions <cit.> with in principle sub-cycle (i.e. few to sub-fs) temporal resolution <cit.>. Although recent studies have mostly focused on homogeneous samples there is an exciting prospect in spatially resolving these dynamics in systems such as semiconductor devices, nanophotonic metasurfaces, and microscopic flakes. Complicating the imaging of these devices is the fact that their feature size can be well below the wavelength of visible light. The diffraction-limited nature of light in combination with the relatively long wavelengths required for HHG in solids means that simply combining HHG with conventional microscopy will not yield the required resolution for imaging these microscopic systems. To overcome the diffraction limit, super-resolution imaging techniques have been developed, which are widely used in bio(medical)-imaging <cit.>. 
Examples of super-resolution techniques include stimulated emission depletion microscopy (STED) <cit.>, photoactivated localization microscopy (PALM) <cit.>, and stochastic optical reconstruction microscopy (STORM) <cit.>. These techniques rely on specific properties of fluorescence; as such, they cannot simply be applied to HHG imaging. Recently, the first super-resolution imaging technique based on the properties of HHG from solids was demonstrated <cit.>. This harmonic depletion microscopy (HADES) shares similarities with STED, as HHG emission is inhibited by an orbital-angular-momentum-carrying pre-excitation pulse. Here we will exploit the strong dependence of the HHG process on the specific properties of the driving field to spatially confine HHG below the diffraction limit in a material-independent and thus generally applicable way. We propose to obtain a high spatial resolution for HHG imaging using ellipticity to control emission via spatial polarization gating (SPG). SPG is inspired by attosecond (temporal) polarization gating <cit.>, the first technique that was proposed to generate isolated attosecond pulses in the extreme-ultraviolet spectral range via HHG from gasses. In polarization gating, a half-cycle overlap of two counter-rotating circularly polarized femtosecond pulses facilitates temporally confining HHG and thus enables the generation of isolated attosecond pulses <cit.>. Spatial confinement of third-harmonic generation using a driver with spatially varying ellipticity has been demonstrated <cit.>. Here we show the general applicability of SPG to HHG in solids, provide a simple framework that allows us to predict the spatial emission profile, and propose an extension of SPG to widefield imaging. We measured the ellipticity response of HHG from ZnO, silicon, and sapphire (for details see <cit.>). In all cases, HHG suppression can be described by a Gaussian, matching the theoretical results of Ref. <cit.> and qualitatively consistent with HHG in gasses <cit.>, I(ϵ)/I(0) = S(ϵ) = e^-(1/2)(ϵ/ϵ_0)^2. Here ϵ is the ellipticity (for a detailed definition see <cit.>), and ϵ_0 specifies the efficiency of suppression. ϵ_0 is smaller for higher orders (ϵ_0 (H3) = 0.32±0.02; ϵ_0 (H5) = 0.23 ± 0.02) but shows little to no intensity and material dependence. This shows that the field characteristics are the dominant factor in the suppression. A simple analytical model to predict the harmonic emission considers the harmonic intensity projected along the polarization direction r̂, I_n,r̂(ϵ) = |𝐄_n·r̂|^2 ∝ S(ϵ) · (|𝐄_0·r̂|^2/|𝐄_0|^2) · |𝐄_0|^2n. Here n denotes the harmonic order, and 𝐄_0 is the incoming electric field. The first factor is the suppression factor S(ϵ). The second factor results from the assumption that the polarization of the HHG emission is the same as that of the incoming light; this assumes perfect momentum conservation between the incoming and emitted light and is typically realized for amorphous and polycrystalline samples. The third factor describes the intensity scaling of the harmonic with the incoming intensity, which we treat by perturbation theory. This assumption will fail for higher intensities and orders, where non-perturbative (and typically lower) intensity scalings are observed <cit.>. For the experiments in this letter, these assumptions were found to hold. To support the generality of the observations discussed, we have performed simulations of the ZnO ellipticity response by solving the semiconductor Bloch equations (SBE) <cit.>. 
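Before turning to those simulations, we note that the analytical model itself reduces to a few lines. The sketch below is our own illustration (the fitted ε_0 values are those quoted above; passing the ellipticity separately from the field vector is a simplification made for clarity).

import numpy as np

EPS0 = {3: 0.32, 5: 0.23}   # fitted suppression widths for H3 and H5

def suppression(eps, order):
    """S(eps) = exp(-(1/2) (eps / eps_0)^2)."""
    return np.exp(-0.5 * (eps / EPS0[order]) ** 2)

def harmonic_intensity(e0, eps, order, r_hat):
    """I_{n,r} ~ S(eps) * (|E0.r|^2 / |E0|^2) * |E0|^(2n), in arbitrary units."""
    e0, r_hat = np.asarray(e0, float), np.asarray(r_hat, float)
    i0 = np.dot(e0, e0)                    # |E0|^2
    projection = np.dot(e0, r_hat) ** 2 / i0
    return suppression(eps, order) * projection * i0 ** order

# Equal-strength drivers: a linear one and a strongly elliptical one.
print(harmonic_intensity([1.0, 0.0], eps=0.0, order=5, r_hat=[1.0, 0.0]))
print(harmonic_intensity([1.0, 0.0], eps=0.5, order=5, r_hat=[1.0, 0.0]))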
We have used a simple tight-binding model within the simulations to describe the ZnO (for details see <cit.>). Here, the ellipticity is varied by delaying two equally intense 2000 nm pulses with linear and orthogonal polarizations. For these scans, we isolate emission with a polarizer that is aligned diagonally with the s and p directions. The simulation results are shown in Fig. <ref>a where we observe periodic suppression of all the harmonics orders. As the delay is scanned over the length of one fundamental wavelength, the driver ellipticity switches from diagonal linear (maximum HHG) to right-handed circular (full HHG suppression) to anti-diagonal linear (maximum HHG, linear polarization of driver rotated by 90^∘ with respect to diagonal) to left-handed circular (full HHG suppression) and back to diagonal linear. As we filter out the anti-diagonal emission this leaves us with emission maxima when the delay is a multiple of the wavelength, which is what we observe in our simulation results. For longer delays partial pulse overlap results in both reduced HHG emission for linear and reduced suppression for circular polarization. We compare our SBE simulations with measurements of ZnO, where the ellipticity has similarly been varied by delaying two orthogonally polarized pulses. We observe the same behavior as in the SBE simulation in our measurements in Fig. <ref>b: strong enhancement and suppression with a periodicity matching that of the driving wavelength. The pulse duration in the experiments is longer than the one used in the SBE simulation, resulting in less intensity change over the delay scan and spectrally narrower HHG emission. The lines observed in the H7 data can be attributed to the broadband bandgap fluorescence found in ZnO <cit.>, the underlying multiphoton excitation of which also exhibits a strong ellipticity dependence. Similar to the SBE simulation, we note a spectral tilt in the harmonic emission maxima, which is due to the difference in phase between the different spectral components in the driving pulse for a given delay. Since this depends on the phases of the driver, the spectral tilt is found consistently between all the harmonics, as illustrated via the dashed white lines which all have the same slope. To better compare the SBE simulation and measurement, their integrated harmonic intensity is shown in Fig. <ref>c, together with the analytical model (for H3-5) using equation <ref>. We observe good agreement between SBE simulation, measurement, and analytical model. The different periodicity between SBE simulation and measurement are caused by a slightly longer than 2000 nm experimental wavelength; this can be deduced from the central wavelength of the harmonics shown in figure <ref>b. The fact that our simulations can reproduce our experimental results while using a tight-binding model instead of specific material properties indicates that even in solids the decreased HHG efficiency for increased ellipticity is a general effect that is strongly governed by the properties of the driving field. So far we have considered HHG by drivers with uniform ellipticity, now we will move to SPG, where we consider beams with spatially varying ellipticity. We make use of the setup as shown in Fig. <ref>a. An incoming beam is separated into two arms in one of which the polarization is rotated using a half-wave plate (HWP). 
The relative spatial profile of the beams in the two arms can now be altered with the use of phase plates, which, upon recombining the beams, enables the creation of beams with spatially varying ellipticity. In our calculations and experiments, we have focused on propagation-stable excitation profiles, in particular, Hermite-Gaussian (HG_mn, Fig. <ref>b-d) and Laguerre-Gaussian (LG_lp, Fig. <ref>e-g) modes <cit.>. HG profiles have rectangular symmetry and a phase profile consisting of discrete steps, while the LG profiles have radial symmetry with a phase profile that continuously varies in the azimuthal direction. To evaluate the spatial HHG emission profiles, we use the analytical model described by equation <ref> with the Gaussian suppression and the experimentally found parameters for H3 and H5. Examples of calculated SPG emission profiles are shown in Fig. <ref>d,g. While Fig. <ref>d shows a narrow HHG emission spot, Fig. <ref>g illustrates the possibility of creating more complex emission patterns. These calculations clearly show that the HHG emission is minimal at the positions of maximal ellipticity. To illustrate the capabilities of SPG for imaging, we demonstrate a spot size reduction below that of conventional HHG emission from ZnO using an 1800-nm driver. For this, we will focus on the H3 emission from the combination of an s-polarized fundamental Gaussian profile with a p-polarized HG_01 profile, as shown in Fig. <ref>a. The HG_01 profile is obtained by using a phase step plate with the π phase jump centered on the optical axis. The alignment of the two beams was optimized to ensure overlap and parallel propagation after recombination. Similar to the delay scans discussed before, the phase of the two beams was scanned using a delay stage; however, instead of recording the emission spectrum, the HHG emission profile was recorded using a camera. Additionally, the polarizer before the detector was rotated to be parallel with the s-polarization. We used the analytical model to evaluate the expected H3 emission spot size, which is shown in Fig. <ref>b. We use two different metrics to evaluate the spot size: the full width at half maximum (FWHM) and the image moment (for details see <cit.>). The relative intensity is the intensity of the p component of the driver compared to the s component. We look at the two extremes of the two fields being in-phase (blue) and λ/4 out-of-phase (red). The in-phase spot size will simply increase with the relative intensity, as the effective field strength at the edges of the spot is increased. For the out-of-phase case, we see that a reduction of the spot size can be achieved, which is caused by the edges of the spot becoming elliptically polarized. Comparing the image moment (solid line) and FWHM (dashed line), we see that for the image moment, the spot size reduction only happens for a range of relative intensities, while for the FWHM, the spot size keeps decreasing for much longer until it shoots up drastically. For higher relative intensities, the ellipticity at the edge will be reduced, resulting in the formation of side peaks. These side peaks cause the HHG emission to be spread over a larger spatial region; however, the central peak still becomes narrower. When the intensity of the side peaks first exceeds half the intensity of the main peak, the FWHM spot size increases abruptly. Figure <ref>(c)-(h) shows the H3 emission spots for a relative intensity of 1, as calculated (c)-(e) and as measured (f)-(h).
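The calculated panels and spot-size curves above amount to evaluating the analytical model on a spatial grid. The sketch below is our own simplified illustration: the beam waist, the grid, and the use of the quadrature rule ε = min(|E_s|,|E_p|)/max(|E_s|,|E_p|) for the local ellipticity of two orthogonal fields that are λ/4 out of phase are assumptions made for illustration.

import numpy as np

w0, n, eps0 = 1.0, 3, 0.32                  # waist, harmonic order, H3 suppression width
y = np.linspace(-2.5, 2.5, 1001)
E_s = np.exp(-y ** 2 / w0 ** 2)                            # s-polarised Gaussian (field)
E_p = np.sqrt(2.0) * (y / w0) * np.exp(-y ** 2 / w0 ** 2)  # p-polarised HG01 (field)

eps = np.minimum(np.abs(E_s), np.abs(E_p)) / np.maximum(np.abs(E_s), np.abs(E_p))
S = np.exp(-0.5 * (eps / eps0) ** 2)          # Gaussian ellipticity suppression
I_tot = E_s ** 2 + E_p ** 2
I_h3_s = S * (E_s ** 2 / I_tot) * I_tot ** n  # s-projected H3 emission profile

def fwhm(x, profile):
    above = x[profile >= 0.5 * profile.max()]
    return above[-1] - above[0]

print("Gaussian-only H3 FWHM:", round(fwhm(y, E_s ** (2 * n)), 3))
print("SPG (out-of-phase) H3 FWHM:", round(fwhm(y, I_h3_s), 3))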
In Fig. <ref>c and e, the emission from only the Gaussian s-polarized driver is shown, which corresponds to conventional Gaussian emission profiles. The spot size of the calculation was chosen to be in good agreement with the measurement. Good agreement is found between the calculations and experimental results for both the out-of-phase and in-phase spots. The out-of-phase spots are considerably narrower in the y-direction, showing the increased spatial confinement of HHG emission. The in-phase spots instead show greatly increased spot sizes. The maximum experimental spot size reduction found, evaluated via the image moment, was 30%, close to the predicted minimum. This demonstrates that our analytical model can predict the spatial HHG emission and that SPG enables localization of emission beyond that of conventional solid HHG emission. The spot size reduction demonstrated here is a noticeable improvement; however, larger improvements are desirable. Inherent to SPG is the need to balance the amplitude and phase of the electric field to achieve maximum ellipticity. To achieve sharp jumps in ellipticity, either sharp phase or amplitude jumps should be present, both of which are diffraction-limited. This means that the local confinement of HHG emission with SPG is still diffraction-limited. It is possible to make use of higher-order harmonics to improve the resolution further (see <cit.>). However, this does not remove the inherent limitation. To fully utilize SPG, we will have to move beyond point scanning and instead consider widefield techniques, such as structured illumination microscopy (SIM) <cit.>. The key here is the observation that although the spatial confinement of HHG emission by SPG is limited, SPG does enable the creation of very sharp, predictable emission features. Conventional SIM uses frequency components in spatially structured illumination patterns to encode information about higher spatial frequencies of the sample into lower spatial frequency components (for details see <cit.>). Conventional SIM can increase the imaging resolution to twice the diffraction limit but is essentially still a diffraction-limited imaging method. However, it is possible to enhance the resolution of SIM beyond this by using non-linear processes that introduce higher spatial frequency components into the illumination patterns. One way of achieving this is by using the saturating intensity scaling of fluorescence or harmonic generation, which is then referred to as saturated structured illumination microscopy (SSIM) <cit.>. By using SPG to create the illumination patterns in SIM (SPG-SIM), we can instead use the non-linear ellipticity scaling to introduce higher spatial frequencies. SPG-SIM enables super-resolution imaging without needing the high intensities required to reach the saturating regime; alternatively, SPG-SIM can be used in conjunction with SSIM. To illustrate the super-resolution capabilities of SPG-SIM, Fig. <ref> shows the calculated H5 emission of a 2000 nm driver from, respectively, a conventional interference-grating pattern and an SPG ellipticity-grating pattern. Both gratings are created by the overlap of two 2000 nm pulses, which have an angle between them (NA=0.125). The difference between the two gratings is that for the conventional grating, the beams have the same polarization, while for the ellipticity grating they have orthogonal linear polarizations.
In SPG, the change from linear to circular happens within a half-cycle of the field, instead of a full cycle, this difference is reflected in Fig. <ref> where the SPG emission profile has twice the number of peaks. The Fourier-transformed emission patterns are shown in Fig. <ref>b. The interference grating shows a sharp cut-off at the diffraction limit of H5 as expected. The SPG pattern instead has spectral components exceeding this sharp cut-off indicating its capabilities to be used for super-resolution imaging. Realistic resolution improvements are difficult to deduce from these calculations as these will now depend on the noise in the imaging system. If we consider that any amplitude contributions above 10^-5 can be detected then SPG-SIM doubles the effective spatial frequencies in the illumination pattern which increases the maximum spatial frequency that can be detected from twice the diffraction for conventional SIM to three times the diffraction limit (for details see <cit.>). For this low NA example, the diffraction limit of H5 is 1600 nm making the resolution limit of conventional SIM 800 nm, while the resolution of SPG-SIM goes up to about 530 nm. This relative increase stays consistent for a different NA, i.e. at an NA of 1, a resolution of 67 nm is possible. These calculations support the possibility of using SPG to enable wide-field super-resolution imaging techniques for solids. In conclusion, we introduced SPG for confining HHG below the diffraction limit, which can find application for high-resolution HHG imaging. Important for SPG is the common ellipticity response in solids where strong suppression of HHG emission is observed, qualitatively matching the atomic response. This ellipticity response was here qualitatively reproduced with SBE simulations without requiring detailed modeling of our material, indicating a dominance of the field characteristics. Spatial confinement of HHG by SPG can be predicted using a simple analytical framework. We demonstrated an H3 spot size reduction of about 30%, closely matching our calculated optimum. Exciting opportunities lie in applying SPG to higher-order harmonics and combining it with structured illumination. With these next steps, SPG can pave the way for high temporal and spatial resolution imaging in solids. § ACKNOWLEDGMENTS This work has been carried out at the Advanced Research Center for Nanolithography (ARCNL), a public-private partnership of the University of Amsterdam (UvA), the Vrije Universiteit Amsterdam (VU), the Netherlands Organisation for Scientific Research (NWO), and the semiconductor equipment manufacturer ASML, and was partly financed by ‘Toeslag voor Topconsortia voor Kennis en Innovatie (TKI)’ from the Dutch Ministry of Economic Affairs and Climate Policy. This manuscript is part of a project that has received funding from the European Research Council (ERC) under the European Union’s Horizon Europe research and innovation programme (grant agreement no. 101041819, ERC Starting Grant ANACONDA) and funded P.J.v.E. and partly P.M.K. The manuscript is also part of the VIDI research programme HIMALAYA with project number VI.Vidi.223.133 financed by NWO, which partly funded P.M.K. P.M.K. acknowledges support from the Open Technology Programme (OTP) by NWO, grant no. 18703. E. B. M. and R. E. F. S. acknowledge support from the fellowship LCF/BQ/PR21/11840008 from “La Caixa” Foundation (ID 100010434). This research was supported by Grant PID2021-122769NB-I00 funded by MCIN/AEI/10.13039/501100011033. A.J.G. 
acknowledges support from Comunidad de Madrid through TALENTO Grant 2022-T1/IND-24102. § SUPPLEMENTARY MATERIAL §.§ I - Ellipticity Response of High-Harmonic Generation from Solids The ellipticity response of HHG in gasses has long been investigated <cit.>. For atomic gasses, the HHG emission decreases rapidly when the driving field's ellipticity is increased. More recent studies have investigated the ellipticity response of solid HHG <cit.>. Solids introduce additional complexity which has translated to the observation of anisotropic ellipticity responses and increased HHG emission for elliptical excitation both of which were found to strongly depend on the topology of the material, the intensity of the excitation field, and the observed harmonic order <cit.>. In this work, we will instead focus on the big class of solids with weak anisotropy where the ellipticity response resembles the atomic response. To illustrate this common ellipticity response we measured the HHG emission generated by an 1800 nm driver from a number of semiconductor materials: ZnO, Silicon (thin film), and Sapphire. Figure <ref> shows the measured ellipticity response of harmonics H3 (600 nm) and H5 (360 nm), where the ellipticity of the driver was varied using a quarter-wave plate (QWP). For all the different materials clear suppression of the HHG emission is observed with increased ellipticity. §.§ II - Ellipticity The ellipticity is defined as the ratio of the field amplitudes along the major and minor axis, while the sign of the ellipticity is dependent on the relative phase of these components: ϵ = E_minor/E_major·sin(Δϕ)/sin(Δϕ) if Δϕ≠ 2πn. 0 otherwise . The major and minor axis are, respectively, the directions in which the field amplitude is maximal and minimal. The major and minor axis are orthogonal, and the fields in these directions have λ/4 phase differences. The ellipticity ϵ is 0 for linear polarized light and ±1 for circularly polarized light. The sign of ϵ indicates left versus right-handed polarization. As a QWP is used to vary ellipticity, the orientation of the wave plate determines the relative field strengths projected onto the major and minor axes. In our experiments, we also create elliptically polarized light by combining two orthogonal fields, for this we have to consider the decomposition of the total electric field in these two orthogonal components 𝐄_s and 𝐄_p, as also shown in Fig. <ref>. In this framework, the ellipticity can be given as a function of the relative amplitude and phase. To evaluate the ellipticity we consider the orthogonal field components 𝐄_s = E_ssin(ω t) and 𝐄_p = E_psin(ω t + Δϕ), these define the total field: 𝐄 = E_s sin(ω t) x̂ + E_psin(ω t + Δϕ)ŷ. To obtain the major and minor axes, we have to consider the amplitude of this field, 𝐄^2 = E_s^2sin^2(ω t)+ E_p^2sin^2(ω t + Δϕ). We are interested only in the ratio of the major and minor axes so we can reduce our number of parameters by considering the ratio A = E_s/E_p, 𝐄^2/E_p^2 = A^2sin^2(ω t)+ sin^2(ω t + Δϕ). To obtain the major and minor amplitudes the extremes of this equation have to be found. The local extremes are found at, t = tan^-1(1/2cot(Δϕ) (A^2 tan^2(Δϕ) + A^2 - √((tan^2(Δϕ) + 1) (A^4 tan^2(Δϕ) + A^4 - 2 A^2 tan^2(Δϕ) + 2 A^2 + tan^2(Δϕ) + 1)) - tan^2(Δϕ) + 1 ) ) + π/2n_1, if Δϕ≠π/2n_2. 
We can plug this expression back into equation <ref> to obtain, 𝐄^2/E_p^2|_minor = (A^2 cot^2(Δϕ) (A^2 tan^2(Δϕ)+A^2 -tan^2(Δϕ)+1 -√((tan^2(Δϕ)+1) (A^4 tan^2(Δϕ)+A^4-2 A^2 tan^2(Δϕ)+2 A^2+tan^2(Δϕ)+1)))^2 ) ·(4 (1/4cot^2(Δϕ) (A^2 tan^2(Δϕ)+A^2 -tan^2(Δϕ)+1 - √((tan^2(Δϕ)+1) (A^4 tan^2(Δϕ)+A^4-2 A^2 tan^2(Δϕ)+2 A^2+tan^2(Δϕ)+1)))^2+1) )^-1 + sin^2(A tan(1/2cot(Δϕ) (A^2 tan^2(Δϕ)+A^2 -tan^2(Δϕ)+1 - √((tan^2(Δϕ)+1) (A^4 tan^2(Δϕ)+A^4-2 A^2 tan^2(Δϕ)+2 A^2+tan^2(Δϕ)+1))))+Δϕ), 𝐄^2/E_p^2|_major = A^2 (1/4cot^2(Δϕ) (A^2 tan^2(Δϕ)+A^2-tan^2(Δϕ)+1 -√((tan^2(Δϕ)+1) (A^4 tan^2(Δϕ)+A^4-2 A^2 tan^2(Δϕ)+2 A^2+tan^2(Δϕ)+1)))^2+1)^-1 + cos(Atan(1/2cot(Δϕ) (A^2 tan^2(Δϕ)+A^2-tan^2(Δϕ)+1 -√((tan^2(Δϕ)+1) (A^4 tan^2(Δϕ)+A^4-2 A^2 tan^2(Δϕ)+2 A^2+tan^2(Δϕ)+1))) )+Δϕ)^2. From this, we can exactly evaluate the ellipticity, ϵ = sin(Δϕ)/sin(Δϕ)√(𝐄^2/E_p^2|_minor(𝐄^2/E_p^2|_major)^-1) if Δϕ≠π/2n. Additionally, for two particular cases, we obtain simplified exact expressions. When the field components our out-of-phase (Δϕ = π/2 + πn) the ellipticity is given by: ϵ = sin(Δϕ)/sin(Δϕ)min(E_p/E_s,E_s/E_p). In this case, the major and minor axis have the same orientation as the s and p components. When the amplitudes of both fields are equal (A = 1) the ellipticity is given by: ϵ = sin(Δϕ)/sin(Δϕ)min( √(1+cos(Δϕ)/1-cos(Δϕ), √(1-cos(Δϕ)/1+cos(Δϕ)). §.§ III - Experimental Method In this work we presented results from three distinct measurements, basic ellipticity measurements as shown in Fig. <ref>, delay measurements as shown in main text Fig. 1b, and SPG measurements as shown in main text Fig. 3. In all these measurements we use IR drivers generated by an OPA which we weakly focus to generate HHG with peak intensities between 300 GW/cm^2 to 3 TW/cm^2. The beam out of the OPA was used as is, without any spatial filtering of the mode profile. All measurements were performed in transmission. The basic ellipticity measurements were performed with an 1800 nm driver of which the ellipticity was varied using a quarter-wave plate. The ellipticity response was measured from ZnO, sapphire, SiN, silicon (thin film), and UV-fused silica. The delay measurements were done with ZnO using a 2000 nm driver of which the ellipticity was varied by delaying two orthogonally polarized beams, as illustrated in Fig. <ref>. The emission was filtered using a polarizer, with the polarizer aligned as to filter out the anti-diagonal emission with respect to the s- and p-polarized driver beams. The relative phase and amplitude between the two beams were respectively varied using a delay stage and a neutral density filter wheel. The SPG measurements were performed with ZnO using a 1800 nm driver with the ellipticity being varied similarly to the delay measurements. However, the polarizer was now aligned to filter out the p-polarized components. Additionally, a phase plate was used in this measurement to alter the spatial profile of the p-polarized beam. §.§ IV - SBE Simulation To simulate the nonlinear response, we numerically solve the Semiconductor Bloch Equations in the Wannier gauge <cit.>. We have developed a tight-binding model that captures the relevant properties of ZnO, most importantly the amplitude of the bandgap. The model consists of a two-dimensional square lattice with two atoms, A and B, per unit cell, which is a simplified geometry compared to the hexagonal lattice structure of the ZnO sample. The band gap of the system is equal to the difference of their onsite energies, i.e. Δ = |ϵ_A-ϵ_B|. 
We matched the theoretical gap to the experimental one Δ = Δ_ZnO≈ 3.3 eV. The model has inversion symmetry, which can also be seen from the absence of even harmonics in the simulations. We only considered hopping between nearest neighbors of the lattice with a magnitude of t=1.0 eV ≪Δ_ZnO. The system is driven using two orthogonal pulses with a cos^2 envelope duration of 32.5 fs, peak intensity of 4 TW/cm^2, and a wavelength of 2 μm. Consequently, the delay between pulses is scanned using a 0.4 fs step size. Convergence of the simulations is verified to be achieved at 512x512 points of a Monkhorst-Pack grid in reciprocal space and a propagation time step size of 0.097 fs. §.§ V - Spot Size To evaluate the spot sizes of the HHG emission, we approximate their shape with ellipsoids and consider their enclosed area: A = π R_max R_min. Here, R_max and R_min are the long and short radii of the ellipsoid. We evaluate these radii in two different ways either via the image moment or by calculating the full-width half maximum (FWHM) of the x and y cross-sections. To obtain the relative spot size, the evaluated area is normalized to the area of the HHG spot size, which is generated from only the fundamental Gaussian. For Gaussian spots, the image moment and FWHM will yield the same relative spot sizes, For more complexly shaped spots these two metrics will differ. In particular, when having emission profiles with noticeable side peaks, the image moment will yield a relatively big spot size, while the FWHM will disregard these side peaks and yield smaller spot sizes. The image moment provides a metric on the spatial distribution of the intensity profile throughout space, while the FWHM tells us more about the sharpness and size of the high peak intensity central feature. Generally, for images, the momenta can be evaluated using: M_n,m = ∑_x∑_y x^n y^m I(x,y). The mean values can be calculated using: μ̅_x = M_10/M_00 and μ̅_y = M_01/M_00 The second-order normalized mean-corrected momenta are given by, μ_20 = M_20/M_00 - μ̅_x^2, μ_02 = M_02/M_00 - μ̅_y^2, μ_11 = M_11/M_00 - μ̅_xμ̅_y. Note that for Gaussian distributions the x and y standard deviations are respectively given by σ_x = √(μ_20) and σ_y = √(μ_02). From the second-order moment, the major and minor axes are given by, λ_± = √(μ_20+μ_02/2±√(4μ_11^2+(μ_20-μ_02)^2/2)). These major and minor radii are used as the long and short radii of the ellipsoidal spots. §.§ VI - Spot Size Reduction H5 The relative spot size reduction that can be achieved is different for the different harmonic orders. To illustrate this the calculated spot size reduction of H3 and H5 are shown in Fig. <ref>. We see that the optimal spot size reduction evaluated with the image moment increases from about 30% for H3 to close to 40% for H5. §.§ VII - Structured Illumination Microscopy In widefield microscopy, the image detected by a camera D can be described as <cit.>: D(𝐫) = (s(𝐫)· I(𝐫))* PSF(𝐫). Here s is the structure that is being imaged, I is the illumination profile, and PSF is the point-spread function. We here use * to denote a convolution. To evaluate the possible resolution we can consider the detected image in the spatial-frequency domain: ℱ{D} = (ℱ{s}* ℱ{I}) ·ℱ{PSF}. Here ℱ indicates the Fourier transform. In conventional microscopes, the PSF will be close to an Airy disk which is close to the shape of a Gaussian, which means that equally the Fourier transform of the PSF will resemble a Gaussian. The multiplication of the PSF in Eq. 
<ref> functions as a filter for the higher spatial frequencies, which results in diffraction-limited imaging. For conventional widefield imaging, the illumination pattern is homogenous, making retrieving information about the higher spatial frequency components from the structure impossible. In structured illumination microscopy (SIM) a structure is introduced to the illumination pattern that can shift information of the higher spatial frequency components of the structure to lower spatial frequencies. Effectively the structure and illumination pattern together will result in a Moiré pattern. The convolution in Eq. <ref> has frequency components at, Ω_s ±Ω_I. With Ω_s and Ω_I being respectively a spatial frequency component of the structure and illumination. If the difference in frequencies is small enough they will not be filtered out by the PSF which enables detection of the higher spatial frequency components of the structure. If the PSF filters out all frequencies above the diffraction-limit Ω_diff, then the maximum frequency of the structure about which information can be detected is Ω_diff + max(Ω_I). If the maximum frequency in the illumination pattern has the same diffraction limit as the PSF then the maximum detectable frequency is 2 Ω_diff which is the case for conventional SIM. In the example shown in main text Fig. 4, we see that the SPG illumination pattern has spatial frequencies up to at least twice the diffraction limit of H5 which means that the maximum detectable spatial frequency will be 3 Ω_diff,H5, which in terms of real space improvement is a resolution increase of 33% beyond conventional SIM.
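To make the moment-based spot-size evaluation of Sec. V concrete, the sketch below computes the second-order, mean-corrected image moments and the resulting major/minor radii λ_± for a 2D intensity profile, and normalises the elliptical area π R_max R_min to that of a reference spot. This is a minimal illustration of the equations above, not the analysis code used for the measurements; the synthetic Gaussian spots in the usage example are made up for demonstration.

```python
import numpy as np

def moment_radii(I):
    """Major and minor radii of an intensity image from its second-order,
    mean-corrected image moments (M_nm, mu_20, mu_02, mu_11 and lambda_+/-)."""
    y, x = np.indices(I.shape, dtype=float)
    M00 = I.sum()
    mx, my = (x * I).sum() / M00, (y * I).sum() / M00
    mu20 = (x**2 * I).sum() / M00 - mx**2
    mu02 = (y**2 * I).sum() / M00 - my**2
    mu11 = (x * y * I).sum() / M00 - mx * my
    root = np.sqrt(4 * mu11**2 + (mu20 - mu02) ** 2) / 2
    return np.sqrt((mu20 + mu02) / 2 + root), np.sqrt((mu20 + mu02) / 2 - root)

def relative_spot_size(I, I_ref):
    """Elliptical spot area pi*R_max*R_min normalised to a reference spot."""
    r_max, r_min = moment_radii(I)
    r_max0, r_min0 = moment_radii(I_ref)
    return (r_max * r_min) / (r_max0 * r_min0)

# Synthetic example: an elliptical Gaussian spot vs. a round reference spot.
yy, xx = np.mgrid[0:200, 0:200].astype(float)
spot = np.exp(-(((xx - 100) / 15) ** 2 + ((yy - 100) / 25) ** 2))
ref = np.exp(-(((xx - 100) / 20) ** 2 + ((yy - 100) / 20) ** 2))
print(moment_radii(spot))             # ~ (17.7, 10.6) pixels, i.e. the 1-sigma widths
print(relative_spot_size(spot, ref))  # ~ 0.94
```

For Gaussian spots this moment-based radius coincides with the standard deviation, which is why the FWHM and the image moment give the same relative spot sizes in that case, while side peaks only enter the moment-based metric.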
http://arxiv.org/abs/2409.02296v1
20240903212041
Development of the 220/270 GHz Receiver of BICEP Array
[ "The BICEP/Keck Collaboration", ":", "Y. Nakato", "P. A. R. Ade", "Z. Ahmed", "M. Amiri", "D. Barkats", "R. Basu Thakur", "C. A. Bischoff", "D. Beck", "J. J. Bock", "V. Buza", "B. Cantrall", "J. R. Cheshire IV", "J. Cornelison", "M. Crumrine", "A. J. Cukierman", "E. Denison", "M. Dierickx", "L. Duband", "M. Eiben", "B. D. Elwood", "S. Fatigoni", "J. P. Filippini", "A. Fortes", "M. Gao", "C. Giannakopoulos", "N. Goeckner-Wald", "D. C. Goldfinger", "J. A. Grayson", "P. K. Grimes", "G. Hall", "G. Halal", "M. Halpern", "E. Hand", "S. Harrison", "S. Henderson", "J. Hubmayr", "H. Hui", "K. D. Irwin", "J. Kang", "K. S. Karkare", "E. Karpel", "S. Kefeli", "J. M. Kovac", "C. L. Kuo", "K. Lau", "M. Lautzenhiser", "A. Lennox", "T. Liu", "K. G. Megerian", "M. Miller", "L. Minutolo", "L. Moncelsi", "H. T. Nguyen", "R. O'Brient", "A. Patel", "M. Petroff", "A. R. Polish", "T. Prouve", "C. Pryke", "C. D. Reintsema", "T. Romand", "M. Salatino", "A. Schillaci", "B. L. Schmitt", "B. Singari", "A. Soliman", "T. St. Germaine", "A. Steiger", "B. Steinbach", "R. Sudiwala", "K. L. Thompson", "C. Tucker", "A. D. Turner", "C. Vergès", "A. Wandui", "A. C. Weber", "J. Willmert", "W. L. K. Wu", "H. Yang", "E. Young", "C. Yu", "L. Zeng", "C. Zhang", "S. Zhang" ]
astro-ph.IM
[ "astro-ph.IM" ]
Development of the 220/270 GHz Receiver of BICEP Array
======================================================================

§ ABSTRACT
Measurements of B-mode polarization in the CMB sourced from primordial gravitational waves would provide information on the energy scale of inflation and its potential form. To achieve these goals, one must carefully characterize the Galactic foregrounds, which can be distinguished from the CMB by conducting measurements at multiple frequencies. BICEP Array is the latest-generation multi-frequency instrument of the BICEP/Keck program, which specifically targets degree-scale primordial B-modes in the CMB. In its final configuration, this telescope will consist of four small-aperture receivers, spanning frequency bands from 30 to 270 GHz. The 220/270 GHz receiver designed to characterize Galactic dust is currently undergoing commissioning at Stanford University and is scheduled to deploy to the South Pole during the 2024–2025 austral summer. Here, we will provide an overview of this high-frequency receiver and discuss the integration status and test results as it is being commissioned.

§ INTRODUCTION
Precise measurements of the cosmic microwave background (CMB) play a key role in cosmology. In particular, we are motivated to measure the polarization patterns in the CMB to search for evidence of cosmic inflation. Under the ΛCDM standard model, assuming only scalar perturbations were present at the surface of last scattering, we expect the primordial CMB signal to consist purely of E-mode polarization patterns <cit.>. However, inflationary theories additionally predict tensor perturbations in the early universe, i.e., a background of primordial gravitational waves. Tensor perturbations source both E-mode and B-mode patterns in the CMB via Thomson scattering at the recombination epoch <cit.>. Thus, by searching for an excess of B-modes in the CMB at recombination, we can detect or set limits on primordial gravitational waves from inflation. Since 2006, the BICEP/Keck program has been taking measurements to constrain this primordial B-mode signature, parameterized by the tensor-to-scalar ratio, r. The latest published BICEP/Keck results, which include data taken up to the end of the 2018 season (“BK18”), set the most sensitive B-mode-only constraints to date on r, with a sensitivity of σ(r) = 0.009 <cit.>. The search for primordial gravitational waves is complicated by the presence of non-primordial B-mode signals in the CMB that come from Galactic foregrounds and gravitational lensing. We typically focus on two dominant types of polarized Galactic foreground contaminants: synchrotron radiation and thermal dust emission. These foreground components can be separated from the CMB through multi-frequency measurements utilizing their different spectra. While the CMB signal peaks at around 160 GHz and follows a blackbody spectrum, Galactic polarized synchrotron emission follows a power law and is brightest at lower frequencies <cit.>, and polarized thermal dust emission from within our Galaxy has a modified blackbody spectrum that dominates at higher frequencies <cit.>. In particular, polarized Galactic dust is brighter than synchrotron at the peak of the CMB spectrum in the region that BICEP is observing, and the sample variance from dust contributes more than 20% of the total uncertainty in the best B-mode-only r constraint to date <cit.>. Thermal dust emission is caused by starlight re-radiating from dust grains in the interstellar medium.
Because these dust grains are asymmetric and elongated, they align with the Galactic magnetic field and therefore emit slightly polarized photons. This emission is the brightest in the Galactic plane, highly anisotropic, and extends all the way to high Galactic latitudes where is observing. In order to effectively guard for spurious B-modes from the Galaxy, any future experiment must carefully characterize the spectral properties of dust, which requires appropriate frequency coverage. Thus, a sensitive high-frequency instrument for taking precise measurements of the dust is a crucial part of the program. BICEP Array (BA) is the latest-generation multi-frequency instrument of the program, replacing Keck Array. BA, which specifically targets degree-scale B-modes in the CMB, will consist of four small-aperture receivers in its final configuration, spanning frequency bands from 30 to 270 GHz and covering the same sky patch as BICEP3. Two of these have already been deployed: a 30/40 GHz receiver for constraining synchrotron and a 150 GHz receiver observing close to the peak of the CMB spectrum. The other two slots of the telescope are currently occupied by Keck Array receivers until BA receivers are ready to replace them. The next receiver of BA to deploy will be the 220/270 GHz receiver, which is designed to characterize Galactic dust. With a bigger field of view and larger detector count compared to the Keck 220 GHz receiver that it will be replacing, this receiver will greatly improve the signal-to-noise of our dust measurement. This high-frequency receiver is currently undergoing commissioning at Stanford University and is scheduled to deploy to the South Pole during the 2024–2025 austral summer. In this paper, we will focus on the 220/270 GHz receiver and first present an overview for this high-frequency instrument in Sec. <ref>. We then describe in detail in Sec. <ref> the receiver integration status and performance as it is being commissioned for deployment in North America. We conclude by outlining the predicted overall impact and performance of this receiver on-site at the South Pole in Sec. <ref>. § 220/270 GHZ INSTRUMENT OVERVIEW Fig. <ref> shows a cross section of the 220/270 GHz receiver. §.§ Cryostat The 220/270 GHz receiver, like the other BA receivers, consists of an outer 300 K vacuum shell containing two nested stages with nominal operating temperatures of 50 K and 4 K <cit.>. The stages are cooled using a Cryomech PT415 pulse tube, which has a cooling capacity of 40 W at 45 K and 1.5 W at 4.2 K. Inside the 4 K shell, we have a three-stage 4He/3He/3He sorption fridge from CEA Grenoble, providing the cooling power for the three sub-Kelvin stages at 2 K, 340 mK, and 250 mK, respectively. This fridge is designed to achieve a hold time of over 48 hours for 15 μW load on the 250 mK stage (on-sky, we expect to have less than 8 μW total load on the focal plane). The sub-Kelvin stages are stacked “wedding cake”-style, with each stage separated by low-conductivity carbon fiber truss legs, and the focal plane at the top of the structure containing the detector modules. §.§ Optics At the top of the 300 K vacuum shell, we have the 300 K laminate high modulus polyethylene (HMPE) thin vacuum window, which is 1.25 mm thick <cit.>. For the high frequencies that this receiver will observe at, using a thin window allows us to significantly reduce in-band emissions, and therefore instrumental photon noise on the detectors. 
HMPE laminates have been measured to be 100 times stronger than bulk high density polyethylene (HDPE), allowing them to get thin enough to act as an additional anti-reflection layer, while still being able to hold vacuum reliably <cit.>. We are using layered expanded polytetrafluoroethylene (ePTFE) for the anti-reflection layers, which in combination with the 1.25 mm thick window should result in a band average reflection less than 0.5% <cit.>. Below the vacuum window, there are various infrared filters in place to reduce radiative heat load onto the colder stages. At the very top, right under the window, is a stack of Zotefoams HD30 filters; underneath that at 50 K is an alumina infrared absorptive filter that is anti-reflection coated with two layers of epoxy on each side. At the 4K stage we have a Nylon infrared absorptive filter, and at the sub-K stage is a low-pass metal mesh filter <cit.>. The lenses are made of HDPE and cooled to 4 K to minimize loading on the detectors. The objective lens is located at the top of the 4 K shell, and the field lens is in the middle. Eccosorb HR10 baffle rings populate the area between the lenses, for the purpose of minimizing in-band reflections. §.§ Detector Modules and Readout The 220/270 GHz BA receiver is a dual-band receiver with a “checkerboard” of 220 and 270 GHz single-band detector modules on its focal plane. The focal plane is “curved” for this receiver, identical in configuration to the 150 GHz BA receiver. Here, the modules themselves are flat as in all the previous BA receivers <cit.>, but they are mounted using spacers to approximate a 1.5 m radius of curvature spherical focal plane surface for improved optical performance. We have 12 modules for a full focal plane, and each module has a 6 inch silicon wafer with 18 by 18 pixels, with 2 detectors per pixel for the two orthogonal polarizations, which gives a total of 7776 detectors, matching the high detector count of the 150 GHz receiver. This is a significant upgrade from the Keck 220 GHz receiver that this receiver will be replacing, which has only 512 detectors. For our detectors, we use dual-polarization antenna-coupled TES arrays. Fig. <ref> shows a partial view of a single 220 GHz pixel for the high-frequency BA receiver. At the top is part of the antenna network. The signal in the vertical slots are fed into the “A” detector, and similarly the signal in the horizontal slots goes to the “B” detector. In this way, we are able to measure orthogonal “A” and “B” polarizations of the same location in the sky. Below the antenna network are two TES islands, each paired with a bandpass filter, as seen on the left and right edges at the bottom of the figure. The signal goes through the bandpass filter before getting transmitted to the TES island. It is this bandpass filter that defines the upper and lower frequency cutoffs of the science band. We read out the detectors using time-division multiplexing multi-channel electronics (MCE) readout developed by the University of British Columbia <cit.>. We have a multiplexing factor of 41 for this receiver, matching the 150 GHz receiver, read out through two stages of SQUIDs made by NIST. § PERFORMANCE The commissioning of the 220/270 GHz cryostat at Stanford University began in 2021. Between mid-2021 and early 2023, multiple cold tests were performed to check the vacuum, thermometry readout, and the cryogenic performance of the receiver as more components were added to it. 
Starting out with an “empty” receiver, the 4He/3He/3He sorption fridge was the first to be added and checked, then the sub-Kelvin stages, all cabling, and finally the optics and detector modules. We achieved good cryogenic performance with all components installed in 2023. With all optics installed and the receiver open to light, the sorption fridge hold time is well over 48 hours before needing to be cycled, and we achieve a focal plane base temperature of 270 mK. For the rest of 2023 and all of 2024, this receiver has been undergoing various tests of the detectors, optics, and readout. §.§ Integrated Receiver Testing with the First 220 GHz Modules The first optical test runs in the 220/270 GHz receiver were conducted in 2024. In these runs, we tested the first two 220 GHz detector modules fabricated at Caltech/JPL, which we call “H10” and “H9”, in the receiver outfitted with all official optics. Our optical tests include measurements of end-to-end optical efficiency, spectral response, and near-field beam patterns. §.§.§ End-to-End Optical Efficiency We measure the end-to-end optical efficiency (OE) for the two 220 GHz detector modules H10 and H9 in the 220/270 GHz cryostat. This test is done by looking at the detector response for two different beam-filling optical loads, at ambient temperature (300 K) and liquid nitrogen temperature (77 K). For each of the two optical loads, we take load curves, where we measure the TES current while ramping bias current across the Al transition. A typical load curve for a single TES is shown in Fig. <ref>. From the load curves, we extract the dP_opt/dT, the change in optical power deposited on the detectors with respect to optical load temperature. The resulting dP_opt/dT values for every detector of the two tiles are shown in the tile maps in Fig. <ref> and Fig. <ref>. Here, it is evident that the center of H10 has a “hole” of dead TESs that cannot go into transition. This is due to a one-time fabrication error, and the error was fixed for H9 and future wafers. We plan to replace the tile in H10 before deployment. From the histograms, we obtain a median dP_opt/dT of around 0.28 pW/K for H10, and 0.24 pW/K for H9. Assuming the Rayleigh–Jeans limit, we use dP_opt/dT = k_B ηΔν where k_B is Boltzmann's constant, η is the percent efficiency, and Δν is the bandwidth. With Δν = 63 GHz for H10 and Δν = 60 GHz for H9 as obtained from our FTS measurements (see Sec. <ref>), this gives us a percent efficiency of η = 32% and η = 29% for H10 and H9, respectively. This meets our design target of around 30% optical efficiency, including losses from all optics. §.§.§ Fourier Transform Spectroscopy Next, we measure the spectral response of the 220 GHz detector modules with Fourier transform spectroscopy (FTS), using a custom-built Martin–Puplett interferometer that we mount directly to the receiver window. Fig. <ref> shows an interferogram of a typical detector on these modules. We Fourier transform the interferograms, and the plots for the resulting co-added spectral response for each module are shown in Fig. <ref> and Fig. <ref>. The fringing in the H10 spectral response that shows up primarily on the low frequency side of the band is caused by a design error on the pixels of this detector tile. Narrower than intended microstrips were erroneously added to either side of the bandpass filters on the pixels, causing degraded transmission. This error has since been fixed for all future detector tiles including H9. 
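As a quick cross-check of the numbers quoted in the optical-efficiency discussion above, the Rayleigh–Jeans relation dP_opt/dT = k_B η Δν can be inverted for the end-to-end efficiency. The minimal sketch below simply restates that arithmetic with the median dP_opt/dT values and the FTS bandwidths reported for the two modules; it assumes nothing beyond the Rayleigh–Jeans limit used in the text.

```python
k_B = 1.380649e-23  # Boltzmann constant [J/K]

def optical_efficiency(dP_dT_pW_per_K, bandwidth_GHz):
    """End-to-end optical efficiency eta = (dP_opt/dT) / (k_B * delta_nu),
    valid in the Rayleigh-Jeans limit for a beam-filling load."""
    return (dP_dT_pW_per_K * 1e-12) / (k_B * bandwidth_GHz * 1e9)

print(f"H10: {optical_efficiency(0.28, 63):.0%}")  # ~32%
print(f"H9:  {optical_efficiency(0.24, 60):.0%}")  # ~29%
```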
For H10, we measure a median band center and bandwidth of 228 GHz and 63 GHz (28%), respectively. For H9, we measure 233 GHz for the band center and 60 GHz (26%) for the bandwidth. We have also checked that this result is uniform across the entire tile. §.§.§ Near Field Beam Mapping As a final check of the overall health and performance of our optics, we take some beam maps with a source in the near field. For this near field beam mapping (NFBM) test we use a chopped thermal source mounted above the receiver window, which is moved around with a pair of Velmex stepper motors to make a map of the detector response. Fig. <ref> shows beam maps for a typical detector pair. The beams are concentric with good signal-to-noise, and the beam shapes for the two polarizations match. § CONCLUSION In-lab optical testing of the 220/270 GHz receiver was performed at Stanford University. This testing involved the first two 220 GHz detector modules, the complete set of official optics, and all necessary cabling. The goals for this test run was to characterize the optics and the detector modules, assess the cryogenic performance, and verify the readout. These tests in North America provide insight of the predicted overall performance of the receiver on-site at the South Pole. The 220/270 GHz receiver has excellent cryogenic performance, and the readout has been tested. The results of the OE, FTS, and NFBM tests are all within spec, and we are on track to deploy to the South Pole in the 2024–2025 austral summer season. Fig. <ref> is a schematic of the achieved and projected sensitivity for the entire program, and shows the predicted impact of this high frequency receiver as it starts observing at the end of 2024. In the top subplot, the 220/270 GHz BA receiver can be seen starting operations in blue, in the 2025 observing season. Until this, the high frequency band is only covered by Keck receivers. With data up to and including 2027, without delensing, we estimate that our uncertainty on r will be reduced to σ(r) ∼ 0.006 <cit.>. With delensing, we expect to achieve σ(r) ∼ 0.003 <cit.>. Even without delensing, it is clear that there is a “cusp” of improved σ(r) in 2025 with the 220/270 GHz receiver added as seen in the bottom subplot of Fig. <ref>, as this receiver will significantly improve our sensitivity to Galactic dust. More high frequency detector modules are currently being fabricated at Caltech/JPL, and we will run more optical tests in North America before deployment to characterize these modules. The projects have been made possible through a series of grants from the U.S. National Science Foundation and by the Keck Foundation. The development of antenna-coupled detector technology was supported by the JPL Research and Technology Development Fund and NASA Grants. Readout electronics were supported by a Canada Foundation for Innovation grant to UBC. The analysis effort at Stanford and SLAC is partially supported by the U.S. DOE Office of Science. The computations in this paper were run on the Cannon cluster supported by the FAS Science Division Research Computing Group at Harvard University. We thank the staff of the U.S. Antarctic Program and in particular the South Pole Station without whose help this research would not have been possible. Special thanks go to our heroic winter-overs. spiebib
http://arxiv.org/abs/2409.02560v1
20240904092708
Plasticity-induced crack closure identification during fatigue crack growth in AA2024-T3 by using high-resolution digital image correlation
[ "Florian Paysan", "David Melching", "Eric Breibarth" ]
physics.app-ph
[ "physics.app-ph" ]
Plasticity-induced crack closure identification during fatigue crack growth in AA2024-T3 by using high-resolution digital image correlation
=============================================================================================

§ ABSTRACT
Fatigue crack growth in ductile materials is primarily driven by the interaction between damaging and shielding mechanisms. In the Paris regime, the predominant mechanism for retardation is plasticity-induced crack closure (PICC). However, some of the mechanisms behind this phenomenon are still unclear. Identifying and separating the three-dimensional aspect from other shielding aspects during experiments is extremely complex. In this paper, we analyze the crack opening kinematics based on local crack opening displacement measurements in both 2D high-resolution digital image correlation data and 3D finite element simulations. The results confirm that the crack opening stress intensity factor K_op differs along the crack path. We present a new method to determine K_op at the crack front, allowing us to identify PICC as the predominant shielding mechanism in fatigue crack growth experiments. Furthermore, this work contributes to the discussion on the damage-reducing effect of PICC, since we find that the influence on fatigue damage in the plastic zone remains negligible when the crack is closed and crack surface contact is directed towards the surface.

§ INTRODUCTION
From a materials science perspective, fatigue crack growth is mainly driven by an interaction between damaging and shielding mechanisms <cit.>. While damage mechanisms occur in front of the crack tip and lead to crack propagation, shielding mechanisms have a retarding effect. Elber <cit.> introduced the concept of crack closure in 1970. Under cyclic loading, the crack faces remain partially closed even though the fatigue crack is not completely unloaded and should be open. Assuming that no damage is induced in the process zone in front of the crack tip, he presents the Δ K_eff concept:

Δ K_eff = K_max-K_op

Here, K_op denotes the crack opening stress intensity factor and describes the crack tip stress intensity at which the crack is fully opened for the first time during loading. According to Ritchie <cit.>, crack closure is counted among the shielding mechanisms due to its damage-reducing effect. Elber <cit.> related crack surface contact to plastic deformation within the plastic wake of a growing fatigue crack. This phenomenon is also known as plasticity-induced crack closure (PICC) and is assumed to be the predominant crack closure mechanism in the Paris regime of ductile materials <cit.>. Furthermore, it is the only crack closure mechanism that can be fully explained physically <cit.>. For generic classification, the literature typically distinguishes between plane stress and plane strain conditions. Dugdale <cit.> and Newmann <cit.> explain PICC for plane stress conditions by necking near the crack tip during crack opening. The material deforms plastically towards the crack surface, which is also known as out-of-plane flow. As a result, contact between the crack surfaces occurs before the crack is fully unloaded. While this explanation is widely accepted in the fracture mechanics community <cit.>, the existence of PICC under plane strain conditions is debatable since, by definition, no material flow is allowed in the thickness direction <cit.>. Using finite element (FE) simulations, Riemelmoser and Pippan <cit.> recognize that material flows towards the crack tip while the crack opens.
In contrast to PICC under plane stress conditions, this material flow only leads to contact right behind the crack front. Numerical investigations by other authors validate that PICC is mostly present in plane stress conditions near the free specimen surface <cit.>. However, 3D FE simulations are essential to investigate crack closure, but are very time-consuming due to the mesh refinement. Paysan performed a parameter study as a function of the load and geometry to investigate the crack contact kinematics <cit.>. They conclude that the maximum stress intensity factor K_max and the sheet thickness t are the main factors that determine the location of the crack surface contact. However, there is still a need for further research, e.g. on how 3D crack surface contact affects damage development within the plastic zone or how other mechanisms such as crack deflections or branching interact with PICC. The Δ K_eff concept requires the accurate determination of K_op. The ASTM E647-15 standard <cit.> recommends the compliance method, which assumes that K is proportional to the crack opening <cit.>. Backface strain or crack mouth opening displacement (CMOD) gauges are commonly used for measurement (see Figure <ref>a). If crack closure is present, it results in a non-linear opening behavior. Both domains can be separated within the crack opening curve (see Figure <ref>b) enabling the determination of K_op. Both the large distance to the crack tip position and the strong scattering in strain gauges are the main disadvantages of using compliance gauges <cit.>. Tong <cit.> claims that the study of crack closure from a systematic perspective became possible only with high-resolution digital image correlation (HR-DIC). Sutton <cit.> investigated the crack opening displacement (COD) in HR-DIC full-field data as early as 1983. Carroll <cit.> evaluate K_op based on COD measurements at different positions along the crack path and concluded that K_op strongly depends on the measurement position within the full-field displacement DIC data. Comparisons with FE simulations <cit.> confirm these findings and support the conclusions made by Carroll <cit.>. According to O'Conner <cit.> and Tong <cit.>, the value of K_op increases as the distance to the crack tip decreases. However, a systematic relation between K_op and measurement location has not yet been found. The non-linearity of the crack opening curve provides information about the crack contact <cit.>. If the transition to the linear domain is smoothly curved, a continuous opening crack is assumed. If the transition is abrupt, it indicates a sudden loosening of the crack surfaces. This behavior is often associated with roughness-induced crack closure. The impact of PICC on damage evolution or crack driving force has been a topic of debate for some time <cit.>. Research by Tong <cit.> questions Elber's <cit.> claim that no damage is caused when a crack is partially closed. It was found that the J-integral result is only slightly different when crack closure is not taken into account. Vasco-Olmo <cit.> followed their conclusions and proposed ΔCTOD_p as a damage describing parameter. They found that crack tip opening displacement (CTOD) measurements are characterized by another non-linear domain in the upper region of the crack opening curve (see Figure <ref>b). They correlate this behavior to the induction of plastic strain as a cause of the plastic zone evolution. 
Based on their numerical analysis, Oplt <cit.> correlated the ΔCTOD_p to the size of the plastic zone along the crack front. They concluded that ΔCTOD_p represents the plastic strain accumulation within the plastic zone region. A comparable methodology is employed by the δ_5 concept, which is predominantly utilized for the investigation of fatigue cracks in thin structures <cit.>. A clip gauge is employed for the measurement of displacements in close proximity to the crack tip, with the reference points at a distance of 5mm from each other. However, the position the two strain gauges can remain unchanged with the advancing fatigue crack and its crack tip position. Consequently, the location dependency effect of the measurement position is not incorporated. In the present work, we address the dependence of K_op on the measurement position, the influence of the 3D crack-surface contact on the damage evolution within the plastic zone, and the difficulty in identifying the dominant crack-closure mechanism in experimental data. We aim to improve the general understanding of PICC. Therefore, we use 3D FE crack propagation simulations and link them to crack opening curves based on HR-DIC data. The excellent agreement allows the investigation of the 3D contact characteristics and the associated damage accumulation within the plastic zone. § METHODOLOGY §.§ Specimen, Material and Loads We conducted fatigue crack growth experiments according to the ASTM 647-15 standard <cit.>. A middle tension (MT) specimen with a width of W=160mm is used as a basis for investigating fatigue crack growth. The dimensions of the specimen are given in Figure <ref>. In this study, we consider the aluminum alloy AA2024-T3, a material with a wide range of applications in the field of aeronautics, as a reference material. The specimen was loaded in the rolling direction, resulting in the fatigue crack growing perpendicular to the maximum grain elongation of the sheet material. The mechanical properties are given in the Table <ref> and taken from the literature <cit.>. The fatigue crack starts from an initial notch with a length of a_i=8mm. We investigate mode I fatigue crack propagation at sinusoidal load with a constant amplitude in y-direction. The maximum force F_max=15kN and the load ratio R=0.1 are constant. At a final crack length of a_f=27.8mm we collected data to investigate fatigue crack closure, where stress intensity factors of K_max=14.9MPa√(m) and Δ K=13.4MPa√(m) are present. §.§ Finite Element Simulation In general, the numerical simulations refer to the FE model presented in <cit.> with some crucial adaptions concerning loading constraints and meshing. All FE simulations are performed with ANSYS Mechanical APDL on a RedHat Linux workstation with two Intel Xeon Gold 6240 18C CPUs and 256GB memory (DDR4-2933 RAM). For the sake of comprehensibility, a brief summary of the FE model is following given: The FE model is based on the geometry of the MT(160) specimen, shown in Figure <ref>. To reduce model complexity and computational resources, we exploit symmetries of the specimen to reduce the geometry to a 1/8 model. The clamping region is excluded and all holes are neglected enabling a structured mapped meshing. Figure <ref> shows the constraints definition. The load F is linked into a pilot node that is coupled to all nodes on the top of the model. In addition, the degrees of freedom of all nodes in the symmetry planes are restricted vertically to their corresponding plane. 
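As a brief aside to the load parameters given in the Specimen, Material and Loads subsection above, the quoted stress intensity factors can be reproduced with the M(T) expression of ASTM E647, K = (P/B)·√(πα/(2W))·√(sec(πα/2)) with α = 2a/W. The exact form of the standard's equation is quoted here from memory and should be treated as an assumption; with the specimen dimensions and loads of this study it reproduces the stated K_max ≈ 14.9 MPa√(m) and ΔK ≈ 13.4 MPa√(m) to within rounding.

```python
import math

def k_mt(P, a, W=0.160, B=0.002):
    """Stress intensity factor of a middle-tension M(T) specimen
    (P in N, crack length a, width W and thickness B in m; K in MPa*sqrt(m))."""
    alpha = 2.0 * a / W
    k = (P / B) * math.sqrt(math.pi * alpha / (2.0 * W))
    k /= math.sqrt(math.cos(math.pi * alpha / 2.0))   # finite-width correction
    return k * 1e-6

P_max, R, a_f = 15e3, 0.1, 27.8e-3
K_max = k_mt(P_max, a_f)
print(round(K_max, 1), round((1.0 - R) * K_max, 1))   # ~15.0 and ~13.5 MPa*sqrt(m)
```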
Within the x-z plane at the bottom of the model, the initial center crack with an initial crack length of a_i is defined. All symmetry constraints within the interval x ∈ [0, a_i] are deleted to allow free deformation of the crack surfaces. The following Figure <ref> sums up the most important features of the FE model. In general, the model is meshed using a mapped mesh strategy with 8 nodes of linear hexagonal SOLID185 elements. The structural meshing can be divided into three separate sections: (a) the global mesh of the overall model, (b) a more refined section, schematically shown in Figure <ref>a, in which the following analyses take place, and a transition section, which connects both. Within the refined section, the volume elements have an aspect ratio of 1 and an element size of e_FE,f=0.08mm. Especially contour and deformation analyses benefit from the chosen strategy, as the element shape in the observation space is always identical. However, the element size is much larger than recommended in literature <cit.>. Choosing smaller element sizes results in strong deformations of single elements, which leads to convergence issues in combination with the element contact definition applied. The mesh outside the refined section is characterized by a flexible element size of e_FE,g=1mm. The contact definition is realized via contact elements. The contact is assumed to be asymmetric and all surface effects, such as crack surface roughness, are neglected. As a counterpart to the freely deformable MT(160) model, a rigid target surface in the form of a plate (SOLID185, element size e_FE,c=1mm) is defined. Following, a pairwise frictionless rigid-to-flexible contact formulation is set between the two model components. The solid elements at the crack surface of the MT(160) model are covered with CONTA174 elements and TARGE170 elements overlay the elements of the rigid contact body. Furthermore, we use the Augmented Lagrange Algorithm as a contact solver. The contact stiffness and penetration are set to k_c=1N/mm^2 and z_c=1μ m with the aim of reducing the contact penetration as much as possible. The contact definition is completed with the instruction that all initial gaps are closed. The implemented model reflects the elasto-plastic material behavior using bilinear isotropic hardening and is based on the mechanical properties of AA2024-T3 given in Table <ref>. Anisotropic properties are neglected within the FE simulation. In order to analyze PICC, crack propagation needs to be performed within the elastic-plastic FE simulation. This causes a plastic wake leading to crack surface contact. We use the Releasing-Constraint <cit.> method to conduct crack propagation. A single load cycle consists of a loading and unloading step followed by a node-releasing step, in which all constraints of one element row in z-direction are released. Consequently, the crack advances exactly one element length e_FE,f within the refined mesh with each cycle at minimum load F_min. Taking N=50 cycles into account, a total crack propagation of Δ a= a_f-a_i=4mm is achieved. The application of only one loading and unloading step between crack propagation is in contrast to the general recommendation in the literature <cit.> to iterate this at least twice. The authors <cit.> assume that the influence of cyclic plasticity can be neglected and does not have a significant impact on the values of K_op as reported by Camas <cit.>. 
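The releasing-constraint scheme described above reduces to a simple scheduling loop. The sketch below outlines only that logic; solve_load_step and release_node_row are hypothetical placeholders standing in for the actual APDL load-step and constraint-deletion commands, not part of any real API.

```python
F_MAX, F_MIN = 15e3, 0.1 * 15e3   # N, load ratio R = 0.1
N_CYCLES = 50                     # each cycle advances the crack by one element row
ELEMENT_SIZE = 0.08e-3            # m, refined mesh -> total advance 50 * 0.08 mm = 4 mm

def grow_crack(model):
    """Releasing-constraint crack propagation: load, unload, then release the
    symmetry constraints of one element row at minimum load."""
    for cycle in range(N_CYCLES):
        solve_load_step(model, F_MAX)        # loading step
        solve_load_step(model, F_MIN)        # unloading step
        release_node_row(model, row=cycle)   # crack advance of one element length at F_min
```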
When the total crack advancement Δ a=4mm and cycle number N=50 is reached, another load step is added, which is subdivided into 100 sub steps allowing for a detailed characterization of the crack opening behavior. §.§ Fatigue crack growth experiments In order to compare the numerical results with experiments, we perform fatigue crack growth in MT(160) specimen made of AA2024-T3. The experimental setup is illustrated in Figure <ref>, with a servo-hydraulic testing machine inducing the fatigue loading in terms of sinusoidal load with constant amplitude. Load parameters are equal to the numerical studies F_max=15kN at R=0.1. For analyzing crack closure characteristics, we extended the test stand by a robot-assisted high-resolution DIC (HR-DIC) system (see Figure <ref>a) <cit.>. It consists of a KUKA iiwa that guides an optical Zeiss 206C stereo light microscope including a Basler a2A5320-23umPro global shutter 16 Megapixel CMOS camera. It enables HR-DIC displacement fields of the crack tip region along the entire fatigue crack propagation. A detailed description of the testing setup including the algorithms for ensuring good image quality and low DIC scattering are presented in <cit.>. Figure <ref>b illustrates the load sequence, including the time slots, during which the HR-DIC data acquisition is conducted. Before the fatigue crack growth experiment starts, the robot moves the microscope to all positions on the specimen surface, where the fatigue crack is supposed to grow. The area is covered in a checkerboard manner at zero load. At each position a reference image is captured. Automatic algorithms ensure both perfect alignment with the specimen surface and good sharpness and contrast of the images. After this calibration phase, the crack propagation phase starts. The specimen is cyclically loaded until a total crack length of a=27.8mm is measured by a direct current potential drop (DCPD) measuring system (I=100mA, U=60mV). The system stops and the robot moves to the reference image position that best captures the crack tip area. Figure <ref>c shows the von Mises equivalent strain field. Then 30 load levels are approached, starting with the minimum load up to the maximum load. At each load level, a deformed image is captured. The comparison between the reference image and the deformed images allows to calculate the displacement and strain fields, which is done automatically using the GOM Aramis 2020 software. DIC facet size and distance are 40 × 40 pixels and 30 × 30 pixels, respectively, enabling a spatial resolution of 0.06mm/facet. §.§ Crack opening displacement To compare the kinematics of crack opening process of the fatigue crack, we use surface displacement data from both the FE simulation and the HR-DIC system. Therefore, the nodal solutions on the surface of the FE model are exported for this study. We define a COD measurement location, P_cod, within the two-dimensional displacement field by surface coordinates, P_cod = (x_cod, y_cod), where x_cod is the horizontal distance from the measurement point to the crack tip and y_cod is the vertical distance from the crack path. Then a second measurement point, P_cod^* = (x_cod, -y_cod), is positioned symmetrically on the opposite side of the crack path, see Figure <ref>a. At these points the displacement component perpendicular to the crack, u_y, is obtained. If a measurement point does not directly coincide with a node or facet, the u_y value is determined by linear interpolation from adjacent points. 
The local COD value is then calculated as follows: COD = 1/2·( u_y(x_cod, y_cod) - u_y(x_cod, -y_cod) ) We plot COD against the load for each discretisation step. The resulting curve, known as the crack opening curve, is shown schematically in Figure <ref>b. The algorithm for the determination of the crack opening stress intensity factor, K_op, from displacement data follows the methodology recommended by the ASTM E647-15 standard <cit.>. The high quality of the HR-DIC displacement field data allows for a very precise analysis of crack opening. When the measurement points P_cod and P_cod^* are located near the crack tip, three distinct regions within the crack opening curve can be identified (see Figure <ref>b): (A) A non-linear relationship between load and COD is observed in the lower region, which is characteristic of crack face contact and therefore crack closure. Based on previous research <cit.>, we assume that crack closure will only occur for F < 0.6 · F_max. (B) The middle region shows a linear relationship. The load or stress intensity factor at the crack tip is proportional to the COD, meaning that K ∼COD. This relationship allows the principles of linear-elastic fracture mechanics to be applied. (C) The upper region shows a non-linear relationship caused by large plastic strain accumulation within the plastic zone as reported by Vasco-Olmo <cit.>. Based on previous research <cit.>, we assume that most plastic strain occurs when loads exceed 80% of the maximum load (F> 0.8 · F_max). To determine K_op, we use the data points from region (B) to fit a line using linear regression. The maximum deviation of a single data point in region (B) from the linear regression model is given by e_max. That means, e_max is a parameter describing the inherent DIC scatter. Then, we define a boundary line by shifting the linear model by 1.2 · e_max to the right, as shown in Figure <ref>b in order to avoid the potential for miscalculations due to the inherent HR-DIC scatter. The first data point that intersects this boundary line, starting from F_min, is considered to be K_op. In this paper, we distinguish between three different types of K_op, further explained in Table <ref>. K_pl denotes the first point that leaves the linear-elastic region and crosses the boundary line again. § RESULTS AND DISCUSSION We compare the numerical FE results with the experimental results obtained from HR-DIC measurements. If there is a satisfactory agreement on the surface, it can be assumed that the crack closure observed in the experiment will behave in a manner similar to that observed in the simulation. This enables the analysis of the 3D aspects of PICC. §.§ COD location dependency of K_op,cod It is widely acknowledged in the fracture mechanics community that the K_op,cod results in DIC displacement field data are sensitive to the measurement location <cit.>. We investigate this dependence through a parameter study of the effects of x_cod and y_cod. The numerical results are summarized in Figure <ref>, the experimental ones in Figure <ref>. The study in Figure <ref> is based on numerical surface displacement fields derived from FE analysis with a crack length of a_f = 27.8mm. Subsequently, the crack tip is fixed to x_ct = 27.8 mm, y_ct = 0 mm. In order to examine the effect of y_cod, we set x_cod = 0.3mm to be constant. Then, we vary y_cod in a range of y_cod∈ [0.0, 0.6] (Δ y_cod=0.1 mm). Figure <ref>a shows the location of the measurement points P_cod and P_cod^* within the u_y displacement field. 
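Before turning to the results, the K_op evaluation just described can be summarized in a few lines. The sketch below fits the linear region (B) between 0.6 and 0.8 of the maximum load, takes e_max as the largest deviation of a region-(B) point from the fit, widens the fit into a boundary band of 1.2·e_max, and returns the first point, scanning upwards from the minimum load, that falls within this band. The deviation is taken here as a simple offset in the COD direction; the exact construction of the shifted boundary line in the actual evaluation may differ in detail.

```python
import numpy as np

def determine_k_op(K, cod, K_max, scatter_factor=1.2):
    """K_op from a crack opening curve (arrays ordered from minimum to maximum load)."""
    K, cod = np.asarray(K, float), np.asarray(cod, float)
    region_b = (K >= 0.6 * K_max) & (K <= 0.8 * K_max)      # linear region (B)
    c, d = np.polyfit(K[region_b], cod[region_b], 1)        # COD ~ c*K + d
    deviation = cod - (c * K + d)                           # offset from the fitted line
    e_max = np.max(np.abs(deviation[region_b]))             # inherent scatter in (B)
    inside_boundary = np.abs(deviation) <= scatter_factor * e_max
    return K[np.argmax(inside_boundary)]                    # first such point from F_min

# K_pl, the onset of the plasticity-dominated region (C), can be found analogously
# by scanning downwards from the maximum load for the first point inside the band.
```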
The individual measurement points have a distance of Δ y_cod = 0.1mm. The crack opening curves in Figure <ref>b illustrate that the non-linearity due to contact of the crack faces decreases with increasing distance y_cod. In addition, it is observed that the minimum or initial crack opening displacement COD_min increases with increasing y_cod. In contrast to the DIC-based studies by Sutton <cit.>, we observe no dependence on the determined K_op,cod. For the examined position x_cod = 0.3mm we determine K_op,cod = 4.62MPa√(m) = const. The relationship between the crack opening curves with varying x_cod (Δ x_cod=0.2 mm) is shown in Figure <ref>c and d. In this case, K_op,cod depends on x_cod. Points P_cod close to the crack tip tend to have higher K_op,cod values than those further away. In addition, the evolution of the K_op,cod values shows a gradual, ascending curvature towards the crack tip. Furthermore, the K_op,cod values shown in the crack opening curves can be divided into two different categories: Because of R=0.1 and, thus, F_min≠ 0, the crack is only partially closed behind the crack tip due to PICC, schematically illustrated under Figure <ref>d. Furthermore, it seems that especially the section of the closed crack, in which the largest contact pressure between the crack faces exists (see Figure <ref>a), leads to a delayed crack opening near the crack tip. We denote the length of this section behind the crack tip as l_cc. In the numerical study in Figure <ref>, l_cc is known to be 1.36 mm. Figure <ref>d reveals that the K_op,cod form two different cluster depending on if they were measured at P_cod with x_cod<l_cc or x_cod>l_cc. In particular, those encircled in black represent K_op,cod at P_cods with direct opposing crack surface contact at F_min (x_cod<l_cc), whereas the dotted highlighted ones indicate K_op,cod at initially open crack surface positions or low contact pressure between both crack faces (x_cod>l_cc). This finding leads to two conclusions: Firstly, with the use of COD measurement it is possible to identify l_cc, if PICC is present, since the K_op,cod based on P_cod with x_cod<l_cc behave differently than with x_cod>l_cc. Secondly, there is still a K_op,cod measurable although the value refers to a P_cod with x_cod>l_cc . This implies the contact close behind the crack tip (<l_cc) still influences the opening behavior of crack sections although their crack faces are not in contact at F=F_min or their contract pressures are so low that they rarely contribute to the delayed crack opening behavior due to PICC. We remark that the values of COD_min are occasionally negative. This phenomenon is caused by the FE contact definition and its allowed contact penetration. Figure <ref> shows the experimental COD measurement results from HR-DIC displacement field data at a crack length of a = 27.8mm. The crack path and the position of the crack tip are identified using the neural networks UNetPath and ParallelNets <cit.> implemented in CrackPy <cit.>, which determined a crack tip position of x_ct = 27.8 mm, y_ct = 0.36 mm. In Figure <ref>b we examine the effect of the vertical distance from the crack path y_cod in HR-DIC data. The results reveal no significant deviations in K_op,cod values with variations in y_cod. x_cod=0.6 mm is kept to be constant. This observation confirms that K_op,cod is independent of y_cod, which is consistent with the FE results shown in Figure <ref>b. In addition, the non-linearity of crack closure decreases with the distance from the crack path. 
The COD measurement points are located based on the detected crack path at a vertical distance of approximately y_cod≈0.2mm. The horizontal distance between each COD measurement point is Δ x_cod=0.4 mm. The results of the x_cod analysis in Figure <ref>d are very similar to the FE results. In particular, the K_op,cod values at points close to the crack tip (x_cod < 1.4mm) are comparable to the FE results, showing the ascending characteristic. The analysis therefore indicates that at minimum load F=F_min the crack is closed over a length of l_cc=1.4 mm behind the crack tip. For x_cod > 1.4mm the K_op,cod values are almost constant. In conclusion, our study highlights the negligible influence of y_cod on K_op,cod. Deviations from the results of previous studies <cit.> can likely be attributed to the applied measurement methods. Increasing the distance y_cod results in a flattening of the crack opening curves, and thus increases the effect of measurement noise in HR-DIC data on the determination of K_op,cod. For an accurate characterization of crack closure behavior using HR-DIC, we advise minimizing the noise in DIC measurements and ensuring a high temporal resolution of the crack opening process. Consequently, despite the negligible influence of y_cod, we recommend aiming for the smallest possible vertical distance of the measurement point P_cod to the crack path. This allows a robust segmentation of the non-linear part from the proportional part within the crack opening curve and thus a stable identification of the K_op,cod value. Nevertheless, both the numerical and the experimental results confirm the spatial dependency of K_op,cod along the crack path <cit.> and support the hypothesis of an increasing K_op,cod with decreasing distance to the crack tip x_cod <cit.>.

§.§ Identifying plasticity-induced crack closure

As crack surface roughness is not considered, our FE crack propagation simulation only covers the effect of plasticity induced crack closure. Based on the agreement of the K_op,cod behavior in our numerical and experimental studies in Figure <ref>d and <ref>d, we conclude that plasticity induced crack closure is the dominant crack closure mechanism here (AA2024-T3; Δ K = 13.4MPa√(m)). In the following, we aim to define a criterion to identify PICC that is based on the K_op,cod dependence along the crack path. First, we only take into account the K_op,cod values that are based on P_cod with x_cod < l_cc. Afterwards, we shift the crack opening curves by their initial COD value (COD - COD_min). This is permissible since, on the one hand, we know that the contact penetration allowed within the FE simulation does not occur in reality. On the other hand, we showed that y_cod, the second reason for a possible horizontal shift of the crack opening curve, does not affect the resulting K_op,cod. Figures <ref>a and c show the crack opening curves of FE and HR-DIC data shifted to a common minimum. Figures <ref>b and d present the K_op,cod values separately. When the COD values of the crack opening curves are aligned to a common minimum, the K_op,cod values exhibit a linear characteristic when related to P_cod with x_cod < l_cc (highlighted in grey in Figures <ref>b and d). Physically, this represents a uniform crack opening towards the crack tip, which is assumed to be a typical characteristic of the crack opening kinematics in the presence of PICC <cit.>.
Thus, we define the following criterion:

Criterion for plasticity-induced crack closure
Let y_cod be fixed and x_cod,i, i=1,…,n, be the x-values of the COD measurement points along the crack path. Let K_op,i be the crack opening stress intensity factor determined for the COD curve at measurement point (x_cod,i, y_cod), and λ_1,...,λ_n be the minimum-shifted crack opening displacements λ_i := COD(x_cod,i,y_cod) - COD_min. PICC is present and the pre-dominant crack closure mechanism at measurement points I ⊂{1,…,n}, if the following condition holds true: There exists a linear fit, represented by slope and intercept c,k ∈ℝ, such that for all i ∈ I:

|λ_i - c · K_op,i - k|<ε_picc.

Here, ε_picc > 0 defines the permissible deviation of the points from the linear approximation. Loosely speaking, PICC is present where K_op,cod∝COD-COD_min with an allowable deviation of ε_picc. This threshold should depend on the discretisation of the crack opening process to reflect the idea that there is less noise in the determination of K_op,cod if the discretisation of the COD curves is finer. We therefore define

ε_picc = Δ K/N_L· s.

Here, N_L denotes the number of load steps or sub-steps used for the discretisation of the crack opening process and Δ K = K_max-K_min. The factor s > 0 is adjustable and can be used to potentially make the criterion more robust. However, it is crucial that the linear character of the K_op,cod values, as illustrated in Figure <ref>, is present. We recommend keeping s as small as possible. Assuming that the crack continues to open uniformly up to the crack tip, the crack tip opening load K_op,ctod can be estimated by setting the slope of the linear model to zero (c=0) in Equation <ref>. It follows:

K_op,ctod≈ k

Table <ref> compares the values determined by applying the criterion to the FE analysis with those obtained from HR-DIC data. Comparing the K_op,ctod result with the sub-step of the FE crack growth simulation at which there is initially no contact across the crack faces during crack opening, K_op,FE = 0.43 · K_max = 6.47 MPa√(m), results in a deviation of less than 3 %. The K_op,ctod result based on HR-DIC data is larger than the FE-based value, which can be attributed to significantly higher contact pressures in real fatigue cracks. The large incremental crack advancement in the FE crack growth simulation reduces the plastic strain accumulation within the plastic zone, subsequently leading to lower contact pressures on the crack surface. In actual DIC data, measuring the crack opening this close behind the crack tip is not feasible due to the generally insufficient local resolution of the measurement. However, this method still allows for the determination of K_op,ctod. Generally, the experimental determination of this value is challenging, and a reliable procedure has not been found so far <cit.>. This methodology provides a potential approach to address this issue and thus improve the understanding of the crack opening kinematics of plasticity induced crack closure in experimental data.

§.§ 3D aspects of plasticity-induced crack closure

The FE and HR-DIC results agree on the crack opening at the free surface, so we assume that PICC in the experiment behaves similarly to the FE simulation. Contact elements in the crack plane allow us to examine the crack face contact through contact pressure distributions. Figure <ref> shows the contact pressure p_c on the crack surface as contour plots.
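Before discussing the three-dimensional contact results, the criterion and the resulting K_op,ctod estimate can be written compactly as below. The sketch regresses K_op,i on the shifted displacements λ_i; this reading makes the tolerance ε_picc = ΔK/N_L·s (which carries units of stress intensity) directly comparable to the fit residuals and lets the intercept k estimate K_op,ctod. It is our interpretation of the criterion rather than the authors' reference implementation, and the example values at the bottom are made up purely for illustration.

```python
import numpy as np

def picc_criterion(lam, k_op, delta_k, n_load_steps, s=1.0):
    """Check the PICC criterion for COD measurement points with x_cod < l_cc.
    lam  : minimum-shifted opening displacements lambda_i = COD_i - COD_min
    k_op : corresponding opening stress intensity factors K_op,i [MPa*sqrt(m)]
    Returns (picc_present, estimate of K_op,ctod)."""
    lam, k_op = np.asarray(lam, float), np.asarray(k_op, float)
    c, k = np.polyfit(lam, k_op, 1)             # linear fit K_op ~ c*lambda + k
    residuals = np.abs(k_op - (c * lam + k))
    eps_picc = delta_k / n_load_steps * s       # permissible deviation
    return bool(np.all(residuals < eps_picc)), k

# Made-up illustration with the experimental discretisation (30 load levels):
lam = np.array([0.4, 0.9, 1.5, 2.1])            # shifted CODs in micrometres
kop = np.array([6.9, 6.3, 5.6, 5.0])            # K_op,cod values in MPa*sqrt(m)
print(picc_criterion(lam, kop, delta_k=13.4, n_load_steps=30))  # (True, ~7.3)
```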
All four images indicate that the contact pressure is directed towards the specimen's surface as a result of PICC, aligning with the assertions by Dugdale <cit.> and Newmann <cit.>. They explain this to the significant influence of the plane stress state at the free surface of the specimen. Further FE-based studies have also identified that PICC is particularly prevalent under plane stress conditions <cit.>. Moreover, Figure <ref> shows that, due to the positive load ratio R=0.1, only a limited area directly behind the crack front stays in direct contact. Figure <ref> clarifies that the contact pressure p_c increases with an increasing K_max. The proportion of the contact area A_c to the total area A_ges also increases. At a=57.8mm the entire crack surface behind the crack front is in contact. Additionally, it is observed that the location of the maximum contact pressure p_c,max at a_f=47.8mm shifts from the specimen's surface towards its center. This is particularly evident in Figure <ref>d at a crack length of a_f=57.8mm. The variations in the contact pressure distributions can be attributed to the opposing effects of the plastic zone at the specimen's surface and the plane-strain in the specimen's center. As K_max increases, the plane stress state becomes more dominant, resulting in a higher proportion of the crack surface being in contact. Given that contact is primarily focused on the free surface, it is necessary to consider how the external contact of the crack faces influences the crack opening behaviour across the entire specimen thickness. This question is partially answered by Figure <ref>. In order to characterise crack opening, the crack opening curves of both CMOD and CTOD nodes (the first row of nodes behind the crack front) along the specimen thickness are examined in Figure <ref>. The basis for this investigation is the FE crack propagation simulation at a=27.8mm from Figure <ref>. Based on Figure <ref>a, it is apparent that CMOD measurements, regardless of the measurement position in the thickness direction, lead to the same K_op,cmod. Nevertheless, the algorithm-based evaluation (see Section <ref>) detects a K_op,cmod = 2.53MPa√(m), corresponding to 17% of K_max. The surface-near crack closure contact (z=1mm) just behind the crack tip influences the crack opening behavior along the entire specimen thickness. The K_op,ctod values close to the specimen surface feature the largest crack opening values at 47% of K_max. Figure <ref>b, besides illustrating the non-linearity due to crack closure, also shows another non-linear section starting from 70%-80% of the maximum load due to the large plastic strain accumulation in the plastic zone. This region is separated by K_pl as introduced in Section <ref> and is determined comparably to K_op. Vasco-Olmo <cit.> identify a similar non-linear behavior in their crack opening curves. They correlate it to the plastic strain accumulation in the plastic zone during crack opening. We support their hypothesis and extend it to three dimensions. The intermediate region between K_op,ctod and K_pl is characterized by linear-elastic crack opening without crack face contact. §.§ Effect of crack closure on plastic zone In the following, we investigate the effect of the crack closure contact on the shape and damage evolution within the plastic zone. We consider as plastic zone (PZ) allplastic deformations in front of the crack tip. Therefore, we select all elements with plastic deformation after the last opening load step in our FE simulations. 
The results are illustrated in Figure <ref>. Because of the symmetric FE model, only 1/4 of the entire PZ is displayed. Our results indicate that the PZ exhibits a shape that differs from the dog-bone model <cit.>. Similarly, the numerical shape investigations made by Camas <cit.> and Besel <cit.> are also in agreement with ours, although the effect of crack closure was not considered. Due to the extensive crack advancement, Δ a=0.08mm, employed in the FE simulation, it is only possible to reveal the primary PZ. Consequently, the cyclic PZ cannot be evaluated. In order to analyse the impact of crack surface contact, we conducted FE crack propagation simulations with and without contact definitions. Subsequently, we compared the shapes of the resulting primary PZ. The use of elements with a cuboid shape facilitates quantitative assessments concerning the shape characteristics of the primary PZ. Figure <ref> shows the PZ shapes with active contact definition at four different crack lengths. Table <ref> lists the results of selected descriptors for the shape of PZ. A summary about the parameter is illustrated in Figure <ref>a. Comparing the descriptors listed in the Table <ref>, the deviations are all less than the size of one element, i.e. less than 2 % in all analysed parameters. Therefore, we conclude that the influence of the crack closure contact on the shape of the primary PZ is negligible. Whether the contact is confined to the crack surface near the free surface or encompasses the entire crack surface appears to be unimportant. Furthermore, the contact pressure distributions correlate with the shape of the primary plastic zone, as shown in Figure <ref>. Figure <ref>b plots the fraction of crack surface being in contact s_c to the entire specimen thickness (see Figure <ref>a). The evaluation is based on the second row of elements behind the crack front to avoid numerical singularity effects that can result from the crack propagation algorithm. As illustrated in Figure <ref>b, full-crack surface contact is present when the plastic zone is greater in width than in height. Since the shape of the plastic zone depends on both K_max and the specimen thickness t <cit.>, this statement is so far only valid for the specimen under investigation (t=2mm, AA2024-T3). Nevertheless, the impact of crack closure contact on the shape of the cyclic plastic zone remains to be investigated. Figure <ref>b compares the shape of the plastic zone at the surface based on FE results with those based on the HR-DIC analysis at a crack length of a=27.8mm. Both plastic zones are delineated from the linear-elastically deformed material by the yield strength R_p0.2=350MPa. It is observed that both shapes almost coincide indicating that the experimental results agree with the numerical findings. Furthermore, since the upper wing of the plastic zone is greater in height than in width, it indicates that the contact is only present near the free surface of the specimen. We neglect the lower wing because it has been found that its shape is strongly influenced by a HR-DIC measurement artifact resulting from low quality speckle pattern. Nevertheless, the plastic zone based on HR-DIC data is estimated to be slightly larger, attributed to the inherent measurement noise in HR-DIC data. Because of the good agreement between the numerical and experimental results, we can analyse damage in the plastic zone due to plasticity induced crack closure. 
Therefore, we use the accumulation of plastic energy over a load cycle as damage describing parameter, following the suggestion by Vormwald <cit.>. In our analysis, we compare the results of an FE simulation with contact definition to one without contact definition. This comparison helps to identify the influence of the contact on the energy development within the plastic zone. Figure <ref> illustrates the distribution of plastic energy d U_pl/d N along the crack front z. The two curves shown in Figure <ref>a deviate less than 1%, but d U_pl/d N with contact definition is slightly larger. This effect results from the numerical singularity that arises from the first FE element behind the crack tip and whose influence is slightly increased by the contact pressure. We observe the same trend for the crack lengths shown in Figures <ref> b, c and d. The increased portion of the crack face being in contact combined with larger contact pressures increases the numerical singularity stress within the crack front element. This causes that the distance between the with contact and no contact curves increases. Only for larger crack lengths a_f=57.8mm the contact pressure appears to reduce the accumulation of plastic energy per load cycle. The contact pressure is no longer focused towards the free specimen surface. At a=57.8mm a plane stress state is pre-dominant for the entire plastic zone and the crack surface behind the crack front is in full contact. The findings indicate that when the contact pressure is concentrated on the free specimen surface, there is no notable impact on the plastic energy accumulation from PICC. However, when the maximum contact pressure point is shifted towards the specimen center and the entire crack face is in contact, PICC appears to exert a reducing effect on the plastic energy accumulation within the PZ. §.§ Fracture surface analysis Finally, we examined the fracture surface by scanning electron microscopy at different scales to find indications of crack surface contact near the surface. Figure <ref>a shows the fracture surface section whose displacement field is shown in Figure <ref>c and to which all COD analyses in Figure <ref> refer. The lower edge of the fracture represents the free specimen surface that has been investigated by the robot-supported HR-DIC measurement system. Before a=27.8mm, two different damaging mechanisms can be identified within the fracture surface. In the center, as shown in Figure <ref>b, striations are observed. This aligns with the literature towards intrinsic fatigue crack growth mechanisms in AA2024 <cit.>. However, since the previous analysis has shown that the crack closure contact due to PICC is focused towards the crack surface edge, we study the intrinsic mechanisms in that particular region. Figure <ref>c shows that the fatigue crack propagates in that region by ductile shear failure mechanisms leading to a rough fracture surface close to the specimen surface. At Δ K=13.4MPa√(m), fatigue crack growth in t=2mm thin MT(160) specimens made of AA2024-T3 tends to develop shear lips. It follows that the surface roughness in that region is increased. However, since the crack opening kinematics are still identical to those observed in the FE crack propagation simulation, this indicates that the increased surface roughness due to the shear lips has a minor influence on the pre-dominant PICC mechanism in this section of AA2024-T3 fatigue cracks. That finding supports research performed by Materna <cit.>. 
They concluded, based on numerical studies, that fracture surface roughness does not affect the crack closure behavior if PICC is the pre-dominant mechanism. § CONCLUSIONS In this study, we investigate the crack closure behaviour of AA2024-T3 based on numerical simulations and experimental data. We find an excellent agreement between the crack opening curves based on FE and HR-DIC data. Given that the FE crack propagation model only considers the effect of PICC, we conclude that PICC is the dominant crack closure mechanism in AA2024-T3 fatigue cracks in our studies for an MT(160) specimen with t=2mm and a Δ K=13.4MPa√(m). In addition, the agreement between simulation and experiment leads us to the following conclusions: * PICC can be identified by using the K_op,cod dependence on the COD measurement location. Here, K_op,cod only depends on the horizontal distance to the crack tip position (x_cod) and is independent of the vertical distance. However, we recommend positioning the measurement points as close as possible to the crack path, enabling a more stable identification of K_op,cod. * Using the variation of K_op,cod values along the crack path, we introduce a new method for determining the opening value directly behind the crack tip K_op,ctod. If PICC is present and the crack opening curves are shifted to a common minimum, the K_op,cod values form a linear relationship that can be approximated by linear regression. * The study supports the general assumption that PICC induces crack surface contact focused towards the free specimen surface. However, we show that even if there is only crack surface contact near the free surface of the specimen, it influences the crack opening behaviour throughout the specimen thickness. * Considering a fatigue crack in AA2024 and a t=2mm thick MT(160) specimen, we found that the shape of the plastic zone is correlated with the crack surface contact situation. If the width of the plastic zone is larger than its height, this indicates that the entire crack surface close to the crack front is in direct contact. * Regarding the retardation effect of PICC on the damage accumulation within the plastic zone, we find that the crack closure contact does not affect the plastic energy accumulation if the contact is directed towards the free surface of the specimen. If the entire crack surface is in contact, this seems to reduce the energy accumulation. * Based on fractographic investigations, we observe that fatigue cracks in AA2024-T3 at Δ K=13.4MPa√(m) tend to develop shear lips. However, the increased surface roughness does not appear to affect the PICC mechanism. § ACKNOWLEDGEMENTS We acknowledge the financial support of the DLR-Directorate Aeronautics. This work was supported by the Deutsche Forschungsgemeinschaft, Germany (DFG) via the project Experimental analysis and phase-field modeling of the interaction between plastic zone and fatigue crack growth in ductile materials under complex loading (grant number BR 6259/2-1). Furthermore, funding came from the Federal Ministry for Economic Affairs and Climate Action, Germany, on the basis of a decision by the German Bundestag, within the framework of the aerospace program LuFo-VI of the project "Intelligent FSW Process Monitoring" (Funding ID 20W2201E). § DATA AVAILABILITY The APDL finite element simulation code and data as well as the experimental displacement fields will be publicly available on GitHub (<https://github.com/dlr-wf>) and Zenodo (<10.5281/zenodo.13643861>).
§ COMPETING INTERESTS The Authors declare no Competing Financial or Non-Financial Interests. § AUTHOR CONTRIBUTIONS F.P. conceived the idea, conducted the simulations and experiments, evaluated the results, joined the discussions and wrote the manuscript. D.M. joined the discussions and wrote the manuscript. E.B. joined the discussions and wrote the manuscript.
http://arxiv.org/abs/2409.02409v1
20240904032331
Discrete, Kinetic, and Hydrodynamic Descriptions of the Euler Alignment System with Adaptive Communication Strength
[ "Roman Shvydkoy", "Trevor Teolis" ]
math.AP
[ "math.AP" ]
Kinetic Model of Euler Alignment system with adaptive strength]Discrete, Kinetic, and Hydrodynamic descriptions of the Euler Alignment System with adaptive communication strength 851 S Morgan St, M/C 249, Department of Mathematics, Statistics and Computer Science, University of Illinois at Chicago, Chicago, IL 60607 [email protected] 851 S Morgan St, M/C 249, Department of Mathematics, Statistics and Computer Science, University of Illinois at Chicago, Chicago, IL 60607 [email protected] 37A60, 92D50 Acknowledgment. This work was supported in part by NSF grant DMS-2107956 and Simons Foundation. § ABSTRACT The -model, introduced in <cit.>, is an alignment model with the property that the strength of the alignment force, , is transported along an averaged velocity field. Inspired by the 1D threshold regularity criterion for the Cucker-Smale model in terms of the so-called e-quantity, e = ∂_x u + ρ∗ϕ, the transport of the strength in the -model was explicitly designed so that it possesses its own e-quantity, e = ∂_x u +, which yields an analogous threshold regularity criterion for global well-posedness in 1D. Owing to the existence of this e-quantity, it also possesses similar long-time behavior to the Cucker-Smale model in 1D <cit.>. The utility of the -model is that it also has the versatility to behave qualitatively like the Motsch-Tadmor model, for which global regularity theory is not known. The goal of this paper is to put the -model on more firm physical grounds by formulating and justifying the microscopic and mesoscopic descriptions from which it arises. A distinctive feature of the microscopic system is that it is a discrete-continuous system: the position and velocity of the particles are discrete objects, while the strength is an active continuum scalar function. We establish a rigorous passage from the microscopic to the mesoscopic description via the Mean Field Limit and a passage from the mesoscopic to the macroscopic description in the monokinetic and Maxwellian limiting regimes. We present a survey of such results for the Cucker-Smale model and explain how to extend these arguments to the -model, where the strength of the alignment force is transported. We also address the long-time behavior of the kinetic Fokker-Planck-Alignment equation by establishing the relaxation to the Maxwellian in 1D when the velocity averaging is the Favre averaging. As a supplement to the numerical results already presented in <cit.>, we provide additional numerical evidence, via a particle simulation, that the -model behaves qualitatively like Motsch-Tadmor. [ Trevor Teolis September 9, 2024 ===================== § INTRODUCTION Collective patterns and behavior can emerge from simply interacting agents. Flocks of birds, schools of fish, and flashing of fireflies are examples of systems of simply interacting agents where there is a clear emergent behavior of the whole system. This has spurred much interest in recent years in the mathematical modeling of this large class of phenomena. The question is: how do we create a model that describes the behavior of a large class of communicating agents and is mathematically tractable in the sense of well-posedness analysis and provability of the emergent phenomena? Many classical alignment models possess some of these features. The 3-zone model was developed in 1987 by Craig Reynolds, a computer animator, who was interested in developing realistic models of birds for cinema, see <cit.>. 
It is characterized by a repulsion force in the close range, alignment in the medium range, and attraction in the far range. The repulsion force makes it challenging to describe the long-time behavior due to the lack of a natural equilibrium state. The Viscek model proposed in <cit.> incorporates stochastic forces that facilitate phase transitions as the strength of noise varies – an important feature of emergent dynamics. It evolves in discrete time steps whereby agents get assigned the average and normalized velocities of nearby agents. Phase transitions were observed numerically, however the model at the moment lacks a satisfactory analysis, see <cit.>. In 2007 <cit.> Cucker and Smale proposed another alignment model with the following key features: it weights the alignment force with a radial interaction kernel ϕ inversely proportional to the distance between agents. The model allows for an array of different dynamics and, most importantly, it was the first model that allowed a proof of alignment which depended only on the initial state and not on perpetual connectivity of the flock. Moreover, the long-time behavior holds under the large crowd limit N→∞, which allows to carry such results over to the kinetic and macroscopic descriptions, <cit.>. The universality of description and mathematical tractability of the model made it the subject of a series of studies surveyed for example in <cit.>. Despite its success, however, the Cucker-Smale model does not respond well in certain modeling scenarios. Motsch and Tadmor argued in <cit.> that in heterogeneous formations when, say, a small subflock separates itself from a distant large flock, its internal forces become annihilated by the latter if subjected to the Cucker-Smale protocol <cit.> creating unrealistic behavior. A proposed renormalization of the averaging operation restored the balance of forces and the alignment results under long range communication were proved similar to those of Cucker and Smale at the microscopic level and in the large crowd limit, <cit.>. Such renormalization, however, destroys the symmetry of the alignment force and possible vacuum formation makes the system more singular, which is the reason for the lack of a coherent well-posedness theory at the moment. The -model has been designed to have an adaptive averaging protocol which allows for many similar regularity properties as those of the Cucker-Smale model, while maintaining the same level of universality as that of the Motsch-Tadmor model. One downside of the adaptive as opposed to prefixed protocol is the lack of control over kinematic properties of the alignment force, which makes the analysis of the long-time behavior a challenging but interesting problem, see <cit.>. The goal of this paper is to (1) introduce the microscopic and mesoscopic counterparts to the previously studied macroscopic description of the -model in <cit.>; (2) to establish a rigorous passage between the levels of description; and (3) to establish the relaxation to a thermodynamic state in one dimension for the mesoscopic description when the velocity averaging is the Favre averaging. Before we turn to the technical description of the results let us present a more detailed motivation of the -model by giving a brief survey of the classical Cucker-Smale and Motsch-Tadmor alignment models as well as the more general environmental averaging models from which it originated. 
§.§ Cucker-Smale model The pressureless Euler alignment system based on the Cucker-Smale model on ^n or ^n is given by ∂_t ρ + ∇· (ρ̆) = 0 ∂_t +̆·̆∇=̆ (ρ̆)_ϕ - ρ̆_ϕ where ρ, $̆ are the density and velocity of the flock andϕ≥0is a smooth radially decreasing communication kernel. The notationf_ϕ = f ∗ϕdenotes a convolution. Provided the kernel has a fat tail (i.e. is non-integrable), ∫_0^∞ϕ(r) dr = ∞ this model admits alignment and flocking: there existsδ> 0such that lim_t →∞(ρ) < ∞, lim_t →∞sup_x,y ∈(ρ) |(̆t,x) - (̆t,y)| ≤ C e^-δ t It is a well-known result of Carillo, Choi, Tadmor, and Tan <cit.> that e = ∂_x u + ρ_ϕ, ∂_t e + ∂_x(ue) = 0 provides a threshold regularity criterion in 1D: the solution remains smooth iffe_0 ≥0. In fact, it is not only useful for global regularity; it is also instrumental in proving 1D strong flocking (i.e. convergence to a limiting distribution) and estimating the limiting distribution, <cit.>. §.§ Motsch-Tadmor model It was observed in <cit.> that the Cucker-Smale model displays unrealistic behavior in heterogeneous formations. In particular, when there is a small mass flock and a faraway, large mass flock, the internal dynamics of the small mass flock are hijacked by the large mass flock. The Motsch-Tadmor model was introduced in order to resolve this behavior. It was proposed that the strength of the alignment force on a particle should be scaled by the total influence on that particle. With this modification, we get: ∂_t ρ + ∇· (ρ̆) = 0 ∂_t +̆·̆∇=̆1/ρ_ϕ ((ρ̆)_ϕ - ρ̆_ϕ) We will explain what this fix does to restore the balance of forces in Section <ref>. It was shown in <cit.> that (<ref>) aligns under the same fat tail kernel condition (<ref>). The cost is that it no longer possesses thee-quantity and, as a result, there is no known 1D threshold regularity criterion. The-model aims to fix this lack of a threshold regularity criterion while also retaining the desired qualitative behavior of the Motsch-Tadmor model in heterogeneous formations. Before we introduce it, we will mention the environmental averaging models from which it originated. §.§ Environmental Averaging Models Many alignment models are characterized by an alignment force which pushes the velocity towards an averaged velocity field. The Vicsek, Cucker-Smale, and Motsch-Tadmor models are among many examples. For the Cucker-Smale model, we can rewrite the alignment force as F = s_ρ ( - )̆, _ρ = ρ_ϕ, = (ρ̆)_ϕ/ρ_ϕ . The so-called strength_ρof the alignment force is a fixed function which depends on the density; andis an averaged velocity field. Written in this form, it is more clear that it is an alignment model– the velocity$̆ is being pushed towards the averaged velocity . Alignment forces which take this more general form are called environmental averaging models and the macroscopic version is given by: ∂_t ρ + ∇· (ρ̆) = 0 ∂_t +̆·̆∇=̆_ρ ( - )̆ . They generalize the Cucker-Smale model by treating _ρ and as arbitrary modeling components in a proper functional framework. The general theory of environmental averaging models has been developed extensively in <cit.>. An important class of models are those for which the velocity averaging has an integral representation against a smooth kernel Φ_ρ: = ∫Φ_ρ(x,y) (̆y) ρ(y) , Φ_ρ is smooth. Models whose velocity averaging has the form (<ref>) will be an important sub-class for the as-yet defined -model since the regularity of the model is tied to the regularity of the kernel Φ_ρ(x,y). 
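In discrete form, with ρ = ∑_j m_j δ_{x_j}, the Favre average is a convex combination of the particle velocities, which is easy to verify numerically. The small sketch below makes this explicit; the one-dimensional positions and the specific kernel are illustrative assumptions only.

import numpy as np

def favre_average(x, v, m, phi):
    """Discrete Favre average (rho*u)_phi / rho_phi at the particle positions:
    a convex combination of the v_j with weights m_j*phi(|x_i-x_j|)/rho_phi(x_i)."""
    K = phi(np.abs(x[:, None] - x[None, :])) * m[None, :]   # m_j * phi(|x_i - x_j|)
    weights = K / K.sum(axis=1, keepdims=True)              # each row sums to 1
    return weights @ v

rng = np.random.default_rng(0)
x, v, m = rng.random(100), rng.normal(size=100), np.full(100, 0.01)
u_F = favre_average(x, v, m, lambda r: 1.0 / (1.0 + r**2))
assert np.all((u_F >= v.min() - 1e-12) & (u_F <= v.max() + 1e-12))  # averaging property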
The representation (<ref>) holds for many classical models: Cucker-Smale (M_CS), Motsch-Tadmor (M_MT), the overmollified version of (M_MT) introduced in <cit.>, = ( (ρ̆)_ϕ/ρ_ϕ)_ϕ, _ρ = 1, M_ϕ the segregation model based on a partition of unity ∑_l=1^L g_l = 1, = ∑_l=1^L g_l(x) ∫_Ø u g_l ρ/∫_Ø g_l ρ , _ρ = 1, M_seg and other multi-flock, multi-species variants of the above, see <cit.>. All of these models possess a representation kernel as shown in table <ref>. Strength _ρ Kernel Φ_ρ(x,y) M_CS ρ_ϕ ϕ(x-y)/ρ_ϕ(x) M_MT 1 ϕ(x-y)/ρ_ϕ(x) M_ϕ 1 ∫_Øϕ(x-z) ϕ(y-z)/ρ_ϕ(z) M_seg 1 ∑_l=1^L g_l(x) g_l(y)/∫_^n g_l ρ We can see that many of the models have the density-dependent renormalization at its core, _F: = (ρ̆)_ϕ/ρ_ϕ. This is known as the Favre filtration, which was introduced in the context of non-homogeneous turbulence in <cit.>. In our context, we will refer to it as the "Favre averaging" since it represents the averaged velocity. The Favre-averaged velocity is given by _F := (ρ̆)_ϕ/ρ_ϕ §.§ The -model The -model is a descendant of the environmental averaging models and was introduced in <cit.> at the macroscopic level. The only adendum to the environmental averaging models is that is no longer a fixed function of the density, but instead evolves according to its own evolution equation along the average velocity field. This has a profound impact on the 1D regularity theory at the macroscopic level of description, and subsequently it lends to stronger long-time behavior results, all of which resemble the favorable behavior of the Cucker-Smale model in this more general setting. It was observed in <cit.> that the existence of the e-quantity in the Cucker-Smale case is owed to the transport of the strength function, _ρ = ρ_ϕ, along the Favre-averaged velocity field: ∂_t ρ_ϕ + ∂_x (ρ_ϕ_̆F) = 0 . The new model therefore proposes that is more natural for the strength to evolve according to its own transport equation along the averaged velocity field: ∂_t + ∂_x () = 0 . The -model is given by SM∂_t ρ + ∇· (ρ̆) = 0 ∂_t + ∇· () = 0, ≥ 0 ∂_t +̆·̆∇=̆( - )̆ and, by design, it admits the e-quantity in 1D e = ∂_x u + , ∂_t e + ∂_x(ue) = 0. The special case where the velocity averaging is the Favre averaging, i.e. = _̆F (see Definition <ref>), bears a similar alignment force to the classical Cucker-Smale and Motsch-Tadmor alignment models. Indeed, we can cast it into a similar form by introducing a new variable , which we call the "weight", defined by = ρ_ϕ. With this change of variables, satisfies a pure transport along the Favre-averaged velocity field ∂_t + _̆F ·∇ = 0 and the system becomes WM∂_t ρ + ∇· (ρ̆) = 0 ∂_t + _̆F ·∇ = 0, ≥ 0 ∂_t +̆·̆∇=̆ ((ρ̆)_ϕ - ρ̆_ϕ) We obtain a model that looks like a hybrid of Cucker-Smale and Motsch-Tadmor. We refer to this particular variant of the -model as the -model. Setting _0 = 1, we recover the Cucker-Smale model; and setting _0 = 1/(ρ_0)_ϕ, we recover the Motsch-Tadmor model at the initial time (at later times, the transport of the weight will cause it to deviate from the Motsch-Tadmor weight). Due to the existence of the e-quantity and the similar structure of the alignment force to the Cucker-Smale model, many classical results of the Cucker-Smale case were extended to the -model in our joint work <cit.>. 
The following is a list of the results which were obtained: * Local well-posedness and global well-posedness for small data in multi-D; * Global well-posedness in 1D and for unidirectional flocks in multi-D under the threshold e ≥ 0; * L^∞-based alignment; conditional L^2-based alignment; * Strong flocking in 1D (i.e. convergence of the density to a limiting distribution); * Estimates on the limiting distribution of the flock in 1D. The importance of the -model is that it retains the regularity and alignment characteristics of the Cucker-Smale system while also having the versatility to behave qualitatively like the Motsch-Tadmor model (when _0 = 1 / (ρ_0)_ϕ). In other words, the -model can be thought of as a more analytically tractable version of the Motsch-Tadmor model. Numerical evidence of the qualitative similarities between the -model and the Motsch-Tadmor model has been presented at the macroscopic level in 1D in <cit.>. Here, we present, perhaps more clear, numerical evidence of this qualitative behavior by showcasing simulations of the microscopic system in Section <ref>. First, we introduce the microscopic and mesoscopic versions of the -model, and hence also the -model as a particular case. §.§ Microscopic and mesoscopic levels of description The -model was introduced in <cit.> at the level of the hydrodynamic description, while the microscopic and mesoscopic levels have remained unattended. To fill this gap, we introduce and justify them here. Due to its ancestral relation to the Cucker-Smale model, the microscopic and mesoscopic systems for the -model will have a similar structure to the Cucker-Smale versions, albeit with important differences. As a reference point for the reader, we will first present the microscopic and mesoscopic descriptions for the Cucker-Smale model followed by those for the -model, highlighting the differences between the two. We could have also juxtaposed the -model to the environmental averaging models, which also have a well developed theory at all levels of description. §.§.§ Cucker-Smale: microscopic and mesoscopic descriptions The classical discrete Cucker-Smale model, originally introduced by Cucker and Smale in <cit.>, is given by an ODE system of N interacting agents: ẋ_̇i̇ = v_i v̇_̇i̇ = λ∑_j=1^N m_j ϕ(x_i - x_j)(v_j - v_i) . where x_i, v_i are the velocity and positions of the agents and λ > 0 is a scalar that affects the strength of the alignment force. Of course, it aligns and flocks under the same fat tail condition on the kernel, (<ref>). As the number of particles N →∞, it is more convenient to look at the evolution of the probability density, f(t,x,v), of finding a particle at position x and velocity v at time t instead of tracking individual particle trajectories. The evolution equation for the probability density, f(t,x,v), is given by a kinetic Vlasov equation derived according to the BBGKY hierarchy <cit.>: ∂_t f + v ·∇_x f + λ∇_v · [f F(f)] = 0, F(f)(t,x,v) = ∫_^2nϕ(|x-y|)(w - v)f(t,y,w) . The passage from the (<ref>) to (<ref>) in the scaling regime where the range of the communication between particles remains independent of N, i.e. the mean field limit, was established in <cit.>. In practice, one cannot physically observe the probability density function. The physically observable (or macroscopic) variables are the density and the momentum, which are defined by ρ(t,x) = ∫_^n f(t,x,v) , (ρ)̆(t,x) = ∫_^n v f(t,x,v) . 
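Numerically, these macroscopic fields are just v-quadratures of a tabulated distribution; a minimal sketch on a uniform (x, v) grid (the grid and the simple Riemann sum are assumptions for illustration):

import numpy as np

def macroscopic_moments(f, v, dv):
    """Density and momentum of a kinetic distribution f sampled on an (x, v)
    grid: rho(x) = sum_v f*dv, (rho*u)(x) = sum_v v*f*dv."""
    rho = f.sum(axis=1) * dv
    mom = (f * v[None, :]).sum(axis=1) * dv
    return rho, mom

# usage sketch: f.shape == (nx, nv), v the velocity grid, dv its spacing
# rho, mom = macroscopic_moments(f, v, dv); u = mom / np.maximum(rho, 1e-15)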
To determine the evolution of macroscopic variables, one computes the v-moments of the kinetic Vlasov equation and obtains ∂_t ρ + ∇· (ρ̆) = 0 ∂_t (ρ)̆ + ∇· (ρ⊗̆)̆ + ∇· = ρ ((ρ̆)_ϕ - ρ̆_ϕ), where is the Reynolds stress tensor (t,x) = ∫_^n (v - (̆t,x)) ⊗ (v - (̆t,x)) f(t,x,v) . This results in the closure problem: the system (<ref>) depends not only on the macroscopic variables ρ and $̆, but also on the kinetic distributionf. The system can be closed by adding a local alignment force which pushes the distributionfto be concentrated around the macroscopic velocity and thus removes the dependence onf. Two such alignment forces have been considered in the literature corresponding to the monokinetic and Maxwellian limiting regimes. For background, we will survey the monokinetic and Maxwellian limits for the Cucker-Smale model in Section <ref>. The macroscopic pressureless Euler alignment system (<ref>) arises from the monokinetic limiting regime. The as-yet defined macroscopic isentropic Euler alignment system (<ref>) (discussed in Section <ref>) arises from the Maxwellian limit. §.§.§ -model: microscopic and mesoscopic descriptions The microscopic-model stands uniquely from the other microscopic alignment models as a discrete-continuous system. The strength satisfies its transport equation along the averaged velocity field and it is therefore not a discrete object. Instead, it is an active continuum scalar function that is transported along the continuous field(·, x). As far as the alignment force is concerned, it only keeps track ofandat the discrete pointsx_i. The discrete densityρ^N ∈_M(^n)is given byρ^N = ∑_j=1^N m_j δ_x_jand the discrete velocity is given byu^N = ∑_j=1^N v_j 1_x_j. The microscopic description of (<ref>) is then given by an ODE system describing the position and velocity of particles coupled with a PDE describing the transport of the strength function: ẋ_̇i̇ = v_i v̇_̇i̇ = λ(x_i) ((x_i) - v_i ), ρ^N = ∑_j=1^N m_j δ_x_j, u^N = ∑_j=1^N v_j 1_x_j, ∂_t + ∇_x · () = 0 . Once again,λ> 0is a scalar that affects the strength of the alignment force. Althoughappears to be a discrete object, observe that when the velocity averaging satisfies the integral representation (<ref>), the velocity averaging is given by (t,x) = ∑_j=1^N m_j Φ_ρ^N(x, x_j) v_j(t) . In this case,is a smooth object wheneverΦ_ρ^Nis smooth. The well-posedness of this discrete-continuous system for smooth kernelsΦ_ρ^Nis proved in Section <ref>. The corresponding mesoscopic description is given by a Vlasov-type equation coupled with a transport of the strength function: f_t + v ·∇_x f + λ∇_v ( (v - ) f) = 0 ∂_t + ∇_x · () = 0 . For the mesoscopic case, when the velocity averaging satisfies the integral representation (<ref>), the velocity averaging is given by (x) = ∫_^nΦ_ρ(x,y) ∫_^n v f(t,y,v) . From table <ref>, we see that the discrete Cucker-Smale model (<ref>) can be recovered by setting= ρ^N_ϕandΦ_ρ(x,y) = ϕ(x-y) / ρ_ϕ(x). Sinceρ_ϕautomatically satisfies the transport along the averaged velocity, the strength equation drops out. For the kinetic system, (<ref>) is recovered similarly by setting= ρ_ϕandΦ_ρ(x,y) = ϕ(x-y) / ρ_ϕ(x). For the-model, the mean field passage from the microscopic description (<ref>) to the mesoscopic description (<ref>) necessitates some uniform regularity onand. 
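To make the discrete-continuous structure concrete, here is a minimal one-dimensional NumPy sketch of a single time step for this system on the unit torus, using the Favre choice for the averaging and a first-order upwind finite-volume step for the strength equation. The kernel, the grid resolution, and the explicit Euler update are illustrative simplifications, not the scheme used later in the paper.

import numpy as np

def phi(r):
    return 1.0 / (1.0 + r**2)                      # illustrative kernel (assumption)

def torus_dist(a, b):
    d = np.abs(a[:, None] - b[None, :])
    return np.minimum(d, 1.0 - d)                  # distance on the unit circle

def averaged_velocity(xq, xp, vp, m):
    """Averaged field at query points xq for the Favre choice
    ubar = (rho*u)_phi / rho_phi; other kernels Phi_rho fit the same mould."""
    W = phi(torus_dist(xq, xp)) * m                # m_j * phi(|x - x_j|)
    return (W @ vp) / W.sum(axis=1)

def upwind_strength_step(s, ubar_faces, dx, dt):
    """First-order upwind finite-volume step for d_t s + d_x(s*ubar) = 0
    on a periodic grid; ubar_faces holds ubar at the cell interfaces."""
    donor = np.where(ubar_faces > 0.0, s, np.roll(s, -1))
    flux = ubar_faces * donor
    return s - dt / dx * (flux - np.roll(flux, 1))

def step(xp, vp, m, s, xg, dx, dt, lam=1.0):
    """One explicit Euler step: particles feel the force
    lam * s(x_i) * (ubar(x_i) - v_i), while the strength s lives on the
    grid of cell centers xg."""
    ubar_p = averaged_velocity(xp, xp, vp, m)
    cell = (np.floor(xp / dx).astype(int)) % s.size
    vp = vp + dt * lam * s[cell] * (ubar_p - vp)
    xp = (xp + dt * vp) % 1.0
    faces = (xg + 0.5 * dx) % 1.0
    s = upwind_strength_step(s, averaged_velocity(faces, xp, vp, m), dx, dt)
    return xp, vp, s

# usage sketch: n cells, N particles, any smooth nonnegative initial strength
n, N, dt = 128, 200, 1e-3
dx, xg = 1.0 / n, (np.arange(n) + 0.5) / n
rng = np.random.default_rng(0)
xp, vp, m = rng.random(N), rng.normal(size=N), np.full(N, 1.0 / N)
s = np.ones(n)
for _ in range(1000):
    xp, vp, s = step(xp, vp, m, s, xg, dx, dt)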
Let us make the following observations of the Cucker-Smale model, which will influence how we handle the mean field passage for the-model: * the regularity of the strength comes for free when the communication kernel ϕ is smooth, and * the regularity of the velocity averaging is tied to the regularity of the kernel Φ_ρ . For the-model, the regularity of the strength will not come for free, but we will see later that it is also tied to the regularity of the kernelΦ_ρ. Therefore, instead of imposing regularity conditions on the solution, we will subordinate the uniform regularity assumptions to the kernelΦ_ρ, (<ref>) and (<ref>), which are sufficient for achieving the desired regularity on the velocity averaging and the strength, provided that theL^1norm of the momentum is bounded. The mean field limit under these uniform regularity assumptions on the kernelΦ_ρwill be established in Section <ref>. This passage will simultaneously show existence and uniqueness of solutions to (<ref>) and that the solutions arise as a limit, in some sense, of solutions to (<ref>). Taking thev-moments of the Vlasov equation (<ref>), we obtain ∂_t ρ + ∇· (ρ̆) = 0 ∂_t + ∇· () = 0 ∂_t (ρ)̆ + ∇· (ρ⊗̆)̆ + ∇· = ρ( - )̆, where the densityρand momentumρ$̆ are defined as in (<ref>) and is the same Reynolds Stress tensor that appeared in the Cucker-Smale case, (<ref>). This results in the same closure problem and it will be addressed in the same way by adding a local alignment force. The monokinetic and Maxwellian limits for the -model will be proved in Section <ref>. The pressureless macroscopic system (<ref>) arises from the monokinetic limiting regime. The as-yet defined isentropic macroscopic system (<ref>) (introduced in Section <ref>) arises from the Maxwellian limit. §.§ Numerical evidence for similar qualitative behavior to Motsch-Tadmor To make the case for physical relevance of the -model, we present numerical evidence, at the microscopic level, that the -model with _0 = 1/ρ_ϕ, and hence the -model, displays similar qualitative behavior to that of Motsch-Tadmor in heterogeneous formations. We first clarify the qualitative behavior that we are seeking. §.§.§ Qualitative behavior of Motsch-Tadmor Here, we reproduce the motivation for the Motsch-Tadmor model given in <cit.>. Let m, M with M >> 1 ≈ m be the masses of the small and large flock, respectively. Let I = {i : agent i is in the small flock} and define J similarly for the large flock. Suppose that the flocks are far enough away so that M ∑_j ∈ Jϕ(|x_i - x_j|) << 1 for all i ∈ I and that the agents in the small flock are close enough so that ϕ(|x_i - x_i'|) ≈ 1 for all i, i' ∈ I. Then for any agent i, the alignment force of the Cucker-Smale system (<ref>) can be written as v̇_̇i̇ = λ/m + M( ∑_j ∈ J m_j ϕ(|x_i - x_j|) (v_j - v_i) + ∑_i' ∈ I m_i'ϕ(|x_i - x_i'|) (v_i' - v_i) ) := A_J + A_I. Since the velocities are bounded and the flocks are far away, A_J << 1. On the other hand, since M >> m, we also have A_I << 1. The result is that d/dt v_i << 1 for all i ∈ I so the small flock proceeds with essentially no force on it. The small flock is, in this sense, "hijacked" by the large flock. To see how the Motsch-Tadmor fixes this issue, let us introduce the discrete version of (<ref>): ẋ_̇i̇ = v_i v̇_̇i̇ = λ/∑_j=1^N m_j ϕ(|x_i - x_j|)∑_j=1^N m_j ϕ(x_i - x_j)(v_j - v_i) . Under the same assumptions and notation as before, we estimate the alignment force. 
Since the large and small flock are far away, the strength of the alignment force any agent in the small flock, i ∈ I, is approximately λ/∑_i' ∈ I^N m_i'ϕ(|x_i - x_i'|) + ∑_j ∈ J^N m_j ϕ(|x_i - x_j|)≈λ/∑_i' ∈ I^N m_i'ϕ(|x_i - x_i'|) . The interactions with the agents from the large flock also drop out as before. So, the total alignment force on an agent i ∈ I is approximately d/dt v_i ≈λ/∑_i' ∈ I^N m_i'ϕ(|x_i - x_i'|)∑_i' ∈ I m_i'ϕ(|x_i - x_i'|) (v_i' - v_i) ) The small flock then behaves according to the Cucker-Smale model, but independently of the large flock. In our -model simulation, we are looking for this qualitative behavior. In particular, we aim to see: (Q_cs) For the Cucker-Smale model: The small flock proceeds linearly as if there were no force on it. (Q_) For the -model with _0 = 1/(ρ_0)_ϕ: The small flock behaves according to Cucker-Smale, but independently of the large flock. The velocities of the small flock will therefore align to the average velocity of the small flock. §.§.§ Discrete -model Let us state the discrete -model as a special case of (<ref>). Setting Φ_ρ^N(x,y) = ϕ(x-y) / (ρ^N)_ϕ where ρ^N ∈_M(^n) is given by ∑_j=1^N m_j δ_x_j(t) in (<ref>), we get = ∑_j=1^N ϕ(|x_i - x_j|) (v_j - v_i) . Recalling that = ρ^N_ϕ, we obtain the discrete -model: ẋ_̇i̇ = v_i v̇_̇i̇ = λ(x_i) ∑_j=1^N m_j ϕ(|x_i - x_j|) (v_j - v_i) ∂_t + ·∇_x = 0 . §.§.§ Numerical simulation The solutions are computed on the 2D unit torus, ^2. We consider initial data consisting of a small mass and a faraway large mass flock. The aim is to illustrate that, in solutions to the Cucker-Smale system, the small flock proceeds linearly as if there were no force on it (Q_cs); and that, in solutions to the -model (with _0 = 1/(ρ_0)_ϕ), the small flock aligns to the average velocity of the small flock, independently of the large flock (Q_). The parameters of the experiment are as follows. * The scalar strength of the alignment force is λ = 10. * ϕ(r) = 1/(1 + r^2)^80/2. * ρ_0^N is identical in the Cucker-Smale and the -model simulation. It is shown in Figure <ref> (and Figure <ref>) as the leftmost picture. The kernel is periodized so that the distance r measures the distance on ^2. The large mass flock is indicated by red particles and the small mass flock is indicated by green particles. Each red particle has 100 times the mass of a green particle. We see that the small agents in the Cucker-Smale case proceed linearly while the small agents in -model case align to the average velocity of the small flock. This is the desired qualitative behavior. §.§ Relevant quantities and Notation Let us introduce relevant quantities and notation. ^n is the n-dimensional Torus. _M(Ω) denotes the set of measures on Ω with total mass M = ∫_Ωρ(x). R > 0 is a bound on the maximum velocity of the initial flock, i.e. μ_0 ⊂^n × B_R where B_R is the ball of radius R in ^n centered around zero. C^k is the space of k continuously differentiable functions with the usual norm f_C^k = ∑_i=0^k f_C^i. As in the introduction, we will write f_ϕ := f ∗ϕ to denote convolutions. Subscripts - and + will be used as a shorthand for infima and suprema. For instance, f_- = inf_x ∈^n f(x), f_+ = sup_x ∈^n f(x). We will use (f_1, f_2) = ∫_^n f_1 f_2 to denote the L^2 inner product and (f_1, f_2)_h = ∫_^n f_1 f_2 h to denote the weighted L^2 inner product. As per the notation in <cit.>, we will use the notation κ_ρ = ρ. For instance, (·, ·)_κ_ρ denotes the L^2 inner product with respect to the measure κ_ρ. 
When a constant C depends on a parameter α, for instance, we will write C := C(α). Finally, let us define J, which will be a key quantity in the inheritance of regularity of the strength. We will use J to denote the maximum L^1 norm of the velocity with respect to the density measure on the time interval [0,T]: J := sup_t ∈ [0,T](̆t, ·)_L^1(ρ) = sup_t ∈ [0,T]∫_Ω (|| ρ)(t,x) . §.§ Outline The remaining goal of this paper is to establish the passage between the microscopic, mesoscopic, and macroscopic levels of descriptions of the -model and to address the relaxation to the Maxwellian in one dimension for the mesoscopic description. A key observation we make in this paper is that, when J < ∞ (given in Definition <ref>) and satisfies the integral representation (<ref>), the regularity conditions needed on can be subordinated to regularity conditions on the kernel Φ_ρ(x,y), (<ref>) and (<ref>). That is, and inherit regularity from the kernel Φ_ρ. This inherited regularity resembles the 'uniform regularity' conditions outlined in <cit.> for the environmental averaging models. The only result considered in the paper for which we do not have inheritance of regularity is the Maxwellian limit. There, we instead resort to a set of continuity conditions on the strength and the velocity averaging, (<ref>)-(<ref>), which are stated in Section <ref>. Notably, these continuity conditions hold a priori for the -model when the communication kernel ϕ is bounded away from zero. The -model therefore serves as an important example where all of our results hold. The rest of the paper will be organized as follows. In Section <ref>, we will show that, when J < ∞, the velocity averaging and strength inherit the regularity of the kernel. To prepare our arguments for the passage from the microscopic to macroscopic system for the -model, we will survey the results related to the passage for the environmental averaging models, but catered to the Cucker-Smale model, in Sections <ref> and <ref>. Indeed, the mathematical tools and arguments used in the passage for the -model follow a similar outline to those used for the environmental averaging models, but adapted to accommodate for the transport of the strength. In preparation for the mean field limit, the well-posedness of the microscopic -model (<ref>) is established in Section <ref>. The mean field limit is proved in Section <ref>. The hydrodynamic limits are proved in Section <ref>. Finally, in Section <ref>, we establish the relaxation to the Maxwellian in 1D for the mesoscopic -model (<ref>) provided the variation of the weight is small. §.§ Assumptions Our results will be stated for the torus ^n. Letting ρ, ρ', ρ”∈_M(^n), we assume throughout the paper, unless stated otherwise, that has the integral representation (<ref>) and that its reproducing kernel Φ_ρ(x,y) ≥ 0 satisfies the following uniform regularity assumptions: Reg1 ∂_x,y^k Φ_ρ_∞≤ C(k,M) Reg2 ∂_x,y^k (Φ_ρ'- Φ_ρ”) _∞≤ C(k,M) W_1(ρ', ρ”), where W_1 is the Wasserstein-1 distance. In order to guarantee that is an averaging operator, we also assume that Φ_ρ is right stochastic: ∫_^nΦ_ρ(x,y) ρ(y) = 1. Table <ref> lists models for which satisfies the integral representation (<ref>). All of these models– namely, M_CS, M_MT, M_ϕ, and M_seg– have kernels Φ_ρ(x,y) which satisfy (<ref>) and (<ref>) when ϕ≥ c > 0 and is smooth. The Maxwellian limit is the only result for we which these assumptions do not guarentee that the strength inherits the regularity of the kernel. 
There, we instead introduce a set of continuity assumptions, (<ref>)-(<ref>), which are verified for the -model when ϕ≥ c > 0. § INHERITED REGULARITY FROM THE KERNEL Our main assumptions (stated in Section <ref>) together with J < ∞ (given in Definition <ref>) imply that the velocity averaging and the strength inherit regularity from kernel. The inherited regularity is recorded in Proposition <ref> and Proposition <ref> below and will be the key ingredient to establishing the mean field limit and the monokinetic limit. (Inherited regularity of the velocity averaging) Suppose that the velocity averaging has the integral representation (<ref>) and that the kernel satisfies the uniform regularity assumptions (<ref>) and (<ref>). Let ρ_0, ρ'_0, ρ”_0 ∈_M(^n). If J < ∞, then for all k ≥ 0: $̆Reg1 ∂^k _∞≤ C_1 $̆Reg2 ∂^k ( - )_∞≤ C_1 W_1(ρ', ρ”) + C_2 W_1('̆ρ', ”̆ρ”), whereC_1 := C_1(k, M, J)andC_2 := C_2(k, M). For (<ref>), place all of the derivatives on the kernel and use For (<ref>), we write - = ∫_^nΦ_ρ'(x,y) '̆(y) ρ'(y) - ∫_^nΦ_ρ”(x,y) ”̆(y) ρ”(y) = ∫_^n (Φ_ρ'(x,y) - Φ_ρ”(x,y)) ('̆ρ')(y) + ∫_^nΦ_ρ”(x,y) ( ('̆ρ')(y) - (”̆ρ”)(y)) . Once again, placing the derivatives on the kernel and using J < ∞ on the first term, we arrive at (<ref>). J < ∞ holds a priori for the microscopic and mesoscopic -model, (<ref>) and (<ref>), due to the maximum principle on the velocity. The maximum principle also holds for the mesoscopic -model with the strong local alignment force for the monokinetic limiting regime, (<ref>) (the justification is provided in Section <ref>). However, for the mesoscopic -model with strong Fokker-Planck penalization force, (<ref>), there is no control on J and, therefore, there is no inheritance of regularity from the kernel. The strength subsequently inherits regularity from the velocity averaging. (Inherited regularity of the strength) Suppose that the uniform regularity conditions on the velocity averaging (<ref>) and (<ref>) hold. Let ρ_0', ρ_0”∈_M(^n). If ', ”∈ C([0,T]; C^k+1(^n)) with _0' = _0” where ' solves ∂_t ' + ∇_x · (' ) = 0 and ” solves ∂_t ” + ∇_x · (”) = 0, then for all k ≥ 0: Reg1 ∂^k ' _∞≤ C Reg2 ∂^k (' - ”) _∞≤ C ∂^k (s_0' - s_0”)_∞ + C sup_t ∈ [0,T]( W_1(ρ', ρ”) + W_1('̆ρ', ”̆ρ”) ) (t), where C := C(k, M, J, T). We only prove (<ref>) since (<ref>) follows a similar computation. Taking the k^th partial derivative of the strength equation, we get ∂_t ∂^k(' - ”) + ∇·∂^k (' - ”) = 0. This can be written as ∂_t ∂^k(' - ”) + ∇·∂^k ( ( - ) ' + (' - ”) ) = 0. Evaluating at a point of maximum of ∂^k (' - ”), the term ∂^k+1 (' - ”) drops out. We obtain for some constant A depending on k: ∂_t ∂^k (' - ”)_∞≤ A(k) ( - _C^k+1'_C^k+1 + _C^k+1' - ”_C^k). Summing over k, we obtain for some constant A' depending on k: ∂_t ' - ”_C^k≤ A'(k) ( - _C^k+1'_C^k+1 + _C^k+1' - ”_C^k). Applying (<ref>), (<ref>), and (<ref>) along with Gronwall's inequality yields constants C_1'(k,M,J), C_2'(k,M,J,T) such that ' - ”_C^k≤ e^C_1' Ts_0' - s_0”_C^k + C_2' e^C_1' Tsup_t ∈ [0,T]( W_1(ρ', ρ”) + W_1('̆ρ', ”̆ρ”) ) (t,·) Setting C = max{ C_2' e^C_1' T, e^C_1' T} gives (<ref>). § PASSAGE FROM MICROSCOPIC TO MACROSCOPIC FOR CUCKER-SMALE It will be helpful to survey the mean field and hydrodynamic arguments for the Cucker-Smale model in order to understand the extension of our argument to the-model. Indeed, these limiting arguments for the-model fit into the same framework as the arguments used in the Cucker-Smale case. 
Each section will outline the argument for the Cucker-Smale case and conclude with the statement of the corresponding theorem for the-model. The mean field and hyrdoynamic limits for the-model are stated in Theorems <ref>, <ref>, and <ref>. The proofs are in Section <ref> and <ref>. §.§ Mean field limit (Cucker-Smale survey) We will include details for pieces of the argument that do not depend (in a significant way) onand leave the pieces which depend onfor Section <ref>. To rigorously pass from the discrete system (<ref>) to the kinetic one (<ref>), we would like to show that solutions to (<ref>) are unique and that the solution arises as a limit, in some sense, of solutions to the discrete equation (<ref>). To make sense of discrete solutions converging to the kinetic one, the solutions to (<ref>) and (<ref>) must lie in the same space. We therefore consider measure-valued solutions to both systems. For (<ref>), a measure-valued solution is given by the empirical measure μ^N_t(x,v) = ∑_i=1^N m_i δ_x_i(t)⊗δ_v_i(t) where(x_i(t), v_i(t))_i=1^Nsolve (<ref>). To make sense of measure valued solutions for (<ref>), we define a weak solution as follows. Fix a time T > 0 and an integer k ≥ 0. We say μ∈ C_w^*([0,T]; _M(^n ×^n)) is a weak solution to (<ref>) if for all g(t,x,v) ∈ C_0^∞([0,T] ×^n ×^n) and for all 0 < t < T, ∫_^n ×^n g(t,x,v) dμ_t(x,v) = ∫_^n ×^n g(0,x,v) dμ_0(x,v) + ∫_0^t ∫_^n ×^n (∂_τ g + v ·∇_x g + λ F(μ_τ) ·∇_v g dμ_τ(x,v) . Observe that the empirical measure (<ref>) is a solution to (<ref>) if and only if(x_i(t), v_i(t))solve the discrete system (<ref>). We will always assume thatμ_0⊂^n × B_Rfor some fixedR>0. We now aim to show the existence and uniqueness of weak solutions and that they arise as weak limits of these empirical measures (<ref>), where the weak limit can be topologized by the Wasserstein-1 metric: W_1(μ, ν) = sup_Lip(g) ≤ 1| ∫_^n ×^n g(ø) (dμ(ø) - dν(ø)) | . The Wasserstein-1 distance metrizes the weak topology as long as the measures lie on some common compact set. Owing to the maximum principle on the velocity equation, this is the case here:μ_t⊂^n × B_Rfor allt ∈ [0,T). The goal will be to establish a stability estimate in the Wasserstein-1 metric: forμ_t', μ_t”∈_M(^n × B_R), there exists a constantC(M,R,T)such that W_1 (μ_t', μ_t”) ≤ C(M,R,T) W_1(μ_0', μ_0”) . For then, a Cauchy sequenceμ_0^NwithW_1(μ_0^N, μ_0) → 0yields a Cauchy sequenceμ_t^NwithW_1(μ_t^N, μ_t) → 0for someμ_t ∈ C_w^*([0,T]; (^n ×^n)). Thatμis a solution to (<ref>) will follow from taking limits. Stability in the Wasserstein-1 metric is tied to the stability of the characteristic flow of (<ref>). X(t,s,x,v) = V(t,s,x,v), X(s,s,x,v) = x, V(t,s,x,v) = λ F(μ_t)(X,V), V(s,s,x,v) = v . Indeed, letX(t,x,v) := X(t,0,x,v)andV(t,0,x,v) = V(t,x,v)and(x,v) = ø. Givenh ∈ C_0^∞(^2n), define the test functiong(s,ø) = h(X(t,s,ø), V(t,s,ø)). Then ∂_s g + v ·∇_x g + λ F(μ_s) ·∇_v g = 0 . Pluggingginto (<ref>), we see thatμ_tis the push-forward measure ofμ_0along the characteristic flow(X,V): ∫_^n ×^n h(X(t,ø), V(t,ø)) _0(ø) = ∫_^n ×^n h(ø) _t(ø) . LettingX' := X'(t,ø),V' := V'(t,ø)and similarly forX”,V”, we have for anyh ∈ Lip(^n)withLip(h) ≤ 1, ∫_^n ×^n h(ø) (dμ_t' - dμ_t”) = ∫_^n ×^n h(X', V') dμ_0' -∫_^n ×^n h(X”, V”) dμ_0”) = ∫_^n ×^n h(X', V') (dμ_0' - dμ_0”) +∫_^n ×^n (h(X', V') - h(X”, V”)) dμ_0” ≤ (∇ X'_∞ + ∇ V'_∞) W_1(μ_0', μ_0”) + X' - X”_∞ + V' - V”_∞ . Therefore, W_1(μ_t', μ_t”) ≤ (∇ X'_∞ + ∇ V'_∞) W_1(μ_0', μ_0”) + X' - X”_∞ + V' - V”_∞ . 
The stability estimate (<ref>) reduces to establishing the following stability estimates on the characteristic flow: ∇ X _∞ + ∇ V _∞≤ C(M,R, T) X' - X”_∞ + V' - V”_∞≤ C(M, R, T) W_1(μ_0', μ_0”) . This in turn implies the Wasserstein-1 stability (<ref>). Since these estimates will depend on the regularity of the strength, we will address these details for the-model in Lemmas <ref> and <ref>. Wasserstein-1 stability implieslim_N →∞ W_1(μ_t^N, μ_t) = 0for eacht ∈ [0,T]and for someμ_t ∈(^n ×^n). The last piece is to showμ∈ C_w^*([0,T]; (^n ×^n))is a weak solution to (<ref>). The weak convergenceW_1(μ_t^N, μ_t) → 0immediately implies that the linear terms in (<ref>) converge. For the non-linear term, the strength enters so we will address this for the-model later in Lemma <ref>. We record the corresponding mean field limit theorem for themodel, which will be proven in Section <ref>. (-model: Mean Field Limit) Suppose that the velocity averaging has the integral representation (<ref>) with the kernel satisfying (<ref>) and (<ref>). Given T>0, R>0, k ≥ 0, μ_0 ∈_M(^n × B_R), and _0 ∈ C^∞(^n), then there exists a unique weak solution (μ, ) ∈ C_w^*([0,T]; _M(^n × B_R)) × C([0,T]; C^k(^n)) to (<ref>). Moreover, the solution can be obtained as a limit of empirical measures (<ref>), μ_t^N, with corresponding strength functions _t^N: If _0^N = _0 and the empirical measures μ_0^N are constructed from agents (x_i^0,v_i^0) ∈^n ×^n with total mass M = ∑_i=1^N m_i = ∫_^n ×^n dμ_0^N(x,v) in such a way that W_1(μ^N_0, μ_0) → 0, then * sup_t ∈ [0,T] W_1(μ^N_t, μ_t) → 0, and * sup_t ∈ [0,T]∂^k (_t^N - _t)_∞→ 0 for any k ≥ 0 where solves the transport equation in (<ref>), ∂_t + ∇· () = 0. The strength enters in the stability estimates on the characteristic flow and in showing the limits (a) and (b). The inherited regularity from the kernel will play a crucial role in each of these. §.§ Hydrodynamic limits (Cucker-Smale survey) Once again, we will present the core arguments of the hydrodynamic limits for the Cucker-Smale case which do not depend (in a significant way) onand leave the pieces which depend onfor Section <ref>. To resolve the closure problem for (<ref>), a strong local alignment force,F_la, is added to the kinetic Vlasov equation, ∂_t + v ·∇_x + ∇_v · [ F()] + 1/ϵ F_la() = 0 . Under the right local alignment forceF_la, the corresponding macroscopic system formed from taking thev-moments will lose its dependence on the kinetic solutionasϵ→ 0. We will consider two such local alignment forces,F_la, corresponding to the monokinetic regime, where the distribution is forced to be concentrated around the macroscopic velocity$̆; and maxwellian regime, where the distribution is forced to be a Gaussian distribution centered around the macroscopic velocity $̆. We will denote the macroscopic density and momentum corresponding to the distributionbyand: (t,x) = ∫_^n(t,x,v) , ()(t,x) = ∫_^n v (t,x,v) . §.§.§ Monokinetic regime Under the monokinetic local alignment force, F_la = F_mono = ∇_v [ ( - v) ] = 0, the probability densityis forced to the monokinetic ansatz f(t,x,v) = ρ(t,x) δ(v - (̆t,x)) where(ρ, )̆solves: ∂_t ρ + ∇· (ρ̆) = 0 ∂_t (ρ)̆ + ∇· (ρ⊗̆)̆ = ρ ((ρ̆)_ϕ - ρ̆_ϕ) . If we know a priori that the solution remains non-vacuous, i.e.ρ > 0, then we can divide the momentum equation byρin order to rewrite it as an equation on the velocity$̆. We arrive at the pressureless Euler alignment system from the introduction (<ref>). 
The system (<ref>) has the advantage that the velocity equation obeys the maximum principle (as does (<ref>)): _L^∞≤_̆0_L^∞, which lends it to alignment analysis. For this reason, it has received a lot of attention in the literature. This monokinetic limit was first proved by Figalli and Kang for non-vacuous solutions ρ > 0 on the torus ^n in <cit.>. It was later improved to allow vacuous solutions on the open space ^n in <cit.> and extended to environmental averaging models in <cit.>. There, the local aligment is modified to force the system to a special averaged velocity field instead of the rough velocity field . F_la = F_mono reg = ∇_v [ (_δ - v) ] = 0 where _δ is a special mollification given by _̆δ = ( (ρ̆)_Ψ_δ/ρ_Ψ_δ)_Ψ_δ for some smooth mollifier Ψ_δ(x) = 1/δ^nΨ(x/δ). This special mollification has the key approximation property that _̆δ is close to $̆ for smallδwith a bound independent ofρ. The following approximation lemma can be found in <cit.>. For any ∈̆Lip and for any 1 ≤ p < ∞, one has _̆δ - _L^p(ρ)≤ C δ_Lip where C > 0 depends only on Ψ and p. We will be interested in extending this argument with the local alignment forceF_mono regfrom the environmental averaging models to the-model, see Theorem <ref>. The convergence to the monokinetic ansatz is measured in the Wasserstein-2 metric, which is given by W_2^2(f_1,f_2) = inf_γ∈∏(f_1,f_2)∫_^2n×^2n |w_1 - w_2|^2 γ(w_1, w_2) . The modulated kinetic energy,e( | )̆, which will be crucial in controllingW_2^2(_t, f_t), is defined by: e( | )̆ = ∫_^n ×^n | v - (̆x)|^2 (x,v) . Let us state the theorem for the Cucker-Smale model. (Cucker-Smale: Monokinetic Limit) Let (ρ, )̆ be a classical solution to (<ref>) on the time interval [0,T] and let f be the monokinetic ansatz (<ref>). Suppose that f_0^ϵ∈ C_0^k(^n ×^n) is a family of initial conditions satisfying: (i)f_0^ϵ⊂^n × B_R for any fixed R>0, and (ii)W_2(f_0^ϵ, f_0) < ϵ Then there exists a constant C(M,R,T) such that for all t ∈ [0,T] one has W_2(f_t^ϵ, f_t) ≤ C √(ϵ + δ/ϵ) . In order to controlW_2(f_t^ϵ, f_t), it suffices to control ∫_^2n×^2n |w_1 - w_2|^2 γ(w_1, w_2) for a particularγ_t ∈∏(f_t^ϵ, f_t). The natural choice forγ_tis given by the flow ∂_t γ + v_1 ·∇_x_1γ + v_2 ·∇_x_2γ + ∇_v_1 [ γ (v_1 - ) + 1/ϵ (v_1 - _δ) ] + ∇_v_2 [ γ (v_2 - _F) ] = 0, where = ()_ϕ / _ϕ. Sinceγ_t ∈∏(f_t^ϵ, f_t), W := ∫_^2n×^2n |w_1 - w_2|^2 γ_t(w_1, w_2) ≥ W_2^2(_t, f_t) So, we aim to controlW. We split it into the potential and kinetic components W = ∫_^2n×^2n |v_1 - v_2|^2 γ_t + ∫_^2n×^2n |x_1 - x_2|^2 γ_t := W_v + W_x ForW_x, we have d/dt W_x ≤ W_x + W_v . ForW_v, we have W_v ≤∫_^2n×^2n |v_1 - (̆x_1)|^2 γ_t + ∫_^2n×^2n |(̆x_1) - (̆x_2)|^2 γ_t + ∫_^2n×^2n |(̆x_2) - v_2|^2 γ_t ≤∫_^n ×^n | v - (̆x)|^2 (x,v) + C ∫_^2n×^2n |x_1 - x_2|^2 γ_t = e( | )̆ + C W_x . We obtain d/dt W_x ≲ e( | )̆ + W_x W_v ≲ e( | )̆ + W_x . Expandinge( | )̆, we get e( | )̆ = _ϵ - ∫_^n·+̆1/2∫_^n ||̆^2 , where_ϵis the kinetic energy: _ϵ = 1/2∫ |v|^2 (x,v) . Under no assumptions on the strength, it is shown in <cit.> that ≲ + 1/ϵ∫_^n (_δ - ) · + ( - ,̆ - )_κ_ + ∫_^n(-̆) · ( - )̆ . The local alignment term is controlled with the help of Lemma <ref>, see <cit.>: 1/ϵ∫_^n (_δ - ) ·≲δ/ϵ . So, it remains to estimate the alignment terms A := ( - ,̆ - )_κ_ + ∫_^n(-̆) · ( - )̆ . The estimate onAdoes depend on the regularity of the strength and it is addressed in Theorem <ref>, which is proved in Section <ref>. 
(-model: Monokinetic Limit) Suppose the velocity averaging has the integral representation (<ref>) with a kernel Φ satisfying the regularity conditions (<ref>) and (<ref>). Let (ρ, , )̆ be a smooth solution to (<ref>) on the time interval [0,T] with mass M. Let f = ρ(t,x) δ(v - (̆t,x)). Suppose _0 ∈ C^∞(^n), f_0^ϵ∈ C_0^k(^n ×^n) is a family of initial conditions satisfying: (i)f_0^ϵ⊂^n × B_R, for any fixed R>0. (ii)W_2(f_0^ϵ, f_0) < ϵ (iii)_0 = _0 ∈ C^∞(^n) . Then there exists a constant C(M,R,T) such that for all t ∈ [0,T]: W_2(f_t^ϵ, f_t) ≤ C √(ϵ + δ/ϵ) . As a consequence of this and Proposition <ref>, sup_t ∈ [0,T]( - )(t,·)_∞→ 0. §.§.§ Maxwellian regime Let us rewrite the equation for the macroscopic density and momentum (<ref>) as ∂_t ρ + ∇· (ρ̆) = 0 ∂_t (ρ)̆ + ∇· (ρ⊗̆)̆ + ∇ρ + ∇· = ρ ( (ρ̆)_ϕ - ρ̆_ϕ ) where (t,x) = ∫_^n ((v-)̆⊗ (v - )̆ - 𝕀) f . Under the local alignment force, F_la = F_max reg = Δ_v + ∇_v · ((v - _δ) ), where_δis given in (<ref>), the probability densityis forced to the Maxwellianμ(t,x), μ(t,x) = ρ(t,x)/(2π)^n/2 e^|v - (̆t,x)|^2/2, where(ρ, )solve the isentropic Euler alignment system: ∂_t ρ + ∇· (ρ̆) = 0 ∂_t (ρ)̆ + ∇· (ρ⊗̆)̆ + ∇ρ = ρ ( (ρ̆)_ϕ - ρ̆_ϕ ) . As in the monokinetic limit, we are interested in extending this argument with the local alignment forceF_max regfrom the environmental averaging models to the-model, see Theorem <ref>. The distance betweenand the Maxwellianμis controlled by the relative entropy( | μ), which is defined as: ( | μ) = ∫_^n ×^nlog/μ . Due to the classical , for some constantc > 0, c - μ_L^1(^n ×^n)≤( | μ), it suffices to show that( | μ) → 0. We state the theorem for the Cucker-Smale model. (Cucker-Smale: Maxwellian limit) Let (ρ, )̆ be a smooth, non-vacuous solution to (<ref>) on the torus ^n and on the time interval [0,T]. Suppose that f_0^ϵ∈ C_0^k(^n ×^n) is a family of initial conditions satisfying lim_ϵ→ 0(_0 | μ_0) = 0 . Then for δ = o(ϵ), sup_t ∈ [0,T]( | μ) → 0 . Breaking the relative entropy into the kinetic and macroscopic parts, we have ( | μ) = _ϵ + _ϵ, _ϵ = ∫_^n ×^n( log + 1/2 |v|^2 ) + n/2log(2π), _ϵ = ∫_^n( 1/2 ||̆^2 - ·-̆logρ) . We then seek to estimate_ϵand_ϵ. Let us define the Fisher information,, which will be relevant for these estimates: = ∫_^n ×^n | ∇_v + (1 + ϵ /2) (v - ) |^2 / . The following identities are from <cit.>: = -1/ϵ∫_^n ×^n[ |∇_v |^2/ + 2 ∇_v · (v - _δ) + |v - _δ|^2 ] - 1/ϵ [(_δ, )_ - (_δ, _δ)] -∫_^n ×^n [ ∇_v · (v - ) + v · (v - ) ] and = ∫_^n [ ∇:̆ - ( - )̆·∇·̆( - )̆ ] + 1/ϵ∫_^n (_δ - ) · + _L^2()^2 - (, )_ + δ A . whereδ Ais the same alignment term used in the proof of the monokinetic case. From (<ref>), dropping the first two terms, we obtain d/dt≤ -∫_^n ×^n [ ∇_v · (v - ) + v · (v - ) ] = ∫_^n ×^n -∫_^n ×^n |v|^2 + (, )_κ^ϵ . This estimate will be used to show thatis bounded which will in turn be used to control the Reynolds stress termin the equation for. All that is needed to complete this estimate is the boundedness of, which will indeed hold due to the inherited regularity of the kernel. We will nonetheless delay this detail until the proof of Theorem <ref> (in order to isolate the pieces of the argument which depend on the regularity of the strength). A stronger estimate onis achieved by retaining the first dissipative term in (<ref>). From <cit.>, the first term and the last term in (<ref>) combine to control the Fisher information and we obtain d/dt≤ -1/ϵ + ϵ/4∫_^n ×^n |v - |^2 - _L^2(κ_ϵ)^2 + (, )_κ^ϵ . 
Turning to the terms in (<ref>) which do not depend on the strength, we have due to the assumed smoothness of$̆, | ∫_^n ( - )̆·∇·̆( - )̆| ≲∫_^n | - |̆^2 ≤( | )̆ . The local alignment term is the same as in the monokinetic case and is handled the same way with the help of Lemma <ref>, see <cit.>: 1/ϵ∫_^n (_δ - ) ·≲δ/ϵ . The remaining estimates depend on the regularity of the strength and will be addressed in the proof of Theorem <ref>. The Fokker-Planck penalization force destroys any uniform control on the momentum uniformly in ϵ so the strength will not inherit the regularity of the kernel in this case. Instead, we identify a set of continuity conditions (<ref>)-(<ref>) which are sufficient for proving the Maxwellian Limit. Notably, these conditions hold for the -model, which we show (along with the proof of Theorem <ref>) in Section <ref>. (-model: Maxwellian Limit) Suppose the continuity conditions (<ref>)-(<ref>) hold. Let (ρ, , )̆ be a smooth, non-vacuous solution to (<ref>) on the torus ^n and on the time interval [0,T] with mass M. Suppose also that _0 ∈ C^∞(^n) and f_0^ϵ∈ C_0^k(^n ×^n) is a family of initial conditions satisfying (i)(_0 | μ_0) → 0 as ϵ→ 0, (ii)_0 = _0 ∈ C^∞(^n). Then for δ = o(ϵ), sup_t ∈ [0,T]( | μ) → 0. As a consequence of this and Proposition <ref>, sup_t ∈ [0,T]( - )(t,·)_∞→ 0. § WELL-POSEDNESS OF THE DISCRETE-CONTINUOUS -MODEL The well-posedness of the discrete-continuous system must be established before addressing the mean field passage to the kinetic system. This section is devoted to proving global existence and uniqueness of solutions to (<ref>) in class ({x_i(t)}_i=1^N, {v_i(t)}_i=1^N, (t)) ∈ C([0,T]; ^nN×^nN× H^k(^n)), k > n/2 + 2. The choice of k > n/2 + 2 guarantees that ∂^2 _∞≲_H^k for an arbitrary second order partial derivative ∂^2 by the Sobolev Embedding Theorem. Let us recall our main assumptions which will be relevant here: the velocity averaging satisfies the integral representation (<ref>) and the kernel Φ_ρ^N is smooth. Written in the terms of the discrete empirical data ρ^N, ^̆N, the velocity averaging becomes: (t,x) = ∑_j=1^N m_j Φ_(ρ^N)(x, x_j) v_j(t), (ρ^N)(t,x) = ∑_j=1^N m_j δ_x_j(t) . From Proposition <ref>, inherits the regularity of the kernel and, in particular, satisfies (<ref>). For the remainder of this section, we will use C := C(k,M) to denote a constant depending on k and M. For instance, the inherited regularity of velocity averaging (<ref>) can be written: ∂^k _∞≤ C. The constant C may change line by line. Consider the following regularized version of (<ref>): ẋ_̇i̇ = v_i v̇_̇i̇ = λ(x_i) ((x_i) - v_i ) ∂_t + ∇_x · () = ϵΔ . For the moment, we avoid writing the explicit dependence of x_i, v_i, on ϵ for the sake of brevity. They are not to be confused with solutions to the unregularized system (<ref>). Let X = C([0,T]; ^2nN× H^k(^n)) and define Z(t) := ({x_i(t)}_i=1^N, {v_i(t)}_i=1^N, (t)). The norm ·_X is given by: Z_X = sup_t ∈ [0, T]max_i = 1 … Nx_i(t) + v_i(t) + (t,·) _H^k(^n). Define the map by ( Z(t) ) = [ 1; 1; e^ϵ t Δ ] Z_0 + ∫_0^t [ 1; 1; e^ϵ (t-τ) Δ ] A(τ) dτ, where A represents all of the non-laplacian terms. Existence and uniqueness of solutions to (<ref>) amounts to showing that : B_1(Z_0) ↦ B_1(Z_0) and that it is a contraction mapping, where B_1(Z_0) is the ball of radius 1 centered at Z_0 in X. Contraction will follow similarly from invariance. For invariance, we aim to show that: ( Z(t) ) - Z_0 _X ≤ e^ϵ t Δ Z_0 - Z_0 _X + ∫_0^t e^ϵ (t-τ) Δ A( Z(τ) ) dτ_X ≤ 1 . 
The first term is small for small T due to the continuity of the heat semigroup. For second term, we will treat each component individually. The x_i-component gives x_i(t) - x_i(0)≤ T max_i v_i(0) . For the v_i-component, we have v_i(t) - v_i(0) ≤ T _∞ (_∞ + max_i v_i(0)) ≤ C T (Z_0 + 1) . For the -component, we will use the analyticity property of the heat semigroup: ∇ e^ϵ t Δ f_L^2≤1/√(ϵ t)f_L^2 along with the product estimate to get ∫_0^t e^ϵ (t-τ) Δ∇_x · () dτ_H^k ≤2 T^1/2/ϵ^1/2sup_t ∈ [0,T]_H^k ≤2 T^1/2/ϵ^1/2sup_t ∈ [0,T] ( _∞_H^k + _H^k_∞) ≤2 T^1/2/ϵ^1/2 (Z_0_X + 1) . For small enough T, we have invariance. Contractivity of follows from similar estimates. This time interval of existence T could depend on ϵ. To establish that the there is a common time interval of existence independent of ϵ, we establish an ϵ-independent energy estimate. For the x_i and v_i components, the estimates follow easily from the invariance estimates. We record them here. x_i(t)≤x_i(0) + t max_i v_i(0), and v_i(t)≤v_i(0) + t _∞ (_∞ + max_i v_i(0)) ≤ C( 1 + t _H^k ). For the strength, we multiply (<ref>) by ∂^2j and integrate by parts. We obtain for any 0 ≤ j ≤ k, d/dt_Ḣ^j = ∫∇_x · |∂^j |^2 - ∫ (∂^j (·∇_x ) - ·∇_x ∂^j ) ∂^j - ∫∂^j ((∇_x ·) ) ∂^j - ϵ∫ |∂^j ∇|^2 . Dropping the ϵ term coming from the Laplacian, we obtain _Ḣ^j^2 ≤ C ( _Ḣ^j^2 + _Ḣ^j∇_∞ + _Ḣ^j_∞), for all 0 ≤ j ≤ k. Since k > n/2 + 2, _H^k^2 ≤ C _H^k^2. We will now denote the explicit dependencies on ϵ and take ϵ→ 0. From Grownwall, we conclude that Z^ϵ(t) exists on a common time interval independent of ϵ. Writing the equation for ()^2, we have _L^2^2 ≤ C _H^1 + ϵ_H^2≤ C _H^k. By local well-posedness Z^ϵ∈ C([0,T]; R^2nN× H^k(^n)) so that d/dt Z^ϵ∈ L^2([0,T]; ^2nN× L^2(^n)). By the Aubin-Lions lemma, we obtain a subsequence, which we denote again by Z^ϵ such that Z^ϵ→ Z^0 in C([0,T]; R^2nN× H^k-1(^n)). Since H^k-1(^n) is dense in H^k(^n), we have Z^0 ∈ C_w([0,T]; ^2nN× H^k(^n)). Finally, since k > n/2 + 2, the terms A^ϵ converge pointwise to A. Taking ϵ→ 0 in the Duhamel formula (<ref>), we get Z^0(t) = Z^0_0 + ∫_0^t A(τ) dτ . That is, Z^0 ∈ C_w([0,T]; ^2nN× H^k(^n)) solves (<ref>). Finally, we note that due to the ϵ-independent energy estimate (<ref>), Z^0_X remains bounded for any finite time and thus exists for all time. This concludes existence and uniqueness of solutions to (<ref>) on the global time interval [0, ∞). § MEAN FIELD LIMIT We establish the passage from from the discrete system (<ref>) to the kinetic system (<ref>) in the mean field limit for the -model, Theorem <ref>. The outline for this argument is discussed in Section <ref> for the Cucker-Smale model. We will skip the pieces of the argument which are covered there (in particular, we will skip the pieces of the argument in which has an insignificant role). The empirical measures are given by: μ^N_t(x,v) = ∑_i=1^N m_i δ_x_i(t)⊗δ_v_i(t), (x_i(t), v_i(t), (t,x))_i=1^N solve (<ref>). To make sense of measure valued solutions, we consider a weak version of (<ref>). Fix a time T > 0 and an integer k ≥ 0. We say the pair (μ, ) with μ∈ C_w^*([0,T]; (^n ×^n)) and ∈ C([0,T]; C^k(^n)) is a weak solution to (<ref>) if for all g(t,x,v) ∈ C_0^∞([0,T] ×^n ×^n) and for all 0 < t < T, ∫_^n ×^n g(t,x,v) dμ_t(x,v) = ∫_^n ×^n g(0,x,v) dμ_0(x,v) + ∫_0^t ∫_^n ×^n (∂_τ g + v ·∇_x g + ( - v) ·∇_v g) dμ_τ(x,v) ∂_t + ∇_x · () = 0. In other words, μ solves the first equation weakly and the strength, , solves the second equation strongly. 
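As a purely numerical aside (an illustration added here, not part of the analysis), the following minimal Python sketch integrates a one-dimensional discrete-continuous system of the type just shown to be well posed, and whose empirical measures are the objects of the mean field limit below: particles (x_i, v_i) are advanced by forward Euler, the strength is stored on a periodic grid and updated by a first-order upwind step for the continuity equation, and the velocity averaging is taken to be a Favre-type weighted mean built from an assumed smooth kernel bounded below. The choice of kernel, the conservative form of the strength equation, and all parameter values are illustrative assumptions rather than the paper's specification.

import numpy as np

# 1D torus [0, 2*pi): particles (x_i, v_i), strength w on a periodic grid.
N, M_grid, T, dt = 200, 256, 5.0, 1e-3
L = 2 * np.pi
xg = np.arange(M_grid) * L / M_grid      # grid for the strength
dx = L / M_grid

def phi(r):                              # assumed smooth kernel, phi >= 0.5 > 0
    return 1.5 + np.cos(r)

rng = np.random.default_rng(0)
x = rng.uniform(0, L, N)                 # particle positions
v = rng.normal(0.0, 0.5, N)              # particle velocities
mass = np.ones(N) / N                    # equal masses, total mass 1
w = np.ones(M_grid)                      # initial strength

def averaged_velocity(query, x, v, mass):
    # Favre-type average: sum_j m_j phi(q - x_j) v_j / sum_j m_j phi(q - x_j)
    K = phi(query[:, None] - x[None, :])
    return (K @ (mass * v)) / (K @ mass)

for step in range(int(T / dt)):
    uF = averaged_velocity(xg, x, v, mass)
    # conservative first-order upwind step for d_t w + d_x (w uF) = 0
    u_half = 0.5 * (uF + np.roll(uF, -1))                  # interface velocities
    F_half = np.where(u_half > 0, w, np.roll(w, -1)) * u_half
    w = w - dt / dx * (F_half - np.roll(F_half, 1))
    # particle update: x_i' = v_i, v_i' = w(x_i) (u_bar(x_i) - v_i)
    w_at_x = np.interp(x, xg, w, period=L)
    u_at_x = averaged_velocity(x, x, v, mass)
    v = v + dt * w_at_x * (u_at_x - v)
    x = np.mod(x + dt * v, L)

print("final velocity spread:", v.max() - v.min())   # expected to shrink as particles align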
As in the Cucker-Smale case, the empirical measure (<ref>) is a solution to (<ref>) if and only if (x_i(t), v_i(t), (t,x)) solve the discrete system (<ref>). The well-posedness of (<ref>) for empirical measure-valued solutions is therefore equivalent to the well-posedness of the discrete-continuous system (<ref>) established in Section <ref>. To show existence of general weak solutions to (<ref>), we will show that the weak solution arises a weak limit of the empirical measures (<ref>). The goal is to establish the Wasserstein-1 stability estimate (<ref>). Stability in the Wasserstein-1 metric amounts to showing the stability estimates (<ref>) and (<ref>) on the characteristic flow: X(t,s,x,v) = V(t,s,x,v), X(s,s,x,v) = x V(t,s,x,v) = (X) ((X) - V), V(s,s,x,v) = v, which will be established in the following Lemmas. Let us first note that the total momentum ∫_^nρ = ∫_^n ×^n v dμ_t(x,v) in (<ref>) is conserved (by plugging in g = v). In particular, J<∞ and we can apply Propositions <ref> and <ref> and use the inherited regularity from the kernel. By a similar argument to the Cucker-Smale case, we also have that μ_t is the push-forward of μ_0 along the characteristic flow: ∫_^n ×^n h(X(t,ø), V(t,ø)) _0(ø) = ∫_^n ×^n h(ø) _t(ø), ø = (x,v) (Deformation Tensor Estimates) Suppose that the velocity averaging has the integral representation (<ref>) and that the kernel satisfies the uniform regularity assumptions (<ref>) and (<ref>). Let (μ, ) be a weak solution to (<ref>) on [0,T] with characterstics X,V given in (<ref>). Then ∇ X _∞ + ∇ V _∞≤ C(M,R,J,T). Differentiating (<ref>), ∇ X = ∇ V ∇ V = ∇ X^T ∇(X) ((X) - V) + (X) ∇ X^T ∇(X) - (X) ∇ V. By the maximum principle on the velocity and the inherited regularity (<ref>), (<ref>), and (<ref>) (∇ V_∞ + ∇ X_∞) ≤ C'(M, R, J, T) ( ∇ X_∞ + ∇ V_∞ ) . We conclude by Gronwall. (Continuity Estimates) Suppose that the velocity averaging has the integral representation (<ref>) and that the kernel satisfies the uniform regularity assumptions (<ref>) and (<ref>). Let (μ', '), (μ”, ”) be weak solutions to (<ref>) on [0,T]; and let X', X”, V', V” be the corresponding characteristics given by (<ref>). Then X' - X”_∞ + V' - V”_∞≤ C(M, R, J, T) W_1(μ_0', μ_0”). We have (X' - V') = V' - V” (V' - V”) = '(X') ((X') - V') - ”(X”) ((X”) - V”) . By the maximum principle on the velocity and inherited regularity conditions (<ref>), (<ref>), (<ref>), (<ref>), V' - V”_∞ ≤'(X') - '(X”)_∞(X') - V' _∞ + '(X”) - ”(X”)_∞(X') - V' _∞ + ”(X”)_∞(X') - V' - (X”) + V”_∞ ≤ C(M,R,J,T) ( V' - V”_∞ + sup_t ∈ [0,T] (W_1(ρ', ρ”) + W_1('̆ρ', ”̆ρ”))(t) ). Combining with X' - X”_∞≤V' - V”_∞, we get ( X' - X”_∞ + V' - V”_∞) ≤ C(M,R,J,T) ( X' - X”_∞ + V' - V”_∞ + sup_t ∈ [0,T] (W_1(ρ', ρ”) + W_1('̆ρ', ”̆ρ”))(t) ) . To estimate W_1(ρ',ρ”) and W_1('̆ρ', ”̆ρ”), we use the fact that μ_t is the push-forward of μ_0, (<ref>). Fix g_Lip≤ 1. Since M' = M, ∫_^n g(x) (dρ_t' - dρ_t”) = ∫_^n ×^n g(x) (dμ_t' - dμ_t”) = ∫_^n ×^n g(X') dμ_0' - ∫_^n ×^n g(X”) dμ_0” = ∫_^n ×^n (g(X') - g(X”)) dμ_0' + ∫_^n ×^n g(X”) (dμ_0' - dμ_0”) ≤ M X' - X”_∞ + ∇ X”_∞ W_1(μ_0', μ_0”). For W_1('̆ρ', ”̆ρ”), we have ∫_^n g(x) (d('̆ρ_t') - d('̆ρ_t”)) = ∫_^n ×^n vg(x) (dμ_t' - dμ_t”) = ∫_^n ×^n V'g(X') dμ_0' - ∫_^n ×^n V”g(X”) dμ_0” = ∫_^n ×^n (V'g(X') - V”g(X”)) dμ_0' + ∫_^n ×^n V”g(X”) (dμ_0' - dμ_0”) ≤ M g_∞V' - V”_∞ + M X' - X”_∞ R + ( g_∞∇ V”_∞ + R ∇ X”_∞) W_1(μ_0', μ_0”). These estimates hold uniformly in time. Plugging these into (<ref>) and using Lemma <ref>, we conclude by Gronwall. 
An immediate Corollary is that the regularity conditions (<ref>) and (<ref>) can be restated in terms of the distance W_1(μ_0',μ_0”). Suppose that the velocity averaging has the integral representation (<ref>) and that the kernel satisfies the uniform regularity assumptions (<ref>) and (<ref>). Let (μ', '), (μ”, ”) be weak solutions to (<ref>) on [0,T]. Then sup_t ∈ [0,T]( W_1(ρ', ρ”) + W_1('̆ρ', ”̆ρ”) )(t) ≤ C_1(M,R,J,T) W_1(μ_0', μ_0”), and $̆Lip2' sup_t ∈ [0,T]∂^k ( - )(t, ·)_∞≤ C_2(k,M,R,J,T) W_1(μ_0', μ_0”) , Lip2' sup_t ∈ [0,T]∂^k (' - ”)(t, ·) _∞≤ C_3(k,M,R,J,T) ( ∂^k(_0' - _0”)_∞ + W_1(μ_0', μ_0”) ) . Lemmas <ref>, <ref> and inequality (<ref>) imply the desired Wasserstein-1 stability (<ref>). We conclude that the empirical measuresμ_t^Nconverge in the Wasserstein-1 metric to someμ∈ C_w^*([0,T]; (^n × B_R))uniformly on[0,T](the weak^*continuity ofμowes to the weak^*continuity of the empirical measures and the uniform convergence on[0,T]). In addition, by Corollary <ref>, for anyk≥ 0,_t^Nconverges inC^kto some_t ∈ C^k(^n)uniformly on[0,T]. It remains to show that the limiting pair(μ, )is in fact a weak solution to (<ref>) in the sense of Definition <ref>. Suppose a sequence of solutions μ^N ∈ C_w^*([0,T]; (^n × B_R)) converge weakly pointwise, i.e. μ_t^N →μ_t, uniformly for all t ∈ [0,T]; and suppose that for any k≥ 0, _t^N converges in C^k(^n) to _t, i.e. ∂^k(_t^N - _t)_∞→ 0, uniformly for all t ∈ [0,T]. Then (μ, ) ∈ C_w^*([0,T]; (^n × B_R)) × C([0,T], C^k(^n)) is a weak solution to (<ref>). In the following proof, C := C(M,R,J,T) and C' := C(k,M,R,J,T). We have already observed that, due to Corollary (<ref>), for any k, _t^N converges in C^k to some _t ∈ C^k(^n) uniformly on [0,T]. In addition ∂_t ^N = -∇· (^N ) is uniformly bounded on [0,T] due to (<ref>) and (<ref>). So, ∈ C([0,T]; C^k(^n)). To show that solves the transport equation ∂_t + ∇· () = 0, let X̃ be the characteristic flow, X̃(t, α) = (t, X̃(t,α)), X̃(0, α) = α . Similarly, let X̃^N be the characteristic flow for the empirical strength _t^N which solves ∂_t ^N + ∇· (^N ) = 0, X̃^N(t, α) = (t, X̃^N(t,α)), X̃^N(0, α) = α . Abbreviating X̃(t,α), X̃^N(t,α) by X̃, X̃^N and using (<ref>) and (<ref>), we have for all k ≥ 0 ∂^k ( (τ, X̃^N) - (τ, X̃)) _∞ ≤∂^k ( (τ, X̃) - (τ, X̃))_∞ + ∂^k ( (τ, X̃^N) - (τ, X̃)) _∞ ≤∂^k ( (τ, X̃) - (τ, X̃)) _∞ + ∂^k+1_∞X̃^N - X̃_∞ ≤ C' ( W_1(μ_0^N, μ_0) + X̃^N - X̃_∞). As a result, (X̃^N - X̃)(t,·)_∞≤ C ( W_1(μ_0', μ_0”) + (X̃^N - X̃) (t,·) _∞), for all t ∈ [0,T) . By Gronwall and X̃(0,α) = X̃^N(0,α) = α, we obtain (X̃^N- X̃)(t,·)_∞≤ C W_1(μ_0^N, μ_0), for all t ∈ [0,T) . Now solving along the characteristic X̃^N, ^N (t, X̃^N(t, α)) = _0^N(α) exp{∫_0^t ∇· (τ, X̃^N(s, α)) dτ}. With (<ref>) and (<ref>) in hand, we obtain uniform in time convergence to (t, X̃(t, α)) = _0(α) exp{∫_0^t ∇· (τ, X̃(s, α)) dτ}. So, _t is a solution to ∂_t + ∇() = 0. Turning to the convergence of the empirical measures, the weak convergence W_1(μ_t^N, μ_t) → 0 immediately implies that the linear terms in (<ref>) converge. Let us address the nonlinear term. ∫_0^t ∫_^2n∇_v g ^N ( - v)) dμ^N_τ(x,v) - ∫_0^t ∫_^2n∇_v g ( - v)) dμ_τ(x,v) ≤∇_v g_∞∫_0^T ∫_^2n^N ( - v)) - ( - v))_∞ dμ^N_τ(x,v) + ∇_v g_∞∫_0^T ∫_^2n ( - v))_∞ d(μ^N_τ(x,v) - μ_τ(x,v)). The second term goes to zero by weak convergence. For the first term, we simply use the regularity conditions (<ref>), (<ref>), (<ref>), (<ref>) to get ^N ( - v)) - ( - v))_∞ ≤^N - _∞_∞ + _∞ - _∞ ≤ C W_1(μ_0^N, μ_0). This gives ^N ( - v)) - ( - v))_∞→ 0 uniformly on [0,T]. 
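As a small illustrative aside (not taken from the paper), the Wasserstein-1 distances between empirical measures that drive the stability estimates above can be evaluated exactly for two uniform empirical measures with the same number of atoms: by the Birkhoff-von Neumann theorem the optimal coupling is a permutation, so W_1 reduces to a linear assignment problem. The sketch below does this with scipy.optimize.linear_sum_assignment; the point clouds and the perturbation size are arbitrary choices.

import numpy as np
from scipy.optimize import linear_sum_assignment

def w1_empirical(Z1, Z2):
    # W_1 between (1/N) sum_i delta_{Z1[i]} and (1/N) sum_i delta_{Z2[i]} on R^d;
    # for equal numbers of equal-mass atoms the optimal plan is a permutation.
    assert Z1.shape == Z2.shape
    C = np.linalg.norm(Z1[:, None, :] - Z2[None, :, :], axis=-1)  # pairwise distances
    row, col = linear_sum_assignment(C)                           # optimal matching
    return C[row, col].mean()

# toy check in the spirit of the stability estimate: two nearby atom clouds in (x, v)
rng = np.random.default_rng(1)
Z = rng.normal(size=(100, 2))
Zp = Z + 1e-2 * rng.normal(size=Z.shape)
print(w1_empirical(Z, Z), w1_empirical(Z, Zp))   # 0.0 and a value of order 1e-2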
§ HYDRODYNAMIC LIMITS The goal of this section is to establish a passage from the kinetic description (<ref>) to the corresponding macroscopic description in the monokinetic and Maxwellian limiting regimes, Theorems <ref> and <ref>. The arguments presented here contain similar estimates to the hydrodynamic limiting arguments presented for environmental averaging models. We will therefore often refer to Section <ref> where relevant quantities and estimates are presented in the context Cucker-Smale model (i.e. when the regularity of the strength is known). From the introduction, thev-moments of the mesoscopic system (<ref>) yields (<ref>) with the Reynolds Stress tensorgiven by (<ref>). The system is closed by adding a local alignment forceF()to the kinetic equation (<ref>): ∂_t + v ·∂_x = ∇_v · ( (v - ) ) + 1/ϵ F() ∂_t + ∇· () = 0 . We will consider two such local alignment forces corresponding to the monokinetic and Maxwellian regimes. In the Monokinetic regime,F()is given by (<ref>). This monokinetic local alignment force pushes the kinetic solution towards the monokinetic ansatz f(t,x,v) = ρ(t,x) δ(v - (̆t,x)), where(ρ, , )̆solve (<ref>) (Theorem <ref>). The Reynolds stress tensor becomes zero and the macroscopic description becomes ∂_t ρ + ∇· (ρ̆) = 0 ∂_t + ∇· () = 0 ∂_t (ρ)̆ + ∇· (ρ⊗̆)̆ = ρ( - )̆ . In the non-vacuous case, we can divide the$̆-equation by ρ and this becomes (<ref>). The inherited regularity of the strength from the kernel will once again play a crucial role in the proof of the monokinetic limit. For the Maxwellian regime, we will rewrite the v-moments system (<ref>) as: ∂_t ρ + ∇· (ρ) = 0 ∂_t + ∇· () = 0 ∂_t (ρ) + ∇· (ρ⊗) + ∇ρ + ∇· = ρ ( - ) , where _δ is given in (<ref>) and (t,x) = ∫_^n ((v-) ⊗ (v - ) - 𝕀) . To close this system, we consider a strong Fokker-Planck force F() given by (<ref>), which pushes the kinetic solution towards the Maxwellian, μ(t,x) = ρ(t,x)/(2π)^n/2 e^-|v - (̆t,x)|^2/2, where (ρ, , )̆ solve (<ref>) (Theorem <ref>). The Reynolds stress term vanishes and the system becomes ∂_t ρ + ∇· (ρ̆) = 0 ∂_t + ∇· () = 0 ∂_t (ρ)̆ + ∇· (ρ⊗̆)̆ + ∇ρ = ρ ( - )̆ . Unfortunately, the total momentum of solutions to (<ref>) with the strong Fokker-Planck force F()(<ref>) is not necessarily bounded uniformly in ϵ, so Propsitions <ref> and <ref> do not apply and does not inherit regularity from the kernel. We've instead identified a set of sufficient continuity conditions for the solution (<ref>)-(<ref>) given in Section <ref>. An important class of models for which these conditions can be verified is the w-model with the interaction kernel ϕ bounded from below away from zero. §.§ Monokinetic Regime We consider (<ref>) under the local alignment force (<ref>): ∂_t + v ·∂_x = ∇_v · ( (v - ) ) + 1/ϵ∇_v · ((v - _δ )) ∂_t + ∇· () = 0 , where _̆δ is the special mollification given in (<ref>). We will first verify the maximum principle on the velocity so that we can use the inherited regularity of the strength. The characteristic equations of (<ref>) are given by: X(t,s,x,v) = V(t,s,x,v), X(s,s,x,v) = x V(t,s,x,v) = (X)((X) - V) + 1/ϵ (_δ - V), V(s,s,x,v) = v. As in the Cucker-Smale and -model, the measure f_t(x,v) is the push-forward of f_0(x,v) along the characteristic flow. Letting ø := (x,v), X' := X(t, ø'), V' := V(t, ø'), and using the right stochasticity of Φ_ρ, (<ref>), we compute: (X)((X) - V) = ∫_^n ×^n(X) Φ_(X, x) v _t(x,v) - V = ∫_^n ×^n(X) Φ_(X, x) (v - V) _t(x,v) = ∫_^n ×^n(X) Φ_(X, X') (V' - V) _0(ø') ' . 
Considering compactly supported initial data, f_0 ⊂^n × B_R, and evaluating at a point of maximum of V, we have ∫_^n ×^n(X) Φ_(X, X') (V' - V_+) _0(x,v) ' ≤ 0, V_+(t) = max_(x,v) ∈^n × B_R |V(t,0,x,v)| . For the local alignment term, recall that _δ is the averaging for the M_ϕ model with kernel given in Table (<ref>) with ϕ = Ψ_δ for a smooth mollifier Ψ_δ. Therefore, it is handled similarly. Denoting the kernel by Φ_, δ, we have: 1/ϵ∫_^n ×^nΦ_, δ(X, X') (V' - V_+) f_0(x,v) ' ≤ 0 . Using the classical Rademacher Lemma, we obtain V_∞≤ 0 . Of course, this implies boundedness of the macroscopic velocity: | | = | ∫_B_R v dv | /∫_B_R dv≤ R ∫_B_R dv/∫_B_R dv = R . With the maximum principle in hand, we can readily apply the inheritance of regularity. Furthermore, since , f ⊂^n × B_R, we have W_1(, ρ̆) ≤ C W_1(f^ϵ, f) ≤ CW_2(f^ϵ, f). Similarly, W_1(, ρ) ≤ CW_2(f^ϵ, f). We can then modify (<ref>) and (<ref>) to get $̆Reg2' - _L^∞≤ C(M,J) W_2(, f) Reg2' - _L^∞≤ C(M,J,T) sup_t ∈ [0,T] W_2(, f) (t) provided _0 = _0. Since we are working on^n, theL^∞norm can be interchanged withL^2norm (with the only difference being the constantC, which is inconsequential for our arguments). (proof of Theorem <ref>) When an identity or inequality does not depend on the regularity of the strength and can be found in the proof for the Cucker-Smale case, we refer the reader to Section <ref>. In order to control W_2(_t, f_t), we consider the flow γ_t whose marginals are _t and f_t (i.e. γ_t ∈∏(_t, f_t)) ∂_t γ + v_1 ·∇_x_1γ + v_2 ·∇_x_2γ + ∇_v_1 [ γ (v_1 - ) + 1/ϵ (v_1 - _δ) ] + ∇_v_2 [ γ (v_2 - ) ] = 0. Since γ_t ∈∏(_t, f_t), W := ∫_^2n×^2n |w_1 - w_2|^2 μ̣_t(w_1, w_2) ≥ W_2^2(_t, f_t). So, we aim to control W. Splitting it into the potential and kinetic components, we get: W = ∫_^2n×^2n |v_1 - v_2|^2 γ_t + ∫_^2n×^2n |x_1 - x_2|^2 γ_t := W_v + W_x . We will from here on use the notation A ≲ B as a shorthand for A ≤ C(M,J,T) B. As in the Cucker-Smale case, we get W_x ≲ + W_x W_v ≲ + W_x , where the modulated kinetic energy is given in (<ref>). The modulated kinetic energy can also be estimated as in the Cucker-Smale case: ≲ + δ/ϵ + ( - ,̆ - )_κ_ + ∫_^n(-̆) · ( - )̆ . The alignment terms can be written as δ A = ( - ,̆ - )_κ_ + ∫_^n(-̆) · ( - )̆ = ∫_^n (-̆) ·( - + -̆) = ( - ,̆ [ - ]̆_)_κ_ + ∫_^nρ^ϵ (-̆^̆ϵ) · ( []̆_ - ) - ∫_^n |-̆|^2 + ∫_^n (-̆) ·(̆ - ) := I + II + III + IV . By (<ref>) and (<ref>), I ≲∫_^n | - |̆^2 . From (<ref>) and (<ref>) and the assumed smoothness of , we get II ≲ + sup_t ∈ [0,T] W^2_2(, f) . The term III is negative so we drop it. Finally, from (<ref>) and the assumed smoothness of $̆, IV ≲ + sup_t ∈ [0,T] W^2_2(, f) . Using (<ref>), the estimate for the modulated kinetic energy becomes ≲ + δ/ϵ + sup_t ∈ [0,T] W_2^2(, f) ≤ + δ/ϵ + sup_t ∈ [0,T] W_x + sup_t ∈ [0,T] W_v ≲δ/ϵ + sup_t ∈ [0,T] (W_x + ). All together, W_x ≲ + W_x ≲δ/ϵ + sup_t ∈ [0,T] (W_x + ). Since the initial value of + W_xis smaller thanϵ, Gronwall implies + W_x ≲ϵ + δ/ϵ, and by (<ref>),W_v ≲ϵ + δ/ϵ. §.§ Maxwellian Limit We consider (<ref>) under a strong Fokker-Planck penalization force (<ref>): ∂_t + v ·∂_x = ∇_v · ( (v - ) ) + 1/ϵ[ Δ_v + ∇_v · (v - _δ) ] ∂_t + ∇· () = 0 To obtain the equation for the macroscopic quantitiesand, we take thev-moments to get: ∂_t + ∇· () = 0 ∂_t + ∇· () = 0 ∂_t (ρ) + ∇· (ρ⊗) + ∇ + ∇· = ( - ) + 1/ϵ (_δ - ) , where_δis given in (<ref>) and (t,x) = ∫_^n ((v-) ⊗ (v - ) - 𝕀) . 
To justify the Maxwellian limit, we will assume the following continuity conditions on solutions to (<ref>) and on the averaging[·]_ρ: R1 is bounded uniformly in ϵ: sup_t ∈ [0,T](t, ·)_∞≤ C R2 - _L^2≲sup_t ∈ [0,T] (W_1(, ρ) + W_1(, ρ̆))(t, ·) R3 - _L^2≲sup_t ∈ [0,T] (W_1(, ρ) + W_1(, ρ̆))(t, ·) provided _0 = _0 R4 [·]_ρ: L^2(ρ) ↦ L^2(ρ) is bounded uniformly over ρ∈_M(Ω) with equal mass Note that (<ref>) clearly holds with our usual assumption on the velocity averaging and kernel. These conditions hold for the-model, which we will show at the end of this section. Define the Maxwellians associated to solutions(ρ, , )̆to (<ref>) and solutions(, , )to (<ref>) by μ(t,x) = ρ(t,x)/(2π)^n/2 e^|v - (̆t,x)|^2/2, (t,x) = (t,x)/(2π)^n/2 e^|v - (t,x)|^2/2 . and the relative entropy by ( | μ) = ( | ) + ( | μ) ( | μ) = 1/2∫_^n | - |̆^2 + ∫_^nlog( / ρ) . Due to the ,( | μ) → 0implies →ρ →ρ̆ inL^1(^n). In particular,( | μ) → 0implies that W_1(, ρ) → 0, W_1(, ρ̆) → 0. (Proof of Theorem <ref>) When an identity or inequality does not depend on the regularity of the strength and can be found in the proof for the Cucker-Smale case, we refer the reader to Section <ref>. As in the Cucker-Smale case, we break the relative entropy into the kinetic and macroscopic parts, ( | μ) = _ϵ + _ϵ with _ϵ, _ϵ given (<ref>) and (<ref>). By (<ref>), d/dt_ϵ = - 1/ϵ∫_^n ×^n[ |∇_v |^2/ + 2 ∇_v · (v - _δ + |v - _δ|^2 ) ] - 1/ϵ [ (_δ, )_ - (_δ, _δ)_ ] - ∫_^n ×^n [ ∇_v · (v - ) + v · (v - ) ] := A_1 + A_2 + A_3. A_1 is strictly negative so it can be dropped; according to <cit.> (due to ball-positivity of the overmollified M_ϕ model), A_2 is also strictly negative. And A_3 = n ∫_^n ×^n -∫_^n ×^n |v|^2 + (, )_κ^ϵ . Due to boundedness of (<ref>), A_3 ≤ c_1 + c_1 _L^2()^2 + (, )_κ^ϵ . Recalling that _ϵ = ∫ |v|^2 (x,v) = _L^2()^2 ≤ and using L^2(ρ) boundedness of [·]_ρ, we have A_3 ≤ c_1 + c_2 _ϵ≤ c_3 + c_4 , where the last inequality can be found in <cit.>. Thus, ≤ c_3 + c_4 , and by Gronwall is bounded uniformly in ϵ. Uniform boundedness of in ϵ will be used to control the Reynolds stress term. By (<ref>) and boundedness of (<ref>), d/dt_ϵ≤ -1/ϵ + c ϵ e( | ) - _L^2(κ_ϵ)^2 + (, )_κ^ϵ. Recall that is the Fisher information and is given by (<ref>). Turning to , the evolution equation is given by (<ref>). The term δ A is the same alignment term used in the proof of the monokinetic case. As in the monokinetic case, the inherited regularity is precisely what is needed to control this term. The difference here from the monokinetic case is that we use (<ref>) and (<ref>) instead of (<ref>), (<ref>); and we retain the term ∫_^n | - |̆^2 and bound it by instead of the modulated kinetic energy. We get δ A ≲∫_^n | - |̆^2 + sup_t ∈ [0,T] (W_1^2(, ρ) + W_1^2(, ρ̆))(t,·) and W_1^2(, ρ̆) ≤ - ρ̆_L^1^2 ≤ ( - )̆_L^1^2 + _∞^2 - ρ_L^1^2 . By , ( - )̆_L^1^2 ≲ - _L^2(ρ_ϵ)^2 ≤( | μ) ≤( | μ) . By - ρ_L^1^2 ≲( | ρ) ≤( | μ) ≤( | μ) . So W_1^2(, ρ̆) ≲( | μ) and therefore by (<ref>) δ A ≲( | μ) . The Reynolds stress term can be written = ∫_^n [2 ∇_v √() + (1 + ϵ /2) (v - ) √()] ⊗ [(v - ) √()] - ϵ/2 ∫_^n (v - ) ⊗ (v - ) . Note that e( | ) = _ϵ - 1/2∫_^n ||^2 ≤_ϵ≤ . Due to boundednes of (<ref>) and the boundedness of , we get ≲√(e( | ) ) + ϵ e( | ) ≲√() + ϵ. The remaining terms do not depend on are estimated in Section <ref>. In total, we get d/dt≲( | μ) + √() + ϵ + δ/ϵ + _L^2() - (, )_. Combining the estimates on and and using √()≤1/2ϵ + 2 ϵ we arrive at d/dt( | μ) ≲( | μ) - 1/ϵ + ϵ + δ/ϵ + √()≲( | μ) + ϵ + δ/ϵ. Gronwall concludes the proof. 
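For readers who wish to experiment numerically, the following helper (an illustration added here, not part of the proof) evaluates the central quantity of this subsection, the relative entropy of a kinetic density against its local Maxwellian, for data tabulated on a one-dimensional (x, v) grid. The macroscopic density and velocity are taken as the zeroth and first v-moments of f, the thermal variance is fixed to 1 as in the Maxwellian above, and the grid sizes are arbitrary; the sanity check builds an exactly Maxwellian f, for which the relative entropy should vanish.

import numpy as np

def relative_entropy_vs_maxwellian(f, xg, vg):
    # Given f >= 0 on a tensor (x, v) grid, compute rho(x), u(x) and
    # the relative entropy integral of f * log(f / mu), with
    # mu(x, v) = rho(x) / sqrt(2*pi) * exp(-(v - u(x))**2 / 2).
    dx = xg[1] - xg[0]
    dv = vg[1] - vg[0]
    rho = f.sum(axis=1) * dv                         # zeroth v-moment
    mom = (f * vg[None, :]).sum(axis=1) * dv         # first v-moment
    u = np.where(rho > 0, mom / np.maximum(rho, 1e-300), 0.0)
    mu = rho[:, None] / np.sqrt(2 * np.pi) * np.exp(-(vg[None, :] - u[:, None])**2 / 2)
    integrand = np.zeros_like(f)
    mask = f > 0
    integrand[mask] = f[mask] * np.log(f[mask] / mu[mask])
    return integrand.sum() * dx * dv

# sanity check: a density that is already locally Maxwellian gives ~0
xg = np.linspace(0, 2 * np.pi, 128, endpoint=False)
vg = np.linspace(-6, 6, 200)
rho = 1.0 + 0.3 * np.cos(xg)
u = 0.2 * np.sin(xg)
f = rho[:, None] / np.sqrt(2 * np.pi) * np.exp(-(vg[None, :] - u[:, None])**2 / 2)
print(relative_entropy_vs_maxwellian(f, xg, vg))     # close to 0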
We conclude this section by verifying (<ref>)-(<ref>) for the-model forϕ≥ c > 0. Recall that = _̆F = (ρ̆)_ϕ / ρ_ϕand = ρ_ϕwheresolves the pure transport along_Fgiven in (<ref>). Let(ρ, , )̆be a smooth solution to (<ref>) and(, , )be a solution to (<ref>); and let(ρ, , )and(, , )be the corresponding change of variables. For (<ref>),is transported sois clearly bounded. For (<ref>), we have - _L^2 = ()_ϕ - (ρ̆)_ϕ_L^2 ≤ - _L^2 (ρ̆)_ϕ_∞ + ()_ϕ - (ρ̆)_ϕ_L^2_L^∞. To control - _L^2, we turn to control the velocities_F - _̆F_∞. Sinceϕ≥ c > 0, _F - _̆F_∞ = ρ_ϕ ()_ϕ - _ϕ (ρ̆_ϕ) /ρ_ϕ_ϕ_∞≲( W_1(, ρ) + W_1(, ρ̆) ) . Integrating∂_t ( - )along characteristics, we get: - _L^2≲sup_t ∈ [0,T] (W_1(, ρ) + W_1(, ρ̆))(t, ·) . Moreover, ()_ϕ - (ρ̆)_ϕ_L^2≲sup_t ∈ [0,T] (W_1(, ρ) + W_1(, ρ̆))(t, ·) . (<ref>) follows similarly. For (<ref>), the Favre averaging is clearly a bounded operator onL^2when theϕis bounded from below away from zero. § RELAXATION TO GLOBAL MAXWELLIAN IN 1D Adding noise with the strength coefficient to the velocity equation leads to the Fokker-Planck-Alignment system given by ∂_t f + v ·∇_x f + ∇_v · ( (v - ) f) = σ_v f ∂_t + ∇_x · () = 0. We study the case where the velocity averaging is given by the Favre averaging:_̆F := (ρ̆)_ϕ / ρ_ϕ(i.e. the-model). Recall that the weightis defined by = ρ_ϕand the system becomes ∂_t f + v ·∇_x f = ρ_ϕ_v ·( σ∇_v f - v f) + (ρ̆)_ϕ·∇_v f ∂_t + _̆F ·∇_x = 0. Let us discuss well-posedness of (<ref>). General well-posedness theory for kinetic alignment equations based on a predefined strength_ρhas been developed in <cit.>. The new system, under the uniform regularity assumptions (<ref>) and (<ref>), which for the Favre filtration (withΦ_ρ(x,y) = ϕ(x-y) / ρ_ϕ(x)) means all-to-all interactions inf_Øϕ = c_0 >0, falls directly into the same framework of <cit.>. In fact, in this case every flock is automatically "thick", meaning that inf_Øρ_ϕ = c_0 M >0. We also have the a priori bounded energy ≤ c_1 + c_2, (t) = 1/2∫_Ω×^n |v|^2 f_t(x,v) on any finite time interval. Indeed, multiplying (<ref>) by1/2|v|^2and integrating by parts, we have: 1/2∫_Ω×^n |v|^2 f = ∫_Ω×^nρ_ϕ v · (vf - σ∇_v f) - ∫_Ω×^n (ρ)_ϕ· v f = ∫_Ω×^nρ_ϕ· |v|^2 f + n σ∫_Ω×^nρ_ϕ f - ∫_Ω×^n (ρ)_ϕ· v f = ∫_Ωρ_ϕ ||^2 ρ + n σ∫_Ωρ_ϕρ - ∫_Ω (ρ)_ϕ· (ρ) . Sinceremains bounded, the first term bounds the energy and the second term is bounded. For the last term, we estimate, ∫_Ω (ρ)_ϕ· (ρ) ≤_+ ϕ_+ _L^1(ρ)^2 ≲_+ ϕ_+ _L^2(ρ)^2 = _+ ϕ_+ , which yields the desired energy inequality. Since_L^1(ρ)≲_L^2(ρ) = √(), we have the inheritance of regularity: _x^k _̆F _∞ < C(k,M,J,T), for anyk∈. Consequently, according to (a trivial adaptation of) <cit.> for any data in the classical weighted Sobolev spacesf_0∈ H^m_q()and_0 ∈ C^m,inf_Ø_0 >0, where H^m_q() = { f : ∑_ k + l ≤ m∑_|| = k, || = l∫_ | v^q - k - l^_x^_v f |^2 <∞}, q > m, there exists a unique global in time classical solution in the same data space. Let us note that the transport ofpreserves the lower bound onuniformly in time, and by the automatic thickness condition we have the ellipticity coefficientρ_ϕuniformly bounded away from zero as well. Let us now recall a set of conditions on a given global solutionfthat are sufficient to imply global relaxation offto the Maxwellian μ_σ, = 1/|^n| (2πσ)^n/2 e^|v-|^2/2σ, where = 1/M∫_Øρ. We note that this total momentumis not preserved in time generally, but rather satisfies the equation ∂_t = ∫_Ø (_̆F - )̆_ρ, _ρ = ρ. 
We can't determine whethereventually settles to a fixed vector, as it does for all the classical alignment models including those that do not preserve the momentum. So, according to <cit.> a given solutionfconverges to the Maxwellian in the sense defined in (<ref>) provided there exists a set of fixed constantsc_0,... >0such that * c_0 ≤≤ c_1 and ∇_∞≤ c_2 for all ρ. * sup{ (,̆)_κ_ρ : ∈̆L^2(κ_ρ), = 0, _L^2(κ_ρ) = 1 }≤ 1 - ϵ_0 * [·]_ρ_L^2(ρ) → L^2(ρ) + ∇_x ( [·]_ρ)_L^2(ρ) → L^2(ρ)≤ c_3. We will be able to show (i)-(iii) in one dimensional case. Let us discuss these conditions starting from (i). The key condition here is∂_x _∞≤ c_2. We can establish such uniform control in 1D only. Indeed, since _t ∂_x + ∂_x (u_F ∂_x ) = 0, we can see that∂_x satisfies the same continuity equation asρ_ϕ(this only holds in 1D). Thus, _t ∂_x /ρ_ϕ + u_F _x ∂_x /ρ_ϕ = 0, Since initially|∂_x |/ρ_ϕ≤ Cfor some largeC>0due to the all-to-all interaction assumption, this bound will persist in time. Hence,_∞≤ C ρ_ϕ_∞≤ c_2. Condition (iii) follows from (i). Indeed, since, remain uniformly bounded, it reduces to ∫_Ø | (uρ)_ϕ |^2 ρ + ∫_Ø | (uρ)_ϕ |^2 ρ≤ c_3 u^2_L^2(ρ). This follows by the . Finally, let us get to (ii). We reduce the computation of the spectral gap to the low-energy method and the classical Cucker-Smale model. To this end, we assume thatϕ = ψ∗ψwhereψ>0is another positive kernel. In other words,ϕis Bochner-positive. In order to establish (<ref>) it suffices to show that (u, u)__̨ρ - (u, u_ρ)__̨ρ≥ϵ_0 (u, u)__̨ρ. Let us start by symmetrizing and using cancellation in the second obtained integral: (u, u)__̨ρ - (u, u_ρ)__̨ρ = ∫_Ø×Ø u(x) · (u(x) - u(y)) ρ(x) ρ(y) ϕ(x-y) (x) = 1/2∫_Ø×Ø |u(x) - u(y)|^2 ρ(x) ρ(y) ϕ(x-y) (x) - 1/2∫_Ø×Ø | u(y)|^2 ρ(x) ρ(y)((x) - (y)) ϕ(x-y) = I + II. Notice that I ≥_- ∫_Ø×Ø u(x) · (u(x) - u(y)) ρ(x) ρ(y) ϕ(x-y) = _- [ (u, u)_ρ_ϕρ - (u, u_ρ)_ρ_ϕρ ]. The difference of the inner productions inside the bracket represents exactly the spectral gap of the Cucker-Smale model computed in <cit.>. From that computation it follows that (u, u)_ρ_ϕρ - (u, u_ρ)_ρ_ϕρ≥ c M^3 (u, u)_ρ_ϕρ≥ c _- M^3 (u, u)__̨ρ, wherecdepends only on the kernelψ. Let us now estimateII: II ≤ (_+ - _-) (u, u)_ρ_ϕρ≤_+ - _-/_- (u, u)__̨ρ. We can see that provided _+ - _-/_-≤1/2 c _- M^3, we obtain (u, u)__̨ρ - (u, u_ρ)__̨ρ≥ϵ_0 (u, u)__̨ρ. which is the needed result. Let us collect now all the assumptions we have made and state the main result. Suppose n=1 and the kernel is Bochner-positive, ϕ = ψ∗ψ, with infψ >0. Then any initial distribution f_0∈ H^m_q() and strength _0 ∈ C^m satisfying the following small variation assumption sup (_0) - inf (_0)/inf (_0)^2≤ c M^3, for some absolute c>0, gives rise to a global classical solution f, which relaxes to the Maxwellian exponentially fast f(t) - μ_σ, u̅_L^1(^n ×^n)≤ c_1 σ^-1/2 e^-c_2 σ t. We note once again that the solution relaxes to a moving Maxwellian centered around a time-dependent momentumu̅. There two conceivable mechanisms of stabilizingu̅. One is sufficiently fast alignment: ∫_0^∞sup_x,y |u(x,t)- u(y,t)| <∞, which our relaxation seems to be not strong enough to imply. And another is stabilization of the density to a uniform distributionρ→1/|Ø|, which we do have from (<ref>) and in the case when≡it does imply exponential stabilization of the momentum for Favre-based models, see <cit.>. Ifvaries, even ifρ = 1/|Ø|, from (<ref>) we have, ∂_t u̅ = ∫_Ø (u_F - u) _ρ = 1/|Ø|∫_Ø ( u_ϕ - u ϕ_1) . 
We can see that unless (<ref>) holds, the persistent variations ofmay keep this term large leaving a possibility forever moving momentumu̅. 10ABFHKPPS G. Albi, N. Bellomo, L. Fermo, S.-Y. Ha, J. Kim, L. Pareschi, D. Poyato, and J. Soler. Vehicular traffic, crowds, and swarms: From kinetic theory and multiscale methods to applications and research perspectives. Math. Models Methods Appl. Sci., 29(10):1901–2005, 2019. CCTT2016 J. A. Carrillo, Y.-P. Choi, E. Tadmor, and C. Tan. Critical thresholds in 1DEuler equations with non-local forces. Math. Models Methods Appl. Sci., 26(1):185–206, 2016. CFRT2010 J. A. Carrillo, M. Fornasier, J. Rosado, and G. Toscani. Asymptotic flocking dynamics for the kinetic Cucker-Smale model. SIAM J. Math. Anal., 42(1):218–236, 2010. CS2007a F. Cucker and S. Smale. Emergent behavior in flocks. IEEE Trans. Automat. Control, 52(5):852–862, 2007. CS2007b F. Cucker and S. Smale. On the mathematics of emergence. Jpn. J. Math., 2(1):197–227, 2007. Favre A. Favre. Turbulence: Space-time statistical properties and behavior in supersonic flows. The Physics of Fluids, 26(10):2851–2863, 1983. FK2019 A. Figalli and M.-J. Kang. A rigorous derivation from the kinetic Cucker-Smale model to the pressureless Euler system with nonlocal alignment. Anal. PDE, 12(3):843–866, 2019. HL2009 S.-Y. Ha and J.-G. Liu. A simple proof of the Cucker-Smale flocking dynamics and mean-field limit. Commun. Math. Sci., 7(2):297–325, 2009. HT2008 S.-Y. Ha and E. Tadmor. From particle to kinetic and hydrodynamic descriptions of flocking. Kinet. Relat. Models, 1(3):415–435, 2008. LS-entropy T. M. Leslie and R. Shvydkoy. On the structure of limiting flocks in hydrodynamic Euler Alignment models. Math. Models Methods Appl. Sci., 29(13):2419–2431, 2019. MMPZ-survey P. Minakowski, P. B. Mucha, J. Peszek, and E. Zatorska. Singular Cucker-Smale dynamics. In Active particles, Vol. 2, Model. Simul. Sci. Eng. Technol., pages 201–243. Birkhäuser/Springer, Cham, 2019. MT2011 S. Motsch and E. Tadmor. A new model for self-organized dynamics and its flocking behavior. J. Stat. Phys., 144(5):923–947, 2011. MT2014 S. Motsch and E. Tadmor. Heterophilious dynamics enhances consensus. SIAM Rev., 56(4):577–621, 2014. Rey1987 C. W. Reynolds. Flocks, herds and schools: A distributed behavioral model. ACMSIGGRAPH Computer Graphics, 21:25–34, 1987. Sbook Roman Shvydkoy. Dynamics and analysis of alignment models of collective behavior. Nečas Center Series. Birkhäuser/Springer, Cham, [2021] 2021. shvydkoy2022environmental Roman Shvydkoy. Environmental averaging. arXiv preprint arXiv:2211.00117, 2022. S-hypo Roman Shvydkoy. Global hypocoercivity of kinetic Fokker-Planck-Alignment equations. Kinet. Relat. Models, 15(2):213–237, 2022. ST2 R. Shvydkoy and E. Tadmor. Eulerian dynamics with a commutator forcing II: Flocking. Discrete Contin. Dyn. Syst., 37(11):5503–5520, 2017. shvydkoy_teolis2023_smodel Roman Shvydkoy and Trevor Teolis. Well-posedness and long time behavior of the Euler Alignment System with adaptive communication strength. arXiv preprint arXiv:2310.00269, 2023 TT2014 E. Tadmor and C. Tan. Critical thresholds in flocking hydrodynamics with non-local alignment. Philos. Trans. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci., 372(2028):20130401, 22, 2014. Tadmor-notices Eitan Tadmor. On the mathematics of swarming: emergent behavior in alignment dynamics. Notices Amer. Math. Soc., 68(4):493–503, 2021. VCBCS1995 T. Vicsek, A. Czirók, E. Ben-Jacob, I. Cohen, and O. Shochet. 
Novel type of phase transition in a system of self-driven particles. Physical Review Letters, 75(6):1226–1229, 1995. VZ2012 T. Vicsek and A. Zafeiris. Collective motion. Physics Reports, 517:71–140, 2012.
http://arxiv.org/abs/2409.03063v1
20240904203109
Interplay of Charge Density Wave and Magnetism on the Kagomé Lattice
[ "Yu-Han Lin", "Jin-Wei Dong", "Ruiqing Fu", "Xian-Xin Wu", "Ziqiang Wang", "Sen Zhou" ]
cond-mat.str-el
[ "cond-mat.str-el" ]
CAS Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190, China School of Physical Sciences, University of Chinese Academy of Sciences, Beijing 100049, China Anhui Province Key Laboratory of Condensed Matter Physics at Extreme Conditions, High Magnetic Field Laboratory, Chinese Academy of Sciences, Hefei 230031, China CAS Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190, China School of Physical Sciences, University of Chinese Academy of Sciences, Beijing 100049, China CAS Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190, China Corresponding author: [email protected] Department of Physics, Boston College, Chestnut Hill, MA 02467, USA Corresponding author: [email protected] CAS Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190, China School of Physical Sciences, University of Chinese Academy of Sciences, Beijing 100049, China CAS Center for Excellence in Topological Quantum Computation, University of Chinese Academy of Sciences, Beijing 100049, China § ABSTRACT Motivated by the recent discovery of charge density wave (CDW) order in the magnetic kagomé metal FeGe, we study the single-orbital t-U-V_1-V_2 model on the kagomé lattice, where U, V_1, and V_2 are the onsite, nearest neighbor, and next-nearest-neighbor Coulomb repulsions, respectively. When the Fermi level lies in the flat band, the instability toward ferromagnetic (FM) order gives rise to a FM half-metal at sufficiently large onsite U. Intriguingly, at band filling n=17/24, the Fermi level crosses the van Hove singularity of the spin-minority bands of the half-metal. We show that, due to the unique geometry and sublattice interference on the kagomé lattice at van Hove singularity, the intersite Coulomb interactions V_1 and V_2 drive a real and an imaginary bond-ordered 2a_0 × 2a_0 CDW instability, respectively. The FM loop current CDW with complex bond orders is a spin-polarized Chern insulator exhibiting the quantum anomalous Hall effect. The bond fluctuations are found to be substantially enhanced compared to the corresponding nonmagnetic kagomé metals at van Hove filling, providing a concrete model realization of the bond-ordered CDWs, including the FM loop current CDW, over the onsite charge density ordered states. When the spins are partially polarized, we find that the formation of bond-ordered CDWs enhances substantially the ordered magnetic moments. These findings provide physical insights for the emergence of loop-current and bond-ordered CDW and their interplay with magnetism on the kagomé lattice, with possible connections to the magnetic kagomé metal FeGe. Interplay of Charge Density Wave and Magnetism on the Kagomé Lattice Sen Zhou ==================================================================== Introduction. - Kagomé metals (A = K, Rb, Cs) <cit.> have attracted enormous attention recently as a fertile playground for exploring novel states of matter <cit.> due to the interplay between lattice geometry, band topology, and electron correlation inherent on the kagomé lattice. 
A rich array of correlated and topological electronic states has been observed <cit.>, including a novel charge density wave (CDW) metal with 2a_0 × 2a_0 in-plane periodicity below T_cdw, rotation symmetry breaking, charge-stripe order, electronic nematicity, and a superconductor exhibiting pair density wave order. Central to the physics of is the evidence for spontaneous time-reversal symmetry (TRS) breaking in the exotic CDW state <cit.>, despite the absence of local moment <cit.> and itinerant magnetism <cit.>. A promising candidate for the TRS breaking is the bond CDW with persistent loop current (LC) order <cit.>, a long-sought-after quantum order also relevant for the quantum anomalous Hall insulator <cit.> and the pseudogap phase of cuprate superconductors <cit.>. The physical origin, model realization, and broader implications of the proposed LC order have been a subject of intensive research <cit.>. More recently, a CDW order with identical 2a_0× 2a_0 in-plane periodicity was discovered in the magnetic kagomé metal FeGe below T_cdw∼110 K <cit.>, deep inside the A-type antiferromagnetic (AFM) phase (T_N∼410 K). Here the magnetic moments order ferromagnetically (FM) within each kagomé layer, with the out-of-plane moments anti-aligned between adjacent layers <cit.>. Moreover, the ordered magnetic moments increase substantially at the onset of the CDW transition <cit.>, indicating strong coupling between CDW and magnetism. Though recent theoretical calculations <cit.> and experimental investigations <cit.> suggest a CDW mechanism in FeGe different from that in , the presence of van Hove (VH) singularities in the vicinity of the Fermi level revealed in angle-resolved photoemission spectroscopy (ARPES) <cit.>, together with the observed anomalous Hall effect <cit.> and topological edge modes <cit.>, hints at possible connections between the CDWs in magnetic FeGe and nonmagnetic kagomé metals. Here, we study the interplay between FM order and CDW on the kagomé lattice and provide a mechanism for the emergence of a topological magnetic CDW state that may be relevant for FeGe. To this end, we consider the minimal single-orbital model on the kagomé lattice depicted in Fig. <ref>a. The Fermi surface (FS) at van Hove (vH) filling n=5/12 is the hexagon in Fig. <ref>b that connects the three independent vH points labeled by _1,2,3 at the Brillouin zone boundary. These vH points with divergent density of states are perfectly nested by the 2a_0 × 2a_0 wave vectors _α = _α/2, where α=1,2,3 and _α is the reciprocal wave vector. Since the Bloch state at vH point _α is exclusively localized on the αth sublattice due to the sublattice interference <cit.>, as shown in the sublattice-resolved FS in Fig. <ref>b, the leading 2a_0 × 2a_0 instability is believed to be associated with a bond order that connects different sublattices. Therefore, while the onsite Coulomb repulsion U acting on the same sublattice is obstructed, inter-site Coulomb repulsions are important for the formation of 2a_0 × 2a_0 CDWs. Weak-coupling and mean-field theories find that inter-site Coulomb repulsions V_1 on nearest neighbor (nn) bonds and V_2 on next-nn (nnn) bonds indeed drive real and complex bond-ordered CDWs with LC order <cit.>. This picture captures the most essential physics of CDW in nonmagnetic . We propose that for the magnetic FeGe, the Fermi level in the high-temperature symmetric phase lies instead inside the flat band of the Fe d electrons (Fig.
<ref>c), which is conjectured based on the observations of the ARPES experiments <cit.>. The onsite U, assisted by the divergent uniform spin susceptibility tied to the flat band, drives a FM ordered state with spin-split bands shown in Fig. <ref>d. Intriguingly, at band filling n=17/24 and for a sufficiently large U, the strong FM ordered phase becomes a half-metal (Fig. <ref>e) where the spin-majority (↑) bands are pushed completely below the Fermi level, which passes through the vH singularity of the spin-minority (↓) bands, i.e. n_↑=1 and n_↓=5/12. The spin-polarized bond fluctuations turn out to be substantially enhanced compared to the spin-degenerate nonmagnetic kagomé metals at vH filling <cit.>. The leading instability of the FM half-metal is toward nn real and nnn imaginary bond order rather than the onsite charge density ordered (CDO) states. When all particle-hole channels in the Wick contraction are treated on equal footing, we show that the intersite Coulomb interactions (V_1,V_2) drive the FM ordered state into 2a_0 × 2a_0 real and complex CDW phases with spin-polarized LC order, providing a concrete model realization of bond-ordered CDW and LC order <cit.>. We obtain and discuss the properties of the stable phases and the phase diagram. In particular, we show that at a smaller U where spins are partially polarized, the emergence of bond-ordered CDW in the FM ordered kagomé plane induces a significant enhancement of the ordered magnetic moments. Model and mean-field theory. - We consider the single-orbital t-U-V_1-V_2 model on the kagomé lattice H = -t ∑_⟨ i,j⟩, σ c^†_iσ c_jσ +U ∑_i n̂_i↑n̂_i↓ +V_1 ∑_⟨ i,j⟩n̂_i n̂_j +V_2 ∑_⟨⟨ i,j ⟩⟩n̂_i n̂_j, where c^†_iσ creates a spin-σ electron on lattice site i, and the electron density operators are n̂_iσ = c^†_iσ c_iσ and n̂_i = ∑_σn̂_iσ. Hereinafter, we set the nn hopping t≡ 1 as the energy unit. Decoupling the onsite Coulomb repulsion U in terms of the local magnetic moment m̂^η_i = ∑_σσ' c^†_iστ^η_σσ' c_iσ' (η=x,y,z), and the inter-site Coulomb repulsions V_1 and V_2 in terms of bonds ^μ_ij = ∑_σσ' c^†_iστ^μ_σσ' c_jσ' (μ=0,x,y,z), with τ^η/μ the corresponding Pauli matrix, one obtains the mean-field Hamiltonian H_MF= -t∑_⟨ i,j⟩^0_ij - U/4∑_i,η[ 2m^η_i m̂^η_i - (m^η_i )^2 ] - V_1/2∑_⟨ i,j⟩, μ[ (^μ_ij)^* ^μ_ij +h.c. - |^μ_ij|^2 ] - V_2/2∑_ i,j, μ[ (^μ_ij)^* ^μ_ij +h.c. - |^μ_ij|^2 ]. The order parameters, the magnetic moment m^η_i =⟨m̂^η_i ⟩ and the bonds ^μ_ij =⟨^μ_ij⟩, are to be determined self-consistently by minimizing the state energy. We note that in the mean-field theory used here <cit.>, the direct Hartree terms of the interactions depending on the electron density n_i =⟨n̂_i ⟩ are neglected to avoid double-counting, since their contributions are already considered in the LDA <cit.>. These terms are expected to be insignificant in these kagomé metals, since no large charge disproportionation has been observed. Moreover, the symmetry-preserving correlation-induced bond orders, i.e., the real uniform components of the bond order parameters ^μ_ij, are subtracted in the interactions, such that the vH singularities and the band structure are maintained without renormalizing the bare band parameters <cit.>. This is consistent with the fact that the vH singularity at the points has been established by ARPES in both and FeGe. Implementing this subtraction scheme, the correlation effects in the self-consistent mean-field theory will only generate spontaneous symmetry-breaking states.
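As a self-contained illustration of the band structure underlying this discussion (not part of the paper's calculations), the short script below diagonalizes the standard nearest-neighbour kagomé tight-binding Bloch Hamiltonian along Γ-K-M-Γ. With the assumed conventions (lattice vectors a_1 = (1,0) and a_2 = (1/2, sqrt(3)/2), sublattices at 0, a_1/2, a_2/2, hopping -t with t > 0), the flat band sits at E = +2t, the two dispersive bands touch at the Dirac points K, and the saddle points responsible for the vH singularities sit at the M points; the sublattice labels and the high-symmetry path are choices made here.

import numpy as np

t = 1.0
a1 = np.array([1.0, 0.0])
a2 = np.array([0.5, np.sqrt(3) / 2])
d = [a1 / 2, a2 / 2, (a2 - a1) / 2]      # A-B, A-C, B-C nearest-neighbour offsets

def bloch_h(k):
    # 3x3 nearest-neighbour kagome Bloch Hamiltonian for H = -t sum_<ij> c_i^dag c_j
    hAB = -2 * t * np.cos(k @ d[0])
    hAC = -2 * t * np.cos(k @ d[1])
    hBC = -2 * t * np.cos(k @ d[2])
    return np.array([[0, hAB, hAC],
                     [hAB, 0, hBC],
                     [hAC, hBC, 0]], dtype=float)

# high-symmetry points of the hexagonal Brillouin zone (these conventions)
G = np.array([0.0, 0.0])
K = np.array([4 * np.pi / 3, 0.0])
M = np.array([np.pi, np.pi / np.sqrt(3)])

def path(p, q, n=100):
    return [p + (q - p) * s for s in np.linspace(0, 1, n)]

kpts = path(G, K) + path(K, M) + path(M, G)
bands = np.array([np.linalg.eigvalsh(bloch_h(k)) for k in kpts])

print("energies at Gamma:", np.round(bands[0], 3))                        # [-4, 2, 2] * t
print("energies at M    :", np.round(np.linalg.eigvalsh(bloch_h(M)), 3))  # [-2, 0, 2] * t
print("flat band spread :", bands[:, 2].max() - bands[:, 2].min())        # numerically ~0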
We stay at band filling n=17/24, at which the Fermi level of free electrons passes through the flat band, as shown in the tight-binding band dispersion displayed in Fig. <ref>c. The presence of flat band at Fermi level gives rise to divergent bare susceptibilities at wave vector =0, and consequently leads to colinear FM at any infinitesimal U. Though the spins in the studied model have global SU(2) symmetry, we restrict, for simplicity, the spins to order along c axis, in connection to FeGe. Consequently, bond orders _x,ij and _y,ij vanish on all bonds. It is thus more convenient to define the spin-dependent bonds, ^σ_ij = 1 2(^0_ij +σ^z_ij), which we use hereinafter. Furthermore, we consider the quantum states described by the Hamiltonian in Eq. (<ref>) to be periodic with an enlarged 2a_0 × 2a_0 unit cell, which allows investigations on the interplay between =0 magnetism and 2a_0 × 2a_0 CDW. In addition, we focus on states invariant under C_6 rotation and satisfying the current continuity condition <cit.>, which implies that there are, independently, two lattice sites, three nn bonds, and three nnn bonds within each 2a_0 × 2a_0 enlarged unit cell. In order to obtain all the possible states, we use different initial conditions for solving the self-consistent equations numerically. When multiple converged states exist for a given set of Coulomb interactions, we compare the state energies to determine the true mean-field ground state. FM LC CDWs - The fully polarized FM phase at sufficiently large U is a half-metal, with the spin-majority (↑) bands pushed completely below the Fermi level. Remarkably, the Fermi level exactly passes through the vH points of the spin-minority (↓) bands, as shown by the FM band dispersion at U=6 displayed in Fig. <ref>e. Therefore, the fully polarized t-U-V_1-V_2 model at a fixed large U and filling n=17/24 provides effectively a spinless counterpart of the spinfull t-V_1-V_2 model at vH filling studied in Ref. <cit.>. Fig. <ref>a presents the mean-field phase diagram at fixed U=6 in the plane spanned by V_1 and V_2. Indeed, nn V_1 and nnn V_2 drives, respectively, real and complex bond-ordered CDW, coexisting with the fully polarized FM induced by the large onsite U. The ground state at V=(1.5, 1) is schematized in Fig. <ref>b. The electron densities on the two inequivalent sites, A and B, are 1.404 and 1.430, and both of them are fully spin polarized. Consequently, the spin-majority (↑) electrons are completely localized on lattice sites, with ^↑_ij=0 on all bonds. The nn bonds for spin-minority (↓) electrons are real and ^↓ = (0.264, 0.268, 0.172), with the stronger bonds forms the inverse Star-of-David (ISD) pattern displayed in Fig. <ref>b. This coexistent state of fully polarized FM and real bond-ordered CDW is thus referred to as FM+ISD from now on. Fig. <ref>c plots the corresponding band dispersion along the high-symmetry path in the reduced Brillouin zone (BZ), where the vH points of spin-down bands near the Fermi level are fully gapped by the 2a_0× 2a_0 CDW. The coexistent state of fully polarized FM and complex bond-ordered CDW obtained at V=(1.2, 2.8) is illustrated in Fig. <ref>d, with n_A = 1.420, n_B = 1.413, and ^↓ = (0.197 + 0.028i, 0.194 - 0.029i, 0.241 + 0.028i). We call this state as FM+LC hereinafter, since the complex bonds of spin-↓ electrons generate spin-polarized LCs circulating on the kagomé lattice. Consequently, the spin-↓ bands are topologically nontrivial. The band dispersion is presented in Fig. 
<ref>e, with the Chern numbers of the isolated spin-↓ bands indicated by numbers. Note that while the Chern number of individual band may change its value as V_1,2 change in the FM+LC regime in the phase diagram we explored, the total Chern number of all occupied bands 𝒩 remains unchanged and 𝒩=-1. As a result, FM+LC is a spin-polarized Chern insulator exhibiting quantum anomalous Hall effect <cit.>. CDW enhanced magnetism - Next, we set onsite U=3.6 where the magnetic ground state is a partially polarized FM with ordered moment m = 0.328 μ_B, which we label as a pFM to distinguish from FM with full spin polarization. The band dispersion is shown in Fig. <ref>d. The vH points of spin-minority electrons are driven closer to, but still quite away from the Fermi level, due to the insufficiency of FM splitting. Fig. <ref>a displays the ground state phase diagram at fixed U=3.6 in the V_1-V_2 plane. Clearly, the uniform pFM is the ground state in the regime where inter-site interactions are weak, while FM+ISD and FM+LC set in, respectively, when nn V_1 or nnn V_2 is sufficiently strong. Intriguingly, at the onset of 2a_0 × 2a_0 bond-ordered CDW, the FM moments are enhanced from a partially polarized value in pFM to fully polarized one in FM+ISD or FM+LC. This drives the vH points of spin-minority bands to the Fermi level and thus gains energy by gapping out the vH points via the formation of 2a_0 × 2a_0 CDW. The enhancement of ordered magnetic moments by the emergence of CDW is illustrated more evidently in the temperature evolution of FM+ISD at V=(1, 1) in Fig. <ref>b and FM+LC at V=(0.8, 2) in Figs. <ref>c and <ref>d. At high temperatures, the system is a paramagnetic (PM) metal at both sets of interactions, with m=0 on all sites and χ^↑_ij =χ^↓_ij on all bonds. As temperature reduces, pFM with spins partially polarized sets in via a continuous phase transition. The spin-up and spin-down hoppings start to deviate from each other. This phase transition breaks the time-reversal symmetry but preserves the translation symmetry. The latter is broken at a lower temperature, where a first-order phase transition takes place to the coexisting state of FM and 2a_0 × 2a_0 bond-ordered CDW, i.e., FM+ISD at V=(1, 1) and FM+LC at V=(0.8, 2). This transition is accompanied by a substantial enhancement in the ordered magnetic moments <cit.> and the energy separation between spin-majority and spin-minority bands. The latter may induce a sudden spectral weight transfer detectable in optical spectroscopy <cit.>. Rivalry with onsite charge density order. - So far, the mean-field theory used in this work has subtracted the contributions from onsite Hartree terms and the symmetry preserving bond corrections. Though these subtractions are reasonable and physical, as we discussed above, it is desirable to investigate theoretically if the exotic LC order can develop spontaneously in the microscopic model within a complete, self-contained mean-field theory. The perfectly nested FS and its sublattice features shown in Fig. <ref>b inform us that the vH filled kagomé lattice has instabilities at 2a_0 × 2a_0 wavevectors = associated with bond orders connecting two different sublattices. Similarly, it has instabilities at wavevector =0 that involve onsite density orders acting on the same lattice. Therefore, the Hartree terms depending on the electron density would drive 1a_0 × 1a_0 CDO. 
The =0 CDO maintains the translation symmetry but breaks the rotation symmetry, with the charge density modulating over the three sublattices in each unit cell. It is expected to prevail over the 2a_0× 2a_0 bond-ordered CDWs in the regime with strong inter-site Coulomb repulsions, so as to gain electrostatic potential energy. On the other hand, the nnn V_2 dynamically introduces a nonzero nnn hopping integral and thus alleviates or even removes the vH singularity at the points, further discouraging the emergence of LC. Indeed, LC CDW fails to stabilize as the ground state in the entire phase diagram of the vH filled t-V_1-V_2 model studied in Ref. <cit.>, as shown in the End Matter. This raises the concern of whether LC order can emerge spontaneously in a microscopic model on the kagomé lattice without any subtraction, even for physical reasons. The pursuit of spontaneous LC order on the kagomé lattice has led to recent studies of the spinless model where the spin degree of freedom is removed <cit.>. As compared to its spinful counterpart, the charge fluctuations on the bonds are effectively enhanced by a factor of 2 with respect to those on lattice sites in the spinless model, raising the hope of LC CDW being the ground state. Indeed, random phase approximation <cit.> and functional renormalization group <cit.> studies of the spinless t-V_1-V_2 model reveal real and imaginary bond-ordered CDW states at small to intermediate Coulomb repulsions, which give way to =0 CDO when V_1,2 are strong. This is further confirmed by the complete mean-field calculations reported in the End Matter. Intriguingly, at large enough U, the t-U-V_1-V_2 model provides a physical realization of the spinless t-V_1-V_2 model in which the LC order can survive in a certain parameter regime. First, we include the Hartree terms, and the ground state phase diagram is presented in Fig. <ref>a at n=17/24 and U=6t. Clearly, as compared to the diagram in Fig. <ref>a, the regime with strong inter-site interactions is now taken over by onsite CDO and its coexistence with FM. The phase diagram obtained by complete mean-field calculations is displayed in Fig. <ref>b, where some of the area for FM+LC is further replaced by the FM metal. Nevertheless, LC order survives in a small parameter regime of the t-U-V_1-V_2 model even when magnetism, onsite CDO, and bond orders are treated on equal footing, in sharp contrast to the t-V_1-V_2 model at vH filling where LC order is absent. The regime for LC order can be enlarged by introducing a finite nnn hopping t' in the bare band parameters, which compensates to some extent the dynamical correction from the nnn V_2, as shown in Fig. <ref>c for t'=0.015. Interestingly, FM+CDO and CDO differ from each other dramatically, as the two equivalent sites in the unit cell have smaller electron density in FM+CDO but larger electron density in CDO, as illustrated in Fig. <ref>d. Summary and discussions. - The single-orbital t-U-V_1-V_2 model on the kagomé lattice is studied within mean-field theories at n=17/24, at which the noninteracting Fermi level passes through the flat band. At sufficiently large onsite Coulomb repulsion, say U=6, the spins are fully polarized. The strong FM splitting pushes the spin-majority bands completely below the Fermi level, and the spin-minority bands realize effectively a spinless model at vH filling. Consequently, ISD and LC with real and complex 2a_0 × 2a_0 bond orders are induced by the inter-site Coulomb repulsions, giving rise to the fully spin-polarized FM+ISD and FM+LC.
Both FM+ISD and FM+LC are fully gapped insulators. FM+ISD is topologically trivial, whereas FM+LC is a spin-polarized Chern insulator exhibiting quantum anomalous Hall effect. Interestingly, FM+LC with 2a_0 × 2a_0 LC orders persists as a ground state presented in the V_1-V_2 phase diagram even when the model is treated by the full mean-field theory, in analogue to the situation in the spinless t-V_1-V_2 model at vH filling. In addition, when the spins are partially polarized at a smaller U, the formation of bond-ordered CDW triggered by inter-site Coulomb repulsions induces a significant enhancement in the ordered magnetic moments, which is illustrated clearly in the temperature evolution of states. The physical origin of CDW with 2a_0 × 2a_0 in-plane periodicity in FeGe is still elusive. Recent theoretical calculations and experimental measurements <cit.> suggest that CDW in FeGe arises from out-of-plane dimerization of Ge atoms that saves magnetic exchange energy, implying the importance of spin-phonon coupling, instead of electron-phonon coupling in <cit.>. On the other hand, neither electron-phonon nor spin-phonon coupling alone is capable of producing topological states with anomalous Hall effect. The presence of vH singularities in the vicinity of Fermi level, together with the observed anomalous Hall effects in both FeGe and , hint at possible connections between the CDWs in magnetic FeGe and nonmagnetic kagomé metals. Therefore, though this work of single d-orbital model with electron correlations is an oversimplified model and has limited connections to the real material, it may provide some physical insights on the emergence of CDW in FeGe and its interplay with magnetism. This work considers a single kagomé layer, where magnetic moments are FM ordered and the orbital magnetic moments generated from circulating LCs are staggered with zero net moments. Neutron scattering revealed that magnetic moments are anti-aligned between adjacent layers <cit.>. It is desirable to investigate the inter-layer correlation of the LC order <cit.> in the presence of A-type AFM. Acknowledgments. -This work is supported by the National Key Research and Development Program of China (Grant No. 2022YFA1403800 and 2023YFA1407300), the National Natural Science Foundation of China (Grant Nos. 12374153, 12047503, and 11974362), and the Strategic Priority Research Program of Chinese Academy of Sciences (Grant No. XDB28000000). Z.W. is supported by the U.S. Department of Energy, Basic Energy Sciences (Grant No. DE-FG02-99ER45747) and the Research Corporation for Science Advancement (Cottrell SEED Award No. 27856). Numerical calculations in this work were performed on the HPC Cluster of ITP-CAS. § END MATTER §.§ A. Mean-field theories for spinful and spinless models The spinful t-U-V_1-V_2 model on the kagomé lattice reads H = -t ∑_⟨ i,j⟩, σ( c^†_iσ c_jσ +h.c. ) +U ∑_i n̂_i↑n̂_i↓ +V_1 ∑_⟨ i,j⟩n̂_i n̂_j +V_2 ∑_⟨⟨ i,j ⟩⟩n̂_i n̂_j, where c^†_iσ creates a spin-σ electron on lattice site i, the electron density operators n̂_iσ = c^†_iσ c_iσ and n̂_i = ∑_σn̂_iσ. Decoupling the interactions in all particle-hole channels on equal footing, one obtains the mean-field Hamiltonian H_MF= -t∑_⟨ i,j⟩( ^0_ij +h.c.) + U 4∑_i [ 2n_i n̂_i - n^2_i ] + V_1 ∑_⟨ i,j⟩[ n_j n̂_i + n_i n̂_j -n_i n_j ] + V_2 ∑_ i,j[ n_j n̂_i + n_i n̂_j -n_i n_j ] -U 4∑_i,η[ 2m^η_i m̂^η_i - (m^η_i )^2 ] - V_1 2∑_⟨ i,j⟩, μ[ (^μ_ij)^* ^μ_ij +h.c. - |^μ_ij|^2 ] - V_2 2∑_ i,j, μ[ (^μ_ij)^* ^μ_ij +h.c. - |^μ_ij|^2 ] . 
Here, local magnetic moment m̂^η_i = ∑_σσ' c^†_iστ^η_σσ' c_iσ' (η=x,y,z), nn and nnn bonds ^μ_ij = ∑_σσ' c^†_iστ^μ_σσ' c_jσ' (μ=0,x,y,z), with τ^η/μ the corresponding Pauli matrix. The order parameters, electron density n_i =⟨n̂_i ⟩, magnetic moment m^η_i =⟨m̂^η_i ⟩, and bonds ^μ_ij =⟨^μ_ij⟩, are to be determined self-consistently by minimizing the mean-field state energy. Subtracting the direct Hartree terms depending on the electron density n_i from the mean-field Hamiltonian in Eq. (<ref>), one reach the Hamiltonian given in Eq. (2) in the main text. Subtracting further the symmetry-preserving bond corrections in the interactions maintains the bare band structure with van Hove singularities. In the spinless model where the onsite U is absent due to Pauli exclusion, we consider the t-V_1-V_2 model given by H = -t ∑_⟨ i,j⟩( c^†_i c_j +h.c. ) +V_1 ∑_⟨ i,j⟩n̂_i n̂_j +V_2 ∑_⟨⟨ i,j ⟩⟩n̂_i n̂_j, where c^†_i creates a spinless electron on lattice site i, the electron density operators n̂_i = c^†_i c_i. Decoupling the interactions in all particle-hole channels, i.e., density n̂_i and bonds _ij = c^†_i c_j, on equal footing, the obtained mean-field Hamiltonian reads H_MF= -t∑_⟨ i,j⟩( _ij +h.c.) + V_1 ∑_⟨ i,j⟩[ n_j n̂_i + n_i n̂_j -n_i n_j ] + V_2 ∑_ i,j[ n_j n̂_i + n_i n̂_j -n_i n_j ] - V_1 ∑_⟨ i,j⟩[ ^*_ij_ij +h.c. - |_ij|^2 ] - V_2 ∑_ i,j[ ^*_ij_ij +h.c. - |_ij|^2 ] . Please note that, while the prefactors of the Hartree terms from inter-site Coulomb interactions are the same in Eq. (<ref>) and Eq. (<ref>), there is a factor of 2 difference between the prefactors of bond terms. Therefore, the bond fluctuations is substantially enhanced in the spinless model, as compared to the spinful model. This is consistent with the analysis in random phase approximation <cit.>. To demonstrate the importance of this difference, we consider spinful and spinless t-V_1-V_2 model at van Hove filling n=5/12. The mean-field ground state phase diagrams for the spinful model are presented in Figs. <ref>a-<ref>c, and those for spinless model in Figs. <ref>d-<ref>f. In Figs. <ref>a and <ref>d, both the Hartree terms and symmetry-preserving bond corrections are subtracted in the calculations. Since onsite charge order is neglected and the van Hove singularity is maintained, the system converges to bond-ordered CDW in the entire phase diagram, with V_1 and V_2 favors, respectively, real and complex CDW with loop current (LC) order. It is easy to show that these two diagrams become identical after multiplying the axes by a factor of 2. Figs. <ref>b and <ref>e subtract the real uniform components of bonds from the interactions, but take into account the direct Hartree terms. In the spinful model, the bond-ordered CDW is defeated by the onsite charge density ordered (CDO) states almost in the entire phase diagram, leaving a small regime for the real CDW with inverse Star-of-David (ISD) pattern and LC order fails to develop spontaneously anywhere. In the spinless model, however, bond-ordered CDWs, ISD and LC, survive in the rivalry with CDO in the regime where inter-site Coulomb repulsions are weak, since the bond fluctuations are substantially enhanced here as compared in the spinful model. LC CDW even present itself as a ground state in the full mean-field calculations where all particle-hole channels are treated on equal footing, as shown in Fig. <ref>f. §.§ B. 
Magnetic transition on the kagomé lattice at band filling n = 17/24 Here, we investigate the magnetic ground state of the t-U model on the kagomé lattice at band filling n = 17/24. The presence of a flat band at the Fermi level gives rise to a divergent spin susceptibility at wave vector 𝐪=0. In addition, the onsite Coulomb repulsion U disfavors charge fluctuations. As a result, the ground states are expected to be 1a_0 × 1a_0 charge-uniform magnetically ordered states. Indeed, self-consistent mean-field calculations yield three kinds of states: fully polarized ferromagnetic (FM), partially polarized FM (pFM), and coplanar 120^∘ antiferromagnetic (AFM) states. Their state energy per site relative to the paramagnetic phase is displayed in Fig. <ref>a as a function of U, and the corresponding ordered moments are shown in Fig. <ref>b. Clearly, upon increasing U, the ground state changes from pFM to FM via a first-order transition at U≃ 3.603t.
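For concreteness, the self-consistent loop behind such mean-field results can be sketched as follows. This is a minimal illustration, assuming a generic tight-binding Hamiltonian in a site basis and decoupling only the onsite density (Hartree) channel; the function name, the linear-mixing scheme, and the spin-degenerate filling are simplifying assumptions, not the full decoupling over density, magnetic, and bond channels used above.

```python
import numpy as np

def mean_field_scf(h0, U, n_elec, mix=0.3, tol=1e-8, max_iter=500):
    """Minimal self-consistent Hartree loop for an onsite-U lattice model.

    h0      : (N, N) tight-binding Hamiltonian in a site basis
    U       : onsite repulsion
    n_elec  : number of occupied levels per spin (spin-degenerate toy case)
    Only the density channel is decoupled here; bond and magnetic
    channels of the full mean-field theory are omitted for brevity.
    """
    n_sites = h0.shape[0]
    density = np.full(n_sites, n_elec / n_sites)      # uniform initial guess
    for it in range(max_iter):
        h_mf = h0 + np.diag(U * density)              # Hartree shift
        evals, evecs = np.linalg.eigh(h_mf)           # diagonalize
        occ = evecs[:, :n_elec]                       # fill the lowest levels
        new_density = np.sum(np.abs(occ) ** 2, axis=1)
        if np.max(np.abs(new_density - density)) < tol:
            return evals, new_density, it             # converged
        density = (1 - mix) * density + mix * new_density  # linear mixing
    return evals, density, max_iter
```

In practice the same loop is repeated over a grid of interaction strengths (here U, or V_1 and V_2 in the extended models), and the converged state energies are compared to build the phase diagram.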
http://arxiv.org/abs/2409.03270v1
20240905062732
SVP: Style-Enhanced Vivid Portrait Talking Head Diffusion Model
[ "Weipeng Tan", "Chuming Lin", "Chengming Xu", "Xiaozhong Ji", "Junwei Zhu", "Chengjie Wang", "Yanwei Fu" ]
cs.CV
[ "cs.CV" ]
Figure: In talking head generation, given audio and a reference image, both the GAN-based method SadTalker and the diffusion-based method V-Express generate monotonous portrait videos, in which the primary movement is observed in the lips. In contrast, our approach is capable of generating diverse and vivid portrait videos based on varying intrinsic styles. [1]Equal contributions. This work was done when Weipeng Tan was an intern at Tencent Youtu Lab. § ABSTRACT Talking Head Generation (THG), typically driven by audio, is an important and challenging task with broad application prospects in various fields such as digital humans, film production, and virtual reality. While diffusion model-based THG methods achieve high-quality and stable content generation, they often overlook the intrinsic style, which encompasses personalized features such as the speaking habits and facial expressions of a video. As a consequence, the generated video content lacks diversity and vividness and is thus limited in real-life scenarios. To address these issues, we propose a novel framework named Style-Enhanced Vivid Portrait (SVP) which fully leverages style-related information in THG. Specifically, we first introduce novel probabilistic style prior learning to model the intrinsic style as a Gaussian distribution using facial expressions and audio embeddings. The distribution is learned through a bespoke contrastive objective, effectively capturing the dynamic style information in each video. Then we finetune a pretrained Stable Diffusion (SD) model to inject the learned intrinsic style as a controlling signal via cross attention. Experiments show that our model generates diverse, vivid, and high-quality videos with flexible control over intrinsic styles, outperforming existing state-of-the-art methods. § INTRODUCTION Recent advancements in generative models have shed light on generating high-quality and realistic videos under various controlling conditions such as texts <cit.>, images <cit.>, videos <cit.>, etc. Among the different subtasks of video generation, Talking Head Generation (THG), a human-centric task that aims to generate videos of talking heads guided by conditions such as speech and images, has emerged as a significant problem due to its wide application in scenarios such as digital humans, film production, and virtual reality. In spite of its importance, it is one of the most challenging tasks in video generation, resulting from its low tolerance to artifacts in general and its demand for high fidelity in lip shapes, facial expressions, and head motions. Following the commonly used generative models, GAN-based THG methods <cit.> have achieved remarkable results in generating high-resolution videos through adversarial training between generators and discriminators, particularly in terms of visual quality and lip-sync accuracy. Diffusion model-based THG methods <cit.>, on the other hand, excel in generating high-quality and high-resolution images and videos and outperform GANs in terms of the stability and consistency of the generated content, thus becoming the mainstream approach for THG. These methods largely facilitate THG by strengthening explicit controlling conditions such as facial keypoints and head motion sequences. However, they generally ignore an important fact about talking head videos.
Essentially, when different people present speeches in real-life cases, they could have significant differences in habits and emotions under various circumstances. Such a fact in turn leads to different attributes in the corresponding talking head videos, including the visemes and expressions. Consequently, these habits and emotions are embedded as the intrinsic style in talking head videos. This intrinsic style, while being highly related to whether a video is realistic, can hardly be inferred from conditions such as facial keypoints which are widely adopted by the previous methods. As a result, when there is a large gap in the intrinsic style between the reference face and the speaker of the style reference video, the previous methods struggle to reproduce the real situation accurately. To this end, we propose a novel framework named , which can effectively extract intrinsic style features with the assistance of audio information through a self-supervised method, and apply them to the generation of talking head videos in a manner suitable for diffusion models. This approach not only improves the overall quality of the generated videos, ensuring better synchronization and control but also accurately transfers facial expressions and individualized details to new faces. Specifically, our  focuses on two main problems, i.e. extracting intrinsic style embeddings from style reference videos and controlling diffusion models with such embeddings. For intrinsic style extraction, a naive solution would be following StyleTalk <cit.>, which maps 3D Morphable Model (3DMM) <cit.> expression coefficients of the style reference video to style-related features. However, since attributes like visemes and expressions vary along the video frames, the deterministic embedding would suffer from insufficient capacity to model the latent manifold of intrinsic styles. Moreover, as one of the main parts of the video, the use of corresponding audio, which contains abundant information regarding the intrinsic styles, was not explored in StyleTalk, leading to unrepresentative style embeddings. To solve these problems, we propose the novel Probabilistic Style Prior Learning as an alternative based on the transformer backbone. Concretely, the audio and visual information of each video interacts with each other in the transformer style encoder, which models the intrinsic style of this video as a Gaussian distribution with predicted mean and standard deviation. Through contrastive learning, the extracted features exhibit significant clustering across different identities and emotions, not only helping the model better understand the video content but also providing an effective way to capture and express the intrinsic style of individuals. After achieving the intrinsic style, it is integrated into the denoising process of target videos via additional cross attention, along with other conditions including the simplified facial keypoints for head movements and audio for lip shapes and movements around the mouth. Thanks to the design of the probabilistic style prior, we can resample from the predicted distributions to provide enough variation for the style-related information, thus resulting in the strong generalization ability of the trained model. To validate the effectiveness of our proposed method, we conduct extensive experiments and comparisons on the MEAD and HDTF datasets. Our method significantly outperforms other competitors across multiple metrics, including FVD, FID, PSNR, and SSIM. 
In addition to quantitative evaluation, we also perform comprehensive qualitative assessments. The results indicate that our method can generate highly natural and expressive talking videos, and can produce different emotions or even multiple changes in expressions within the same video according to user needs, achieving satisfactory visual effects. Overall, our contributions are summarized as follows: * We are the first to propose an audio-driven talking head generation framework based on a diffusion model that considers intrinsic style.  can generate realistic talking head videos with different intrinsic styles from the style reference videos. * We propose an intrinsic style extractor that captures and expresses the intrinsic style of individuals through a self-supervised approach. We also incorporate audio information as an auxiliary into the intrinsic style extractor, leveraging subtle variations and emotional information to enhance the intrinsic style features. This allows the model to more accurately reflect the emotions and habits of speakers. * We design a resampling method suitable for diffusion models, enhancing the stability and generalization. § RELATED WORK GAN-Based Talking Head Generation. There has been significant research on GAN-based methods for person-generic audio-driven talking head generation. Early methods <cit.> achieved lip synchronization by establishing a discriminator that correlates audio with lip movements. Other approaches <cit.> generated portrait videos by mapping audio to key facial information, such as landmarks, key points, or 3D Morphable Model (3DMM) <cit.> coefficients, before rendering the final frame. However, due to the limitations of GANs in terms of generative capacity, the results produced by these methods often suffer from artifacts like pseudo-textures or restricted motion ranges. Diffusion Model-Based Talking Head Generation. Recently, there has been a surge of research  <cit.> utilizing diffusion models to achieve high-quality portrait videos. Among these, X-Portrait <cit.> and MegActor <cit.> rely on the pose and expression from the source video to generate the target video, which limits their ability to produce videos based solely on audio. DiffTalk <cit.> was the first to modify lip movements using audio in conjunction with diffusion models, but it does not extend to driving other head parts. EMO <cit.> was the first to leverage LDM <cit.> and audio features to achieve overall motion in portraits. V-Express <cit.> controls the overall motion amplitude by adjusting audio attention weights, while Hallo <cit.> designed a hierarchical module to regulate the motion amplitude of different regions. In summary, current audio-driven diffusion model approaches have not taken into account that each portrait should exhibit a corresponding style while speaking, which is essential for generating higher-quality portrait videos. Stylized Talking Head Generation. Previous research has explored several GAN-based methods  <cit.> for extracting style information to apply in talking head generation.  <cit.> and  <cit.> directly inject labels into the network to drive the corresponding emotions.  <cit.> and  <cit.> map the facial expressions of each frame in the source video to each frame in the target video.  <cit.> and  <cit.> employ 3D Morphable Models (3DMM) to extract facial information and construct style codes that drive the desired styles. 
Building on these approaches, our framework is the first to propose the extraction of intrinsic style priors by integrating audio and 3D facial information. Additionally, we design a probabilistic learning to enhance style control within diffusion models. § METHOD Problem Formulation. The goal of THG is to generate a talking head video under the control of a reference image, audio, Head-Kps image sequence, and intrinsic style prior. Among these conditions, the reference image controls the background and facial identity. The audio guides the lip movements. Each Head-Kps image controls the head pose for each generated frame. The Head-Kps images are synthesized by mapping 8 facial keypoints (i.e., left and right edges, the pupils of both eyes, and the bridge of the nose) onto a black background. In addition, we propose the probabilistic style prior learning to extract the style prior from a style reference video, which is used to determine facial emotion and speaking habits. Preliminaries. In , we employ a Latent Diffusion Model (LDM) <cit.> to generate video frames. The LDM uses a diffusion and denoising process in the latent space via a Variational Autoencoder (VAE). It maps the input image x to the latent space, encoding the image as z=E(x), which helps maintain visual quality while reducing computational cost. During the diffusion process, Gaussian noise ϵ∼𝒩(0,𝐈) is gradually introduced into the latent z, degrading it into complete noise z∼𝒩(0,𝐈) after T steps. In the reverse denoising process, the target latent z is iteratively denoised from the sampled Gaussian noise using the diffusion model and then decoded by the VAE decoder D into the output image x=D(z). During training, given the latent z_0 =E(x_0 ) and condition c, the denoising loss is: ℒ_denoising=𝔼_𝐳_t,ϵ,𝐜,tϵ_θ(𝐳_t,𝐜,t)-ϵ_t^2. Among them, z_t = √(α_t)z_0 + √(1-α_t)ϵ_t represents the noisy latent variables at timestep t∈[1,T], and ϵ_t is the added noise. ϵ_θ is the noise predicted by the UNet model, modified using an attention mechanism with parameters θ. This model employs a cross-attention mechanism to fuse the condition cc with the latent features z_t , thereby guiding the image generation.  uses Stable Diffusion v1.5 (SDv1.5), a text-to-image Latent Diffusion Model (LDM), as the backbone. SDv1.5 is implemented based on U-Net <cit.>, with each Transformer <cit.> block containing both self-attention and cross-attention layers. Overview. As depicted in Figure <ref>, Our  consists of two important designs, namely Probabilistic Style Prior Learning and Style-Driven Diffusion Process. In Probabilistic Style Prior Learning, we propose the style extractor takes the 3DMM expression parameters and audio features as inputs to generate the style prior, which is represented as a Gaussian distribution. In the Style-Driven Diffusion Process, the reference image, Head-Kps sequences, audio features, and the style prior are input as control conditions into the Denoising UNet through Head-Kps Guider and attention layers. Finally, the vivid portrait frames are generated by the VAE decoder after the iteration of the denoising process. §.§ Probabilistic Style Prior Learning In order to learn representative intrinsic style indicators from style reference videos, we propose the novel probabilistic style prior learning. Built upon the transformer-based style encoder as in StyleTalk <cit.>, we adopt a novel framework to make better usage of the style-related information contained in each video. 
Concretely, for a video clip, we first transform it into its corresponding frame-level audio parameters α∈ℝ^N×1920 via Whisper-Tiny <cit.> and sequential expression parameters β∈ℝ^N× 64 via the 3DMM encoder, where N denotes number of frames. These two modalities are then processed with a dual-branch transformer model as shown in Figure <ref>, which outputs their counterparts ŝ^α, ŝ^β∈ℝ^N× d_s, where d_s denotes feature channels. After achieving features for each modality, we interact with them with cross attention, leading to a style-related embedding ŝ aware of both audio and visual information. With ŝ, we can then model the intrinsic style prior for each video as a Gaussian distribution. Specifically, an attention-based aggregation strategy is employed on ŝ as follows: μ_s = softmax(W_sŝ)·ŝ^T, σ_s = softmax(W_s ŝ)·(ŝ^T-μ_s)^2, s = μ_s + σ_s ·ϵ, ϵ∼𝒩(0, 𝐈), where W_s∈ℝ^1× d_s is a trainable parameter, μ_s, σ_s denotes the mean and variance of the learned style prior s. Compared with the naive style encoder used in StyleTalk <cit.>, our proposed model mainly enjoys the following merits: (1) As mentioned in Sec. 1, the audio information is vital for extracting intrinsic style, while StyleTalk cannot handle such a modality. Moreover, audio typically contains primarily information about the spoken content, thus making it non-trivial to extract information that is complimentary to the visual information contained in video frames. In comparison with StyleTalk, we design a specific structure to handle these complex data, considering both visual and audio information, leading to stronger style embedding. (2) Since the emotion of speakers would change as the video frames go on, it is sufficient to represent the intrinsic style with a deterministic feature, i.e. the same way as in StyleTalk. Our method, on the other hand, learns a better sequential embedding, which helps us model the style prior as a Gaussian distribution that is more representative. §.§ Style-Driven Diffusion Process After learning the intrinsic style prior from style reference videos, we can then control the talking head generation process with such a condition. Specifically, we build a talking head generation model based on previous methods such as V-Express <cit.>. These methods apply various techniques to pretrained Stable Diffusion (SD) for better quality, including ReferenceNet for referencing an image, Audio Projection for embedding audio information, and temporal attention layer to enhance temporal coherence. In addition to these existing methods, we further propose two novel modules named HEAD-Kps Guider and Style Projection as follows that can better facilitate the input data. HEAD-Kps Guider. Each HEAD-Kps image spatially corresponds to the respective target frame. To fully utilize this, we use the HEAD-Kps Guider to encode the HEAD-Kps images. The Head-Kps images are constructed using landmarks from the upper half of the face, consisting of 8 keypoints. The HEAD-Kps Guider is a lightweight convolutional model that encodes each HEAD-Kps image into HEAD-Kps features, which match the shape of the latent. Subsequently, before being input into the denoising U-Net, the multi-frame latent features are directly added to the corresponding encoded HEAD-Kps features. Style Projection. To utilize the intrinsic style prior calculated in Sec. 3.1 to guide the denoising process, we first resample a style prior s from the learned Gaussian distribution. 
Then s is injected into the diffusion UNet through an additional style attention layer along with the specific cross attention layers for audio and temporal information. §.§ Training Strategies Training of Intrinsic Style Extractor. Essentially, codes with similar intrinsic styles should cluster together in the style space. Therefore we apply contrastive learning to the style priors by constructing positive pairs (s, s^p) with the same identity and emotion, and negative pairs (s, s^n) with different identities or emotions. Then, the InfoNCE loss <cit.> with similarity metric ζ is enhanced between positive and negative sample pairs: ω(s̃) = exp(ζ(s, s̃) / τ), ℒ_con = -log(ω(s^p)/ω(s^p)+∑_s^n∈𝒮^nω(s^n)), where τ denotes a temperature parameter, 𝒮^n denotes all negative samples for s, and the similarity ζ(s_i, s_j)=1/s_i,s_j_2+1∈ (0,1] is an improved version obtained as the inverse of the ℒ_2 distance between sample pairs. We additionally add a fixed constant to stabilize the numerical range of the similarity and make the training process more stable. In the training of the intrinsic style extractor, we directly train all parameters of this lightweight model. Meanwhile, we use a random dropout trick when inputting the 3DMM expression coefficients β and audio features α by setting some of the input 3DMM expression coefficients β or audio features α to zero. This allows the model to obtain the style prior through a single modality. Finetuning Diffusion Model. The training of the diffusion model adopts a three-stage progressive training method to gradually improve the model's generative capability and stability, with the noise prediction loss as in Eq. <ref> employed in each stage. (1) First, we train the model for single-frame image generation, where the diffusion UNet, ReferenceNet, and Head-Kps guider are involved in the training. In this stage, the diffusion UNet takes a single frame as input, the ReferenceNet processes different frames randomly selected from the same video clip, and the Head-Kps guider incorporates the encoded Head-Kps features into the latent space. Both the diffusion UNet and ReferenceNet initialize their weights from the original SD. (2) Second, we train the model for continuous multi-frame image generation, which includes the temporal module and the audio layer. In this stage, f consecutive frames are sampled from a video clip and the parameters of ReferenceNet and Head-Kps guider are frozen. The temporal module initializes its weights from AnimateDiff <cit.>. (3) After that the final stage is for transferring intrinsic style. In this stage, all other modules of the model are frozen, and only the style attention module is trained. This allows the model to generate corresponding facial expressions and details based on the intrinsic style input during the portrait image generation process. § EXPERIMENTS §.§ Experiments Setting  is implemented using PyTorch <cit.> and optimized with Adam <cit.>. The intrinsic style encoder is trained on the MEAD <cit.> and HDTF <cit.> datasets. During training, we consider samples with the same identity and emotion in MEAD as positive samples, and segments from the same video in HDTF as positive samples. Additionally, we will randomly dropout expression coefficients or audios, but they will not be zeroed out simultaneously. The denoising UNet is trained on the MEAD, HDTF, and other videos from Internet. The facial regions in these videos are cropped and resized to 512×512. 
The total training dataset comprises approximately 300 hours of video. In the multi-frame training stage, the number of consecutive frames f is set to 8. In the training of style projection, to enhance the generalization ability, we adopt different emotions for the same identity on the MEAD dataset (e.g. generating a sad video clip from a happy reference image). For training and testing set splitting, we select 10 identities out of 46 for testing on MEAD. As for HDTF, we randomly select 25 videos for testing. Precautions are taken to ensure that there is no overlap of character identities between the training and testing sets. During inference, to ensure fairness, we utilize EulerDiscreteScheduler as diffusion sampler with the denoising steps set as 25 for all diffusion-based methods. §.§ Quantitative Comparison We compare our method with several previous works, including SadTalker <cit.>, AniPortrait <cit.>, V-Express <cit.>, and Hallo <cit.>. To demonstrate the superiority of the proposed method, we evaluate the model using several quantitative metrics. We utilize the Fréchet Inception Distance (FID) <cit.> to assess the quality of the generated frames, and further employe the Fréchet Video Distance (FVD) <cit.> for video-level evaluation. To evaluate the quality of the generated talking head videos, Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) are adopted. To evaluate lip-sync accuracy, we use the Mouth Landmark Distance (M-LMD) <cit.>. For assessing the accuracy of the generated facial expressions, we use the Full-Face Landmark Distance (F-LMD). As shown in Table <ref>, in the video reconstruction experiments, our method achieves the best performance across all the metrics on both MEAD and HDTF datasets. We have a significant advantage in evaluating the quality of video and single-frame images, as evidenced by the lower FVD and FID scores. The SSIM and PSNR scores indicate that the quality of the videos reconstructed by our method is significantly better than that of other methods. The M-LMD scores demonstrate that our method achieves more accurate lip-sync, while the F-LMD scores reflect that our method better restores facial expressions through intrinsic style. These results indicate that using intrinsic style can significantly enhance the quality of video generation. To demonstrate the intrinsic style transfer capability of our method, we conducted expression transfer experiments in addition to video reconstruction experiments. This experiment can only be performed on the MEAD dataset, which contains multiple emotions for a single identity. Specifically, we used the neutral expression faces from the dataset as references and used videos with distinct expressions (such as happy, sad, and angry) to drive the generation. In such an experimental setup, methods that rely solely on audio for driving expressions struggle to effectively convey the corresponding emotions. In this scenario, our method still maintains its superiority. As shown in Table <ref>, although the results of all methods deteriorate compared to the reconstruction experiments,  still leads in all metrics. §.§ Qualitative Comparison In Figure <ref>, we present a comparison of the visual effects between ours and other methods. The first two rows show the comparison of reconstruction results, while the last two rows show the comparison of intrinsic style transfer results. 
In the reconstruction experiments, our method achieves accurate synchronization of head movements, lip shapes, and even eye blinking, while effectively preserving the identity of the speaker. In the intrinsic style transfer experiments, when there are significant differences in expressions between the reference face and the real video, our method effectively transfers the expressions and details of the face in the style reference video, while maintaining consistency in other conditions. §.§ Ablation Study Probabilistic Style Prior Learning. In diffusion models, if we do not employ a probabilistic learning and instead use a deterministic style prior for training, it may lead to overfitting of the training results. Figure <ref> shows the results of different intrinsic style acquisition methods. Using a deterministic intrinsic style causes the model to transfer (eye reflections/facial contours) from the style reference video to the new face, leading to issues with identity deviation. By employing the probabilistic learning and resampling method, the intrinsic style prior obtained by the model from the same training video varies each time, thereby preventing the transfer of incorrect content to the generated video. Intrinsic Style Extractor with Audio Information. Table <ref> provides a quantitative evaluation of the clustering strength of intrinsic style after incorporating audio features. We define clustering strength d_cls as the ratio between inter-cluster distance d_inter and intra-cluster distance d_intra: d_cls = d_inter/d_intra. A larger value indicates a better clustering performance. We used different emotions of a single identity as categories to calculate the clustering strength of the intrinsic style obtained under three conditions: using only expression coefficients, using only audio features, and using both features together. The results indicate that expression coefficients play a crucial role in the extraction of intrinsic style and audio features can indeed serve as an auxiliary to enhance the clustering strength of intrinsic style, while using audio features alone is insufficient to obtain effective intrinsic style. What Can We Learn from Style Prior? We use t-distributed Stochastic Neighbor Embedding (t-SNE) <cit.> to project the intrinsic style priors into a two-dimensional space. Figure <ref> shows the intrinsic style priors of a speaker from the MEAD dataset. Each code is color-coded according to its corresponding emotion and intensity. The style priors with the same emotion first cluster together. Within each cluster, the style priors with the same intensity are closer to each other, and there are noticeable transitions between intrinsic style priors of different intensities. These observations indicate that our model can learn a continuous distribution of intrinsic styles. As shown in Figure  <ref>, when performing linear interpolation between two intrinsic style priors extracted from the test set, the facial expressions and details in the generated video transition smoothly. § CONCLUSION We propose  as the first talking head video generation method capable of achieving intrinsic style transfer. Through the design and training of the intrinsic style extractor, we obtain intrinsic style priors which can sufficiently represent the emotions and habits of the style reference videos. By sampling from the style prior and progressive training, we successfully transfer intrinsic styles to unseen faces. 
Experimental results show that SVP not only transfers intrinsic styles but also improves the overall quality of the generated videos, providing new insights for more advanced and comprehensive talking head video generation.
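To make the probabilistic style prior of Sec. 3.1 concrete, the following is a minimal PyTorch-style sketch of the attention-based aggregation and resampling. The tensor shapes, the scoring vector w_s (W_s in the text), the treatment of the aggregated second moment as a variance whose square root scales the Gaussian sample, and the function name are assumptions for illustration, not the exact implementation.

```python
import torch

def sample_style_prior(s_hat, w_s):
    """Attention-pooled Gaussian style prior (sketch).

    s_hat : (N, d_s) fused audio-visual features of N frames
    w_s   : (d_s,)   learnable scoring vector
    Returns mu, var, and one resampled style code s of shape (d_s,).
    """
    attn = torch.softmax(s_hat @ w_s, dim=0)        # (N,) frame weights
    mu = attn @ s_hat                               # weighted mean, (d_s,)
    var = attn @ (s_hat - mu).pow(2)                # weighted variance, (d_s,)
    eps = torch.randn_like(mu)                      # eps ~ N(0, I)
    s = mu + var.sqrt() * eps                       # reparameterised sample
    return mu, var, s
```

During training, drawing a fresh s from (mu, var) at every step supplies the variation that the resampling design is credited with; at inference time one may either sample again or use mu directly for a more deterministic style.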
http://arxiv.org/abs/2409.03344v1
20240905084054
Rethinking Improved Privacy-Utility Trade-off with Pre-existing Knowledge for DP Training
[ "Yu Zheng", "Wenchao Zhang", "Yonggang Zhang", "Wei Song", "Kai Zhou", "Bo Han" ]
cs.CR
[ "cs.CR" ]
Rethinking Improved Privacy-Utility Trade-off with Pre-existing Knowledge for DP Training Yu Zheng,  Wenchao Zhang,  Yonggang Zhang^∗,  Wei Song,  Kai Zhou,  Bo Han  ∗ Yonggang Zhang is the corresponding author. Yonggang Zhang is with the Department of Computer Science, University of Technology Sydney, Australia. E-mail: [email protected]. Yu Zheng is with the Department of Information Engineering, Chinese University of Hong Kong, Shatin, Hong Kong SAR. E-mail: [email protected]. Wenchao Zhang is with the College of Computer Science and Technology, Xi’an University of Science and Technology, China. E-mail: [email protected]; Wei Song is with the School of Computer Science and Engineering, Northeastern University, China. E-mail: [email protected]; Kai Zhou is with the Department of Computing, Hong Kong Polytechnic University, Kowloon, China. E-mail: [email protected]; Bo Han is with the Department of Computer Science, Hong Kong Baptist University, Hong Kong SAR. E-mail: [email protected]. § ABSTRACT Differential privacy (DP) provides a provable framework for protecting individuals by customizing a random mechanism over a privacy-sensitive dataset. Deep learning models have demonstrated privacy risks upon model exposure, as an established learning model unintentionally memorizes membership-level information about its training data. Differentially private stochastic gradient descent (DP-SGD) has been proposed to safeguard the individuals in the training data by adding random Gaussian noise to gradient updates during backpropagation. Researchers have identified that DP-SGD typically causes utility loss, since the injected homogeneous noise alters the gradient updates calculated at each iteration. Namely, all elements in the gradient are contaminated regardless of their importance in updating model parameters. In this work, we argue that the utility loss mainly results from the homogeneity of the injected noise. Consequently, we propose a generic differential privacy framework with heterogeneous noise () by defining a heterogeneous random mechanism to abstract its property. The insight of the framework is to leverage the knowledge encoded in the previously trained model to guide the subsequent allocation of noise heterogeneity, thereby shaping the statistical perturbation and achieving enhanced utility.
Atop , we instantiate a heterogeneous version of DP-SGD, where the noise injected into gradients is heterogeneous and guided by prior-established model parameters. We conduct comprehensive experiments to verify and explain the effectiveness of the proposed , showing improved training accuracy compared with state-of-the-art works. Broadly, we shed light on improving the privacy-utility space by learning the noise guidance from the pre-existing leaked knowledge encoded in the previously trained model, showing a different perspective of understanding the utility-improved DP training. Differential Privacy, Heterogeneous Noise, DP-SGD, Privacy-Utility Trade-off. § INTRODUCTION Deep learning has achieved remarkable success across a wide spectrum of domains <cit.>, primarily due to the massive data used for model training. As training data has been thoroughly analyzed to optimize model performance, a significant privacy concern arises regarding the model's potential to memorize individual data points <cit.>. Indeed, numerous existing studies <cit.> have demonstrated the feasibility of identifying the presence of particular records or verbatim texts in the training dataset, thereby raising severe privacy concerns. Differential privacy (DP) <cit.>, emerged as de facto protection, can provide provable security for individuals’ privacy by adding the i.i.d noise to the sensitive data or computations. In detail, DP guarantees statistical indistinguishability between the outputs of two random mechanisms, which originate from the private inputs with or without a substituted individual data point. To protect sensitive data used in the training process, differentially private stochastic gradient descent (DP-SGD) <cit.> has been proposed and regarded as a main-steam method. The idea of DP-SGD is to add the homogeneous noise sampled from a Gaussian distribution to the aggregated gradient derived from a batch of examples in every training iteration. Accordingly, DP-SGD can thwart an adversary from attaining membership of a particular data memorized by model parameters when the adversary dissects an established model. One could adopt DP-SGD as a baseline <cit.> for supporting secure non-convex training for neural networks. Subsequently, researchers identify the inherent trade-off between privacy and utility introduced by DP-SGD. It is a well-known challenge to achieve high model utility/performance given meaningful DP guarantees <cit.> since acquiring strong protection realized by a large noise scale generally leads to unavoidable utility loss and performance degrading. For example, the number of DP-SGD training iterations may increase by 10× towards a similar utility metric compared with the pure SGD. Accordingly, a research line of works <cit.> explored to acquire a better utility by flexibly and empirically calibrate privacy budget allocation. Regarding composition theorem, they try to either reallocate/optimize the privacy budget <cit.> or modify the clip-norms <cit.> of a (set of) fixed noise distribution(s) in each iteration. These dynamic noise allocation solutions optimize the noise allocation in the range of the whole training process with a constant budget. Sharing the same spirit as DP-SGD, these methods employ homogeneous noise to perturb gradient updates. Upon studying the iteration-wise utility with/without DP noise in the process of model convergence, we observe that utility loss can be ascribed to the homogeneity of noise applied to gradients. 
Regardless of the diverse features learned from the training data, homogeneous noise negatively contributes to training performance (, convergence ability and accuracy) due to perturbing the original gradients derived in the backpropagation. Drawing inspiration for dynamic noise allocation approaches, we believe introducing a noise heterogeneity view to the dynamic noise allocation approach will shed light on improving the privacy-utility space. Thus, we raise a fundamental question, How do we improve the privacy-utility trade-off of DP-SGD by introducing the heterogeneous noise? §.§ Technical Overview We consider a novel route of crafting iteration-wise noise heterogeneity by making use of pre-existing knowledge contained in the neural network, which captures the feature attributes prior-learned from the training data, thus improving the utility of the established model at every iteration. The intuition is to dynamically allocate noise heterogeneity to diverse features in the back-propagation of SGD, in which the noise heterogeneity is guided by the prior learned knowledge contained in the existing model. To this end, we propose a new framework – differential privacy with heterogeneous noise (), guided by an iteration-wise guidance matrix derived from prior learned model parameters, to perturb the gradients derived in the backpropagation. Specifically, we raise the following contributions progressively. 1) Allocating noise heterogeneity via pre-existing knowledge. To generate the model-guided heterogeneity, we propose a novel dynamic noise allocation scheme, where an iteration-wise (for short, stateful) matrix ^(t-1) is computed using the pre-existing model at (t-1)-th iteration. With the notion of stateful ^(t-1), we can guide the noise heterogeneity at the t-th training iteration. Namely, the stateful adjusts the noise n used to perturb gradient updates at every iteration according to the heterogeneity derived by the ^(t-1). Consequently, the posterior random mechanism is guided by pre-existing knowledge encoded in prior model parameters at every training iteration. Specifically, we formally define our novel scheme as a random mechanism that ^(t)=𝐠^(t) +^(t-1)·𝐧, where the abstraction of ^(t-1) is independent to knowledge extraction function ℱ of learned model and indexed by states t-1,t. For theoretical analysis, we abstract the notion of heterogeneous DP learning with stateful guidance for allocating noise heterogeneity. By adopting composition <cit.> and Rényi Divergence, we provide theoretical analysis on SGD training following conventional proof style. Accordingly, the instantiation of SGD, regarded as a modified variant of standard DP-SGD, attains the standard DP guarantee. 2) Constructing heterogeneous DP-SGD. We instantiate as a heterogeneous version of DP-SGD, where the noise injected into gradient updates is heterogeneous. Specifically, the stateful ^(t-1) at the (t-1)-th training iteration is derived from decomposition on model parameters ^(t-1) at the prior training iteration, capturing the pre-existing knowledge. Knowledge involved in ^(t-1), serving as allocation guidance, propagates to the DP noise injected to gradients at the t-th training iteration, following the style of DP-SGD. Accordingly, it captures the pre-existing statistical knowledge of private training data, extracting heterogeneity applied to features. 
Later, the stateful guidance matrix ^(t-1) adjusts the parameters of Gaussian distribution, equivalently affecting the heterogeneity of noise added to diverse features in the back-propagation of SGD. Prior knowledge from extracted features has been reasonably DP-protected, thus not incurring extra risks in exposing private data. The plug-in SGD is generic and independent of learning models, best for both worlds in performance and privacy. For test accuracy, improves a series of state-of-the-arts, notably, from 95% to 98% over standard DP-SGD. For training over the CIFAR-10 dataset, improves by 18%-47%. We tested the convergence stability when adding small and large, showing that could mitigate model collapse. At last, we visualize the DP-protected features during the training to explain 's superior performance. §.§ Contribution Summary Overall, our contributions are summarized as follows. * To form a step forward, we explore the relationship between DP training performance and heterogeneity at an iteration. Accordingly, we shed new light on bridging model utility and DP heterogeneity allocation to enhance the performance-privacy space. * We propose a framework – , supporting utility-improved training at every iteration by applying heterogeneous noise to model updates in back-propagation. We abstract a guidance vector derived from pre-existing knowledge learned by models to guide noise heterogeneity applied to model back-propagation. Then, we formalize and then provide theoretical analyses and proofs. * Our SGD is general and efficient, which could be adopted as a plug-in module. SGD could converge in fewer training iterations and mitigate the utility loss of the established model without relying on extra manual efforts. Experiments and explanations confirm the superior improved privacy-utility trade-off. § RELATED WORKS §.§ Differential Privacy for Deep Learning Differential privacy has emerged as a solid solution to safeguard privacy in the field of deep learning. Differential privacy (DP) for deep learning can be classified into four directions: input perturbation <cit.>, output perturbation <cit.>, objective perturbation <cit.>, and utility optimization <cit.>, showing valuable insights in the aspects of theory and practice. DP could quantify to what extent individual privacy (, whether a data point contributes to the training model) in a statistical dataset is preserved while releasing the established model trained over the specific dataset. Typically, DP learning has been accomplished by applying the unbiased Gaussian noise to the gradient descent updates, a notable example of DP-SGD <cit.>. To be specific, DP-SGD adds the i.i.d. noise sampled from Gaussian distribution to model gradients to protect example-level training data involved in the training process in every iteration. The noise-adding mechanism has been widely adopted in various learning algorithms, e.g., image classification and natural language processing. PATE <cit.> is an approach to providing differentially private aggregation of a teacher-student model. Due to the property of post-processing <cit.>, the student's model is differentially private since it trains over the noisy inputs. Bayesian differential privacy <cit.> takes into account the data distribution for practicality <cit.>. By instantiating hypothetical adversaries <cit.>, various threat models are employed to show corresponding levels of privacy leakage from both the views of practitioners and theoreticians. 
Privacy auditions and robustness, or cryptographic protection <cit.> belong to orthogonal research directions, focusing on the evaluative estimation of the privacy guarantee or cipher-text transmission. Membership inference attack <cit.> enables detecting the presence or absence of an individual example, implying a lower bound on the privacy parameter ϵ via statistics <cit.>. Notably, Steinke  <cit.> theoretically proves the feasibility of auditing privacy through membership inference on multiple examples simultaneously, elaborating an efficient one-round solution. Combining different techniques with this work can be promising, while it is out of scope for this work. §.§ Privacy-Utility Trade-off For acquiring higher utility <cit.>, recent works explore the adaptive mechanism of DP training from different perspectives. They try to either reallocate/optimize the privacy budget <cit.> or modify the clip-norms <cit.> of a (set of) fixed noise distribution(s) in each iteration. Such a branch of work points out a promising direction of adaptivity via redesigning the randomness. Privacy budget scheduling <cit.> improves the utility of differentially private algorithms in various scenarios. Unlike the aforementioned advances of dynamic noise allocation, our exploration of adjusting noise heterogeneity by model parameters aims to improve the utility of the established model at every iteration rather than optimizing the noise allocation in the range of the whole training process with a constant budget. Handcrafted features, learned from public data, can improve model utility given a fixed privacy budget <cit.>. Rather than introducing auxiliary data, we aim to extract knowledge from protected model parameters without extra data assistance. Previous analyses have enabled an understanding of utility bounds for DP-SGD mainly in an empirical manner. Altschuler and Talwar <cit.> explored the theory foundation of privacy loss – how sensitive the output of DP-SGD is. They solve a tighter utility bound given the privacy loss as a function of the number of iterations, concluding that after a small burn-in period, running DP-SGD longer leaks no further privacy. In this work, we exploit visual explanation <cit.> and theoretical understanding to explore the essence of privacy-utility space. § PRELIMINARY §.§ General Notion of Differential Privacy Differential privacy (DP) <cit.> theoretically guarantees individual privacy that the algorithm’s output changes insignificantly (see Definition <ref>) if the inputting data has small perturbations. Pure ϵ-differential privacy is difficult to achieve in realistic learning settings, whereas the seminal work <cit.> training with SGD adopts approximate (ϵ, δ)-differential privacy, formally defined below. A randomized mechanism provides (ϵ,δ)-differential privacy if for any two neighboring datasets D and D' that differ in a single entry, ∀ S ⊆ Range(), Pr((D)∈ S)≤ e^ϵ·Pr((D')∈ S)+δ where ϵ is the privacy budget and δ is the failure probability. The sensitivity of a query function ℱ:𝔻→ℝ for any two neighboring datasets D,D' is, Δ = max_D,D'ℱ(D)-ℱ(D'), where · denotes L_1 or L_2 norm. Next, we introduce the definition of privacy loss <cit.> on an outcome o as a random variable when DP operates on two adjacent databases D and D'. Privacy loss is a random variable that accumulates the random noise added to the algorithm/model. Let : 𝔻→ℝ be a randomized mechanism with input domain D and range R. Let D,D' be a pair of adjacent datasets and be an auxiliary input. 
For an outcome o∈ℝ, the privacy loss at o is defined by ℒ_𝖯𝗋𝗂(o) ≜ log( Pr[(,D)=o] / Pr[(,D')=o] ), where ℒ_𝖯𝗋𝗂 is the random variable r(o; , D, D'), i.e., the random variable defined by evaluating the privacy loss at an outcome sampled from (D). Here, the output of the previous mechanisms is the auxiliary input of the mechanism ^(t) at step t. §.§ DP Stochastic Gradient Descent DP-SGD <cit.>, regarded as a landmark work, has been proposed to safeguard example-level knowledge encoded from the training data, constrained by the privacy budget allocated at each training iteration. As reported by DP-SGD, adding i.i.d. noise inevitably perturbs the parameters of the learned model in practice. Research efforts such as <cit.> focus on developing techniques that provide stronger privacy guarantees while minimizing the loss of utility from various perspectives, for example, clipping-value optimization and privacy budget crafting. In DP learning, neighboring datasets D,D' are two datasets that differ by only one training data point, while the mechanism is the DP training algorithm. Following the definition, ϵ is an upper bound on the privacy loss, and δ is the probability of breaking the privacy guarantee. DP-SGD is a differentially private version of stochastic gradient descent (SGD). This approach adds noise to the SGD computation during training to protect private training data. The first step is to minimize the empirical loss function ℒ(θ) parameterized by θ. Secondly, the gradient ∇_θℒ(θ,x_i) is computed at each step of SGD, given a random subset of data x_i. Noise is then added to the average of the gradients ∇_θℒ(θ,x_i), ∀ x_i. After training, the resulting model has accumulated the differentially private noise of every iteration, protecting private individual data. By revisiting DP-SGD, we explore the root of its utility loss and bridge the concepts of model-knowledge guidance and DP, making the DP training process better suited to enhancing the privacy-utility trade-off. We showcase a new line of thinking: instead of employing auxiliary assistance (e.g., public data) for higher model utility, we rethink the tolerable leakage (statistical knowledge, not membership, in line with the standard DP definition) encoded in the prior DP-trained model. §.§ Rényi Differential Privacy Rényi differential privacy <cit.> has been proposed as a natural relaxation of differential privacy, particularly suitable for composing the privacy guarantees of heterogeneous random mechanisms derived from algorithms. zCDP <cit.> and Rényi DP <cit.> (RDP) are defined through the Rényi divergence by Bun  <cit.> for a tight analysis, thereby accumulating the cumulative privacy loss accurately and providing strong privacy guarantees. Definition <ref> presents the Rényi divergence <cit.> used to define Rényi differential privacy in Definition <ref>. For two probability distributions P and Q over ℝ, the Rényi divergence of order α is 𝒟_α(P‖Q) ≜ (1/(α-1)) log 𝔼_x∼ Q[(P(x)/Q(x))^α]. Compared to standard differential privacy, Rényi differential privacy is more robust in offering an operationally convenient and quantitatively accurate way of tracking cumulative privacy loss throughout the execution of a standalone differentially private mechanism, such as iterative DP-SGD. It retains the intuitive and appealing concept of a privacy budget while applying advanced composition theorems for a tighter analysis.
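To ground the recap of DP-SGD in the preliminaries above, here is a minimal NumPy sketch of a single noisy update (per-example clipping, aggregation, Gaussian perturbation, and a gradient step). The helper name and the flattened-gradient representation are illustrative assumptions rather than the reference implementation of <cit.>.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm, noise_multiplier, lr, params):
    """One DP-SGD update on a batch of per-example gradients.

    per_example_grads : (B, d) array of flattened example gradients
    clip_norm         : L2 clipping bound C
    noise_multiplier  : sigma, so the noise std is sigma * C
    """
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / (norms + 1e-12))
    clipped = per_example_grads * scale                  # per-example clipping
    noise = np.random.normal(0.0, noise_multiplier * clip_norm,
                             size=params.shape)          # homogeneous Gaussian noise
    noisy_mean = (clipped.sum(axis=0) + noise) / per_example_grads.shape[0]
    return params - lr * noisy_mean                      # gradient descent step
```

The homogeneity discussed in this paper refers to the single scalar std noise_multiplier * clip_norm applied to every coordinate of the aggregated gradient in this step.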
We adopt the aforementioned DP advances to formalize DP with heterogeneous noise, devise the heterogeneous noise version of DP-SGD, and develop corresponding theoretical analyses. A randomized mechanism 𝒟:↦ℛ is said to have ϵ-Rényi differential privacy (RDP) of order α or (α,ϵ)-RDP for short, if for any adjcent D,D', Rényi divergence of random mechanisms satisfies that, 𝒟_α((D)|(D')≤ϵ If is an (α,ϵ)-RDP mechanism, it also satisfies (ϵ+log 1/δ/α-1,δ)-DP for any 0<δ<1. §.§ Security Model for Centralized DP Learning As for the security model, we consider a typical client-server paradigm of DP training. The client, owning a set of private training data, trains a model conditioned on her private data, while the server receives the established model that is well-trained by the client, , in a black-box manner. The client trains a model conditioned on her data and sends the resulting model only to a remote server. Assume a server is a malicious adversary, observes the final model, and tries to learn the existence of individual data. Regarding Definition <ref>, privacy guarantee means resisting the server's inference on a particular record by viewing a differentially private model. Our security model follows the privacy goal and adversary abilities that are the same as existing works since knowledge extraction is from the protected features on the client side. does not break existing settings or use any auxiliary data, thus incurring no extra privacy leakages to the server. § NOISE HETEROGENEITY IN DP To explore the noise heterogeneity, we start by adjusting the noise scale added to different elements, followed by witnessing the training process. Through repeated attempts, we observe that noise heterogeneity, , the diverse noise scales added to the elements, can affect the training performance. Accordingly, our idea is that prior model parameters (involving extracted elements with traditional DP protection) can guide the posterior random mechanism to improve training performance. In the meantime, no privacy-sensitive element beyond DP protection is involved in yielding guidance. Unlike dynamic allocation, we offer distinctive element-wise noise at each training step rather than scaling noise in a whole training process. §.§ Define Heterogeneous DP Learning We rethink reasonable leakages in DP models and make use of the pre-existing knowledge learned in the current model parameters to improve subsequent DP training performance. Model training starts with a random ^(0) towards a convergent model ^(T), which captures knowledge learned from data iteration by iteration. Naturally, our idea is to introduce a scalar vector that is derived from the learned knowledge in in the prior training process to serve as the guidance for subsequent DP training. Consider a function ℱ to denote functionality achieved by neural networks . The ^(t), trained with the DP mechanism, denotes the deep learning model at iteration t. We formulate DP trained model at t-th iteration to be ℱ(^(t), 𝖣𝖺𝗍𝖺) given private 𝖣𝖺𝗍𝖺. We utilize the ^(t-1) at t-1-th iteration to adjust the next-step noise allocation at t-th iteration, where ^(t-1) is computed by the prior ^(t-1) at (t-1)-th iteration involving features learned in the first t-1 iterations. Concretely, Definition <ref> introduces a general notion of heterogeneous DP learning () that varies with the time t, essentially adjusting the noise vector (sampled from Gaussian distribution) operated over the learning model. 
Let any two neighboring datasets 𝖣𝖺𝗍𝖺 and 𝖣𝖺𝗍𝖺' differ in a single entry, ϵ be privacy budget, and δ be failure probability. Let be Gaussian noise distribution and 𝖣𝖺𝗍𝖺 be inputting private data. A time-dependent random mechanism of learning algorithm(s) ℱ at the time t is, ^(t)=ℱ(^(t), 𝖣𝖺𝗍𝖺) +^(t-1)·(μ,σ^2) (μ,σ^2) represents noise distribution with parameters μ, σ. To generate pre-existing knowledge stored in the model parameters, we can employ a knowledge-extraction method (, principal component analysis <cit.>) to extract pre-existing knowledge stored in the learned model, saying ^(t-1)∝ℱ_𝗄𝗇𝗈𝗐(^(t-1)), t∈[0,T]. Accordingly, the noise sampled from the Gaussian distribution is scaled by (, values and noise direction). The keeps varied for tracking DP model training, calibrating noise vector via pre-existing knowledge stored in the model. In summary, the expects to: 1) be tailored with heterogeneous DP noise that is added to the learning process; 2) be generic and irrelevant to the convergence route for distinctive models for iteratively reaching a model optimum; 3) have good model accuracy and convergence performance given a preset privacy budget. Intuitively, iteration-wise guidance enables utility-optimized training in every backpropagation. Dynamic privacy-budget allocation assumes a constant budget in the whole training process, whereas assumes a pre-allocated budget in each iteration for acquiring relatively fine-wise optimization. We consider (t,ϵ_t)-utility-optimized DP in Definition <ref> to capture the desirable property in DP learning. Let any two neighboring datasets D and D' differ in a single entry, ϵ be privacy budget, and δ be failure probability. A mechanism satisfies the following conditions at any training-iteration t: i (Privacy). If for any two neighboring datasets D and D', Pr(^(t)( D)∈ S)≤ e^ϵ_t·Pr(^(t)( D')∈ S)+δ_t for any iteration t∈ [0,T]. ii (Utility). Supposing an optimal Z^∗, the objective function satisfies min_ϵ_t= ϵ/Tℱ_𝖣𝗂𝖿𝖿[^(t)Z^∗ ]. iii (t-Sequential Composition). If =(^(0), …,^(t),…,^(T)), satisfies (ϵ̃, δ)-DP such that ϵ̃≤ϵ. Property (i) essentially guarantees differential privacy <cit.> at each training iteration. Property (ii) extracts the iteration-wise optimization, which expects that the difference measurement ℱ_𝖣𝗂𝖿𝖿 between the noisy model and pure model are small as possible. Given a fixed privacy budget ϵ/T, improving utility expects to reduce the difference between ^(t) and non-noisy Z^∗. Property (ii) asks for no extra privacy leakages in the under privacy composition, which is the same as the standard DP guarantee. §.§ Overview of DP Heterogeneous SGD Before constructing DP heterogeneous SGD ( SGD), we adopt the notations of DP-SGD by revisiting standard DP-SGD <cit.>. DP-SGD trains a model with parameters by minimizing the empirical loss function ℒ(). For a random example x_i, DP-SGD computes gradients 𝐠(x_i)←∇(, x_i) with clipping value C, and adds noise to ∑_i𝐠(x_i) sampled from Gaussian distribution 𝒩(0, σ^2C^2𝐈). An adversary cannot view the training process except for the DP-trained model. Motivated by DP-SGD, we explore an instantiation of to generate heterogeneous noise and then add a “wisdom” (guided by prior learned knowledge) heterogeneous noise. Accordingly, we instantiate DP-SGD <cit.> as the basis and replace its i.i.d. noise with heterogeneous noise. In DP-SGD, the standard deviation σ of (0,σ^2) is constant for each layer; however, our mechanism guided by adds different noise vectors for model updates at each iteration. 
With ^(t-1), the added noise to each layer is guided by the learned model in the aspects of scales and noise space at every iteration. Using SGD, we implement an instantiated scheme of training a model starting from random initialization. The first step is generating heterogeneous noise building on the covariance matrix of the model. By principal component analysis (PCA) <cit.>, the noise matrix is tuned via the covariance matrix, which aligns with the subspace in which features exist. When training with SGD, updatable gradients computed in the backpropagation are added by noise, whose scales are guided by the generated subspace. We consider extracting pre-existing knowledge from whole model parameters rather than a layer to capture the whole statistical space. In this way, the noise space is more comprehensive, and the noise scale is more adaptive to the feature space. §.§ Detailed Construction §.§.§ Construction of SGD For clarification, we explain the step-by-step construction of SGD. Step-1. Assume that the model ^(0) is initialized to be random during training. The model parameters at each iteration t represent the learning process of features in the dataset; , the training is to optimize the model parameters by capturing data attributes. The ^(t-1) takes a set of inputting data x in size S_ x (, batch size) and compute the gradient 𝐠_t←∇_^(t-1)ℒ(^(t-1),x_i), x_i∈ x The 𝐠_t is clipped with the clip value 𝖢𝗉, thus ensuring that the gradients are scaled to be of norm 𝖢𝗉. The clipped gradients are 𝐠̅_t handled with clip value 𝖢𝗉. Step-2. In our implementation, ^(t-1) can be realized by following Algorithm <ref> using ^(t-1). Since ^(t-1) is varied at each training iteration, ^(t-1)-guided noise distribution operating on gradients is varied during the whole training process. ^(t-1) contains the computed sub-space ^(t-1) and eigenvalues matrix ^(t-1) extracted from prior-learned model. From a practical view, ^(t-1) configures the direction of the noise to be added. ^(t-1) generated from singular value decomposition is utilized to scale the noise distribution. Here, independent and identically distributed noise can be sampled from a standard noise distribution , such as Gaussian and Laplace distributions. The generation of ^(t-1) does not introduce extra leakage since ^(t-1) learned in the prior t-1 iterations has been well-protected through SGD. Step-3. Following the logic of DP-SGD, ^(t-1)-guided noise is added to a batch of gradients, 𝐠̃_̃t̃←(∑𝐠̅_t +^(t-1)·(μ, σ^2·I))/S_ x ^(t-1) here is different at every backpropagation of different layers, achieving different noise levels on each layer. This layer-wise noise tuning speeds up the convergence and mitigates model collapse. It derives from the corresponding model parameters of a unique layer that is relevant to an iteration t at the current backpropagation. SGD is independent of the choices of optimizer and optimizers, which could be potentially generalized to different learning models without much effort of manual tuning. Step-4. The last step is to perform gradient decent ^(t)←^(t-1)-η𝐠̃_̃t̃ using the new noisy gradients 𝐠̃_̃t̃, where η_t is a preset scalar. For attaining higher utility, adding noise should avoid hurting important features (extracted by the model for later prediction. Finally, the model converges better since the space of model parameters (regarded as a matrix) is relatively less destroyed by using the noise sampled from the identical space. 
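As a rough, self-contained illustration of Steps 1-4, the sketch below performs one update on a toy least-squares objective; `grad_fn`, the uniform guidance array, and all hyper-parameter values are assumptions made for demonstration and are not taken from the paper.

```python
import numpy as np

def hero_like_step(W, X, y, grad_fn, guidance, lr=0.1, clip=1.0, sigma=1.0, rng=None):
    """One Step-1..Step-4 style update: per-example gradients, clipping,
    guidance-scaled Gaussian noise on the summed gradient, then descent."""
    rng = rng or np.random.default_rng()
    clipped = []
    for xi, yi in zip(X, y):                          # Step 1
        g = grad_fn(W, xi, yi)
        clipped.append(g / max(1.0, np.linalg.norm(g) / clip))
    g_sum = np.sum(clipped, axis=0)
    noise = guidance * rng.normal(0.0, sigma * clip, size=W.shape)   # Steps 2-3
    g_tilde = (g_sum + noise) / len(X)
    return W - lr * g_tilde                           # Step 4

# toy least-squares example (data and hyper-parameters are illustrative)
rng = np.random.default_rng(0)
X, y = rng.normal(size=(8, 3)), rng.normal(size=8)
grad_fn = lambda W, x, t: 2.0 * (W @ x - t) * x       # gradient of (W.x - t)^2
W = hero_like_step(np.zeros(3), X, y, grad_fn, guidance=np.ones(3), rng=rng)
print(W.shape)                                        # (3,)
```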
§.§.§ Construction of Noise Guidance The math tool, principal component analysis (PCA) <cit.> performs analyzing data represented by inter-correlated quantitative dependent variables. It forms a set of new orthogonal variables, called components, depending on the matrix eigen-decomposition and singular value decomposition (SVD). Given a matrix 𝐗, of column-wise mean equal to 0, the multiplication 𝐗^⊤𝐗 is a correlation matrix. Later, a diagonal matrix of the (non-zero) eigenvalues of 𝐗^⊤𝐗 is extracted together with the eigenvectors. Essentially, PCA simplifies data representation and decomposes its corresponding structures. We propose a simple yet efficient approach by examining the model parameters as a result of knowledge integration over diverse features extracted from private data. As in Algorithm <ref>, we employ the PCA decomposition <cit.> to extract knowledge learned by the training model and apply generated guidance ^(t) at iteration t to adjust noise addition at the next iteration. PCA decomposition can extract knowledge from representative data (, model parameters in our setting) by analyzing inter-correlated quantitative dependence. Normally, a neural network kernel extracting the features from the images is a matrix that moves over the input data to perform the dot product with the sub-region of input data. Denote ℝ to be the real number set. Let 𝐛=[b_1, b_2, …, b_k] be a vector, and 𝐁=[𝐛_1,𝐛_2,…, 𝐛_𝐝]^⊤∈ℝ^n× m be a matrix. 0.96 Step-1. For each layer, the client calculates ^(t)(^(t))^⊤ to attain ^(t)∈ℝ^k× k. Step-2. The client performs principle component analysis (^(t)) to give the sub-space ^(t)∈ℝ^d× k. The algorithm reduces the dimensions and encodes ^(t) into a compact representation that is good enough to analyze and represent current ^(t). Simultaneously, the client computes singular value decomposition ^̇(̇ṫ)̇=(^(t)) through PCA and transform (^(t)) to eigenvalues matrix ^(t)∈ℝ^k× k by ^̇(̇ṫ)̇(^̇(̇ṫ)̇)^⊤. The ^(t) is employed as the scalar matrix to adjust noise scales for a batch of gradients in t-th training iteration. Step-3. ^(t) is computed by multiplying ^(t) and ^(t), which are further utilized to guide the noise added to gradients in every backpropagation. §.§.§ Noise Guidance through Pre-existing Knowledge For a non-private model, converges to a stable status through uncountable routes of optimizing model parameters. Noise addition becomes complicated if referring to different optimizing tools; it is not generic anymore. DP-SGD sets a fixed noise scale at different training iterations. Noise addition on inevitably has a negative contribution to extracting features over private data compared with pure parameters. By rethinking DP training from sketch (, random to convergence), varying achieves improved allocation of parameter-wise heterogeneous noise at each training iteration with the constraint of a preset privacy budget. Such an automatic allocation is generated from the prior-iteration evaluation of the training model in a differentially private manner. From this viewpoint, injecting noise into the model parameters contributes negatively to both the knowledge and the process of knowledge integration. Compared with DP-SGD, the proposed method mitigates destroying the process of knowledge integration while keeping the learned knowledge unchanged. Different grid search of tuning hyper-parameters, SGD adjusts the intermediate training process via instantly learnable parameters rather than setting a set of possibilities. 
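The guidance construction above can be sketched with an SVD-based PCA. The exact shapes and the way the subspace and eigenvalue matrices are combined in the paper's Algorithm may differ, so the following is only an assumed approximation of the idea.

```python
import numpy as np

def noise_guidance(W):
    """Assumed stand-in for the PCA-based guidance: the left singular vectors
    of the layer matrix W give the principal subspace, and the singular values
    give per-direction scales (the paper combines a subspace matrix and an
    eigenvalue matrix in a similar spirit)."""
    U, S, _ = np.linalg.svd(W, full_matrices=False)
    return U, np.diag(S)

def guided_noise(W, sigma, rng):
    """Sample i.i.d. Gaussian noise and map it through the guidance."""
    U, V = noise_guidance(W)
    z = rng.normal(0.0, sigma, size=(V.shape[0], W.shape[1]))
    return U @ V @ z            # anisotropic noise with the same shape as W

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 8))    # toy layer weight matrix
print(guided_noise(W, 1.0, rng).shape)   # (16, 8)
```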
Combining grid search (vertically tuning) and SGD (horizontally tuning) may further boost the automatic optimization of DP learning in an algorithmic view. §.§ Privacy Analysis and Theoretical Explanation We first analyze the DP guarantee of SGD, which provides identical protection as standard DP-SGD as shown in Theorem <ref>. Then, building on Theorem <ref>, we instantiate SGD regarding the parameters configuration as Theorem <ref>. Let a random mechanism ^(t) be (ϵ^(t), δ)-differential privacy at the iteration t. A mechanism ^(t) parameterized by (ϵ^(t),δ) is (ϵ, δ)-differential privacy if σ =σ. Standard DP-SGD is (ϵ,δ)-differentially private if σ≥ c_2q√(Tlog(1/δ))/ϵ for any δ>0 <cit.>. The q, T are, respectively, sampling probability and the number of steps relevant to model training. The c is a constant for all DP mechanisms. Take ^(t) to be a random mechanism that is derived from (ϵ, δ)-differential privacy. The _𝗍 has the same configuration of q, T,c due to the identical training procedure. If σ is unchanged, ^(t) also satisfies σ≥ c_2q√(Tlog(1/δ))/ϵ for any δ>0. Thus, _𝗍 is (ϵ, δ)-differentially private. SGD parameterized by (0,σ̃^2) and standard DP-SGD parameterized by (0,σ^2) satisfy σ̃=σ such that the i-th entry of diagonal matrix is set to be, v_i=_i·√(k)·σ/√(∑_i=1^k_i^2). For generating noise, we need to keep σ̃^2=σ^2 to guarantee the same size of noise sampled from the distributions ,. Let n sampled from Gaussian distribution be ←(μ,σ^2). For sampling k times (until iteration k) from Gaussian distribution, we have the expectation of , 𝔼[^⊤·]= 𝔼[∑_i=1^k (n_i)^2]=tσ^2 For sampling k times from , we require the following expectation to satisfy 𝔼[ ()^⊤·]=tσ^2. This equation gives the relation ∑_i=1^k v_i^2=kσ^2. That is, a feasible solution of v_i is set to be v_i=_i·√(k)·σ/√(∑_i=1^k_i^2). Building on α-Rényi divergence and privacy loss, concentrated differential privacy (CDP) <cit.> allows improved computation mitigating single-query loss and high probability bounds for accurately analyzing the cumulative loss. It centralizes privacy loss around zero, maintaining sub-Gaussian characteristics that make larger deviations from zero increasingly improbable. In return, zero-CDP implies (ϵ_ρ,δ,δ)-DP as restated in Theorem <ref> <cit.>. A randomized mechanism is said to be ρ zero-concentrated differentially private if for any neighboring datasets D and D', and all α∈(1,∞), we have, 𝒟_α((D)|(D'))=1α-1log𝔼[e^(α-1)ℒ_𝖯𝗋𝗂^(o)]≤ρα where ℒ_𝖯𝗋𝗂^(o) is privacy loss and 𝒟_α((D)|(D')) is α-Rényi divergence between the distributions of (D) and (D'). If a random mechanism is ρ-zero-CDP, then also provides (ρ+2√(ρlog (1/δ)),δ)-DP for any δ>0. At last, since we have aligned the privacy guarantee of with the standard DP, we follow the standard composition-paradigm proof <cit.> under the definition of zCDP <cit.> through Rényi Divergence by Bun  <cit.> for a tight analysis, as shown in Theorem <ref>. Let a mechanism consist of T mechanisms: =(^(1),…,^(T)). Each SGD ^(t): 𝔻^(t)→ℝ satisfies ρ^(t)-zCDP, where the 𝔻^(t) is a subset of 𝔻. The mechanism satisfies ((max_tρ^(t))+2√((max_tρ^(t))log(1/δ)),δ)-differential privacy. Consider two neighboring datasets D,D'. By Theorem <ref>, our mechanism at each iteration adds the noise equal to being sampled from (0,σ^2). 
By Definition <ref> and Definition <ref>, we calculate, √(2 πσ^2 )exp[(a-1)𝒟_α((D)|(D'))] =∫_ℝe^(-α(x-ℱ(D))^2/2σ^2-(1-α)(x-ℱ(D'))^2/2σ^2)dx =∫_ℝe^(-(x-(αℱ(D)+(1-α)ℱ(D')))^2/2σ^2)dx +∫_ℝe^((αℱ(D)+(1-α)ℱ(D'))^2-αℱ(D)^2/2σ^2)dx -∫_ℝe^((1-α)ℱ(D')^2/2σ^2)dx =√(2 πσ^2 )exp(α(α-1)(ℱ(D)-ℱ(D'))^2/(2σ^2)) Thus, exp[(a-1)𝒟_α((D)|(D'))] =exp(α(α-1)(ℱ(D)-ℱ(D'))^2/(2σ^2)) =exp(α(α-1)Δ^2/(2σ^2)) By the result α(α-1)Δ^2/(2σ^2), this calculation tells that our noise mechanism follows (Δ^2/2σ^2)-zCDP at each iteration. By Definition <ref> and 𝔼[e^(α-1)ℒ_𝖯𝗋𝗂^(o),(t)] <cit.>, we have, 𝔼[( Pr((,D)=o)/ Pr((,D')=o))^α-1] ≤ e^(α-1)α·(max_tρ^(t)) By Markov’s inequality, calculate the probability, Pr[ℒ_𝖯𝗋𝗂^(O)≥ϵ ] = Pr[e^(α-1)ℒ_𝖯𝗋𝗂^(O)>e^(α-1)ϵ] ≤𝔼[e^(α-1)ℒ_𝖯𝗋𝗂^(O)]/e^(α-1)ϵ≤ e^(α-1)(α(max_tρ^(t))-ϵ) Subject to σ=√(2Tlog(1/δ))/ϵ, we use α = ϵ+(max_tρ^(t))/2·(max_tρ^(t)) as derived in <cit.>, and compute, Pr[ℒ_𝖯𝗋𝗂^(O)> ϵ ]≤ e^-(ϵ-(max_tρ^(t)))^2/(4·(max_tρ^(t)))≤δ For any S in Definition <ref>, ≤Pr[O∈ S∧ℒ_𝖯𝗋𝗂^(O)≤ϵ] +Pr[ℒ_𝖯𝗋𝗂^(O)>ϵ] ≤Pr[O∈ S∧ℒ_𝖯𝗋𝗂^(O)≤ϵ] +δ ≤∫_o Pr[(D')=o|o∈ S]e^ϵ do +δ =e^ϵPr[(D')=S]+δ still satisfies original DP definition, as in <cit.>. §.§ Linear Layer Analysis as an Example We consider a binary classification for simplification and then instantiate a linear layer correlation analysis as an example supplement. We regard SGD training as “ground truth”. We simplify model parameters as an abstraction of extracted features over the whole dataset. Define layer-wise model parameters to be in a binary classification model. Let the y∈{-1,1} be model output, (x,y) be the input-output pair. Let noise overall features be , where the norm maintains to be the same. We expect the noise addition to not affect the space of model parameters and to keep the individual information in the model parameters unleaked. Our objective is to minimize the variation of model outputs from DP training and pure model at each training iteration, , min_ |∑_i∫(+)x_iy_i' dn -∑_i∫ x_iy_i dn| Consider that noise variable n being injected into each feature could be continuous ideally. Since it is sampled from a distribution with a mean value of 0, the integration of n equals 0, which could be removed for simplification. We expect the first part to be large (denoting high utility) and the difference between the two parts to be as small as possible. Then, we define the variance to be, [∑_i(+)x_iy_i'-∑_i x_iy_i] Equation <ref> measures the difference of average correction of two models. Equation <ref> can be simplified by the expectation, 𝔼[∑_i((+)x_iy_i'- x_iy_i)] For linear transformation, we get, (+)^⊤ x_iy_i'-^⊤ x_iy_i = (+)^⊤ x_i(y_i+Δ y_i)-^⊤ x_iy_i = ^⊤ x_iΔ y_i+^⊤ x_iy_i+^⊤ x_iΔ y_i = (+)^⊤ x_iΔ y_i+^⊤ x_iy_i = (+)^⊤ x_i^⊤ x_i+^⊤ x_i^⊤ x_i =(^⊤^⊤+^⊤^⊤+^⊤^⊤)x_i^2 Specifically, if (+)^⊤x_i is close to y_i, the differentially-private (noisy for short) model accuracy is high. To attain the minimizer, we could solve Equation <ref> by . In this example analysis, attaining support for the noise-model relation is enough for the initial exploration. § EXPERIMENTAL EVALUATION AND EXPLANATION Our experiments are conducted on a commodity PC running Ubuntu with Intel Xeon(R) E5-2630 v3 CPU, 31.3 GiB RAM, and GeForce RTX 1080 Ti GPU. In this section, we report the convergence/training performance and test accuracy (varying with ϵ) by conducting an extensive comparison with state-of-the-arts <cit.> over standard benchmark datasets. 
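Before turning to the experiments, the per-step accounting quantities used in the analysis above can be sanity-checked numerically: the sketch below verifies that the Theorem-2 rescaling preserves the total noise energy and applies the restated zCDP-to-(ϵ,δ) conversion; the sensitivity, σ, and δ values are illustrative.

```python
import numpy as np

def rescaled_scales(S, sigma):
    """Theorem-2 style rescaling v_i = S_i * sqrt(k) * sigma / sqrt(sum_j S_j^2),
    which keeps the expected noise energy equal to k * sigma^2."""
    S = np.asarray(S, dtype=float)
    return S * np.sqrt(len(S)) * sigma / np.sqrt(np.sum(S**2))

def gaussian_rho(sensitivity, sigma):
    """A Gaussian mechanism with scale sigma is (Delta^2 / (2 sigma^2))-zCDP per step."""
    return sensitivity**2 / (2.0 * sigma**2)

def zcdp_to_dp(rho, delta):
    """rho-zCDP implies (rho + 2*sqrt(rho*log(1/delta)), delta)-DP."""
    return rho + 2.0 * np.sqrt(rho * np.log(1.0 / delta))

v = rescaled_scales([3.0, 1.0, 0.5, 0.1], sigma=1.0)
print(np.isclose(np.sum(v**2), 4 * 1.0**2))            # True: energy preserved
print(zcdp_to_dp(gaussian_rho(1.0, 4.0), delta=1e-5))  # per-step (eps, delta) bound
```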
By employing GridCam <cit.>, we visualize differentially private training to show the difference in representation. §.§ Experimental Setup §.§.§ Configuration and Dataset The baseline DP-SGD implementation is pyvacy (<https://github.com/ChrisWaites/pyvacy>). We configure experimental parameters with reference to <cit.>'s setting. To be specific, we configure lot size L=10,50,200,400, δ=1.0^-5 or 1.0^-6, and learning rate η=0.1 or 0.2. The noise level σ is set to be 0.5,1,3,5,7,10 for comprehensive comparison. Fairly, we use identical ϵ as in state-of-the-art and compare test accuracy. Experimental evaluations are performed on the MNIST dataset <cit.> and the CIFAR-10 dataset <cit.>. MNIST dataset includes 10 classes of hand-written digits of 28 ×28 gray-scale. It contains 60,000 training examples and 10,000 testing examples. CIFAR-10 dataset contains 10 classes of images, of 32×32 color-scale with three channels, It contains 50,000 in training examples and 10,000 in testing examples. §.§.§ Model Architecture On the MNIST dataset, we use LeNet <cit.>, which reaches accuracy of 99% in about 10 epochs without privacy. On CIFAR-10, we use two convolutional layers followed by two fully connected layers. In detail, convolution layers use 5× 5 convolutions, followed by a ReLU and 2×2 max-pooling. The latter is flattened to a vector that gets fed into two fully connected layers with 384 units. This architecture, non-privately, can get to about 86% accuracy in ∼200 epochs. §.§ Model Utility and Training Performance §.§.§ Convergence Analysis Figure <ref>, Figure <ref>, and Figure <ref> show the process of convergence on the MNIST and CIFAR-10 datasets in iterations and epochs when σ=0.5,1,3,5,7,10, respectively. The epoch-based figures show the whole training process on two datasets, while the iteration-based figures only display the first 30 iterations meticulously due to x-axis length limitation. For the very-tiny noise level σ=0.5,1, SGD reaches an almost identical convergence route as pure SGD when training over the MNIST dataset. For DP-SGD, iteration-wise accuracy decreases at the start of training. For a relatively small noise level σ=3,5, we can see that SGD converges more stable. Although SGD can not reach the identical accuracy as pure SGD, its shape (e.g., from iteration=[5,10] and epoch=[4,20]) of convergence is much more similar to SGD than DP-SGD. For σ≥5, the convergence of DP-SGD turns out to be very unstable, while SGD looks more robust. Besides, the shaking of SGD is also relatively smaller, which contributes to step-wise stability during a whole training process. On CIFAR-10, Figure <ref> shows the test accuracy by training from scratch. Recall that DP-SGD over CIFAR-10 typically requires a pretraining phase. For σ=0.5,1, SGD attains competitive training convergence compared with SGD training. For σ=3,5, SGD training still moves towards convergence, while DP-SGD could not. For σ=7,10, both SGD and DP-SGD could not converge, whereas SGD collapses later. §.§.§ Model Accuracy Table <ref> shows comparative results with prior works. To be fair, we compare the test accuracy of the trained models under the constraint of identical ϵ. We can see that improves the test accuracy of state-of-the-arts <cit.>. In most cases, our SGD could attain >98% test accuracy on the MNIST dataset, whereas other works achieve 95%∼97%. Only several works were trained over the CIFAR-10 dataset, yet with the <60% accuracy. In contrast, SGD could achieve >64.5% accuracy, showing much better results. 
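For reference, the CIFAR-10 architecture described in the setup above can be written as a short PyTorch module; the channel widths and the exact arrangement of the 384-unit fully connected layers are not fully specified in the text, so the choices below are assumptions.

```python
import torch
import torch.nn as nn

class Cifar10Net(nn.Module):
    """Sketch of the CIFAR-10 model described above: two 5x5 convolutions,
    each followed by ReLU and 2x2 max-pooling, then fully connected layers
    with 384 units.  Channel width 64 is assumed, as it is not stated."""
    def __init__(self, num_classes=10, channels=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, channels, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(channels, channels, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(channels * 8 * 8, 384), nn.ReLU(),
            nn.Linear(384, 384), nn.ReLU(),
            nn.Linear(384, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

print(Cifar10Net()(torch.zeros(1, 3, 32, 32)).shape)   # torch.Size([1, 10])
```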
Specifically, SGD improves 18% accuracy on <cit.>, 47% accuracy on <cit.>, and 22% accuracy on <cit.>. Training a DP model over the CIFAR-10 dataset may require a pretraining phase, whereas SGD training could alleviate this. It shows that SGD behaves better on more representative datasets (e.g., CIFAR-10>MNIST) than DP-SGD. Figure <ref> shows a box-whisker plot on accuracy given varying ϵ. Except for following identical configuration of ϵ, we show additional results with ϵ=1,2,3,4. The test accuracy is relatively stable for different ϵ in different training processes. When ϵ is very large, although test accuracy is high, DP protection may not be sufficient for practical usage. Experimental results show that SGD is more robust against large noise and supports faster convergence, especially for representative datasets. §.§ Explaining Experiments Explainable AI (XAI) has been proposed to explain why they predict what they predict. We adopt XAI to interpret the superiority/failure of various models by decomposing them into intuitive components by tracking and understanding the training performance, and visualizing the features. §.§.§ Tracking Initial-Phase Training To explain why SGD converges better, we plotted the training convergence process in the initial phase, in which the trained model is near the random initialization. Figure <ref> displays training convergence with varying lot sizes, while Figure <ref> shows training convergence when the learning rate increases to 0.2. Both Figure <ref> and Figure <ref> confirm that SGD tracks the SGD training tracks more tightly in the very beginning. Recall that a typical model training starts from the random initialization towards a stable status, which means fewer features are learned in the beginning. Thus, we expect relatively less noise to protect the “randomized” model, which learns a limited number of features, to mitigate and destroy the typical training convergence. Combining with Figure <ref>, we know that model collapse would happen when sufficient noise is assigned to enough features learned from the training data. §.§.§ Visualizing DP Training Given high-resolution and precise class discrimination, we apply Grad-CAM <cit.> to show visual results on DP training. In Grad-CAM <cit.>, the class-discriminative localization map of width u and height v for any class c is defined to be ℒ_𝖦𝗋𝖺𝖽𝖢𝖠𝖬 = 𝖱𝖾𝖫𝖴(∑_k α_k^cA^k). Here, the weight α_k^c represents a partial linearization of the downstream feature map activation A. In our experiments, we adopt GridCam <cit.> for interpreting/visualizing how DP noise affects model training. In a model training process, GridCam is employed to visualize explanations of learning features, with or without DP noise. GridCam <cit.> can coarsely locate the important regions in the image for predicting the concept, e.g., “dog” in a classification network. Figure <ref> visualizes the heat map of training with SGD compared with Figure <ref>. SGD training still maintains the representation ability to locate the important objects. That is, the reason for more satisfying accuracy is that the noise added to the gradients could not affect on models' ability for relatively accurate visualization in a statistical manner, , still protecting individual privacy. §.§.§ A Practical View of Privacy Parameters Theoretically, DP-SGD allows setting different clipping thresholds C and noise scales σ with varying numbers of training iterations t or different layers. 
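The Grad-CAM map used for these visualizations reduces to a few lines once the feature maps and class-score gradients are available; the NumPy sketch below implements L^c = ReLU(Σ_k α_k^c A^k) on toy arrays and is independent of any particular framework or of the authors' code.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM map L^c = ReLU(sum_k alpha_k^c * A^k): the channel weights
    alpha_k^c are the spatially averaged gradients of the class score with
    respect to the K feature maps A^k (arrays of shape K x H x W)."""
    alpha = gradients.mean(axis=(1, 2))                  # (K,)
    cam = np.tensordot(alpha, activations, axes=(0, 0))  # (H, W)
    return np.maximum(cam, 0.0)

rng = np.random.default_rng(0)
A = rng.normal(size=(8, 7, 7))      # toy last-layer feature maps
dYdA = rng.normal(size=(8, 7, 7))   # toy gradients of a class score w.r.t. A
print(grad_cam(A, dYdA).shape)      # (7, 7)
```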
Although its experiments adopt the fixed value σ^2= 2/ϵ^2·log (1.25/δ), SGD puts a step forward, showing a practical view of adjusting σ in every iteration and diverse noise allocation regarding every gradient update. The added noise is typically sampled from a noise distribution parameterized by σ. Besides, to explore the varying σ over diverse features, SGD still adopts a constant clipping value 𝖢𝗉 as in DP-SGD. SGD assigns σ as a variable during DP training. As for unbiased noise distribution, μ=0 holds at every execution of sampling noise. In probability theory, the sum of multiple independent normally distributed random variables is normal, and its variance is the sum of the two variances. We use this conclusion to assign the σ over diverse features at each training iteration t. If we regard all assigned σ at each iteration as a matrix, all entries in this matrix vary at different iterations. The parameter configuration at every iteration follows Theorem <ref>, supporting linearity relation to value in in SGD. Although the theoretical expectation of introducing Gaussian noise with 0 mean value remains identical to the clean model, practical training shifts the expected results to some extent. §.§.§ Understanding of Improved Model Convergence Motivated by utility improvement, we perform repeated experiments similar to  <ref> to attain the relation between model training and noise heterogeneity in empiricism. We repeatedly train an identical model given various heterogeneity (adjust noise scales to diverse model parameters for early-stage tests) and witness the corresponding phenomenon in the convergence process. Pure SGD training could attain the best accuracy and converge fastest while training with DP-SGD slows down the convergence constrained by identical remaining configurations. Even after the model's convergence, DP-SGD training can not reach the highest accuracy as pure SGD training. For testing the SGD, we adjust noise allocations via PCA by injecting them into different model parameters and gradients within an identical privacy budget constraint. Accordingly, we could attain some convergence statuses that show lower convergence performance yet better than DP-SGD. In practical training, utility loss can be interpreted to be convergence retard and degrading accuracy. Improving model utility could be explained as follows: Given an identical privacy budget, a feasible solution can always exist in a region that is upper-bounded by the ground truth and lower-bounded by fixed noise perturbation. §.§ Further Discussion We explore the limitations of our work and point out the potential future works below. 1). Speed up SGD. We observe the computation costs of PCA over a large parameter matrix are not lightweight enough. The computational cost for 𝖲𝖵𝖾𝖼 relies on the size of the inputting matrix. The block-wise computation may simplify initializing a full-rank matrix as basis vectors. Partitioning the parameter matrix into multiple blocks could speed up training in parallel; however, it may hurt the pre-existing on-the-whole knowledge stored in the current model. Another direction is to consider a computation-light method of extracting the pre-existing knowledge learned in the current model. 2). Architecture-specified construction. To acquire a new perspective of improving model utility, the proposed construction is a feasible solution but is not optimal. 
Although the trainable model could be regarded as a representation of knowledge extracted from diverse features and private data, different parameters are structured with the constraint of model initialization. At each backpropagation, we regard the model as a matrix in which each entry feeds with the values of model parameters, overlooking the effect of model structure. In the future, instead of a generic solution, we would like to explore an architecture-specified construction of SGD. § CONCLUSION Through theoretical and empirical understanding of privacy-utility space, we extend the research line of improving training performance for DP learning by designing a plug-in optimization for training with DP-SGD. The proposed DP-Hero is a versatile differential privacy framework incorporating a heterogeneous DP noise allocation manner. The primary innovation of DP-Hero is its ability to utilize the knowledge embedded in previously trained models to guide the subsequent distribution of noise heterogeneity, thereby optimizing its utility. Building on the foundation of DP-Hero, we introduce a heterogeneous version of DP-SGD, in which the noise introduced into the gradients varies. We have carried out extensive experiments to validate and elucidate the efficacy of the proposed DP-Hero. Accordingly, we provide insights on enhancing the privacy-utility space by learning from the pre-existing leaked knowledge encapsulated in the previously trained models. We point out a new way of thinking about model-guided noise allocation for optimizing SGD-dominated convergence under the DP guarantee. Besides, we explore explaining DP training via visual representation, reasoning the improved utility. Such an explainable view could benefit from understanding DP protection more vividly, for potentially being against attacks better. In a broader context, we expect heterogeneous DP learning to be adopted beyond (DP-)SGD-based instantiations. unsrt
http://arxiv.org/abs/2409.03125v1
20240904233331
Superfluidity of dipolar excitons in a double layer of $α-T_3$ with a mass term
[ "Oleg L. Berman", "Godfrey Gumbs", "Gabriel P. Martins", "Paula Fekete" ]
cond-mat.mtrl-sci
[ "cond-mat.mtrl-sci" ]
Superfluidity of dipolar excitons in a double layer of α-T_3 with a mass term

Oleg L. Berman^1,2, Godfrey Gumbs^2,3,4, Gabriel P. Martins^1,2,3, and Paula Fekete^5

Brooklyn, NY 11201, USA; New York, NY 10016, USA; New York, NY 10065, USA

September 9, 2024

§ ABSTRACT We predict Bose-Einstein condensation and superfluidity of dipolar excitons, formed by electron-hole pairs in spatially separated gapped hexagonal α-T_3 (GHAT3) layers. In the α-T_3 model, the AB-honeycomb lattice structure is supplemented with C atoms located at the centers of the hexagons in the lattice. We considered the α-T_3 model in the presence of a mass term which opens a gap in the energy dispersion. The gap-opening mass term, caused by a weak magnetic field, plays the role of Zeeman splitting at low magnetic fields for this pseudospin-1 system. The band structure of GHAT3 monolayers leads to the formation of two distinct types of excitons in the GHAT3 double layer. We consider two types of dipolar excitons in double-layer GHAT3: (a) "A excitons", which are bound states of electrons in the conduction band (CB) and holes in the intermediate band (IB), and (b) "B excitons", which are bound states of electrons in the CB and holes in the valence band (VB). The binding energy of A and B dipolar excitons is calculated. For a two-component weakly interacting Bose gas of dipolar excitons in a GHAT3 double layer, we obtain the energy dispersion of collective excitations, the sound velocity, the superfluid density, and the mean-field critical temperature T_c for superfluidity. § INTRODUCTION The many-particle systems of dipolar (indirect) excitons, formed by spatially separated electrons and holes in semiconductor coupled quantum wells (CQWs) and novel two-dimensional (2D) materials, have been the subject of numerous experimental and theoretical studies. These systems are attractive in large part due to the possibility of Bose-Einstein condensation (BEC) and superfluidity of dipolar excitons, which can be observed as persistent electrical currents in each quantum well, and also through coherent optical properties <cit.>. Recent progress in theoretical and experimental studies of BEC and superfluidity of dipolar excitons in CQWs has been reviewed in Ref. <cit.>. Electron-hole superfluidity in double layers can occur not only in the BEC regime, but also in the Bardeen-Cooper-Schrieffer (BCS)-BEC crossover regime <cit.>. A number of experimental and theoretical investigations have been devoted to the BEC of electron-hole pairs, formed by spatially separated electrons and holes in a double layer formed by parallel graphene layers. These investigations were reported in Refs. <cit.>. Both BEC and superfluidity of dipolar excitons in double layers of transition-metal dichalcogenides (TMDCs) <cit.> and phosphorene <cit.> have been discussed, because the exciton binding energies in novel 2D semiconductors are quite large. Possible BEC in a long-lived dark spin state of 2D dipolar excitons has been experimentally observed for GaAs/AlGaAs semiconductor CQWs <cit.>. Recently, the electronic properties of the α-T_3 lattice have been the subject of intensive theoretical and experimental investigations due to its surprising fundamental physical properties as well as its promising applications in solid-state devices <cit.>. For a review of artificial flat band systems, see Ref. <cit.>.
Raoux, et al. <cit.> proposed that an α-T_3 lattice could be assembled from cold fermionic atoms confined to an optical lattice by means of three pairs of laser beams for the optical dice lattice (α=1) <cit.>. This structure consists of an AB-honeycomb lattice (the rim) like that in graphene which is combined with C atoms at the center/hub of each hexagon. A parameter α represents the ratio of the hopping integral between the hub and the rim to that around the rim of the hexagonal lattice. By dephasing one of the three pairs of laser beams, one could vary the parameter 0 ≤α =tan ϕ≤ 1. Optically induced dressed states <cit.>, and their tunneling, transport <cit.>, and collective properties <cit.>, as well as α-T_3 based nanoribbons <cit.> have been analyzed. The BEC and superfluidity of dipolar magnetoexcitons in α-T_3 double layers in a strong uniform perpendicular magnetic field was proposed in Ref. <cit.>. We present the conditions for BEC and superfluidity of a two-component weakly interacting Bose gas of dipolar excitons, formed by electron-hole pairs in spatially separated GHAT3 layers. An applied weak magnetic field to this pseudospin-1 monolayer system results in a Zeeman-type splitting of the energy subbands <cit.>. This dispersion relation consists of three bands: CB, IB and VB. We consider two types of dipolar excitons in double-layer of GHAT3: (a) “A excitons”, formed as bound states of electrons in CB and holes in IB and (b) “B excitons”, formed as bound states of electrons in CB and holes in VB. The binding energy of A and B dipolar excitons is calculated. For a two-component weakly interacting Bose gas of dipolar excitons in a GHAT3 double layer, we obtain the energy dispersion of collective excitations, the sound velocity, the superfluid density, and the mean-field critical temperature T_c for superfluidity. Our paper is organized in the following way. In Sec. <ref>, the two-body problem for an electron and a hole, spatially separated in two parallel GHAT3 monolayers, is formulated, and the effective masses and binding energies are obtained for two types of dipolar excitons. The spectrum of collective excitations and the sound velocity for the two-component weakly interacting Bose gas of dipolar excitons in the double layer of GHAT3 are derived in Sec. <ref>. In Sec. <ref> the superfluidity of the weakly interacting Bose gas of dipolar excitons in the double layer of GHAT3 is predicted, and the mean-field critical temperature of the phase transition is obtained. The results of our calculations are discussed in Sec. <ref>. In Sec. <ref> our conclusions are reported. § DIPOLAR EXCITONS IN A DOUBLE LAYER OF Α-T_3 WITH A MASS TERM We will consider charge carriers in the conduction band, valence band, and in the intermediate band, which corresponds to the flat band in an α-T_3 layer without a mass term. In the presence of a weak magnetic field, the low-energy Hamiltonian of the charge carriers in a GHAT3 monolayer at the K and K' points is given by <cit.> ℋ̂_λ = ( [ Δ f (𝐤) cosϕ 0; f^* (𝐤) cosϕ 0 f (𝐤) sinϕ; 0 f^* (𝐤) sinϕ - Δ ]) , where the origin in 𝐤-space is defined to be around the K point, 𝐤 = (k_x, k_y) and tanθ_𝐤 = k_y/k_x, ϕ = tan^-1α, f (𝐤) = ħ v_F(λ k_x - i k_y) = λħ v_F k e^-iλθ_𝐤, with λ = ± 1 being the valley index at the K and K' points, 2 Δ is the gap in the energy spectrum of a GHAT3 layer due to the mass term in the Hamiltonian. In an α-T_3 layer honeycomb lattice, there is an added fermionic hub atom C at the center of each hexagon. 
Let the hopping integral be t_1 between the hub atom and either an A or B atom on the rim and t_2 between nearest neighbors on the rim of the hexagon. The ratio of these two nearest neighbor hopping terms is denoted as t_2/t_1 = α, where the parameter α satisfies 0 ≤α≤ 1. The largest value when α is 1 is for the dice lattice, whereas its value of 0 corresponds to graphene for decoupled hub from rim atoms <cit.>. At small momenta near K and K' points the dispersion for the charge carriers in the conduction band ϵ_CB(k) is given by the relation <cit.> ϵ_CB(k) ≈Δ + ħ^2 k^2/2 m_CB , where 𝐤 = 𝐩/ħ and 𝐩 are the wave vector and momentum of a quasiparticle, m_CB is the effective mass of the charge carriers in the conduction band, given by m_CB = (1+ α^2)Δ/2v_F^2 , where v_F is the Fermi velocity in a GHAT3 layer, and φ = tan^-1α <cit.>. At small momenta near K and K' points, the dispersion for the charge carriers in the valence band ϵ_VB(k) is given by the relation <cit.> ϵ_VB(k) ≈ - Δ - ħ^2 k^2/2 m_VB , with m_VB the effective mass of the charge carriers in the valence band, given by m_VB = (1+ α^2)Δ/2v_F^2α^2 . At small momenta near K and K' points, the dispersion for the charge carriers in the intermediate band, corresponding to the flat band in an α-T_3 layer without a mass term, ϵ_IB(k) is given by the relation <cit.> ϵ_IB(k) ≈ - ħ^2 k^2/2 m_IB , where m_IB is the effective mass of the charge carriers in the intermediate band, given by m_IB = (1+ α^2)Δ/2v_F^2(1 - α^2) . It is worthy noting that there are spin degeneracy and valley degeneracy for the energy of the charge carriers in a GHAT3 layer. In the system under consideration in this paper, electrons are confined in a 2D GHAT3 monolayer, while an equal number of positive holes are located in a parallel GHAT3 monolayer at a distance D away as demonstrated in Fig. <ref>. This electron-hole system in two parallel GHAT3 layers is treated as a 2D system without interlayer hopping. Due to the absence of tunneling of electrons and holes between different GHAT3 monolayers, electron-hole recombination is suppressed by a dielectric barrier with dielectric constant ϵ _d that separates the GHAT3 monolayers. Therefore, the dipolar excitons, formed by electrons and holes, located in two different GHAT3 monolayers, have a longer lifetime than direct excitons. The electron and hole are attracted via electromagnetic interaction V(r_eh), where r_eh is the distance between the electron and hole, and they could form a bound state, i.e., an exciton, in three-dimensional (3D) space. Therefore, in order to determine the binding energy of the exciton a two body problem in restricted 3D space has to be solved. However, if one projects the electron position vector onto the GHAT3 plane with holes and replaces the relative coordinate vector r_eh by its projection 𝐫 on this plane, the potential V(r_eh) may be expressed as V(r_eh)= V(√(r^2+D^2)), where r is the relative distance between the hole and the projection of the electron position vector onto the GHAT3 plane with holes. A schematic illustration of the dipolar exciton in a GHAT3 double layer is presented in Fig <ref>. By introducing in-plane coordinates 𝐫_1=(x_1,y_1) and 𝐫_2=(x_2,y_2) for the electron and the projection vector of the hole, respectively (where 𝐫=𝐫_1-𝐫_2), the dipolar exciton can be described by employing a two-body 2D Schrödinger equation with potential V(√(r^2+D^2)). So that the restricted 3D two-body problem can be reduced to a 2D two-body problem on a GHAT3 layer with the holes. 
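The low-energy Hamiltonian and the small-k effective masses quoted above can be checked against each other numerically. The sketch below, in units with ħ = v_F = 1 and with an illustrative Δ and α, diagonalizes the 3×3 matrix near the K point and compares the conduction-band curvature with the expression for m_CB.

```python
import numpy as np

DELTA, ALPHA, LAM = 0.5, 0.6, +1        # illustrative gap, alpha and valley index
PHI = np.arctan(ALPHA)

def hamiltonian(kx, ky):
    """3x3 low-energy Hamiltonian of gapped alpha-T_3 near the K point
    (units with hbar = v_F = 1), following the matrix given above."""
    f = LAM * kx - 1j * ky
    return np.array([[DELTA, f * np.cos(PHI), 0.0],
                     [np.conj(f) * np.cos(PHI), 0.0, f * np.sin(PHI)],
                     [0.0, np.conj(f) * np.sin(PHI), -DELTA]])

def analytic_masses(delta, alpha):
    """m_CB, m_VB, m_IB from the expressions above (hbar = v_F = 1)."""
    return ((1 + alpha**2) * delta / 2,
            (1 + alpha**2) * delta / (2 * alpha**2),
            (1 + alpha**2) * delta / (2 * (1 - alpha**2)))

k = 1e-3                                        # small momentum near K
e_cb = np.linalg.eigvalsh(hamiltonian(k, 0.0))[-1]
m_cb_numeric = k**2 / (2 * (e_cb - DELTA))      # curvature-based mass estimate
print(m_cb_numeric, analytic_masses(DELTA, ALPHA)[0])   # nearly equal
```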
The dipolar excitons with spatially separated electrons and holes in two parallel GHAT3 monolayers can be created by laser pumping with an applied external voltage. While an electron in the conduction band and a hole in the valence or intermediate band are excited due to absorbtion of a photon, voltages are applied with opposite sign to confine electrons on one layer and holes on another so that dipoles point in one direction only. In our case, “both” the energy bands and the exciton modes referred to the K-point, not one to the Γ-point and the other to the K point. We note that in the dispersion equations appearing in Refs. <cit.> the origin of the k-space was specified to be around the K point, (and not the Γ point) as did several authors investigating α-T_3. So, our choice of origin not being the center of the Brillouin zone has precedence. For graphene, the plasmon dispersion relation and low-energy bands, presented by Ref. <cit.> were both consistently measured from the K point taken as origin and not the center of the Brillouin zone. We consider excitons, formed by the an electron and a hole from the same valley, because an electron and a hole from different valleys cannot be excited by absorption of photon due to conservation of momentum. The reason is that photons carry momenta much smaller than the difference between K and K' in reciprocal space. The effective Hamiltonian of an electron and a hole, spatially separated in two parallel GHAT3 monolayers with the interlayer distance D has the following form Ĥ_ex=-ħ ^2/2m_eΔ _𝐫_1-ħ ^2/2m_hΔ _𝐫_2 + V (r) , where Δ _𝐫_1 and Δ _𝐫_2 are the Laplacian operators with respect to the components of the vectors 𝐫_1 and 𝐫_2, respectively, and m_e and m_h are the effective masses of the electron and hole, respectively. For CV excitons m_e = m_CB and m_h = m_VB; and for CI excitons m_e = m_CB and m_h = m_IB, where m_CB, m_VB, and m_IB are given by Eqs. (<ref>), (<ref>), and (<ref>), correspondingly. The problem of the in-plane motion of an interacting electron and hole forming the exciton in a GHAT3 double layer can be reduced to that of one particle with the reduced mass μ = m_em_h/(m_e+m_h) in a V(r) potential and motion of the center-of-mass of the exciton with the mass M = m_e + m_h. We introduce the coordinates of the center-of-mass 𝐑 of an exciton and the coordinate of the relative motion 𝐫 of an electron and hole as 𝐑 = (m_e𝐫_1 + m_h𝐫_2)/(m_e+ m_h) and 𝐫 = 𝐫_1 - 𝐫_2, correspondingly. The Hamiltonian Ĥ_ex can be represented in the form: Ĥ_ex= Ĥ_R + Ĥ_r, where the Hamiltonians of the motion of the center-of-mass Ĥ_R and relative motion of electron and a hole Ĥ_r. The solution of the Schrödinger equation for the center-of-mass of an exciton Ĥ_Rψ (𝐑) = ℰψ (𝐑) is the plane wave ψ (𝐑) = e^i𝐏·𝐑/ħ with the quadratic energy spectrum ℰ = P^2/(2M), where 𝐏 is the momentum of the center-of-mass of an exciton. We consider electrons and holes to be located in GHAT3 parallel layers, embedded in a dielectric with the dielectric constant ϵ_d. The potential energy of electron-hole Coulomb attraction is V(r) = -κ e^2/ϵ_d√(r^2 + D^2) , where κ=9× 10^9 N× m^2/C^2, ϵ_d is the dielectric constant of the insulator (SiO_2 or h-BN), surrounding the electron and hole GHAT3 monolayers, forming the double layer. For the h-BN barrier we substitute the dielectric constant ϵ_d= 4.89, while for the SiO_2 barrier we substitute the dielectric constant ϵ_d=4.5. 
For h-BN insulating layers, ϵ_d = 4.89 is the effective dielectric constant, defined as ϵ_d =√(ε^)√(ε^∥) <cit.>, where ε^= 6.71 and ε^∥ =3.56 are the components of the dielectric tensor for h-BN <cit.>. Assuming r≪ D, we approximate V(r) by the first two terms of the Taylor series and obtain V(r)=-V_0+γ r^2 , where V_0=κ e^2/ϵ_dD, γ =κ e^2/ 2ϵ_dD^3 . The solution of the Schrödinger equation for the relative motion of an electron and a hole Ĥ_rΨ (𝐫) = EΨ (𝐫) with the potential (<ref>) is reduced to the problem of a 2D harmonic oscillator with the exciton reduced mass μ. Following Refs. [Maksym,Iyengar] one obtains the radial Schrödinger equation and the solution for the eigenfunctions for the relative motion of an electron and a hole in a GHAT3 double layer in terms of associated Laguerre polynomials, which can be written as Ψ_N L (𝐫)=N!/a^|L|+1√(n! n^'!)2^-|L|/2sgn(L)^Lr^|L|e^-r^2/(4a^2)× L_N^|L|(r^2/(2a^2))e^-iLφ/(2π )^1/2 , where N=min(n,n^'), L=n-n^', n,n^'=0,1,2,3,… are the quantum numbers, φ is the polar angle, and a=[ ħ /(2√(2μγ))]^1/2 is a Bohr radius of a dipolar exciton. The corresponding energy spectrum is given by E_N L≡ E_e(h) = - V_0 + (2N+1+|L|)ħ( 2γ/μ) ^1/2 . At the lowest quantum state N = L = 0 as it follows from Eq. (<ref>) the ground state energy for the exciton is given by E_00= - V_0 + ħ( 2γ/μ) ^1/2 . The important characteristics of the exciton is the square of the in-plane gyration radius r_X^2. It allows one to estimate the condition when the excitonic gas is dilute enough. One can obtain the square of the in-plane gyration radius r_X of a dipolar exciton <cit.>, which is the average squared projection of an electron-hole separation onto the plane of a GHAT3 monolayer as r_X^2≡⟨ r^2⟩ = ∫Ψ_00^* (𝐫) r^2Ψ_00 (𝐫) d^2 r = 2π/2π a^2∫_0^+∞ r^2 e^- r^2/2 a^2 r d r = 2 a^2 . We consider dipolar excitons, formed by an electron in the conduction band and a hole in the valence band (CV excitons) and formed by and electron in the conduction band and a hole in the intermediate valence band (CI excitons). For CV excitons one has μ_CV = m_CBm_VB/m_CB + m_VB = Δ/2v_F^2; M_CV = m_CB + m_VB = (1 + α^2)^2Δ/2 v_F^2α^2 . For CI excitons one has μ_CI = m_CBm_IB/m_CB + m_IB = (1 + α^2) Δ/2v_F^2(2 - α^2); M_CI = m_CB + m_IB = (1 + α^2)(2 - α^2) Δ/2 v_F^2(1 - α^2) . § THE COLLECTIVE EXCITATIONS SPECTRUM AND SUPERFLUIDITY FOR THE TWO-COMPONENT SYSTEM OF DIPOLAR EXCITONS We consider the dilute limit for dipolar exciton gas in a GHAT3 double layer, when n_Aa_B A^2≪ 1 and n_Ba_B B^2≪ 1, where n_A(B) and a_B A(B) are the concentration and effective exciton Bohr radius for A(B) dipolar excitons, correspondingly. In the dilute limit, dipolar A and B excitons are formed by electron-hole pairs with the electrons and holes spatially separated in two different GHAT3 layers. We will treat the two-component weakly interacting Bose gas of dipolar excitons in a GHAT3 double layer by applying the approach analogous to one used for dipolar excitons in a transition metal dichalcogenide (TMDC) double layer <cit.>. Since the dipolar excitons, formed by the charge carriers in different valleys, are characterized by the same energy, the exciton states are degenerate with respect to the valley degree of freedom. Therefore, we consider the Hamiltonian of the weakly interacting Bose gas of dipolar excitons, formed in a single valley. We will take into account the degeneracy of the exciton states with respect to spin and valley degrees of freedom by the introducing the spin and valley degeneracy factor s = 16 below. 
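Before writing down the many-body Hamiltonian, it is useful to note that the single-exciton quantities above are straightforward to evaluate. The sketch below computes the binding energy E_b = V_0 - ħ(2γ/μ)^{1/2} for CV and CI excitons; the Fermi velocity v_F ≈ 10^6 m/s and lattice constant a ≈ 0.246 nm are assumed values that are not specified in this excerpt.

```python
import numpy as np

HBAR, E_CH, KAPPA = 1.0546e-34, 1.602e-19, 9.0e9
V_F, A_LAT = 1.0e6, 2.46e-10    # assumed Fermi velocity (m/s) and lattice constant (m)

def reduced_mass(delta_J, alpha, kind):
    """Reduced exciton mass: 'CV' (B excitons) or 'CI' (A excitons), SI units."""
    if kind == "CV":
        return delta_J / (2 * V_F**2)
    return (1 + alpha**2) * delta_J / (2 * V_F**2 * (2 - alpha**2))

def binding_energy(delta_J, alpha, D, eps_d, kind):
    """E_b = V_0 - hbar*sqrt(2*gamma/mu) in the harmonic (r << D) approximation,
    with V_0 = kappa e^2/(eps_d D) and gamma = kappa e^2/(2 eps_d D^3)."""
    V0 = KAPPA * E_CH**2 / (eps_d * D)
    gamma = KAPPA * E_CH**2 / (2 * eps_d * D**3)
    mu = reduced_mass(delta_J, alpha, kind)
    return V0 - HBAR * np.sqrt(2 * gamma / mu)

delta_J = 0.5 * HBAR * V_F / A_LAT            # the gap 0.5*hbar*v_F/a in joules
for kind in ("CV", "CI"):
    Eb_meV = binding_energy(delta_J, 0.6, 25e-9, 4.89, kind) / E_CH * 1e3
    print(kind, round(Eb_meV, 2), "meV")
```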
The Hamiltonian Ĥ of the 2D A and B weakly interacting dipolar excitons can be written as Ĥ=Ĥ_A+Ĥ_B+Ĥ_I , where Ĥ_A(B) are the Hamiltonians of A(B) excitons defined as Ĥ_A(B)=∑_𝐤E_A(B)(k)a_𝐤A(B)^†a_𝐤A(B)+g_AA(BB)/2S∑_𝐤𝐥𝐦a_𝐤A(B)^†a_𝐥A(B)^†a_A(B)𝐦a_A(B)𝐤+𝐥-𝐦 , and Ĥ_I is the Hamiltonian of the interaction between A and B excitons presented as Ĥ_I=g_AB/S∑_𝐤𝐥𝐦a_𝐤 A^†a_𝐥B^†a_B𝐦a_A𝐤+ 𝐥-𝐦 , where a_𝐤A(B)^† and a_𝐤A(B) are Bose creation and annihilation operators for A(B) dipolar excitons with the wave vector 𝐤, correspondingly, S is the area of the system, E_A(B)(k)≡ϵ _A(B) = ε _(0)A(B)(k)+𝒜_A(B) is the energy spectrum of non-interacting A(B) dipolar excitons, respectively, ε _(0)A(B)(k)=ħ ^2k^2/(2M_A(B)), M_A(B) is an effective mass of non-interacting dipolar excitons, 𝒜_A(B) is the constant, which depends on A(B) dipolar exciton binding energy and the corresponding gap, g_AA(BB) and g_AB are the interaction constants for the repulsion between two A dipolar excitons, two B dipolar excitons and for the interaction between A and B dipolar excitons, respectively. In dilute system with large interlayer separation D, two dipolar excitons, located at distance R, repel each other via the dipole-dipole interaction potential U(R)=κ e^2D^2/(ϵ _dR^3). Following the procedure described in Ref. <cit.>, the interaction parameters for the exciton-exciton repulsion in very dilute systems can be obtained implying the exciton-exciton dipole-dipole repulsion exists only at the distances between excitons greater than the distance from the exciton to the classical turning point. The many-particle Hamiltonian for a weakly interacting Bose gas can be diagonalized within the Bogoliubov approximation <cit.>, replacing the product of four operators in the interaction term by the product of two operators. The Bogoliubov approximation is valid if one assumes that most of the particles belong to BEC. In this case, in the Hamiltonian one can keep only the terms responsible for the interactions between the condensate and non-condensate particles, while the terms describing the interactions between non-condensate particles, are neglected. Following the procedure, described in Refs. <cit.>, applying the Bogoliubov approximation <cit.>, generalized for a two-component weakly interacting Bose gas <cit.> and introducing the following notation, G_AA = g_AA n_A = g n_A , G_BB = g_BB n_B = g n_B , G_AB = g_AB√(n_An_B) = g √(n_An_B) , ω_A (k) = √(ε_(0)A^2(k) + 2 G_AAε_(0)A(k)) , ω_B(k) = √(ε_(0)B^2(k) + 2 G_BBε_(0)B(k)) , one obtains two modes of the spectrum of Bose collective excitations ε_j(k) ε_j(k) = √(ω_A^2(k) + ω_B^2(k) + (-1)^j-1√((ω_A^2(k) - ω_B^2(k))^2 + (4G_AB)^2ε_(0)A(k)ε_(0)B(k))/2) , where j=1, 2. In our approach, the condition G_AB^2 = G_AAG_BB holds. At small momenta p = ħ k, when ε_(0)A(k) ≪ G_AA and ε_(0)B(k) ≪ G_BB, expanding the spectrum of collective excitations ε_j(k) up to the first order with respect to the momentum p, one obtains two sound modes in the spectrum of the collective excitations ε_j(p) = c_jp, where c_j is the sound velocity written as c_j = √(G_AA/2M_A + G_BB/2M_B + (-1)^j-1√((G_AA/2M_A - G_BB/2M_B)^2 + G_AB^2/M_AM_B)) , At j = 1, the spectrum of collective excitations is determined by the non-zero sound velocity c_1, while at j = 2 the sound velocity vanishes with c_2 = 0. 
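A short numerical sketch of the two sound modes follows; the dipole-dipole interaction constant g is not given explicitly in this excerpt, so a placeholder value is used, and the exciton masses are likewise illustrative.

```python
import numpy as np

def sound_velocities(gAA, gBB, gAB, nA, nB, MA, MB):
    """Bogoliubov sound velocities c_1, c_2 for the two-component exciton gas,
    with G_AA = g_AA n_A, G_BB = g_BB n_B and G_AB = g_AB sqrt(nA nB)."""
    GA, GB, GAB = gAA * nA, gBB * nB, gAB * np.sqrt(nA * nB)
    x, y = GA / (2 * MA), GB / (2 * MB)
    root = np.sqrt((x - y)**2 + GAB**2 / (MA * MB))
    return np.sqrt(x + y + root), np.sqrt(max(x + y - root, 0.0))

M_E = 9.109e-31
MA, MB = 0.4 * M_E, 0.5 * M_E      # placeholder exciton masses
g = 1.0e-38                        # placeholder dipole-dipole constant (J m^2)
n = 50e11 * 1e4                    # 50 x 10^11 cm^-2 converted to m^-2
c1, c2 = sound_velocities(g, g, g, n, n, MA, MB)
print(c1, c2)                      # c2 -> 0 when G_AB^2 = G_AA G_BB, as stated above
```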
At large momenta, for the conditions when ε _(0)A(k)≫ G_AA and ε _(0)B(k)≫ G_BB, one obtains two parabolic modes of collective excitations with the spectra ε _1(k)=ε _(0)A(k) and ε _2(k)=ε _(0)B(k), if M_A<M_B and if M_A>M_B with the spectra ε _1(k)=ε _(0)B(k) and ε _2(k)=ε _(0)A(k). § SUPERFLUIDITY OF THE WEAKLY-INTERACTING BOSE GAS OF DIPOLAR EXCITONS Since when j = 2 the sound velocity vanishes, below we take into account only the branch of the spectrum of collective excitations at j = 1, neglecting the branch at j = 2. According to Refs. <cit.>, it is clear that we need a finite sound velocity for superfluidity. Since the branch of the collective excitations at zero sound velocity for the collective excitations corresponds to the zero energy of the quasiparticles (which means that no quasiparticles are created with zero sound velocity), this branch does not lead to the dissipation of energy resulting in finite viscosity and, therefore, does not influence the Landau critical velocity. This is the reason for eliminating the zero sound velocity case in our considerations here. The weakly-interacting gas of dipolar excitons in the double layer of GHAT3 satisfies the Landau criterion for superfluidity <cit.>, because at small momenta the energy spectrum of the quasiparticles in the weakly-interacting gas of dipolar excitons at j = 1 is sound-like with the finite sound velocity c_1. In the moving weakly-interacting gas of dipolar excitons the quasiparticles are created at velocities above the velocity of sound, and the critical velocity for superfluidity reads as v_c = c_1. The difference between the ideal Bose gas and two-component weakly interacting Bose gas of dipolar excitons is that while the spectrum of ideal Bose gas has no branch with finite sound velocity, the dipolar exciton system under consideration has one branch in the spectrum of collective excitations with finite sound velocity at j = 1 due to exciton-exciton interaction. Therefore, at low temperatures, the two-component system of dipolar excitons exhibits superfluidity due to exciton-exciton interactions, while the ideal Bose gas does not demonstrate superfluidity. We defined the density of the superfluid component ρ _s(T) as ρ _s(T)=ρ -ρ _n(T), where ρ =M_An_A+M_Bn_B is the total 2D density of the dipolar excitons and ρ _n(T) denotes the density of the normal component. The density ρ _n(T) of the normal component can be defined using standard procedure <cit.>. The assumption that the dipolar exciton system moves with a velocity 𝐮 implies that the superfluid component moves with the velocity 𝐮. The energy dissipation at nonzero temperatures T is characterized by the occupancy of quasiparticles in this system. Since the density of quasiparticles is small at low temperatures, the gas of quasiparticles can be treated as an ideal Bose gas. In order to obtain the density of the superfluid component, one can define the total mass flow for a Bose gas of quasiparticles in the frame, in which the superfluid component is assumed to be at rest, as 𝐉= s ∫d^2p/(2πħ )^2𝐩 f[ ε _1(p)-𝐩·𝐮] , where s = 16 is the spin and valley degeneracy factor, f[ ε _1(p))] =( exp[ ε _1(p)/(k_BT)] -1) ^-1 is the Bose-Einstein distribution function for the quasiparticles with the dispersion ε _1(p), and k_B is the Boltzmann constant. Expanding the expression under the integral in Eq. (<ref>) up to the first order with respect to 𝐩·𝐮/(k_BT), one has: 𝐉=- s 𝐮/2∫d^2p/(2πħ )^2 p^2∂ f[ ε _1(p)] /∂ε _1(p) . 
The density ρ _n of the normal component in the moving weakly-interacting Bose gas of dipolar excitons is defined as <cit.> 𝐉=ρ _n𝐮 . Employing Eqs. (<ref>) and (<ref>), one derives the normal component density as ρ _n(T)=-s/2∫d^2p/(2πħ )^2p^2∂ f[ ε _1(p)] /∂ε _1(p) . At low temperatures k_BT≪ M_A(B)c_j^2, the small momenta (ε _(0)A(k)≪ G_AA and ε _(0)B(k)≪ G_BB) make the dominant contribution to the integral on the right-hand side of Eq. (<ref>). The quasiparticles with such small momenta are characterized by the sound spectrum ε _1(k)=c_1k with the sound velocity defined by Eq. (<ref>). By substituting ε _1(k)=c_1k into Eq. (<ref>), we obtain ρ _n(T)=3s ζ (3)/2πħ ^2c_1^4k_B^3T^3 , where ζ (z) is the Riemann zeta function (ζ (3)≃ 1.202). The mean field critical temperature T_c of the phase transition at which the superfluidity occurs, implying neglecting the interaction between the quasiparticles, is obtained from the condition ρ_s(T_c) = 0 <cit.>: ρ_n (T_c) = ρ = M_A n_A + M_Bn_B . At low temperatures k_BT≪ M_A(B)c_1^2 by substituting Eq. (<ref>) into Eq. (<ref>), one derives T_c=[ 2πħ ^2ρ c_1^4/3ζ (3)s k_B^3] ^1/3 . While Bose-Einstein condensation occurs at absolute zero even in a two-dimensional (2D) system, it is well known that in a 2D bosonic system, Bose-Einstein condensation does not occur at finite temperature, and only the quasi-long-range order appears. In this paper, we have obtained the mean field critical temperature T_c of the phase transition at which superfluidity appears without claiming BEC in a 2D system at finite temperature. In this work, we have considered BEC only at absolute zero temperature. § DISCUSSION In this section we now discuss the results of our calculations. In Fig. <ref>, we present the results for the exciton binding energy ℰ_b(α,Δ,D) for CV and CI excitons as functions of the gap Δ for chosen parameter α = 0.6 and interlayer separations D = 25 nm. According to Fig. <ref>, ℰ_b(α,Δ,D) is an increasing function of Δ, whereas for CV excitons the exciton binding energy is slightly larger than that for CI excitons. In Fig. <ref>, we present our results for the exciton binding energy ℰ_b(α,Δ,D) for CV and CI excitons as functions of the parameter α for chosen gap Δ = 0.5 ħ v_F/a and interlayer separations D = 25 nm. According to Fig. <ref>, ℰ_b(α,Δ,D) does not depend on α for CV excitons, whereas it is an increasing function of α for CI excitons. At α≲ 0.7ℰ_b(α,Δ,D) for CV excitons is larger than for CI excitons, while at α≳ 0.7ℰ_b(α,Δ,D) for CI excitons is larger than for CV excitons. In Fig. <ref>, we present the results of our calculations for the exciton binding energy ℰ_b(α,Δ,D) for CV and CI excitons as functions of the interlayer separation D for chosen parameter α = 0.6 and gap Δ = 0.5 ħ v_F/a. According to Fig. <ref>, ℰ_b(α,Δ,D) is a decreasing function of D, whereas for CV excitons the exciton binding energy is slightly larger than for CI excitons. In Fig. <ref>, we present plots of the effective masses for CV and CI dipolar excitons as functions of the gap Δ for chosen α = 0.6 for (a) center-of-mass exciton mass M on the left-hand side and (b) reduced exciton mass μ, on the right. According to Fig. <ref>, both M and μ for the CV and CI excitons are increasing functions of Δ, while for CV excitons both M and μ are slightly larger than for CI excitons. 
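The mean-field T_c above can be evaluated once ρ and c_1 are fixed; the sketch below uses placeholder masses and sound velocity and checks that ρ_n(T_c) = ρ by construction.

```python
import numpy as np

HBAR, KB, ZETA3, S_DEG = 1.0546e-34, 1.381e-23, 1.202, 16   # s = 16: spin x valley

def critical_temperature(nA, nB, MA, MB, c1):
    """Mean-field T_c from rho_n(T_c) = rho = M_A n_A + M_B n_B (SI units)."""
    rho = MA * nA + MB * nB
    return (2 * np.pi * HBAR**2 * rho * c1**4 / (3 * ZETA3 * S_DEG * KB**3)) ** (1 / 3)

def normal_density(T, c1):
    """Low-temperature normal-component density rho_n(T)."""
    return 3 * S_DEG * ZETA3 * (KB * T)**3 / (2 * np.pi * HBAR**2 * c1**4)

M_E = 9.109e-31
MA, MB, c1 = 0.4 * M_E, 0.5 * M_E, 5.0e4   # placeholder masses and sound velocity
n = 50e11 * 1e4                            # 50 x 10^11 cm^-2 in m^-2
Tc = critical_temperature(n, n, MA, MB, c1)
print(Tc, np.isclose(normal_density(Tc, c1), MA * n + MB * n))   # consistency check
```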
Figure <ref> shows the effective masses of a dipolar exciton for CV and CI excitons as functions of α for chosen Δ =0.5 ħ v_F/a for (a) center-of-mass exciton mass M in the left panel and (b) reduced exciton mass μ, on the right. According to Fig. <ref>, for CV excitons M is a decreasing function of α, whereas μ does not depend on α. For CI excitons, both M and μ increase as α is increased. For α≲ 0.7, both M and μ for CV excitons are larger than for CI excitons, but when α≳ 0.7ℰ_b(α,Δ,D) both M and μ for CV excitons are smaller than for CI excitons. Figure <ref> demonstrates the dependence of the sound velocity c ≡ c_1 on the hopping parameter α for chosen Δ = ħ v_F /a, interlayer separations D = 25 nm at fixed concentrations n_A=50× 10^11 cm^-2 and n_B=50× 10^11 cm^-2 of A and B excitons, respectively. According to Fig. <ref>, c does not depend much on α when α≲ 0.5, while for α≳ 0.5, the sound velocity c is a decreasing function of α. In Fig. <ref>, we plot the sound velocity c ≡ c_1 versus the gap Δ for chosen parameter α = 0.6, interlayer separations D=25 nm for chosen concentrations n_A=50× 10^11 cm^-2 and n_B=50× 10^11 cm^-2 of A and B excitons, respectively. According to Fig. <ref>, the sound velocity c is a decreasing function of Δ. In Fig. <ref>, we show the sound velocity c ≡ c_1 as a function of the interlayer separation D for hopping parameter α=0.6 and gap Δ=0.5 ħ v_F/a, for fixed concentrations n_A=50× 10^11 cm^-2 and n_B=50× 10^11 cm^-2 of A and B excitons, respectively. According to Fig. <ref>, the sound velocity c is an increasing function of D. In Fig. <ref>, we illustrate the dependence of the sound velocity c ≡ c_1 on the concentrations n_A and n_B of A and B excitons, respectively for chosen hopping parameter α=0.6 and gap Δ=0.5 ħ v_F/a, at fixed interlayer separation D= 25 nm. According to Fig. <ref>, the sound velocity c is an increasing function of both concentrations n_A and n_B. In Fig. <ref>, we present the mean-field phase transition critical temperature T_c(n_A, n_B,α, Δ, D) as a function of the parameter α for chosen gap Δ=0.5 ħ v_F/a, interlayer separations D=25 nm at the fixed concentrations n_A=50× 10^11 cm^-2 and n_B=50× 10^11 cm^-2 of A and B excitons, respectively. According to Fig. <ref>, T_c is a decreasing function of α at α≲ 0.9, while at α≳ 0.9 the critical temperature T_c is an increasing function of α. In Fig. <ref>, we present the mean-field phase transition critical temperature T_c(n_A, n_B,α, Δ, D) as a function of the gap Δ for chosen parameter α=0.6, interlayer separations D=25 nm at the fixed concentrations n_A=50× 10^11 cm^-2 and n_B=50× 10^11 cm^-2 of A and B excitons, respectively. According to Fig. <ref>, the criticaltemperature T_c is a decreasing function of Δ. In Fig. <ref>, we demonstrate the mean-field phase transition critical temperature T_c(n_A, n_B,α, Δ, D) as a function of the interlayer separation D for chosen parameter α=0.6 and gap Δ=0.5 ħ v_F/a, at the fixed concentrations n_A=50× 10^11 cm^-2 and n_B=50× 10^11 cm^-2 of A and B excitons, respectively. According to Fig. <ref>, the critical temperature T_c is an increasing function of D. In Fig. <ref>, we present density plots for the mean-field phase transition critical temperature T_c(n_A, n_B,α, Δ, D) as a function of the concentrations n_A and n_B of A and B excitons, respectively for chosen parameter α=0.6 and gap Δ=0.5 ħ v_F/a, at the fixed interlayer separation D=25 nm. According to Fig. 
<ref>, the critical temperature T_c is an increasing function of both the concentrations n_A and n_B. At a formal level, the weakly interacting Bose gas of A and B dipolar excitons in a GHAT3 double layer is similar to the two-component weakly interacting Bose gas of trapped cold atoms in a planar harmonic trap. The spectrum of collective excitations in the Bogoliubov approximation for dipolar excitons in a GHAT3 double layer is similar to the one for a two-component BEC of trapped cold atoms, studied in Refs. <cit.>. The gap parameter Δ has a dual role, since it appears both as a chemical potential in the Hamiltonian and in the mass of the excitons through the band curvature. According to Figs. <ref> and <ref>, the dipolar exciton binding energy is an increasing function of the gap Δ, while the mean field phase transition temperature T_c is a decreasing function of the gap Δ. Therefore, there should be an optimal value of Δ, which corresponds to a relatively high T_c at a relatively high dipolar exciton binding energy. The latter condition provides for the formation of the superfluid phase by relatively stable dipolar excitons. Note that electron-hole superfluids can be formed not only in the BEC regime but also in the BCS-BEC crossover regime <cit.>. Quantum Monte Carlo simulations analyzing the BCS-BEC crossover regime for electron-hole systems have been performed <cit.>. In this Paper we concentrate on the dilute electron-hole system, which corresponds to the BEC regime and matches experimentally achievable densities in electron-hole systems in 2D materials. The BCS regime requires higher concentrations, beyond the model of a weakly interacting Bose gas. The study of the BCS regime and the BCS-BEC crossover for an electron-hole superfluid in a GHAT3 double layer seems to be a promising direction for future work. The considered system of dipolar excitons in a GHAT3 double layer also has a strong similarity with photon condensation in a cavity. The collective modes and the possibility of a Kosterlitz-Thouless phase transition to the superfluid phase <cit.> have been studied for photon condensation in a cavity in Ref. <cit.>. If we consider only one type of excitons in a GHAT3 double layer, assuming the concentration of the excitons of the other type to be zero, the expressions for the spectrum of collective excitations reported in this Paper can be reduced to expressions similar to those of Ref. <cit.>. The Kosterlitz-Thouless phase transition to the superfluid phase <cit.> can be inferred from the variation of the superfluid density, which has been computed in this Paper. Note that in this Paper we did not consider vortices, as within the mean field approximation it was assumed that the number of quasiparticles is relatively small. However, beyond the mean field approximation it is possible to consider the properties of vortices in the system of dipolar excitons. Thus, the dynamical creation of fractionalized vortices and vortex lattices can be considered by applying the approach developed for the BEC of cold atoms in Ref. <cit.>. The Josephson phenomena for two trapped condensates of dipolar excitons can be studied by applying an approach similar to the one developed for the non-Abelian Josephson effect between two F=2 spinor Bose-Einstein condensates of cold atoms in double optical traps <cit.>. § CONCLUSIONS This paper is devoted to an investigation of the existence of BEC and superfluidity of dipolar excitons in double layers of GHAT3, which we have proposed and analyzed.
We have derived the solution of a two-body problem for an electron and a hole for the model Hamiltonian representing double layer GHAT3. We predict the formation of two types of dipolar excitons, characterized by different binding energies and effective masses, in the double layer of GHAT3. We have calculated the binding energy, effective mass, spectrum of collective excitations, superfluid density and the mean-field critical temperature of the phase transition to the superfluid state for the two-component weakly interacting Bose gas of A and B dipolar excitons in double layer GHAT3. We have demonstrated that at fixed exciton density, the mean-field critical temperature for superfluidity of dipolar excitons is decreased as a function of the gap Δ. Our results show that T_c is increased as a function of the density n and is decreased as a function of the gap Δ and the interlayer separation D. The occupancy of the superfluid state at T<T_c can result in the existence of persistent dissipationless superconducting oppositely directed electric currents in each GHAT3 layer, forming a double layer. According to the presented results of our calculations, while the external weak magnetic field, responsible for the formation of the gap Δ in the double layer of α-T_3 increases the exciton binding energy, the mean-field transition temperature to the superfluid phase is increased as the weak magnetic field and Δ are decreased. Therefore, the dipolar exciton system in a double-layer of GHAT3 can be applied to engineer a switch, where transport properties of dipolar excitons can be tuned by an external weak magnetic field, forming the gap Δ. Varying a weak magnetic field may lead to a phase transition between the superfluid and normal phase, which sufficiently changes the transport properties of dipolar excitons. § ACKNOWLEDGEMENT(S) G.G. would like to acknowledge the support from the Air Force Research Laboratory (AFRL) through Grant No. FA9453-21-1-0046 99Lozovik Yu. E. Lozovik and V. I. Yudson, Sov. Phys. JETP Lett. 22, 26 (1975); Sov. Phys. JETP 44, 389 (1976). Snoke D. W. Snoke, Science 298, 1368 (2002). Butov L. V. Butov, J. Phys.: Condens. Matter 16, R1577 (2004). Eisenstein J. P. Eisenstein and A. H. MacDonald, Nature 432, 691 (2004). Snoke_review D. W. Snoke, in Quantum Gases: Finite Temperature and Non-equilibrium Dynamics, edited by N. P. Proukakis, S. A. Gardiner, M. J. Davis, and M. H. Szymanska, Cold Atom Series Vol. 1 (Imperial College Press, London, 2013), p. 419. Hamilton S. Saberi-Pouya, S. Conti, A. Perali, A. F. Croxall, A. R. Hamilton, F. M. Peeters, and D. Neilson, 101, 140501(R) (2020) BLG O. L. Berman, Yu. E. Lozovik, and G. Gumbs, Phys. Rev. B 77, 155433 (2008). Sokolik Yu. E. Lozovik and A. A. Sokolik, JETP Lett. 87, 55 (2008); Phys. Lett. A 374, 326 (2009). Bist R. Bistritzer and A. H. MacDonald, Phys. Rev. Lett. 101, 256406 (2008). BKZg O. L. Berman, R. Ya. Kezerashvili, and K. Ziegler, Phys. Rev. B 85, 035418 (2012). Perali A. Perali, D. Neilson, and A. R. Hamilton, Phys. Rev. Lett. 110, 146803 (2013). Fogler M. M. Fogler, L. V. Butov, and K. S. Novoselov, Nature Commun. 5, 4555 (2014). MacDonald_TMDC F.-C. Wu, F. Xue, and A. H. MacDonald, Phys. Rev. B 92, 165121 (2015). BK O. L. Berman and R. Ya. Kezerashvili, 93, 245410 (2016). BK2 O. L. Berman and R. Ya. Kezerashvili, 96, 094502 (2017). Conti S. Conti, M. Van der Donck, A. Perali, F. M. Peeters, and D. Neilson, 101, 220504(R) (2020). BGK O. L. Berman, G. Gumbs, and R. Ya. Kezerashvili, 96, 014505 (2017). Peeters S. 
Saberi-Pouya, M. Zarenia, A. Perali, T. Vazifehshenas, and F. M. Peeters, 97, 174503 (2018). Rapaport Y. Mazuz-Harpaz, K. Cohen, M. Leveson, K. West, L. Pfeiffer, M. Khodas, and R. Rapaport, Proc. Nat. Acad. Sci. USA 116, 18328 (2019). f1 A. Raoux, M. Morigi, J.-N. Fuchs, F. Piéchon, and G. Montambaux, 112, 026402 (2014). f2 B. Sutherland, Phys. Rev. B 34, 5208 (1986). f3 E. Illes, J. P. Carbotte, and E. J. Nicol Phys. Rev. B 92, 245410 (2015). f4 S. K. F. Islam and P. Dutta, Phys. Rev. B 96, 045418 (2017). f5 E. Illes and E. J. Nicol, Phys. Rev. B 94, 125435 (2016). f6.1 B. Dey and T. K. Ghosh, Phys. Rev. B 98, 075422 (2018). f6 B. Dey and T. K. Ghosh, Phys. Rev. B 99, 205429 (2019). f7 T. Biswas and T. K. Ghosh, Journal of Physics: Condensed Matter 30, 075301 (2018). f8 A. D. Kovacs, G. David, B. Dora, and J. Cserti, Phys. Rev. B 95, 035414 (2017). f9 T. Biswas and T. K. Ghosh, Journal of Physics: Condensed Matter 28, 495302 (2016) [arXiv: 1605.06680]. RKKY D. O. Oriekhov, V. P Gusynin - arXiv preprint arXiv:2001.00272, 2020. t1 Danhong Huang, Andrii Iurov, Hong-Ya Xu, Ying-Cheng Lai, and Godfrey Gumbs Phys. Rev. B 99, 245412 (2019). t2 Y. Li, S. Kita, P. Munoz, O. Reshef, D. I. Vulis, M. Yin, M. Loncar, and E. Mazur, Nat. Photon 9, 738 (2015). t3 H.-Y. Xu, L. Huang, D. H. Huang, and Y.-C. Lai, Phys. Rev. B 96, 045412 (2017). Review Daniel Leykam, Alexei Andreanov, and Sergej Flach, Advances in Physics: X, 3, 677 (2018). 21 M. Sherafati and S. Satpathy, Phys. Rev. B 84, 125416 (2011). AI1 A. Iurov, G. Gumbs, and D. Huang, 99, 205135 (2019). AI2 A. Iurov, L. Zhemchuzhna, D. Dahal, G. Gumbs, and D. Huang, 101, 035129 (2020). AI3 D. Huang, A. Iurov, H.-Y. Xu, Y.-C. Lai, and G. Gumbs, 99, 245412 (2019). AI5 N. Weekes, A. Iurov, L. Zhemchuzhna, G. Gumbs, and D. Huang, 103, 165429 (2021). AI6 A. Iurov, L. Zhemchuzhna, G. Gumbs, D. Huang, P. Fekete, F. Anwar, D.Dahal, and N. Weekes, Sci. Rep. 11, 20577 (2021). ABG Y. Abranyos, O. L. Berman, and G. Gumbs, 102, 155408 (2020). BGR A. Balassis, G. Gumbs, and O. Roslyak, Nanomaterials 11, 1720 (2021). BDGIHR A. Balassis, D. Dahal, G. Gumbs, A. Iurov, D. Huang, and O. Roslyak, Journal of Physics: Condensed Matter 32, 485301 (2020). Malcolm J. D. Malcolm and E. J. Nicol, 93, 165433 (2016). PhD E. Illes, Properties of the alpha-T3 model, PhD Thesis University of Guelph (2017). Wunsch B. Wunsch, T. Stauber, F. Sols, and F. Guinea, New J. Phys. 8, 318 (2006). CaihBN Y. Cai, L. Zhang, Q. Zeng, L. Cheng, and Y. Xu, Solid State Commun. 141, 262 (2007). Maksym P. A. Maksym and T. Chakraborty, 65, 108 (1990). Iyengar A. Iyengar, J. Wang, H. A. Fertig, and L. Brey, 75, 125430 (2007). BKKL O. L. Berman, R. Ya. Kezerashvili, G. V. Kolmakov, and Yu. E. Lozovik, 86, 045108 (2012). Lifshitz E. M. Lifshitz and L. P. Pitaevskii, Statistical Physics, Part 2 (Pergamon Press, Oxford, 1980). Abrikosov A. A. Abrikosov, L. P. Gorkov, and I. E. Dzyaloshinskii, Methods of Quantum Field Theory in Statistical Physics (Prentice-Hall, Englewood Cliffs, NJ, 1963). Timmermans P. Tommasini, E. J. V. de Passos, A. F. R. de Toledo Piza, M. S. Hussein, and E. Timmermans, 67, 023606 (2003). BKL_i O. L. Berman, R. Ya. Kezerashvili, and Yu. E. Lozovik, 78, 035135 (2008). BKL_ii O. L. Berman, R. Ya. Kezerashvili, and Yu. E. Lozovik, Phys. Lett. A 372, 6536 (2008). Sun B. Sun and M. S. Pindzola, J. Phys. B 43 055301 (2010). Pitaevskii L. Pitaevskii and S. Stringari, Bose-Einstein Condensation (Clarendon Press, Oxford, 2003). Lopez P. López Ríos, A. Perali, R. J. Needs, and D. 
Neilson, 120, 177701 (2018). Kosterlitz J. M. Kosterlitz and D. J. Thouless, J. Phys. C 6, 1181 (1973); D. R. Nelson and J. M. Kosterlitz, 39, 1201 (1977). Vyas V. M.Vyas, P. K.Panigrahia, and J.Banerji, Phys. Lett. A 378, 1434 (2014). Ji A.-C. Ji, W. M. Liu, J. L. Song, and F. Zhou, 101, 010402 (2008). Qi R. Qi, X.-L. Yu, Z. B. Li, and W. M. Liu, 102, 185301 (2009). Passos C.-Y. Lin, E. J. V. de Passos, A. F. R. de Toledo Piza, D.-S. Lee, and M. S. Hussein, 73, 013615 (2006). Griffin A. Griffin, Excitations in a Bose-condensed liquid (Cambridge University Press., Cambridge, UK, 1993). Daflovo F. Daflovo, S. Giorgini, and L. P. Pitaevskii, 71, 463 (1999).
http://arxiv.org/abs/2409.03424v1
20240905111034
Weight Conditioning for Smooth Optimization of Neural Networks
[ "Hemanth Saratchandran", "Thomas X. Wang", "Simon Lucey" ]
cs.CV
[ "cs.CV" ]
H. Saratchandran et al. Australian Institute for Machine Learning, University of Adelaide Sorbonne Université, CNRS, ISIR [email protected] Weight Conditioning for Smooth Optimization of Neural Networks Hemanth Saratchandran1 Thomas X Wang2 Simon Lucey1 September 9, 2024 ============================================================== § ABSTRACT In this article, we introduce a novel normalization technique for neural network weight matrices, which we term weight conditioning. This approach aims to narrow the gap between the smallest and largest singular values of the weight matrices, resulting in better-conditioned matrices. The inspiration for this technique partially derives from numerical linear algebra, where well-conditioned matrices are known to facilitate stronger convergence results for iterative solvers. We provide a theoretical foundation demonstrating that our normalization technique smoothens the loss landscape, thereby enhancing convergence of stochastic gradient descent algorithms. Empirically, we validate our normalization across various neural network architectures, including Convolutional Neural Networks (CNNs), Vision Transformers (ViT), Neural Radiance Fields (NeRF), and 3D shape modeling. Our findings indicate that our normalization method is not only competitive but also outperforms existing weight normalization techniques from the literature. § INTRODUCTION Normalization techniques, including batch normalization <cit.>, weight standardization <cit.>, and weight normalization <cit.>, have become fundamental to the advancement of deep learning, playing a critical role in the development and performance optimization of deep learning models for vision applications <cit.>. By ensuring consistent scales of inputs and internal activations, these methods not only stabilize and accelerate the convergence process but also mitigate issues such as vanishing or exploding gradients. In this paper, we put forth a normalization method termed weight conditioning, designed to help in the optimization of neural network architectures, including both feedforward and convolutional layers, through strategic manipulation of their weight matrices. By multiplying a weight matrix of a neural architecture by a predetermined matrix conditioner, weight conditioning aims to minimize the condition number of these weight matrices, effectively narrowing the disparity between their smallest and largest singular values. Our findings demonstrate that such conditioning not only influences the Hessian matrix of the associated loss function when training such networks, leading to a lower condition number but also significantly enhances the efficiency of iterative optimization methods like gradient descent by fostering a smoother loss landscape. Our theoretical analysis elucidates the impact of weight matrices on the Hessian's condition number, revealing our central insight: optimizing the condition number of weight matrices directly facilitates Hessian conditioning, thereby expediting convergence in gradient-based optimization by allowing a larger learning rate to be used. Furthermore, weight conditioning can be used as a drop-in component along with the above normalization techniques, yielding better accuracy and training on a variety of problems. We draw the reader's focus to fig. <ref>, which delineates the pivotal role of weight conditioning in enhancing batch normalization's effectiveness. 
Illustrated on the left, the figure compares training outcomes for a GoogleNet model on the CIFAR100 dataset under three distinct conditions: Batch Normalization (BN), Batch Normalization with Weight Standardization (BN + WS), and Batch Normalization with Weight Conditioning (BN + WC). Each variant was subjected to a training regimen of 40 epochs using Stochastic Gradient Descent (SGD) at an elevated learning rate of 2. Remarkably, the BN + WC configuration demonstrates a superior accuracy enhancement of nearly 15% over its counterparts, showcasing its robustness and superior adaptability to higher learning rates than typically documented in the literature. On the figure's right, we extend this comparative analysis to include ResNet18 and ResNet50 models, trained on CIFAR100, via SGD with a conventional learning rate of 1e-3 and over a span of 200 epochs. Consistently, BN + WC exhibits a pronounced performance advantage over the alternative normalization strategies, reinforcing its efficacy across diverse neural network frameworks. To further demonstrate the versatility of weight conditioning, we rigorously tested its efficacy across a spectrum of machine learning architectures, including Convolutional Neural Networks (CNNs), Vision Transformers (ViTs), Neural Radiance Fields (NeRF), and 3D shape modeling. In each scenario, we juxtaposed weight conditioning against established normalization methods cited in current research, unveiling its potential to significantly boost the performance of these advanced deep learning frameworks. Our main contributions are: * We introduce a novel normalization strategy, termed weight conditioning, which strategically modifies the weight matrices within neural networks, facilitating faster convergence for gradient based optimizers. * Through rigorous theoretical analysis, we validate the underlying principles of weight conditioning, offering a solid foundation for its implementation and understanding. * We present comprehensive empirical evidence showcasing the effectiveness of weight conditioning across a variety of machine learning models, highlighting its broad applicability and impact on model performance optimization. § RELATED WORK §.§.§ Normalization in deep learning: Normalization techniques have become pivotal in enhancing the training stability and performance of deep learning models. Batch normalization, introduced by Ioffe and Szegedy <cit.>, normalizes the inputs across the batch to reduce internal covariate shift, significantly improving the training speed and stability of neural networks. Layer normalization, proposed by Ba et al. <cit.>, extends this idea by normalizing inputs across features for each sample, proving particularly effective in recurrent neural network architectures <cit.> and transformer networks <cit.>. Weight normalization, by Salimans and Kingma <cit.>, decouples the magnitude of the weights from their direction, facilitating a smoother optimization landscape. Lastly, weight standardization, introduced by Qiao et al. <cit.>, standardizes the weights in convolutional layers, further aiding in the optimization process, especially when combined with batch normalization. Together, these techniques address various challenges in training deep learning models, underscoring the continuous evolution of strategies to improve model convergence and performance. § NOTATION Our main theoretical contributions will be in the context of feedfoward layers. Therefore, we fix notation for this here. 
Let F denote a depth L neural network with layer widths {n_1,…,n_L}. We let X ∈^N× n_0 denote the training data, with n_0 being the dimension of the input. The output at layer k will be denoted by F_k and is defined by F_k = F_L-1W_L + b_L, k = L ϕ(F_k-1W_k + b_k), k ∈ [L-1] X, k = 0 where the weights W_k ∈^n_k-1× n_k and the biases b_k ∈^n_k and ϕ is an activation applied component wise. The notation [m] is defined by [m] = {1,…,m}. We will also fix a loss function ℒ for minimizing the weights of F. In the experiments this will always be the MSE loss or the Binary Cross Entropy (BCE) loss. Note that ℒ depends on F. § MOTIVATION In this section, we give some brief motivation for the theoretical framework we develop in the next section. A simple model:Consider the quadratic objective function given by ℒ(θ) := 1/2θ^TAθ - b^Tθ where A is a symmetric n × n matrix of full rank and b is an n × 1 vector. The objective function ℒ is used when solving the eqn. Ax = b. One can see that the solution of this equation if given by x = A^-1b which is precisely the minimum θ^* of ℒ. Thus minimizing the objective function ℒ with a gradient descent algorithm is one way to find a solution to the matrix equation Ax = b. We want to consider the gradient descent algorithm on ℒ and understand how the convergence of such an algorithm depends on characteristics of the matrix A. Observe that ∇ℒ(θ) = Aθ - b and H(ℒ)(θ) = A where H(ℒ)(θ) denotes the Hessian of ℒ at θ. If we consider the gradient descent update for this objective function with a learning rate of η, eqn. (<ref>) implies θ^t+1 = θ^t - η∇ℒ(θ^t) = θ^t - η (Aθ^t - b) Taking the singular value decomposition (SVD) of A we can write A = Udiag(σ_1,⋯ ,σ_n)V^T where U and V are unitary matrices and σ_1 ≥⋯≥σ_n are the singular values of A. The importance of the SVD comes from the fact that we can view the gradient descent update (<ref>) in terms of the basis defined by V^T. Namely, we can perform a change of coordinates and define x^t = V^T(θ^t - θ^*) The gradient update for the ith-coordinate of x^t, denoted x_i^t becomes x^(t+1)_i = x^t_i - ησ_ix_i^t = (1- ησ_i)x_i^t = (1-ησ_i)^t+1x_0. If we write V = [v_1,⋯ ,v_n] with each v_i ∈^n× 1 we then have θ^t - θ^* = Vx^t = ∑_i=1^nx_i^0(1-ησ_i)^t+1 v_i. Eqn. (<ref>) shows that the rate at which gradient descent moves depends on the quantities 1-ησ_i. This implies that in the direction v_i, gradient descent moves at a rate of (1-ησ_i)^t from which it follows that the closer 1-ησ_i is to zero the faster the convergence. In particular, provided η is small enough, the directions corresponding to the larger singular values will converge fastest. Furthermore, eqn. (<ref>) gives a condition on how the singular values of A affect the choice of learning rate η. We see that in order to guarantee convergence we need | 1 - ησ_i| < 1 for all 1 ≤ i ≤ n which implies 0 < ησ_i < 2 for all 1 ≤ i ≤ n. This means we must have η < 2/σ_i for each i and this will be satisfied if η < 2/σ_1 since σ_1 is the largest singular value. Furthermore, we see that the progress of gradient descent in the ith direction v_i is bounded by ησ_i < 2σ_i/σ_1 < 2σ_1/σ_n. Since σ_1 ≥σ_n we thus see that gradient descent will converge faster provided the quantity σ_1/σ_n is as small as possible. This motivates the following definition. Let A be a n × m matrix of full rank. The condition number of A is defined by κ(A) := σ_1(A)/σ_k(A) where σ_1(A) ≥⋯≥σ_k(A) > 0 and k = min{m, n}. 
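To make the role of κ(A) concrete, the following NumPy sketch (an illustration with synthetic matrices, not code taken from this paper) runs gradient descent on the quadratic objective ℒ(θ) = 1/2θ^TAθ - b^Tθ for two symmetric positive definite matrices that share the same largest singular value σ_1 but have very different condition numbers. With the learning rate chosen just below the stability bound 2/σ_1, the error along the direction of the smallest singular value contracts by a factor |1-ησ_n| per step, so the ill-conditioned problem converges far more slowly.

```python
import numpy as np

rng = np.random.default_rng(0)

def gradient_descent(A, b, eta, steps=500):
    """Run GD on L(theta) = 0.5 theta^T A theta - b^T theta; return the final error."""
    theta_star = np.linalg.solve(A, b)      # minimizer of the quadratic objective
    theta = np.zeros_like(b)
    for _ in range(steps):
        theta -= eta * (A @ theta - b)      # gradient of L is A theta - b
    return np.linalg.norm(theta - theta_star)

def spd_with_spectrum(sigmas):
    """Build a symmetric positive definite matrix with the given singular values."""
    Q, _ = np.linalg.qr(rng.normal(size=(len(sigmas), len(sigmas))))
    return Q @ np.diag(sigmas) @ Q.T

A_ill  = spd_with_spectrum([1.0, 0.5, 1e-3])   # kappa(A) = 1000
A_well = spd_with_spectrum([1.0, 0.5, 0.2])    # kappa(A) = 5
b = rng.normal(size=3)

eta = 1.9  # just below the stability bound 2 / sigma_1 = 2.0
for name, A in [("ill-conditioned", A_ill), ("well-conditioned", A_well)]:
    err = gradient_descent(A, b, eta)
    print(f"{name}: kappa = {np.linalg.cond(A):.0f}, final error = {err:.2e}")
```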
Note that because we are assuming A to be full rank, the condition number is well defined as all the singular values are positive. Preconditioning: Preconditioning involves the application of a matrix, known as the preconditioner P, to another matrix A, resulting in the product PA, with the aim of achieving κ(PA) ≤κ(A) <cit.>. This process, typically referred to as left preconditioning due to the multiplication of A from the left by P, is an effective method to reduce the condition number of A. Besides left preconditioning, there is also right preconditioning, which considers the product AP, and double preconditioning, which employs two preconditioners, P_1 and P_2, to form P_1AP_2. Diagonal matrices are frequently chosen as preconditioners because their application involves scaling the rows or columns of A, thus minimally adding to the computational cost of the problem. Examples of preconditioners are: 1. Jacobi Preconditioner: Given a square matrix A the Jacobi preconditioner D consists of the inverse of the diagonal elements of A, A → diag(A)^-1A <cit.>. 2. Row Equilibration: Given a n × m matrix A, row equilibration is a diagonal n× n matrix with the inverse of the 2-norm of each row of A on the diagonal, A → (|| A_i:||_2)^-1A, where A_i: denotes the ith-row of A <cit.>. 3. Column Equilibration: Given a n × m matrix A, column equilibration is a diagonal m× m matrix with the inverse of the 2-norm of each column of A on the diagonal, A → A(|| A_:i||_2)^-1, where A_:i denotes the ith-column of A. 4. Row-Column Equilibration: This is a double sided equilibration given by row equilibration on the left and column equilibration on the right. The interested reader can consult <cit.> for more on preconditioner. In Sec. <ref> we will explain how row equilibration helps reduce the condition number. If we precondition A yielding PA and consider the new objective function ℒ_P(θ) = θ^TPAθ - b^TP^Tθ Then provided we have that κ(PA) ≤κ(A) the above discussion shows that gradient descent on the new objective function ℒ_P will converge faster. For this problem the preconditioning can also be thought of in terms of the matrix eqn. Ax = b. We seek to multiply the system by P yielding the new system PAx = Pb and provided κ(PA) ≤κ(A), this new system will be easier to solve using a gradient descent. General objective functions: The above discussion focused on a very simple objective function given by eqn. (<ref>). In general, objective functions are rarely this simple. However, given a general objective function ℒ, about a point θ_0 we can approximate ℒ by a second order Taylor series yielding ℒ(θ) ≈1/2(θ - θ_0)^TH(θ_0)(θ - θ_0) + (∇ℒ(θ_0))(θ - θ_0) + ℒ(θ_0) where H, the Hessian matrix of ℒ, is a symmetric square matrix that captures the curvature of ℒ. This expansion offers insights into the local behavior of gradient descent around θ_0, particularly illustrating that if H is full rank, the convergence speed of the descent algorithm is influenced by the condition number of H. However, for many neural network architectures, which typically have a vast number of parameters, directly computing H is impractical due to its 𝒪(n^2) computational complexity, where n represents the parameter count. To address this challenge, the subsequent section will introduce the concept of weight conditioning, a technique aimed at reducing the condition number of H without the necessity of direct computation. 
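The effect of row equilibration on the condition number is also easy to check numerically. The sketch below (again purely illustrative, using a synthetic badly scaled matrix) forms the product EA by dividing each row of A by its 2-norm and compares κ(A) with κ(EA); for matrices whose rows live on very different scales, the equilibrated condition number is typically orders of magnitude smaller.

```python
import numpy as np

rng = np.random.default_rng(1)

def row_equilibrate(A):
    """Left-precondition A by the inverse 2-norms of its rows: A -> E A."""
    row_norms = np.linalg.norm(A, axis=1, keepdims=True)
    return A / row_norms                      # same as diag(1/||A_i:||_2) @ A

# A badly scaled matrix: its rows differ in magnitude by several orders
scales = np.array([1.0, 1e-3, 10.0, 1e2, 1e-2, 1.0])[:, None]
A = scales * rng.normal(size=(6, 4))

print("kappa(A)  =", np.linalg.cond(A))                     # large, driven by the row scaling
print("kappa(EA) =", np.linalg.cond(row_equilibrate(A)))    # typically orders of magnitude smaller
```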
§ THEORETICAL METHODOLOGY In this section, we explain our approach to conditioning the weight matrices within neural networks, aiming to enhance the convergence efficiency of gradient descent algorithms. Fixing a loss function ℒ we saw in the previous sec. <ref> that we can approximate ℒ locally by a second order quadratic approximation via the Taylor series about any point θ_0 as ℒ(θ) ≈1/2(θ - θ_0)^TH(θ_0)(θ - θ_0) + (∇ℒ(θ_0))(θ - θ_0) + ℒ(θ_0) where H is the Hessian matrix associated to ℒ. Note that H is a square symmetric matrix. The discussion in sec. <ref> made two key observations: * The convergence rate of gradient descent on ℒ is significantly influenced by the condition number of H, provided H is full rank. * Inspired by the first point, the convergence rate can be accelerated by diminishing the condition number of H. It was mentioned at the end of sec. <ref> that in general we cannot expect to have direct access to the Hessian H as this would require a computational cost of 𝒪(n^2), where n is the number of parameters, which can be extremely large for many deep learning models. In this section we introduce weight conditioning, which is a method of conditioning the weights of a neural networks that leads to a method of bringing down the condition number of H without directly accessing it. §.§ Weight conditioning for feed forward networks We fix a neural network F(x;θ) with L layers, as defined in sec. <ref>. Given a collection of preconditioner matrices P = {P_1,⋯, P_l} where P_k ∈^n_k-1× n_k we define a preconditioned network F^pre(x;θ) as follows. The layer maps of F^pre(x;θ) will be defined by: F^pre_k = ϕ_k((P _kW_k)^T(F^pre_k-1)(x) + b_k), k = [1, L] x, k = 0. We thus see that the layer-wise weights of the network F^pre are the weights W_k of the network F preconditioned by the preconditioner matrix P_k. Fig. <ref> provides a visual depiction of the preconditioned network F^pre(x;θ), illustrating how the preconditioner matrices are applied to each layer's weights to form the updated network configuration. Given a feed forward neural network F(x;θ) we call the process of going from F(x;θ) to F^pre(x;θ) weight conditioning. It's important to observe that both networks, F and F^pre, maintain an identical count of parameters, activations, and layers. The sole distinction between them lies in the configuration of their weight matrices. Our objective is to rigorously demonstrate that an optimal choice for weight conditioning a network is to use row equilibration. Recall from sec. <ref>, that row equilibration preconditioner for a weight matrix W_k ∈ℝ^n_k-1× n_k is defined as a diagonal matrix E_k ∈ℝ^n_k-1× n_k-1. Each diagonal element of E_k is determined by the inverse of the 2-norm of the i-th row vector of W_k, i.e. || (W_k)_i:||_2^-1. All our statements in this section hold for the column equilibrated preconditioner and the row-column equilibrated preconditioner. Therefore, from here on in we will drop the usage of the word row and simply call our preconditioner an equilibrated preconditioner. There are two main reasons we are choosing to use equilibration as the preferred form to condition the weights of the neural network F. The first reason is that it is a diagonal preconditioner, therefore requiring low computational cost to compute. Though more importantly, as the following theorem of Van Der Sluis <cit.> shows, it is the optimum preconditioner amongst all diagonal preconditioners to reduce the condition number. 
Let A be a n× m matrix, P an arbitrary diagonal n × n matrix and E the row equilibrated matrix built from A. Then κ(EA) ≤κ(PA). For the fixed L layer neural network F, let F^eq denote the equilibrated network with weights E_kW_k, where E_k is the equilibrated preconditioner corresponding to the weight matrix W_k. We have the following proposition. κ(E_kW_k) ≤κ(W_k) for any 1 ≤ k ≤ L. In other words the weight matrices of the network F^eq have at least better condition number than those of the network F The matrix W_k can be written as the product I_n_k-1W_k where I_n_k-1∈^n_k-1× n_k-1 is the n_k-1× n_k-1 identity matrix. Applying thm. <ref> we have κ(E_kW_K) ≤κ(I_n_k-1W_K) = κ(W_k). For the following theorem we fix a loss function ℒ see Sec. <ref>. Given two neural networks F and G we obtain two loss functions ℒ_F and ℒ_G, each depending on the weights of the respective networks. We will denote the Hessian of these two loss functions at a point θ by H_F(θ) and H_G(θ) respectively. Let F(x;θ) be a fixed L layer feed forward neural network. Let F^eq(x;θ) denote the equilibrated network obtained by equilibrating the weight matrices of F. Then κ(H_F^eq(θ)) ≤κ(H_F(θ)) for all parameters θ at which both H_F^eq and H_F have full rank. The proof of Thm. <ref> is given in Supp. material Sec. 1. Thm. <ref> shows that weight conditioning by equilibrating the weights of the network F thereby forming F^eq leads to a better conditioned Hessian of the loss landscape. From the discussion in sec. <ref> we see that this implies that locally around points where the Hessian has full rank, a gradient descent algorithm will move faster. Weight conditioning can also be applied to a convolutional layer. Please see Supp. material Sec. 1 for a detailed analysis on how this is done. § EXPERIMENTS §.§ Convolutional Neural Networks (CNNs) CNNs are pivotal for vision-related tasks, thriving in image classification challenges due to their specialized architecture that efficiently learns spatial feature hierarchies. Among the notable CNN architectures in the literature, we will specifically explore two significant ones for our experiment: the Inception <cit.> architecture and DenseNet <cit.>, both of which continue to be highly relevant and influential in the field. §.§.§ Experimental setup: We will assess four normalization strategies on two CNN architectures. Our study compares BN, BN with weight standardization (BN + WS), BN with weight normalization (BN + W), and BN with equilibrated weight conditioning (BN + E). Training employs SGD with a 1e-3 learning rate on CIFAR10 and CIFAR100, using a batch size of 128 across 200 epochs. For more details on the training regiment, see Supp. material Sec. 2. For results on ImageNet1k Supp. material Sec. 2. §.§.§ Inception: In our experiment, we employed a modified Inception architecture, as proposed in <cit.>, featuring eight inception blocks. Weight conditioning was effectively applied to just the initial convolutional layer and the final linear layer. For detailed insights into the architecture and normalization applications, see Sec. 2 of the Supp. material. Results on the CIFAR10 dataset, depicted in Fig. <ref>, highlight that the Inception model with BN + E exhibits the quickest loss reduction and accelerated convergence in Top-1% accuracy among all four normalization strategies. Similarly, Fig. <ref> illustrates that, on the CIFAR100 dataset, BN + E achieves the lowest training loss and highest Top-1% accuracy, outperforming other methods in convergence speed. 
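To illustrate the setup used in these CNN experiments, where equilibrated weight conditioning is applied only to the first convolutional layer and the final linear layer, a minimal PyTorch-style sketch is given below. It applies the rescaling once and in place for brevity, whereas the method as described conditions the weights whenever they are used; treating each flattened convolutional filter as a "row" is likewise our assumption, since the construction for convolutional layers is deferred to the supplementary material. Whether rows are taken along the input or the output dimension is a convention choice; as noted above, the statements hold for row, column, and row-column equilibration alike.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def equilibrate_(layer: nn.Module, eps: float = 1e-8) -> None:
    """Rescale each output row/filter of the layer's weight to unit 2-norm, in place."""
    W = layer.weight
    norms = W.flatten(1).norm(dim=1).clamp_min(eps)           # one 2-norm per output unit
    W.div_(norms.view(-1, *([1] * (W.dim() - 1))))

# A toy CNN standing in for the first-conv / last-linear pattern described above.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 10),
)

equilibrate_(model[0])    # first convolutional layer
equilibrate_(model[-1])   # final linear layer
print(model[-1].weight.norm(dim=1))   # all ones after conditioning
```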
Across both datasets, as summarized in Tab. <ref>, BN + E consistently delivers superior Top-1% and Top-5% accuracies, affirming its effectiveness. §.§.§ DenseNet: For this experiment we tested four normalizations on the DenseNet architecture <cit.>. We found that it sufficed to apply equilibrated weight conditioning to the first convolutional layer and the last feedforward layer as in the case for Inception. An in-depth description of the architecture employed in our study, and how we applied each normalization is given in Sec. 2 of the Supp. material. Fig. <ref> and Fig. <ref> display CIFAR10 and CIFAR100 dataset results, respectively. For CIFAR10, BN + E demonstrates quicker convergence and higher Top-1% accuracy as shown in the train loss curves and accuracy figures. CIFAR100 results indicate BN + E outperforms other normalizations in both convergence speed and Top-1% accuracy. Tab. <ref> summarizes the final Top-1% and Top-5% accuracies for all normalizations, with BN + E leading in performance across both datasets. §.§ Vision Transformers (ViTs) Vision Transformers (ViTs) <cit.> have emerged as innovative architectures in computer vision, showcasing exceptional capabilities across a broad spectrum of tasks. In general, we found that most work in the literature applied layer normalization to vision transformers which was much more robust for training than batch normalization. We found that if we removed layer normalization training was impeded significantly leading to gradients not being able to be backpropgated. Therefore, for this experiment we will consider 3 normalization scenarios, layer normalization (LN), layer normalization with weight normalization (LN + W) and layer normalization with equilibrated weight conditioning (LN + E). §.§.§ ViT-Base (ViT-B): The ViT-B architecture <cit.>, with its 86 million parameters, exemplifies a highly overparameterized neural network. We investigated three variations of ViT-B, each modified with a different normalization: LN, LN + W, and LN + E; the implementation details are provided in Sec. 2 of the Supp. material. These models were trained on the ImageNet1k dataset using a batch size of 1024 and optimized with AdamW. Tab. <ref> shows the final Top-1% and Top-5% accuracies for the ViT-B architecture with the above three different normalizations. The table shows that LN + E outperformed the other normalization schemes. §.§.§ ViT-Small (ViT-S): For experiments on smaller transformer networks on the CIFAR100 dataset, please see Sup. material sec. 2. §.§ Neural Radiance Fields (NeRF) Neural Radiance Fields (NeRF) <cit.> have emerged as a pioneering approach in 3D modeling, using Multi-Layer Perceptrons (MLPs) to reconstruct 3D objects and scenes from multi-view 2D images. We utilized the standard NeRF model from <cit.>, noting that unlike transformers or CNNs, NeRF architectures are relatively shallow, typically comprising 8 hidden layers, where common normalization techniques can hinder training. We explored the performance of a standard NeRF against an equilibrated weight conditioned NeRF (E-NeRF). For an in-depth look at the NeRF setup and our application of weight conditioning, refer to Sec. 2 in the Supp. material. Both models were trained on the LLFF dataset <cit.>. Fig. <ref> presents outcomes for the Fern and Room instances from LLFF, demonstrating that E-NeRF outperforms the vanilla NeRF by an average of 0.5-1 dB. Tab. <ref> gives the test PSNR averaged over three unseen scenes over the whole LLFF dataset. 
E-NeRF on average performs better over the whole dataset. §.§ Further Experiments Further experiments can be found in Supp. material Sec. 3: Applications to 3D shape modelling, cost analysis and ablations. § LIMITATIONS Weight conditioning entails the application of a preconditioner to the weight matrices of a neural network during the forward pass, which does extend the training duration per iteration of a gradient optimizer compared to other normalization methods. We leave it as future work to develop methods to bring down this cost. § CONCLUSION In this work, we introduced weight conditioning to improve neural network training by conditioning weight matrices, thereby enhancing optimizer convergence. We developed a theoretical framework showing that weight conditioning reduces the Hessian's condition number, improving loss function optimization. Through empirical evaluations on diverse deep learning models, we validated our approach, confirming that equilibrated weight conditioning consistently aligns with our theoretical insights. § ACKNOWLEDGEMENTS Thomas X Wang acknowledges financial support provided by DL4CLIM (ANR-19-CHIA-0018-01), DEEPNUM (ANR-21-CE23-0017-02), PHLUSIM (ANR-23-CE23-0025-02), and PEPR Sharp (ANR-23-PEIA-0008”, “ANR”, “FRANCE 2030”). Part of this work was performed using HPC resources from GENCI–IDRIS (Grant 2023-AD011013332R1).
http://arxiv.org/abs/2409.03500v1
20240905131216
Disclosure of AI-Generated News Increases Engagement but Does Not Reduce Aversion, Despite Positive Quality Ratings
[ "Fabrizio Gilardi", "Sabrina Di Lorenzo", "Juri Ezzaini", "Beryl Santa", "Benjamin Streiff", "Eric Zurfluh", "Emma Hoes" ]
cs.CY
[ "cs.CY", "cs.AI" ]
Disclosure of AI-Generated News Increases Engagement but Does Not Reduce Aversion, Despite Positive Quality Ratings Fabrizio Gilardi, Sabrina Di Lorenzo, Juri Ezzaini, Beryl Santa, Benjamin Streiff, Eric Zurfluh, Emma Hoes =========================================================================================================== § ABSTRACT The advancement of artificial intelligence (AI) has led to its application in many areas, including journalism. One key issue is the public's perception of AI-generated content. This preregistered study investigates (i) the perceived quality of AI-assisted and AI-generated versus human-generated news articles, (ii) whether disclosure of AI's involvement in generating these news articles influences engagement with them, and (iii) whether such awareness affects the willingness to read AI-generated articles in the future. We employed a between-subjects survey experiment with 599 participants from the German-speaking part of Switzerland, who evaluated the credibility, readability, and expertise of news articles. These articles were either written by journalists (control group), rewritten by AI (AI-assisted group), or entirely generated by AI (AI-generated group). Our results indicate that all news articles, regardless of whether they were written by journalists or AI, were perceived to be of equal quality. When participants in the treatment groups were subsequently made aware of AI's involvement in generating the articles, they expressed a higher willingness to engage with (i.e., continue reading) the articles than participants in the control group. However, they were not more willing to read AI-generated news in the future. These results suggest that aversion to AI usage in news media is not primarily rooted in a perceived lack of quality, and that by disclosing their use of AI, journalists could attract more immediate engagement with their content, at least in the short term. § INTRODUCTION The landscape of journalism is evolving significantly with the advancement of artificial intelligence (AI). Traditionally, journalists have used computers primarily for research and data analysis, but this role is rapidly expanding. Modern AI, particularly generative AI, now matches or even surpasses human capabilities in tasks like text composition. This has led to the emergence of “automated” or “robot” journalism <cit.>, a key aspect of the broader computational journalism trend that underscores the increasing role of computation and data in news production <cit.>. AI applications, especially in natural language generation, have become integral in practical journalism. These applications include news writing for media organizations such as the Associated Press and Forbes, summarizing scientific data via platforms like the Open Research Knowledge Graph, and narrative writing using tools like ChatGPT-4. These advancements enable AI to produce content that is virtually indistinguishable from human writing, demonstrating AI's potential in various writing-intensive domains. The integration of AI-generated content in journalism presents several implications. On one hand, it offers opportunities for faster, multilingual, and expanded content production with potentially fewer errors, which could enhance news quality and help combat misinformation <cit.>. AI can also handle routine reporting tasks, allowing journalists to focus on investigative stories.
On the other hand, however, there are concerns about job losses in the journalism sector and doubts regarding algorithms' ability to fulfill the “watchdog” role traditionally associated with journalists <cit.>. A crucial issue is the public's perception of AI-assisted journalism. Understanding attitudes towards AI-generated journalism is essential as it may directly impact the public's political knowledge. Indeed, if the public distrusts AI-generated content and chooses not to engage with it, they may miss out on important political information, leading to a less informed citizenry. A recent study revealed that only 29% of Swiss respondents would read fully AI-generated news, 55% would read news if AI assisted in its creation, and 84% would prefer news created without AI involvement <cit.>. This highlights skepticism towards AI-generated news amongst the Swiss population, consistent with surveys from other countries <cit.>. Such aversion could be problematic because it could undermine the credibility and effectiveness of news organizations. Therefore, transparent and responsible use of AI in journalism is essential to build trust and ensure that the public remains informed and engaged with credible news sources. In this preregistered study, we investigate the perceived quality of AI-assisted and AI-generated versus human-generated news articles prior to disclosing whether the articles were created by humans, AI-assisted processes, or entirely by AI, and whether subsequent disclosure of the source influences self-reported willingness to engage with those articles as well as the willingness to read AI-assisted or AI-generated articles in the future. Our results indicate that all kinds of news articles, regardless of whether they were written by journalists or (assisted by) AI, were perceived to be of similar quality in terms of credibility, readability, and expertise. When participants in the treatment groups were subsequently informed about AI's role in generating the articles, they expressed a higher willingness to read the full news article than participants in the control group, who saw human-written articles. This increased engagement, however, did not translate to an increased openness towards reading AI-generated news in the future. These findings imply that aversion towards AI usage in the news media is not primarily linked to concerns that AI-generated content is of lower quality. Moreover, the results suggest that transparency in the use of AI by journalists could enhance immediate reader engagement. Future work should delve deeper into understanding the underlying mechanisms driving both results, exploring factors such as the novelty of AI involvement, trust in AI, and the psychological effects of transparency. Additionally, research should investigate long-term impacts on reader behavior and trust to develop strategies for effectively integrating AI in journalism, ensuring that it complements rather than undermines the journalistic process. § PREVIOUS RESEARCH Several strands of literature have explored the perception of human versus AI-generated news. The first strand involves research on public opinions about human versus computer-generated news, relying on respondents' prior experiences or perceptions <cit.>. A recent survey conducted across six countries (Argentina, Denmark, France, Japan, the UK, and the USA) revealed that public opinion on the use of generative AI in journalism is mixed <cit.>. 
While people expect AI-produced news to be cheaper and more up-to-date, they also anticipate it to be less trustworthy and transparent. Most respondents are wary of AI-generated news, especially on hard topics like politics and international affairs, but show more acceptance for soft news like fashion and sports. A large majority favors disclosure when AI is used, but opinions differ on which specific uses should be labeled. Uncertainty remains, with many respondents unsure or neutral about AI's role in journalism. A limitation of these kinds of opinion-based studies is that they could reflect respondents' preconceptions or lack of familiarity with AI-generated texts, which may not align with actual behaviors when confronted with AI news. The second strand consists of surveys where respondents directly evaluate news stories labeled as either “human-written” or “computer-written.” For instance, <cit.> found that respondents rated human-written news higher in readability but lower in credibility compared to AI-generated news. However, when participants were only given a single article without information on its origin, the differences in their evaluations were minimal, indicating that the perception of the author (human or AI) plays as significant a role in the assessment as the content itself. The third strand focuses on experiments where participants are presented with identical articles but with different stated authors, to measure the effect of the source. For example, <cit.> found that while news consumers rated the credibility of both human and computer-generated articles equally, journalists perceived human-labeled articles as significantly more trustworthy. This effect varied with the topic, as sports articles were rated as less trustworthy than financial ones. <cit.> used a 2 × 2 design to explore perceptions of article quality based on authorship labels, finding that both journalists and the public rated AI-written articles higher when they were correctly labeled as such. Another experiment by <cit.> varied the attributed author of a human-written text, finding no significant difference in perceived credibility across different labels. Additionally, <cit.> explored how labeling news headlines as “AI-generated” affects accuracy ratings and sharing intentions, finding that such labels decreased perceived accuracy and sharing intent, even when the text was human-written. The fourth strand investigates the effect of the message itself by providing texts written by different authors without stating the source, thereby reducing source-based biases. <cit.> found that computer-written articles were perceived as more credible, whereas human-written articles scored higher in readability. This approach highlights the challenge of distinguishing between human and AI-generated content. <cit.> extended this analysis across multiple mediums (text, audio, video), finding significant differences favoring human-generated content only in audio and video formats. In a follow-up study, <cit.> discovered that human newscasters were perceived as more credible than AI newscasters. Some studies merge the third and fourth strands by measuring both source and message effects. For instance, <cit.> examined the effects of declared source, actual article content, and topic, finding that while declared source effects generally favored human-written articles, computer-written articles scored higher in credibility and expertise in fact-based topics. 
<cit.> explored European readers' perceptions of automated journalism, finding that automated and human content were perceived as equally credible. Finally, a meta-analysis incorporating results from studies across all four strands found no significant difference in credibility between human-written and AI-generated news, with human-written texts only having a slight advantage in perceived quality and a notable lead in readability <cit.>. However, all studies in this analysis were conducted before 2020, and the rapid advancements in generative AI may have altered these dynamics. Overall, existing research suggests that while AI-generated news can be perceived as being of equal quality to human-generated news, knowing that an article was created by AI can negatively influence readers’ perceptions. This implies that preconceived notions may lead to distorted preferences for human-generated news. However, previous studies have not separated quality evaluations from the willingness to read AI-generated news, limiting our understanding of public attitudes in this area. Moreover, recent innovations in AI text generation may have made some of the existing findings outdated, underscoring the need for additional research. § RESEARCH QUESTIONS AND HYPOTHESES Our study addresses the following research questions: RQ1: How do respondents rate the quality (expertise, readability, and credibility) of AI-assisted and AI-generated news articles, compared to human-generated news articles? RQ2: What impact does disclosing AI’s role in generating an article have on respondents’ willingness to read AI-assisted or AI-generated news after they have rated the article for quality? By answering these two research questions, we extend the literature in important ways. First, we combine a detailed measurement of “quality” with three realistic levels of AI-involvement in news generation, reflecting current capabilities and practices. Second, we make sure that respondents' views on AI are not asked in the abstract, which may reflect inaccurate preconceptions, but are rooted in specific examples we provide. Third, we separate quality evaluations from willingness to read AI-generated news by disclosing the degree of AI involvement after participants evaluated the articles' quality, which ensures that awareness of AI's role does not affect quality ratings. The first set of hypotheses focus on quality assessments of news articles with different degrees of AI involvement. Given that a meta-analysis of experimental studies found that human-written news was generally rated higher for quality <cit.>, our pre-registered hypotheses state that news articles generated with AI involvement are rated lower for all dimensions of quality. However, we acknowledge that recent advances in generative AI may challenge these expectations. H1: Respondents in the AI-assisted group rate the article's expertise lower than respondents in the control group. H2: Respondents in the AI-assisted group rate the article's readability lower than respondents in the control group. H3: Respondents in the AI-assisted group rate the article's credibility lower than respondents in the control group. H4: Respondents in the AI-generated group rate the article's expertise lower than respondents in the control group. H5: Respondents in the AI-generated group rate the article's readability lower than respondents in the control group. H6: Respondents in the AI-generated group rate the article's credibility lower than respondents in the control group. 
Our second set of hypotheses is designed to explore people's attitudes once they learn the true role of AI in the articles they have read, after rating them for quality. <cit.> found that “news consumers get more pleasure out of reading human-written as opposed to computer-written content”. Hence, we expect that respondents in the AI-assisted and AI-generated groups will show a lower willingness to continue reading their articles after being informed of the true generation process. H7: Respondents in the AI-assisted group express a lower willingness to continue reading the article than respondents in the control group. H8: Respondents in the AI-generated group express a lower willingness to continue reading the article than respondents in the control group. Our final hypotheses address attitudes towards AI usage in news articles in general, that is, not regarding the particular articles respondents have read and rated. Respondents will be asked to what degree they would be willing to read AI-generated news in the future. This approach is more in line with opinion-based studies asking abstract questions. However, in our study we ask this question after respondents have engaged with examples of the kinds of articles we are asking about. There is evidence that greater technological competence, AI familiarity, and AI knowledge increase trust in AI <cit.>, which suggests that engagement with AI articles may decrease skepticism towards them. Therefore, we expect that respondents in the AI-assisted and AI-generated groups express a higher willingness to read AI-generated news articles in the future. H9: Respondents in the AI-assisted group express a higher willingness to read AI-generated news in the future than respondents in the control group. H10: Respondents in the AI-generated group express a higher willingness to read AI-generated news in the future than respondents in the control group. § EXPERIMENTAL DESIGN To explore the public's perceptions of AI-generated and AI-assisted news articles compared to traditional human-generated articles, we conducted an online between-subjects survey experiment. We recruited 599 participants aged 18 and older from the German-speaking part of Switzerland through the survey company Bilendi, using quotas based on age and gender to ensure balance across these dimensions. Sample size was determined by a power analysis, assuming an effect size of Cohen's d = 0.3 for the primary outcomes and d = 0.15 for the secondary outcomes. First, participants completed a pre-survey to determine their socio-demographic characteristics, including age, gender, education level, and political orientation. Participants were then randomly assigned to one of three conditions: a control group that read excerpts from human-generated articles (“human-generated” group), a first treatment group that read excerpts that were rewritten with the help of ChatGPT (“AI-assisted” group), and another treatment group that read excerpts that were entirely generated by ChatGPT (“AI-generated” group). To ensure consistency and create a set of texts that is comparable across the three groups, all articles are derived from the human-generated articles that are shown to the control group. We used the following procedure to generate the articles. For AI-assisted articles, we copy-pasted the original article into ChatGPT and asked it to rewrite the article without losing any information. This resulted in articles for the AI-assisted group that are similar to the originals, but are rewritten by the AI system.
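The rewriting step just described was carried out through the ChatGPT interface. Purely as an illustration of how the same AI-assisted procedure could be scripted and audited, a sketch using the OpenAI Python client is shown below; the model name and prompt wording are our assumptions rather than details reported in the study.

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY is available in the environment

def rewrite_article(article_text: str) -> str:
    """AI-assisted condition: rewrite a journalist-written article without losing information."""
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model; the study refers simply to ChatGPT
        messages=[{
            "role": "user",
            "content": "Rewrite the following news article without losing any information:\n\n"
                       + article_text,
        }],
    )
    return response.choices[0].message.content
```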
For AI-generated articles, we only provided ChatGPT with the title and lead of the original article. We then asked ChatGPT to generate a short article in the same style as the source of the human-generated articles. Each participant read two excerpts, randomly drawn from a pool of ten articles on Swiss politics, with each excerpt having a hard cutoff at 150 words to maintain consistency, avoid bias due to article length, and simulate a paywall for the purposes of our third outcome (willingness to “continue reading” the article). The random selection of two articles from a pool of ten texts for each group minimizes the risk that the outcomes depend on specific topics. Moreover, for each article, we asked respondents to answer a simple question on the article's content. This allows us to check that respondents have read the text in sufficient detail. The first outcome measures how participants evaluate the quality of the articles. Following <cit.>, we ask respondents to rate three dimensions of quality: expertise, readability, and credibility. Each dimension is based on specific items, expressed as adjectives <cit.>: “clear”, ”coherent”, “comprehensive”, “concise”, and “well-written” for expertise; “boring”, “enjoyable”, “interesting”, “lively”, and “pleasing' for readability; and “biased”, “fair”, and “objective” for credibility. Participants rated the articles on each of these items on a 1-5 Likert-scale, and we then aggregate the scores for each dimension. Following our pre-registered procedure, we then computed a single measure of quality because Cronbach's alpha was above 0.7. However, results are robust to using separate scores for each dimension. Following the initial rating, participants were then informed about the true creation process of the articles. After learning how the texts they read were actually created, respondents were asked if they would be willing to continue reading the full articles after reviewing the excerpts again. In addition, all respondents were asked to report if they would be willing to read AI-generated news articles in the future, on a 1-5 Likert-scale. Following our pre-registration, we use linear regression models to analyze the data, controlling for socio-demographic variables and using the Benjamini-Hochberg procedure to adjust for multiple comparisons. § RESULTS Figure 1 presents the mean ratings for the first outcome, which assesses the perceived quality (credibility, expertise, and readability) of the articles across the three groups: control, AI-assisted, and AI-generated. The mean credibility ratings were 3.39 for the control group, 3.37 for the AI-assisted group, and 3.38 for the AI-generated group. In addition to the minimal variation, the overlapping confidence intervals suggest no significant differences between the groups. For expertise, the mean ratings were 3.56 for the control group, 3.63 for the AI-assisted group, and 3.62 for the AI-generated group, with overlapping confidence intervals again indicating no significant differences. Readability ratings were 3.15 for the control group, 3.14 for the AI-assisted group, and 3.19 for the AI-generated group, with no significant differences observed, as evidenced by the overlapping confidence intervals. Table 1 provides a more detailed analysis of the first outcome. The table presents regression coefficients for the AI-assisted and AI-generated groups, with controls for age, gender, education, and political orientation. 
Each column corresponds to a separate regression model aligned with one of the hypotheses. The coefficients for the AI-assisted and AI-generated groups across expertise, readability, and credibility are not statistically significant, reinforcing the findings from Figure 1 that there are no substantial differences in the perceived quality of the articles between the groups. Among the control variables, only age and political orientation yielded statistically significant coefficients in some models; however, the effect sizes for these variables are very small. These results remain unchanged when estimating the models only for the subset of respondents who passed the manipulation check. Overall, the findings indicate that there are no significant differences in perceived credibility, expertise, and readability between AI-assisted, AI-generated, and human-generated articles. Consequently, we reject the first set of hypotheses (H1-H6), which expected lower perceptions of these attributes in the AI-assisted and AI-generated groups compared to the control group. In other words, news articles generated either with the assistance of AI or entirely by AI are perceived to match the quality of traditional articles written by journalists. Figure 2 illustrates the mean ratings for the second set of outcomes, which measure participants' willingness to continue reading the articles they were exposed to and their willingness to read AI-generated news in the future. The control group, which read article excerpts written by journalists, had a mean rating of 2.48 for willingness to continue reading, while the AI-assisted group had a rating of 3.08, and the AI-generated group 3.18. The confidence intervals for the AI-assisted and AI-generated groups do not overlap with those of the control group, indicating statistically significant increases in willingness to continue reading for these groups. Additionally, no significant difference is observed between the AI-assisted and AI-generated groups. This leads to the rejection of hypotheses H7 and H8, which predicted lower willingness to continue reading in the AI-assisted and AI-generated groups. For willingness to read AI-generated news in the future, the control group had a mean rating of 2.37, the AI-assisted group 2.34, and the AI-generated group 2.46. The overlapping confidence intervals suggest no significant differences between the groups. Table 2 provides a detailed overview of the regression results for the second set of outcomes. For willingness to continue reading, the coefficients for the AI-assisted and AI-generated groups reinforce the findings from Figure 2, indicating a significantly higher willingness to continue reading in these groups. For future reading willingness, the coefficients for both the AI-assisted and AI-generated groups are not significant, corroborating the findings from Figure 2 that there is no significant difference in future reading preferences between the different groups. We therefore reject hypotheses H9 and H10, which predicted higher willingness to read AI-generated news in the future. These results suggest that knowledge of AI involvement and exposure to it do not significantly influence future reading preferences. In summary, the data show no significant differences in perceived expertise, readability, and credibility between articles created with the help of AI, those fully generated by AI, and those written by humans. However, participants in the AI-assisted and AI-generated groups exhibit a significantly higher willingness to continue reading the articles. 
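For concreteness, the analysis pipeline behind these estimates (aggregating the Likert items into a quality score after checking Cronbach's alpha, fitting one linear model per outcome with treatment indicators and socio-demographic controls, and applying the Benjamini-Hochberg adjustment) can be sketched as follows. This is an illustrative sketch only: the data frame, column names, and coding choices are our assumptions, not the study's actual analysis code.

```python
# Illustrative sketch of the pre-registered analysis; column names are assumptions.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.multitest import multipletests

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of Likert items (rows = respondents)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

df = pd.read_csv("survey_responses.csv")  # hypothetical file

quality_items = ["clear", "coherent", "comprehensive", "concise", "well_written",
                 "boring", "enjoyable", "interesting", "lively", "pleasing",
                 "biased", "fair", "objective"]  # negatively worded items assumed recoded

if cronbach_alpha(df[quality_items]) > 0.7:
    df["quality"] = df[quality_items].mean(axis=1)  # single aggregated quality score

# One model per outcome: treatment dummies plus socio-demographic controls.
outcomes = ["quality", "continue_reading", "future_ai_news"]
pvals, fits = [], {}
for y in outcomes:
    fit = smf.ols(f"{y} ~ C(group, Treatment('control')) + age + gender + education + politics",
                  data=df).fit()
    fits[y] = fit
    # collect the p-values of the two treatment coefficients for multiplicity adjustment
    pvals.extend(fit.pvalues.filter(like="C(group").tolist())

reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")  # Benjamini-Hochberg
```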
These findings suggest that while AI involvement in news production does not negatively impact the perceived quality of news articles when readers are unaware of the original article generation process, it positively influences readers' willingness to continue reading once the process is revealed. However, no significant effect was detected regarding the willingness to read AI-generated news in the future across the different groups. The rejection of the first set of hypotheses aligns with the notion that modern AI technologies produce content of comparable quality to that of human journalists, while the significant findings in the second outcome highlight an unexpected openness to engaging with AI-generated content, but only in the short term. § CONCLUSION The primary aim of this paper was to examine the perceived quality of AI-assisted and AI-generated news articles compared to human-generated ones, and to assess how subsequent disclosure of AI involvement influences readers' willingness to engage with these articles and their openness to AI-generated news in the future. The research design involved a preregistered experimental study where participants evaluated the perceived quality of news articles without knowing their source, followed by disclosure of whether the articles were AI-assisted, AI-generated, or human-written, to measure changes in willingness to engage with the content and future openness to AI-generated news. We found no significant differences in perceived credibility, expertise, and readability between articles created by AI, those assisted by AI, and those written by humans. However, participants showed a significantly higher willingness to continue reading articles when aware of AI involvement, whether assisted or fully generated, compared to human-written articles. This suggests that AI does not reduce perceived article quality and enhances readers' engagement once AI origin is disclosed. However, no significant effect was observed regarding future willingness to read AI-generated news, indicating that AI's role may positively influence immediate engagement, possibly through a novelty effect, but not long-term preferences. The results regarding our second set of outcomes are particularly noteworthy. We found significant and positive effects on respondents' willingness to continue reading the articles after the true source was revealed, applicable to both treatment groups (AI-assisted and AI-generated). This suggests that readers may not be opposed to engaging with AI-generated or AI-assisted texts and may even be curious to learn more about AI. However, it is important to consider that the increased interest in reading the full article may not be a lasting effect changing preferences for AI-generated news. Participants might have become curious about AI involvement after learning the true authorship, but they have not become more (nor less) open towards reading AI news in general. Future research could explore this increased interest using more targeted intervention research designs, which could provide valuable insights for news producers on how to enhance specific target audiences' interest and increase article engagement. Regarding respondents' willingness to read AI-generated news articles in the future, we found no significant differences between the treatment groups. 
Simply reading two AI-assisted or AI-generated texts does not appear to alleviate concerns about AI involvement in news production, as overall willingness scores remained below 2.5 across all groups. These findings may indicate a gradual familiarization process with AI among readers. Over time and with further technological advancements, AI might become more integrated into journalism, particularly in news production, with readers becoming more accustomed to it. Therefore, future research should continue monitoring public attitudes toward AI to detect potential familiarization effects. Future research could focus on tracking changes in public perception and engagement with AI-generated news over time through longitudinal studies, which would provide insights into whether initial curiosity leads to long-term acceptance or trust. Exploring the role of transparency in disclosing AI involvement would also be important, as different types of transparency might influence how readers perceive and trust AI-generated content. Intervention studies could test different strategies to enhance reader engagement to determine what factors increase willingness to engage with AI-produced news. Expanding this research to include diverse cultural and linguistic contexts would be important to assess whether these findings are generalizable across different populations. Additionally, studying the impact of AI-generated news on public knowledge could reveal how well readers retain and understand information from AI versus human-written articles. § ACKNOWLEDGEMENTS This project received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement nr. 883121). We thank Fabio Melliger for excellent research assistance.
http://arxiv.org/abs/2409.02753v1
20240904143116
Does the Vulnerability Threaten Our Projects? Automated Vulnerable API Detection for Third-Party Libraries
[ "Fangyuan Zhang", "Lingling Fan", "Sen Chen", "Miaoying Cai", "Sihan Xu", "Lida Zhao" ]
cs.SE
[ "cs.SE", "cs.CR" ]
§ ABSTRACT Developers usually use third-party libraries (TPLs) to facilitate the development of their projects and avoid reinventing the wheel; however, vulnerable TPLs can introduce severe security threats. The majority of existing research only considered whether projects used vulnerable TPLs but neglected whether the vulnerable code of the TPLs was indeed used by the projects, which inevitably results in false positives and further requires additional patching efforts and maintenance costs (e.g., dependency conflict issues after version upgrades). To mitigate such a problem, we propose the Vulnerable API Scanner, which can effectively identify the vulnerable root methods causing vulnerabilities in TPLs and further identify all vulnerable APIs of TPLs used by Java projects. Specifically, we first collect the initial patch methods from the patch commits and extract accurate patch methods by employing a patch-unrelated sifting mechanism; then we further identify the vulnerable root methods for each vulnerability by employing an augmentation mechanism. Based on them, we leverage backward call graph analysis to identify all vulnerable APIs for each vulnerable TPL version and construct a database consisting of 90,749 unique vulnerable APIs (2,410,779 when counted per library version) with a 1.45% false positive proportion and a 95% confidence interval (CI) of [1.31%, 1.59%], covering 362 TPLs with 14,775 versions. The database serves as a reference to help developers detect the vulnerable APIs of TPLs used by their projects. Our experiments show that the approach eliminates 5.78% false positives and 2.16% false negatives owing to the proposed sifting and augmentation mechanisms. Besides, it outperforms Eclipse Steady, the state-of-the-art method-level vulnerability detection tool, in analyzing direct dependencies, achieving more effective detection of vulnerable APIs. Furthermore, to investigate the real impact of vulnerabilities on real open-source projects, we use the approach to conduct a large-scale analysis on 3,147 projects that depend on vulnerable TPLs, and find that only 21.51% of projects (with a 1.83% false positive proportion and a 95% CI of [0.71%, 4.61%]) were actually threatened through vulnerable APIs, demonstrating that the approach can potentially reduce false positives significantly. Vulnerability Detection, Software Composition Analysis, Static Analysis Does the Vulnerability Threaten Our Projects? Automated Vulnerable API Detection for Third-Party Libraries Fangyuan Zhang, Lingling Fan*, Sen Chen, Miaoying Cai, Sihan Xu, and Lida Zhao Fangyuan Zhang and Miaoying Cai are with DISSec, NDST, College of Computer Science, Nankai University, China. Emails: {fangyuanzhang, miaoyingcai}@mail.nankai.edu.cn. Lingling Fan (Corresponding author) and Sihan Xu are with DISSec, NDST, College of Cyber Science, Nankai University, China. Emails: {linglingfan, xusihan}@nankai.edu.cn. Sen Chen is with the College of Intelligence and Computing, Tianjin University, China. Email: [email protected]. Lida Zhao is with the School of Computer Science and Engineering, Nanyang Technological University. Email: [email protected]. 
today ============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== § INTRODUCTION Java developers frequently incorporate third-party libraries (TPLs) to speed up software development. However, the utilization of TPLs may introduce security threats <cit.>. According to an open-source security and risk analysis report released by Synopsys <cit.>, 97% of the 2,409 codebases contained open-source components, and 81% of them contained at least one known vulnerability. To mitigate such a severe problem, software composition analysis (SCA) <cit.> is typically used to identify vulnerable TPLs. A couple of SCA tools have been suggested including Eclipse Steady <cit.>, Dependabot <cit.>, OSSIndex <cit.>, OWASP Dependency Check <cit.>, etc. However, from the detection side, nearly all SCA tools can only determine whether vulnerable TPLs are depended on by projects, but cannot tell whether vulnerable APIs are actually invoked, resulting in false positives introduced by analysis at the library level. From the patch side, vulnerabilities introduced by TPLs can have unpredictable effects on the developers' projects. Once the vulnerabilities are detected, updating to a new version is the most straightforward way. However, it may cause dependency conflict issues <cit.> and compatibility issues <cit.>, which will require substantial maintenance costs. Consequently, it is imperative to precisely determine whether the project is threatened by known vulnerabilities. In other words, if the vulnerability has a real negative impact on the project in practice, developers can generate a patch immediately to avoid an exploit of the vulnerability. If the vulnerability has no effect on the project, the handling of vulnerable TPLs is not urgent and can be incorporated into the regular development cycle. Thus, the real impact analysis of vulnerable TPLs at the method level is urgently needed no matter from the perspective of detection or patching <cit.>. As far as we know, Eclipse Steady <cit.> is the only open-source work that provides a forward reachability analysis at the fine-grained method level for users. However, according to our analysis, we conclude the following deficiencies in Steady: (1) The inaccuracy of patch method extraction. Steady considers the methods whose abstract syntax trees have been changed in patch commits as patch methods, however, patch-unrelated methods may exist in patch commits, leading to false positives. (2) The incompleteness of vulnerable root method identification. Steady obtains directly from patch commits, however, some may exist in the commits that are not recognized or marked as patch commits. The incomplete identification would cause false negatives of vulnerable paths. (3) Low efficiency of vulnerable path analysis. Steady conducts forward reachability analysis for each TPL with low efficiency due to complex dependency analysis. 
Therefore, in this paper, we aim to address the aforementioned problems to evaluate the real impact of vulnerable TPLs on projects. However, we are facing the following challenges: (1) How to extract accurate patch methods from patch commits? As we all know, not all modified methods in a patch commit are patch methods. Therefore, we need to sift patch-unrelated methods out on the patch commit, to extract precise patch methods. (2) How to obtain comprehensive and precise from patch commits? Due to the incompleteness of patch commits provided <cit.>, it is not comprehensive to only handle the patch commits. (3) How to accurately scan the vulnerable code of libraries in the projects with less resource overhead? To ensure fewer resources spent during scanning, we need a comprehensive set of detected of known vulnerable TPLs. To fill the gap, we propose (Vulnerable API Scanner), an effective vulnerable API detection approach, to assess the impact of OSS vulnerabilities in Java projects. We first collect public patch commits based on the vulnerability knowledge database and map the changed source code files involved in patch commits with class files in TPLs. We collect diff methods from patch commits as initial patch methods and then sift out patch-unrelated methods to extract accurate patch methods. We propose an augmentation mechanism to identify based on these patch methods. Then we perform backward call-graph analysis on and construct a vulnerable API database mapping with the relation among the vulnerable library versions, CVEs, and , which includes 90,749 unique (2,410,779 with library versions) from 362 TPLs with 14,775 vulnerable versions involving 502 CVEs. Based on the results, developers can figure out whether vulnerable libraries need to be patched at this time and prioritize the patches, thereby reducing additional patching efforts and maintenance costs. To demonstrate the effectiveness of , we conducted comprehensive experiments. We took an in-depth analysis of the patch-unrelated methods sifted out by the patch-unrelated sifting mechanism, introduced by the augmentation mechanism, and vulnerable APIs in the vulnerable API database. Moreover, we summarized 5 patterns of added patch methods, to analyze the fixed intention of introducing them. Based on statistical results, we sifted out 1,352 patch-unrelated methods with 98.06% precision and augmented 249 which were absent in patch commits with 93.57% precision. And the vulnerable API database constructed by contains a total of 90,749 unique vulnerable APIs with a false positive proportion of 1.45% and a 95% CI of [1.31%, 1.59%]. Furthermore, to demonstrate the effectiveness of our novel mechanisms, we conducted an ablation study on and - with different mechanisms, and the result shows eliminates 5.78% false positives and 2.16% false negatives. Subsequently, we compared with the state-of-the-art tool, Eclipse Steady. The experimental results have shown that outperforms Steady in analyzing direct dependencies, achieving more comprehensive method-level detection (#Cases: 214 vs. 95). Specifically, Steady (Avg time: 769s) exists 61.71% false negatives, while (Avg time: 353s) yielded 2.97% false positives and 20.45% false negatives. Besides, our large-scale analysis on 3,147 real-world projects shows that only 21.51% of projects (with 1.83% false positive proportion and a 95% CI of [0.71%, 4.61%]) were potentially threatened by of TPLs, indicating the effectiveness of . 
In summary, we make the following contributions: * We proposed , an effective and efficient tool that can detect from TPLs used by Java projects, reducing false positives of vulnerabilities. * We proposed two mechanisms to achieve accurate and complete vulnerable API identification for vulnerable libraries, i.e., a sifting mechanism to sift out patch-unrelated methods and an augmentation mechanism to augment the , which eliminates 5.78% false positives and 2.16% false negatives. * We constructed a reusable database including 90,749 (2,410,779 with library versions) with 1.45% false positive proportion with a 95% CI of [1.31%, 1.59%] based on the identification results of , which assists in achieving more efficient vulnerability detection than forward reachability analysis. * We compared with the state-of-the-art tool, Eclipse Steady. The experimental result demonstrates that achieves more effective method-level identification in analyzing direct dependencies. § BACKGROUND & CONCEPTS §.§ Background The Maven Ecosystem. The Maven ecosystem <cit.> plays a crucial role in the Java landscape. It contains nearly 2,000 repositories and over 37 million packages. Each maven package is distinctly identified by the combination of GroupId, ArtifactId, and Version (GAV). Maven provides a simple and consistent approach by utilizing the configuration file (pom.xml) to effectively manage project dependencies, streamline the build process, and facilitate release development. Furthermore, since a maven package can be utilized as a TPL by other projects, it can be considered a project as well as a Java TPL. Vulnerable Libraries and the Associated Risks. Vulnerable libraries are TPLs that contain vulnerabilities. Using vulnerable libraries introduces potential security risks to the projects. For instance, the Log4Shell vulnerability <cit.> existed in Apache log4j, which is a widely used Java-based logging library, affecting numerous projects. Software Composition Analysis. Software Composition Analysis (SCA) <cit.> involves analyzing the libraries and identifying their vulnerabilities. Vulnerable library identification is a subset of SCA, which typically relies on hash comparisons or configuration files (e.g., pom.xml) to identify TPLs, and detect vulnerable libraries based on vulnerability databases (e.g., NVD <cit.>). Vulnerability reachability analysis focuses on determining whether there is a path from the software to the vulnerable code in TPLs. This analysis often uses forward call analysis to ascertain whether the software can access the vulnerable code within the libraries. §.§ Key Term Definition We introduce some key concepts or terms used in the paper to make it easy to understand, as illustrated in Figure <ref>. Adjacent Vul. version vs. Patch release version. When an open-source TPL is affected by a vulnerability (also known as CVE), the vulnerability knowledge base usually gives the vulnerable version range of the TPL. “Patch release version” means that it is the first release version to fix this vulnerability, i.e., V_i+1. “Adjacent vulnerable version” is the vulnerable version adjacent to the patch release version, i.e., V_i. Patch commits used by developers to fix this vulnerability exist between these two versions. Initial Patch Method. Initial patch methods are the methods that have undergone code changes (i.e., added, deleted, or modified) in the patch commits. Patch Method. Patch methods are methods that may be relevant to addressing vulnerabilities. 
Since not all initial patch methods play a role in patching, it is necessary to sift out patch-unrelated methods (Section <ref>) from the initial patch methods to generate precise patch methods. If a patch method is present only in the patch release version (i.e., V_i+1) and not in the adjacent vulnerable version (i.e., V_i), we consider it as an added patch method. Vul. Root Method. Vulnerable root methods are those methods that are directly related to the vulnerability. Most of them are extracted from patch commits of vulnerabilities directly. Vul. APIs. Vulnerable APIs are the methods that are directly or indirectly threatened by the vulnerability in the vulnerable TPL, including the and the methods that directly/indirectly invoke . For projects, APIs in TPLs are divided into 2 categories: and non-. §.§ Problem Definition As shown in Figure <ref>, our goal is to identify all for each vulnerable TPL version based on patch commits of CVEs and vulnerable root method identification and construct a database that maintains the mapping relation: vulnerable library versions ↔ CVEs ↔ (libV-CVE-Vul.API), based on which we aim to detect whether the project invokes of TPLs, to assess the real impact of OSS vulnerabilities on projects. § APPROACH In this paper, we propose to detect whether the projects are threatened by the in TPLs. Figure <ref> shows the overview of our approach, consisting of 4 components: (1) Patch method extraction, which collects initial patch methods from patch commits and sifts out patch-unrelated methods to extract accurate patch methods. (2) Vulnerable root method identification, which identifies through locating the patch methods at the version level and employing an augmentation mechanism based on the extracted patch methods. (3) Vulnerable API identification, which utilizes call-graph analysis to identify for each library version, and constructs a database storing the mapping relations of vulnerable library versions (LibV), CVEs, and vulnerable APIs (Vul.API), presented as libV-CVE-Vul.API. (4) Used vulnerable API detection, which detects the in the libraries used by a given project. §.§ Patch Method Extraction This section describes the steps to extract the accurate patch methods. Specifically, we first collect methods that have undergone code changes in the patch commits (i.e., initial patch methods), and then sift out patch-unrelated methods. §.§.§ Initial patch method collection To collect the methods related to patching vulnerabilities, we first need to obtain patch commits of each CVE. Specifically, we collected vulnerabilities (identified by CVE ID) and their associated patch commits from Snyk Vulnerability DataBase <cit.> and GitHub Advisory Database <cit.>. We chose them as the vulnerability data collection sources for two reasons: (1) They maintain detailed information about CVEs and the corresponding patches, such as CVE ID, the vulnerable version ranges of TPLs, and patch-related links, which cover the CVE-related references provided by NVD. Besides, for most fixed CVEs, the two databases provide patch commit references on GitHub <cit.>, which facilitates the collection and analysis of patch commits. (2) They map CVEs to vulnerable libraries, allowing us to identify libraries with vulnerable versions based on CVE IDs. Based on the two databases, we collected 2,640 CVEs and 1,551 affected libraries belonging to the Maven ecosystem. 
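As a concrete illustration of the advisory data gathered in this step, the following Python dataclasses sketch one plausible record layout for the collected CVEs, affected Maven libraries (GAV coordinates), and patch-commit links, together with the kind of filtering applied to the collected data. The field names and the helper are our own illustrative assumptions, not the paper's actual schema.

```python
# Hypothetical record layout for the harvested advisory data (illustrative only).
from dataclasses import dataclass, field
from typing import List

@dataclass(frozen=True)
class MavenLib:
    group_id: str      # e.g. "org.springframework"
    artifact_id: str   # e.g. "spring-web"

@dataclass
class AffectedLibrary:
    lib: MavenLib
    vulnerable_range: str          # version range reported by the advisory
    adjacent_vulnerable: str = ""  # last vulnerable release, resolved later
    patch_release: str = ""        # first fixed release, resolved later

@dataclass
class Advisory:
    cve_id: str                                              # e.g. "CVE-2011-2730"
    affected: List[AffectedLibrary] = field(default_factory=list)
    patch_commits: List[str] = field(default_factory=list)   # patch commit URLs on GitHub

def keep_for_analysis(adv: Advisory) -> bool:
    """Keep a CVE only if it has at least one patch commit and at least one
    affected library with an identifiable patch release version."""
    return bool(adv.patch_commits) and any(a.patch_release for a in adv.affected)
```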
We filtered out CVEs without patch commits in patch-related links and those where the affected libraries did not have patch release versions. Finally, we gathered 1,116 CVEs and 957 affected libraries to collect initial patch methods. For each patch commit, we extracted code differences by using the abstract syntax tree (AST), as it can accurately identify real code changes and filter out irrelevant modifications like adding or deleting identical code, changing the position of methods, or adding blank lines. This approach is more effective and accurate than traditional code-based change extraction. Specifically, we employed GumTree <cit.>, a tool for generating code differences in AST, to obtain valid changed methods in patch commits. We first obtained the Java source code files before and after the commits based on the GitHub repository and used GumTree to generate the mappings between two ASTs. The identified code changes are divided into three types, i.e., insert, delete, and update. According to the tree structure representing methods in the AST, we got the signature of methods where different nodes were located. Finally, we obtained methods with valid code changes in the patch commits (i.e., initial patch methods) with different change types (inserted, deleted, and modified). After filtering out CVEs whose patch commits involve languages other than Java (e.g., JavaScript), we consequently obtained the initial patch methods for 1,075 CVEs, 453 unique affected libraries and 1,350 patch commits. §.§.§ Patch-unrelated method sifting Since not all initial patch methods are related to vulnerability fixing, we aim to sift out patch-unrelated methods from initial patch methods, to obtain patch methods. To achieve this, we initially extracted the changed (i.e., inserted, deleted, or updated) statements within each initial patch method. We then assessed whether these changed statements were unrelated to the patch. If all the changed statements within an initial patch method are patch-unrelated, the method will be sifted out, otherwise, it is recognized as a patch method. To achieve precise sifting of patch-unrelated methods, we adopted a conservative strategy for identifying irrelevant statements. Specifically, we summarized three patterns of patch-unrelated statements: (1) Debugging code statements, such as , log-related function calls (e.g., ), and error handling statements which only changed the exception messages, i.e., ; (2) AST-equivalent statements after name normalization. In detail, we initially collected the functions, class member variables, and formal parameters of functions that were solely renamed to generate a renaming set. We defined various renaming scenarios: when the function name changed but the function body remained unchanged, when a member variable merely altered its name but retained the same type and initialization, or when a formal parameter of a function only modified its name while maintaining the same type. In such instances, we categorized these functions, member variables, and formal parameters as being renamed. If a statement only includes modifications to the names of called functions, parameters of called functions, or the object of calling functions, we check whether the modified name exists in the renaming set. If it does, this statement is considered an AST-equivalent statement before and after the patch commit. 
Besides, if only the name of the assigned variable has been modified in an assignment statement (e.g., ), the statement will also be regarded as AST-equivalent; (3) Statements that solely compose the Getter/Setter functions, such as , , and ( is a class member variable). Note that we do not assert that the Getter/Setter functions are inherently patch-unrelated. Instead, our goal is to identify and sift out Getter/Setter functions that solely consist of those specific statements. §.§ Vulnerable Root Method Identification The patch methods are extracted based on patch commits, however, the patch release version or the adjacent vulnerable version of libraries (shown in <Ref>) may not contain the methods that were patched. Therefore, in this section, we aim to identify (denoted by VulRoot) by locating the patch methods at the version level instead of the commit level and augmenting them to obtain comprehensive . §.§.§ Version-level patch method localization Since a commit only records a timestamped change to the current code in the repository, the changed methods in a single patch commit may not appear in the release versions of the library. For example, a library has several release versions V_1, V_2, V_3, V_4..., V_n, where n is the number of versions, V_2 and V_3 are the vulnerable versions. There may be multiple commits between V_3 and V_4 aiming to patch the vulnerability in V_3, however, the changed methods in one commit might not be maintained in V_4 or exist in V_3, and should not be identified as a valid patch. Therefore, we need to locate the patch methods at the version level to ensure they exist in the release versions. Specifically, We gathered all library versions from the Maven repository <cit.> and extracted patch releases and adjacent vulnerable versions based on vulnerable version ranges. If the patch release version or adjacent vulnerable version is not available in the repository, we filtered it out together with the associated CVEs from our database. Then we extracted the diff methods from pairwise class files between the adjacent vulnerable version and the patch release version and checked whether the methods that were patched exist in these diff methods. To obtain more accurate , we employ the following strategies to discard or retain patch methods for further augmentation: (1) Patch methods that exist in neither version (i.e., the patch release version and the adjacent vulnerable version) will be discarded; (2) Patch methods that exist in both versions are directly considered as . (3) Patch methods that only exist in the patch release version are newly added patch methods for the adjacent vulnerable version and will be retained for augmentation. During the process of patch method localization in library release versions, we observed the absence of all patch methods for some CVEs, thus, we excluded these CVEs and obtained 362 libraries with 14,775 versions involved in 502 CVEs. §.§.§ Augmentation mechanism It is common to add a new class or method during vulnerability fixing. Then, the added patch methods are typically called by others, aiming to fix the vulnerability in those methods. Existing work (e.g., Eclipse Steady <cit.>) has overlooked the impact of added patch methods when identifying . They argued that these added patch methods are secure and can be ignored. However, our observation reveals that ignoring added patch methods can lead to overlooking vulnerable methods that invoked added patch methods but were absent in patch commits. 
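Before turning to the augmentation example below, the following Python sketch summarizes two steps described in this and the preceding subsection: the conservative patch-unrelated statement check (debugging/logging code, statements that are AST-equivalent after renaming, bare getter/setter bodies) and the version-level localization rule that discards patch methods, accepts them as vulnerable root methods, or retains them as added patch methods. It is a rough text-based approximation we introduce for illustration; the actual mechanism works on GumTree ASTs, and the regular expressions and helper names are assumptions.

```python
# Simplified, text-based approximation of the sifting and localization rules (illustrative).
import re

LOG_CALL = re.compile(r"\b(System\.out\.println|log(ger)?\.(debug|info|warn|error|trace))\s*\(")
GETTER_SETTER = re.compile(r"^\s*(return\s+this\.\w+;|this\.\w+\s*=\s*\w+;)\s*$")

def is_patch_unrelated_stmt(stmt: str, renamed: set[str]) -> bool:
    """Pattern 1: debugging/logging code (exception-message-only changes omitted here);
    Pattern 2: statements equivalent after renaming, approximated by checking whether
    every identifier in the statement appears in the rename set;
    Pattern 3: bare getter/setter bodies."""
    if LOG_CALL.search(stmt) or GETTER_SETTER.match(stmt):
        return True
    idents = set(re.findall(r"[A-Za-z_]\w*", stmt))
    return bool(idents) and idents <= renamed

def is_patch_method(changed_stmts: list[str], renamed: set[str]) -> bool:
    """A method is kept as a patch method unless *all* of its changed statements
    are patch-unrelated (the conservative sifting strategy)."""
    return not all(is_patch_unrelated_stmt(s, renamed) for s in changed_stmts)

def localize(in_adjacent_vul: bool, in_patch_release: bool) -> str:
    """Version-level localization: discard, accept as vulnerable root method,
    or retain as an added patch method for the augmentation step."""
    if not in_adjacent_vul and not in_patch_release:
        return "discard"
    if in_adjacent_vul and in_patch_release:
        return "vul_root_method"
    if in_patch_release and not in_adjacent_vul:
        return "added_patch_method"
    # Present only in the adjacent vulnerable version: not spelled out in the text,
    # left as a placeholder here.
    return "unspecified"
```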
For example, in Figure <ref>, if the patch method m_patch only exists in the patch release version but not in the adjacent vulnerable version, we regarded it as an added patch method. If a method m invoked the added patch method m_patch in the patch release version but did not invoke this patch in the adjacent vulnerable version, the method m from the adjacent vulnerable version is still considered vulnerable, even if it did not appear in the patch commit. Therefore, such methods should also be augmented as . [language=diff,caption=Patch Commit of CVE-2011-2730 <cit.>,label=lst:exp1,numbers=left] boolean isJspExpressionActive(PageContext p) ... if (sc.getMajorVersion() >= 3) - if (sc.MajorVersion() > 2 || sc.MinorVersion() > 3) - /* Application declares Servlet 2.4+:JSP 2.0 active. - * Skip our own expression support.*/ - return false; + if (sc.MajorVersion() == 2 sc.MinorVersion() < 4) + /* Application declares Servlet 2.3-:JSP 2.0 not active. + * Activate our own expression support.*/ + return true; - return true; + return false; [language=diff,caption=Method Diff between 3.0.5 and 3.0.6 in org.springframework:spring-web,label=lst:exp2,numbers=left] Object evaluate(Parameters) throws JspException return isExpressionLanguage(attrValue) + isJspExpressionActive(pageContext) ? doEvaluate(): attrValue; Considering the situation that methods that invoked the added patch method in the patch release version may be due to the introduction of new functionalities rather than fixing the vulnerability; therefore, our augmentation mechanism is based on the following constraint: A method is considered a VulRoot due to augmentation only if the method invoked the added patch method in the patch release version but not in the adjacent vulnerable version. In other words, there are no other changes in the augmented VulRoot except for the call relationship to the added patch methods. For a real case, the TPL “org.springframework:spring-web” is affected by the CVE-2011-2730 <cit.>, causing multiple versions (the versions before 2.5.6.SEC03, and 3.0.0∼3.0.6) to be vulnerable. CVE-2011-2730 is caused by evaluating Expression Language (EL) expressions in tags twice, which allows remote attackers to obtain sensitive information. As shown in Listing <ref>, the developers only activate their expression support when the application declares Servlet 2.3- (Lines 8-11) and set “springJspExpressionSupport” to false by default (Line 14), avoiding the potential double EL evaluation problem on pre-Servlet-3.0 containers, which indicates that this method acts as a bug fix. Although this patch method is shown as modified in the patch commit, however, we found that it only existed in patch versions (2.5.6.SEC03 and 3.0.6). Therefore, the method “” is an added patch method for vulnerable versions. To further confirm the impact of the added patch method on fixing the vulnerability, we checked its call relationships in the patch release version (V3.0.6). We found that five methods directly called this added method and all of them existed in the adjacent vulnerable version (V3.0.5). For example, in Listing <ref>, the method “” called the added patch method “” (Line 3) in V3.0.6 to fix CVE-2011-2730, and it still existed in V3.0.5 without invoking the added patch method. Therefore, this method located in V3.0.5 is vulnerable and should be augmented to the list of vulnerable root methods. 
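The cross-version check illustrated by this example can be written as a short function over the two call graphs; the formal procedure used by the approach is given next. The dictionary-based call-graph shape (every method mapped to the set of methods it calls) is an assumption we make for illustration, and the additional requirement that the caller has no other changes besides the new call is an AST-level check omitted here.

```python
# Illustrative check for the augmentation constraint discussed above.
def augmentable_callers(added_patch_method: str,
                        patch_cg: dict[str, set[str]],
                        vul_cg: dict[str, set[str]]) -> set[str]:
    """Direct callers of the added patch method that already exist in the adjacent
    vulnerable version but did not call it there. (Call graphs are assumed to map
    every method, even one with no outgoing calls, to a possibly empty callee set.)"""
    callers = {c for c, callees in patch_cg.items() if added_patch_method in callees}
    return {c for c in callers
            if c in vul_cg and added_patch_method not in vul_cg[c]}

# Toy version of the CVE-2011-2730 case from the listings above:
patch_cg = {"evaluate": {"isJspExpressionActive", "doEvaluate"}, "isJspExpressionActive": set()}
vul_cg = {"evaluate": {"doEvaluate"}}
print(augmentable_callers("isJspExpressionActive", patch_cg, vul_cg))  # {'evaluate'}
```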
Unfortunately, none of the patch commits recorded such call relationships; thus, existing work based only on patch commits cannot identify such in-depth vulnerable root methods, while our approach augments the vulnerable root methods with such vulnerable methods via multi-version analysis.

Algorithm: Vulnerable Root Method Augmentation
Input: m_0: an added patch method; P_cg: the call graph of the patch release version; V_cg: the call graph of the adjacent vulnerable version.
Output: R: vulnerable root methods based on m_0 (initially empty).
 1:  Visit ← ∅
 2:  Q ← Queue()
 3:  Q.push(m_0)
 4:  Visit ← Visit ∪ {m_0}
 5:  while Q ≠ ∅ do
 6:      m ← Q.pop()
 7:      S_m ← getCaller(m, P_cg)        ▷ Get direct callers of m.
 8:      if S_m = ∅ then
 9:          continue
10:      for each c ∈ S_m do
11:          if isInGraph(c, V_cg) then
12:              R ← R ∪ {c}             ▷ Incorporate it into the results.
13:          else if c ∉ Visit then
14:              Q.push(c)
15:              Visit ← Visit ∪ {c}
16:      end for
17:      if R ≠ ∅ then
18:          break
19:  return R

Algorithm <ref> details the augmentation procedure. Given an added patch method m_0 and the call graphs of the adjacent vulnerable version and the patch release version (V_cg and P_cg, respectively), it outputs the augmented vulnerable root methods R based on m_0. In detail, we leverage the function call relationships of the added patch methods in the patch release version to mine the methods in the call chain that exist in the adjacent vulnerable version (Lines 5-18). In particular, for each added patch method m_0, if it is invoked by other methods in the patch release version, we check whether these callers exist in the adjacent vulnerable version (Line 11). If a caller exists, it is augmented into the set of vulnerable root methods (Line 12); otherwise, it is added to the queue for further mining (Lines 13-15). Note that once we obtain the results of augmented vulnerable root methods, we exit the while loop directly (Lines 17-18), to avoid amplifying the negative impact of possible errors in the added patch methods. After the above process, the set of augmented vulnerable root methods is constructed. §.§ Vulnerable API Identification Based on the final vulnerable root methods identified in Section <ref>, in this section, we aim to mine the vulnerable APIs via the call graph, as defined in Section <ref>. We mine all the vulnerable APIs because if a project invokes an API of a library that eventually reaches or calls the vulnerable root method, then this API should also be regarded as vulnerable. In fact, according to our observation, the vulnerable root methods are hardly invoked by projects directly. Therefore, we also mine and maintain all the vulnerable APIs for each vulnerable library version for further analysis. Specifically, for each vulnerable library, we mined the vulnerable APIs affected by the vulnerable root methods based on backward call graph analysis. Firstly, we generated the call graph of the library by employing the context-insensitive points-to analysis provided by the static framework Tai-e <cit.>, and considered all the methods as entry points to obtain a complete call graph. Subsequently, starting from the vulnerable root methods, we traversed their calling traces in the call graph and recorded all the methods on those traces. In such a manner, we obtained all the vulnerable APIs for each vulnerable library version. Database construction. Based on the identified vulnerable APIs, we constructed a vulnerable API database with the mapping relation: library version to CVEs to vulnerable APIs, denoted by libV-CVE-Vul.API. Specifically, we crawled all the vulnerability data and patch commits corresponding to the vulnerabilities from Snyk Vulnerability DB and GitHub Advisory (as of Feb. 2023) and downloaded the vulnerable libraries from Maven <cit.> to support our database. Since some versions are not available from Maven or some patch class files do not exist in the libraries, we filtered them out. We employ the approach above for each CVE in the vulnerable library to obtain a set of vulnerable APIs and construct the vulnerable API database. 
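The backward mining and database construction just described can be summarized in a few lines of Python. The sketch below assumes a call graph given as a caller→callees adjacency map and walks it in reverse from the vulnerable root methods; the data structures and function names are illustrative assumptions rather than the actual implementation, which operates on Tai-e call graphs.

```python
# Illustrative sketch: backward reachability from vulnerable root methods and
# construction of the libV-CVE-Vul.API mapping. Data shapes are assumptions.
from collections import defaultdict, deque

def invert(call_graph: dict[str, set[str]]) -> dict[str, set[str]]:
    """Turn a caller->callees map into a callee->callers map."""
    callers = defaultdict(set)
    for caller, callees in call_graph.items():
        for callee in callees:
            callers[callee].add(caller)
    return callers

def mine_vulnerable_apis(vul_roots: set[str], call_graph: dict[str, set[str]]) -> set[str]:
    """All methods that can reach a vulnerable root method, plus the roots themselves."""
    callers = invert(call_graph)
    vul_apis, queue = set(vul_roots), deque(vul_roots)
    while queue:
        m = queue.popleft()
        for c in callers.get(m, ()):       # direct callers of m
            if c not in vul_apis:
                vul_apis.add(c)
                queue.append(c)
    return vul_apis

# libV-CVE-Vul.API mapping: "group:artifact:version" -> CVE id -> set of vulnerable APIs
database: dict[str, dict[str, set[str]]] = defaultdict(dict)

def add_to_database(lib_gav: str, cve_id: str, vul_roots: set[str],
                    call_graph: dict[str, set[str]]) -> None:
    database[lib_gav][cve_id] = mine_vulnerable_apis(vul_roots, call_graph)
```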
Table <ref> provides detailed information about the database. The column “#Vul. API (excl. root)” represents the number of obtained from the backward analysis of call graphs. “#Vul. root method” represents the number of , including ones directly obtained from patch commits (“#Commit”) and the augmented ones (“#Augm.”) mined by . We used two counting methods for across different library versions: single counting ('API-once') and multiple counting ('API-multi.'). Identical APIs were determined by normalizing their function bodies and comparing hash values. The database contains 90,749 unique (2,410,779 across library versions) from 362 unique libraries with 14,775 library versions, involved in 502 CVEs. On average, our augmentation mechanism has supplemented 5.9 augmented per library and 2.5 per library version, related to 49 CVEs. §.§ Used Vulnerable API Detection In this section, we describe how to detect whether the from TPLs are used in projects. For a given Java project with its used libraries, we generate its call graph by employing the context-insensitive points-to analysis of Tai-e <cit.>, which is the bedrock to determine whether it invokes . If it depends on a library version in the vulnerable API database, we search out the used from the database for this library. Specifically, for each method in the call graph of the project, we analyze whether it invokes the in the library, if true, marks the used by developers. Besides, it also reports the vulnerable dependency, the used in the library, the call frequency of , and the involved CVEs. Suppose all the methods in the project do not call the , in that case, the project uses the vulnerable library without using the vulnerable code, which should not be regarded as vulnerable usage. § EVALUATION In this section, we evaluated on real-world projects to answer the following research questions: RQ1: Can effectively identify vulnerable root methods and ? RQ2: Can outperform state-of-the-art tools in detecting vulnerable projects threatened by vulnerable third-party libraries? RQ3: How do the sifting and augmentation mechanisms contribute to vulnerable API detection for ? RQ4: How is the status quo of vulnerable libraries used in open-source projects? §.§ RQ1: Effectiveness Evaluation §.§.§ Setup Given that are derived from the backward call graph analysis, it can be reasonably assumed that APIs directly or indirectly calling the may also contain vulnerabilities. As a result, the accuracy of depends on the accuracy of . This experiment aims to investigate the effectiveness of in identifying and , and take an in-depth analysis of root causes. Specifically, this experiment is based on our database containing 362 vulnerable TPLs (14,775 library versions), involving 502 CVEs. Due to the lack of ground truth for correlated with CVEs, we manually analyze the sifted patch-unrelated methods and augmented to assess the effectiveness of sifting and augmentation mechanisms. Furthermore, we additionally provide the ground truth for to validate the vulnerable API database and also perform an error analysis to estimate the effectiveness of the vulnerable API database provided by . 
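Before turning to the evaluation results, the following Python fragment sketches the used-vulnerable-API detection step described in the previous subsection: given a project's call graph and its resolved dependencies, it looks up each dependency in a libV-CVE-Vul.API database of the shape used in the earlier sketch and reports which vulnerable APIs the project actually invokes and how often. It is, again, only an illustrative approximation with assumed data shapes.

```python
# Illustrative sketch of scanning a project against the vulnerable API database.
from collections import Counter

def scan_project(project_call_graph: dict[str, set[str]],
                 dependencies: list[str],
                 database: dict[str, dict[str, set[str]]]) -> list[dict]:
    """Report, per vulnerable dependency, the used vulnerable APIs, their call
    frequency, and the CVEs involved. A project that never calls a vulnerable
    API of a dependency is not flagged for that dependency."""
    findings = []
    for gav in dependencies:
        for cve_id, vul_apis in database.get(gav, {}).items():
            used = Counter()
            for method, callees in project_call_graph.items():
                for callee in callees:
                    if callee in vul_apis:
                        used[callee] += 1      # call frequency of the vulnerable API
            if used:
                findings.append({"dependency": gav, "cve": cve_id,
                                 "used_vulnerable_apis": dict(used)})
    return findings
```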
[language=diff,caption=Patch patterns with examples, label=lst:patterns,numbers=left] // Pattern 1: Checker + boolean checkPathSecurity(String path) + contain_ = path.contains("../"); + end_ = path.endsWith(".log") + if (!StringUtils.isBlank(path)) + if ( start_ !contain_ end_ ) + return true; + return false; // Pattern 2: Filter + String filterSensitive(String url) + String resultUrl = url; + if (containsIgnoreCase(url, _SENSITIVE)) + resultUrl = replaceIgnoreCase(url,_SENSITIVE,_FALSE); + return resultUrl; // Pattern 3: Configuration + boolean isSupportActive(PageContext pc) + ServletContext sc = pc.getServletContext(); + String EXP_SUPPORT_CONTXT = "springJspExpressionSupport" + String Support = sc.getInitParam(EXP_SUPPORT_CONTXT); + if (Support != null) + return Boolean.valueOf(Support); + if (sc.getVersion() >= 3) + Int maj_v = sc.getEffectiveMajorVersion() + Int min_v = sc.getEffectiveMinorVersion() + if (maj_v==2 min_v<4) + return true; + return false; // Pattern 4: Enhancer + String randomString(int byteLength) + byte[] bytes = new byte[byteLength]; + SECURE_RANDOM.nextBytes(bytes); + CharSet sc = StandardCharsets.ISO_8859_1; + return new String(bytes, sc); // Pattern 5: Assistance + ObjectMapper createVaadinConnectObjectMapper( + ApplicationContext c) + ObjectMapper objMapper = + Jackson2ObjectMapperBuilder.json().build(); + JacksonProperties jacksonProperties = + c.getBean(JacksonProperties.class); + if (jacksonProperties.getVisibility().isEmpty()) + objMapper.setVisibility(PropertyAccessor.ALL, + JsonAutoDetect.Visibility.ANY); + return objtMapper; §.§.§ Result Table <ref> shows that can identify 90,749 unique (2,410,779 with library versions). Details are aforementioned in the dataset construction of Section <ref>. In the following, we aim to demonstrate the validity of the augmented , the sifted patch-unrelated methods and . (1) Result of augmented . Table <ref> shows the number of libraries (library versions) and CVEs affected by augmented . Columns 3-6 indicate that more are mined compared with those only extracted from patch commits. Since there is no single library version with more than 20 unique vulnerable root methods augmented, the value of “libV.” is set to 0. We manually analyzed each method and summarized five patch patterns (<Ref>). These patterns highlight the scenarios in which developers address the vulnerabilities by introducing new patch methods. P_1: Checker. To fix vulnerabilities reported in CVEs, developers sometimes add check mechanisms (e.g., add logic statements) to check the legitimacy of the input or improve the original check mechanism. For example, Listing <ref> shows an added method “” in CVE-2022-26884 <cit.> that checks whether the parameter “path” transferred conforms to security, e.g., whether it contains “../” which does not meet security requirements and may lead to security problems. P_2: Filter. Some added methods aim to filter out unexpected input with specific conditions. In such a pattern, legitimate input will be retained, and illegitimate ones will be discarded. For example, in Listing <ref>, to fix CVE-2022-40955, developers added a new method “” in the patch commit <cit.> to filter out invalid and sensitive cases and keep the url meeting security requirements. P_3: Configuration. To avoid the vulnerabilities caused by the lack of default configuration or misuse of configuration, developers tend to standardize or improve existing configurations. 
As described in CVE-2011-2730, the spring-framework <cit.> suffered from Expression Language Injection. Developers addressed the potential Double EL Evaluation issue by defaulting the relevant parameter `springJspExpressionSupport` to false in their patch commit <cit.>. P_4: Enhancer. Developers usually introduce a series of algorithms and operations to enhance existing programs for security, such as introducing more robust algorithms and safer authentications. Listing <ref> shows an added method “” identified in the patch commit <cit.>, which provides a randomly generated default value, enhancing the client-side session encryption secret after the update. P_5: Assistance. Some added methods may not directly fix vulnerabilities, but their relevance can be assessed through correlation analysis of commit messages and methods. For example, the added method “” in the patch commit <cit.>, shown in Pattern 5 of Listing <ref>, creates a custom ObjectMapper to help address the vulnerability. For the 5 types of added patch patterns, we further investigated the number of that are augmented due to each type as well as the CVEs involved. Figure <ref> shows the result. Among the 49 CVEs supplemented with , 13 CVEs and 17 CVEs are fixed by adding a Checker and Enhancer in patch commits, respectively, which shows that they are the common fix solutions. Moreover, we augment 249 unique methods into in total, and 115 methods (the most) are augmented by Enhancer. As for the analysis of augmented in patch commits, our validation strategy unfolds in two steps: first, we validate whether the added patch method associated with augmented root methods achieves the patching effect; second, we check whether the augmented root method was defective before invoking the added patch method. If the added patch method is patch-unrelated, or if the augmented root method was secure in the adjacent vulnerable version of the library, we determine that this augmented root method was an FP. We validate the above steps for 49 CVEs affected by the augmentation mechanism. Out of the 249 augmented , 16 functions (involving 6 CVEs) were confirmed as FPs, achieving 93.57% precision. (2) Result of sifted patch-unrelated methods. <Ref> displays the number of involved CVEs and sifted patch-unrelated methods, including 1,352 sifted methods associated with 179 CVEs. Since the sifting mechanism involves a large number of methods, we conducted manual analysis on 50 randomly selected CVEs to evaluate the effectiveness (i.e., precision and recall) of the sifting mechanism. The sample set consisted of 807 initial patch methods, after manual analysis, 298 methods were identified as patch-unrelated and served as ground truth. Note that since we are evaluating the effectiveness of the patch-unrelated sifting mechanism, we consider correctly sifting out patch-unrelated methods as a true positive. Therefore, incorrectly sifting out the patch-related method is considered a false positive, and incorrectly identifying and retaining a patch-unrelated method is considered a false negative. Overall, our sifting mechanism identifies 258 methods patch-unrelated, achieving an impressive precision of 98.06%, with only 5 methods mistakenly considered as invalid patches. The reason is that developers move code snippets from one place to another (e.g., into an if clause), which changes the code semantics and causes false positives of . As for the false negatives, 45 patch-unrelated methods were not recognized successfully, resulting in a recall rate of 84.90%. 
The reasons are as follows: (1) Certain methods have undergone intricate modifications, limiting the sifting mechanism. (2) Method changes before and after patch commits are semantically equivalent. As our approach employs ASTs to extract the changed code, it cannot recognize semantic equivalence. For example, in the patch commit <cit.>, the affected function was split into two functions, with one calling the other. However, our approach fails to recognize it as patch-unrelated, leading to a false negative. Furthermore, we employed Wilson's score confidence interval <cit.> to calculate the real false positive rate (FPR) and false negative rate (FNR) of the sifting mechanism, which requires solving for p in the following formula: |p-p̂| = z·√(p·(1-p)/n), where p is the real FPR or FNR, representing the probability of FPs or FNs in the overall population; p̂ is the estimated FPR or FNR, representing the proportion of FPs or FNs calculated from a sample of size n; and z=1.96 is the critical coefficient for a 95% confidence interval. Thus, the FPR of the sifting mechanism is 0.98% with a 95% confidence interval (CI) of [0.42%, 2.28%], and the FNR is 15.10% with a 95% CI of [11.48%, 19.61%].

Algorithm: Construction of Ground Truth for Vulnerable APIs
Input: A_db: vulnerable API database; R_sam: sampled vulnerable root methods; R_err: the false positives among the vulnerable root methods in R_sam.
Output: A_sam: sampled vulnerable APIs; A_err: the false positives among the vulnerable APIs in A_sam.
A_sam ← ∅; A_err ← ∅
for each vulAPI ∈ A_db do
    vulRoots ← getSourceRoots(vulAPI)        ▷ Get associated vul. root methods of vulAPI.
    if R_sam ∩ vulRoots = ∅ then
        continue
    A_sam ← A_sam ∪ {vulAPI}
    isErrAPIFlag ← True
    for each vulRoot ∈ vulRoots do
        if vulRoot ∉ R_err then
            isErrAPIFlag ← False
            break
    if isErrAPIFlag then
        A_err ← A_err ∪ {vulAPI}
return A_sam, A_err

(3) Result of vulnerable APIs. In light of the need to validate the experiments' results, including the effectiveness of our approach (RQ1), the comparison experiment (RQ2), the ablation study (RQ3), and the large-scale analysis (RQ4), we have established a common ground truth for evaluating these experiments. Specifically, given the large number of CVEs associated with the vulnerable APIs in RQ1, the overlapping APIs detected in RQ3, and the detection results in RQ4, it is impractical to analyze each vulnerable API individually. Therefore, we chose to conduct a sampling analysis on them and selected CVEs relevant to the detection results of RQ2 and the non-overlapping vulnerable APIs detected in RQ3. Finally, the ground truth's data sources were 58 CVEs. These 58 CVEs involved 26,720 unique vulnerable APIs, which were directly or transitively reached from 270 unique vulnerable root methods. The vulnerable APIs are generated based on the vulnerable root methods and function call relationships. Since we have utilized an advanced static analysis framework for generating call graphs as the foundation of our research, and validating the accuracy of these call graphs is beyond the scope of our research, we assume that the function call relationships are accurate and determine the validity of vulnerable APIs based on the validity of their associated vulnerable root methods. Our validation strategy for this large number of vulnerable APIs in the ground truth is as follows: (1) we first manually analyze the vulnerable root methods based on patch commits and vulnerability descriptions, following the method described in references <cit.>, to obtain the labels of the 270 vulnerable root methods. (2) As shown in <Ref>, we automatically extract the associated vulnerable root methods for each vulnerable API. If all of these root methods are not vulnerable, we consider that vulnerable API to be a false positive. This process results in obtaining the labels for the 26,720 unique vulnerable APIs as ground truth. 
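For readers who want to reproduce the interval computation and the labeling rule above, here is a small Python sketch. The counts in the usage example are taken from the numbers reported in this section, while the lookup of associated root methods is a hypothetical stand-in.

```python
# Wilson score interval and the FP-labelling rule for sampled vulnerable APIs (illustrative).
from math import sqrt

def wilson_interval(errors: int, n: int, z: float = 1.96) -> tuple[float, float, float]:
    """Solve |p - p_hat| = z * sqrt(p*(1-p)/n) for p; returns (p_hat, low, high)."""
    p_hat = errors / n
    denom = 1 + z * z / n
    center = (p_hat + z * z / (2 * n)) / denom
    half = z * sqrt(p_hat * (1 - p_hat) / n + z * z / (4 * n * n)) / denom
    return p_hat, center - half, center + half

# e.g. 5 false positives among the 509 patch-related methods in the sample:
print(wilson_interval(5, 509))   # roughly (0.0098, 0.0042, 0.0228)

def label_vulnerable_apis(api_db: set[str], sampled_roots: set[str], fp_roots: set[str],
                          get_source_roots) -> tuple[set[str], set[str]]:
    """An API is labeled a false positive only if *all* of its associated
    vulnerable root methods were judged to be false positives."""
    sampled_apis, fp_apis = set(), set()
    for api in api_db:
        roots = get_source_roots(api)          # hypothetical lookup of associated roots
        if not (sampled_roots & roots):
            continue
        sampled_apis.add(api)
        if roots and roots <= fp_roots:
            fp_apis.add(api)
    return sampled_apis, fp_apis
```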
Ultimately, we identified 386 of 26,720 unique (including 25 of 270 unique ) as false positives, and the vulnerable API database has a false positive proportion of 1.45% with a 95% CI of [1.31%, 1.59%]. 0.95 Answer to RQ1: can effectively augment which are absent in patch commits with 93.57% precision and sift out patch-unrelated methods with 98.06% precision. Eventually, we construct a database consisting of 90,749 vulnerable APIs (2.4M with library versions) with 1.45% false positive proportion with a 95% CI of [1.31%, 1.59%] from 362 TPLs. §.§ RQ2: Comparison with existing work In this section, we demonstrate the effectiveness of by comparing it with the state-of-the-art tool, Eclipse Steady <cit.>, which is the only open source tool providing a forward reachability analysis at the method level so far. §.§.§ Dataset Collection We collected Java projects from GitHub with different numbers of stars. In total, we crawled 13,708 real-world projects with stars ranging from 70,000 to 0, among which 6,416 can be successfully compiled (using “mvn compile”), while others failed to be compiled due to the use of private libraries or some unpassed plugins. We further filtered projects that did not depend on the vulnerable library versions in our database, and eventually obtained 3,147 real-world potentially vulnerable projects. Steady manages its vulnerability data within Project KB, which includes CVE-related information, including vulnerability descriptions, affected libraries, affected library versions, patch library versions, patch links, and more. To ensure a fair comparison with Steady, we selected the CVEs that are both maintained by Steady and as the comparison dataset, i.e., 213 CVEs in total. We obtained the vulnerable libraries versions affected by these CVEs on GitHub Advisory Database <cit.>, 171 libraries with 6,153 library versions in total, and finally located 1,045 projects which depended on them. §.§.§ Setup Steady supports static analysis and dynamic-based analysis to analyze the vulnerable code reachability, while the dynamic-based methods require JUnit or application-specific tests, which are often unavailable or insufficient in public Maven projects. Therefore, we compare with Steady in terms of static analysis. Steady takes a project as input and initially identifies TPLs directly or transitively dependent on the project using Project KB <cit.>, and employs either Soot <cit.> or WALA <cit.> to facilitate static analysis. To eliminate the side-effect caused by different static analysis frameworks between and Steady, we choose Soot as the call graph construction framework of Steady and set up the same configuration as we have done with using Soot. As for recording the detection time for Steady and , since only the vulnerability reachability analysis part is focused on, we exclude the time spent on identifying vulnerable libraries and directly record the time spent on reachability analysis. We run Steady and on the aforementioned 1,045 projects, and compare the effectiveness of detecting vulnerable projects. One project identified as vulnerable means that there exists at least one execution path from the project to the vulnerable API of the vulnerable library. §.§.§ Result Table <ref> shows the comparison results between and Steady. The “Overlapped” row represents the results identified by both and Steady. 
Considering the overall performance, both and Steady can identify vulnerable projects in a finer-grained manner, sharply reducing the vulnerable projects from 1,045 to 177 and 66 respectively. Specifically, identified more vulnerable cases than Steady (214 vs. 95), with averaging 353s per project for detection, and Steady averaged 769s. Besides, 40 cases are both identified by two tools. To validate the precision of identified cases scanned by and Steady, we used the ground truth for vulnerable APIs proposed in RQ1 to check whether the vulnerable APIs used by projects were false positives. If there is at least one vulnerable API that is confirmed to be the true positive, the detected case is considered a true positive. Moreover, if one tool identifies a case as a true positive, while another tool does not detect this case, then this case is considered a false negative for the latter. Consequently, we identified 166 (61.71%) FNs in the scanning results of Steady, while yielded 8 (2.97%) FPs and 55 (20.45%) FNs. We thus further take an in-depth analysis to investigate the reasons and insights, which are summarized as follows. ∙ Identified by both tools (40 cases). For vulnerable projects identified by both tools, these detected cases are all true positives. Furthermore, we found that these projects all directly invoked vulnerable libraries, i.e., directly invoked the or other APIs of the library which finally reached the via call graph. Besides, the of these used libraries were all extracted from patch commits, and this is the simplest case that existing patch commit analysis focused on. Therefore, Steady and both can identify them. ∙ Only identified by Steady (55 cases). For projects that were only identified as vulnerable by Steady, some projects invoked vulnerable libraries indirectly. Since Steady started analysis from the project and further analyzed the direct- and transitive-invoked libraries to detect whether the project became vulnerable through the dependencies, it thus can identify such cases. While focused on distilling the vulnerable APIs of each vulnerable library, i.e., only considering the vulnerable libraries directly depend on projects, it cannot identify whether the project can reach vulnerable APIs/code from such transitive dependency. ∙ Only identified by (174 cases). For the projects only identified by , we found these projects invoked vulnerable library APIs that are not marked as vulnerable by Steady. There are four possible reasons: (1) Due to the missing information of vulnerable libraries affected by the same CVE, Steady exhibits false negatives in the identification of vulnerable libraries, where such vulnerable libraries are mistakenly classified as safe. For example, the library “dom4j-2.0.0” is suffered by CVE-2020-10683, but Steady fails to identify it as a vulnerable TPL. (2) Steady identified vulnerabilities based on all the modified and deleted methods in the patch commits. However, if the patch commit added a patch method that is not directly invoked in any other patch commits but is later invoked by methods in the vulnerable library version in another commit, Steady may not recognize it as vulnerable. In contrast, can mark it as vulnerable owing to its augmentation mechanism. For example, the project “jbufu/openid4java” directly depends on the library “xercesImpl-2.8.1” which is affected by CVE-2012-0881. 
reported that it invoked the vulnerable API from “xercesImpl-2.8.1”, however, Steady showed that it did not reach the vulnerable code related to CVE-2012-0881. After our investigation, we found it indirectly invoked the vulnerable root method augmented by . Since Steady only extracts the diff methods from patch commits as vulnerable methods, it cannot cope with such a situation, resulting in false negatives. (3) When the libraries contain both vulnerable structures and patch structures, Steady is uncertain about whether they include vulnerable code, resulting in missing some identified results. Steady stored the AST associated with vulnerability to determine whether the current library version contains vulnerable code. Due to some internal errors, the version that is vulnerable is not recognized by Steady. (4) The depth of call analysis in forward vulnerability reachability analysis is shallow compared to backward call graph analysis. Forward reachability analysis traces paths from external code to the vulnerability point, emphasizing breadth, while backward call graph analysis starts from the vulnerability point and traces its calling paths outward, focusing more on depth. Consequently, forward reachability analysis lacks the comprehensiveness of backward analysis, as achieving the same depth would require a significant resource investment. As for 8 false positives generated by , they involved 4 CVEs and 4 libraries. The misidentification of these cases stems from the fact that the root methods associated with reported APIs are unrelated to the vulnerabilities. Since the patch involves the addition of member variables with the result of necessitating complex modifications in the initial methods, erroneously determined these root methods were vulnerable before patching. 0.95 Answer to RQ2: can enhance the current tool chains by detecting security threats more effectively through deep call chains at the price of potentially missing some cases due to transitive dependencies. §.§ RQ3: Ablation Study on different mechanisms To showcase the contribution of the proposed sifting and augmentation mechanisms, we set up an ablation study on them. Specifically, we execute and - with different mechanisms enabled on the same projects respectively, shown in Table <ref>. The contribution of the augmentation mechanism is not separately studied because it is based on the sifting mechanism. We then compare the results of the individual scans against each other. §.§.§ Dataset Collection Our proposed sifting and augmentation mechanisms affected 179 and 49 CVEs, respectively, involving 183 libraries with 6,529 library versions. To evaluate the impact of these two mechanisms, we selected 1,191 projects that are dependent on the 183 libraries from the 3,147 potential vulnerable projects mentioned in Section <ref>, which enables us to assess the contribution of these mechanisms. §.§.§ Result After scanning these potentially vulnerable projects, identified 284 projects that utilized . However, - without any mechanisms, and with only the sifting mechanism, detected 293 and 272 projects calling , respectively. Table <ref> shows the vulnerable API detection result of the ablation study. The “#Vul. APIs” column displays the number of detected . We assessed the accuracy of the detected APIs by analyzing the precision of the corresponding . decreased 71 (5.78%) false positives by employing the sifting mechanism, and 25 (2.16%) false negatives by utilizing the augmentation mechanism. 
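In practice, this selection step amounts to intersecting each project's resolved dependencies with the set of library versions affected by the two mechanisms. The following is a minimal Python sketch of that filtering, assuming dependencies have already been resolved to "group:artifact:version" coordinates; the data structures and example values are hypothetical, not the actual implementation.

def select_affected_projects(project_deps, affected_lib_versions):
    # Keep a project if it depends on at least one affected library version.
    selected = []
    for project, deps in project_deps.items():
        if deps & affected_lib_versions:
            selected.append(project)
    return selected

project_deps = {
    "org/example-app": {"com.alibaba:fastjson:1.2.47", "junit:junit:4.12"},
    "org/another-app": {"org.slf4j:slf4j-api:1.7.30"},
}
affected_lib_versions = {"com.alibaba:fastjson:1.2.47"}
print(select_affected_projects(project_deps, affected_lib_versions))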
Besides, and - (both without any and augmentation mechanism) identified 1,158 overlapping APIs, with an 8.13% false positive proportion with a 95% CI of [6.11%, 10.74%]. Next, we first provide detailed explanations for how achieves the reduction in FPs and FNs through these two mechanisms, and subsequently validate the overlapping detected APIs. ∙ FP reduction analysis. Since - without any mechanisms identified the diff methods before and after the patch commits as patch methods directly, its vulnerable API database may include many non-. However, our proposed sifting mechanism can sift out patch-unrelated methods with high precision, reducing the generation of some non-for both and - with the sifting mechanism. Therefore, the sifting mechanism can eliminate 71 (5.78%) FPs detected by - without any mechanisms. For example, the project “gavincook/githubOfflineInstaller” depended on the TPL “dom4j-2.0.0-RC1” influenced by CVE-2020-10683. - without any mechanisms shows that it invoked 3 which indirectly called the root method “”. However, through meticulous manual verification, we found that it was not vulnerable in the patch commit <cit.>, causing FPs of - (without any mechanisms). ∙ FN reduction analysis. Table <ref> reveals that detected 25 additional compared to - without augmentation mechanism, indicating that augmentation mechanism can eliminate 25 (2.16%) FNs. The augmentation mechanism enables to generate more accurate . For example, the project “fabric8io/shootout-docker-maven” utilized the TPL “tomcat-embed-core-7.0.91” affected by CVE-2021-30640. In the patch commit <cit.>, developers introduced a patch method named “” to implement the necessary escaping. Through our augmentation mechanism, two methods invoking this newly added patch method in the patch release version (V7.0.109) were absent in any other patch commits. This absence resulted in - failing to identify related to these augmented , leading to FNs in scanning projects. ∙ Validation for overlapping APIs. We used the ground truth to validate the overlapped vulnerable APIs detected by both tools. Among the 58 CVEs in ground truth, 36 were involved in the ablation study, covering 541 vulnerable APIs out of 1,158 overlapping APIs. We found that 44 out of 541 APIs from ground truth were false positives, and then performed an error analysis using Wilson’s score confidence interval <cit.> to estimate the false positive proportion. Thus, the detected overlapping APIs have an 8.13% false positive proportion, with a 95% CI ranging from 6.11% to 10.74%. 0.95 Answer to RQ3: effectively reduces FPs by 5.78% through sifting mechanism and FNs by 2.16% through augmentation mechanism, leading to more accurate and comprehensive vulnerable API detection. §.§ RQ4: Large-scale Analysis Based on the 3,147 projects mentioned in Section <ref>, we further conducted a large-scale study by leveraging , to reveal the fact of using potentially from the vulnerable libraries in real-world projects. §.§.§ Impact analysis of potentially Based on the collected dataset, we aim to investigate the impact of potentially on real-world projects. The results are shown in Table <ref>. We found that 1,753 projects did not use any of the modules in the vulnerable libraries in our database, 717 projects only used the non-vulnerable modules in the vulnerable libraries, and 677 projects were potentially affected by vulnerable libraries. Moreover, we used the ground truth for proposed by RQ1, to validate the scanning results for conducting a sampling analysis. 
These CVEs involve 35 libraries, 134 library versions, and 219 projects using vulnerable modules. Among these 219 projects, TP=215 and FP=4. Furthermore, we conducted an error analysis using Wilson’s score confidence interval and found that approximately 21.51% of all projects have utilized potentially vulnerable modules in the vulnerable libraries. The false positive proportion is 1.83% with a 95% CI of [0.71%, 4.61%]. This means that for most projects, even if calling the vulnerable TPL, they are still not affected by the vulnerable library. For example, the project “elibom/jogger” directly relies on two vulnerable dependencies: jetty-server-8.1.15 and httpclient-4.5.2, and it invoked 9 APIs from jetty-server-8.1.15, but none of these APIs were deemed vulnerable. Thus, it can suspend the processing of these three vulnerable libraries. Our analysis indicates that vulnerable TPLs may not have a substantial impact on most projects. We explore the reasons from the following points: (1) For the vulnerability itself, the vast majority of vulnerabilities threaten only one or specific modules of the software. We attempt to maximize the impact range of vulnerabilities in the TPL through backward call graph analysis, to ensure that all the modules potentially affected by vulnerabilities are identified. (2) For the project itself, it often uses only specific modules from a TPL, not the entire library, meaning it may not invoke potentially and thus avoid certain vulnerabilities. In large projects that rely on multiple vulnerable libraries, it is crucial to identify if any vulnerable modules are used. This can help developers plan patches and prioritize vulnerability mitigation. §.§.§ Top vulnerable libraries and We further investigate the most frequently vulnerable libraries and potentially invoked by projects based on the collected dataset. Table <ref> shows the result. The library “com.alibaba:fastjson:1.2.47” , a JSON processor, tops with the list with a maximum frequency of 170 invocations of . This is primarily due to the widespread usage of “”, which serves as a fundamental functional component of the library. As TPLs such as “com.alibaba:fastjson” are commonly used by numerous developers, the impact of vulnerabilities in TPLs can be highly unpredictable. Furthermore, as the frequency of calling potentially increases, the risks within projects escalate accordingly. Take the project “luanqiu/java8_demo” as an example. This project directly relies on “com.alibaba:fastjson:1.2.47” affected by CVE-2022-25845 and has invoked the potentially vulnerable API “” 45 times, indicating that resolving this vulnerable TPL is crucial to mitigate its impact. This example highlights the importance of promptly addressing vulnerability risks in TPLs when fundamental functional APIs are potentially vulnerable. 0.95 Answer to RQ4: By leveraging , we found that only 21.51% of projects (with 1.83% false positive proportion and a 95% CI of [0.71%, 4.61%]) were potentially affected by vulnerable TPLs, which indicates that most coarse-grained detection tools produce many false positives, highlighting the need for more precise analysis. § THREATS TO VALIDITY The threats to our work come from the following aspects: (1) Possible bias of project dataset selection. Since we crawled projects in GitHub according to star numbers, there may be some project deviations. 
To alleviate it, we tried our best to crawl a large number of real-world projects whose star numbers range from about 70,000 to 0, to make the experiments more representative. (2) Possible inaccuracy of vulnerable versions of libraries. There may be inaccuracies in the vulnerable version ranges provided by Snyk Vulnerability DB and GitHub Advisory Database, based on NVD. This can lead to mistakenly identifying a safe version as vulnerable <cit.>. To address this, we determined by examining adjacent vulnerable versions for patch commits. If is not found in earlier vulnerable versions, it indicates that the version is not actually affected, thus minimizing threats and ensuring the validity of our results. (3) Not consider other semantically equivalent refactoring in the sifting mechanism. Since we implement the sifting mechanism based on the AST, which focuses on syntax and structure, it cannot comprehensively capture the context of the code. We will consider detecting all semantically equivalent refactoring in our future work. (4) Possible bias of the ground truth acquisition strategy. We avoided dynamic testing due to its complex setup and high costs, especially for large codebases. Although vulnerabilities were demonstrated in previous work <cit.>, the provided repositories didn’t fully support our validation needs for ablation and comparison experiments. Instead, we used a manual validation approach similar to VERJava <cit.> and Nguyen et al. <cit.>. Besides, since we assume the call graph generation is accurate and determine the validity of based on the validity of , this strategy may affect the validity of our results. (5) Limitation of the static analyzer. Although we used the state-of-the-art call graph generation tool Tai-e, it still has certain limitations because Tai-e <cit.> is a static analysis framework, which inherently struggles to accurately handle dynamic features, polymorphism, and runtime dependencies, which prevents Tai-e from generating completely precise call graphs. (6) Possible bias arising from different vulnerability data utilized by and Eclipse Steady. Steady manages its vulnerability data within Project KB <cit.>, which does not completely match the data we collected. This discrepancy may introduce bias in the comparison experiment results. (7) The we have augmented are not always vulnerable, which may affect the accuracy of . § RELATED WORK The most related work to our paper is software composition analysis (SCA) <cit.> of Java projects. Plate et al. <cit.> proposed a dynamic analysis to determine if the project could reach vulnerable methods in TPLs. It was implemented by the dynamic and static instrumentation techniques for unit tests and integration tests, respectively. Ponta et al. <cit.> advanced this approach and presented a code-centric and usage-based tool, named Eclipse Steady, to identify the reachability of vulnerable methods or code. Specifically, they first conducted a dynamic analysis to assess the reachability of vulnerable constructs. Then, they used the set of constructs that have actually been executed as the starting point for static analysis. Combining dynamic and static analysis, they found all constructs potentially reachable for vulnerability analysis. Despite the progress, their dynamic analysis required unit tests or integration tests as the input for vulnerability analysis, which limited its scalability and effectiveness due to the availability and quality of test code. 
Wang et.al <cit.> proposed a bug-driven alerting system that focuses on security bugs. In their approach, they directly considered the methods modified in patches as buggy library methods. INSIGHT <cit.> explores the cross-ecosystem impact of vulnerabilities, specifically determining whether a Python or Java project utilizes a vulnerable C library based on the forward cross-language vulnerability reachability analysis. Wu et.al <cit.> conducted an empirical study aiming to explore the impact of vulnerabilities in upstream libraries on downstream projects. They considered all modified functions in the vulnerability patch as vulnerable functions in libraries. By constructing call graphs for downstream projects and upstream vulnerable libraries, they investigated whether there exists paths in the projects that can invoke the vulnerable functions from the libraries. Relying on dependency management tools such as Apache Maven and Apache Ivy, Pashchenko et al. <cit.> identified dependencies with known vulnerabilities. They built the paths from projects to their vulnerable dependencies, to address the over-inflation problem when reporting vulnerable dependencies. In addition, both commercial SCA services (e.g., Snyk <cit.>, SourceClear <cit.>) and open-source SCA tools (e.g., GitHub Dependabot <cit.>, OWASP Dependency Check <cit.>) detected vulnerable TPLs based on vulnerability information from NVD <cit.>. Although some SCA commercial tools (e.g., SourceClear <cit.>, and BlackDuck <cit.>) support vulnerability reachability analysis, they do not provide open source alternatives, posing a hindrance to executing them. Moreover, their methodology for vulnerability reachability analysis like Steady's, uses call graph analysis to check if the project invokes . Therefore, we only compared with Steady. There are also other researches that focused on SCA of Android apps, usually known as TPL identification <cit.>. Most of them focused on identifying the libraries or library versions used by Android apps via similarity-based or clustering-based methods. Some studies investigated vulnerable TPLs used by projects by detecting whether the projects contained vulnerable TPLs or vulnerable TPL versions <cit.>. Specifically, OSSPolice <cit.> maintained a feature database of TPLs, and utilized a similarity-based method to identify whether the used library version was vulnerable by comparing it with the vulnerable libraries affected by CVE. Yasumatsu et al. <cit.> conducted a similar work by using LibScout <cit.> to extract the library versions used by APK and comparing them with vulnerable versions. Based on TPLs' feature generation and vulnerability collection, Zhan et al. <cit.> built a vulnerable TPL database to identify the vulnerable TPL versions used by Android apps. These studies identified vulnerable TPLs but did not analyze whether the apps accessed the vulnerable code. In summary, these studies would cause false positives through analysis only at the library level. As for , we maintain all vulnerable APIs for each vulnerable TPL version. Once projects used a specific library version, can effectively determine whether the used library version could threaten the projects by analyzing if the projects used . § CONCLUSION In this paper, we proposed , a vulnerable API detection system for TPLs, which can precisely find used by Java projects. can sift out patch-unrelated methods with high precision, and augment which are absent in patch commits, to identify relatively precise and complete . 
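The API-level check described here can be reduced to intersecting the library APIs a project invokes with the vulnerable APIs recorded for the exact library version it depends on. The sketch below is a simplified illustration of that idea rather than the actual implementation; the database layout and the method signature are hypothetical.

# Hypothetical vulnerable API database: library version -> set of vulnerable API signatures.
vul_api_db = {
    "com.alibaba:fastjson:1.2.47": {"com.alibaba.fastjson.JSON.parseObject(String)"},
}

def project_is_threatened(used_lib_version, invoked_apis, db=vul_api_db):
    vulnerable_apis = db.get(used_lib_version, set())
    hits = invoked_apis & vulnerable_apis
    # The project is only reported when it actually reaches a vulnerable API.
    return len(hits) > 0, hits

threatened, hits = project_is_threatened(
    "com.alibaba:fastjson:1.2.47",
    {"com.alibaba.fastjson.JSON.parseObject(String)"},
)
print(threatened, hits)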
Evaluation results show that our approach can effectively detect vulnerable APIs based on the constructed vulnerable API database and can find their real impact on real projects. § ACKNOWLEDGEMENT We thank the reviewers for their insightful comments. This work was supported by the National Natural Science Foundation of China (Nos. 62102197 and 62202245), and the Natural Science Foundation of Tianjin (No. 22JCYBJC01010). Fangyuan Zhang is currently a Ph.D. candidate in the College of Computer Science at Nankai University (NKU). She received her BSc degree in computer science from Jilin University in 2021. Her research focuses on software supply chain security. Lingling Fan is an Associate Professor at the College of Cyber Science, Nankai University, China. In 2017, she joined Nanyang Technological University (NTU), Singapore, as a Research Assistant and then became a Research Fellow of NTU in 2019. Her research focuses on program analysis and testing, and software security. She received 4 ACM SIGSOFT Distinguished Paper Awards at ICSE 2018, ICSE 2021, ASE 2022, and ICSE 2023. Sen Chen (Member, IEEE) is an Associate Professor at the College of Intelligence and Computing, Tianjin University, China. Before that, he was a Research Assistant Professor in the School of Computer Science and Engineering, Nanyang Technological University, Singapore. His research focuses on software security. He received 6 ACM SIGSOFT Distinguished Paper Awards. More information is available at <https://sen-chen.github.io/>. Miaoying Cai received her BEng degree in Information Security from Nanjing University of Aeronautics and Astronautics in 2023. She is currently pursuing a Ph.D. degree at Nankai University. Her research interests lie in the area of mobile security and web security. Sihan Xu received the B.Sc. and Ph.D. degrees in computer science from Nankai University in 2013 and 2018, respectively. For her research, she spent a year with the National University of Singapore. She is currently an associate professor at the College of Cyber Science, Nankai University. Her research interests include intelligent software engineering and AI security. Lida Zhao is a Ph.D. candidate at Nanyang Technological University (NTU). His research focuses on software security and software engineering, with a particular emphasis on open-source supply chain security and software composition analysis.
http://arxiv.org/abs/2409.02869v1
20240904165346
Look Into the LITE in Deep Learning for Time Series Classification
[ "Ali Ismail-Fawaz", "Maxime Devanne", "Stefano Berretti", "Jonathan Weber", "Germain Forestier" ]
cs.LG
[ "cs.LG" ]
Look Into the LITE in Deep Learning for Time Series Classification
Ali Ismail-Fawaz ([email protected]) [1], Maxime Devanne ([email protected]) [1], Stefano Berretti ([email protected]) [2], Jonathan Weber ([email protected]) [1], Germain Forestier ([email protected]) [1,3]
[1] IRIMAS, Universite de Haute-Alsace, Mulhouse, France; [2] MICC, University of Florence, Florence, Italy; [3] DSAI, Monash University, Melbourne, Australia
Deep learning models have been shown to be a powerful solution for Time Series Classification (TSC). State-of-the-art architectures, while producing promising results on the UCR and the UEA archives, present a high number of trainable parameters. This can lead to long training with high CO2 emission, power consumption and a possible increase in the number of FLoating-point Operations Per Second (FLOPS). In this paper, we present a new architecture for TSC, the Light Inception with boosTing tEchnique (LITE), with only 2.34% of the number of parameters of the state-of-the-art InceptionTime model, while preserving performance. This architecture, with only 9,814 trainable parameters due to the usage of DepthWise Separable Convolutions (DWSC), is boosted by three techniques: multiplexing, custom filters, and dilated convolution. The LITE architecture, trained on the UCR, is 2.78 times faster than InceptionTime and consumes 2.79 times less CO2 and power. To evaluate the performance of the proposed architecture on multivariate time series data, we adapt LITE to handle multivariate time series; we call this version LITEMV. To bring theory into application, we also conducted experiments using LITEMV on multivariate time series representing human rehabilitation movements, showing that LITEMV is not only the most efficient model but also the best performing for this application on the Kimore dataset, a skeleton based human rehabilitation exercises dataset. Moreover, to address the interpretability of LITEMV, we present a study using Class Activation Maps to understand the classification decision taken by the model during evaluation.
September 9, 2024
This is the author's version of an article under review in the International Journal of Data Science and Analytics. § INTRODUCTION Time Series Classification (TSC) has been widely investigated by researchers in recent years. Some TSC tasks include the classification of surgical evaluation <cit.>, action recognition of human motion <cit.>, cheat detection in video games <cit.>, interpretability <cit.>, Entomology <cit.>, etc. Thanks to the availability of the UCR archive <cit.> and the UEA archive <cit.>, the largest archives for univariate and multivariate TSC datasets, a significant amount of work has been done in this domain. Deep learning models have been proposed in the time series context for classification <cit.>, clustering <cit.>, averaging <cit.>, representation learning <cit.>, adversarial attacks <cit.>, etc. Even though deep learning approaches have proven to be very powerful for TSC, they present a large number of trainable parameters, which often leads to a long training time, inference time and storage usage. For this reason, some works, such as ROCKET and its variants <cit.>, started to question the need for such a large complexity in deep learning models for TSC. Like for images, deep learning also presents a large complexity, which limits the usage of the models on small devices such as mobile phones and robots <cit.>.
Furthermore, Large Language Models (LLM) also shown to be very effective <cit.>, and that their complexity can be decreased, while preserving performance <cit.>. In this paper, we address the methodology of reducing the complexity of deep learning models, while preserving the performance of the TSC task. We argue that a large complex model may not be necessary in order to perform well on the UCR archive. However, simply removing layers and or parameters to reduce complexity may not guarantee to preserve the performance. For this reason, the neural network architecture often requires additional techniques in order to balance between complexity and performance. In this work, we borrow existing techniques that have been efficiently used in state-of-the-art architectures on time series data. These techniques are multiplexing convolutions <cit.>, dilated convolutions <cit.>, and custom filters <cit.>. By combining these three techniques with a modified version of a small non complex model, the Fully Convolution Network (FCN) <cit.>, we propose a new architecture, named Light Inception with boosTing tEchniques (LITE). The proposed model uses only 2.34% of the number of parameters of the Inception model, while being competitive with state-of-the-art architectures. For instance, Figure <ref> shows that on the FreezerSmallTrain dataset, the classification accuracy of LITE is much higher than other approaches with way less trainable parameters. The reduction in number of parameters is made possible thanks to the usage of DepthWise Separable Convolutions (DWSC) <cit.>. The additional techniques used in this proposed architecture, multiplexing, dilated, and custom convolutions, have the advantage of only slightly increasing the number of parameters by about 1,000. To position the proposed architecture among the state-of-the-art, we compared not only the accuracy but also the training time and number of parameters. A comparison of CO2 and Power consumption using CodeCarbon [https://codecarbon.io/] python library is also presented. Assessing the utility of LITE in the case of both univariate and multivariate time series classification datasets is essential. For this reason, we propose two versions: LITE for univariate time series and LITEMV for multivariate time series. LITEMV differs form LITE only in the first layer, where we use DWSC instead of standard convolutions. This is not done in the case of the first layer of LITE, when the input is univariate because it would lead to learn only one filter, which is not suitable for the learning task (more details on DWSC are given in Section <ref>. Moreover, to address a real world application, we assess the performance of LITE in the case of human rehabilitation exercises, represented as multivariate time series. For this task, the goal is to analyze the rehabilitation exercise performed by a patient and associate a score representing how good the exercise has been carried out. We rely on the usage of the Kimore dataset <cit.>, a skeleton based human rehabilitation exercises dataset. Each sample of this dataset is a multivariate time series of skeleton poses changing through time. These skeleton poses are extracted using Kinect-v2 cameras <cit.>. In the original dataset, each skeleton sequence is associated with a continuous label for each sample representing a score between 0 and 100 given by an expert to evaluate the performance of the patient. 
Since this is a regression problem, we re-orient it to a classification task by classifying whether or not the patient performed the exercise correctly by setting a threshold to divide the scores (threshold score is 50). To understand the decision making process of our proposed deep learning model on this task, we also present a study of the interpretability of LITEMV using the Class Activation Map (CAM) <cit.>. The main contributions of this work are: * The LITE model is presented as a new architecture for TSC, with only 2.34% of the number of parameters of the Inception model; * An adaptation of LITE to handle multivariate data (LITEMV); * Extensive experiments showing that LITE achieves state-of-the-art results on the UCR archive ; * Extensive experiments showing that LITEMV achieves promising results on some datasets of the UEA archive; * A comparison based on the number of trainable parameters, number of FLOPS, training time, CO2 and Power consumption; * A deeper analysis presented as an ablation study to show the impact of each technique added to boost the proposed model; * The application of LITEMV in a real application of evaluating human rehabilitation exercises with interpretability analysis using CAMs. In what follows, we present some related work in Section <ref>, discuss the details of our proposed architecture in Section <ref>, present some results compared to other approaches in Section <ref>, and conclude by drawing future work in Section <ref>. § BACKGROUND AND RELATED WORK Time Series Classification (TSC) was widely investigated in the literature. Some work addressed this problem using machine learning algorithms by comparing similarity metrics between the time series <cit.>, decisions based on random forest algorithm <cit.>,   The problem of most of those algorithms is that they require huge amount of CPU time to perform their calculations, and can not be parallelized on a cluster of GPUs. For these reasons, deep learning for TSC is being used in the recent years. Even though the performance of deep learning is better than machine learning algorithms, the number of parameters to be optimized may be very high. In what follows, we first define the problem at hand, then we present some works that tackled the TSC problem using machine and deep learning techniques. Finally, we present some works that addressed the training time problem of deep learning models and the large number of parameters. §.§ Definitions Let x be a multivariate time series of length L, and let 𝒟={x_i,y_i}_i=0^N-1 be a dataset of N multivariate time series x_i with their corresponding labels y_i. The goal of this work is to construct an algorithm to learn how to correctly classify each input time series to its corresponding label. §.§ Machine Learning for TSC The basic approach to solve TSC tasks is by using the Nearest Neighbour (NN) algorithm. In order to use this algorithm, a specific similarity metric for time series data should be defined. In <cit.>, the authors used the Dynamic Time Warping algorithm (DTW) to define a similarity metric for the NN algorithm. However, this algorithm does not have the ability to extract features from the input samples. The work in <cit.> also used the same algorithm but with an upgraded version of DTW called shapeDTW that aligns a local neighborhood around each point. The main limitation of the DTW algorithm and its variants is the time complexity, which is a function of the time series length, , 𝒪(L^2). 
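To illustrate where this quadratic cost comes from, the following is a minimal dynamic-programming sketch of DTW between two univariate series; it is a plain implementation for illustration, without the lower-bounding or warping-window optimizations often used in practice.

import numpy as np

def dtw_distance(x, y):
    # Cost matrix of size (len(x)+1) x (len(y)+1): the two nested loops over
    # all pairs of time steps are what gives the O(L^2) complexity.
    n, m = len(x), len(y)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = (x[i - 1] - y[j - 1]) ** 2
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return np.sqrt(cost[n, m])

print(dtw_distance(np.array([0.0, 1.0, 2.0]), np.array([0.0, 2.0, 2.0])))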
§.§ Deep Learning for TSC In this section, we present the work done on TSC using deep learning approaches. The simplest architecture is the Multi Layer Perceptron (MLP) proposed in <cit.> that uses fully connected layers and dropout operations. This architecture is limited given the fact that it ignores the temporal dependency in a time series. The Fully Convolution Network (FCN) was also proposed in <cit.> that uses 1D convolution operations. In this model, the backpropagation algorithm finds the best filters to extract features from the time series, and correctly classifies the samples. In this model, convolutions account for temporal dependencies in time series data, and they are also independent of the input time series length. The authors in <cit.> also proposed the ResNet model, which uses the residual connections <cit.> to solve the vanishing gradient problem. A comparative study in <cit.> shows that using convolutions, especially ResNet, outperforms other models that use multi-scale transformation or pooling layers with convolutions <cit.>. ResNet and FCN use Batch Normalization and ReLU activation instead of pooling operations after each convolution layer to avoid overfitting. The state-of-the-art model in deep learning for TSC on the UCR archive <cit.> is InceptionTime <cit.>, where the authors adapted the original Inception model on images for time series data. InceptionTime has the ability to detect multiple patterns of different length given to the multiplexing technique. This technique comes down to learning multiple convolution layers on the same input but with different characteristics. It is important to note that InceptionTime is an ensemble of five Inception models each trained separately. While FCN, ResNet have also been evaluated for multivariate data, some additional deep learning models have been proposed to address the task of TSC in the case of multivariate data <cit.>. More recently, the Disjoint-CNN architecture <cit.>, considered a new methodology for applying convolutions on multivariate time series that separates the temporal and spatial patterns extraction into two steps. The reference deep learning model for multivariate TSC on the UEA archive <cit.> is ConvTran <cit.>, a Transformer based architecture. ConvTran uses a Disjoint-CNN based encoder followed by self-attention layers <cit.> with novel absolute and relative positional encoders. §.§ Reducing Model Complexity Even though deep learning for TSC shown to be very effective, it still has some issues. One of these issues we address in this work is the large number of parameters to be optimized, which increases the training time as well. Recently, in <cit.> a new approach was proposed, called ROCKET, that also includes convolution operations, but is way faster than InceptionTime. They proposed not to learn few filters with a backpropagation algorithm, but instead randomly generate a large number of filters with different characteristics. These characteristics include the length of the filter, the bias value, the dilation rate,   It has been shown that on the UCR archive, no statistical significant difference can be found between InceptionTime and ROCKET. The main advantage of ROCKET compared to InceptionTime is the training and inference time. Some adaptations of ROCKET were proposed in order to boost its performance even more such as MiniROCKET <cit.> and MultiROCKET <cit.>. Knowledge distillation <cit.> was also approached for the TSC model called FCN <cit.>. 
In this study, the authors proposed a smaller variant of FCN with a lower number of convolution layers and filters to learn. Furthermore, the work in <cit.> proposed to hand-craft some custom convolution filters instead of randomly generating them. Those hand-crafted filters are constructed in a way to get activated on increasing and decreasing intervals as well as peaks in the time series. By using these filters, the authors were able to construct a Hybrid FCN (H-FCN). Results on the UCR archive have shown that H-FCN is statistically significantly better than FCN and is competitive with InceptionTime. The H-FCN model uses the custom filters in parallel to the learned filters in the first layer. Some works to optimize the complexity of large models were proposed in Computer Vision as well. For instance, the authors in <cit.> proposed the usage of DWSC instead of standard ones. The MobileNet architecture proposed in <cit.> proved to be very competitive with state-of-the-art models on ImageNet <cit.> with way less complexity. In Natural Language Processing, some works proposed the usage of Small Language Models (SLM) as a one-shot learning approach <cit.>. The authors showed that, with far fewer parameters than GPT-3 <cit.>, their model can have no significant difference in performance. § PROPOSED APPROACH §.§ Convolutions for TSC Multiple Convolution Neural Networks (CNNs) were proposed for the task of TSC and they all proved how they outperform other methods. The Fully Convolution Network (FCN) is a simple three layered network, where each layer is composed of 1D convolutions followed by a batch normalization and a ReLU activation function. As FCN is only composed of simple 1D convolution layers, its performance generally lags behind more advanced architectures using residual connections (ResNet) or Inception modules (InceptionTime). In this paper, we present an adaptation of the FCN architecture that only has 2.34% of the number of parameters of the Inception model. Given this significant drop in the number of parameters, boosting techniques are used in order to preserve the performance of Inception. First, we discuss two ways of applying convolutions with a smaller number of parameters, while preserving the performance. The first approach uses standard convolutions with BottleNecks (BN), and the second uses DWSC. §.§.§ Standard Convolutions with BottleNecks Many approaches that use CNN based architectures, such as ResNet <cit.>, suffer from the problem of a high number of parameters. For this reason, the authors of InceptionTime <cit.> proposed to use a BottleNeck operation in order to reduce the number of parameters. This BottleNeck operation is made of 1D convolutions with a unit kernel size. The following example shows the impact of this operation on reducing the number of parameters. Suppose at a depth d in the network, the input number of channels is C_in. The following convolution layer of depth d+1 projects the input into a new space with a number of channels C_out using a kernel of size k. On the one hand, without a BottleNeck operation, the number of learned parameters is C_in*C_out*k. On the other hand, with a BottleNeck operation that uses C_bn filters of size 1, the number of learned parameters is C_in*C_bn*1 + C_bn*C_out*k. This operation reduces the number of parameters if and only if the following inequality is true: C_in*C_bn + C_bn*C_out*k < C_in*C_out*k, which indicates that the condition on the BottleNeck operation is: C_bn < (C_in*C_out*k)/(C_in + C_out*k).
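As a quick numerical check of this condition, the following sketch counts the learned weights with and without a BottleNeck; the channel and kernel sizes are illustrative only and are not the ones used in the proposed architecture (biases are ignored for simplicity).

def conv1d_params(c_in, c_out, k):
    # Weights of a standard 1D convolution.
    return c_in * c_out * k

def bottleneck_params(c_in, c_bn, c_out, k):
    # BottleNeck of c_bn filters of size 1, followed by the k-sized convolution.
    return c_in * c_bn * 1 + c_bn * c_out * k

c_in, c_out, k, c_bn = 128, 128, 40, 32
print(conv1d_params(c_in, c_out, k))            # 655360
print(bottleneck_params(c_in, c_bn, c_out, k))  # 167936
# The condition C_bn < (C_in*C_out*k) / (C_in + C_out*k) holds for these values:
print(c_bn < (c_in * c_out * k) / (c_in + c_out * k))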
The goal of the BottleNeck operation is to learn the same number of filters in the output channels (C_out), while reducing, at the same time, the number of intermediate learned filters between the input and output channels (C_in*C_out). §.§.§ DepthWise Separable Convolution (DWSC) DWSC can be divided into two phases: DepthWise convolution (Phase 1), and PointWise convolution (Phase 2). A visual representation of the DWSC operation is presented in Figure <ref>. In standard convolutions, if the input sample of length L has C_in channels and the desired output is a space with C_out channels using a kernel of size k, then the number of learned parameters is C_in*C_out*k. The number of multiplications is L*C_out*C_in*k. DepthWise convolution In this phase, if the convolution is done using a kernel of size k, then the number of learned filters is C_in, and the output number of channels will be C_in (the same as the input). In other words, for each dimension of the input time series, one filter is learned. PointWise convolution This phase projects the output of the DepthWise convolution into a space with a desired number of channels C_out. This is done by applying a standard convolution with C_out filters of kernel size 1 (a BottleNeck). Hence, by combining these two phases, the number of learned parameters in a DWSC becomes C_in*k + C_in*C_out. The following calculation finds the condition under which the DWSC has fewer parameters to learn compared to the standard one:
C_in*C_out*k (standard conv.) > C_in*k + C_in*C_out (separable conv.)
C_in*C_out*k > C_in*(k + C_out)
C_out*k > k + C_out
k*(C_out - 1) > C_out
k > C_out/(C_out - 1),
where the right-hand side approaches 1 as C_out grows. This means that if the number of desired output channels is high enough (if C_out >= 3 the previous inequality holds), then DWSC have less parameters to learn compared to the standard convolutions. The number of multiplications performed in the DWSC is C_in*L*k + C_in*C_out*L*1. The following calculation finds the second condition for when DWSC have fewer multiplications to perform compared to standard convolutions:
C_in*C_out*L*k (standard conv.) > C_in*L*k + C_in*C_out*L*1 (separable conv.)
C_in*C_out*L*k > C_in*L*(k + C_out)
C_out*k > k + C_out
k*(C_out - 1) > C_out
k > C_out/(C_out - 1),
where, again, the right-hand side approaches 1 as C_out grows. This concludes that DWSC have fewer parameters to learn and fewer multiplications to perform compared to standard convolutions. In this work, we present a comparison between the usage of DWSC and standard convolutions with BottleNecks. Our results demonstrate that with the techniques added to boost DWSC, we can maintain performance, while significantly reducing the number of parameters to optimize. After defining two techniques to use convolutions in a more optimized way concerning the number of parameters and multiplications, some other techniques should be defined as well. These techniques aim to minimize the impact of the parameter reduction in the convolution operations explained above. §.§ Boosting Techniques The following techniques are borrowed from the literature. Multiplexing Multiplex convolution was proposed in the architecture of Inception <cit.>. Its main idea is to learn in parallel different convolution layers of different kernel sizes. A multiplexing example is shown in Figure <ref>. Dilation Dilated convolutions were not much explored for deep supervised learning on TSC but they were used in self-supervised models showing to be very effective <cit.>. Dilation in convolution filters defines the number of empty cells in the kernel.
Suppose a kernel of length 3 has the following parameters k = [k_0,k_1,k_2]; a dilation of rate 2 will produce the following kernel k = [k_0, skip, k_1, skip, k_2]. The skip parameter indicates that the convolution layer will not use the values of the input aligned with this index of the kernel. A visualization of the dilation effect on convolution can be seen in Figure <ref>. Dilation helps increase the receptive field of a model without having to add deeper layers because the dilated kernel will find the deeper combinations in the same layer. Custom Filters Custom filters were proposed in <cit.>. The authors hand-crafted some kernels in order to detect specific patterns in the input time series. These filters were then added to Inception and results on the UCR archive have shown that such filters can help with the generalization and boost the performance. This is due to the fact that these filters are generic and fixed (not learned). This allows the model to focus on learning new patterns that are harder to detect. §.§ Proposed architecture §.§.§ Light Inception with boosTing tEchniques (LITE) In our proposed architecture, we reduce the number of parameters, while preserving performance. This is obtained by using DWSC and the previously explained boosting techniques. First, custom filters are used in parallel to the first layer. Second, in this first layer, multiplexing convolution is used in order to detect different patterns of different characteristics (three parallel convolution layers). Third, the second and third layers present the usage of dilation in their kernels. It is important to notice that for the first layer, standard convolutions are used instead of DWSC. This is due to the fact that the input time series is univariate and a DWSC would learn only one filter. A summary of the architecture is given in Figure <ref>. §.§.§ LITEMV: Adapting LITE for Multivariate Time Series Data The first layer of the LITE architecture uses the standard convolution in the case of univariate data. This is due to the fact that using DepthWise Separable Convolutions in the first layer means the model would only learn one filter and then re-scale the output with one learned value. For this reason, we kept the usage of Standard Convolutions in the first layer. However, in the case of multivariate data, the model will learn a filter per channel and then learn how to combine them in the PointWise Convolution step. A second important point to note is that the custom filters in the first layer of LITE are basically Standard Convolutions. This means that, in the case of multivariate input, each custom filter will be applied to each channel independently and the outputs are summed. This is done for each of the used custom filters. This can be an issue in the case of multivariate input, given that we assume an equal importance per custom filter for all channels when summing the outputs. In the multivariate version of LITE, we instead concatenate the outputs of all custom filter operations on all channels. We refer in the rest of this work to the multivariate version of LITE as LITEMV. §.§.§ Ensemble Ensemble learning is a technique of combining the predictions of multiple models in order to reduce the variance, and it has been shown to be very effective in the literature <cit.>. Applying an ensemble of multiple classification models is equivalent to finding the average predicted distribution of all the models. This average distribution is finally used for choosing the predicted class.
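As a concrete illustration of this averaging step, a minimal sketch is given below; the model list is hypothetical and each model is assumed to expose a predict method returning class probabilities.

import numpy as np

def ensemble_predict(models, x):
    # Average the predicted class distributions of all models,
    # then pick the class with the highest average probability.
    probas = np.stack([m.predict(x) for m in models], axis=0)
    mean_proba = probas.mean(axis=0)
    return mean_proba.argmax(axis=1)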
This motivated us to build an ensemble of multiple LITE models to form LITETime and LITEMVTime (by adding the suffix Time following <cit.>). Moreover, the usage of an ensemble in the case of LITE is also motivated by its small architecture and the fact that it can boost less complex architectures even more. § EXPERIMENTAL EVALUATION §.§ Datasets The UCR archive <cit.> is the largest directory for the TSC problem. It is publicly available <cit.>. The archive contains 128 datasets of univariate TSC tasks. Some tasks involves Electrocardiography (ECG) time series data and some are observations of Sensors,   To evaluate the performance of the proposed architecture on more datasets, we detail in Section <ref> some experiments on multivariate time series data. The usage of multivariate data is a bit different given the uniqueness of this data of having multiple variables changing through time. We rely on the UEA Multivariate Time Series Classification (MTSC) archive <cit.>. This archive consists of 30 different datasets ranging from medical data recording ECG signals to speech recognition. The number of channels in this archive range from 2 to 1345. Each dataset is split into a training and a testing set. The labels are available for all the samples. In order to train the model on a normalized dataset, we apply z-normalization over all the samples independently. This normalization technique reduces the time series samples to a zero mean and unit variance sequence. §.§ Implementation Details Our results were obtained on the UCR archive using a GTX 1080 GPU with 8GB of VRAM. In the experiments, we accounted for the training time, inference time (testing time), CO2 and Power consumption. The model used for testing is the best model during training, chosen by monitoring the training loss. The Adam optimizer was used with a Reduce on Plateau learning rate decay method by monitoring the training loss. For the Adam optimizer we used the default set up of the Tensorflow Python model. The model is trained with a batch size of 64 for 1500 epochs. For the hyper-parameters of the LITE architecture presented in Figure <ref>, the following setup was used: N_i=6, variations in kernel sizes =[2,4,8,16,32,64]; N_d=6, variations in kernel sizes =[2,4,8,16,32,64]; N_p=5, variations in kernel sizes =[6,12,24,48,96]; N=32; K=40; D0=1 in order to start with no dilation and increase with depth (D1=2 and D2=4). The same hyper-parameters were used in the case of LITEMV. The source code is publicly available at . §.§ Results and Discussion Below, we provide the comprehensive set of experimental results acquired during the course of this study. First, we present an efficiency comparison between LITE and other architectures in the literature. This study is done over the case of univariate data only because in the case of multivariate data the study will depend on each dataset and its number of channels. Second, we present an ablation study to make sure that LITE utilizes all of its boosting techniques during training. Third, we present the performance using the accuracy metric on both the UCR and UEA archives. Finally, we evaluate the usage of LITEMVTime in a real world application for human motion rehabilitation on the Kimore dataset. 
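A minimal sketch of this per-sample z-normalization is given below, assuming the data is stored as a numpy array of shape (n_samples, length, n_channels); the epsilon guard is an implementation detail added here to handle constant series.

import numpy as np

def z_normalize(x, eps=1e-8):
    # Normalize each sample (and each channel) independently to zero mean and unit variance.
    mean = x.mean(axis=1, keepdims=True)
    std = x.std(axis=1, keepdims=True)
    return (x - mean) / (std + eps)

x = np.random.randn(8, 100, 3)  # 8 samples, length 100, 3 channels
x_norm = z_normalize(x)
print(x_norm.mean(axis=1).round(6).max(), x_norm.std(axis=1).round(3).min())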
§.§.§ Number of Parameters, FLOPS Training Time, Testing time, CO2 and Power Consumption Table <ref> summarizes the number of parameters, the number of FLoating-point Operations Per Second (FLOPS), training time, inference time, CO2 and Power consumption over the 128 datasets of the UCR archive. We show the number of trainable parameters of the architecture without the last classification Fully Connected layer because it depends on each dataset (number of classes). The rest of the information is summed over the 128 datasets of the UCR archive and averaged over five different runs. First, the table shows that the smallest model in terms of number of parameters is the LITE with 9,814 parameters. This is mainly due to the usage of DWSC instead of standard ones. Compared to FCN ResNet and Inception, LITE has only 3.7% 1.95% and 2.34% of their number of parameters, respectively. Second, the fastest model in the training phase is LITE, with a training time of 0.62 days. LITE is 2.79, 3.08 and 2.71 time faster than FCN, ResNet and Inception, respectively. Third, LITE is the model that consumes the smallest amount of CO2 and Power, 0.1048 grams and 0.2468 Watts, respectively. Compared to the other approaches, LITE presents the fastest and most ecologic model for TSC compared to FCN, ResNet and Inception. We believe, given the factors explained above, that LITE can be used for the deployment of deep learning for TSC in small machine such as mobile phones. §.§.§ Accuracy Performance on the UCR Archive In what follows, a one-vs-one comparison is presented between the models in order to show that LITE can preserve the performance of the more complex architectures. This one-vs-one comparison comes down to a Win/Tie/Loss count on the 128 datasets of the UCR archive between two classifiers. This comparison is visualized in Figures <ref> and <ref>. Each point in these plots represents one dataset of the UCR archive. The x-axis contains the accuracy value on the test set using classifier-x and the y-axis the ones using classifier-y. In order to evaluate the significance of the Win/Tie/Loss comparison, a statistical Wilcoxon Signed Rank Test <cit.> is used. This test will return a statistical value, the P-Value, representing how significant the difference is between the two classifiers. If the P-Value is low, this would mean that the difference in performance between the two classifiers is statistically significant. If the last condition is not true, it means that there are not enough examples (datasets) to find a statistical significant difference between the classifiers. This Wilcoxon test needs a P-Value threshold, usually in the literature a 0.05 (or 5%) threshold is used. On the one hand, the results presented in Figure <ref> show that LITE beats FCN significantly (low P-Value), and is statistically not significant than ResNet (high P-Value). The results compared to ResNet are impressive given the small complexity of LITE (almost 1.95% of ResNet's number of parameters). On the other hand, the comparison shows that LITE still is not significantly close to Inception. To study more the reason why LITE performs not as good as Inception with a large margin (more than 10%), we presented some characteristics of those datasets in Table <ref>. This table shows that some of the datasets have either long time series or small training set. Firstly, this indicates that Inception is better than LITE on long time series given its large receptive field (deeper architecture). 
Secondly, this points out that Inception generalizes better better in the case where the dataset presents a small training set so it can generalize better. Given that Inception still beats LITE, an ensemble comparison shows the real performance of the proposed architecture. This is due to the fact that LITE has way less parameters (2.34% of Inception's number of parameters), which can make it sensitive to a higher variance when training with different initialization. Applying an ensemble removes this variance as explained before. The comparison between LITETime with InceptionTime and ROCKET is presented in Figure <ref>. This comparison shows that, given the 128 datasets of the UCR archive, there are not enough datasets to find a statistical significance in the difference of performance with InceptionTime and ROCKET. We included ROCKET in the ensemble comparison with LITETime because the motivation in ROCKET is to replace the ensemble technique by using random filters instead of learning them starting with different initialization. Those last results suggest that in order to get a good performance on the UCR archive, a large complex architecture with a high number of parameters is not always needed. For a multi-classifier comparison, the average rank of each model is shown in a Critical Difference Diagram <cit.> (CD Diagram) based on the ranking classifiers given the average rank over multiple datasets, the two tailed Wilcoxon Signed-Rank Test with the Holm multiple test correction <cit.>. To generate the CD Diagram, we used the publicly available code . This diagram also presents connections between classifiers, when the difference in performance is not statistically different following the Wilcoxon Signed Rank Test. A CD Diagram is presented in Figure <ref> and shows that LITETime comes 3rd on the average rank between ROCKET and InceptionTime. The diagram also shows that FCN performs statistically significantly worse than LITE on the average rank. Furthermore, on the UCR archive, no statistical significance can be observed between ResNet and LITE. This last comparison shows the real impact of this work where LITE has almost 1.95% (Table <ref>) of ResNet's number of parameters. These conducted results show that on a large amount of cases, there is no need for a complex model with high number of parameters to achieve good performance. Furthermore, a new Multi-Comparison Matrix (MCM) evaluation tool was proposed that is stable to the variation of the addition and removal of classifiers <cit.>. The MCM is presented in Figures <ref> and <ref> to compare LITE and LITETime, respectively, to other approaches in the literature. The MCM uses the average accuracy on the UCR as an ordering metric instead of the average rank. As presented in the MCMs, LITE and LITETime perform better than FCN and ResNet on the average accuracy and is closer to the performance of InceptionTime, which is not significantly different than LITETime (high p-value). MCM has an advantage over the usage of the CD Diagram of being stable with the addition and removal of classifiers, given that the average performance would not change in this scenario unlike the average rank. Another advantage is not using a multiple test correction for the P-Value significance test. §.§ Ablation Study - LITE §.§.§ Impact of Additional Techniques The proposed LITE architecture, uses multiple techniques in order to improve the performance. 
§.§ Ablation Study - LITE §.§.§ Impact of Additional Techniques The proposed LITE architecture uses multiple techniques in order to improve its performance. To show the impact of each technique on the proposed architecture, an ablation study is presented in this section. First, LITE is stripped of the three techniques: dilation, multiplexing and custom filters. Second, given that the multiplexing convolutions performed in the first layer consist of three parallel layers with n filters each, the Striped-LITE learns a total of 3n filters in its first layer. The rest of the architecture is the same, using DWSC without dilation. We then add each boosting technique separately to the stripped-down LITE model and evaluate its performance. The results of this ablation study are visualized in Figure <ref> in the form of a Heat Map. Each cell of the Heat Map contains the Win/Tie/Loss count when evaluating the test accuracy on the UCR archive. The P-Value statistic is reported in each cell in order to assess the significance of the difference in performance; the P-Value is emphasized in bold when it is lower than the specified threshold (0.05). The colors of the Heat Map represent the difference in average accuracy. Results show that adding custom filters in the first layer, as well as using multiplexing convolutions in the first layer, significantly boosts the performance. The colors of the Heat Map indicate that adding the custom filters also has a positive impact on the average accuracy, though it adds some parameters. This is not the case for the multiplexing convolutions; however, we believe that their small negative impact on average accuracy (0.34% overall) is outweighed by the fact that multiplexing reduces the number of parameters and wins significantly over the majority of the datasets. The addition of the dilated convolution is shown to have no statistically significant impact on performance (P-Value > 0.05). However, the average difference in accuracy shows that, most of the time, using dilated convolutions improves the results. Given that dilation does not add parameters and on average boosts the performance, we keep it in the LITE architecture. This is because dilation increases the receptive field, which can be a boosting feature for large datasets. The reason dilation can sometimes have a negative effect is that some of the datasets in the UCR archive do not require a large receptive field. Altogether, the LITE model (with the boosting techniques) has fewer parameters than the Striped-LITE, while preserving performance compared to state-of-the-art models. The decrease in the number of parameters when using all the boosting techniques together comes from the fact that multiplexing removes more parameters than the custom filters add. Lastly, Figure <ref> shows the average rank of the models, as in the CD Diagram explained in Section <ref>. The average rank of Add-Custom-Filters is the lowest, while the Striped-LITE has the highest rank. Therefore, the worst model among the four presented in the Heat Map is the LITE stripped of all boosting techniques. §.§.§ Impact of DWSC To show the effect of DWSC, we replace them with standard convolutions followed by a BottleNeck. To get a less noisy comparison, we use ensembles. Note that the use of the ensemble technique is necessary in this case because, by removing the DWSC, the difference in the number of parameters becomes very large: LITE has almost 11% of the compared model's number of parameters (the compared model has around 85,000 parameters).
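To make the parameter gap behind this choice concrete, the following sketch counts the trainable weights of a standard 1D convolution versus a depthwise separable one; the channel and kernel sizes are illustrative and are not the exact LITE hyperparameters.

```python
# Illustrative parameter count: standard 1D convolution vs. depthwise separable
# convolution (DWSC). The channel/kernel sizes below are made up for
# illustration and are not the exact LITE configuration.
def standard_conv_params(c_in, c_out, kernel, bias=True):
    return c_in * c_out * kernel + (c_out if bias else 0)

def dwsc_params(c_in, c_out, kernel, bias=True):
    depthwise = c_in * kernel + (c_in if bias else 0)      # one filter per input channel
    pointwise = c_in * c_out * 1 + (c_out if bias else 0)  # 1x1 convolution mixing channels
    return depthwise + pointwise

c_in, c_out, kernel = 32, 32, 40
std = standard_conv_params(c_in, c_out, kernel)
sep = dwsc_params(c_in, c_out, kernel)
print(f"standard: {std}, separable: {sep}, ratio: {sep / std:.2%}")
```

With these toy sizes the separable layer needs only a few percent of the weights of the standard one, which is the effect exploited throughout LITE.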
In Figure <ref>, the one-vs-one comparison between LITETime and LITETime with standard convolutions is presented. Results show that the usage of DWSC does not harm performance: the P-Value is high (0.4556), meaning that the difference in performance is not statistically significant even though the model with DWSC has far fewer parameters, mainly thanks to the use of DWSC. §.§.§ Number of LITE Models in the Ensemble Given that LITETime is an ensemble of multiple LITE models, we stick to using five LITE models in LITETime in order to be fairly comparable to InceptionTime (an ensemble of five Inception models). It has been shown experimentally in <cit.> that no significant difference in performance can be found on the UCR archive when more than five Inception models are considered in the InceptionTime ensemble. However, in the case of LITE, given its small architecture, it is more prone to variance and less robust than Inception. For this reason, we believe that the number of LITE models in the LITETime ensemble can be higher. To test this hypothesis experimentally, we trained ten different LITE models on all the datasets of the UCR archive. Ensembles of 1, 2, 3, …, 10 models are constructed by averaging over all possible ensemble combinations. For instance, to construct LITETime-3 (an ensemble of three LITE models), we construct the ensembles of all possible combinations of three LITE models from the pool of ten trained models. We present the results in the CD diagram in Figure <ref>. It can be seen from Figure <ref> that, in the case of LITE, LITETime-5 (an ensemble of five LITE models) is not the limit of what can be achieved; instead it is LITETime-7. This is possible because LITE is almost 42 times smaller than Inception: accuracy is enhanced while LITETime still exhibits considerably smaller complexity than InceptionTime. LITETime-5 has almost 2.34% of InceptionTime's trainable parameters, and LITETime-7 increases this percentage to only 3.27%. This is illustrated with an example on the Beef dataset of the UCR archive in Figure <ref>, where we plot how the number of models in LITETime and InceptionTime changes the performance on unseen data.
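The construction of LITETime-k ensembles described above can be sketched as follows; `probas` stands for a hypothetical array of per-model class probabilities on a test set, and the class probabilities of each combination of k models are averaged before taking the argmax.

```python
# Sketch of building LITETime-k ensembles from a pool of trained LITE models by
# averaging class probabilities over every combination of k models.
# `probas` is a hypothetical array of shape (n_models, n_samples, n_classes).
from itertools import combinations
import numpy as np

def ensemble_accuracies(probas, y_true, k):
    n_models = probas.shape[0]
    accs = []
    for subset in combinations(range(n_models), k):
        avg_proba = probas[list(subset)].mean(axis=0)   # average the softmax outputs
        y_pred = avg_proba.argmax(axis=1)
        accs.append((y_pred == y_true).mean())
    return float(np.mean(accs))   # mean accuracy over all possible k-model ensembles

# Toy example: 10 models, 60 test samples, 3 classes, random probabilities.
rng = np.random.default_rng(0)
probas = rng.dirichlet(np.ones(3), size=(10, 60))
y_true = rng.integers(0, 3, size=60)
for k in (1, 3, 5, 7):
    print(k, ensemble_accuracies(probas, y_true, k))
```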
§.§ Accuracy Performance on the UEA Archive In Table <ref>, we present the accuracy performance of LITEMVTime, LITETime and five competitors: ConvTran, InceptionTime, Disjoint-CNN, FCN and ResNet. The accuracies of all five competitors are reported from the ConvTran paper <cit.>. For the LITEMVTime model, we can see that when it wins over the other models, a significant gap is observed between the accuracies. This can also be clearly seen in the MCM plot of LITEMVTime against the rest of the models in Figure <ref>. LITEMVTime comes second in average performance, outperforming Disjoint-CNN and InceptionTime. LITEMVTime wins against ConvTran on no more than seven datasets, but a closer look at Table <ref> shows that when LITEMVTime wins, it can do so by a large margin of accuracy. For instance, on the EigenWorms dataset, the highest accuracy reported by ConvTran is 59.34%, whereas LITEMVTime produces an accuracy of 93.89% and LITETime an accuracy of 95.42%. To study in more detail the cases where LITEMVTime works better than ConvTran by a significant margin, we present in Figure <ref> the difference in performance between these two models as a function of the number of training samples per class. This study aims to identify any common characteristics of the cases where LITEMVTime works much better than ConvTran. From Figure <ref>, we can see that the most significant differences in performance occur for three datasets: StandWalkJump, EigenWorms and EthanolConcentration. These datasets do not share the same range in the number of training examples. For instance, StandWalkJump has a small training set, with 4 training examples per class and 12 training examples in total, whereas the EthanolConcentration dataset has 66 training examples per class and 261 training samples in total. Another way to study why these datasets work better with LITEMVTime than with ConvTran is to look at the number of dimensions, taking advantage of the fact that these are multivariate time series. In Figure <ref>, we present the same information as in Figure <ref> but as a function of the number of dimensions of the MTS datasets. It can be seen that the same three datasets have a small number of dimensions, and that ConvTran always wins when the number of dimensions increases. §.§ Experiments on Human Motion Rehabilitation To assess the performance of LITEMVTime in a real-life application, we consider the human rehabilitation domain. In this domain, data are extracted from patients in the form of 3D skeleton-based sequences, and a human expert has to assess for each patient whether they are performing a movement (rehabilitation exercise) correctly or not. This is a very important application and is highly relevant to this work, given that this type of data can be regarded as multivariate time series. Each recorded sequence tracks a specific number of joints in a three-dimensional space; in other words, each sequence is a multivariate time series with 3*J channels, where J is the number of human joints. For this experiment, we used the Kimore dataset <cit.>. This dataset contains recorded sequences, in the form of videos, of patients performing rehabilitation exercises, which are then transformed into numerical MTS using Kinect v2 <cit.>. The dataset contains both healthy and unhealthy subjects performing five different rehabilitation exercises. A human expert then assesses the quality of each performed exercise by providing a score between 0 (bad) and 100 (good). A visualization of the distribution of these scores for each exercise, for healthy and unhealthy subjects, is presented in Figure <ref>. It can be seen, for instance, that unhealthy subjects tend to have low scores while healthy subjects tend to have high scores. Given that this is originally a regression dataset, we recast the task so that a deep learning model evaluates the performance of a subject regardless of whether they are healthy or not. The evaluation is done as follows: * if the subject's score is lower than 50, the exercise is considered to be badly performed; * if the subject's score is higher than 50, the exercise is considered to be well performed. Some examples of the five exercises are presented in Figure <ref>. Each recorded sequence has a different length, which should be fixed to one common length in order to use deep learning models. We resampled [https://docs.scipy.org/doc/scipy] all the samples in the data to the average length, which is 748. The sequences contain 18 human joints, each in a 3D space.
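A minimal sketch of this preprocessing is given below, assuming the sequences are already available as NumPy arrays of shape (length, 54) (18 joints in 3D) and the expert scores as a list; it resamples each sequence to the common length of 748 with scipy and thresholds the score at 50 to produce the binary label.

```python
# Sketch of the preprocessing described above: resample every variable-length
# skeleton sequence to a common length and turn the expert score into a binary
# good/bad label. `sequences` and `scores` are hypothetical inputs.
import numpy as np
from scipy.signal import resample

TARGET_LENGTH = 748  # average sequence length of the Kimore data used here

def preprocess(sequences, scores, target_length=TARGET_LENGTH):
    X = np.stack([resample(seq, target_length, axis=0) for seq in sequences])
    y = (np.asarray(scores) > 50).astype(int)  # 1: well performed, 0: badly performed
    return X, y

# Toy example: three sequences of different lengths with 54 channels each.
rng = np.random.default_rng(0)
sequences = [rng.normal(size=(n, 54)) for n in (600, 748, 900)]
X, y = preprocess(sequences, scores=[30.0, 75.0, 55.5])
print(X.shape, y)  # (3, 748, 54) [0 1 1]
```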
The dataset contains 71 examples per exercise, where each sequence is a recording of one subject. For this reason, for each exercise we randomly split the dataset into 80%-20% train/test sets. To make sure that samples of each class exist in both splits, the random split is stratified by the labels (good/bad). All samples are z-normalized to have zero mean and unit standard deviation. In Table <ref>, we present the accuracy performance for each of the five exercises using three deep learning models from the literature (FCN, ResNet and InceptionTime), our proposed architecture LITEMVTime, and a baseline classifier, 1-Nearest Neighbour with Dynamic Time Warping (1-NN-DTW) <cit.>. The experiments using 1-NN-DTW are conducted with the aeon Python package <cit.>. It can be concluded from the table that LITEMVTime is the best-performing model for this task, according to both the average performance and the average rank over all exercises. These results show that a small deep learning model such as LITEMVTime can be used to classify whether a patient is healthy or not from a recorded sequence of their exercise. §.§ Explainability: Class Activation Map Having a well-performing deep learning model is important; however, it is also important to be able to understand its decision-making process. This field has received significant attention in the last decade, and in the domain of TSC for the last five years. The Class Activation Map (CAM) is an explainability technique for deep CNNs, which helps interpret the decisions of CNNs that would otherwise act as black-box models. It was first introduced in <cit.> for image datasets and first adapted to time series data in <cit.>. It is important to note that the use of CAM explainability necessitates a global representative layer before the softmax classification layer; an example of such a layer is the Global Average Pooling (GAP) used in FCN, ResNet, Inception, LITE and LITEMV. The outcome of a CAM in the case of TSC is a univariate time series, where each value represents the importance of the corresponding time stamp of the input time series in the decision making. For the mathematical setup of CAM, we define: * the output of the last convolution layer as O(t), which is a multivariate time series with M variables, M being the number of filters of this layer; in other words, O_m(t) is the output univariate time series of filter m, where m ∈{0,1,…,M-1}; * w^c={w_0^c,w_1^c,…,w_M-1^c} as the weight vector connecting the GAP output to the neuron of the winning class (the class with the highest probability value). The CAM output can then be defined as follows: CAM(t) = ∑_m=0^M-1 w_m^c O_m(t). This is followed by a min-max normalization of the CAM output. For two given time stamps, the one with the higher CAM score contributed more to the decision making of the black-box deep learning model. In this work, we use the CAM explainability technique to better understand the decision making of the LITEMV model on the human rehabilitation dataset Kimore. We use five different examples from each exercise and produce the CAM output using a LITEMV model trained to solve the classification task of each exercise (see Figure <ref>). Given that in the case of human skeleton-based data the input is a multivariate time series, we take the produced CAM values to represent the temporal axis: for each time stamp, one CAM score represents the contribution of the current time pose (skeleton) to the classification.
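As a sketch of the CAM computation defined above, the following code weights the output of the last convolution layer by the classification weights of the winning class and applies the min-max normalization; the arrays are placeholders standing in for the activations and weights of a trained LITEMV model.

```python
# Sketch of the CAM computation: weight the filter outputs of the last
# convolution layer by the classification weights of the winning class, sum
# over filters, and min-max normalize. Inputs are placeholders: `conv_out`
# has shape (T, M) and `class_weights` has shape (M, n_classes).
import numpy as np

def class_activation_map(conv_out, class_weights, winning_class):
    w_c = class_weights[:, winning_class]        # weights w_m^c of the winning class
    cam = conv_out @ w_c                         # CAM(t) = sum_m w_m^c * O_m(t)
    cam_min, cam_max = cam.min(), cam.max()
    return (cam - cam_min) / (cam_max - cam_min + 1e-12)   # min-max normalization

# Toy example with T=748 time stamps, M=32 filters and 2 classes.
rng = np.random.default_rng(0)
conv_out = rng.normal(size=(748, 32))
class_weights = rng.normal(size=(32, 2))
gap = conv_out.mean(axis=0)                      # Global Average Pooling output
winning_class = int((gap @ class_weights).argmax())
cam = class_activation_map(conv_out, class_weights, winning_class)
print(cam.shape, cam.min(), cam.max())
```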
To study how the CAM scores change depending on whether a sample is correctly classified, we present in Figure <ref> two CAM explanations for two samples of the same exercise. The first sample is correctly classified as class 1 (score > 50) and the second is incorrectly classified as class 1 when it should be class 0 (score < 50). It can be seen that the first sample (top) has higher intensity in the CAM colors (i.e., in the CAM scores) than the second sample (bottom). This indicates that the time stamps that are important for classifying a sample as class 1 receive lower scores when the ground truth is actually class 0. This is because the CAM scores use the weights of the winning class in the classification layer and not the weights of the ground-truth class. §.§ Limitations of LITE and LITEMV LITE and LITEMV have low complexity compared to other architectures, so one potential limitation can occur when using these methods to handle big data. For instance, LITE and LITEMV could fall short when a training set including millions of samples is used. This shortcoming could be addressed by increasing the number of filters in the DWSC layers: given the efficient way DWSCs apply convolutions, this would result in only a slight increase in cost. A second limitation, which affects all the architectures detailed in this work, is related to the length of the time series samples. It can be circumvented by increasing the CNN's Receptive Field (RF), which is the length of the input visible to the CNN at its last layer. Let us consider a CNN with L convolution layers with kernel sizes K_i and dilation rates d_i, where i ∈{0,1,…,L-1}. The Receptive Field <cit.> of this model (with unit stride) is computed as: RF = 1 + ∑_i=0^L-1 d_i (K_i - 1). This quantity varies from one CNN to another. For instance, in the case of FCN <cit.>, the RF is 14 = 1+7+4+2. This RF is small compared to the length of the time series in the UCR archive. In the case of ResNet, the RF increases to 40, and to 235 for Inception. In the case of LITE and LITEMV, the RF is 114, which has been shown to be sufficient to achieve state-of-the-art performance on the UCR archive. However, this RF value needs to be increased if a dataset includes time series significantly longer than this value. The RF can be increased either by increasing the length of the filters or by adding more layers to increase the model's depth. In usual CNN models, however, this is not an efficient approach, given that it drastically increases the complexity of the network. This is not the case for LITE and LITEMV, making them suitable for such a solution.
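The receptive field formula above is easy to evaluate in code; the sketch below encodes it for a stack of stride-1 convolution layers and checks the FCN example (kernel sizes 8, 5 and 3) quoted in the text. The kernel and dilation lists of the other architectures would need to be filled in from their respective definitions.

```python
# Sketch of the receptive field computation for a stack of 1D convolution
# layers with stride 1: RF = 1 + sum_i d_i * (K_i - 1).
def receptive_field(kernel_sizes, dilations=None):
    if dilations is None:
        dilations = [1] * len(kernel_sizes)
    return 1 + sum(d * (k - 1) for k, d in zip(kernel_sizes, dilations))

print(receptive_field([8, 5, 3]))  # FCN: 1 + 7 + 4 + 2 = 14
```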
§ CONCLUSIONS In this paper, we addressed the Time Series Classification problem by reducing the number of parameters compared to existing deep learning approaches, while preserving the performance of InceptionTime. We presented a new architecture for Time Series Classification, LITE, and evaluated its performance on the UCR archive. LITE has only 2.34% of InceptionTime's number of parameters. The model is faster than the state-of-the-art in training and inference time. It also consumes less CO2 and power, a topic we believe to be very important nowadays. Results have illustrated that LITE achieves state-of-the-art performance on the UCR archive. Furthermore, the presented ablation study demonstrated the importance of the techniques used in LITE. We also adapted LITE to handle multivariate time series data, resulting in LITEMV. Experiments on the UEA multivariate TSC archive showed promising performance on some datasets compared to the state-of-the-art. Finally, we showcased the utility of LITEMV in a real application, where human rehabilitation exercises are evaluated; in this context, we showed that LITEMVTime outperforms the other models on the Kimore dataset. We believe this work can serve as a starting point for optimizing deep learning architectures in the time series domain, and that it can also be extended to clustering, representation learning and generative models. In future work, we aim to tackle these other domains, given the impressive performance of LITE compared to ResNet and InceptionTime. Acknowledgments This work was supported by the ANR DELEGATION project (grant ANR-21-CE23-0014) of the French Agence Nationale de la Recherche. The authors would like to acknowledge the High Performance Computing Center of the University of Strasbourg for supporting this work by providing scientific support and access to computing resources. Part of the computing resources were funded by the Equipex Equip@Meso project (Programme Investissements d'Avenir) and the CPER Alsacalcul/Big Data. The authors would also like to thank the creators and providers of the UCR and UEA archives and the Kimore dataset. § DECLARATIONS * Funding This work was supported by the ANR DELEGATION project (grant ANR-21-CE23-0014) of the French Agence Nationale de la Recherche. * Conflict of interest The authors certify that they have no conflict of interest in the subject matter or materials discussed in this manuscript. * Availability of data and materials All of the datasets used in this work are publicly available. * Code availability The source code is available on this github repository: * Authors' contributions Conceptualization: AIF; Methodology: AIF, MD, SB, JW and GF; Experiments: AIF, MD; Validation: MD, SB, JW and GF; Writing - original draft: AIF; Writing - review and editing: AIF, MD, SB, JW and GF; Funding acquisition: MD; all authors have read and agreed to the published version of the manuscript.
http://arxiv.org/abs/2409.03226v1
20240905034846
Theory of Turbulent Equilibrium Spheres with Power-Law Linewidth-Size Relation
[ "Sanghyuk Moon", "Eve C. Ostriker" ]
astro-ph.GA
[ "astro-ph.GA", "astro-ph.SR" ]
Sanghyuk Moon (ORCID: 0000-0002-6302-0485), Department of Astrophysical Sciences, Princeton University, Princeton, NJ 08544, USA
Eve C. Ostriker (ORCID: 0000-0002-0509-9113), Department of Astrophysical Sciences, Princeton University, Princeton, NJ 08544, USA; Institute for Advanced Study, 1 Einstein Drive, Princeton, NJ 08540, USA
Abbreviations: BE, Bonnor-Ebert; TES, turbulent equilibrium sphere; SPS, singular polytropic sphere; ISM, interstellar medium; GMC, giant molecular cloud
§ ABSTRACT Dense cores inherit turbulent motions from the interstellar medium in which they form. As a tool for comparison to both simulations and observations, it is valuable to construct theoretical core models that can relate their internal density and velocity structure while predicting their stability to gravitational collapse. To this end, we solve the angle-averaged equations of hydrodynamics under two assumptions: 1) the system is in a quasi-steady equilibrium; 2) the velocity field consists of radial bulk motion plus isotropic turbulence, with turbulent dispersion increasing as a power-law in the radius. The resulting turbulent equilibrium sphere (TES) solutions form a two-parameter family, characterized by the sonic radius r_s and the power-law index p. The TES is equivalent to the Bonnor-Ebert (BE) sphere when r_s→∞. The density profile in outer regions of the TES is slightly shallower than the BE sphere, but is steeper than the logotropic model. Stability analysis shows that the TESs with size exceeding a certain critical radius are unstable to radial perturbations. The center-to-edge density contrast, mass, and radius of the marginally stable TES all increase with increasing average velocity dispersion. The FWHM of the column density profile is always smaller than the critical radius, by a larger factor at higher velocity dispersion, suggesting that observations need to probe beyond the FWHM to capture the full extent of turbulent cores. When applied to the highly turbulent regime typical of cluster-forming clumps, the critical mass and radius of the TES intriguingly resemble the typical mass and radius of observed star clusters. § INTRODUCTION Dense prestellar cores are roughly spherical, compact (≲ 0.1 pc), centrally-concentrated objects found at the bottom of the hierarchy of ISM structure. Due to gravity, cores are internally stratified, with radial profiles of density ρ characterized by a central plateau surrounded by an outer envelope approximately following ρ∝ r^-2 <cit.>. Some cores exhibit molecular line profiles that cannot be explained by thermal motions alone, but could be interpreted as signatures of infall or turbulence <cit.>. While the true dynamical nature of dense cores – individually and as a class – is still under debate, it is clear that a fraction of them undergo gravitational collapse, leading to star formation. The radial column density profile of some observed cores closely matches the theoretical prediction of the isothermal equilibrium known as the BE sphere <cit.>, suggesting that those particular cores obey hydrostatic equilibrium, in which self-gravity is balanced entirely by a radial thermal pressure gradient <cit.>. Stability analyses have shown that such equilibria are unstable unless they are truncated at a small enough radius <cit.>.
The mass and radius of the marginally stable BE sphere are called the BE mass, M_BE, and the BE radius, R_BE, respectively, given by M_BE = 4.43 M_G(ρ_c), = 1.18 M_G(ρ_e), = 1.86 M_G(ρ), R_BE = 1.82 R_G(ρ_c), = 0.486 R_G(ρ_e), = 0.762 R_G(ρ), where ρ_c, ρ_e, and ρ≡ 3M_BE/(4π R_BE^3) are the center, edge, and average density of a critical BE sphere, respectively, with ρ_c/ρ_e=14.0 and ρ̅/ρ_e=2.45. Here, M_G and R_G are the characteristic gravitational mass and radius for an isothermal system obtained from dimensional analysis as a function of density ρ, M_G ≡c_s^3/G^3/2ρ^1/2 = 1.27 M_⊙( T/10 K)^3/2( n_H/10^4 cm^-3)^-1/2, R_G ≡c_s/G^1/2ρ^1/2 = 0.15 pc( T/10 K)^1/2( n_H/10^4 cm^-3)^-1/2, where c_s=(kT/μ)^1/2 is the sound speed, G is the gravitational constant, and n_H = ρ/μ_H is the hydrogen number density, with μ = 2.3m_H and μ_H =1.4m_H assuming 10% helium abundance by number. Although the BE sphere provides essential physical insight regarding the onset of the core collapse, its applicability is limited because it neglects non-thermal pressure arising from internal turbulent motions, which are known to be present in real dense cores. In particular, the observed turbulent velocity dispersion within dense cores is known to increase with distance from the core center approximately as a power-law <cit.>[Although some studies indicate that the turbulent velocity dispersion flattens out near the core center, forming a “coherent” region <cit.>, this could be due to projection effects (see <ref> for related discussion).]. In order to take into account the effect of turbulence, <cit.> constructed a theoretical model of isothermal spheres supported by non-thermal pressure using a phenomenological, “logotropic” equation of state P/P_c = 1 + A log (ρ/ρ_c), where P_c and ρ_c are the central pressure and density, respectively, and A is a free dimensionless parameter. However, such an equation of state is not derived from first principles, and the linewidth-size relation that it predicts is inconsistent with observations except for a limited range of radius, as pointed out by <cit.>. Other models simply replaced the pressure term in the hydrostatic equation with assumed nonthermal pressure <cit.> or employed a composite polytrope <cit.>. For application to cores within molecular clouds, the BE sphere is also problematic in that it assumes a definite outer boundary at which the density profile is sharply truncated, which is in stark contrast to real cores that continuously merge into the ambient surrounding cloud [Cores embedded in much warmer gas can in principle be truncated by the thermal pressure of hotter phase gas, but this is not a realistic star formation environment.]. For example, it has been pointed out that the critical center-to-edge density contrast of 14 is too small compared to the large dynamic range of volume density in GMC, such that it would be extremely difficult for a core to remain stable <cit.>. However, this objection does not account for the fact that a core never exists in isolation: the gravitational field away from the center of the core is increasingly governed by surrounding structures (dense filaments, other cores, and stars). Thus, it is entirely possible that there exists an effective outer “tidal boundary” determined by the structure of gravitational potential around a core. The primary goal of this paper is to construct a physical model of a dense core supported by both thermal and turbulent pressure. 
Instead of assuming a relationship between pressure and density, we directly solve the angle-averaged equations of hydrodynamics assuming a statistical quasi-equilibrium and a power-law linewidth-size relationship. <ref> visually illustrates the basic concept of our TES model, juxtaposed with the traditional BE sphere with no turbulent motion. The results presented here quantify how the critical density contrast, radius, and mass change with the properties of underlying turbulent velocity field. The BE sphere naturally appears as a limiting solution with vanishing turbulent velocities. In a companion paper (Paper II), we will show that quasi-equilibrium structures resembling the TES emerge in three-dimensional simulations of self-gravitating, isothermal turbulence with conditions similar to star-forming GMCs. The simulations presented in Paper II demonstrate that the size of the local potential well associated with a core is limited by the gravity from neighboring structures, imposing an effective maximum core radius. This allows stable equilibria to exist in an isothermal medium with continuous density distribution. Paper II will analyze evolution of individual cores and gravitational potential structure around them, to identify critical conditions that determine the onset of the collapse. The remainder of this paper is organized as follows. In <ref>, we derive the equation of motion governing the radial dynamics of a turbulent core. In <ref>, we introduce dimensionless variables and derive equilibrium solutions. In <ref>, we perform a stability analysis to obtain the critical radius beyond which the equilibrium becomes unstable. Equilibrium solutions truncated at this radius are termed “critical cores.” <ref> presents the physical properties of critical cores as a function of the average velocity dispersion. Finally, we summarize our work and discuss its implications in <ref>. § ANGLE-AVERAGED EQUATION OF MOTION In this section, we derive the angle-averaged Lagrangian equation of motion (<ref>) satisfied by a spherical, isothermal region pervaded by turbulence. We start by writing the continuity equation and the radial component of the momentum equation in spherical coordinates as follows: ∂ρ/∂ t + 1/r^2∂/∂ r( r^2ρ v_r ) + 1/rsinθ∂/∂θ( sinθρ v_θ) + 1/rsinθ∂/∂ϕ( ρ v_ϕ) = 0, ∂( ρ v_r)/∂ t + 1/r^2∂/∂ r( r^2 ρ v_r^2 + r^2 ρ c_s^2 ) + 1/r sinθ∂/∂θ(ρ v_r v_θsinθ) + 1/rsinθ∂/∂ϕ(ρ v_r v_ϕ) - ρ v_θ^2 + ρ v_ϕ^2 + 2ρ c_s^2/r = ρ g_r. Here, ρ is the gas density, v_r, v_θ, and v_ϕ are the radial, meridional, and azimuthal components of the gas velocity, and g_r is the radial component of the gravitational acceleration. Integrating <ref> over a full solid angle and dividing by 4π leads to the angle-averaged continuity and momentum equation, ∂<ρ>/∂ t + 1/r^2∂/∂ r( r^2 <ρ v_r > ) = 0, ∂<ρ v_r >/∂ t + 1/r^2∂/∂ r( r^2 <ρ v_r^2 > + r^2 <ρ c_s^2 > ) - <ρ v_θ^2 > + <ρ v_ϕ^2 > + 2 <ρ c_s^2 >/r = <ρ g_r >, in which angle brackets denote the averaging operation <Q >≡1/4π∫_0^2π∫_0^π Q sinθ dθ dϕ for any physical quantity Q. It is useful to define a related operation <Q >_ρ≡∫_0^2π∫_0^πρ Q sinθ dθ dϕ/∫_0^2π∫_0^πρsinθ dθ dϕ = <ρ Q>/<ρ> which is the mass-weighted angle-average of Q. Without loss of generality, we decompose the velocity fields into mean and turbulent components, v_r = <v_r >_ρ + δ v_r, v_θ = <v_θ>_ρ + δ v_θ, v_ϕ = <v_ϕ>_ρ + δ v_ϕ., such that <δ v_r >_ρ = <δ v_θ>_ρ = <δ v_ϕ>_ρ = 0 by definition. 
Using <ref>, one can recast <ref> into a Lagrangian equation of motion <D <v_r >_ρ/Dt>_ρ = f_thm + f_trb + f_grv + f_cen + f_ani where <D <v_r >_ρ/Dt>_ρ = ∂<v_r >_ρ/∂ t + <v_r >_ρ∂<v_r >_ρ/∂ r is the radial acceleration. The individual force components (thermal, turbulent, gravitational, centrifugal, and anisotropic terms) are given by f_thm = -1/<ρ>∂ P_thm/∂ r, f_trb = -1/<ρ>∂ P_trb/∂ r, f_grv = < g_r >_ρ, f_cen = <v_θ>_ρ^2 + <v_ϕ>_ρ^2/r, f_ani = <δ v_θ^2 >_ρ + <δ v_ϕ^2 >_ρ - 2 <δ v_r^2 >_ρ/r, in which the thermal and turbulent pressures are defined by P_thm ≡<ρ>c_s^2, P_trb ≡<ρ><δ v_r^2 >_ρ, respectively. In what follows, we assume the centrifugal force is negligible compared to other forces and the turbulence is statistically isotropic, such that f_cen = f_ani = 0. § FAMILY OF EQUILIBRIA We now consider a roughly spherical, radially stratified region, and seek a solution in which thermal and turbulent pressure gradient forces balance self-gravity at every radius, i.e. f_thm + f_trb = -f_grv. Within a region relatively far from surrounding gravitating masses, the gravitational force can be approximated by f_grv≈ -GM_enc(r)/r^2, where M_enc(r) ≡ 4π∫_0^r r'^2⟨ρ⟩ dr' is the enclosed mass within the radius r. Using <ref>, <ref> can be written as 1/r^2∂/∂ r( r^2/<ρ>∂ P_eff/∂ r) = -4 π G <ρ> where P_eff≡ P_thm + P_trb is the total effective pressure. Motivated by observations, we further assume that the turbulent velocity dispersion increases with radius as a power law, <δ v_r^2 >_ρ^1/2 = c_s ( r/r_s)^p, in which r_s is the sonic radius and p is the power law index, such that P_trb = P_thm(r/r_s)^2p. To recast <ref> in a dimensionless form, we define a dimensionless radial coordinate ξ and logarithmic density contrast u by[See Appendix <ref> for an alternative dimensionless formulation based on given external pressure.] ξ ≡(4π Gρ_c)^1/2/c_sr, u(ξ;ξ_s,p) ≡lnρ_c/<ρ>, where ρ_c is the density at the center (i.e., r=0). In <ref>, it is made explicit that u is a function of ξ and involves two dimensionless parameters ξ_s = r_s(4π G ρ_c)^1/2/c_s and p. Here, ξ_s is the dimensionless sonic radius defined by r=r_s in <ref>. We also define the dimensionless enclosed mass m(ξ;ξ_s,p) = (4π G^3ρ_c)^1/2/c_s^3M_enc and the function χ(ξ;ξ_s,p) ≡ 1 + (ξ/ξ_s)^2p that relates thermal pressure to total pressure through P_eff = χ P_thm. In terms of these dimensionless variables, <ref> becomes 1/ξ^2∂/∂ξ[ ξ^2 (χ∂ u/∂ξ - ∂χ/∂ξ) ] = e^-u. It is useful to note that the term in the square bracket is identical to the dimensionless enclosed mass, i.e., m(ξ;ξ_s,p) ≡ξ^2 (χ∂ u/∂ξ - ∂χ/∂ξ). We remind the reader that <ref> reduces to the usual isothermal Lane-Emden equation <cit.> in the limit of vanishing turbulent pressure. For a solution to have a finite central density (i.e., a plateau), it must satisfy the boundary conditions u|_ξ=0 = 0, ∂ u/∂ξ|_ξ=0 = u'_0, with a finite value of u'_0 which does not necessarily equal zero. For example, for p=0.5, one can find a series solution of <ref> near ξ=0, u = 1/ξ_sξ + ( 1/6 - 1/2ξ_s^2) ξ^2 + ( 1/3ξ_s^3 - 7/36ξ_s) ξ^3 + ⋯, which yields u'_0 = ξ_s^-1. In practice, one can integrate <ref> in terms of the logarithmic radius ϖ≡lnξ, with the initial conditions u=0 and ∂ u/∂ϖ=ξ (∂ u/∂ξ) =0 for very small ξ. The resulting solutions form a two-parameter family characterized by ξ_s and p. We term this two-parameter family of solutions TES. A python package that calculates these solutions is available in <cit.>. [<https://github.com/sanghyukmoon/turbulent_equilibrium_sphere>.] 
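Independently of the published package cited above, the dimensionless system can be integrated directly; the sketch below rewrites it as a first-order system in (u, m), integrates with scipy, and in the limit ξ_s→∞ should approximately recover the well-known Bonnor-Ebert values u(6.45)≈2.64 and m(6.45)≈15.7.

```python
# Independent sketch (not the published package) of integrating the
# dimensionless TES equations: du/dxi = (m/xi^2 + dchi/dxi)/chi and
# dm/dxi = xi^2 * exp(-u), with chi = 1 + (xi/xi_s)^(2p). For xi_s -> infinity
# this reduces to the isothermal Lane-Emden equation of the Bonnor-Ebert sphere.
import numpy as np
from scipy.integrate import solve_ivp

def integrate_tes(xi_s=np.inf, p=0.5, xi_max=20.0):
    def rhs(xi, y):
        u, m = y
        if np.isfinite(xi_s):
            chi = 1.0 + (xi / xi_s) ** (2 * p)
            dchi = 2 * p * (xi / xi_s) ** (2 * p) / xi
        else:
            chi, dchi = 1.0, 0.0
        du = (m / xi**2 + dchi) / chi
        dm = xi**2 * np.exp(-u)
        return [du, dm]

    xi0 = 1e-6                       # start slightly off-center to avoid the singularity
    y0 = [0.0, xi0**3 / 3.0]         # u ~ 0 and m ~ xi^3/3 near the center
    return solve_ivp(rhs, (xi0, xi_max), y0, dense_output=True,
                     rtol=1e-8, atol=1e-10)

sol = integrate_tes(xi_s=np.inf)     # Bonnor-Ebert limit
u, m = sol.sol(6.45)
print(f"u(6.45) = {u:.2f}, m(6.45) = {m:.1f}")  # expect roughly 2.64 and 15.7
```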
<ref> plots the radial density profiles of the TES having p=0.5 and ξ_s=∞, 7, and 3. The profile with ξ_s=∞ is identical to that of the BE sphere. For smaller values of ξ_s, the density profile in the inner part is steeper than the BE solution, while it is shallower in the outer part. In the innermost regions of cores where gravity is weak and the local density profile ρ∝ r^-q is shallow (small q), thermal pressure gradients for the TES must be steeper than for the BE sphere in order to compensate for turbulent pressure forces, which are inward when q < 2p. Conversely, in the outer regions of cores where the BE solution requires a steep density gradient for thermal pressure to balance gravity, the density gradient can be shallower for the TES because turbulent pressure support assists thermal pressure support provided q > 2p. § STABILITY We assess the stability of the TES using an analysis similar to <cit.>. We consider a sphere of radius r and imagine that the volume enclosed is slightly compressed. In reality, this compression can be driven by either inflows from larger radii or random turbulent motions. As the gas distribution interior to this Lagrangian boundary surface adjusts to a new equilibrium, the effective pressure at the boundary will change according to the normalized bulk modulus defined by K ≡ -(∂ln P_eff/∂ln V_r)_M_enc,c_s,r_s,p where V_r ≡4π r^3/3 is the spherical volume enclosed within the radius r. Note that we keep the sonic radius r_s and index p constant in <ref>, assuming that the background turbulent flow is not affected by the perturbations applied.[This implies that the turbulent kinetic energy contained within the Lagrangian radius (i.e., the radius containing constant M_enc) decreases under the compression. Although turbulence can in principle be amplified when the compression timescale is shorter than the flow crossing time <cit.>, in practice we do not expect that the overall compression speed of turbulent cores will exceed their velocity dispersion, unless they are already undergoing gravitational collapse.] To calculate the bulk modulus K, we first write the effective pressure, volume, and mass as functions of ξ and ξ_s as follows: P_eff =ρ_c c_s^2 χ e^-u= c_s^4/4π G r_s^2ξ_s^2χ e^-u, V_r = 4π/3r^3=4π r_s^3/3ξ_s^-3ξ^3 , M_enc =c_s^3/(4π G^3 ρ_c)^1/2m= c_s^2 r_s/Gξ_s^-1 m . We remind the reader that χ, u, and m appearing in <ref> are dimensionless functions of the variable ξ and the parameters ξ_s and p. The total derivatives of ln P_eff, ln V_r, and ln M_enc at a fixed c_s, r_s, and p are δln P_eff = ( ∂lnχ/∂lnξ - ∂ u/∂lnξ) δlnξ + ( 2+ ∂lnχ/∂lnξ_s - ∂ u/∂lnξ_s) δlnξ_s = -m/χξδlnξ + (2+ ∂lnχ/∂lnξ_s - ∂ u/∂lnξ_s) δlnξ_s, δln V_r = 3 δlnξ - 3 δlnξ_s, δln M_enc = ∂ln m/∂lnξδlnξ + ( ∂ln m/∂lnξ_s - 1 ) δlnξ_s, where we have used <ref> in the second equality of <ref>. Because we are interested in the bulk modulus of a region of constant M_enc, <ref> requires δlnξ/δlnξ_s = m/ξ^3 e^-u( 1 - ∂ln m/∂lnξ_s) to be satisfied. Using <ref>, after some algebra, one obtains K = 2/31 - 1/2∂ u/∂lnξ_s + 1/2∂lnχ/∂lnξ_s - m^2/2χ e^-uξ^4( 1 - ∂ln m/∂lnξ_s) /1 - m/ξ^3e^-u( 1 - ∂ln m/∂lnξ_s) = 2/31 - 1/2∂ u/∂lnξ_s + 1/2∂lnχ/∂lnξ_s - ( 4π/3)^1/3GM_enc^2/6P_effV_r^4/3( 1 - ∂ln m/∂lnξ_s) /1 - M_encc_s^2/3P_thmV_r( 1 - ∂ln m/∂lnξ_s). In the limit of negligible turbulence (ξ_s→∞), each of the derivatives with respect to ξ_s in <ref> may be set to zero, and we recover Equation 2.16 of <cit.> for an isothermal sphere. 
In the simultaneous limit ξ_s→∞ and G→ 0, the density is uniform so that M_enc c_s^2/(P_thmV_r) = 1, and one recovers the ideal gas K=1. The equilibrium is unstable when K < 0, because a slight compression leads to a further decrease in the interior pressure. <ref> plots density profiles for selected TES with p=0.5 and ξ_s=∞, 7, and 3, together with the radial profile of K in each case. In all cases, for ξ≪ 1 the solutions have K≈ 1 because there is not enough mass to be self-gravitating and the thermal pressure dominates the turbulent pressure. As ξ increases, however, self-gravity becomes more and more important and K decreases. We define the critical radius ξ_crit as the radius where K first becomes negative. For ξ>ξ_crit, the magnitude of K rapidly increases, undergoing another sign change at ξ = ξ_crit,2 back to positive. A TES with outer radius ξ > ξ_crit is unstable because a part of its interior has K < 0. For each member of the family of solutions with given p and ξ_s, the TES with outer radius ξ = ξ_crit is identified as the critical solution. We further define the critical logarithmic density contrast u_crit≡ u(ξ_crit)=ln[ρ_c/ρ(ξ_crit)] and the critical mass m_crit≡ m(ξ_crit) associated with the critical radius ξ_crit. <ref>(a)–(d) shows the parametric dependence of ξ_crit, m_crit, u_crit, and the average turbulent velocity dispersion σ_1D on ξ_s, for p=0.3, 0.5, and 0.7. Here, we define σ_1D using the mass-weighted average over the whole core within r_crit, assuming the turbulence is statistically isotropic: σ_1D ≡(∭_r<r_critρδ v_r^2 dV/∭_r<r_critρ dV)^1/2 = c_s [∫_0^ξ_crit e^-u (ξ/ξ_s)^2pξ^2 dξ/∫_0^ξ_crit e^-uξ^2 dξ]^1/2. In the limit of ξ_s→∞, we find ξ_crit=6.45, m_crit=15.7, and u_crit=2.64, identical to the well-known result for the critical BE sphere <cit.>. The point ξ_s = ξ_crit might be of some interest, because the sonic radius is within the core for smaller ξ_s, while it is outside the core for larger ξ_s. For p=0.3, 0.5, and 0.7, ξ_s=ξ_crit occurs at ξ_s = 10.2, 8.99, and 8.21. It is interesting to note that σ_1D≈ c_s at ξ_s ≈ 6, quite insensitive to p. It is worth mentioning, as well, that while this and other curves plotted in <ref> give an impression that they all cross at the same locus, a detailed examination reveals that it is only approximately true, and the crossing points differ for different curves (e.g. ξ_crit≈ 14 when ξ_s ≈ 5, and m_crit≈ 65 when ξ_s ≈ 4). The similar crossing points arise because the curves with different p (for panels a-c) go to the same limit ξ_crit→ξ_BE at ξ_s→∞, and more generally these solutions must be quite similar in the regime where ξ_s>ξ_crit because χ in <ref> is close to unity. For p=0.5, <ref> provides tabulated values for ξ_s, ξ_crit, m_crit, u_crit, and σ_1D:

Properties of critical TES with p=0.5
ξ_s      ξ_crit    m_crit      u_crit    σ_1D/c_s
∞        6.45      15.7        2.64      0.0
18.2     7.50      19.9        2.81      0.5
6.42     10.6      34.5        3.25      1.0
4.21     15.8      64.3        3.76      1.5
3.42     22.7      115         4.24      2.0
2.55     103       1.4×10^3    6.17      5.0
2.44     385       1.3×10^4    7.79      10
2.42     ∞         ∞           ∞         ∞

<ref> is published in its entirety in machine-readable format. A few representative rows are shown here for guidance regarding its form and content. We note that ξ_crit, m_crit, and u_crit can be translated to the dimensional critical radius r_crit, mass M_crit, and critical density contrast (ρ_c / ρ_e)_crit, through <ref>.
For radius and mass, conversion between dimensionless and dimensional variables is given by: r = ξ× 0.014 ( T/10 K)^1/2( n_H,c/10^5 cm^-3)^-1/2 M = m× 0.11 M_⊙( T/10 K)^3/2( n_H,c/10^5 cm^-3)^-1/2 where n_H,c is the central density and T is the temperature. <ref> shows that the dimensionless critical radius ξ_crit steeply increases with decreasing ξ_s. This indicates that a core forming in a region of strong local turbulence would initially be stable: large ξ_crit would make the critical radius lie beyond the effective tidal radius that limits the core, as imposed by the gravity of nearby structures in the GMC However, evolution generally occurs in the direction of increasing ξ_s ∝ρ_c^1/2 r_s as the initial converging flows further compress the core and turbulence dissipates, leading to a decrease in ξ_crit. When ξ_crit becomes small enough, the core will become unstable, triggering collapse. More details on the collapse scenario will be discussed in <ref>; the quantitative test of this scenario using numerical simulations will be presented in Paper II. We empirically find that, for p ≳ 0.4, there exists a minimum dimensionless sonic radius ξ_s,min below which K>0 at every radius. In <ref>, ξ_s,min for a given p corresponds to the value at which ξ_crit→∞. The existence of ξ_s,min for a TES family means that for a given central density, every member of the family is stable if the turbulence is sufficiently strong that r_s < ξ_s,min c_s/ (4π G ρ_c)^1/2, where <ref> can be used to translate this into physical terms. Alternatively, because ξ_s ∝ρ_c^1/2 r_s (<ref>), the existence of ξ_s,min implies that for a given dimensional sonic radius r_s, a TES with the central density lower than ρ_c,min≡ξ_s,min^2c_s^2/(4π G r_s^2) is always stable. Let us suppose that the larger-scale environment of the cores is a spherical cloud of radius R_cloud, mass M_ cloud, and one-dimensional Mach number ℳ_1D = (R_ cloud/r_s)^p, with virial parameter α_vir,cloud≡5 ℳ_1D^2c_s^2 R_ cloud/G M_ cloud. The minimum central density for instability to be possible can then be written as ρ_c,min/ρ_0 = ξ_s,min^2/15α_vir,cloudℳ_1D^2/p - 2, where ρ_0 = M / (4π R^3/3) is the mean density of the cloud. For turbulent power law index p=0.5, we find ξ_s,min = 2.42, which would correspond to a minimum sonic scale of r_s≈ 0.03 using the fiducial central density and temperature in <ref>. Alternatively, the minimum molecular hydrogen number density at the core center, n_H2,c^min=n_ H,c^ min/2 ≡ρ_c,min / (2μ_Hm_H), that would allow for instability is given in terms of the physical sonic scale r_s by n_H2,c^min = 2.2× 10^4 cm^-3 (ξ_s,min/2.42)^2( T/10 K) ( r_s/0.05 pc)^-2. We emphasize that these are the local values of T and r_s, the latter of which can vary considerably within a GMC (see Paper II). With typical molecular cloud virial parameter α_vir,cloud∼ 2-4 on ∼ 100 scale <cit.>, <ref> implies that the central density in the core would have to exceed the ambient cloud density by at least a factor ρ_c,min/ρ_0∼ M^2 for instability to be possible, when p=0.5. We stress that the condition ξ_s> ξ_s, min is a necessary but not a sufficient condition for collapse. That is, ρ_c > ρ_c,min does not guarantee instability and hence should not be interpreted as a critical density for collapse. Instead, ρ_c > ρ_c,min simply means that there exists a critical radius beyond which a quasi-equilibrium is unstable. 
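The last scaling can be checked numerically from ρ_c,min = ξ_s,min^2 c_s^2/(4π G r_s^2); the sketch below uses CGS constants with μ = 2.3 m_H and μ_H = 1.4 m_H as adopted here, and should roughly reproduce the quoted 2.2×10^4 cm^-3 for T = 10 K and r_s = 0.05 pc.

```python
# Sketch evaluating the minimum central density for instability,
# rho_c,min = xi_s,min^2 c_s^2 / (4 pi G r_s^2), expressed as an H2 number
# density, using CGS constants and mu = 2.3 m_H, mu_H = 1.4 m_H as in the text.
import numpy as np

K_B, G, M_H, PC = 1.381e-16, 6.674e-8, 1.673e-24, 3.086e18   # CGS units

def n_H2_min(T=10.0, r_s_pc=0.05, xi_s_min=2.42):
    c_s2 = K_B * T / (2.3 * M_H)                  # isothermal sound speed squared
    r_s = r_s_pc * PC
    rho_c_min = xi_s_min**2 * c_s2 / (4 * np.pi * G * r_s**2)
    return rho_c_min / (2 * 1.4 * M_H)            # convert to H2 number density

print(f"{n_H2_min():.2e} cm^-3")   # should come out close to the quoted 2.2e4 cm^-3
```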
For a core to collapse, its central density must exceed the minimum value, and the core's outer radius and total mass must exceed the critical values for TES solutions, as shown in <ref>. For p=0.5, <ref> plots the critical radius and mass as a function of ρ_c / ρ_c,min=(ξ_s/ξ_s,min)^2. We note that cases with large (small) ρ_c/ρ_c,min correspond to weak (strong) turbulence. A highly turbulent core would therefore have to be larger and more massive to become unstable, compared to quiescent cores. Notably, <ref> shows that when ρ_c is close to ρ_c,min, r_crit is at least a factor of a few larger than the BE radius at that ρ_c, R_BE=1.82 R_G(ρ_c). <ref> also shows that the mass begins to exceed M_BE = 4.43 M_G(ρ_c) by a factor two or more when ρ_c/ρ_c,min < 9. At the same time, <ref> shows that for a given r_s (and therefore given ρ_c,min), the central density must become quite large in order for the unstable core mass to approach M_BE: ρ_c/ρ_c,min has to be at least 21 in order to have a core with M_crit/M_BE < 1.5. Since real cores cannot have either too large a radius (this would exceed the cloud size) or too high a central density (difficult to reach with realistic larger-scale dynamics), the majority of cores that become unstable may have a moderate range of r_crit/R_BE and M_crit/M_BE, or equivalently, of σ_1D/c_s (see <ref>). A comparison of gravitational and flow crossing timescales also suggests a highly turbulent core with σ_1D≫ c_s would be difficult to collapse as a whole (see below). Although ξ_s,min is a strict lower limit for allowing instability, we expect collapse would mostly occur at ξ_s ≳ 6, for which turbulent velocity dispersion is at most transonic regardless of the power law index p (see <ref>(d)). We further discuss scenarios for collapse in <ref>. § PHYSICAL PROPERTIES OF CRITICAL TES While the structure of the TES is fully characterized by ξ_s and p, it is often useful to reparameterize the results in terms of the core-averaged one-dimensional velocity dispersion σ_1D defined in <ref>. Here we will consider just the case of critical cores. For p=0.5, <ref>(a) plots the density profiles of critical TES having different values of σ_1D/c_s, normalizing relative to the density at the outer radius. Increased turbulent support makes the core larger and more centrally concentrated; a trans-sonic core has a factor of two higher density contrast than a purely thermal core, while increasing the internal Mach number to 2 leads to a factor ∼ 5 increase in the density contrast. Here we show just cases with p=0.5, but other indices produce similar profiles for low σ_1D since thermal pressure dominates. <ref>(a) suggests that a TES develops a power law envelope as σ_1D increases. To illustrate this more clearly, in <ref>(b) we plot the density profiles of highly turbulent TES with σ_1D = 10 c_s for selected linewidth-size indices p=0.3, p=0.5, and p=0.7. We then calculate the local density slope q ≡ -∂lnρ/∂ln r for all cases shown in <ref>(a)-(b) and plot the profiles of q in <ref>(c). While no density profile is described by a single power law, the outer density slopes generally lie in the typical range q ∼ 1–2 found in observations <cit.>. <ref>(c) suggests that for σ_1D/c_s≳ 2, q becomes flat in the outer part of the core. In fact, if in the limit σ_1D≫ c_s one substitutes ρ∝ r^-q into <ref>, one can show that the outer envelope of the TES becomes a power law with q → 2 - 2p when 0< p < 0.5 and q → 2p when 0.5 ≤ p < 1. 
The corresponding limits in the polytropic index γ_p ≡∂ln P_eff / ∂lnρ = 1 - 2p/q are γ_p → (1 - 2p) / (1 - p) when 0< p < 0.5 and γ_p → 0 when 0.5 ≤ p < 1. The former limit is equivalent to the SPS ρ∝ r^-q and P_eff∝ρ^γ_p with q = 2/(2-γ_p) <cit.>: the outer envelope of a highly turbulent TES with 0 < p < 0.5 approaches the SPS solution. For 0.5 ≤ p < 1, a singular polytropic solution to <ref> does not exist. We note that the convergence of q → 1 when p=0.5 is extremely slow: the local slope of the TES solution manages to reach q = 1.05 at ξ_crit = 10^9. Because of this, the outer envelope of the p=0.5 TES shown in <ref>(c) has not yet reached the power-law limit of q→ 1, while the solutions with p=0.3 and 0.7 almost reach the limiting value q → 1.4. For all these solutions, equilibrium is achieved by balancing the three forces: thermal pressure gradient (f_thm), turbulent pressure gradient (f_trb), and gravity (f_grv) (e.g., <ref>). While f_thm is always positive and f_grv negative, the direction of f_trb can be either outward or inward depending on the sign of 2p - q, because locally P_trb∝ r^2p - q (see the last paragraph of <ref>). <ref>(d) shows the relative importance of gravity compared to the other two forces in determining the equilibrium, by plotting the quantity ℛ_grv≡|f_grv|/max(|f_thm|, |f_trb|) as a function of radius. It shows that for all solutions (except the BE sphere where f_trb = 0), ℛ_grv≪ 1 in the inner part: there, the thermal pressure gradient is balancing the inward turbulent pressure gradient, with gravity playing only a minor role. However, as q increases toward the outer part, the turbulent pressure gradient force decreases in magnitude, and gravity starts to take over. As mentioned in <ref>, a previous theoretical model that was introduced in order to represent nonthermal support was the logotrope <cit.>. <ref> compares the internal structure of the TES with that of the logotrope having the identical velocity dispersion. It shows that compared to the TES, the logotrope has a significantly flatter outer density profile. <ref> also compares the linewidth-size relation for the TES with that of the logotrope, where the unphysical turnover of the logotrope is evident at large scales. Using <ref>, the critical mass and radius of a TES can be expressed in terms of its center, edge, or mean densities as follows: M_crit = m_crit/(4π)^1/2M_G(ρ_c), = m_crite^-u_crit/2/(4π)^1/2M_G(ρ_e), = (3/4π)^1/2( m_crit/ξ_crit)^3/2M_G(ρ), r_crit = ξ_crit/(4π)^1/2R_G(ρ_c), = ξ_crite^-u_crit/2/(4π)^1/2R_G(ρ_e), = (3/4π)^1/2( m_crit/ξ_crit)^1/2R_G(ρ), where ρ_e ≡<ρ>(r = r_crit) is the edge density, ρ≡ 3M_crit/(4π r_crit^3)=3ρ_c m_crit/ξ_crit^3 is the mean density, and M_G and R_G are the gravitational mass and radius defined in <ref>. We remind the reader that u_crit, ξ_crit and m_crit are dimensionless functions of p and ξ_s, or alternatively, of p and σ_1D (see <ref>(d) for the relationship between ξ_s and σ_1D). We note that <ref> reduce to <ref> in the limit of σ_1D→ 0. <ref>(a),(b) plots M_crit/M_G(ρ) and r_crit/R_G(ρ) as functions of σ_1D/c_s, showing that both quantities increase with σ_1D/c_s, rather slowly when σ_1D≲ c_s and rapidly for σ_1D≳ c_s. 
For p=0.5 and σ_1D = c_s, we find r_crit = 0.88 R_G(ρ) and M_crit = 2.86 M_G(ρ), which are factors of 1.15 and 1.54 larger than the corresponding values for the BE sphere having the same mean density (see also the profile comparison in <ref>).[We note that the difference between the TES and BE sphere is more pronounced when compared at the same central density (see <ref>).] <ref>(a),(b) also plot the “turbulent BE” mass and radius resulting from a naive substitution c_s→ (c_s^2 + σ_1D^2)^1/2 in <ref>, which significantly overestimate the true critical mass and radius depending on the Mach number. For example, the “turbulent BE” mass and radius of mildly supersonic cores (σ_1D = 2c_s) are larger than M_crit and R_crit by a factor of 3.7 and 1.5 (assuming p=0.5), respectively, indicating such simple substitution would lead to incorrect results. When σ_1D≫ c_s, the critical radius and mass as well as the critical density contrast all increase as a power-law in σ_1D/c_s. For example, we find that the critical quantities for p=0.5 are well approximated by M_crit ≈ M_BE(ρ) (1 + σ_1D^2/2c_s^2) ≈ 30 M_⊙( T/20 K)^3/2( n_H/500 cm^-3)^-1/2 ×[ 1 + 64(σ_1D/3 km s^-1)^2(T/20 K)^-1], r_crit ≈ R_BE(ρ)( 1 + σ_1D^2/2c_s^2)^1/3 ≈ 0.74 pc( T/20 K)^1/2( n_H/500 cm^-3)^-1/2 ×[ 1 + 64(σ_1D/3 km s^-1)^2(T/20 K)^-1]^1/3, (ρ_c/ρ_e)_crit≡ e^u_crit≈ 14( 1 + 0.7σ_1D^2/c_s^2)^1.2 which are plotted in <ref>(a)–(c) with blue dashed lines. We note that <ref> are valid within relative error of 5% for σ_1D < 9.5 c_s, σ_1D<13c_s, σ_1D<31c_s respectively. Intriguingly, by evaluating <ref> at σ_1D≳ 3 km s^-1, it is readily evident that the critical mass at high Mach number σ_1D≫ c_s becomes comparable to typical masses of star clusters, as will be discussed in <ref>. <ref>(c) plots the center-to-edge density contrast for marginally stable equilibrium solutions, as a function of σ_1D/c_s. The critical center-to-edge density contrast converges to the well-known value of 14 in the limit σ_1D→ 0, but then monotonically increases with σ_1D/c_s with very weak dependence on p, reaching 26 and 70 at σ_1D = c_s and 2c_s, respectively. We note that for the equilibrium with larger p, the outer density profile has to have a steeper slope to generate outward pressure gradient, leading to smaller r_crit and M_crit despite having similar center-to-edge density contrast. <ref>(c) also shows the ratio of the average density to the edge density (right axis). The average density is a factor of ∼ 1.5 - 2.5 larger than the edge density, with a mild dependence on σ_1D/c_s and p, and the largest ratio for the limiting non-turbulent case. The ratio ρ̅/ρ_e is only ∼ 1.5-2 for the highly turbulent cores that have extreme center-to-edge density contrast ρ_c/ρ_e > 10^2; while central densities are large, very little mass is involved. <ref>(d) plots the ratio of the gravitational free-fall time t_ff,avg≡[3π/(32 Gρ)]^1/2 at the mean core density to the effective crossing time t_cr≡ r_crit/(c_s^2 + σ_1D^2)^1/2. The large t_ff,avg/t_cr for σ_1D≫ c_s and small p may indicate that turbulence would quickly rearrange fluid elements in the outer part of the core before gravity can bring about collapse. For p<0.5, however, t_ff,avg/t_cr∼ 0.7 - 2, so turbulence would be less likely to disrupt cores before the structure as a whole collapses. Observations with limited spatial resolutions are able to constrain ρ and therefore t_ff,avg. 
However, the characteristic timescale of the evolution leading to the formation of a singularity (i.e., protostar) at the center is t_ff,c≡ [3π / (32 G ρ_c)]^1/2, which is shorter than t_ff,avg. Using the data shown in <ref>(c), we find an analytic approximation (ρ_c/ρ)_crit≈ 5.70 ( 1 + 0.8 σ_1D^2/c_s^2)^1.2 analogous to <ref>, which indicates that the ratio t_ff,c/t_ff,avg = (ρ_c/ρ)_crit^-1/2≈ 0.42(1 + 0.8σ_1D^2/c_s^2)^-0.6 is a decreasing function of σ_1D/c_s (see <ref>(a)). We note that t_ff,c is a factor of ∼ 3 times smaller than t_ff,avg for a transonically turbulent core. Since t_ff,avg/t_cr∼ 1 for transonic cores, this implies that the center would collapse on a timescale shorter than the outer part would evolve either due to gravity or to turbulence-driven restructuring. This would be all the more true for highly turbulent cores that reach critical conditions, as ρ_c/ρ̅≫ 1 in this limit. Observational sensitivity limits generally make it difficult to probe the outer envelopes of cores. It is common to define the core radius using the FWHM of the column density profile, which is known to approximate R_BE if the underlying density distribution is similar to the BE sphere <cit.>. We introduce the notation R_FWHM to denote this radius, i.e., R_FWHM≡ 2R_HM where the column density at R_HM is the half of the central value. To relate R_FWHM to the critical radius for cores having nonzero velocity dispersion, we first integrate to obtain 2D maps of TES, thereby obtaining column density profiles. <ref> compares examples of the column density profiles for the critical TES having zero, transonic, and mildly supersonic turbulent velocity dispersions (see <ref> for the corresponding volume density profiles normalized to the edge density). For the latter two, we assume the turbulent power-law index of p=0.5. Based on the column density profiles, we plot the ratio R_FWHM/r_crit in <ref>(b). This shows that while R_FWHM/r_crit∼ 0.8 when σ_1D≲ 0.5 c_s, this ratio is smaller by a factor of two or more for cores having supersonic turbulent velocity dispersions. That is, when the turbulence level is high, the radius of the core must extend well beyond R_FWHM for a core to be unstable to collapse. The dotted curve in <ref> shows an example of this. § DISCUSSION §.§ Collapse Scenario In the idealized theory, a quasi-equilibrium core supported by thermal and turbulent pressure is unstable if its maximum radius r_max exceeds r_crit. However, the TES model describes a single isolated object, without regard to any gravitational forces other than those produced by the core itself. In reality, hierarchical structure around the core contributes to the landscape of the gravitational potential, and r_max must also be effectively tidally limited. For example, in the case of two cores with equal mass, the effective tidal limit is half of their separation. More generally, if we take r_max to be the distance to the nearest saddle point of the gravitational potential, it scales linearly with the distance to an external structure of mass M_external, and inversely with 1 + (M_external/M_core)^1/2. While making a quantitative prediction of r_max is beyond the scope of this work, it is clear that the spatial extent of any realistic core forming inside a turbulent cloud is limited by the scale associated with the velocity perturbation creating the core. In observations, transformation of column density maps of molecular clouds into a dendrogram <cit.> can provide one way of characterizing r_max. 
<ref> schematically illustrates two alternative evolutionary scenarios leading to collapse or dispersal. An overall assumption is that a core is formed by turbulent flows that are locally converging, and the density of the core is determined by the strength of the converging motion. Larger-amplitude and longer-duration converging velocity perturbations will create higher local density contrast relative to ambient values. At the same time, turbulence on scales smaller than the overall converging flow will help to protect against collapse by providing support against gravity. If the converging flow is strong (creating large ρ_c) and/or the smaller-scale turbulence is weak (i.e., large r_s), the resulting core would have large enough ξ_s such that r_crit < r_max. The core would then subsequently undergo collapse leading to star formation. In contrast, when the converging flow is too weak or turbulence is too strong such that r_crit > r_max, the entire region within r_max is stable and the core would disperse unless it is further compressed by, e.g., passing shock waves. In Paper II, we will test the above scenario by directly comparing r_crit and r_max for cores forming in a large suite of numerical simulations. By tracking cores over time, we show that the net radial force within each core becomes negative (i.e., accelerating the inward bulk motions) when r_crit from TES theory first falls below r_max computed from the full gravitational potential. We also show that the internal density profiles of the simulated cores at the time they initiate gravitational collapse are consistent with the TES solutions obtained in this paper. We will connect the timescales of different stages of evolution to observed core lifetimes in future work. §.§ Projection Effect on Linewidth-Size Relation A basic assumption of the TES model is that the turbulent velocity dispersion increases with radius as a power law (<ref>). Although this assumption is motivated by several observational studies <cit.>, a direct comparison between <ref> and the observed linewidth-size relation is difficult because the latter involves the integration along the line-of-sight, weighted by the density and emissivity of the transition. To explore how the intrinsic turbulent velocity structures within a core are related to the observable line-of-sight velocity dispersion, it is necessary to perform numerical simulations in which prestellar cores form within the context of a larger scale GMC, which is done in Paper II. Our preliminary analyses from those simulations show that the line-of-sight non-thermal velocity dispersion traced by dense gas is approximately constant within a core, reminiscent of the so-called “coherent cores” <cit.>, even though the intrinsic velocity dispersion obeys a power-law. Careful modeling including density selection effect of different molecular tracers would be required to relate the observed linewidth-size relation to the intrinsic turbulent structures in three dimensions. §.§ Implication for Star Cluster Formation Massive star clusters such as those found within Carina or the Orion Nebula contain a few thousand solar masses within a few parsecs and have velocity dispersions of a few kilometers per second <cit.>. Given that the thermal Jeans mass under GMC conditions on large scales is ∼ 10-10^2 M_⊙, a longstanding question is how material can avoid early fragmentation prior to assembling the massive, compact structures that are cluster progenitors. 
Our TES solutions shed some light on this question: provided that the central density remains lower than the value given in <ref> or <ref>, a turbulent overdense patch of a GMC encompassing significant mass could in principle become progressively more concentrated without undergoing runaway collapse. Assuming T = 20 K, n_H = 500 cm^-3, and σ_1D = 3 km s^-1, typical of dense clumps in GMCs, <ref> yield M_crit∼ 2× 10^3 M_⊙ and r_crit∼ 3 pc, which are intriguingly similar to typical mass and radius of ONC-like clusters. The quantitative results from our TES model may therefore be key to realizing the equilibrium cluster formation scenario of <cit.>. This work was supported in part by grant 510940 from the Simons Foundation to E. C. Ostriker. Computational resources for this project were provided by Princeton Research Computing, a consortium including PICSciE and OIT at Princeton University. We are grateful to the anonymous reviewer for constructive comments on the manuscript. aasjournal sectionappsec § ALTERNATIVE FORMULATION In this appendix, we provide an alternative nondimensionalization of <ref> based on the external pressure P_ext rather than the central density. We find this formulation useful when probing the highly turbulent regime where the dependence of the critical quantities on ξ_s becomes extremely sensitive (see <ref>). In order to distinguish the alternative dimensionless variables from those in the main text, we attach a prime to each variable and define ξ' ≡G^1/2P_ext^1/2/c_s^2r, u'(ξ';ξ'_s,p) ≡lnP_eff/P_ext, m'(ξ';ξ'_s,p) ≡G^3/2P_ext^1/2/c_s^4 M_enc, χ'(ξ';ξ'_s,p) ≡ 1 + ( ξ'/ξ'_s)^2p. In terms of these dimensionless variables, <ref> becomes 1/ξ'^2∂/∂ξ'( χ' ξ'^2 ∂ u'/∂ξ') = - 4π e^u'/χ'. Starting from very small ξ', one can integrate <ref> in terms of the logarithmic radius t' ≡lnξ' with the initial conditions u' = u_c and ∂ u' / ∂ t = 0, until P_eff matches the external pressure P_ext (i.e., u' = 0) at the outer radius ξ_ext. With all other parameters fixed, the total mass of the sphere m'(ξ_ext) initially increases with u_c (starting from u_c = 0), and then reaches the maximum when u_c = u_c,crit. It is not difficult to see that this stationary point is equivalent to the point where K = 0 and thus defines the marginally stable solution. This also implies that, for a given external pressure, there is a maximum mass above which no equilibrium solution exists.
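The integration strategy described in this appendix is easy to reproduce numerically. The sketch below is a minimal implementation (the choices ξ'_s = 1 and p = 0.5 are illustrative only); it uses the fact, obtained by integrating the equation once from the centre, that the dimensionless enclosed mass is m' = -χ' ξ'^2 ∂u'/∂ξ'.

import numpy as np
from scipy.integrate import solve_ivp

def tes_outer_radius_and_mass(u_c, xi_s=1.0, p=0.5, xi0=1e-6, xi_max=1e4):
    # Integrate (1/xi^2) d/dxi ( chi xi^2 du/dxi ) = -4 pi exp(u) / chi,
    # chi = 1 + (xi/xi_s)^(2p), outward from xi0 with u = u_c and du/dxi = 0,
    # stopping where the effective pressure reaches the external value (u = 0).
    chi = lambda xi: 1.0 + (xi / xi_s) ** (2.0 * p)

    def rhs(xi, y):
        u, g = y                                   # g = chi * xi^2 * du/dxi, so m(xi) = -g
        return [g / (chi(xi) * xi ** 2),
                -4.0 * np.pi * xi ** 2 * np.exp(u) / chi(xi)]

    edge = lambda xi, y: y[0]                      # u = 0 marks the outer edge
    edge.terminal, edge.direction = True, -1.0

    sol = solve_ivp(rhs, (xi0, xi_max), [u_c, 0.0], events=edge, rtol=1e-9, atol=1e-12)
    xi_ext = sol.t_events[0][0]
    m_ext = -sol.y_events[0][0][1]
    return xi_ext, m_ext

# Scanning the central value u_c: the total mass rises and then reaches a maximum
# at u_c = u_c,crit, beyond which no equilibrium exists for the given external
# pressure, as stated above.
for u_c in np.linspace(0.5, 5.0, 10):
    print(round(u_c, 2), tes_outer_radius_and_mass(u_c))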
http://arxiv.org/abs/2409.02550v1
20240904091222
Star-Crossed Labours: Checking Consistency Between Current Supernovae Compilations
[ "William L. Matthewson", "Arman Shafieloo" ]
astro-ph.CO
[ "astro-ph.CO" ]
W.L. Matthewson (ORCID 0000-0001-6957-772X), A. Shafieloo (ORCID 0000-0001-6815-0337)
http://arxiv.org/abs/2409.03483v1
20240905124729
Defects and type D relativistic Toda lattice for some 5d gauge theories
[ "Kimyeong Lee", "Norton Lee" ]
hep-th
[ "hep-th" ]
Kimyeong Lee (a,b) and Norton Lee (c). (a) School of Physics, Korea Institute for Advanced Study, Hoegiro 85, Seoul 02455, Korea. (b) Beijing Institute of Mathematical Sciences and Applications (BIMSA), Huaibei Town, Huairou District, Beijing 101408, China. (c) Center for Geometry and Physics, Institute for Basic Science (IBS), Pohang 37673, Korea. [email protected], [email protected].
Abstract: We perform folding on the ADHM construction of the instanton moduli space from the SU to the SO group. A Young diagram description for the SO instanton is obtained after modifying the real and complex moment maps of the ADHM data. We study the Bethe/Gauge correspondence between the type D relativistic Toda lattice and the 5d N=1 folded theory. In particular we prove that the regular monodromy defect in the folded gauge theory is the stationary wavefunction of the type D relativistic Toda lattice.
Defects and type D relativistic Toda lattice for some 5d gauge theories (September 9, 2024)
§ INTRODUCTION Supersymmetric gauge theories with eight supercharges have an intrinsic integrability, known as the Bethe/Gauge correspondence. It has been an active research area since the groundbreaking work by Seiberg and Witten <cit.>. The R-matrix characterizing the quantum integrability is associated to the instanton counting of gauge theory with an Ω-background <cit.>. In the Nekrasov-Shatashvili limit, where 2d N=(2,2) super-Poincaré symmetry is restored, the 4d N=2 theory is described by an effective N=(2,2) theory that captures the integrable nature <cit.>. When the gauge group of the 4d gauge theory is G=A_N, the R-matrix is associated to a coproduct of N copies of the affine Yangian of 𝔤𝔩_1 <cit.>. In particular, the R-matrices of the integrable system can be explicitly constructed from the 4d gauge theory. The stationary states of the quantum integrable system are, under the Bethe/Gauge correspondence, the vacua of the effective 2d N=(2,2) theory. This beautiful correspondence has a K-theoretic uplift: the R-matrix, associated with the Ding-Iohara-Miki algebra <cit.>, can be constructed from 5d N=1 gauge theory with gauge group G = A_N <cit.>. The stationary states of the quantum integrable system are the vacua of the effective 3d N=2 theory. When the gauge group G is of BCD-type, the action of the quantum toroidal algebra on the supersymmetric gauge theory is less well understood. Unlike A-type, the instanton configurations of gauge theories with BCD-type gauge groups lack a Young diagram description. The qq-character, which acts as a quantum-quantum uplift of the Seiberg-Witten curve, is much more complicated for BCD-type <cit.> than for A-type <cit.> in the general Ω-background[Young diagram descriptions of instanton configurations exist in the unrefined limit for BCD-type <cit.>. However, such a limit does not yield quantum integrability.]. In this work, our goal is to study the Bethe/Gauge correspondence between the quantum type D relativistic Toda lattice (RTL for short) and 5d N=1 supersymmetric gauge theory.
On the integrable systems side, RTL of type BCD can be treated as type A with specific boundary condition <cit.>. On the gauge theory side, We fold the SU(2N) gauge theory with eight fundamental matters into the SO(2N) gauge theory by fine-tuning the Coulomb moduli parameters and the masses of the hypermultiplets. By restricting the SU(2N) ADHM moduli space to the SO sector, with the real and complex moment maps properly modified, we obtain a Young diagram description for the SO gauge theory. The instanton partition function is then given by an ensemble of Young diagrams similar to type A, with a much simpler qq-character associated to. The effective potential of 3d =2 theory can be obtained from the folded partition function in the Nekrasov-Shatashvili limit, whose vacuum equation is identified with the Bethe ansatz equation of the type D RTL. Alternatively, the same equation can be obtained by the analycity of the qq-character <cit.>. To find the stationary wavefunction of type D RTL, we introduce the regular surface defect <cit.>, also known as the Gukov-Witten monodromy-type defect <cit.>, defined by a singular boundary condition along a surface, which can be modeled by the orbifold construction. The parameters of the co-dimensional two defect become the coordinates on which the stationary states depend. The defect partition function obeys non-perturbative Dyson-Schwinger equation, which becomes Schrödinger-type equation in the Nekrasov-Shatashvili limit <cit.>. §.§ The main results and outline The main results of this paper are: * Establishing the folding from type A ADHM construction to type D, resulting Young diagram descriptions of instanton configuration for SO(2N) gauge group. * The Bethe ansatz equation of the type D RTL is recovered from the gauge theory through both the vacuum equation of the twisted superpotential and analyticity of qq-character. * The identification of the vacuum expectation value of the monodromy defect ψ being the stationary state of the type D RTL. This is done by exploiting the analyticity property of the qq-character in the presence of monodromy defect. This article is organized as follows. In Section. <ref> we give a brief review on the integrability of relativistic Toda lattice. In Section. <ref> we review on the ADHM construction for SU and SO gague group instanton. We fold the instanton partition function from SU to SO gauge group. By using the folded partition function, we first reproduce the well-known Bethe/Gauge correspondence between the Bethe ansatz equation of type D relativistic Toda lattice and the qq-character in the Nekrasov-Shatashvili limit in Section. <ref>. We further push forward by introducing co-dimensional two monodromy defect to the gauge theory. By exploiting the analytic property of fractional qq-character, we reproduce the quantum Hamiltonian of type D RTL from the gauge theory and identify the defect partition function as its wavefunction. Finally we point out our conclusion and point out furture direction in Section. <ref>. The authors thank Saebyeok Jeong, Taro Kimura, Yongchao Lü, Xin Wang for the discussion. The work of NL is supported by IBS project grant IBS-R003-D1. KL is supported in part by KIAS Grants PG006904 and the National Re- search Foundation of Korea (NRF) Grant funded by the Korea government (MSIT) (No. 2017R1D1A1B06034369). KL also thanks KITP for the program https://www.kitp.ucsb.edu/activities/strings24What is String Theory? Weaving Perspectives Together. 
This research was supported in part by grant NSF PHY- 2309135 to the Kavli Institute for Theoretical Physics (KITP). We would like to thank SCGP for the https://scgp.stonybrook.edu/archives/4126421st Simons Physics Summer Workshop "Landscapia" where the part of work was done. § RELATIVISTIC TODA LATTICE The integrability of Relativistic Toda lattice (RTL for short) is characterized by the R-matrix R_a_i,a_j :V_a_i⊗ V_a_j→ V_a_i⊗ V_a_j , satisfying the Yang-Baxter equation R_a_1,a_2(x-x') R_a_1,a_3(x) R_a_2,a_3(x') = R_a_2,a_3(x') R_a_1,a_3(x) R_a_1,a_2(x-x'). The 2× 2 Lax matrix is a special case of the R-matrix with the choice of V_a_1=V_a_2 = ^2 := V_ aux, V_a_3 = _n on each lattice site. _n is the Hilbert space of the n-th particle. V_ aux is called the auxiliary space. On each of the lattice site we define a 2 × 2 Lax operator as a GL_2-valued function <cit.>: L_n(x) = [ (x - _n) -R e^-_n; R e^_n 0 ]∈End(_n ⊗ V_aux) where _n=ħ__n and _n are the canonically conjugated momentum and coordinate of the n-th particle. R is the potential constant. For later convenience, we define (x) = 2sinh(x/2). The R-matrix acting on the space V_aux⊗ V_aux by R_a_1,a_2(x-x') = [ (x-x'+ħ) 0 0 0; 0 (x-x') ħ 0; 0 ħ (x-x') 0; 0 0 0 (x-x'+ħ) ] . The commutation relations between two elements in the Lax operator is governed by the Yang-Baxter RLL-relation (train track relation) R_a_1,a_2(x-x')L_a_1(x) L_a_2(x') = L_a_2(x') L_a_1(x) R_a_1,a_2(x-x') which can be verified true by direct computation. The monodromy matrix (x) of type A RTL with periodic boundary condition is an ordered product of the Lax matrices across N particles (x) = L_N(x) L_N-1(x) ⋯ L_2(x) L_1(x) ∈End(⊗_n=1^N ⊗ V_ aux). It is obvious that the monodromy matrix (x) satisfies the same Yang-Baxter equation as the Lax operator: R_a_1,a_2(x-x')_a_1(x) _a_2(x') = _a_2(x') _a_1(x) R_a_1,a_2(x-x'). The spectral curve of the integrable system is defined through introduction of spectral parameter y: q-det ((x) - y) = y^2 - (x) y + q-det(x) = 0. §.§ Toda lattice with boundary E. Sklyanin points out that the monodromy matrix of type BCD RTL can be obtained by introducing reflection matrix as boundary condition <cit.>. The transfer matrix of a RTL with boundary condition is given by t(x) = K_+(x) (x) K_-(x) ^-1(-x). (x) is usually taken as the same as closed RTL. K_±(x) are the reflection matrices obeying the reflection equation R_12(x-x') K_+,1(x) R_21(x+x'-ħ) K_+,2(x') = K_+,2(x') R_12(x+x'-ħ) K_+,1(x) R_21(x-x') and R_12(-x+x') K^T_-,1(x) R_21(-x-x'-ħ) K^T_-,2(x') = K^T_-,2(x') R_12(-x-x'-ħ) K^T_-,1(x) R_21(-x+x'). The reflection matrices for type BC RTL are constant matrices K_±∈End(V_aux), (x) ∈End(⊗^N_n=1_n ⊗ V_aux). In the case for type D, the reflection matrices depends on dynamical variables K_+ ∈End(_1 ⊗ V_aux), K_- ∈End(_N ⊗ V_aux), (x) ∈End(⊗^N-1_n=2_n ⊗ V_aux). Given a simple solution K_±(x) to the reflection equation, one can check that U^T_+(x) = _+(x) K_+(x) (^-1_+(-x))^T, U_-(x) = _-(x) K_-(x) _-^-1(-x) satisfy the same reflection equation if _±(x) satisfy (<ref>). The generating function of the integral of motion is t(x) = U_+ U_- = K_+(x) (x) K_-(x) (-x)^-1 = e^Nx[ 1 + ∑_n=1^N H_n e^-2nx]. The reflection matrix K_± satisfying the reflection equation takes the following form: K_+ = [ α_1^+ e^x/2 - α_2^+ e^-x/2 ^̣+(e^x + e^-x) - β^+; ^+ - ^̣+(e^x + e^-x) α_2^+ e^x/2 - α_1^+ e^-x/2 ], K_- = [ - α_1^- e^x/2 + α_2^- e^-x/2 ^- - ^̣-(e^x + e^-x); ^̣- (e^x + e^-x ) - ^̱- -_2^- e^x/2 + _1^- e^-x/2 ]. 
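For orientation, the sketch below assembles the closed-chain objects defined above in the classical limit, where the entries of the Lax matrices are ordinary numbers rather than operators, so no ordering issues arise; the stripped momentum and coordinate symbols are written as p_n and theta_n, and the sample values are arbitrary.

import numpy as np

def sh(x):
    # The convention used above: sh(x) = 2 sinh(x/2) = e^{x/2} - e^{-x/2}.
    return 2.0 * np.sinh(0.5 * x)

def lax(x, theta, p, R):
    # Classical 2x2 Lax matrix L_n(x) of the relativistic Toda lattice.
    return np.array([[sh(x - p), -R * np.exp(-theta)],
                     [R * np.exp(theta), 0.0]])

def monodromy(x, thetas, ps, R):
    # Ordered product T(x) = L_N(x) ... L_2(x) L_1(x).
    T = np.eye(2)
    for theta, p in zip(thetas, ps):
        T = lax(x, theta, p, R) @ T
    return T

thetas, ps = [0.3, -0.1, 0.4], [0.2, 0.0, -0.5]
T = monodromy(1.7, thetas, ps, R=0.1)
print(T)
print(np.trace(T))   # transfer matrix: the classical generating object for the spectral invariants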
The first Hamiltonian obtained from the transfer matrix reads H_1 = ∑_j=1+χ^N-χ 2cosh_j + ∑_j=1+χ^N-1-χ R^2 e^_j-_j+1 2cosh_j+_j+1/2 + ^̱+ + ^̱- + R α_1^+ e^-_1+χ/2-_1+χ + R α_2^+ e^_1+χ/2-_1+χ + R α_1^- e^-_N-χ/2+_N-χ + R _2^- e^_N-χ/2+_N-χ + ^̣+ R^2 e^-2_1+χ + ^̣- R^2 e^2_N-χ with χ = 0, type B & C, 1, type D. The boundary reflection matrices K_± of type D RTL are <cit.>: K_+ = [ R [e^x/2cosh(_1/2-_1) - e^-x/2cosh(_1/2+_1) ] cosh x - cosh_1; 2R^2 cosh 2_1 - 2^2 coshx R [-e^x/2cosh(_1/2+_1) + e^-x/2cosh(_1/2-_1) ] ], K_- = [ R [ -e^x/2cosh(_N/2+_N) + e^-x/2cosh(_N/2-_N)] 2R^2 [cosh 2_N - cosh x ]; 2cosh x - 2cosh_N R [e^x/2cosh(_N/2-_N) - e^-x/2cosh(_N/2+_N)] ]. The first Hamiltonian of D̂_N RTL is H_1 = ∑_j=1^N 2cosh_j + ∑_j=1^N-1 R^2 e^_j-_j+1 2cosh_j+_j+1/2 + R^2 e^-_1-_22cosh_1-_2/2 + R^2 e^_N+_N-1 2cosh_N-_N-1/2 + R^4 e^-2_2 + R^4 e^2_N-1. § INSTANTON PARTITION FUNCTION Let us start with a quick review on the ADHM construction of SU and SO group <cit.>. §.§ SU group The ADHM construction for the k-instanton of U(N) gauge group is given by the supersymmetric quantum mechanics of a U(k) gauge theory with N hypermultiplet in the fundamental representation. The ADHM data are defined by maps between the two vector spaces = ^N and = ^k: B_1,2∈End(), I ∈Hom(,), J ∈Hom(,). along with the real and complex moment maps given by μ_ = ∑_i=1^2 [ B_i, B_i^†] + II^† - J^† J, μ_ = [B_1,B_2] + IJ. The instanton moduli space are constructed by ^(SU)_N,k = { (B_1,B_2,I,J) | μ_ = ζid_k, μ_ = 0 }. Here ζ is the FI-parameter. The instanton partition function for =2 is the integration over the moduli space of instanton ^(SU) = ∑_k=0^∞^k _k^(SU), _k^(SU) = ∫ _ _N,k^(SU) 1. The partition function of the supersymmetric quantum mechanics enjoys a global symmetry = U(N) × U(1)_^2 of the ADHM construction of U(N) gauge group subjected to Ω-deformation, with its maximal torus _⊂ acts naturally on the instanton moduli space ^(SU)_N,k, allowing further equivariant localization. The Ω-deformation acts on the ADHM data by (q_1,q_2) · (B_1,B_2,I,J) → (q_1B_1,q_2B_2,I,q_1q_2J) where q_i = e^_i, i=1,2 are the exponentiated Ω-deformation parameters. As a result, the partition function is computed as rational function in the equivariant parameters ξ∈Lie(_): _k^(SU) = ∫_^(SU)_N,k ^_ 1 The integration can be evaluated by calculating residues at appropriate poles <cit.>. Let us show some details here for better demonstration. The fix points on the moduli space is determined by the condition that the U(k) and U(N) transformation of the maps can be compensated by Ω-deformation g B_1 g^-1 = q_1 B_1, g B_2 g^-1 = q_2 B_2, g I h^-1 = I, h J g^-1 = q_1q_2 J Here g = e^ϕ̂, h= e^â. ϕ̂∈Lie (GL(k)), â∈Lie (GL(N)). Denote I = ⊕^N_α=1 I_α, J = ⊕^N_α=1 J_α. The infinitesimal analogue of the symmetry transformation [ϕ̂,B_1] = _1B_1, [ϕ̂,B_2] = _2B_2, [B_1,B_2] + IJ = 0 ϕ̂I_α - I_α a_α = 0, a_α J_α - J_αϕ̂ = (_1+_2) J_α modulo the complex gauge transformation. In the matrix form ϕ̂= diag(ϕ_1,…,ϕ_k). Lemma: The instanton vector space satisfies the stability condition: = [B_1,B_2] I() when the FI-parameter ζ is positive. Proof: Assume there exists a _⊥ = - [B_1,B_2]I(). We can define a projection operator P_⊥:→_⊥ such that P_⊥ = P_⊥^2 = P_⊥^†, and P_⊥ B_1,2 = B_1,2 P_⊥, P_⊥ I = 0. Consider the projection on the real moment map ζid__⊥ = P_⊥μ_ P_⊥ = [(P_⊥ B_1 P_⊥), (P_⊥ B_1 P_⊥)^†] + [(P_⊥ B_2 P_⊥),(P_⊥ B_2 P_⊥)^†] - P_⊥ J^† J P_⊥ Taking the trace over the full vector space =^k gives 0 ≤ζ (id__⊥) = - ( (J P_⊥)^† (J P_⊥) ) ≤ 0 This means _⊥=0. 
q.e.d. Furthermore, since is independent of J one can simply set J=0. The complex moment map immediately implies [B_1,B_2]=0. The fix points, which are the poles in (<ref>), can be labeled by B_1^i-1B_2^j-1I_α, i,j=1,2,…, spamming the vector space . One can easily show that ϕ̂B_1^i-1B_2^j-1 I_α = (a_α + (i-1)_1 + (j-1)_1 ) B_1^i-1B_2^j-1 I_α These fix points are classified by a set of Young diagrams λ = (λ^(1),…,λ^(N)). The partition function becomes a statistic ensemble over states labeled by λ. The pseudo-measure associated to the instanton state λ is defined through the -functor which converts the additive Chern character class into multiplicative class: [ ∑_a_a e^_a] = ∏_a_a^-_a (rational/4d) ∏_a ( _a )^-_a (trigonometric/5d) ∏_aθ(e^-_a;p) (elliptic/6d) with (x) = e^x/2 - e^-x/2. In this paper we mostly only apply the trigonometric convention which corresponds to the five dimensional gauge theory. The pseudo measure associated to the instanton configuration λ=(λ^(1),…,λ^(N)) is given by [λ] = [ ^* + q_1q_2^* - P_1P_2 ^* - ^* ]. q_i=e^_i, i=1,2, are the exponentiated Ω-deformation parameters with P_i = 1-q_i. Here we are abusing the notation of the vector spaces and their Chern characters = ∑_α=1^N e^a_α, = ∑_α=1^N∑_(,)∈λ^(α) e^a_α + (-1)_1 + (-1)_2, = ∑_f=1^N_f e^m_f. Given a virtual character = ∑_a_a e^_a we denote by ^* = ∑_a_a e^-_a its dual virtual character. m_f, f=1,…,N_f, are the fundamental hypermultiplet masses. §.§ SO group The k-instanton moduli space of SO(2N) gague group admits a standard ADHM construction. The supersymmatric quantum mechanics of the k-instanton moduli space is described by a Sp(k) ⊂ U(2k) gauge theory with 1 hypermultiplet in the second rank anitsymmetric representation and N hypermultiplet in the fundamental representation <cit.>. The ADHM data (B_1,2,I,J) for k-instanton moduli space are defined by B_1,2 = [ X_1,2 Z'_1,2; Z_1,2 X^T_1,2 ], J = (V,V'), I^† = (-V^'*,V^*), where Z_1,2, Z'_1,2 are anti-symmetric such that B_1,2 are in the anti-symmetric representation of the Sp(k) satisfying <cit.>: B_1,2^T = - B_1 , = [ 0 1_k; -1_k 0 ]. In other words, B_1,2 is anti-symmetric. The real and complex moment maps take the form μ_ = [ M_ N'_; N_ -M_^T ], μ_ = [ M_ N'_; N_ -M_^T ] where M_ = [X_1,X_2] +Z'_1Z_2 - Z_2'Z_1 - V^'T V N_ = Z_1 X_2 - X_2^T Z_1 + X_1^T Z_2 - Z_2 X_1 + V^T V N'_ = Z_1' X^T_2 - X_2 Z_1' + X_1 Z'_2 - Z_2' X_1^T - V^'T V' and M_ = ∑_i [X_i,X_i^T] + Z_i^* Z_i - Z_i' Z_i^'* + V^'T V^'* - V^† V N_ = ∑_i Z_i X_i^† - X_i^* Z_i + Z_i^'*X_i - X_i^T Z_i^'* - V^T V^'* - V^'† V N'_ = ∑_i Z_i' X_i^* - X_i^† Z_i' + Z_i^* X_i^T - X_i Z_i^* - V^'T V^* - V^† V' Notice that N_, N_', N_, and N_' are symmetric matrices such that μ_ and μ_ are symmetric. The k-instanton moduli space is defined by ADHM data with vanishing real and complex moment map ^(SO)_N,k = { (B_1,B_2,I,J) | μ_ = 0, μ_ = 0 } The partition function of the supersymmetric quantum mechanics is given by an ensemble of instanton number ^(SO)(𝐛,;_SO) = ∑_k=0^∞^k_SO/2^k k!_k^(SO) with the k-instanton pseudo-measure taking the form <cit.> _k^(SO) = ∮∏_I=1^k dϕ_I/2π_+/_1 _2(2ϕ_I)^2(2ϕ_I+_+)(2ϕ_I-_+)/∏_β=1^N ( _+/2±ϕ_I ± b_β) ×∏_I≠ J(ϕ_I-ϕ_J)(ϕ_I-ϕ_J+_+)/(ϕ_I-ϕ_J+_1)(ϕ_I-ϕ_J+_2)×∏_I<J±(ϕ_I+ϕ_J)(_+± (ϕ_I+ϕ_J))/(_1± (ϕ_I+ϕ_J))(_2± (ϕ_I+ϕ_J)) Here b_β, β=1,…,N, are the Coulomb moduli parameters from the adjoint scalar Φ = diag{[ 0 -b_1; b_1 0 ], …, [ 0 -b_N; b_N 0 ]}. The contour integration can be evaluated by JK-residue technique <cit.>. 
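As a small numerical sanity check of the A-type building blocks introduced above (the Young-diagram character and the functor converting additive characters into products of sh factors), the sketch below evaluates the pure-gauge SU pseudo-measure for two tiny configurations. The placement of the duals and the omission of the matter term are our reading of the formula, and all numerical values are arbitrary.

import numpy as np

def sh(w):
    # sh(x) = e^{x/2} - e^{-x/2}, the trigonometric (5d) convention used above.
    return 2.0 * np.sinh(0.5 * w)

def E_functor(char):
    # char: dict {weight: multiplicity};  E[sum_a c_a e^{w_a}] = prod_a sh(w_a)^{-c_a}.
    out = 1.0
    for w, c in char.items():
        if c != 0:
            out *= sh(w) ** (-c)
    return out

def add(char, weight, mult=1):
    key = round(weight, 9)                 # merge equal weights so cancellations happen
    char[key] = char.get(key, 0) + mult

def box_weights(a, lam, e1, e2):
    # K = sum_alpha sum_{(i,j) in lam^(alpha)} e^{a_alpha + (i-1) e1 + (j-1) e2}.
    return [a[al] + (i - 1) * e1 + (j - 1) * e2
            for al in range(len(a))
            for i, row in enumerate(lam[al], start=1)
            for j in range(1, row + 1)]

def su_measure(a, lam, e1, e2):
    # E[ N^* K + q1 q2 K^* N - P1 P2 K^* K ]  (pure gauge; dual placement is an assumption).
    K = box_weights(a, lam, e1, e2)
    char = {}
    for b in a:
        for k in K:
            add(char, k - b)                         # N^* K
            add(char, e1 + e2 - k + b)               # q1 q2 K^* N
    for k in K:
        for kp in K:
            for s1, w1 in ((1, 0.0), (-1, e1)):      # P1 = 1 - q1
                for s2, w2 in ((1, 0.0), (-1, e2)):  # P2 = 1 - q2
                    add(char, kp - k + w1 + w2, -s1 * s2)
    return E_functor(char)

e1, e2 = 0.7, -0.4
# U(1), one instanton: the character collapses to e^{e1} + e^{e2}.
print(su_measure([0.0], [(1,)], e1, e2), 1.0 / (sh(e1) * sh(e2)))
# U(2), one instanton at the first colour: surviving weights {e1, e2, a12, e1+e2-a12}.
a = [0.5, -0.5]
print(su_measure(a, [(1,), ()], e1, e2),
      1.0 / (sh(e1) * sh(e2) * sh(a[0] - a[1]) * sh(e1 + e2 - a[0] + a[1])))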
Remarkably in the unrefined limit _1 = -_2 = ħ the JK-poles can be classified by N tuples of Young diagrams λ = (λ^(1),…,λ^N) with the total number of boxes ∑_β=1^N |λ^(β)| = k<cit.>. A pole location can be specified by a box (i,j) ∈λ^(β) by ϕ_i,j = b_β + (i-j) ħ The instanton pseudo-measure associated to the Young diagram λ reads ^(SO)[λ] = [ 2(+^*) + (q+q^*-2)(+^*) ] [ -(q^1/2+q^-1/2+2)] with q=e^ħ and the Chern characters are defined by = ∑_β=1^N e^b_β, = ∑_β=1^N ∑_(i,j)∈λ^(β) e^b_β + (i-j)ħ Here we define another -functor which converts the additive Chern character class into multiplicative class: [ ∑_a _a e^_a] = ∏_a (2_a)^-_a. The instanton partition function of SO(2N) gauge group in the un-refined limit is an ensemble over the Young diagrams ^(SO) = ∑_λ_SO^|λ|^(SO) [λ] §.§ Folding from SU to SO The folding of SU(2N) to SO(2N) instanton can be performed on the level of moduli space. The vector spaces are decomposed by =_+ ⊕_- and = _+ ⊕_-. The ADHM data (B_1,B_2,I,J) for 2k instanton are denoted by B_i = [ B_i,++ B_i,+-; B_i,-+ B_i,– ], I = [ I_++ I_+-; I_-+ I_– ], J = [ J_++ J_+-; J_-+ J_– ] B_i,σσ'∈Hom(_σ,_σ') are k × k matrices, I_σσ'∈Hom(_σ,_σ') are k × N matrices, J_σσ'∈Hom(_σ,_σ') are N × k matrices. The real and complex moment maps are μ_ = ∑_i=1^2 [B_i,B_i^†] + II^† - J^† J μ_ = [B_1,B_2] + IJ We define moduli space which aligns with the moment maps of the SO group ADHM data (<ref>)<cit.>: _2N,2k^(SU) = { (B_1,B_2,I,J) | μ_ = ζ_[ 1 0; 0 -1 ], μ_ = [ 0 ζ_^+; ζ_^- 0 ] . } It's straight forward to see that the SO instanton moduli space can be embedded in to such SU instanton moduli space when FI-parameters ζ_, ζ_^± are turned off ^(SO)_2N,k⊂^(SU)_2N,2k|_ζ_=ζ_^±=0 by comparing to the ADHM data defining the k-instanton moduli space of SO(2N) group. In particular it can be obtained by restraining the SU ADHM data components by * B_i,++ = B^T_i,– = X_i. * B_i,+- = Z'_i antisymmetric. * B_i,-+ = Z_i antisymmetric. * V = [ J_++; J_-+ ] = [ I_-+^T; I_–^T ]. * V' = [ J_+-; J_– ] = - [ I_++^T; I_+-^T ]. The identification of I and J map indicates that the Ω-deformation charge of I and J map needs to be identical (q_1,q_2) · (B_1,++,B_1,–,B_2,++,B_2,–,I,J) → (q_1B_1,++,q_1^-1B_1,–,q_2B_2,++,q_2^-1B_2,–,q_+^1/2 I, q_+^1/2J) with the exponentiated Ω-deformation parameters are q_1=e^_1, q_2=e^_2, q_+ = e^_+, _+ = _1+_2. The fix points on the moduli space is determined by the condition that the Sp(k) transformation can be compensated by the Ω-deformation as in (<ref>) with ϕ̂ satisfying the symplectic condition: ϕ̂^T = -ϕ̂ in matrix notation ϕ̂= (ϕ_1,…,ϕ_2k). The SO condition restricts a_β=-a_β+N:=b_β, β=1,…,N. The Sp(2k) condition on ϕ̂ can be written as (ϕ_l + ϕ_l') _l,l' = 0. The form of in (<ref>) implies the Young diagram must come in pairs ϕ_I+k = - ϕ_I, I=1,…,k. The SU group pseudo measure associated to the instanton configuration ϕ̂ is given by: [ϕ̂] = [ (1+q_+)(+^*)(+^*) - P_1P_2 (+^*)(+^*) - (+^*) ] where we denote = ∑_β=1^N e^b_β, = ∑_I=1^k e^ϕ_I, and = ∑_f=1^N_f e^m_f labels the fundamental matters. We consider B_1,2 being block diagonal: B_i,+- = B_i,-+ = 0 such that there are no mixture between the instantons in _+ and _- spaces. 
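Since the ensembles above, and the folded ensemble constructed below, run over N-tuples of Young diagrams with a fixed total number of boxes, it can be convenient to enumerate these labels explicitly; a minimal helper (ours):

from itertools import product

def partitions(k, max_part=None):
    # All integer partitions of k as non-increasing tuples.
    if k == 0:
        return [()]
    if max_part is None or max_part > k:
        max_part = k
    out = []
    for first in range(max_part, 0, -1):
        for rest in partitions(k - first, first):
            out.append((first,) + rest)
    return out

def young_tuples(N, k):
    # All N-tuples of Young diagrams with k boxes in total; these label the
    # fixed points / JK poles summed over in the instanton ensembles.
    labels = []
    for ks in product(range(k + 1), repeat=N):
        if sum(ks) == k:
            for combo in product(*(partitions(ki) for ki in ks)):
                labels.append(combo)
    return labels

print(len(young_tuples(2, 3)))   # 10 fixed-point labels for N = 2, k = 3
print(young_tuples(2, 3)[:3])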
The eight sub-matrices of the real and complex moment maps become μ_,++ = ∑_i=1^2 [B_i,++,B_i,++^†] + I_++I_++^† + I_+- I_+-^† - I_-+^* I_-+^T - I_–^* I_–^T = ζ_ 1_k, μ_,+- = I_++I_-+^† + I_+-I_–^† + I_-+^* I_++^T + I_-+^* I_+-^T = 0, μ_,-+ = I_–I_+-^† + I_-+I_++^† + I_+-^* I_–^* + I_++^* I_-+^T = 0, μ_,– = ∑_i=1^2 [B_i,–,B_i,–^†] + I_–I_–^† + I_-+I_-+^† - I_+-^* I_+-^T - I_++^* I_+ +^T = -ζ_ 1_k, and μ_,++ = [B_1,++,B_2,++] + I_++I^T_-+ + I_+-I_–^T = 0, μ_,+- = -I_++ I_++^T - I_+-I_+-^T = ζ^+_ 1_k, μ_,-+ = I_-+I^T_-+ + I_–I_–^T = ζ_^- 1_k, μ_,– = [B_1,–,B_2,–] - I_-+I_++^T + I_–I_+-^T = 0. Lemma: The vector space = _+ ⊕_- satisfies stability condition _+ = [B_1,++.B_2,++] [ I_++(_+) ⊕ I_+-(_-) ] _- = [B_1,–, B_2,–] [ I_++^*(_+) ⊕ I_+-^*(_-) ] when ζ_>0. Pf: It is enough to show the case that _+ is true. Suppose there exists a subspace _⊥ = _+ - [B_1,++.B_2,++] [ I_++(_+) ⊕ I_+-(_-) ]. We can define a projection operator P_⊥: _+ →_⊥ such that P_⊥=P_⊥^2=P_⊥^†, and P_⊥ B_i,++ = B_i,++P_⊥, P_⊥ I_++ = P_⊥ I_+- = 0. Consider the projection on the real moment map ζ_id__⊥ = P_⊥μ_,++ P_⊥ = ∑_i=1^2 [ P_⊥ B_i,++ P_⊥,(P_⊥ B_i,++ P_⊥) ^†] - P_⊥ I_-+^* I_-+^T P_⊥ - P_⊥ I_–^* I_–^T P_⊥ Taking the trace of the full vector space _+ = ^k gives 0 ≤ζ_id__⊥ = - ( P_⊥ I_-+^* I_-+^T P_⊥ + P_⊥ I_–^* I_–^T P_⊥) ≤ 0 This means _⊥ = 0. A similar arguemnt holds for _-. q.e.d. The stability allows us to set I_-+ = 0 = I_–, which further implies [B_1,++,B_2,++]=0=[B_1,–,B_2,–]. We set I_+-=0 such that the instanton configuration is labeled by N tuples of Young diagrams λ=(λ^(1),…,λ^(N)). The character of the vector spaces are given by _± = ∑_=̱1^N e^± b_ _+ = ∑_=̱1^N ∑_(i,j)∈λ^()̱ e^b_ q_+^1/2 q_1^i-1q_2^j-1 _- = ∑_=̱1^N ∑_(i,j)∈λ^()̱ e^-b_ q_+^-1/2 q_1^1-iq_2^1-j = _+^*. We can denote = _+ = _-^*, = _+ = _-^* The pseudo measure is given by ^(SU)[λ] = [ (q_+^1/2+q_+^-1/2) (+^*)(+^*) - P_12 (+^*) (+^*) - (+^*) ]. The perturbative contribution of the partition function is given by ^(SU)_ pert(,) = [ - (+^*)^2 - q_+^1/2 (+^*) /P_12^*] The combined contribution of the perturbative and instanton factor can be organized as ^(SU)[λ] = [ - (+^*)(+^*)^*/P_12^* + q_+^-1/2(+^*)/P_12^*] with := - P_12 q_+^-1/2. We introduce eight fundamental hypermultiplet to the SU gauge theory with masses { 0,_1/2,_2/2,_+/2, π, _1/2+π, _2/2 +π, _+/2+π}. The SU(2N) instanton pseudo measure (<ref>) becomes double of the instanton pseudo measure of SO(2N)(<ref>): ^(SO)[λ] = [ -2^* + (^*)^2 + ^2 /2P_12^* + q_+^-1/2(+^*)/2P_12^*], ^(SU)[λ] = ( ^(SO)[λ] )^2 The power squares comes from the double copy of SO(2N) when embedded to U(2N). The instanton partition function is an ensemble over all instanton configuration ^(SU)(,;_SU) = ∑_λ_SU^2|λ|^(SU)[λ] = ∑_λ( _SU^|λ|^(SO)[λ] )^2. Define modified instanton partition function: ^(SO)(;=_SU) = ∑_λ^|λ|^(SO)[λ]. In the unrefined limit _1=-_2=ħ, the character of the instanton vector space becomes = ∑_β=1^N ∑_(i,j)∈λ^(β) e^b_β + (i-j)ħ. The SU instanton pseudo measure associated to the Young diagram λ becomes ^(SU)[λ] = ( [ 2(+^*) + (q+q^*-2) (+^*) ] [ -(q^1/2+q^-1/2+2)] )^2 = ( ^(SO)[λ] )^2 The power square comparing to the SO(2N) instanton partition function pseudo-measure in the unrefined limit (<ref>) is indicated by the double copy from the folding. This agrees with the JK-residue computation <cit.>. We would like to point out here that although the unrefined limit allows the folding from SU(2N) gauge theory to SO(2N) on the non-perturbative level. 
It is not a limit one takes in the context of Bethe/Gauge correspondence. As mentioned the quantum uplift of the classical Bethe/Gauge correspondence is obtained in the NS-limit _2 → 0, _1 = ħ rather than the unrefined limit. Nevertheless, it still provide enough information to reconstruct the type D RTL in the next section. § BETHE/GAUGE CORRESPONDENCE §.§ =1 5d gauge theory The correspondence between the four-dimensional =2 supersymmetric gauge theories and non-relativistic algebraic integrable models is extended to the quantum version in <cit.> by introducing the Ω-deformation in the 4d gauge theories. The low energy physics of the 4d =2 supersymmetric gauge theory is effectively described by a twisted super potential ^(4D), an multi-valued function on the Coulomb branch defined in terms of the Nekrasov parition function in the Nekrasov-Shatashvili limit (NS-limit): ^(4D) (,_1=ħ;) = lim__2→ 0_2 log^(4D)(,_1,_2;). The twisted super potential of the 4d theory coincides with a 2d =(2,2) supersymmetric gauged sigma model that lives on the _1 subspace. There the two dimensional theory acquires a twisted mass _1 from the Ω-background, and the F-term equation is then identified with the Bethe ansatz equation of the quantum integrable system. The stationary states of the quantum integrable system are in one-to-one correspondence to the vacuua of the effective 2d =(2,2) theories. Such correspondence can be uplifted to in between 5d =1 supersymmetric gauge theories on S^1_R ×^4 and the relativistic integrable models <cit.>. In the NS-limit, the Ω-deformed 5d theory is effectively described by a 3d =2 supersymmetric theory on S^1_R×_2, which captures the integrable nature of the system <cit.>. Let us start with the instanton partition function derived in the last section (<ref>). For later convenience, the Coulomb moduli parameters are denoted by =(b_1,…,b_N). The instanton partition function is an ensemble over N Young diagrams λ=(λ^(1),…,λ^(N)): ^(SO)(;) = ∑_λ^|λ|^(SO)[λ] ^(SO)[λ] = [ - 2^*+(^*)^2+^2/2P_12^*] [ 1/2(1+q_1^1/2+q_2^1/2+q_+^1/2)( q_+^1/2/P_12 + q_+^-1/2^*/P_12^*) ]. Denote x_i̱ = b_+̱(i-1/2)_1 + λ^()̱_i _2, = ∑_(i̱) e^x_i̱, = P_1 q_1^-1/2 for a given instanton configuration λ. The SO instanton pseudo-measure can be expressed as _SO[λ] = [ -2P_1^*+P_1(^*)^2+P_1^2/2P_2^*] [ 1/2(1+q_1^1/2+q_2^1/2+q_+^1/2) ( q_2^1/2/P_2 + q_2^-1/2^*/P_2^*) ] = ∏_(i̱)≠ ('̱i')Γ_q_2( x_i̱-x_'̱ i'-_1/_2) /Γ_q_2( x_i̱-x_'̱ i'/_2)∏_(i̱),('̱i')[ Γ_q_2( x_i̱+x_'̱ i'/_2) /Γ_q_2( x_i̱+x_'̱ i'-_1/_2)Γ_q_2( -x_i̱-x_'̱ i'/_2) /Γ_q_2( -x_i̱-x_'̱ i'-_1/_2)]^1/2 ×∏_(i̱)[ _q_2^2( x_i̱+_2/2/_2) _q_2^2( x_i̱+_2/2-_1/2/_2) _q_2^2( x_i̱/_2) _q_2^2( x_i̱-_1/2/_2) /_q_2^2( -x_i̱+_2/2/_2) _q_2^2( -x_i̱+_2/2-_1/2/_2) _q_2^2( -x_i̱/_2) _q_2^2( -x_i̱-_1/2/_2) ]^1/2 Here the _q-function is the q-gamma function defined by Γ_q(x) = q^-(x-1)x/4∏_n=0^∞1-q^n+1/1-q^n+x = q^-(x-1)x/4(q;q)_∞/(q^x;q)_∞, _q(x+1)/_q(x) = q^x/2 - q^-x/2. Now we consider the Nekrasov-Shatashvili limit _2 → 0 where the =2 super-Poincaré symmetry is restored and the five dimensional =1 theory is now effectively described by the three dimensional =2 theory. Precisely speaking, the twisted superpotential of the two theories coincide when evaluated on their corresponding vacua <cit.>. In this limit, we can approximate the q-gamma function using the Stirling formula for q-gamma function <cit.>: lim_log q_2→ 0log_q_2^l( x/log q_2) = ∑_n=0^∞log (1-q_2^l(n+1)) - x^2/4_2 l + 1/l_2Li_2(e^lx) + (_2). Li_2 is dilogarithm. 
The instanton partition function has the asymptotics lim__2 → 0^(SO)(,_1,_2;) = e^-(,ħ=_1;)/_2 + (_2) The twisted superpotential (,m_f,_1;) is specified to (,ħ;) = 1/2∑_(i̱)≠ ('̱i')Li_2 ( e^± x_i̱± x_'̱ i'-ħ) - ( ± x_i̱± x_'̱ i'-ħ)^2/4 - Li_2 ( e^± x_i̱± x_'̱ i') + ( ± x_i̱± x_'̱ i')^2/4 + ∑_(i̱) x_i̱log + Li_2 ( 2x_i̱) - Li_2 (-2x_i̱ - ħ) The twisted superpotential (,ħ;) recovers the effective potential of pure 3-dimensional =2 theory <cit.>. The ensemble over instanton configuration ^(SO)[λ] is dominated by the limit shape instanton configuration Λ=(Λ^(1),…,Λ^(N)) defined by ^|Λ|^(SO)[Λ] = e^-(,ħ;)/_2. The vacuum equation of the twisted superpotential is given by exp( / x_i̱(,ħ;) ) = 1 which reads (2x_i̱)^2 (2x_i̱-ħ)^2 ∏_('̱i')≠ (i̱)(x_i̱± x_'̱i'-ħ)/(x_i̱± x_'̱i'+ħ) = 1. We recovered the Bethe ansatz equation of type D relativistic Toda lattice <cit.>. §.§ qq-character The introduction of defects in the gauge theory turns out to be an useful tool studying the quantum version of the Bethe/Gauge correspondence <cit.>. It is known that the Nekrasov-Shatashvili free energy, which works excellently for 4d, is not enough to establish 5d Bethe/Gauge correspondence. The correct quantization requires including a tower of non-perturbative effects <cit.>. Our approach here relies on the co-dimensional four observable known as the qq-character <cit.>– a quantum-quantum uplift of the Seiberg-Witten curve. Here the fundamental qq-character is a line defect warpping on the compactified S^1_R in the 5d gauge theory <cit.>. The fundamental qq-character observable can be obtained by considering a variation on the bulk instanton configuration (<ref>). This is done by adding an extra instanton at X=e^x, → + X, → - P_12q_+^-1/2X. The instanton pseudo measure varies by ^(SO)[-P_12q_+^-1/2X]/^(SO)[] = [ - 2(-P_12q_+^-1/2X)(^*-P_12^*q_+^1/2X^*)+(-P_12q_+^-1/2X)^2+(^*-P_12^*q_+^1/2X^*)^2/2P_12^* + 2^*+^2+(^*)^2/2P_12^*] ×[ -1/2 (1+q_1^1/2+q_2^1/2+q_+^1/2) (+^*+ X+X^*) + 1/2 (1+q_1^1/2+q_2^1/2+q_+^1/2) (+^*) ] = [ q_+^1/2 X (- P_12 q_+^-1/2X)^* + q_+^1/2 X^* + q_+^1/2 X + q_+^1/2X^*^* - P_12(X^*)^2/2 - P_12X^2/2] ×[ - 1/2 (1+q_1+q_2+q_+)(X^2 + (X^*)^2) ] = P(x)/Y(x+_+/2)Y(-x+_+/2)Y(x-_+/2)Y(-x-_+/2). Here we define the Y-function by Y(x)[λ] = [ -X ^* ] = [ - e^x ( -q_+^-1/2P_12)^* ] = ∏_β=1^N ( x - b_) ∏_(i,j)∈λ^()̱(x-b_-̱i_1-(j-1)_2) (x-b_-̱(i-1)_1-j_2)/(x-b_-̱(i-1)_1-(j-1)_2 ) (x-b_-̱i_1-j_2 ) = ∏_=̱1^N (x-b_-̱_1λ_1^()̱,T) ∏_i=1^λ^()̱( x-b_-̱ (i-1)_1 - _2 λ^()̱_i )/( x-b_-̱ i_1 - _2 λ^()̱_i ), and Y(-x) = [ -X^* ^* ] = ∏_β=1^N ( -x - b_) ∏_(i,j)∈λ^()̱(-x-b_-̱i_1-(j-1)_2) (-x-b_-̱(i-1)_1-j_2)/(-x-b_-̱(i-1)_1-(j-1)_2 ) (-x-b_-̱i_1-j_2 ) = (-1)^N ∏_=̱1^N (x+b_+̱_1λ_1^()̱,T) ∏_i=1^λ^()̱( x+b_+̱ (i-1)_1 + _2 λ^()̱_i )/( x+b_+̱ i_1 + _2 λ^()̱_i ). We also have P(x) = (2x)^2 (2x-_+)(2x+_+). We construct the fundamental qq-character as (x)[λ] = Y(x+_+/2)[λ] Y(-x-_+/2)[λ] + P(x)/Y(x-_+/2)[λ]Y(-x+_+/2)[λ] whose expectation value T(x) = ⟨(x) ⟩ = 1/^(SO)∑_λ^|λ|^(SO)[λ] (x)[λ] = E_0 X^N + E_1 X^N-1 + ⋯ + E_2NX^-N, E_0 = 1+_SU, N=4; 1, N>4. is an analytic function in x as the poles of the qq-character are canceled in the ensemble <cit.>. In short words, the poles of x coming from the individual qq-character function (x)[λ] are canceled in the ensemble. The Seiberg-Witten curve of the supersymmetric gauge theory can be recovered by turning off the Ω-deformation _1,2→ 0 T(x) = Y(x)Y(-x) + P(x)/Y(x)Y(-x). The Bethe ansatz equation that we obtained from the twisted super potential can also be obtained from the qq-character. 
In the NS-limit _2 → 0, _1 = ħ where the instaotn partition function is dominated by limit shape configuration (<ref>). we define the Q-function Y(x)[Λ] = Q(x+_1/2)/Q(x-_1/2), Q(x) = ∏_=̱1^N∏_i=1^∞(x-x_i̱), Y(-x)[Λ] = (-1)^N Q(-x+_1/2)/Q(-x-_1/2), Here x_i̱ is defined in (<ref>) with the instanton configuration as limit shape configuration Λ. The physical meaning of Q(x) is a canonical surface defect wrapping ^2_12=_1 ⊂ S^1 ×^4 in the 5d supersymmetric gauge theory. See Fig. <ref> for an illustration. A three dimensional =2 theory lives on the canonical surface defect whose twisted superpotential coincides with the 5d =1 superpotential <cit.>. The expectation value of qq-character (<ref>) in the NS-limit becomes T(x ) = ⟨(x) ⟩ = (x)[Λ] = Y(x+ħ/2)[Λ] Y(-x-ħ/2)[Λ] + P(x ) /Y(x-ħ/2)[Λ]Y(-x+ħ/2)[Λ] = Q(x+ħ)Q(-x-ħ)/Q(x) Q(-x) + P(x ) Q(x-ħ)Q(-x+ħ)/Q(x)Q(-x) We find the Q-function satisfying the Baxter T-Q equation by multiplying both side with Q(x)Q(-x): T(x)Q(x)Q(-x) = Q(x+ħ)Q(-x-ħ) + P(x ) Q(x-ħ)Q(-x+ħ). At the zeros of Q(x) or Q(-x), the vanishing right hand side requires -1 = P( ± x_i̱) Q(x_i̱-ħ)Q(-x_i̱+ħ)/Q(x_i̱+ħ)Q(-x_i̱-ħ) (2x_i̱)^2 (2x_i̱-ħ)^2 ∏_(β' i') ≠ (i̱)(x_i̱± x_β' i'-ħ)/(x_i̱± x_β'i'+ħ) = 1 This is the same Bethe ansatz equation that we obtained from the twisted superpotential (<ref>). In the context of integrable system through Bethe/Gauge correspondence, Q-observable is the eigen function of Baxter Q-operator. Its zeros are the Bethe roots of the integrable system. See <cit.> for the rules of Q-observables in both the gauge theory and integrable systems. §.§ Monodromy defect The four dimensional theory with co-dimensional two monodromy surface defect can be viewed as a theory on an orbifold. The localization technique extended to the computation of defect instanton partition function and also the expectation value of some observables. We in particular focus on the class of qq-observables <cit.>, which are fractionalized in the presence of surface defect. The main statement in <cit.> states certain vanishing condition for the qq-character. These vanishing conditions, called non-perturbative Dyson-Schwinger equation, are useful for deriving the KZ-equations satisfied by the defect partition function. In addition, in the NS-limit _2 → 0, the Dyson-Schwinger equation becomes the Schrödinger-type equation of the integrable system satisfied by the defect partition function. The co-dimensional two defect is introduced in the form of an _L-orbifold acting on ^4 = _1 ×_2 by (_1,_2) → (_1,ζ_2) with ζ^L = 1. The orbifold modifies the ADHM construction and creates a chain-saw quiver structure <cit.>. Such defect is characterized by a coloring function c:[N] →_L that assigns the representation _c()̱ of _L to each color =̱1,…,N. We use _ø to denote the one dimensional complex irreducible representation of _L, where the generator ζ is represented by the multiplication of exp( 2πø/L) for ø≡ø+L. In general one can consider any _L. A surface defect is called regular/full-type surface defect when L=2N and the coloring function c(α) is bijective. Hereafter, we will consider the case L=2N and coloring function of the form c()̱ = -̱1. The reader might notice that there are only N Coulomb moduli parameters =(b_1,…,b_N) and conclude that the monodromy defect considered here is not regular. Remember that SO(2N) can be embedded in U(2N) by restricting N of the 2N Coulomb moduli parameters =(a_1,…,a_2N) a_ = - a_2N-+̱1 = b_,̱ =̱1,…,N. 
On the level of the U(2N) gauge theory, the coloring function c:[2N]→_2N is bijective. Hence we conclude the monodromy defect is regular. The folding from SU to SO results in several changes on how the instanton Young diagrams are colored. First, since a Young diagram λ^()̱ labels instanton configuration generated by _+ = [B_1,++,B_2,++] I_++(_+) in (<ref>). The Ω-deformation charge of the ADHM data I is (q_1,q_2) · I → q_+^1/2I(<ref>). This leads to the fact that the boxes in a colored Young diagram is in the _ø+1/2 representation, with the additional 1/2 shift comes from I. The complex exponentiated gauge coupling fractionalized to 2N fractional coupling (_ø+1/2)_ø=0^2N-1: = ∏_ø=0^2N-1_ø+1/2, _ø+1/2+2N = _ø+1/2. The fractional coupling _ø+1/2 is assigned to the _ø+1/2 representation of the _2N-orbifold. The boxes in the Young diagrams are colored by the representation _ø based on their position. _ø+1/2^+ = { (,̱(i,j)) |∈̱[N], (i,j)∈λ^()̱, c()̱+j-1/2 = ω mod 2N }, _ø+1/2^- = { (,̱(i,j)) |∈̱[N], (i,j)∈λ^()̱, -c()̱-j+1/2 = ω mod 2N }, k_ø+1/2^+ = |_ø+1/2^+| , k_ø+1/2^- = |_ø+1/2^-| denote the number of squares in a colored Young diagram that is in the _ω+1/2 representation of _2N orbifold. Note that in the context of U(2N)–where we embed SO(2N)– a single Young diagram λ^()̱ is used for both a_=̱b_$̱ anda_2N-+̱1=-b_$̱. A square in the Young diagram is two-color. See Fig. <ref> for illustration. The symmetry leads to _ø+1/2^+ = _2N-ø-1/2^-. for all ø=0,…,2N-1. Here raise the question: how do we count the fractional instanton counting based on a given colored Young diagram λ̂? There are two potential options: * Option 1: Count ^+_ø+1/2 (or ^-_-ø-1/2) only. * Option 2: Count both ^±_ø+1/2, with each colored boxes count half instanton in ^+_ø+1/2, and half in ^-_2N-ø-1/2. As (<ref>) will cause a mixture between _ø+1/2 and _2N-ø-1/2 in later computation. Here we choose the first option. The defect partition function is the integration over the _2N invariant fields ^(SO) = ∑_λ̂∏_ø=0^2N-1_ø+1/2^k^+_ø+1/2^(SO)[λ̂] ^(SO)[λ̂] = [ -(_++_-)(_++_-)/2P̂_12^*]^_2N[ 1/2(1+q_1+q̂_2+q̂_+)( q̂_+^1/2/P̂_12 + q̂_+^-1/2^*/P̂_12^*) ]^_2N Here _+ = = _-^*. This separation will turn out to be very useful. The Ω-deformation parameters are charged with the orbifold action q̂_1 = q_1 _0, q̂_2 = q_2^1/2N_1, P̂_1 = (1-q_1) _0, P̂_2 = _0 - q_2^1/2N_1. All the ADHM data can be written in terms of the shifted parameters _+ = ∑_ø=0^N-1_+,ø+1/2 q_2^ø+1/2/2N_ø, _+,ø+1/2 = ∑_∈̱c^-1(ø) e^b̃_ø, _+ = ∑_ø=0^2N-1_+,ø+1/2 q_2^ø+1/2/2N_ø+1/2, _+,ø+1/2 = ∑_=̱1^N ∑_J=0^∞∑_c()̱+j-1/2=ω+1/2+NJ(i,j)∈λ^(α) e^b̃_c()̱ q_1^i-1/2q_2^J _+ = ∑_ø=0^2N-1_+,ø q_2^ø/2N_ø, _+,ø = _+,ø+1/2 - P_1 _+,ø+1/2 + q_2^_̣ø,0 P_1 _+,ø-1/2 _- = _+^*, _- = _+^*, _- = _+^*. Here we use the short handed notation _ø=0 for ø>N-1. The bulk instanton configuration is identified with ø=2N-1 colored partition ∑_ø=0^2N-1_+,ø = - P_12_+,2N-1/2 = We define a projection ρ(λ̂) = λ from the colored partition λ̂ to the bulk partition λ<cit.>, λ^()̱_i = ⌊λ̂^()̱_i+c()̱/2N⌋, =1,…,N; i=1,2,… The colored partition function can be reorganized into ^(SO) = ∑_λ^|λ|^(SO)[λ] ∑_λ̂∈ρ^-1(λ)∏_ø=0^2N-1 u_ø+1/2^k^+_ø-1/2 - k^+_ø+1/2^(SO)_ surface[λ̂] = ⟨ψ⟩^(SO) The monodromy defect observable ψ[λ] is defined by ψ[λ] = ∑_λ̂∈ρ^-1(λ)∏_ø=0^2N-1 u_ø+1/2^k^+_ø-1/2 - k^+_ø+1/2^(SO)_ surface[λ̂]. §.§ Fractional qq-character We consider the variation of the defect instanton pseudo-measure by adding an instanton based on an given instanton configuration λ̂ at X̂=e^xq_2^ø+1/2/2N_ø+1/2. 
The defect pseudo measure (<ref>) is changed by _ø+1/2^(SO)[-P̂_12q̂_+^-1/2X̂]/^(SO)[] = _ø+1/2[ q̂_+^1/2X̂ (- P̂_12q̂_+^-1/2X̂)^* + q̂_+^1/2X̂^* + q̂_+^1/2X̂ + q̂_+^1/2X̂^*^* - (1+q̂_+) (X̂^2+(X̂^*)^2) ]^_2N = _ø+1/2 P_ø(x)/Y_ø+1^(+)(x+_1/2+_2_̣2N-1,ø) Y_2N-ø-1^(-)(-x-_1/2-_2) Y_ø^(+)(x-_1/2) Y_2N-ø^(-)(-x+_1/2+_2(1-_̣ø,0)) The fractional Y-observable obeys Y_ø^(+)(x)[λ̂] = [ -e^x _ø^* ] = [ - e^x ( _ø - P_1 q_1^-1/2_ø+1/2 + q_1^-1/2 P_1 q_2^_̣ø,0_ø-1/2)^* ] Y_2N-ø^(-)(-x)[λ̂] = [ -e^-x_2N-ø^* ] = [ - e^-x( _2N-ø - P_1 q_1^-1/2_2N-ø-1/2 + q_1^-1/2 P_1 q_2^_̣ø,0_2N-ø+1/2)^* ] The fractional masses term are P_ø(x) = (2x+_1) ø = N-1, 2N-1; (2x-_1) ø = 0, N; 0 otherwise. The 2N fractional qq-character observables are _ø (X)[λ̂] = Y_ø+1^(+)(x+_1/2+_2_̣ø,2N-1)[λ̂] Y_2N-ø-1^(-)(-x-_1/2-_2)[λ̂] + _ø+1/2 P_ø(x)/Y_ø^(+)(x-_1/2)[λ̂] Y_2N-ø^(-)(-x+_1/2+_2(1-_̣ø,0))[λ̂]. Their expectation values ⟨_ø(X) ⟩ = T_ø(X) = 1/^(SO)∑_λ̂∏_ø=0^2N-1_ø+1/2^k_ø+1/2^+^(SO)[λ̂] _ø(X) [λ̂] are analytic functions in x<cit.>. §.§ Quantum Hamiltonian A function f(X=e^x) analytic in x means it can only have poles at X=∞ (x=+∞) and X=0 (x=-∞)[Here we assume f(X) has no branch cuts for X]. We can consider both large X and small X expansion of the function f(X). The two expansion must match since they come from the same function. We can expand the expectation value of the fractional qq-characters (<ref>)T_ø(X) in two ways. The X>>1 expansion yields T_ø(X) = ∑_j=0^∞ c^(+)_j,ø X^1/2-j. The small X<<1 expansion yields T_ø(X) = ∑_j=0^∞ c^(-)_j,ø X^j-1/2. The expansion coefficients c^(±)_j,ø are obtained through expanding of building blocks Y_ø^(+)(x) Y^(-)_2N-ø(x) of the fractional qq-characters. For ø=0,…,N-1: Y_ø^(+)(x)Y_2N-ø^(-)(-x) = ( e^-b_ø+1/2/2√(X) - e^b_ø+1/2/2/√(X)) ∏_∈^+_ø+1/2 q_1^-1/21-e^c_q_1^1/2/X/ 1-e^c_q_1^-1/2/X∏_∈^+_ø-1/2 q_1^1/21- e^c_q_1^-1/2/X/1-e^c_q_1^1/2/X ×∏_∈^-_2N-ø+1/2 q_1^1/21-e^-c_q_1^-1/2/X/1-e^-c_q_1^1/2/X∏_∈^-_2N-ø-1/2 q_1^-1/21-e^-c_q_1^1/2/X/1-e^-c_q_1^-1/2/X. For ø=N,…,2N-1: Y_ø^(+)(x)Y_2N-ø^(-)(-x) = ( e^-b_2N-ø-1/2/2/√(X) - e^b_2N-ø-1/2/2√(X)) ∏_∈^+_ø+1/2 q_1^-1/21-e^c_q_1^1/2/X/ 1-e^c_q_1^-1/2/X∏_∈^+_ø-1/2 q_1^1/21- e^c_q_1^-1/2/X/1-e^c_q_1^1/2/X ×∏_∈^-_2N-ø+1/2 q_1^1/21-e^-c_q_1^-1/2/X/1-e^-c_q_1^1/2/X∏_∈^-_2N-ø-1/2 q_1^-1/21-e^-c_q_1^1/2/X/1-e^-c_q_1^-1/2/X. Let us focus on the coefficient of X^-1/2 term in both the large and small X expansion of T_ø(X). Coming from the same function requires the coefficients to match: c^(+)_1,ø - c^(-)_0,ø = 0. Define the fractional variables _ø+1/2 = R^2-2_̣ø+1/2,N-1/2-2_̣ø+1/2,2N-1/2u_ø+3/2/u_ø+1/2. u_ø+1/2 = u_2N+ø+1/2, ∇^u_ø+1/2 = ∇^_ø-1/2 - ∇^_ø+1/2 such that by (<ref>) we can write ⟨ k^+_ø-1/2 - k^+_ø+1/2 + k^-_2N-ø+1/2 - k^+_2N-ø-1/2⟩_^(SO) = ∑_λ̂∏_ø'=0^2N-1_ø'+1/2^k^+_ø'+1/2^(SO)[λ̂] (k^+_ø-1/2 - k^+_ø+1/2 + k^-_2N-ø+1/2 - k^+_2N-ø-1/2) = 2 ∑_λ̂∏_ø'=0^2N-1_ø'+1/2^k^+_ø'+1/2^(SO)[λ̂] (k^+_ø-1/2 - k^+_ø+1/2) = 2 ∇^u_ø+1/2^(SO). We consider linear combination 0 = ∑_ø=0^2N-1Ĉ_ø( c^(+)_1,ø - c^(-)_0,ø) = Ĥ⟨ψ⟩ - ⟨ (+^*) ψ⟩ with properly chosen coefficients Ĉ_ø. 
The Hamiltonian takes the form Ĥ = ∑_ø=0^2N-1 e^-_ø+1/2 + ∑_ø=2^N-2_ø-1/2 e^-_ø+1/2+_ø-1/2/2 + ∑_ø=N+2^2N-2_ø-1/2 e^-_ø+1/2+_ø-1/2/2 + _N+1/2_N-1/2_N-3/2 e^-_N+3/2-2_N+1/2-2_N-1/2-_N-3/2/2 + _N-1/2( e^-_N+1/2-3_N-1/2/2 + e^_N-1/2-_N+1/2/2) + _N-3/2(1+_N-1/2 e^-_N-1/2-_N+1/2/2) e^_N-1/2-_N-3/2/2 + _N+1/2( e^_N+1/2-_N+3/2/2 + e^-3_N+1/2-_N+3/2/2) + _N+1/2_N-1/2( e^_N-1/2-2_N+1/2-_N+3/2/2 + e^-3_N-1/2-2_N+1/2-_N+3/2/2) + _1/2_2N-1/2_2N-3/2 e^-_3/2-2_1/2-2_2N-1/2-_2N-3/2/2 + _2N-1/2( e^-_1/2-3_2N-1/2/2 + e^_2N-1/2-_1/2/2) + _2N-3/2(1+_2N-1/2 e^-_2N-1/2-_1/2/2) e^_2N-1/2-_2N-3/2/2 + _1/2( e^_1/2-_3/2/2 + e^-3_1/2-_3/2/2) + _1/2_2N-1/2( e^_2N-1/2-2_1/2-_3/2/2 + e^-3_2N-1/2-2_2N+1/2-_2N+3/2/2) with _ø+1/2 = 2_1 ∇^u_ø+1/2 - b_ø+1/2 - _1 ø=0,…,N-1, 2_1 ∇^u_ø+1/2 + b_2N-ø-1/2 + _1 ø=N,…,2N-1. See Appendix. <ref> for calculation detail. In the NS-limit _2 → 0, _1≡ħ, the bulk instanton partition function ^(SO) is dominated by the limit shape configuration Λ(<ref>). The vev of the surface defect becomes lim__2 → 0⟨ψ⟩ = ψ[Λ] :=ψ(,_1;u_ø;) lim__2→0⟨ (+^*) ψ⟩ = (+^*)[Λ] ψ (,_1;u_ø;) The Dyson-Schwinger equation becomes Schrödinger equation Ĥψ (,_1;u_ø;) = (+^*)[Λ](,_1,) ψ (,_1;u_ø;). We now claim that the Hamiltonian Ĥ in (<ref>) is equivalent to D̂_N RTL Hamiltonian after folding and change of variables. This can be better illustrated on the classical level. First, we consider shifting u_N-1/2→ u_N-1/2√(e^2_N-1/2+e^-2_N-1/2), u_N+1/2→ u_N+1/2√(e^2_N+1/2+e^-2_N+1/2), u_2N-1/2→ u_2N-1/2√(e^2_2N-1/2+e^-2_2N-1/2), u_1/2→ u_1/2√(e^2_1/2+e^-2_1/2). Secondly we reduce the degree of freedom of the system by half through folding: u_ø+1/2 = u^-1_2N-ø-1/2, _ø+1/2 = - _2N-ø-1/2 Finally we take another change of variables on the reduced variables e^2_N-1/2→cosh_N-1/2/cosh_N-1/2/2, e^2_N-1/2→cosh_N-1/2-2_N-1/2/2/cosh_N-1/2+2_N-1/2/2, e^-2_1/2→cosh_1/2/cosh_1/2/2, e^-2_1/2→cosh_1/2-2_1/2/2/cosh_1/2+2_1/2/2. The Hamiltonian (<ref>) recovers the D̂_N RTL Hamiltonian (<ref>): Ĥ = ∑_ø=0^N-1 2cosh_ø+1/2 + ∑_ø=1^N-1 2 R^2 e^_ø+1/2-_ø-1/2cosh_ø+1/2+_ø-1/2/2 + R^4 e^2_3/2 + 2R^2 e^_1/2+_3/2cosh_1/2-_3/2/2 + R^4 e^-2_N-3/2 + 2R^2 e^-_N-1/2-_N-3/2cosh_N-1/2-_N-3/2/2 with log u_ø+1/2 = _ø+1/2. § SUMMERY AND FUTURE DIRECTION In this article we explore the Bethe/Gauge correspondence of type D relativistic Toda lattice and its gauge theory dual. Due to lack of Young diagram description for SO(2N) supersymmetry gauge theory instanton partition function, we consider U(2N) gauge theory with eight fundamental matters which shares the same toric diagram with the SO(2N) theory. The fundamental masses and Coulomb moduli of the U(2N) gauge theory are fine-tuned to allow folding from U(2N) gauge theory to SO(2N) theory. We study the instanton moduli space of U(2N) group with a modified real and complex moment map (<ref>). A class of solution of the modified real and complex moment mapa has the ADHM matrices B_1 and B_2 be block diagonal. We prove the stability condition for the ADHM vector spaces (<ref>) in the said instanton moduli space. The instanton configuration can be represented by a set of N-tuples of Young diagrams. In the unrefined limit _+ = 0 our result matches with the JK-residue computation <cit.>. In the NS-limit _2 → 0 the twisted superpotential (<ref>) matches with the effective potential of the 3d =2 theory <cit.>. The correspondence between the type D RTL and the folded gauge theory is established on two levels: First, we recover the Bethe ansatz equation of type D RTL from the qq-character of the gauge theory. 
Secondly, by introducing co-dimensional two orbifold defect, we recover the D̂_N RTL Hamiltonian from the fractional qq-character. The defect partition function in the NS-limit is proven as the wavefunction of the D̂_N RTL. Let us end this note with a few remarks and furture directions * We would like to study the potential folding of ADHM construction from type A to type B and C. In particular we are curious whether a proper modification of the real and complex moment maps can lead to Young diagram description of the instanton configuration. * Two change of variables (<ref>) and (<ref>) are performed in order to fully match the D̂_N RTL Hamiltonian with what we obtained from the fractional qq-characters. A similar but not identical change of variable is also observed in the study of dimer graphs for type D RTL <cit.>. Despite being straightforward in the classical level, the meaning of these change of variables (<ref>) and (<ref>) in the quantum level is not well understood at this moment. * Finally, we would like to know if the Bethe/Gauge correspondence can be uplifted to 6d =(1,0) theory. The R-matrix of the elliptic integrable system is associated to the elliptic Ding-Iohara-Miki algebra <cit.>. § INSTANTON PSEUDO MEASURE The pseudo-measure of SO gauge group: dϕ_I/2π∏_I_+/_1 _2(2ϕ_I)^2(2ϕ_I+_+)(2ϕ_I-_+)/∏_β=1^N ( _+/2±ϕ_I ± b_β) ×∏_I≠ J(ϕ_I-ϕ_J)(ϕ_I-ϕ_J+_+)/(ϕ_I-ϕ_J+_1)(ϕ_I-ϕ_J+_2)×∏_I<J±(ϕ_I+ϕ_J)(_+± (ϕ_I+ϕ_J))/(_1± (ϕ_I+ϕ_J))(_2± (ϕ_I+ϕ_J)) Can be rewritten into -functor by [ -(+^*)^2/2P_12^*] = [ -(+^* + q_+^-1/2P_12 + q_+^1/2P_12^*^* )^2/2P_12^*] = [ -(+^*)^2/2P_12^*] [ q_+^1/2 (+^*)(+^*) ] [ -1/2P_12(+^*)^2 ] where [ - 1/2 P_12 (+^*) (+^*) ] = [ -1/2 P_12( ∑_I=1^k e^ϕ_I + e^-ϕ_I) ( ∑_J=1^k e^ϕ_J + e^-ϕ_J) ] = [ -1/2 P_12∑_I<J e^ϕ_I + ϕ_J + e^ϕ_I-ϕ_J + e^ϕ_J-ϕ_I + e^-ϕ_I-ϕ_J] ×[ -1/2 P_12∑_I>J e^ϕ_I + ϕ_J + e^ϕ_I-ϕ_J + e^ϕ_J-ϕ_I + e^-ϕ_I-ϕ_J] ×[ - 1/2 P_12∑_I e^2ϕ_I + 2 + e^-2ϕ_I] = [ -P_12∑_I≠ J e^ϕ_I-ϕ_J] [ - P_12∑_I<J e^ϕ_I+ϕ_J + e^-ϕ_I-ϕ_J] [ -1/2P_12∑_I e^2ϕ_I +2+ e^-2ϕ_I] = ∏_I≠ J(ϕ_I-ϕ_J)(ϕ_I-ϕ_J+_+)/(ϕ_I-ϕ_J+_1)(ϕ_I-ϕ_J+_2)×∏_I<J±(ϕ_I+ϕ_J)(_+± (ϕ_I+ϕ_J))/(_1± (ϕ_I+ϕ_J))(_2± (ϕ_I+ϕ_J)) ×_+/_1_2×[ (2ϕ_I)(-2ϕ_I)(_++2ϕ_I)(_+-2ϕ_I)/(_1+2ϕ_I)(_1-2ϕ_I)(_2+2ϕ_I)(_2-2ϕ_I)]^1/2 The contribution from the fundamental masses in U(2N) is [ - (+^*) ] = [ - ∑_f=1^8 ∑_I e^m_f+ϕ_I + e^m_f-ϕ_I] = ∏_f=1^8 ∏_I(ϕ_I+m_f)(m_f-ϕ_I) Let us consider the case m_f+4 = m_f + π The mass contribution becomes ∏_f=1^4 ∏_I(ϕ_I+m_f)(m_f-ϕ_I) (ϕ_I+m_f+π)(m_f+π-ϕ_I) = ∏_f=1^4 ∏_I (2ϕ_I+2m_f) (2m_f-2ϕ_I) = [ -' (+^*) ], ' = ∑_f=1^4 e^m_f To match with the SO(2N) measure, we choose m_1 = 0, m_2 = _1/2, m_3 = _2/2, m_4 = _+/2 § DETAIL CALCULATION OF QQ-CHARACTER The fractional qq-characters (<ref>) _ø (X)[λ̂] = Y_ø+1^(+)(x+_1/2+_2_̣ø,2N-1)[λ̂] Y_2N-ø-1^(-)(-x-_1/2-_2)[λ̂] + _ø+1/2 P_ø(x)/Y_ø^(+)(x-_1/2)[λ̂] Y_2N-ø^(-)(-x+_1/2+_2(1-_̣ø,0))[λ̂] is build by fractional Y_ø^(±)(x). P_ø(x) is defined in (<ref>). For ø=0,…,N-1: Y_ø^(+)(x)Y_2N-ø^(-)(-x) = [ -X (_ø+1/2 - P_1 q_1^-1/2_+,ø+1/2 + P_1 q_1^-1/2_+,ø-1/2 )^* + X^* ( P_1 q_1^-1/2_-,2N-ø+1/2 - P_1q_1^-1/2_-,2N-ø-1/2)^* ] = ( e^-b_ø+1/2/2√(X) - e^b_ø+1/2/2/√(X)) ∏_∈^+_ø+1/2 q_1^-1/21-e^c_q_1^1/2/X/ 1-e^c_q_1^-1/2/X∏_∈^+_ø-1/2 q_1^1/21- e^c_q_1^-1/2/X/1-e^c_q_1^1/2/X ×∏_∈^-_2N-ø+1/2 q_1^1/21-e^-c_q_1^-1/2/X/1-e^-c_q_1^1/2/X∏_∈^-_2N-ø-1/2 q_1^-1/21-e^-c_q_1^1/2/X/1-e^-c_q_1^-1/2/X. 
For ø=N,…,2N-1: Y_ø^(+)(x)Y_2N-ø^(-)(-x) = [ X ( P_1 q_1^-1/2_+,ø+1/2 - P_1 q_1^-1/2_+,ø-1/2 )^* + X^* ( - _2N-ø-1/2^* + P_1 q_1^-1/2_-,2N-ø+1/2 - P_1q_1^-1/2_-,2N-ø-1/2)^* ] = ( e^-b_2N-ø-1/2/2/√(X) - e^b_2N-ø-1/2/2√(X)) ∏_∈^+_ø+1/2 q_1^-1/21-e^c_q_1^1/2/X/ 1-e^c_q_1^-1/2/X∏_∈^+_ø-1/2 q_1^1/21- e^c_q_1^-1/2/X/1-e^c_q_1^1/2/X ×∏_∈^-_2N-ø+1/2 q_1^1/21-e^-c_q_1^-1/2/X/1-e^-c_q_1^1/2/X∏_∈^-_2N-ø-1/2 q_1^-1/21-e^-c_q_1^1/2/X/1-e^-c_q_1^-1/2/X. The expectation value of qq-character can be calculated based on the building block. In the case ø=1,…,N-2: _ø(X) = ( e^-b_ø+3/2/2q_1^1/4√(X) - e^b_ø+3/2/2q_1^-1/4/√(X)) ∏_∈^+_ø+3/2 q_1^-1/21-e^c_/X/ 1-e^c_q_1^-1/X∏_∈^+_ø+1/2 q_1^1/21- e^c_q_1^-1/X/1-e^c_/X ×∏_∈^-_2N-ø-1/2 q_1^1/21-e^-c_q_1^-1/X/1-e^-c_/X∏_∈^-_2N-ø-3/2 q_1^-1/21-e^-c_/X/1-e^-c_q_1^-1/X. + _ø+1/2/e^-b_ø+1/2/2 q_1^-1/4√(X) - e^b_ø+1/2/2q_1^1/4/√(X)∏_∈^+_ø+1/2 q_1^1/2 1-e^c_/X/1-e^c_q_1/X∏_∈^+_ø-1/2 q_1^-1/21-e^c_q_1/X/1- e^c_/X ×∏_∈^-_2N-ø+1/2 q_1^-1/21-e^-c_q_1/X/1-e^-c_/X∏_∈^-_2N-ø-1/2 q_1^1/21-e^-c_/X/1-e^-c_q_1/X. Similarly for ø=N+1,…,2N-2 _ø(X) = ( e^-b_2N-ø-3/2/2q_1^-1/4/√(X) - e^b_2N-ø-3/2/2q_1^1/4√(X)) ∏_∈_ø+3/2^+ q_1^-1/21-e^c_/X/ 1-e^c_q_1^-1/X∏_∈_ø+1/2^+ q_1^1/21- e^c_q_1^-1/X/1-e^c_/X ×∏_∈_2N-ø-1/2^- q_1^1/21-e^-c_q_1^-1/X/1-e^-c_/X∏_∈_2N-ø-3/2^- q_1^-1/21-e^-c_/X/1-e^-c_q_1^-1/X. + _ø+1/2/e^-b_2N-ø-1/2/2q_1^1/4/√(X) - e^b_2N-ø-1/2/2q_1^-1/4√(X)∏_∈_ø+1/2^+ q_1^1/2 1-e^c_/X/1-e^c_q_1/X∏_∈_ø-1/2^+ q_1^-1/21-e^c_q_1/X/1- e^c_/X ×∏_∈_2N-ø+1/2^- q_1^-1/21-e^-c_q_1/X/1-e^-c_/X∏_∈_2N-ø-1/2^- q_1^1/21-e^-c_/X/1-e^-c_q_1/X. For the case ø=0: _0(X) = ( e^-b_3/2/2q_1^1/4√(X) - e^b_3/2/2q_1^-1/4/√(X)) ∏_∈^+_3/2 q_1^-1/21-e^c_/X/ 1-e^c_q_1^-1/X∏_∈^+_1/2 q_1^1/21- e^c_q_1^-1/X/1-e^c_/X ×∏_∈^-_-1/2 q_1^1/21-e^-c_q_1^-1/X/1-e^-c_/X∏_∈^-_-3/2 q_1^-1/21-e^-c_/X/1-e^-c_q_1^-1/X. + _1/2 X q_1^-1/2 - q_1^1/2/X/e^-b_1/2/2 q_1^-1/4√(X) - e^b_1/2/2q_1^1/4/√(X)∏_∈^+_1/2 q_1^1/2 1-e^c_/X/1-e^c_q_1/X∏_∈^+_-1/2 q_1^-1/21-e^c_q_1/X/1- e^c_/X ×∏_∈^-_1/2 q_1^-1/21-e^-c_q_1/X/1-e^-c_/X∏_∈^-_-1/2 q_1^1/21-e^-c_/X/1-e^-c_q_1/X. For the case ø=N: _N(X) = ( e^-b_N-3/2/2q_1^1/4/√(X) - e^b_N-3/2/2q_1^-1/4√(X)) ∏_∈_N+3/2^+ q_1^-1/21-e^c_/X/ 1-e^c_q_1^-1/X∏_∈_N+1/2^+ q_1^1/21- e^c_q_1^-1/X/1-e^c_/X ×∏_∈_N-1/2^- q_1^1/21-e^-c_q_1^-1/X/1-e^-c_/X∏_∈_N-3/2^- q_1^-1/21-e^-c_/X/1-e^-c_q_1^-1/X. + _N+1/2 X q_1^1/2 - q_1^-1/2/X/e^-b_N-1/2/2 q_1^-1/4/√(X) - e^b_N-1/2/2q_1^1/4√(X)∏_∈^+_N+1/2 q_1^1/2 1-e^c_/X/1-e^c_q_1/X∏_∈^+_N-1/2 q_1^-1/21-e^c_q_1/X/1- e^c_/X ×∏_∈^-_N+1/2 q_1^-1/21-e^-c_q_1/X/1-e^-c_/X∏_∈^-_N-1/2 q_1^1/21-e^-c_/X/1-e^-c_q_1/X. For the case ø=N-1: _N-1(X) = ( e^-b_N-1/2/2 q_1^-1/4/√(X) - e^b_N-1/2/2q_1^1/4√(X)) ∏_∈_N+1/2^+ q_1^-1/21-e^c_/X/ 1-e^c_q_1^-1/X∏_∈_N-1/2^+ q_1^1/21- e^c_q_1^-1/X/1-e^c_/X ×∏_∈_N+1/2^- q_1^1/21-e^-c_q_1^-1/X/1-e^-c_/X∏_∈_N-1/2^- q_1^-1/21-e^-c_/X/1-e^-c_q_1^-1/X + _N-1/2Xq_1^-1/2-q_1^1/2/X/e^-b_N-1/2/2 q_1^-1/4√(X) - e^b_N-1/2/2q_1^1/4/√(X)∏_∈_N-1/2^+ q_1^1/2 1-e^c_/X/1-e^c_q_1/X∏_∈_N-3/2^+ q_1^-1/21-e^c_q_1/X/1- e^c_/X ×∏_∈_N+3/2^- q_1^-1/21-e^-c_q_1/X/1-e^-c_/X∏_∈_N+1/2^- q_1^1/21-e^-c_/X/1-e^-c_q_1/X. 
Last but not least the ø=2N-1 case: _2N-1(X) = ( e^-b_1/2/2 q_1^1/4√(X) - e^b_1/2/2 q_1^-1/4/√(X)) ∏_∈_1/2^+ q_1^-1/21-e^c_/X/ 1-e^c_q_1^-1/X∏_∈_-1/2^+ q_1^1/21- e^c_q_1^-1/X/1-e^c_/X ×∏_∈_1/2^- q_1^1/21-e^-c_q_1^-1/X/1-e^-c_/X∏_∈_-1/2^- q_1^-1/21-e^-c_/X/1-e^-c_q_1^-1/X + _2N-1/2Xq_1^1/2-q_1^-1/2/X/e^-b_1/2/2 q_1^1/4/√(X) - e^b_1/2/2q_1^-1/4√(X)∏_∈_-1/2^+ q_1^1/2 1-e^c_/X/1-e^c_q_1/X∏_∈_-3/2^+ q_1^-1/21-e^c_q_1/X/1- e^c_q_1^-1/X ×∏_∈_3/2^- q_1^-1/21-e^-c_q_1/X/1-e^-c_/X∏_∈_1/2^- q_1^1/21-e^-c_/X/1-e^-c_q_1/X. (<ref>) implies k_ø+1/2^+ = k^-_2N-ø-1/2. The expectation value of the fractional instanton counting k_ø+1/2^+ can be rewrite as differential operator acting on the fractional instanton partition function ⟨ k_ø+1/2^+ ⟩^(SO) = ∇^_ø+1/2^(SO)). For ø=1,…,N-2, large X expansion: [ X^-1/2] T_ø(X) = ⟨ e^-b_ø+3/2/2 q_1^1/4 q_1^1/2 (k^+_ø+1/2-k^+_ø+3/2+k_2N-ø-1/2^–k^-_2N-ø-3/2) . ×[ -e^b_ø+3/2q_1^-1/2 + (q_1^-1-1) ( _+,ø+3/2 - _+,ø+1/2 - _-,2N-ø-1/2^* + _-,2N-ø-3/2^* ) ] . + _ø+1/2 e^b_ø+1/2/2 q_1^1/4 q_1^1/2(k_ø+1/2^+ - k_ø-1/2^+ - k_2N-ø+1/2^- + k_2N-ø-1/2^- ) ⟩ DS equation: 0 = ⟨ e^-b_ø+3/2/2 q_1^1/4 q_1^1/2 (k^+_ø+1/2-k^+_ø+3/2+k_2N-ø-1/2^–k^-_2N-ø-3/2) . ×[ -e^b_ø+3/2q_1^-1/2 + (q_1^-1-1) ( _+,ø+3/2 - _+,ø+1/2 - _-,2N-ø-1/2^* + _-,2N-ø-3/2^* ) ] + _ø+1/2 e^b_ø+1/2/2 q_1^1/4 q_1^1/2(k_ø+1/2^+ - k_ø-1/2^+ - k_2N-ø+1/2^- + k_2N-ø-1/2^- ) . + e^b_ø+3/2/2q_1^-1/4 q_1^-1/2(k^+_ø+1/2-k^+_ø+3/2+k^-_2N-ø-1/2 - k^-_2N-ø-3/2)⟩ For ø=N+1,…,2N-2, large X expansion: -[ X^-1/2] T_ø(X) = ⟨ e^b_2N-ø-3/2/2 q_1^1/4 q_1^1/2 (k^+_ø+1/2-k^+_ø+3/2+k_2N-ø-1/2^–k^-_2N-ø-3/2) . ×[ -e^-b_2N-ø-3/2q_1^-1/2 + (q_1^-1-1) ( _+,ø+3/2 - _+,ø+1/2 - _-,2N-ø-1/2^* + _-,2N-ø-3/2^* ) ] . + _ø+1/2 e^-b_2N-ø-1/2/2 q_1^1/4 q_1^1/2(k_ø+1/2^+ - k_ø-1/2^+ - k_2N-ø+1/2^- + k_2N-ø-1/2^- ) ⟩ DS equation: 0 = ⟨ e^b_2N-ø-3/2/2 q_1^1/4 q_1^1/2 (k^+_ø+1/2-k^+_ø+3/2+k_2N-ø-1/2^–k^-_2N-ø-3/2) . ×[ -e^-b_2N-ø-3/2q_1^-1/2 + (q_1^-1-1) ( _+,ø+3/2 - _+,ø+1/2 - _-,2N-ø-1/2^* + _-,2N-ø-3/2^* ) ] + _ø+1/2 e^-b_2N-ø-1/2/2 q_1^1/4 q_1^1/2(k_ø+1/2^+ - k_ø-1/2^+ - k_2N-ø+1/2^- + k_2N-ø-1/2^- ) + . e^-b_2N-ø-3/2/2q_1^-1/4 q_1^-1/2(k_ø+1/2^+-k_ø+3/2^++k_2N-ø-1/2^- - k_2N-ø-3/2^-)⟩ For ø=0, large X expansion: [ X^-1/2] T_0(X) = ⟨ e^-b_3/2/2 q_1^1/4 q_1^1/2 (k^+_1/2-k^+_3/2+k_-1/2^–k^-_-3/2) . . ×[ -e^b_3/2q_1^-1/2 + (q_1^-1-1) ( _+,3/2 - _+,1/2 - _-,-1/2^* + _-,-3/2^* ) ] ⟩ + _1/2⟨ e^b_1/2/2q_1^-1/4 q_1^1/2 (k_1/2^+-k_-1/2^++k_-1/2^–k_1/2^-) . . ×[ e^b_1/2q_1^1/2 + (q_1-1)q_2^-1( _+,1/2 - _+,-1/2 + _-,-1/2^* - _-,1/2^* ) ] ⟩ DS equation: 0 = ⟨ e^-b_3/2/2 q_1^1/4 q_1^1/2 (k^+_1/2-k^+_3/2+k_-1/2^–k^-_-3/2) . . ×[ -e^b_3/2q_1^-1/2 + (q_1^-1-1) ( _+,3/2 - _+,1/2 - _-,-1/2^* + _-,-3/2^* ) ] ⟩ + _1/2⟨ e^b_1/2/2q_1^-1/4 q_1^1/2 (k_1/2^+-k_-1/2^++k_-1/2^–k_1/2^-) . . ×[ e^b_1/2q_1^1/2 + (q_1-1)q_2^-1( _+,1/2 - _+,-1/2 + _-,-1/2^* - _-,1/2^* ) ] ⟩ + e^b_3/2/2q_1^-1/4⟨ q_1^-1/2 (k_1/2^+-k_3/2^++k_-1/2^–k_-3/2^-)⟩ + _1/2e^-b_1/2/2q_1^1/4⟨ q_1^-1/2 (k_1/2^+-k_-1/2^++k_-1/2^–k_1/2^-)⟩ For ø=N, large X expansion: [ X^-1/2] T_N(X) = ⟨ e^b_N-3/2/2 q_1^1/4 q_1^1/2 (k^+_N+1/2-k^+_N+3/2+k_N-1/2^–k^-_N-3/2) . . ×[ -e^-b_N-3/2q_1^1/2 + (q_1^-1-1) ( _+,N+3/2 - _+,N+1/2 - _-,N-1/2^* + _-,N-3/2^* ) ] ⟩ + _N+1/2⟨ e^-b_N-1/2/2q_1^-3/4 q_1^1/2 (k_N+1/2^+-k_N-1/2^++k_N-1/2^–k_N+1/2^-) . . ×[ e^-b_N-1/2q_1^-1/2 + (q_1-1) ( _+,N+1/2 - _+,N-1/2 + _-,N-1/2^* - _-,N+1/2^* ) ] ⟩ DS equation: 0 = ⟨ e^b_N-3/2/2 q_1^-1/4 q_1^1/2 (k^+_N+1/2-k^+_N+3/2+k_N-1/2^–k^-_N-3/2) . . 
×[ -e^-b_N-3/2q_1^1/2 + (q_1^-1-1) ( _+,N+3/2 - _+,N+1/2 - _-,N-1/2^* + _-,N-3/2^* ) ] ⟩ + _N+1/2⟨ e^-b_N-1/2/2q_1^-3/4 q_1^1/2 (k_N+1/2^+-k_N-1/2^++k_N-1/2^–k_N+1/2^-) . . ×[ e^-b_N-1/2q_1^-1/2 + (q_1-1) ( _+,N+1/2 - _+,N-1/2 + _-,N-1/2^* - _-,N+1/2^* ) ] ⟩ + e^-b_N-3/2/2q_1^1/4⟨ q_1^-1/2 (k_N+1/2-k_N+3/2+k_N-1/2-k_N-3/2)⟩ + _N+1/2 e^b_N-1/2/2q_1^3/4⟨ q_1^-1/2 (k_N+1/2^+-k_N-1/2^++k_N-1/2^–k_N+1/2^-)⟩ For ø=N-1, large X expansion: -[ X^-1/2] T_N-1(X) = ⟨ e^b_N-1/2/2q^1/4 q_1^1/2(k_N-1/2^+-k_N+1/2^+ + k^-_N+1/2-k^-_N-1/2) . ×. [ - e^-b_N-1/2q_1^-1/2 + (q_1^-1-1) ( _+,N+1/2 - _+,N-1/2 - _-,N+1/2^* + _-,N-1/2^* ) ] ⟩ - _N-1/2⟨ e^b_N-1/2/2 q_1^3/4 q^1/2 (k^+_N-1/2-k^+_N-3/2+k^-_N+1/2-k^-_N+3/2). ×. [ e^b_N-1/2q_1^1/2 + (q_1-1) ( _+,N-1/2 - _+,N-3/2 + _-,N+1/2^* - _-,N+3/2^* ) ] ⟩ DS equation: 0 = ⟨ e^b_N-1/2/2q^1/4 q_1^1/2(k_N-1/2^+-k_N+1/2^+ + k^-_N+1/2-k^-_N-1/2) . ×. [ - e^-b_N-1/2q_1^-1/2 + (q_1^-1-1) ( _+,N+1/2 - _+,N-1/2 - _-,N+1/2^* + _-,N-1/2^* ) ] ⟩ - _N-1/2⟨ e^b_N-1/2/2q_1^3/4 q^1/2 (k^+_N-1/2-k^+_N-3/2+k^-_N+1/2-k^-_N+3/2). ×. [ e^b_N-1/2q_1^1/2 + (q_1-1) ( _+,N-1/2 - _+,N-3/2 + _-,N+1/2^* - _-,N+3/2^* ) ] ⟩ + e^b_N-1/2/2q_1^-1/4⟨ q_1^-1/2(k_N-1/2^+-k_N+1/2^++k_N+1/2^–k_N-1/2^-)⟩ - q_1^-3/4_N-1/2 e^-b_N-1/2/2⟨ q_1^-1/2(k_N+1/2^-+k_N-1/2^+-k_N+3/2^–k_N-3/2^+)⟩ . For ø=2N-1, large X expansion: [ X^-1/2] T_2N-1(X) = ⟨ e^-b_1/2/2q_1^1/4 q_1^1/2(k_-1/2^+-k_1/2^++k_1/2^–k_-1/2^-). ×. [ - e^b_1/2q_1^-1/2 q_2^-1 + (q_1^-1-1) q_2^-1( _+,1/2 - _+,-1/2 + _-,-1/2^* - _-,1/2^* ) ] ⟩ - _-1/2⟨ e^-b_1/2/2 q_1^3/4 q_1^1/2 (k^+_-1/2-k^+_-3/2 + k^-_1/2 - k^-_3/2 ) . ×. [ e^-b_1/2q_1 + (q_1-1) ( _+,-1/2 - _+,-3/2 + _-,1/2^* - _-,3/2^* ) ] ⟩ DS equation: 0 = ⟨ e^-b_1/2/2q_1^1/4 q_1^1/2(k_-1/2^+-k_1/2^++k_1/2^–k_-1/2^-). ×. [ - e^b_1/2q_1^-1/2 q_2^-1 + (q_1^-1-1) q_2^-1( _+,1/2 - _+,-1/2 + _-,-1/2^* - _-,1/2^* ) ] ⟩ - _-1/2⟨ e^-b_1/2/2 q_1^3/4 q_1^1/2 (k^+_-1/2-k^+_-3/2 + k^-_1/2 - k^-_3/2 ) . ×. [ e^-b_1/2q_1 + (q_1-1) ( _+,-1/2 - _+,-3/2 + _-,1/2^* - _-,3/2^* ) ] ⟩ + e^b_1/2/2q_1^-1/4⟨ q_1^-1/2(k_-1/2^+-k_1/2^++k_1/2^–k_-1/2^-)⟩ - q_1^-3/4_-1/2 e^b_1/2⟨ q_1^-1/2 (k_1/2+k_-1/2-k_-3/2-k_3/2) ⟩ We consider linear combination of the DS equations ∑_ø=0^2N-1Ĉ_ø( c^(+)_1,ø - c^(-)_0,ø)_ø = 0 The coefficients are chosen to cancel unwanted terms _±,ø+1/2. 
0=Ĉ_2N-1 e^-b_1/2/2q_1^1/4q_1^k^+_-1/2-k^+_1/2 - Ĉ_0 ( e^-b_3/2/2q_1^1/4 q_1^k^+_1/2 - k^+_3/2 + _1/2 e^b_1/2/2 q_1^3/4q_2^-1 q_1^k^+_1/2-k^+_-1/2), ø+1/2 = 1/2 0=Ĉ_ø-1 e^-b_ø+1/2/2 q_1^1/4 q_1^k^+_ø-1/2-k^+_ø+1/2 - Ĉ_ø e^-b_ø+3/2/2 q_1^1/4 q_1^k^+_ø+1/2-k^+_ø+3/2 , ø+1/2 = 3/2,…,N-5/2 0=Ĉ_N-3 e^-b_N-3/2/2 q_1^1/4 q_1^k^+_N-5/2-k^+_N-3/2 - Ĉ_N-2 e^-b_N-1/2/2 q_1^1/4 q_1^k^+_N-3/2-k^+_N-1/2 + Ĉ_N-1_N-1/2 e^b_N-1/2/2 q_1^7/4 q_1^k^+_N-1/2-k^+_N-3/2, ø+1/2 = N-3/2 0= Ĉ_N-2 e^-b_N-1/2/2 q_1^1/4 q_1^k^+_N-3/2-k^+_N-1/2 - Ĉ_N-1( e^b_N-1/2/2 q_1^1/4 q_1^k^+_N-1/2-k^+_N+1/2 + _N-1/2 e^b_N-1/2/2 q_1^7/4 q_1^k^+_N-1/2-k^+_N-3/2) + Ĉ_N _N+1/2e^-b_N-1/2/2 q_1^1/4 q_1^k^+_N+1/2-k^+_N-1/2 , ø+1/2 = N-1/2 0=Ĉ_N-1 e^b_N-1/2/2 q_1^1/4 q_1^k^+_N-1/2-k^+_N+1/2 - Ĉ_N ( e^b_N-3/2/2 q_1^1/4 q_1^k^+_N+1/2-k^+_N+3/2 + _N+1/2 e^-b_N-1/2/2 q_1^1/4 q_1^k^+_N+1/2-k^+_N-1/2), ø+1/2 = N+1/2 0=Ĉ_ø-1 e^b_2N-ø-1/2/2 q_1^1/4 q_1^k^+_ø-1/2-k^+_ø+1/2 - Ĉ_ø e^b_2N-ø-3/2/2 q_1^1/4 q_1^k^+_ø-1/2-k^+_ø+3/2 , ø+1/2 = N+3/2,…,2N-5/2 0=Ĉ_2N-3 e^b_3/2/2 q_1^1/4 q_1^k^+_-5/2-k^+_-3/2 -Ĉ_2N-2 e^b_1/2/2 q_1^1/4 q_1^k^+_-3/2-k^+_-1/2 - Ĉ_2N-1_-1/2 e^-b_1/2/2 q_1^7/4 q_1^k^+_-1/2-k^+_-3/2 , ø + 1/2 = 2N - 3/2 0= Ĉ_2N-2 e^b_1/2/2 q_1^1/4 q_1^k^+_3/2-k^+_-1/2 - Ĉ_2N-1( e^-b_1/2/2q_1^1/4 q_1^k^+_-1/2-k^+_1/2 q_2^-1 - _-1/2 e^-b_1/2/2 q_1^7/4 q_1^k^+_-1/2-k^+_-3/2) - Ĉ_0 _1/2 e^b_1/2/2 q_1^3/4 q_2^-1 q_1^k^+_1/2-k^+_-1/2 , ø+1/2 = 2N - 1/2 We find the following solution: Ĉ_ø = e^b_ø+3/2/2q_1^-1/4 q_1^-∇^u_ø+3/2 ø = 0,…,N-3 Ĉ_ø = e^-b_2N-ø-3/2/2 q_1^-1/4 q_1^-∇^u_ø+3/2 ø = N,…, 2N-3 Ĉ_N-1 = e^-b_N-1/2/2 q_1^-1/4 q_1^-∇^u_N+1/2 + e^-b_N-1/2 - b_N-3/2/2 q_1^-1/4 q_1^-∇^u_N+1/2-∇^u_N+3/2 _N+1/2 q_1^-∇^u_N+1/2 Ĉ_N-2 = e^b_N-1/2/2 q_1^-1/4 q_1^-∇^u_N-1/2 + e^b_N-1/2/2 q_1^5/4 q_1^-∇^u_N+1/2-∇^u_N-1/2 _N-1/2 q_1^-∇^u_N-1/2 + e^-b_N-3/2/2 q_1 q_1^-∇^u_N+3/2 - ∇^u_N+1/2 - ∇^u_N-1/2 _N+1/2 q_1^-∇^u_N+1/2 _N-1/2 q_1^-∇^u_N-1/2 Ĉ_2N-1 = e^b_1/2/2 q_1^-1/4 q_1^-∇^u_1/2 + e^b_1/2+b_3/2/2 q_1^1/4 q_2^-1 q_1^-∇^u_1/2 - ∇^u_3/2_1/2 q_1^-∇^u_1/2 Ĉ_2N-2 = e^-b_1/2/2 q_1^-1/4 q_1^-∇^u_2N-1/2 - e^-b_1/2/2 q_1^5/4 q_1^-∇^u_2N-1/2-∇^u_1/2_2N-1/2 q_1^-∇^u_2N-1/2 - e^b_3/2/2 q_1^7/4 q_1^-∇^u_2N-1/2 - ∇^u_1/2 - ∇^u_3/2_1/2 q_1^-∇^u_1/2_2N-1/2 q_1^-∇^u_2N-1/2 The summation over Dyson-Schwinger equation can be written as differential operators acting on the defect partition function 0 = ∑_ø=0^2N-1Ĉ_ø (DS)_ø = ⟨( Ĥ - - ^* ) Ψ⟩. The Hamiltonian is given by Ĥ = ∑_ø=0^2N-1 e^-p̂_ø+1/2 + ∑_ø=2^N-2 2 _ø-1/2 e^-p̂_ø+1/2+p̂_ø-1/2/2 + ∑_ø=N+2^2N-2 2 _ø-1/2 e^-p̂_ø+1/2+p̂_ø-1/2/2 + _N+1/2_N-1/2_N-3/2 e^-p̂_N+3/2-2p̂_N+1/2-2p̂_N-1/2-p̂_N-3/2/2 + _N-1/2( e^-p̂_N+1/2-3p̂_N-1/2/2 + e^p̂_N-1/2-p̂_N+1/2/2) + _N-3/2(1+_N-1/2 e^-p̂_N-1/2-p̂_N+1/2/2) e^p̂_N-1/2-p̂_N-3/2/2 + _N+1/2( e^p̂_N+1/2-p̂_N+3/2/2 + e^-3p̂_N+1/2-p̂_N+3/2/2) + _N+1/2_N-1/2( e^p̂_N-1/2-2p̂_N+1/2-p̂_N+3/2/2 + e^-3p̂_N-1/2-2p̂_N+1/2-p̂_N+3/2/2) + _1/2_2N-1/2_2N-3/2 e^-p̂_3/2-2p̂_1/2-2p̂_2N-1/2-p̂_2N-3/2/2 + _2N-1/2( e^-p̂_1/2-3p̂_2N-1/2/2 + e^p̂_2N-1/2-p̂_1/2/2) + _2N-3/2(1+_2N-1/2 e^-p̂_2N-1/2-p̂_1/2/2) e^p̂_2N-1/2-p̂_2N-3/2/2 + _1/2( e^p̂_1/2-p̂_3/2/2 + e^-3p̂_1/2-p̂_3/2/2) + _1/2_2N-1/2( e^p̂_2N-1/2-2p̂_1/2-p̂_3/2/2 + e^-3p̂_2N-1/2-2p̂_2N+1/2-p̂_2N+3/2/2) with p̂_ø+1/2 = 2_1 ∇^u_ø+1/2. utphys
http://arxiv.org/abs/2409.02537v1
20240904085708
Astrochemistry on Galactic scales
[ "L. Colzi", "V. M. Rivilla", "M. T. Beltrán", "C. Y. Law", "E. Redaelli", "M. Padovani" ]
astro-ph.GA
[ "astro-ph.GA", "astro-ph.EP", "astro-ph.SR" ]
T_eff km s^-1 Centro de Astrobiología (CSIC-INTA), Ctra Ajalvir km 4, 28850 Torrejón de Ardoz, Madrid, Spain [email protected] INAF-Osservatorio Astrofisico di Arcetri, Largo E. Fermi 5, 50125 Firenze, Italy European Southern Observatory, Karl-Schwarzschild-Strasse 2, 85748 Garching, Germany Department of Space, Earth & Environment, Chalmers University of Technology, 412 96 Gothenburg, Sweden Max-Planck-Institut für extraterrestrische Physik, Gießenbachstraße 1, 85748 Garching bei München, Germany Colzi et al. Galactic astrochemistry The increasing number of observations towards different environments in the Milky Way, as well as theoretical and experimental works, are improving our knowledge of the astrochemical processes in the interstellar medium (ISM). In this chapter we report some of the main projects to study the chemical complexity and isotopic ratios across the Galaxy. High-sensitivity spectral surveys covering broad bandwidths towards Galactic Center molecular clouds (e.g. G+0.693-0.027) and star-forming regions (e.g. the hot core G31.41+0.31) are revealing very rich astrochemical reservoirs, which include molecules of prebiotic interest. At the same time, isotopic ratios (e.g. ^12C/^13C and ^14N/^15N) can give important information on the Galactic chemical evolution, as well as on chemical local processes due to the physical conditions of the molecular clouds. We also highlight the role of cosmic rays as a key agent affecting the interstellar chemistry described above. Astrochemistry on Galactic scales L. Colzi1 V. M. Rivilla1 M. T. Beltrán2 C. Y. Law3,4 E. Redaelli5 M. Padovani2 Received: 29-02-2024; Accepted: 07-07-2024 ============================================================================================================ § INTRODUCTION Astrochemistry is now living in a golden age. As of February 2024, more than 305 molecules have been detected in the interstellar medium (ISM) or circumstellar shells, of which ∼90 from 2020, without including isotopologues. In recent years, the improved sensitivity and broadband capabilities of current telescopes (e.g. GBT 100m, Yebes 40m, IRAM 30m, and ALMA) have allowed the discovery of molecular species with increasing complexity. Many observational campaigns have been carried out to study the molecular complexity within the whole Milky Way, from its center to the outer parts. In the Galactic Center, the Sgr B2(N) hot molecular core and surrounding envelope have provided the first detections in space of many species, including proposed precursors of amino acids (amino acetonitrile, NH2CH2CN; ); branched species (iso-propyl cyanide, i-C3H7CN, ); and chiral molecules (propylene oxide, CH3CHCH2O, ). Towards the neighbouring molecular cloud G+0.693-0.027 many molecular RNA precursors have first been detected, such as hydroxylamine (NH2OH; ), ethanolamine (NH2CH2CH2OH, ), 1-2-ethendiol ((CHOH)2, ). Many more chemical studies will be done in the Galactic Center in the context of the ACES (The ALMA CMZ Exploration Survey) Large Program (ID: 2021.1.00172.L; PI: Steven Longmore). In the Galactic Disk, the GOTHAM and QUIJOTE ultra-deep spectral surveys have demonstrated that the dark cloud TMC-1 exhibits an extremely rich chemistry, including aromatic species, such as benzonitrile (), indene (; ), or cyanonaphtalene (), among many others. 
Towards star-forming regions, detailed studies of archetypical low-mass and high-mass protostellar environments, such as IRAS 16293-2422 B () and the G31.41+0.31 hot core (), respectively, have provided a complete view of rich chemical reservoir. And recently, complex chemistry has been revealed in sources located in the outer edge of the Galaxy, at galactocentric distances >12 kpc (). Within the molecules detected in the ISM, there are also many isotopologues, i.e. molecules in which one of the atoms is substituted by a less abundant isotope. Isotopic ratios of elements, such as ^14N/^15N, can probe the Milky Way chemical evolution, and can be used as indicators of stellar nucleosynthesis (e.g. ). Molecular isotopic ratios are also affected by fractionation processes, which favour the rarer isotopic substitution over the most abundant isotopologue, depending on the local physical conditions of the cloud (e.g. density, temperature, UV field), shaping the measured ratios from the initial one produced by nucleosynthesis. The chemical processes that synthesize the molecules and their isotopologues in the ISM are affected by the presence of cosmic rays. In particular, cosmic rays with energy below 1 GeV influence the thermochemistry of the shielded molecular gas, which results from the ionisation of both atomic and molecular hydrogen. After the ionisation of H_2, H_2^+ readily reacts with nascent H_2 to form the trihydrogen cation H_3^+ . This initiates a series of reactions that lead to the formation of more and more complex species up to prebiotic molecules <cit.>. In this chapter, we will focus on the most recent discoveries regarding chemical complexity in the Galactic Center molecular cloud G+0.693-0.027 and the hot core G31.41+0.31 (Sect. <ref>), isotopic ratios measured across the Galaxy (Sect. <ref>), and we will highlight the importance of cosmic rays for astrochemical processes (Sect. <ref>). § CHEMICAL COMPLEXITY IN THE GALAXY §.§ A Galactic Center molecular cloud: G+0.693-0.027 The central molecular zone (CMZ, inner 300 pc of the Galaxy) contains 80% of the dense molecular gas in the Galaxy, but the star formation rate is one order of magnitude lower than in the disk (∼0.1 M_⊙/yr vs. 1.5–2 M_⊙/yr, ). The CMZ presents extreme conditions, like a high level of turbulence due to the large internal cloud velocity dispersion (∼15-50 ), and widespread high kinetic temperatures (from 50 up to >100 K; e.g. ), which could prevent star formation. The G+0.693-0.027 molecular cloud (hereafter G+0.693) is located ∼55 towards the NE of the Sgr B2(N) hot core. The source presents typical linewidths of 15–25 , a density of ∼10^4 cm^-3 and T_ kin>100 K (e.g. ). G+0.693 does not show any signposts of ongoing star formation, such as ultracompact HII regions, H_2O masers, or dust continuum point sources (). G+0.693 is probably affected by low-velocity shocks as a consequence of a large-scale cloud–cloud collision (), likely responsible for its rich chemistry due to the sputtering of molecules from dust grains (see below). The molecular emission from this region is sub-thermally excited because of its low density, and presents excitation temperatures of 5–20 K (much lower than the T_ kin of 50–150 K). This makes the spectrum much less crowded of lines since only those with low energies are excited, and allow for the identification of many molecular species without contamination as in hot cores (e.g. ). 
<cit.> studied the D/H ratios of HCN, HNC, HCO^+, and N_2H^+ towards G+0.693 using multiple rotational transitions. The authors discovered the presence of two line components: (i) a turbulent one with a linewidth of 20 typical of the GC and already known from other studies for this source (broad component), (ii) a new less turbulent component with a linewidth of 9 , which is very clear from the high-J transitions of the molecules studied (narrow component). <cit.> estimated for the broad component a temperature, T_ kin, of 100 K and H_2 densities of 0.3–3×10^4 cm^-3, and for the narrow component T_ kin≤30 K and H_2 densities increased by at least one order of magnitude to 0.05–1×10^6 cm^-3. This is the first indication that a substantial fraction of gas in the GC (10% in column density) has the density and temperature ideal to form the new generation of stars. Regarding the chemical complexity in G+0.693, <cit.> and <cit.> already showed that it is rich in complex O- and N-bearing species, respectively. In recent years, to fully understand the chemical complexity that can be reached in G+0.693, a broadband ultra-high sensitivity spectral survey has been carried out, using the GBT, Yebes 40m, IRAM 30m and APEX telescopes. These observations cover frequencies from ∼12.7 up to ∼276 GHz, reaching sensitivities down to sub-mK levels (e.g. ). Remarkably, this survey has provided the discovery of 16 new interstellar species since 2019. These molecules are members of increasing complexity of different chemical families that contain the five chemical elements essential for life (CHONPS): CH-bearing species, such as (CH3)2CCH2 (); O-bearing species (HCOCOOH, ; (CHOH)2, ; or C3H7OH, ); N-bearing species (HNCN, ; C2H3NH2, ; or HCCCHNH, ); NO-bearing species (NH2OH, ; C2H5NCO, ; or NH2CH2CH2OH, ); S-bearing species (HCOSH, ; or HOCS^+, ); and P-bearing species (PO^+, ). Many of these new molecules have been proposed as key molecular precursors of RNA nucleotides in prebiotic experiments that mimic the conditions of early Earth (), which suggests that interstellar chemistry could have played a relevant role in the origin of Life in our planet. §.§ A Galactic Disk star-forming region: the G31.41+0.31 hot molecular core G31.41+0.31 (hereafter G31) is a well-known hot molecular core located at a distance of 3.75 kpc, with a luminosity ∼5×10^4 L_⊙ (). Interferometric observations of CH_3CN and other complex organic molecules (COMs) have revealed a velocity gradient in the core, interpreted as the rotation of a toroid (), as well as evidence of infall (e.g., ), while SiO observations have detected several outflows associated with the core (). ALMA and VLA high-resolution observations have resolved the dust continuum and free-free emission into at least four sources, indicating that G31 is hosting a massive protocluster (). G31 is very chemically rich core (e.g., ), presenting prominent emission in a large number of COMs. Indeed, the first detection of glycolaldehyde (CH_2(OH)CHO) outside the GC was made towards G31 (). Recently, the “G31 Unbiased ALMA sPectral Observational Survey” (GUAPOS, ; ID: 2017.1.00501.S; PI: Maite Beltrán) has studied the molecular emission across the whole ALMA Band 3, from ∼84 GHz up to ∼116 GHz, with an angular resolution of 1.2 (4500 au). Figure <ref> shows the spectrum towards the center of G31, adapted from <cit.> and <cit.>. The GUAPOS project has allowed to identify so far more than 40 molecules in the region (and multiple isotopologues), including 23 COMs (). 
A summary of the main results is presented below: ∙ <cit.> analyzed the emission of three C_2H_4O_2 isomers, that is, methyl formate (CH_3OCHO), acetic acid (CH_3COOH), and glycolaldehyde (CH_2OHCHO), and observed that it is very compact and associated with the inner part of the core. The abundances of methyl formate and acetic acid toward G31 are higher than those detected toward the well-studied prototype of hot cores, such as Sgr B2(N), and of hot corinos, such as IRAS IRAS 16293-2422B. The comparison of the abundances of the three isomers with those predicted by chemical models suggests the necessity of grain-surface routes, at least for the formation of CH_3OCHO see e.g. ). ∙ Among COMs, those containing the NCO backbone (peptide-bond) are of great interest since peptide-bonds can link amino acids to form proteins (e.g. ). <cit.> studied peptide-like bond molecules, such as isocyanic acid (HNCO), formamide (HC(O)NH_2), methyl isocyanate (CH_3NCO), and also more complex species such as acetamide (CH_3C(O)NH_2) and N-methylformamide (CH_3NHCHO). These molecules have been observed together for the first time in the disk of our Galaxy, outside the Galactic Center. The results from this work suggest that these molecules have likely been formed on the surface of interstellar grains during the earliest (and cold) stages of star formation. Once the protostars are born, they start to heat their surroundings and drive molecular outflows. These physical processes are responsible for the evaporation and/or the sputtering of the molecules from the surface of the dust grains. ∙ The analysis of nine O-bearing and six N-bearing COMs by <cit.> indicated that their abundances range from 10^-10 to 10^-6. From the comparison of the abundances of these species with those estimated for other twenty-seven sources, it is evident that there is not a unique template for the abundances of COMs in hot molecular cores. This could be the result of the different physical properties of the sources and their different evolution with time, which suggests the importance of the thermal history for their chemistry. ∙ <cit.> have detected P-bearing molecules in positions separated from the hot core and located along the outflows identified in G31 in previous studies. PN is clearly detected, while PO is tentatively detected. The fact that PN is tightly associated with SiO and other typical shock tracers, such SO, and the lack of a clear detection of PN towards the hot core provides a robust confirmation that PN is likely a product of shock chemistry, as previously shown by <cit.> and <cit.>, and allows us to rule out formation pathways in hot gas. ∙ <cit.> have analyzed more than 50 species, including oxygen, nitrogen, sulfur, phosphorus, and chlorine species and have compared their abundances with those observed in other chemically-rich sources that represent the initial and last stages of the formation of stars and planets: the hot corino in the Solar-like protostar IRAS 16293–2422 B, and the comets 67P/Churyumov- Gerasimenko and 46P/Wirtanen. The comparative analysis reveals that O- and N-bearing molecules exhibit a good correlation for all sources, suggesting a chemical heritage of these species during the process of star formation, and hence an early phase formation of the molecules. On the other hand, S- and P-bearing species do not follow the same correlation. 
While they are less abundant in the gas phase of star-forming regions, likely because they are predominantly trapped on the surface of icy grains, the cosmic abundances are recovered in the coma of the comet. § ISOTOPIC RATIOS IN THE GALAXY Isotopic ratios measured using molecules and their isotopologues in molecular clouds depend on the chemical evolution of the Galaxy due to stellar nucleosynthesis, and also on local effects due to chemical fractionation. In the following these two processes are described. §.§ Nucleosynthesis processes Isotopic ratios can be used as indicators of stellar nucleosynthesis. For example, ^12C is purely a primary element, i.e., it forms from a mixture of H and He in the He-burning zones of stars of all masses. The main isotope of nitrogen, ^14N, has a much more complex origin, being partly of primary and partly of secondary origin. In the latter case, its production needs the presence of ^12C and ^16O seeds already present at the star’s birth (). The synergy between molecular observations and Galactic chemical evolution (GCE) models has imposed new important constraints onto the chemical evolution of the Milky Way. <cit.> performed the first study of the ^14N/^15N ratio of HCN and HNC as a function of the galactocentric distance, R_ GC, towards a large sample of ∼100 star-forming clumps observed with the IRAM 30m radiotelescope. They found that the N-isotopic ratio increases from 2 up to 11 kpc, confirming what suggested by previous works (e.g. ), and it decreases in the outer Galaxy (), with a new local ISM ^14N/^15N value of 375±75. This observational trend has been compared with GCE models (), and the production rate of elements and isotopes due to different kinds of stars across the Galaxy has been constrained. In the outer Galaxy, accurate ^14N stellar production rates in low-metallicity massive stars, which were previously unknown, have been determined. Moreover, the authors discovered that ^15N has an important secondary production from nova outbursts produced in stellar binary systems in the inner Galaxy (R_ GC <12 kpc). Regarding carbon, several observations towards star-forming regions in the inner Galaxy have shown that the ^12C/^13C ratio increases with R_ GC (, and references within). As for nitrogen, a Galactic gradient in the carbon isotopic ratio may be due to stellar nucleosynthesis processes (). However, single-dish studies have limitations, including the large beam sizes, which sample less well-defined areas. Furthermore, only a handful of studies have extensive source number statistics (N≥50, e.g. ), and, in general, with relatively large error bars. Recent work by Law et al. (2024, under consortium review) attempted to determine the carbon isotopic ratio as a function of R_ GC using molecular line observations of high-mass star-forming regions from the ALMA Evolutionary study of High Mass Protocluster Formation in the Galaxy (ALMAGAL) survey (ID: 2019.1.00195.L; PIs: Molinari, Schilke, Battersby, Ho). In contrast to many previous single-dish studies, the study reveals a lack of a radial carbon isotopic ratio gradient, which can be interpreted as evidence for local fractionation processes (e.g. ). However, the study also discusses the possibility that optical depth plays a crucial role in biasing systematically the carbon isotope ratio, making the interpretation difficult. 
The main results establish that the isotopic ratio measurement requires an accurate multi-line analysis to correct for optical depth and to properly determine the excitation temperature. In practice, joint single-dish and interferometric observations would provide a way out. Other possible ways out include studies of the isotopic ratio with heavier species, which are optically thinner, and doubly substituted isotopologues (e.g. ), which require longer integration times, making survey-like observations difficult. §.§ Local nitrogen fractionation As mentioned before, molecular isotopic ratios are also governed by local fractionation processes. Low-temperature isotopic-exchange reactions are able to explain the D/H ratios (e.g. ), but they do not reproduce the observed ^14N/^15N ratios (e.g. ). Recent works have shown that other chemical processes could play an important role: (i) isotope selective photodissociation of N_2 (e.g. ), and (ii) different rates for the dissociative recombination of N_2H^+ (e.g. ). The effects of the first process have been investigated by <cit.>, who studied for the first time N-fractionation of N_2H^+ at high spatial resolution (∼0.03 pc) towards the massive star-forming protocluster IRAS 05358+3543. They found a lower ^14N/^15N ratio of N_2H^+ in the inner denser cores (∼0.03 pc) of the cluster (100–200) with respect to the more diffuse gas (∼0.5 pc) in which the cores are embedded (>250), in agreement with the prediction of the chemical model of <cit.>. In the more diffuse part of a molecular clump, where the external UV radiation field is not fully shielded, the ^14N^15N molecule could be photodissociated where the N_2 is not (thanks to ^14N_2 being more abundant and thus easier to self-shield), causing an increase of the N_2/^14N^15N ratio. Since N_2H^+ is a daughter molecule of N_2, the ^14N/^15N ratio of N_2H^+ follows the N_2 ratio and increases. In the low-mass regime, there is also growing evidence of the importance of selective photodissociation on the N isotopic ratio. Both <cit.> and <cit.> reported a spatial gradient of the ^14N/^15N ratio in HCN and NH_3, respectively, towards the prestellar core L1544 (∼0.02 pc of resolution). In particular, the ratio in the two molecules presents similar values and decreases towards the outskirts of the core, which is more exposed to the interstellar UV field. At the same time, at the source's center the ^14NH_3/^15NH_3 is a factor of two lower than that of N_2H^+, which reaches ^14N/^15N∼1000. <cit.> showed that N_2H^+ presents anti-fractionated behaviour towards cold and dense pre-stellar sources, whilst its isotopic ratio is consistent with the elemental value of the local ISM towards protostellar cores, where the feedback heats the gas and causes the thermal desorption of CO. These pieces of evidence suggest that, beyond selective photodissociation, isotope-dependency of N_2H^+ dissociative recombination (DR) might also play a role. This would alter the N_2H^+/^15N_2H^+ ratio in the cold and dense environments where DR is the dominant pathway of destruction for this species. No dedicated laboratory study is available for the ISM physical conditions, but at high temperatures (300K) these reaction rates are known to be isotope-dependent <cit.>. § THE IMPORTANCE OF COSMIC RAYS The interpretation of observed molecular abundances necessarily involves the use of astrochemical models. The latter are based on massive chemical networks (e.g. 
KIDA, UMIST) that consider the different reaction mechanisms between neutrals, charged particles, dust grains as well as photo- and cosmic-ray processes. As soon as the H_2 column density is larger than about 10^21  cm^-2, the UV photons of the interstellar radiation field are completely absorbed and cosmic rays drive the processes of ionisation, excitation, dissociation and heating <cit.>. Cosmic rays with energy below 1 GeV can ionise both atomic and molecular hydrogen, to finally form the trihydrogen cation H_3^+. This initiates a series of reactions that lead to the formation of more and more complex species up to prebiotic molecules. Several observational techniques at radio and infrared frequencies <cit.> provide an estimate of the spectrum of low-energy cosmic rays in interstellar clouds by determining the cosmic-ray ionisation rate, ζ, which is the main parameter of astrochemical codes. Observations indicate that the assumption of a constant ζ, derived from the foundational research of L. Spitzer in the 1950s, is not accurate. Indeed, ζ exhibits a significant variation, spanning at least 4-5 orders of magnitude, ranging from 10^-15 s^-1 in diffuse clouds <cit.> to 10^-19 s^-1 in circumstellar-disc midplanes <cit.>. Resolved maps of ζ are now also available <cit.>, showing potential gradients of the ionisation fraction through star-forming regions. In addition, higher rates have been estimated in protostellar sources (e.g. ) and at the surface of protostellar shocks <cit.>, up to a few 10^-14 s^-1, which cannot be attributed to the Galactic cosmic ray flux. All this poses a crucial warning for modellers of astrochemical codes and non-ideal magnetohydrodynamical simulations. L.C. and V.M.R. acknowledge funding from grants No. PID2019-105552RB-C41 and PID2022-136814NB-I00 by the Spanish Ministry of Science, Innovation and Universities/State Agency of Research MICIU/AEI/10.13039/501100011033 and by ERDF, UE. V.M.R. also acknowledges support from the grant number RYC2020-029387-I funded by MICIU/AEI/10.13039/501100011033 and by "ESF, Investing in your future", and from the Consejo Superior de Investigaciones Científicas (CSIC) and the Centro de Astrobiología (CAB) through the project 20225AT015 (Proyectos intramurales especiales del CSIC). aa
http://arxiv.org/abs/2409.03637v1
20240905155159
A splitting method for numerical relativistic magnetohydrodynamics
[ "Serguei Komissarov", "David Phillips" ]
astro-ph.HE
[ "astro-ph.HE" ]
http://arxiv.org/abs/2409.03756v1
20240905175952
Spectra of adjacency and Laplacian matrices of Erdős-Rényi hypergraphs
[ "Soumendu Sundar Mukherjee", "Dipranjan Pal", "Himasish Talukdar" ]
math.PR
[ "math.PR", "math.CO" ]
§ ABSTRACT We study adjacency and Laplacian matrices of Erdős-Rényi r-uniform hypergraphs on n vertices with hyperedge inclusion probability p, in the setting where r can vary with n such that r / n → c ∈ [0, 1). Adjacency matrices of hypergraphs are contractions of adjacency tensors and their entries exhibit long range correlations. We show that under the Erdős-Rényi model, the expected empirical spectral distribution of an appropriately normalised hypergraph adjacency matrix converges weakly to the semi-circle law with variance (1 - c)^2 as long as d_/r^7→∞, where d_ = n-1r-1 p. In contrast with the Erdős-Rényi random graph (r = 2), two eigenvalues stick out of the bulk of the spectrum. When r is fixed and d_≫ n^r - 2log^4 n, we uncover an interesting Baik-Ben Arous-Péché (BBP) phase transition at the value r = 3. For r ∈{2, 3}, an appropriately scaled largest (resp. smallest) eigenvalue converges in probability to 2 (resp. -2), the right (resp. left) end point of the support of the standard semi-circle law, and when r ≥ 4, it converges to √(r - 2) + 1/√(r - 2) (resp. -√(r - 2) - 1/√(r - 2)). Further, in a Gaussian version of the model we show that an appropriately scaled largest (resp. smallest) eigenvalue converges in distribution to c/2ζ + [c^2/4ζ^2 + c(1 - c)]^1/2 (resp. c/2ζ - [c^2/4ζ^2 + c(1 - c)]^1/2), where ζ is a standard Gaussian. We also establish analogous results for the bulk and edge eigenvalues of the associated Laplacian matrices. [NO \title GIVEN] [NO \author GIVEN] September 9, 2024 ====================== § INTRODUCTION §.§ Hypergraphs and associated matrices and tensors A hypergraph = (V, E) consists of a vertex set V and a set E ⊆ 2^V of hyperedges. If |e| = r for all e ∈ E, then is called r-uniform. If hyperedges of different sizes exist, then it is called non-uniform. Usually V is taken to be the set [n] := {1, …, n }. Hypergraphs are generalisations of graphs (indeed, a simple undirected graph is just a 2-uniform hypergraph) and are very useful for modelling higher-order interactions in various types of complex networks. In particular, hypergraphs have been used for community detection in networks <cit.>, in biology <cit.>, for modelling chemical reactions <cit.>, in modelling citation networks <cit.>, in recommendation systems <cit.> and for processing image data <cit.>, among other areas. Adjacency matrices of graphs naturally generalise to adjacency tensors for hypergraphs. Let be an r-uniform hypergraph, where r ≥ 3 is an integer. The adjacency tensor 𝒜∈{0,1}^n^r associated with is defined as _i_1, i_2, ⋯, i_r = 1 if {i_1, i_2, …, i_r}∈ E, 0 otherwise, and _i_1, i_2, ⋯, i_r = _σ(i_1), σ(i_2), ⋯, σ(i_r) for any σ∈_n, where _n is the set of all permutations of [n]. This is a tensor of order r and dimension n. In this article, we consider an adjacency matrix A associated to , which is a contraction of the adjacency tensor and is defined as follows: A_ij = 1/(r - 2)!∑_i_3, i_4, ⋯, i_r_i, j, i_3, i_4 ⋯, i_r. Observe that A_ij = ∑_e ∈ E(i, j ∈ e) if i j, 0 otherwise. We note that different aspects of the matrix A have been studied in the literature <cit.>. Let M = nr and {e_1, …, e_M} be an enumeration of the hyperedges of the complete r-uniform hypergraph on n vertices. Consider the following matrices Q_ℓ := _ℓ_ℓ^⊤ - (_ℓ), 1 ≤ℓ≤ M, where (_ℓ)_i = 1 if i ∈ e_ℓ, 0 otherwise, and for a vector , () denotes the diagonal matrix whose diagonal entries are given by . 
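As a small illustrative sketch (ours, not from the paper; the toy sizes, the hyperedge list and the helper names below are our own choices), the contracted adjacency matrix A can be assembled directly from a list of hyperedges and checked against the entrywise formula above:

import numpy as np
from itertools import combinations

# Toy r-uniform hypergraph: n = 7 vertices, r = 3, a hand-picked hyperedge set.
n, r = 7, 3
E = [(0, 1, 2), (0, 1, 3), (2, 4, 5), (1, 5, 6)]

# Build A via A_ij = number of hyperedges containing both i and j (i != j).
A = np.zeros((n, n), dtype=int)
for e in E:
    for i, j in combinations(e, 2):   # every unordered pair inside the hyperedge
        A[i, j] += 1
        A[j, i] += 1                  # A is symmetric with zero diagonal

# The matrix Q_e = 1_e 1_e^T - diag(1_e) from the definition above.
def Q(e):
    ind = np.zeros(n)
    ind[list(e)] = 1.0
    return np.outer(ind, ind) - np.diag(ind)

# Sanity checks: the entrywise count formula, and A as the sum of the Q matrices
# over the hyperedges that are present (this previews the representation used just below).
for i in range(n):
    for j in range(n):
        assert A[i, j] == (0 if i == j else sum(1 for e in E if i in e and j in e))
assert np.array_equal(A, sum(Q(e) for e in E))
print(A)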
Note then that A admits the following representation: A = ∑_ℓ = 1^M h_ℓ Q_ℓ, where h_ℓ= (e_ℓ∈ E) indicates if hyperedge e_ℓ is present in the hypergraph . We also consider the associated Laplacian matrices. Recall that for r = 2, i.e. when A is the adjacency matrix of a simple undirected graph, the associated (combinatorial) graph Laplacian is defined as L_A = (A) - A. Where is an n × 1 vector with all entries 1. More generally, for any matrix X, define the associated Laplacian matrix as L_X := (X ) - X. We also consider the variant L̃_X := 1/r - 1(X ) - X. Note that for r = 2, L_X = L̃_X, as such both of these matrices generalise the graph Laplacian. §.§ Preliminaries on random matrices Let A_n be an n × n Hermitian matrix with ordered eigenvalues λ_1 ≥⋯≥λ_n. The probability measure μ_A_n := 1/n∑_i = 1^n δ_λ_i is called the Empirical Spectral Distribution (ESD) of A_n. If entries of A_n are random variables defined on a common probability space (Ω, , ) then μ_A_n is a random probability measure. In that case, there is another probability measure associated to the eigenvalues, namely the Expected Empirical Spectral Distribution (EESD) of A_n, which is defined via its action on bounded measurable test functions f as follows: ∫ f d_A_n = ∫ f dμ_A_n, where denotes expectation with respect to . In random matrix theory, one is typically interested in an ensemble (A_n)_n ≥ 1 of such matrices of growing dimension n. If the weak limit, say μ_∞, of the sequence (_A_n)_n ≥ 1, exists, then it is referred to as the Limiting Spectral Distribution (LSD). Often one is able to show that the random measure μ_A_n also converges weakly (in probability or in almost sure sense) to μ_∞. For a comprehensive introductory account of the theory of random matrices, we refer the reader to <cit.>. The preeminent model of random matrices is perhaps the Wigner matrix. For us a Wigner matrix W_n will be a Hermitian random matrix whose upper triangular entries W_n, i, j are i.i.d. zero mean unit variance random variables and the diagonal entries W_n, i, i are i.i.d. zero mean random variables with finite variance. Moreover, the diagonal and the off-diagonal entries are mutually independent. If the entries are jointly Gaussian with the diagonal entries having variance 2, then resulting ensemble of matrices is called the Gaussian Orthogonal Ensemble (GOE). In this article, we will denote the centered Gaussian distribution with variance σ^2 by ν_, σ^2. We also define the semi-circle distribution with variance σ^2, henceforth denoted by ν_, σ^2, as the probability distribution on with density f(x) := 1/2πσ^2√(4 σ^2 - x^2) if |x| ≤ 2 σ, 0 otherwise. E. Wigner proved in his famous paper <cit.> that the EESD of n^-1/2 W_n converges weakly to the standard semi-circle distribution ν_, 1. It is well known that 2k-th moment of the standard semi-circle distribution is the k-th Catalan number, defined as C_k := 1/k + 12kk. Catalan numbers have many interesting combinatorial interpretations. Most notably, C_k counts the number of Dyck paths of length 2k, i.e. the number of non-negative Bernoulli walks of length 2k that both start and end at the origin. §.§ Note on asymptotic notation Before proceeding further, we define here various asymptotic notations used throughout the paper. 
For functions f, g : →, we write (i) f(n) = O(g(n)), if there exist positive constants n_0 and C such that |f(n)| ≤ C |g(n)| for all n ≥ n_0; (ii) f(n) = o(g(n)) or f(n) ≪ g(n) if lim_n →∞f(n)/g(n) = 0 (we also write f(n) ≫ g(n) if g(n) ≪ f(n)); (iii) f(n) = Θ(g(n)) or f(n) ≍ g(n) if f(n) = O(g(n)) and g(n) = O(f(n)); (iv) f(n) ∼ g(n) if lim_n →∞f(n)/g(n) = 1; (v) f(n) ≫ g(n) if lim_n →∞g(n)/f(n) = 0. For a sequence of random variables {X_n}_n ≥ 1, we write X_n = O_P(1) if for any ϵ > 0, there exists M > 0 such that sup_n (|X_n| > M) ≤ϵ. For two sequence of random variables {X_n}_n ≥ 1 and {Y_n}_n ≥ 1 we write X_n = O(Y_n) to mean X_n = Z_n Y_n with Z_n = O_P(1). §.§ Our random matrix model In this article, our main objective is to study the ESDs of adjacency matrices of Erdős-Rényi random r-uniform hypergraphs, where each hyperedge is included independently with probability p, i.e. h_ℓi.i.d.∼(p). Towards that end, consider the centered (and normalised) matrix à = A - [A]/√(p(1 - p))= ∑_ℓ=1^M(h_ℓ - [h_ℓ])/√((h_ℓ)) Q_ℓ = ∑_ℓ = 1^M Y_ℓ Q_ℓ. Note that Y_ℓ's are i.i.d. zero mean unit variance random variables. Thus Ã_ij = ∑_ℓ = 1^M Y_ℓ·(i, j ∈ e_ℓ) if i j, 0 otherwise. Note that ( Ã_ij ) = ( ∑_ℓ = 1^M Y_ℓ·(i, j ∈ e_ℓ)) = ∑_ℓ = 1^M(Y_ℓ) (i, j ∈ e_ℓ) = n-2r-2. Therefore, we shall consider the following normalised matrix (viewed as a matrix-valued function of the vector = (Y_ℓ)_1≤ℓ≤ M) H_n ≡ H_n() := Ã/√(N) = 1/√(N)∑_ℓ = 1^M Y_ℓ Q_ℓ, where N = n - 2r - 2, so that the all off-diagonal entries of H_n have zero mean and unit variance. This normalisation ensures that the variance of the EESD of H_n is of order n^2 (comparable to that of a usual Wigner matrix). Let = (Y_ℓ)_1 ≤ℓ≤ M be independent zero mean unit variance random variables. The matrix H_n = H_n() defined in (<ref>) will be referred to as a Generalised Hypergraph Adjacency Matrix (GHAM). A principal feature of this ensemble is that the entries show long-range correlation. Indeed, (Ã_ij, Ã_i'j') = ∑_ℓ = 1^M∑_ℓ' = 1^M(Y_ℓ, Y_ℓ') (i, j ∈ e_ℓ) (i', j' ∈ e_ℓ') = ∑_ℓ = 1^M(Y_ℓ) (i, j ∈ e_ℓ) (i', j' ∈ e_ℓ) = n-4r-4 if |{i, j}∩{i', j' }| = 0, n-3r-3 if |{i, j}∩{i', j' }| = 1. Therefore (H_n, ij, H_n, i^'j^') = n-4r-4/n-2r-2 = (r-2)(r-3)/(n-2)(n-3) =: ρ_n if |{i, j}∩{i^', j^'}| = 0, n-3r-3/n-2r-2 = r-2/n-2 =: γ_n if |{i, j}∩{i^', j^'}| = 1. In this article, we are interested in the spectrum of n^-1/2 H_n in the regime where r grows with n in such a way that r/n→ c ∈ [0, 1). Our main result regarding the bulk spectrum of n^-1/2 H_n is the following. Suppose the entries (Y_ℓ)_1 ≤ℓ≤ M satisfy the Pastur-type condition given in Assumption <ref>. Then the EESD of n^-1/2 H_n converges weakly to ν_, (1 - c)^2. For two probability measures μ an ν, let μ⊞ν denote their free additive convolution (defined in Section <ref>).   Suppose the entries (Y_ℓ)_1 ≤ℓ≤ M satisfy the Pastur-type condition given in Assumption <ref>. (i) Suppose that r is fixed. Then the EESD of n^-1/2L_H_n converges weakly to ν_, r - 1⊞ν_, 1 and the EESD of n^-1/2L̃_H_n converges weakly to ν_,1/r - 1⊞ν_, 1. (ii) Suppose r →∞ such that r/n → c ∈ [0, 1). Then the EESD of (nr)^-1/2 L_H_n converges weakly to ν_, 1⊞ν_, c and the EESD of n^-1/2L̃_H_n converges weakly to ν_, (1 - c)^2. In fact, for L_H_n we only need the weaker Assumption <ref> on the entries. Theorems <ref> and <ref> follow from a universality result (see Theorem <ref>) and an analysis of the Gaussian case, where (Y_ℓ)_1 ≤ℓ≤ M are i.i.d. standard Gaussians (see Theorems <ref> and <ref>). 
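For readers who wish to see the bulk result numerically, the following is a short simulation sketch of ours (not the authors' code). Since the Gaussian GHAM is a centered jointly Gaussian family, its law is determined by the covariance structure displayed above (variance 1, covariance γ_n for entries sharing one index, ρ_n for disjoint index pairs), so it can be sampled by combining one common Gaussian, one Gaussian per vertex and an independent Wigner component, with weights chosen to match these covariances. The function name gaussian_gham and the particular sizes are our own choices; the sampler is reused later for the Laplacian matrices.

import numpy as np
import matplotlib.pyplot as plt

def gaussian_gham(n, r, rng):
    """Sample a Gaussian GHAM by matching the covariance structure (1, gamma_n, rho_n)."""
    rho = (r - 2) * (r - 3) / ((n - 2) * (n - 3))   # disjoint index pairs
    gam = (r - 2) / (n - 2)                         # one shared index
    a = np.sqrt(rho)                                # common component
    b = np.sqrt(gam - rho)                          # per-vertex component
    d = np.sqrt(1.0 - 2.0 * gam + rho)              # leftover i.i.d. Wigner part
    xi = rng.standard_normal()
    eta = rng.standard_normal(n)
    W = rng.standard_normal((n, n))
    W = (W + W.T) / np.sqrt(2.0)                    # symmetric, off-diagonal variance 1
    H = a * xi + b * (eta[:, None] + eta[None, :]) + d * W
    np.fill_diagonal(H, 0.0)                        # the GHAM has zero diagonal
    return H

n, r = 1200, 360                                    # so that c = r/n = 0.3
c = r / n
rng = np.random.default_rng(0)
eig = np.linalg.eigvalsh(gaussian_gham(n, r, rng) / np.sqrt(n))
bulk = eig[np.abs(eig) < 3]                         # discard the eigenvalues sticking out of the bulk

x = np.linspace(-2.2 * (1 - c), 2.2 * (1 - c), 400)
dens = np.sqrt(np.maximum(4 * (1 - c) ** 2 - x ** 2, 0.0)) / (2 * np.pi * (1 - c) ** 2)
plt.hist(bulk, bins=80, density=True, alpha=0.5, label="bulk ESD of n^{-1/2} H_n")
plt.plot(x, dens, label="semicircle density, variance (1-c)^2")
plt.legend()
plt.show()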
In the regime r/n → 0, we also prove effective concentration inequalities for the ESD (see Corollary <ref>) assuming further conditions on the entries Y_ℓ, which enable us to show in-probability weak convergence of the ESDs (see Corollary <ref>). Further, if r √(log n) / n → 0, we have almost sure weak convergence of the ESDs. The question of almost sure weak convergence of the ESDs in the full regime r / n → c ∈ [0, 1) will be considered in a future work. In Figure <ref>, we show the ESDs of 300 × 300 GHAMs with Gaussian entries for various choices of r. Let us now briefly comment on the implications of the above result in the context of hypergraphs. Let the hyperedge inclusion probability p be potentially dependent on both n and r. Let us also assume for simplicity that p is bounded away from 1. In analogy with random graphs, the quantity d_ = n - 1r - 1 p may be thought of as the “average degree” of a vertex. In fact, it is the expected number of hyperedges containing the said vertex. For r = 2, i.e. random graphs, it is well known that the limit of the EESD is the semi-circle law as long as d_ = (n - 1) p →∞, see <cit.> for more details. We show in Example <ref> that our assumptions on the entries are satisfied as long as d_ / r^7 →∞ (in fact, for the Laplacian L_H_n we only need that d_/r^4 →∞). In the setting of fixed r, we thus have a semi-circle limit as long as d_→∞. However the the factor r^7 is likely sub-optimal and an artefact of the Lindeberg exchange argument <cit.> that we employ. Consider also the setting where r / n → c ∈ (0, 1). Let I(x) = - x log x - (1 - x) log (1 - x) denote the entropy of a (p) random variable. Then note that d_/r^7∼exp(I(c) n)/n^15/2 c^7 √(c(1 - c)) p. Thus for obtaining the limit ν_, (1 - c)^2, we need p ≫ n^15/2exp(- I(c) n). It would be interesting to understand what happens at complementary regimes of sparsity. We leave this for future work. Unlike the random graph setting (r = 2), two eigenvalues stick out of the bulk of the spectrum. We can derive limits of appropriately scaled largest and smallest eigenvalues for the Gaussian GHAM. The following is an abridged version of Theorem <ref>. Let H_n be a Gaussian GHAM and suppose that ζ is a standard Gaussian. In the regime r/n → c ∈ (0,1), we have λ_1(H_n)/nc/2ζ + √(c^2/4ζ^2 +c(1-c)) and λ_n(H_n)/nc/2ζ - √(c^2/4ζ^2 + c(1-c)). When r →∞ and r/n → 0, we need a rescaling: λ_1(H_n)/√(nr) 1 and λ_n(H_n)/√(nr) -1. Finally, when r is fixed, we have λ_1(H_n)/√(n) √(r-2) + 1/√(r-2) if r ≥ 4, 2 if r ≤ 3, λ_n(H_n)/√(n) -√(r-2) - 1/√(r-2) if r ≥ 4, -2 if r ≤ 3. Noteworthy here is the phase transition at r = 3. The underlying reason for this is the so-called Baik-Ben Arous-Péché (BBP) transition for deformed Wigner matrices. This phase transition result remains valid for the original hypergraph adjacency matrix as well due to the strong universality results proved in <cit.>, provided that p ≫log^4 n/n which in terms of d_ reads d_≫ n^r - 2log^4 n (see the discussion in Section <ref>). The behaviour of the extreme eigenvalues of the two Laplacian matrices is given in Theorem <ref>. §.§ Related works on hypergraph and tensor spectra Spectra of hypergraphs had drawn the attention of researchers at least twenty years ago. Indeed, <cit.> studied the spectrum of (d,r)-regular hypergraphs and identified the LSD, which generalizes the work of <cit.> for random regular graphs. <cit.> studied the spectra of the so-called loose Laplacians of an Erdős-Rényi hypergraph. 
<cit.> studied the adjacency tensor of (deterministic) uniform hypergraphs. <cit.> studied various aspects of the adjacency tensor for random k-uniform hypergraphs. Friedman <cit.> was the first to define the “second eigenvalue" of a multilinear form, analogous to the bilinear forms associated with the adjacency matrix of a graph, for both deterministic and random d-regular r-uniform hypergraphs, and provided explicit expressions for it. In a follow-up work, <cit.> introduced the notion of regular infinite hypertrees and established that the “first eigenvalue” of an infinite hypertree agrees with the “second eigenvalue" of a random regular hypergrpah of the same degree within a logarithmic factor. On the other hand, <cit.> attempted to define an analogue of the Laplacian for hypergraphs, but encountered various obstacles that made the generalisation difficult. It was Chung <cit.>, who paved the way by defining the Laplacian for hypergraphs in full generality using homological considerations. <cit.> further enriched the study by introducing an adjacency graph of hypergraphs, examining the spectra of the Laplacian for d-regular hypergraphs, and improving the lower bound for the “spectral value” of the Laplacian defined in <cit.>. <cit.> proposed another definition of a Laplacian tensor associated with even and uniform hypergraphs and analysed its connection with edge and vertex connectivity. <cit.> examined the eigenvalues of adjacency tensor also known as 'hypermatrix' of uniform hypergraphs and established several natural analogues of basic results in spectral graph theory. <cit.> introduced another definition for the Laplacian tensor of an even uniform hypergraph, proved a variational formula for its second smallest Z-eigenvalue, and used it to provide lower bounds for the bipartition width of the hypergraph. <cit.> proposed a definition for the signless Laplacian tensor of an even uniform hypergraph, studied its largest and smallest H-eigenvalues and Z-eigenvalues, as well as its applications in the edge cut and the edge connectivity of the hypergraph. In <cit.> they also investigated the largest and the smallest Z-eigenvalues of the adjacency tensor of a uniform hypergraph. Pearson and Zhang <cit.> studied the H-eigenvalues and the Z-eigenvalues of the adjacency tensor of a uniform hypergraph. <cit.> studied H^+-eigenvalues of normalized Laplacian tensor of uniform hypergraphs and showed that the second smallest H^+-eigenvalue is positive if and only if the hypergraph is connected. Until now, the literature has primarily focused on uniform hypergraphs. <cit.> was the first to introduce adjacency hypermatrix, Laplacian hypermatrix and normalized Laplacian hypermatrix for general hypergraphs, and extensively studied various spectral properties of these tensors. In addition to adjacency and Laplacian tensors, researchers have also suggested different types of adjacency and Laplacian matrices associated with hypergraphs. In a series of papers <cit.>, Rodríguez introduced an n × n Laplacian matrix similar to ours and studied how the spectrum of this Laplacian relates to various structural properties of hypergraphs such as the diameter, the mean distance, excess, etc., as well as its connection to the hypergraph partition problem. 
<cit.> also studied adjacency and Laplacian matrices, which are the same as the matrices we consider up to minor scaling factors and showed how the various structural notions of connectivity, vertex chromatic number and diameter are related to the eigenvalues of these matrices. Building on the work of <cit.>, <cit.> examined the spectral properties of the Laplacian matrix in greater detail. Specifically, they obtained bounds on the spectral radius of uniform hypergraphs in terms of some invariants of hypergraphs such as the maximum degree, the minimum degree, etc. A more general treatment is provided in <cit.>, where the authors generalized the concepts of adjacency and Laplacian matrices for hypergraphs by introducing general adjacency operators and general Laplacian operators. The study of Laplacian matrix for Wigner matrices was initiated in <cit.>. Later, <cit.> considered the Laplacian matrix for sparse Erdős-Réyni random graphs. The normalized Laplacian in the non-sparse setting was studied in <cit.>. For Wigner matrices with a variance profile, the spectrum of the Laplacian matrix was analysed in <cit.>. More generally, in the random tensor front, <cit.> considered a tensor version of the Gaussian Orthogonal Ensemble (GOE). He showed that the expectation of an appropriate generalisation of the resolvent can be described by a generalised Wigner law, whose even moments are the Fuss-Catalan numbers. There has been a flurry of activity surrounding random tensors in the past few years <cit.>. Among these, the most relevant to our setting are <cit.>. The papers <cit.> considered contractions of random tensors to form random matrices. <cit.> considered the above-mentioned tensor version of the GOE and showed that for any contraction along a direction ∈^n - 1, one gets a semi-circle law as the LSD. This result also follows from the work of <cit.> who studied more general contractions. <cit.> obtained the same result for more general entries. They also established joint convergence of a family of contracted matrices in the sense of free probability. The only difference between our adjacency tensor and a Wigner random tensor is that for us _i_1, …, i_r = 0 if |{i_1, …, i_r}| < r. Modulo this difference, it is clear that one obtains our hypergraph adjacency matrix A by contracting the adjacency tensor along the direction / √(n). Using the Hoffmann-Wielandt inequality it is not difficult to show that if r is fixed, then zeroing out the elements _i_1, …, i_r of a Wigner random tensor for any (i_1, …, i_r) such that |{i_1, …, i_r}| < r does not affect the bulk spectra of its contractions. Therefore for fixed r, our Theorem <ref> follows from the Theorem 1.5 of <cit.> (or Theorem 2 of <cit.> or Theorem 4 of <cit.> in the tensor GOE case). We emphasize that unlike our setting where the order r of the tensor grows with the dimension n, the above-mentioned existing works on random tensors or contractions thereof consider the setting of fixed r. To the best of our knowledge only two lines of work consider settings where r, the common size of the hyperedges, grow with n, namely in quantum spin glasses <cit.> and in the Sachdev-Ye-Kitaev (SYK) model for black holes <cit.>. Both of these models consider matrices representable as random linear combinations of deterministic matrices, with the i.i.d. random weights being indexed by the hyperedges of a complete r-uniform hypergraph (exactly like the representation (<ref>) of a GHAM). 
In both of these works, a phase transition phenomenon is observed at the phase boundary r ≍√(n): When r ≪√(n), the LSD is Gaussian; when r ≫√(n), the LSD is the standard semi-circle law; and when r/√(n)→ c > 0, a different LSD emerges. §.§ Related works on Hermitian matrices with dependent entries We will refer to Hermitian random matrices with dependent entries as dependent/correlated Wigner matrices. Various different types of correlated Wigner matrices has been studied in the literature in great detail. Here we will recall some of these existing works and compare them to our results. Chapter 17 of <cit.> considered a class of random n × n real symmetric matrices of the form H = H^(0) + n^-1/2 W, where H^(0) is a given real symmetric matrix and W = {W_jk}_j,k = -m^m, n = 2m + 1, is a real symmetric random matrix, the n × n central block of the double-infinite matrix W_∞ = {W_jk}_j,k = -∞^∞ whose entries are Gaussian random variables with W_jk = 0, W_j_1 k_1 W_j_2 k_2 = C_j_1k_1 ; j_2 k_2. The authors assume that C := sup_j_1,k_1 ∈∑_j_2, k_2 ∈|C_j_1k_1 ; j_2 k_2| < ∞. Approximate versions of conditions like these are clearly not satisfied by our model where the sum of the pairwise absolute correlations between a given entry and all the other entries grows like r^2. <cit.> and <cit.> considered correlated Gaussian Wigner models where the entries come from stationary Gaussian fields. However, this specific structural assumption makes their models incompatible with ours. (Further, they also require summability of the pairwise correlations.) Another dependent model was considered by <cit.> where the entries (X_ij)_1 ≤ i ≤ j ≤ n form a dependent random field. One of their main assumptions however is that [X_ij |_ij ] = 0, where _ij is the σ-algebra generated by the entries other than X_ij. This implies that [X_ij X_i'j'] = [ [X_ij X_i'j' | _ij] ] = [X_i'j' [X_ij|_ij]] = 0, i.e. the matrix entries are uncorrelared. This rules out a direct application of their method to our set-up. Notably, their proof technique for showing universality, which uses an interpolation between their random matrix model and a Wigner matrix, can be modified to work in a situation where [X_ij|_ij] 0 (we note here that the same result can also be proved by the Lindeberg exchange argument of <cit.>). However, the resulting universality result turns out to be too weak to be applicable to our setting (in our notation, it requires r to go to 0). To the best of our knowledge, bulk universality results for the most general correlated model is obtained in <cit.> with appropriate decay conditions on the joint cumulants of the entries. For a Gaussian model, they require the operator norm of the covariance matrix of the entries to be O(N^ϵ) for any ϵ > 0. In our setting the operator norm is of order r^2 and hence their results do not apply to the setting of polynomially growing r. It is also worth mentioning the remarkably general work of <cit.>, who consider matrices of the form X = Z_0 + ∑_i= 1^n Z_i, where Z_0 is a deterministic matrix and Z_i's are d× d independent self-adjoint random matrices. Under appropriate conditions on these matrices the authors of <cit.> prove strong universality results. Owing to the representation (<ref>), our adjacency matrix clearly falls under this rather general model. Unfortunately, the universality results proved in <cit.> do not work beyond the regime r ≪ n^1/4. 
This is still powerful enough to give us universality of the edge eigenvalues for fixed r and hence of the BBP-type phase transition at r = 3 described earlier. Below we quickly work out this universality result, assuming that the Y_ℓ's are uniformly bounded. Let d_H(A, B) denote the Hausdorff distance between two subsets A, B ⊂. For a symmetric matrix A, let (A) denote its spectrum. Theorem 2.6 of <cit.> shows that if the matrices Z_i are uniformly bounded, then (d_H((X), (G)) > C ε(t)) ≤ d e^-t, where G is a Gaussian random matrix with the same expectation and covariance structure as X and ε(t) = σ_*(X) t^1/2 + R(X)^1/3σ(X)^2/3 t^2/3 + R(X) t, with v(X) := (X)_^1/2, σ(X) := [(X - X)^2]_^1/2, σ_*(X) = sup_v = w = 1[|⟨ v, (X - X) w⟩|]^1/2, R(X) := max_1 ≤ i ≤ nZ_i__∞. Let us now calculate these parameters for X = n^-1/2 H_n. First note that v(X) = (X)_^1/2 = O([maximum row sum of (X)]^1/2) = 1/√(n)· O([Θ(n) ·r/n + Θ(n^2) ·r^2/n^2]^1/2) = O(r/√(n)). Since (H_n^2)_ij = ∑_k i, j H_n, ik H_n, kj = (r - 2) and (H_n^2)_ii = ∑_k i H_n, ik^2 = (n - 1), we have H_n^2 = (n - 1) I + (r - 2) (J - I) = (n - r + 1) I + (r - 2) J, where J = ^⊤. It follows that σ(X):= [(X - X)^2]^1/2 = 1/√(n)· H_n^2 _^1/2 = √(n + r - 1 + n (r - 2))/√(n) = Θ(√(r)). We also have σ_*(X) ≤ v(X) = O(r/√(n)). Finally, assuming that the Y_ℓ's are uniformly bounded, by say K, we have R(X) := 1/√(n)·max_1 ≤ℓ≤ MY_ℓ Q_ℓ__∞≤K r/√(n). Thus ε(t) = O( r t^1/2/√(n) + (Kr/√(n))^1/3· r^1/3 t^2/3 + K r t/√(n)). Choosing t = 2 log n, yields that with probability at least 1 - 1/n, we have d_H((n^-1/2H_n()), (n^-1/2 H_n()) = O( r √(log n)/√(n) + K^1/3·(r^2 log^2 n/√(n))^1/3 + K r log n/√(n)). It is evident that the above upper bound is small if r ≪n^1/4/√(K)log n, assuming that K is constant, which is, for instance, the case for the hypergraph, where one can take K = max{√(p/1 - p), √(1 - p/p)}. Assuming that p < 1/2, the above condition for universality simplifies to r ≪(np)^1/4/log n. §.§ Organisation of the paper. The rest of the paper is organised as follows: In Section <ref>, we formally describe the main results of this paper. Section <ref> contains the proofs of these results. Finally, in Appendix <ref>, we collect some useful results from matrix analysis and concentration of measure which are used throughout the paper, and in Appendix <ref> we provide proofs of some technical results. § MAIN RESULTS In the first three subsections, we study the bulk of the spectrum. In the last subsection, we study the edge eigenvalues. §.§ Replacing general entries with Gaussians We first employ a Lindeberg swapping argument <cit.> to replace the independent zero mean unit variance random variables Y_ℓ in the definition of H_n by i.i.d. standard Gaussians. Recall that M = nr and N = n - 2r - 2. We will need a Pastur-type condition on the entries (Y_ℓ)_1≤ℓ≤ M. [A Pastur-type condition] Suppose (Y_ℓ)_1 ≤ℓ≤ M are independent zero mean unit variance random variables satisfying the following condition: For every ε > 0, r^4/n^2 N∑_ℓ = 1^M [Y^2_ℓ(|Y_ℓ| > ε K_n)] → 0 as n →∞, where K_n = √(n N)/r^4. For dealing with the Laplacian L_H_n we need a weaker assumption. [A (weaker) Pastur-type condition] Suppose (Y_ℓ)_1 ≤ℓ≤ M are independent zero mean unit variance random variables satisfying the following condition: For every ε > 0, r^3/2/n^2 N∑_ℓ = 1^M [Y^2_ℓ(|Y_ℓ| > ε K_n')] → 0 as n →∞, where K_n' = √(n N)/r^5/2. For r fixed, Assumptions  <ref> and <ref> are clearly equivalent. For r = 2, they become Pastur's condition <cit.>. 
In general, Assumption <ref> is stronger than Assumption <ref>. Indeed, K_n' > K_n and hence r^3/2/n^2 N∑_ℓ = 1^M [|Y_ℓ|^2 (|Y_ℓ| > ε K_n')] ≤r^3/2/n^2 N∑_ℓ = 1^M [|Y_ℓ|^2 (|Y_ℓ| > ε K_n)] ≤1/r^5/2r^4/n^2 N∑_ℓ = 1^M [|Y_ℓ|^2 (|Y_ℓ| > ε K_n)]. So if (Y_ℓ)_1 ≤ℓ≤ M satisfy Assumption <ref>, then they also trivially satisfy (<ref>). For i.i.d. random variables (Y_ℓ)_1 ≤ℓ≤ M with mean 0 and variance 1, Assumption <ref> becomes the following tail condition: [Y_1^2 (|Y_1| > ε K_n)] = o(r^-2) for any ε > 0. Our first result shows the universality of the EESD in the limit, assuming that the entries satisfy Assumption <ref>, i.e. the limiting EESD, if it exists, is the same as that of a GHAM with Gaussian entries. The proof of this result is via a comparison of Stieltjes transforms. Recall that the Stieltjes transform S_μ of a probability measure μ on is an analytic function defined for z ∈ℂ_+ := {z ∈ : (z) > 0} as follows: S_μ(z) := ∫dμ(x)/x-z. Suppose {μ_n}_n ≥ 1, μ are probability measures on ℝ with Stieltjes transforms {S_μ_n}_n ≥ 1 and S_μ, respectively. It is well known than S_μ_n→ S_μ pointwise on _+ if and only if μ_n →μ weakly (see, e.g., <cit.>). With this result in mind, the universality of the limiting EESD is given by our first theorem. Suppose r/n → c ∈ [0, 1). Suppose = (Y_ℓ)_1 ≤ℓ≤ M satisfies Assumption <ref>. Let = (Z_ℓ)_1 ≤ℓ≤ M be a vector of i.i.d. standard Gaussians. Then for any z ∈_+, lim_n → 0| S__B_n()(z) - S__B_n()(z)| = 0 where B_n() is any one of the matrices n^-1/2 H_n(), (nr)^-1/2L_H_n() or n^-1/2L̃_H_n(). In fact, for the result with B_n() = (nr)^-1/2 L_H_n(), we only need Assumption <ref> on the entries of . We now look at some situations where Assumption <ref> holds. Clearly, the tail condition in (<ref>) (and hence Assumption <ref>) is satisfied by i.i.d. bounded random variables (Y_ℓ)_1 ≤ℓ≤ M with zero mean and unit variance. This already covers the case of dense hypergraphs, where the hyperedge inclusion probability p is bounded away from zero. Suppose that (Y_ℓ)_1 ≤ℓ≤ M are i.i.d. zero mean unit variance random variables with |Y_1|^2 + δ≤ C for some δ, C > 0. Then Assumption <ref> is satisfied for any sequence r_n satisfying 2 ≤ r_n < n - ⌈ 3 + 2/δ⌉. As a consequence, i.i.d. standard Gaussians satisfy Assumption <ref> for 2 ≤ r_n ≤ n - 4. Indeed, by Hölder's inequality, we have for conjugate exponents p, q ≥ 1 satisfying 1/p + 1/q = 1, [Y_1^2 (|Y_1| > K_n)] ≤ ( |Y_1|^2p)^1/p (|Y_1| > K_n)^1/q ≤ ( |Y_1|^2p)^1/p (|Y_1|^2 + δ)^1/q K_n^-(2 + δ)/q (by Markov's inequality). For 2p = 2 + δ (which gives q = (2 + δ) / δ), this gives [Y_1^2 (|Y_1| > ε K_n)] ≤ |Y_1|^2 + δ/ε^δ K_n^δ = |Y_1|^2 + δ/ε^δ (nN)^δ/2 r^-4δ. The right hand side is o(r^-2) if r^2 + 4 δ/(nN)^δ = o(1). Now since r^2 + 4 δ/(nN)^δ = (r/n)^δ r^2 + 3 δ/N^δ≤r^2 + 3 δ/N^δ, we need nr≫ r^3 + 2/δ. We claim that this is always true. Let a = ⌈ 3 + 2 /δ⌉+ 1. Then for a ≤ r ≤n/2, nr≥na≥n^a/a^a≥ C_δ n^⌈ 3 + 2 / δ⌉ + 1≫ n^3 + 2 / δ≥ r^3 + 2/δ. On the other hand, for 2 ≤ r < a, (<ref>) is trivially true. For n/2 < r < n - ⌈ 3 + 2/δ⌉, by symmetry of the binomial coefficients, nr = nn - r≥na≫ n^3 + 2/δ, as before. Suppose that (Y_ℓ)_1 ≤ℓ≤ M are i.i.d. zero mean unit variance random variables and consider the regime where r ∼ c n, where c ∈ (0, 1), we have N = n - 2r - 2∼c^2 exp(I(c) n)/√(c(1 - c) n), where I(x) = -x log x - (1 - x) log (1 - x) is the entropy of a (x) random variable. 
In this regime, K_n = Θ(e^C n) for some constant C > 0 (depending on c) and the tail condition (<ref>) becomes rather mild: [Y_1^2 (|Y_1| > ε e^C n) ] = o(n^-2) for any ε > 0. For instance, a random variable whose density decays as fast as 1/|x|^3 (log |x|)^3 + δ as |x| →∞, for some δ > 0 will satisfy this condition. Such random variables need not possess a (2 + η)-th moment for any η > 0. [Dependence on sparsity] Consider the setting of hypergraphs, where (Y_ℓ)_1 ≤ℓ≤ M are i.i.d. with Y_1 = B - p/√(p(1 - p)), where B ∼(p). If p is not bounded away from 0 and 1, then |Y_1| is not uniformly bounded anymore. Instead, |Y_1|≤max{√(p/1-p),√(1-p/p)}≤1/√(min{p, 1-p}). The left hand side of (<ref>) vanishes for every ε > 0 as long as nN/r^8min{p,1-p}→∞. Assume now that p is bounded away from 1. For d_ = n - 1r - 1 p, the average degree of a vertex, we note that d_ = n - 1r - 1 p ≍n/r N p ≍ r^7 ·nN/r^8min{p, 1 - p}. The condition (<ref>) thus becomes equivalent to the condition d_ / r^7 →∞. For comparison, in the random graph case (i.e. r = 2), it is well known that the semi-circle law emerges as the LSD of the adjacency matrix as long as the average degree d_ = (n - 1) p →∞. Thus in the setting of fixed r, we do get a semi-circle limit as long as d_→∞. We note here that Assumption <ref> is implied by the condition d_/r^4→∞. Thus for the Laplacian matrix (nr)^-1/2 L_H_n, we only need this weaker condition on d_. §.§ LSD of the Gaussian GHAM In view of of the universality result of Theorem <ref>, it sufficient to consider the Gaussian GHAM. Let (, d) be a metric space. For a real-valued function f on , define its Lipschitz seminorm by f_ := sup_x ≠ y|f(x) - f(y)|/d(x,y). A function f is called l-Lipschitz if f_≤ l. Define the class of Bounded Lipschitz functions as _ := { f ∈^ : f_ + f_∞≤ 1 }. Then the bounded Lipschitz metric on the set () of probability measures on is defined as follows: d_(μ, ν) := sup_f ∈_{|∫ f dμ - ∫ f dν|}. It is well known that d_ metrises weak convergence of probability measures on ( ) (see, e.g., <cit.>). Below we have =. Let = (Z_ℓ)_1≤ℓ≤ M be a vector of i.i.d. Gaussians. Suppose r/n → c ∈ [0, 1). Then the EESD of n^-1/2 H_n converges weakly to ν_, (1 - c)^2. In fact, one has d_(_n^-1/2 H_n(), ν_, (1 - c)^2) = O(max{|r/n - c|, 1/√(n)}). Theorem <ref> is proved by representing the Gaussian GHAM via an ANOVA-type decomposition as a low rank perturbation of a scaled Gaussian Wigner matrix minus its diagonal. We will now consider the Laplacian matrix L̃_H_n. To describe the limit we need the concept of free additive convolution of measures. Let μ_1 and μ_2 be two probability measures on with Stieltjes transforms S_μ_1(z) and S_μ_2(z) respectively. Then there exists a probability measure μ whose Stieltjes transform is the unique solution of the system f(z) = S_μ_1( z - Δ_2(z)/f(z)), f(z) = S_μ_2( z - Δ_1(z)/f(z)), f(z) = 1 - Δ_1(z) - Δ_2(z)/-z, where the functions Δ_i(z), i = 1, 2, are analytic for (z) ≠ 0 and satisfy Δ_i(z) → 0 as (z) →∞. The measure μ is denoted by μ_1 ⊞μ_2 and is called the free additive convolution of μ_1 and μ_2. For further details, we refer the reader to <cit.>, <cit.>. Let H_n be a Gaussian GHAM. (i) Suppose that r is fixed. Then EESD of n^-1/2L_H_n converges weakly to ν_, r - 1⊞ν_, 1 and the EESD of n^-1/2L̃_H_n converges weakly to ν_,1/r - 1⊞ν_, 1. (ii) Suppose r →∞ such that r/n → c ∈ [0, 1). Then the EESD of (nr)^-1/2 L_H_n converges weakly to ν_, 1⊞ν_, c and the EESD of n^-1/2L̃_H_n converges weakly to ν_, (1 - c)^2. 
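As a quick numerical illustration of part (i) of the theorem above (fixed r), the sketch below assembles a small Gaussian GHAM from its defining sum, forms the Laplacian L_H_n, and compares its spectrum with that of a surrogate matrix (an independent GOE of variance r - 1 plus an independent Gaussian diagonal) whose LSD is the free additive convolution of the semicircle law of variance r - 1 and the standard Gaussian law, by the orthogonal-invariance theorem restated in the proofs section. Sizes, seeds, and helper names are ours; this is only a rough sanity check, not part of the argument.

```python
import itertools
import numpy as np
from math import comb

rng = np.random.default_rng(0)
n, r = 60, 3                                # fixed r, Gaussian entries (case (i))
N = comb(n - 2, r - 2)
H = np.zeros((n, n))
for e in itertools.combinations(range(n), r):
    z = rng.standard_normal()
    i = np.array(e)
    H[np.ix_(i, i)] += z                    # add z on the block of the hyperedge e
    H[i, i] -= z                            # remove the diagonal: Q_e = 1_e 1_e^T - diag(1_e)
H /= np.sqrt(N)
L = np.diag(H.sum(1)) - H                   # Laplacian L_{H_n}
eig_L = np.sort(np.linalg.eigvalsh(L / np.sqrt(n)))

# surrogate: sqrt(r-1) * (GOE/sqrt(n)) + independent Gaussian diagonal
G = rng.standard_normal((n, n))
W = (G + G.T) / np.sqrt(2 * n)              # GOE scaled to semicircle of variance 1
S = np.sqrt(r - 1) * W + np.diag(rng.standard_normal(n))
eig_S = np.sort(np.linalg.eigvalsh(S))

print("second moments (both should be close to r =", r, "):",
      round(np.mean(eig_L ** 2), 3), round(np.mean(eig_S ** 2), 3))
print("median absolute quantile gap:", round(np.median(np.abs(eig_L - eig_S)), 3))
```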
It is well known <cit.> that the LSD of the Laplacian of Erdős-Rényi random graphs is the free additive convolution of the standard Gaussian and the standard semi-circle laws. Theorem <ref> generalises this result to the hypergraph setting. §.§ Concentration of the ESD In this section, we study concentration of the ESDs around the EESDs. Our main result in this regard is the following.   (i) The map T_1 : (^M, ·_2) → (_n(), ·_F) defined by T_1() = n^-1/2 H_n() is r/√(n)-Lipschitz. (ii) The map T_2 : (^M, ·_2) → (_n(), ·_F) defined by T_2() = (nr)^-1/2L_H_n() is √(r)-Lipschitz. (iii) The map T_3 : (^M, ·_2) → (_n(), ·_F) defined by T_3() = n^-1/2L̃_H_n() is √(r/r - 1 + r^2/n)-Lipschitz. With the aid of Theorem <ref>, one can easily obtain concentration of the ESD around the EESD via standard techniques (see, e.g., <cit.>). Recall that a probability measure ν on satisfies a logarithmic Sobolev inequality (LSI) with (not necessarily optimal) constant κ, if for any differentiable function f, ∫ f^2 logf^2/∫ f^2 dν dν≤ 2κ∫ |f'|^2 dν. We say in this case that ν satisfies _κ. Via the so-called Herbst argument, a measure ν satisfying an LSI can be shown to possess sub-Gaussian tails (see, e.g., <cit.>). Examples of probability measures satisfying an LSI include the Gaussian <cit.> and any probability measure ν that is absolutely continuous with respect to the Lebesgue measure and satisfies the so-called Bobkov and Götze condition <cit.>. See, e.g., <cit.> for more on logarithmic Sobolev inequalities in the context of concentration of measure. (Concentration of the ESD)   Suppose each entry of = (Y_ℓ)_1 ≤ℓ≤ M satisfies _κ for some κ > 0. There are universal constants C_j, Ĉ_j, C̃_j > 0, j = 1, 2, such that for any t > 0, (i) (d_(μ_n^-1/2H_n(),_n^-1/2H_n()) > t) ≤C_1/t^3/2exp(- C_2 n^2 t^2/κ r^2); (ii) (d_(μ_(nr)^-1/2 L_H_n(), _(nr)^-1/2 L_H_n()) > t) ≤Ĉ_1/t^3/2exp(-Ĉ_2 n t^2/κ r); (iii) (d_(μ_n^-1/2L̃_H_n(),_n^-1/2L̃_H_n()) > t) ≤C̃_1/t^3/2exp(- C̃_2 min{n, n^2/r^2} t^2/κ). Suppose now that the entries of are uniformly bounded by K > 0. There exist absolute constants C_j, Ĉ_j, C̃_j > 0, j = 3, 4, 5, 6, such that with δ_1(n) = C_5 K/n, δ̂_1(n) = Ĉ_5 K/n, δ̃_1(n) = C̃_5 K/n and Q = C_6 K, Q̂ = Ĉ_6 K, Q̃ = C̃_6 K we have for any t, t̂, t̃ > 0 satisfying t > (C_3 (Q + √(t)) δ_1(n))^2/5, t̂ > (Ĉ_3 (Q̂ + √(t̂)) δ̂_1(n))^2/5 and t̃ > ( C̃_3 (Q̃ + √(t̃))δ̃_1(n))^2/5 that (iv) (d_(μ_n^-1/2H_n(), _n^-1/2H_n()) > t) ≤C_3 (Q + √(t))/t^3/2exp(- C_4 n^2/r^2·(t^5/2/C_3 (Q + √(t)) - δ_1(n))^2); (v) (d_(μ_(nr)^-1/2 L_H_n(), _(nr)^-1/2 L_H_n()) > t̂) ≤Ĉ_3 (Q̂ + √(t̂))/t̂^3/2exp(- Ĉ_4 n/r·(t̂^5/2/Ĉ_3 (Q̂ + √(t̂)) - δ̂_1(n))^2); (vi) (d_(μ_n^-1/2L̃_H_n(), _n^-1/2L̃_H_n()) > t̃) ≤C̃_3 (Q̃ + √(t̃))/t̃^3/2exp(- C̃_4 ·min{n, n^2/r^2}·(t̃^5/2/C̃_3 (Q̃ + √(t̃)) - δ̃_1(n))^2). We note that the concentration inequalities of Corollary <ref> are only effective in the regime r/n → 0. As such we are able to show in-probability convergence of the ESDs only in this regime. Suppose that the entries of = (Y_ℓ)_1 ≤ℓ≤ M satisfy Assumption <ref>. Further suppose that they are either uniformly bounded or each of them satisfy _κ for some κ > 0. (i) If r/n → 0, then d_(μ_n^-1/2 H_n(), ν_, 1) 0, and if r √(log n)/n→ 0, then d_(μ_n^-1/2 H_n(), ν_, 1) 0. (ii) For the Laplacian L_H_n, we have the following: For fixed r, d_(μ_n^-1/2 L_H_n(), ν_, r - 1⊞ν_, 1) 0. If r →∞ and r/n→ 0, then d_(μ_(nr)^-1/2 L_H_n(), ν_, 1) 0, and if r →∞ and r log n/n→ 0, then d_(μ_(nr)^-1/2 L_H_n(), ν_, 1) 0. 
(iii) For the Laplacian L̃_H_n, we have the following: For fixed r, d_(μ_n^-1/2L̃_H_n(), ν_,1/r - 1⊞ν_, 1) 0. If r →∞ and r/n→ 0, then d_(μ_n^-1/2L̃_H_n(), ν_, 1) 0, and if r →∞ and r√(log n)/n→ 0, then d_(μ_n^-1/2L̃_H_n(), ν_, 1) 0. Recall that a d-dimensional random vector satisfies a Poincaré inequality with constant σ^2 if for any bounded smooth function g on ^d one has (g()) ≤σ^2 ∇ g()_2^2, We say that satisfies (σ^2). Suppose each of the entries Y_ℓ satisfies (σ^2). Combining Theorem <ref> with Lemma 7.1 of <cit.>, it follows that the n-dimensional vector of the eigenvalues of n^-1/2H_n() satisfies (σ^2 r^2 / n). Let d_(μ, ν) := sup_x |F_μ(x) - F_ν(x)| denote the Kolmogorov-Smirnov distance between two probability measures μ and ν on , where F_μ(x) = μ((-∞, x]) is the distribution function of μ. As a consequence of Corollary 6.2 of <cit.>, we have [d_(μ_n^-1/2H_n(), _n^-1/2 H_n())] ≤ C (σ r/n)^2/3log^2(n/σ r) for some constant C > 0. Unfortunately, C depends on the Lipschitz constant of F__n^-1/2H_n(). §.§ Asymptotics for the edge eigenvalues Using the low rank representation of a Gaussian GHAM, we study its spectral edge. Our results generalise the results of <cit.> for Erdős-Rényi random graphs. For any n × n symmetric matrix B, let λ_1(B) ≥λ_2(B) ≥⋯≥λ_n(B) denote its eigenvalues in non-increasing order. We first present our results on the extreme eigenvalues of a Gaussian GHAM. Let H_n be a Gaussian GHAM. Let c_n := r/n and suppose that lim sup c_n <1. Let ζ denote a standard Gaussian variable. (i) If c_n → c∈ (0,1), then λ_1(H_n)/nc/2ζ + √(c^2/4ζ^2 + c(1-c)), λ_n(H_n)/nc/2ζ -√(c^2/4ζ^2 +c(1-c)). (ii) If r →∞ and c_n→ 0, then λ_1(H_n)/√(nr) 1, λ_n(H_n)/√(nr) -1. (iii) If r is fixed, then λ_1(H_n)/√(n) √(r-2) + 1/√(r-2) if r ≥ 4, 2 if r ≤ 3, λ_n(H_n)/√(n) -√(r-2) - 1/√(r-2) if r ≥ 4, -2 if r ≤ 3. (iv) Let k ∈. Then, √(n/log n)(λ_1+k(H_n)/√(n) - 2(1-c_n)) = O_P(1), √(n/log n)(λ_n-k(H_n)/√(n) + 2(1-c_n)) = O_P(1). As mentioned in Section <ref>, these results remain valid for a GHAM where the entries Y_ℓ are uniformly bounded, by say K, and r ≪n^1/4/√(K)log n. Now we present our results on the extreme eigenvalues of the two types of Laplacians of a Gaussian GHAM. Let H_n be a Gaussian GHAM. Let c_n := r/n and suppose that lim sup c_n < 1. Let k ∈ and ζ be a standard Gaussian variable. (A) We have the following estimates for the extreme eigenvalues of L_H_n: √(log n)( λ_k(L_H_n)/n√( 2 log n) -√(c_n(1-c_n))) =O_P(1), √(log n)( λ_n+1-k(L_H_n)/n√( 2 log n) +√(c_n(1-c_n))) =O_P(1). (B) We have the following estimates for the largest and smallest eigenvalues of L̃_H_n. 5pt (i) If r ≪√(log n), then √(log n)/r((r-1) λ_1(L̃_H_n)/n√(2 log n) -√(c_n(1-c_n))) = O_P(1), √(log n)/r((r-1) λ_n(L̃_H_n)/n√(2 log n) +√(c_n(1-c_n))) = O_P(1). (ii) If c_n → c ∈ (0,1), then λ_1(L̃_H_n)/nc/2ζ +√(c^2/4ζ +c(1-c)), λ_1(L̃_H_n)/nc/2ζ -√(c^2/4ζ +c(1-c)). (C) We have the following estimates for the other extreme eigenvalues of L̃_H_n. 5pt (i) If r ≪√(n), then min{√(n log n)/r, √(log n)}((r-1) λ_1+k(L̃_H_n)/n√(2 log n) -√(c_n(1-c_n))) = O_P(1), min{√(n log n)/r, √(log n)}((r-1) λ_n-k(L̃_H_n)/n√(2 log n) +√(c_n(1-c_n))) = O_P(1). (ii) If r ≫√(n log n), then r/√(n log n)(λ_1+k(L̃_H_n)/√(n) -2 (1-c_n)) =O_P(1), r/√(n log n)(λ_n-k(L̃_H_n)/√(n) +2 (1-c_n)) =O_P(1). § PROOFS §.§ Proof of Theorem <ref> The proofs are based on Propositions <ref> and <ref> below, whose proofs are extensions of the argument given in <cit.> to the GHAM setting. We first recall the main Lindeberg swapping result for <cit.>. 
Let 𝐗 = (X_1, …, X_n) and 𝐘 = (Y_1,…, Y_n) be two vectors of independent random variables with finite second moments, taking values in some open interval I and satisfying, for each i, X_i = Y_i and X^2_i = Y^2_i. Let f: I^n → be thrice differentiable in each argument. If we set U= f(𝐗) and V= f(𝐘), then for any thrice differentiable g : → and any K >0, | g(U) - g(V)| ≤ C_1(g) λ_2(f) ∑_i = 1^n [[X_i^2 (|X_i|> K)] + [Y_i^2 (|Y_i|> K)] ] + C_2(g) λ_3(f) ∑_i = 1^n [[|X_i|^3 (|X_i|≤ K)] + [|Y_i|^3 (|Y_i| ≤ K)] ], where C_1(g) = g^'_∞ + g^''_∞, C_2(g) = 1/6g^'_∞ + 1/2g^''_∞ + 1/6g^'''_∞ and λ_s(f) ;= sup{|∂^q_i f()|^s/q : 1 ≤ i ≤ n, 1 ≤ q ≤ s, x ∈ I^n}, where ∂^q_i denotes q-fold differentiation with respect to the i-th coordinate. Let z = u + iv ∈_+. For 𝐱 = (x_1, …, x_M)^⊤, consider the functions H_n(𝐱) = 1/√(N)∑_ℓ = 1^M x_ℓ Q_ℓ, R_n(𝐱) = (1/√(n)H_n(𝐱) - z I)^-1 and G_n(𝐱) = 1/n R_n(𝐱). Let 𝐘 = (Y_1, Y_2, …, Y_M)^⊤ and 𝐙 = (Z_1, Z_2, …, Z_M)^⊤, where the Y_ℓ's are independent zero mean and unit variance random variables and they satisfy Assumption <ref> and the Z_ℓ's are i.i.d. standard Gaussians. Then, we have for any K > 0, |(G_n (𝐘)) - (G_n(𝐙))| ≤ 4 max(v^-3, v^-4) r^2(r - 1)^2/n^2 N∑_ℓ = 1^M [ [Y^2_ℓ(|Y_ℓ| > K)] + [Z^2_ℓ(|Z_ℓ| > K)] ] + 12 max(v^-6, v^-9/2, v^-4) r^3(r - 1)^3/n^5/2 N^3/2∑_ℓ = 1^M [ [|Y_ℓ|^3(|Y_ℓ| ≤ K)] + [|Z_ℓ|^3(|Z_ℓ| ≤ K)] ]. Let f() = G_n(). We will apply Theorem <ref> separately on the real and imaginary parts of f. Writing f() = f() + ι f() (with ι := √(-1)), it is easy to see that max{|∂^q_i f|, |∂^q_i f|}≤ |∂^q_i f| for any 1 ≤ i ≤ M and 1 ≤ q ≤ 3. As a result, max{λ_2( f), λ_2( f) }≤λ_2(f) and max{λ_3( f), λ_3( f)}≤λ_3(f). In view of this, it is enough to show that λ_2(f) = sup_{|∂_ℓ f()|^2, |∂^2_ℓ f()| : 1 ≤ℓ≤ M }≤ 2max( v^-3, v^-4) r^2(r - 1)^2/n^2N, λ_3(f) = sup_{ |∂_ℓ f()|^3, |∂^2_ℓ f()|^3/2, |∂^3_ℓ f()| : 1 ≤ℓ≤ M }≤ 6 max( v^-6, v^-9/2, v^-4) r^3(r - 1)^3/n^5/2N^3/2. First note that ∂^q f()/∂ x_ℓ^q = 1/n∂^q R_n()/∂ x_ℓ^q, q ≥ 1. Differentiating the relation (1/√(n) H_n() - z I) R_n() = I, one obtains that ∂ R_n() /∂ x_ℓ = - 1/√(n) R_n() ∂ H_n() /∂ x_ℓ R_n(), ∂^2 R_n() /∂ x^2_ℓ = 2/n R_n() ∂ H_n() /∂ x_ℓ R_n() ∂ H_n() /∂ x_ℓ R_n(), ∂^3 R_n() /∂ x^3_ℓ = - 6/n^3/2 R_n() ∂ H_n() /∂ x_ℓ R_n() ∂ H_n() /∂ x_ℓ R_n()∂ H_n() /∂ x_ℓ R_n(). Therefore ∂ f() /∂ x_ℓ = - 1/n√(n)( R_n() ∂ H_n() /∂ x_ℓ R_n()) = - 1/n^3/2(∂ H_n() /∂ x_ℓ R^2_n()), and similarly, ∂^2 f() /∂ x^2_ℓ = 2/n^2( ∂ H_n() /∂ x_ℓ R_n() ∂ H_n() /∂ x_ℓ R^2_n()), ∂^3 f() /∂ x^3_ℓ = - 6/n^5/2(∂ H_n()/∂ x_ℓ R_n() ∂ H_n() /∂ x_ℓ R_n() ∂ H_n() /∂ x_ℓ R^2_n()). Observe that (∂ H_n() /∂ x_ℓ R^2_n()) = ∑_1 ≤ i, j ≤ n( ∂ H_n() /∂ x_ℓ)_ij( R^2_n())_ji. Let Λ = ( λ_1, …, λ_n), where λ_i's are the eigenvalues of 1/√(n) H_n(). Then from the spectral decomposition 1/√(n)H_n() = U Λ U^⊤, we have R^2_n() = U (Λ - zI)^-2 U^⊤. Note that |((Λ - zI)^-2)_ii| = |1/λ_i - z|^2 ≤1/v^2. Therefore |(R^2_n())_ij| = |∑_k ((U - zI)^-2)_kk U_ik U_kj| ≤1/v^2∑_k|U_ik| |U_kj| ≤1/v^2((∑_k U_ik^2) (∑_k U_kj^2))^1/2 = 1/v^2. Hence |(∂ H_n() /∂ x_ℓ R^2_n()) | ≤1/v^2∑_1 ≤ i, j ≤ n| ( ∂ H_n() /∂ x_ℓ)_ij|. Note that ∂ H_n() /∂ x_ℓ = 1/√(N) Q_ℓ and for any ℓ, ∑_1 ≤ i, j ≤ n|( ∂ H_n() /∂ x_ℓ)_ij| = r(r - 1)/√(N). Therefore ∂ f() /∂ x_ℓ_∞≤1/v^2r(r - 1)/n^3/2√(N). For bounding ∂^q f()/∂ x_ℓ^q_∞, q = 2, 3, we will need the the Frobenius and operator norms of ∂ H_n()/∂ x_ℓ. The Frobenius norm is easy to calculate: ∂ H_n()/∂ x_ℓ_F = 1/√(N)Q_ℓ_F = √(r(r - 1)/N). Also, relabeling the vertices if necessary, we may assume that Q_ℓ = [ J_r - I_r 0; 0 0 ]. 
Hence Q_ℓ_op = r - 1 and so ∂ H_n()/∂ x_ℓ_op = 1/√(N)Q_ℓ_op = r - 1/√(N). Now, (∂ H_n()/∂ x_ℓ R_n() ∂ H_n()/∂ x_ℓ R^2_n()) ≤∂ H_n()/∂ x_ℓ_F ·R_n() ∂ H_n()/∂ x_ℓ R^2_n()_F (by Cauchy-Schwartz) ≤∂ H_n()/∂ x_ℓ_F ·R_n()_op·∂ H_n()/∂ x_ℓ R^2_n()_F ≤∂ H_n()/∂ x_ℓ_F ·R_n()_op·∂ H_n()/∂ x_ℓ_F ·R^2_n()_op ≤∂ H_n()/∂ x_ℓ_F^2 ·R_n()_op^3 ≤r(r - 1)/N1/v^3. This gives ∂^2 f() /∂ x^2_ℓ_∞≤2 r(r - 1)/n^2 N v^3. For the third derivative, we do the following (∂ H_n()/∂ x_ℓ R_n()∂ H_n()/∂ x_ℓ R_n() ∂ H_n()/∂ x_ℓ R^2_n()) ≤∂ H_n()/∂ x_ℓ R_n() ∂ H_n()/∂ x_ℓ_F · R_n() ∂ H_n()/∂ x_ℓ R^2_n()_F (by Cauchy-Schwartz) ≤∂ H_n()/∂ x_ℓ_op·R_n() ∂ H_n()/∂ x_ℓ_F ·R_n()∂ H_n()/∂ x_ℓ R^2_n()_F ≤∂ H_n()/∂ x_ℓ_op·R_n()_op·∂ H_n()/∂ x_ℓ_F ·R_n()∂ H/∂ x_ℓ R^2_F ≤∂ H_n()/∂ x_ℓ_op·∂ H_n()/∂ x_ℓ_F^2 ·R_n()_op^4 ≤r - 1/√(N)·r(r - 1)/N·1/v^4 ≤r(r - 1)^2/N^3/2·1/v^4. This yields ∂^3 f() /∂ x^3_ℓ_∞≤6 r(r - 1)^2/n^5/2 N^3/2 v^4. Using (<ref>), (<ref>) and (<ref>), we obtain λ_2(f) ≤ 2max( v^-3, v^-4) r^2(r - 1)^2/n^2N, λ_3(f) ≤ 6 max( v^-6, v^-9/2, v^-4) r^3(r - 1)^3/n^5/2N^3/2. This completes the proof. For the two Laplacian matrices (nr)^-1/2L_H_n and n^-1/2L̃_H_n, we have the following results. Their proofs are similar to the proof of Proposition <ref> and hence are relegated to the appendix. Let z = u + iv ∈_+. For = (x_1, …, x_M)^⊤, consider the functions H_n() = 1/√(N)∑_ℓ = 1^M x_ℓ Q_ℓ, L_H_n() ≡ L_H_n() = (H_n() ) - H_n(), R̂_n() = (1/√(nr)L_H_n() - z I)^-1 and Ĝ_n() = 1/nR̂_n(). Let = (Y_1, Y_2, …, Y_M)^⊤ and = (Z_1, Z_2, …, Z_M)^⊤, where the Y_ℓ's are independent random variables with zero mean and unit variance satisfying Assumption <ref> and the Z_ℓ's are i.i.d. standard Gaussians. Then, we have for any K > 0, |(Ĝ_n (𝐘)) - (Ĝ_n(𝐙))| ≤ 8 max(v^-3, v^-4) r^3/2/n^2 N∑_ℓ = 1^M [ [Y^2_ℓ(|Y_ℓ| > K)] + [Z^2_ℓ(|Z_ℓ| > K)] ] + 16 max(v^-6, v^-9/2, v^-4) r^9/2/n^5/2 N^3/2∑_ℓ = 1^M [ [|Y_ℓ|^3(|Y_ℓ| ≤ K)] + [|Z_ℓ|^3(|Z_ℓ| ≤ K)] ]. Let z = u + iv ∈_+. For = (x_1, …, x_M)^⊤, consider the functions H_n() = 1/√(N)∑_ℓ = 1^M x_ℓ Q_ℓ, L̃_H_n() ≡L̃_H_n() = (H_n() )/r - 1 - H_n(), R̃_n() = (1/√(n)L̃_H_n() - z I)^-1 and G̃_n() = 1/nR̃_n(). Let = (Y_1, Y_2, …, Y_M)^⊤ and = (Z_1, Z_2, …, Z_M)^⊤, where the Y_ℓ's are independent random variables with zero mean and unit variance satisfying Assumption <ref> and the Z_ℓ's are i.i.d. standard Gaussians. Then, we have for any K > 0, |(G̃_n (𝐘)) - (G̃_n(𝐙))| ≤ 4 max(v^-3, v^-4) r^4/n^2 N∑_ℓ = 1^M [ [Y^2_ℓ(|Y_ℓ| > K)] + [Z^2_ℓ(|Z_ℓ| > K)] ] + 12 max(v^-6, v^-9/2, v^-4) r^6/n^5/2 N^3/2∑_ℓ = 1^M [ [|Y_ℓ|^3(|Y_ℓ| ≤ K)] + [|Z_ℓ|^3(|Z_ℓ| ≤ K)] ]. In the notation of Proposition <ref> we have S__n^-1/2H_n() = G_n() and S__n^-1/2H_n() = G_n(). We take K = ε K_n where K_n = √(nN)/r^4. Observe that ∑_ℓ = 1^M [ [|Y_ℓ|^3(|Y_ℓ| ≤ε K_n)] + [|Z_ℓ|^3(|Z_ℓ| ≤ε K_n)] ] ≤ε K_n ∑_ℓ = 1^M [ [|Y_ℓ|^2(|Y_ℓ| ≤ K_n)] + [|Z_ℓ|^2(|Z_ℓ| ≤ K_n)] ] ≤ 2 ε K_n M. Therefore the second term in (<ref>) is at most 2 r^3 (r - 1)^3 ε K_n M/n^5/2N^3/2≤2 ε K_n r^4/√(n N). Therefore, with our choice of K_n, the second term in (<ref>) is at most 2ε. Now, i.i.d standard Gaussian random variables satisfy Assumption <ref>. By Assumption <ref> on (Y_ℓ)_1 ≤ℓ≤ M, the first term in (<ref>) goes to zero. The proof for the Laplacian matrix n^-1/2L̃_H_n follows from Proposition <ref> in an identical fashion. To deal with (nr)^-1/2 L_H_n, it follows from Proposition <ref> via the same argument as above that we want lim_n →∞r^3/2/n^2 N∑_ℓ = 1^M [|Y_ℓ|^2 (|Y_ℓ| > ε K_n')] = 0, where K_n' = √(nN)/r^5/2. This is precisely Assumption <ref>. 
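As a numerical companion to the theorem just proved, the following sketch compares the empirical Stieltjes transforms of n^-1/2 H_n built from centred Bernoulli (hypergraph) entries and from Gaussian entries over all r-subsets of [n]; the averaged differences at a few points of the upper half-plane are already small for modest n. Parameter choices and helper names are ours, and this is only an illustrative check under those choices.

```python
import itertools
import numpy as np
from math import comb

def gham(n, r, Y):
    """H_n = N^{-1/2} sum_l Y_l (1_e 1_e^T - diag(1_e)) over all r-subsets e."""
    N = comb(n - 2, r - 2)
    H = np.zeros((n, n))
    for y, e in zip(Y, itertools.combinations(range(n), r)):
        i = np.array(e)
        H[np.ix_(i, i)] += y
        H[i, i] -= y
    return H / np.sqrt(N)

def stieltjes(H, z):
    """S(z) = (1/n) tr((n^{-1/2} H - z I)^{-1})."""
    n = H.shape[0]
    return np.trace(np.linalg.inv(H / np.sqrt(n) - z * np.eye(n))) / n

rng = np.random.default_rng(1)
n, r, p, reps = 40, 3, 0.2, 20
M = comb(n, r)
zs = [2j, 1.0 + 1j, -1.5 + 0.5j]
acc = np.zeros((2, len(zs)), dtype=complex)
for _ in range(reps):
    Yb = ((rng.random(M) < p).astype(float) - p) / np.sqrt(p * (1 - p))  # hypergraph entries
    Yg = rng.standard_normal(M)                                          # Gaussian entries
    acc[0] += [stieltjes(gham(n, r, Yb), z) for z in zs]
    acc[1] += [stieltjes(gham(n, r, Yg), z) for z in zs]
acc /= reps
for z, sb, sg in zip(zs, acc[0], acc[1]):
    print(f"z = {z}: |S_Bernoulli - S_Gaussian| = {abs(sb - sg):.4f}")
```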
§.§ Proofs of Theorems <ref> and <ref> We first state and prove a perturbation inequality that will be used repeatedly in the proofs. For this we will use the 1-Wasserstein metric. Recall that for q ≥ 1, the q-Wasserstein distance between probability measures μ and ν on is given by d_W_q(μ, ν) := inf_π∫ |x - y|^q dπ(x, y), where the infimum is over all coupings π of μ and ν. By the Kantorovich-Rubinstein duality (see, e.g., <cit.>), one also has the following representation for d_W_1: d_W_1(μ, ν) = sup_f_≤ 1{|∫ f dμ - ∫ f dν|}. From this it is evident that d_≤ d_W_1. Let A and B be n × n Hermitian random matrices. Then d_W_1(_A, _B) ≤A - B_F/√(n)≤[A - B_F^2/n]^1/2. Using the Kantorovich-Rubinstein duality, d_W_1(_A, _B) = sup_f_≤ 1|∫ f d_A - ∫ f d_B| = sup_f_≤ 1|∫ f dμ_A - ∫ f dμ_B| ≤sup_f_≤ 1|∫ f dμ_A - ∫ f dμ_B| = [d_W_1(μ_A, μ_B)]. The first inequality in (<ref>) follows now from the fact that d_W_1≤ d_W_2 and the Hoffman-Wielandt inequality (Lemma <ref>). The final inequality in (<ref>) is just Jensen's inequality applied to the function x ↦√(x). Recall the Lévy distance between two probability measures μ and ν on : d_(μ, ν) := inf{ϵ >0 : F_μ(x - ϵ) - ϵ≤ G_ν(x) ≤ F_μ(x + ϵ) + ϵ , ∀ x ∈}. It is well known that (see, e.g., <cit.>) d_≤ 2 d_≤ 2 d_. We will make use of the inequality d_≤ 2 d_ several times below. Consider i.i.d. standard Gaussian random variables U, (V_i)_1 ≤ i ≤ n and an independent GOE random matrix n^-1/2 Z_n (i.e. (Z_ij)_1 ≤ i < j ≤ n are i.i.d. standard Gaussians and (Z_ii)_1 ≤ i ≤ n is another independent collection of i.i.d. Gaussians with variance 2). Consider a symmetric matrix G_n with entries G_n,ij = α_n U + β_n (V_i + V_j) + θ_n Z_ij. Then we may write G_n as follows: G_n = α_n U_n _n _n^⊤ + β_n (_n^⊤ + _n ^⊤) + θ_n Z_n, where _n is an n × n vector of 1's, = (V_i)_1≤ℓ≤ M. We shall choose α_n, β_n, θ_n in such a way that G'_n = G_n- (G_n) yields the same covariance structure as a GHAM. To this end, note that for i ≠ j and i' ≠ j', (G'_n,ij, G'_n,i'j') = α_n^2 if |{i, j}∩{i', j'}| = 0, α_n^2 + β_n^2 if |{i, j}∩{i', j'}| = 1, α_n^2 + 2 β_n^2 + θ_n^2 if |{i, j}∩{i', j'}| = 2. In addition, (G_n,ii) = α_n^2 + 4 β_n^2 + 2θ_n^2. To match this with our Gaussian GHAM, we must set α_n^2 = ρ_n, α_n^2 + β_n^2 = γ_n, α_n^2 + 2β_n^2 + θ_n^2 = 1, which yields α_n = √(ρ_n), β_n = √(γ_n - ρ_n), θ_n = √(1 - 2 γ_n + ρ_n). As r/n → c, we have γ_n → c and ρ_n → c^2. It follows that θ_n → 1 -c. With these parameter choices, the entries of G'_n and H_n have the same joint distribution. Now we write d_(_n^-1/2 H_n, ν_, 1 - c) ≤ d_ (_n^-1/2 H_n, _n^-1/2 G'_n) + d_(_n^-1/2 G'_n, _n^-1/2 G_n)) + d_(_n^-1/2 G_n, _n^-1/2θ_n Z_n) + d_ (_n^-1/2θ_n Z_n, _n^-1/2 (1-c) Z_n) + d_( _n^-1/2 (1-c) Z_n, ν_, 1 - c). Now we bound each of the terms on the right hand side. The first term vanishes since the entries of H_n and G'_n have the same joint distribution. For the second term, we use the inequality d_≤ d_W_1 and then appeal to Lemma <ref> to get d_(_n^-1/2 G'_n, _n^-1/2 G_n) ≤ d_W_1(_n^-1/2 G'_n, _n^-1/2 G_n) ≤((G_n)_F^2)^1/2/n = √((G_n, 11)/n)≤√(4/n), where the last inequality holds because (G_n, 11) = α_n^2 + 4 β_n^2 + 2 θ_n^2 = 1 + 2 β_n^2 + θ_n^2 ≤ 4. For the third term, using the fact that (G_n - θ_n Z_n) ≤ 2, we obtain d_(_n^-1/2 G_n, _n^-1/2θ_n Z_n) ≤ 2d_(_n^-1/2 G_n, _n^-1/2θ_n Z_n) = sup_x | F_μ_n^-1/2G_n(x)- F_μ_n^-1/2θ_n Z_n(x)| ≤sup_x |F_μ_n^-1/2G_n(x)- F_μ_n^-1/2θ_n Z_n(x)| = [ d_(μ_n^-1/2 G_n, μ_n^-1/2θ_n Z_n) ] ≤[(n^-1/2(G_n - θ_n Z_n))/n] (using Lemma <ref>) ≤2/n. 
The fourth term is bounded using the same strategy as the second term: d_ (_n^-1/2θ_n Z_n, _n^-1/2 (1-c) Z_n) ≤ |θ_n -(1-c)| 1/n(𝔼Z_n^2_F)^1/2 = |θ_n-(1-c)|. Since θ_n, (1-c) are bounded away form 0 and bounded above by 1, using the Lipschitzness of the function x ↦√(x) away from 0, |θ_n-(1-c)| =|√(1-2(r/n) +(r/n)^2 +O(1/n))-√(1-2c+c^2)| = O(max{|r/n-c|,1/n}). Therefore d_ (_n^-1/2θ_n Z_n, _n^-1/2 (1-c) Z_n) ≤ O(max{|r/n-c|,1/n}). For the final term, using Theorem 1.6 of <cit.> we get that d_(_n^-1/2 (1-c) Z_n, ν_, 1 - c) ≤ 2 d_( _n^-1/2 (1-c) Z_n, ν_, 1 - c) ≤C/n. for some absolute constant C > 0. This concludes the proof. Now we restate Theorem 2.1 of <cit.> in our notation for orthogonally invariant matrices (see the discussion on page 280 of <cit.>). <cit.> Let B_n = U_n^⊤ F_1,n U_n + V_n^⊤ F_2,n V_n, where F_1,n and F_2,n are (potentially) random n × n symmetric matrices with arbitrary distributions and U_n, V_n are orthogonal matrices uniformly distributed over the orthogonal group (n) according to the Haar measure. We assume that the matrices F_1, n, F_2, n, U_n, V_n are independent. Assume that the ESDs μ_n^-1/2 F_r,n, r = 1, 2, converge weakly in probability as n →∞ to non-random probability measures μ_r, r = 1, 2, respectively and that sup_n ∫ |λ| _n^-1/2 F_r, n(dλ) < ∞ for at least one value of r ∈{1, 2}. Then μ_n^-1/2 B_n converges weakly in probability to μ_1 ⊞μ_2, the free additive convolution of the measures μ_1 and μ_2. To prove (i), we use the low rank representation of a Gaussian GHAM. We only give the details for L̃_H_n. The stated result for L_H_n follows in a similar manner. First note that L̃_G_n - L̃_G_n' = -r - 2/r - 1(G_n), and thus d_W_1(_n^-1/2L̃_G'_n, _n^-1/2L̃_G_n)^2 ≤(r - 2)^2/n^2 (r - 1)^2(G_n)_F^2 = O(1/n). Therefore it is enough to consider L̃(G_n). Since the map X ↦L̃_X is linear, we have L̃_G_n = L̃_P_n + θ_n L̃_Z_n. where P_n = α_n U ^⊤ + β_n (^⊤ + ^⊤). Further, using the rank-inequality (Lemma <ref>), it is clear that we may focus on X_n = (P_n )/r - 1 + θL̃_Z_n. Now (P_n ) = (n α_n U + nβ_n V̅ + nβ_n ). Consider the matrix X̃_n = nβ_n/r - 1() + θ_n L_Z_n. Using Lemma <ref>, we see that d_W_1(_n^-1/2X̃_n, _n^-1/2 X_n)^2 ≤1/n^2(r - 1)^2(nα_n U + nβ_n V̅)_F^2 = n/(r - 1)^2(α_n U + β_n V̅)^2. Now (α_n U + β_n V̅)^2 = (α_n U + β_n V̅) = α_n^2 + β_n^2 / n = O(1/n^2). Hence d_W_1(_n^-1/2X̃_n, _n^-1/2 X_n) = O_P(1/√(n)). In fact, since β_n = √(r - 2)/√(n) +O(1/n) and θ_n = O(1/n), it is enough to consider (again by Hoffman-Wielandt) the matrix Ŵ_n = √(n(r - 2))/r - 1() + L̃_Z_n = √(n(r - 2))/r - 1() + (Z_n)/r - 1 - Z_n. By a modification of the proof of Lemma 4.12 of <cit.> one can instead consider the matrix (see Lemma <ref> for a precise statement of this replacement and its proof) W̃_n = √(n(r - 2))/r - 1() + √(n) ()/r - 1 + Z_n, where is a vector of i.i.d. standard Gaussians independent of and Z_n. Recall that n^-1/2 Z_n is a GOE random matrix which is orthogonally invariant. It is clear using the strong law of large numbers that the ESD of n^-1/2[√(n(r - 2))/r - 1() + √(n)()/r - 1] converges weakly almost surely to ν_, 1/r - 1. Now, using Theorem <ref> it follows that the ESD of n^-1/2[ √(n(r - 2))/r - 1() + √(n) ()/r - 1 + Z_n ] convereges weakly in probability to ν_, 1/r - 1⊞ν_. This proves (i) (the weak convergence of the EESD follows from the in-probability weak convergence of the ESD via the dominated convergence theorem). Now we prove (ii). First consider the matrix L̃_H_n. 
We invoke Lemma <ref>: d_W_1(_n^-1/2L̃_H_n, _n^-1/2 H_n)^2 ≤1/n(n^-1/2H_n )/r - 1_F^2 = 1/(r - 1)^2 n^2∑_i (∑_j H_n, ij)^2. Since ∑_i (∑_j H_n, ij)^2 = ∑_i (∑_j H_n, ij) = ∑_i (n + n(n - 1) γ_n) = O(n^2 r), we conclude that d_W_1(_n^-1/2L̃_H_n, _n^-1/2 H_n) = O(1/√(r)). Since r →∞, it follows that _n^-1/2L̃_H_n and _n^-1/2 H_n have the same weak limit, namely ν_, (1 - c)^2. Now we consider the usual Laplacian L_H_n. Again by Lemma <ref>, d_W_1(_(nr)^-1/2 L_G_n, _(nr)^-1/2(G_n )) ≤1/n(nr)^-1/2 G_n_F^2 =1/n^2 r∑_i, j [G_n, ij^2] =O(r^-1). Now G_n = n α_n U + n β_n V̅ + nβ_n + θ_n Z_n . Another application of Lemma <ref> gives d_W_1(_(nr)^-1/2(G_n ), _(nr)^-1/2(nα_n U + nβ_n )) ≤1/n(nr)^-1/2(nβ_n V̅ + θ_n Z_n )_F^2 =1/n^2 r∑_i [nβ_n V̅ + θ_n ∑_j Z_ij]^2 =1/n r(nβ_n V̅ + θ_n ∑_j Z_1j) =1/n r (n β_n^2 + n θ_n^2) =O(r^-1). Finally note that (nr)^-1/2(n α_n U + nβ_n ) = (√(r/n) U + ) = √(r/n) U I + (). Since the first matrix on the right hand side above is orthogonally invariant, using Theorem <ref> we conclude that _(nr)^-1/2(n α_n U + nβ_n )ν_, c⊞ν_, 1. This completes the proof. §.§ Proof of Theorem <ref> Theorem <ref> follows from Lemmas <ref> and <ref> below. We have the following. (i) The map ↦ n^-1/2H_n() is Δ_n, r-Lipschitz, where Δ_n, r^2 = 1/nN∑_s = 0^r (s^2 - s) rsn - rr - s. * (ii) The map ↦ (nr)^-1/2 L_H_n() is Γ_n,r-Lipschitz, where Γ_n,r^2 = 1/n r N∑_s = 0^r ((r^2 - 2r) s + s^2) rsn - rr - s. * (iii) The map ↦ n^-1/2L̃_H_n() is Ξ_n,r-Lipschitz, where Ξ_n,r^2 = 1/nN∑_s = 0^r s^2 rsn - rr - s. We first prove (i). Note that (Q_ℓQ_ℓ') = ((_ℓ_ℓ^⊤ - (_ℓ)) (_ℓ'_ℓ'^⊤ - (_ℓ')) = (_ℓ_ℓ^⊤_ℓ'_ℓ'^⊤ - (_ℓ)_ℓ'_ℓ'^⊤ - _ℓ_ℓ^⊤(_ℓ') + (_ℓ) (_ℓ')) = (_ℓ^⊤_ℓ')^2 - _ℓ^⊤_ℓ' = |e_ℓ∩ e_ℓ'|^2 - |e_ℓ∩ e_ℓ'|. Therefore n^-1/2H_n() - n^-1/2H_n(')_F^2 = 1/nN∑_ℓ, ℓ' ( - ')_ℓ ( - ')_ℓ' (Q_ℓQ_ℓ') = 1/nN∑_ℓ, ℓ' ( - ')_ℓ ( - ')_ℓ' (|e_ℓ∩ e_ℓ'|^2 - |e_ℓ∩ e_ℓ'|) = 1/nN∑_s = 0^r (s^2 - s) ∑_ℓ, ℓ' : |e_ℓ∩ e_ℓ'|= s ( - ')_ℓ ( - ')_ℓ'. Now two applications of Cauchy-Schwartz gives ∑_ℓ, ℓ' : |e_ℓ∩ e_ℓ'|= s ( - ')_ℓ ( - ')_ℓ' = ∑_ℓ = 1^M ( - ')_ℓ∑_ℓ' : |e_ℓ∩ e_ℓ'|= s( - ')_ℓ' ≤(∑_ℓ = 1^M ( - ')_ℓ^2)^1/2(∑_ℓ = 1^M (∑_ℓ' : |e_ℓ∩ e_ℓ'|= s( - ')_ℓ')^2)^1/2 ≤ - '[∑_ℓ = 1^M rsn - rr - s∑_ℓ' : |e_ℓ∩ e_ℓ'|= s( - ')_ℓ'^2]^1/2 = - '[rsn - rr - s∑_ℓ' = 1^M ( - ')_ℓ'^2 ∑_ℓ : |e_ℓ∩ e_ℓ'|= s 1 ]^1/2 = rsn - rr - s - '^2. Therefore n^-1/2H_n() - n^-1/2H_n(')_F^2 ≤[1/nN∑_s = 0^r (s^2 - s) rsn - rr - s] - '^2 = Δ_n, r^2 - '^2. This completes the proof of (i). Let us now prove (ii). Notice that L_H_n() = (H_n() ) - H_n() = 1/√(N)∑_ℓ_ℓ ( Q_ℓ) - 1/√(N)∑_ℓ_ℓ Q_ℓ = 1/√(N)∑_ℓ (r - 1) _ℓ ( _ℓ) - 1/√(N)∑_ℓ_ℓ Q_ℓ = 1/√(N)∑_ℓ_ℓ ((r - 1)( _ℓ) - Q_ℓ) = 1/√(N)∑_ℓ_ℓ (r ( _ℓ) - _ℓ_ℓ^⊤) = 1/√(N)∑_ℓ_ℓQ̂_ℓ, where Q̂_ℓ := r ( _ℓ) - _ℓ_ℓ^⊤. Note that (Q̂_ℓQ̂_ℓ' ) = ((r ( _ℓ) - _ℓ_ℓ^⊤) (r ( _ℓ') - _ℓ'_ℓ'^⊤ )) = (r^2 (_ℓ)(_ℓ') - r (_ℓ) _ℓ'_ℓ'^⊤ - r ( _ℓ') _ℓ_ℓ^⊤ + _ℓ_ℓ^⊤_ℓ'_ℓ'^⊤ ) = r^2 _ℓ^⊤_ℓ' - r _ℓ^⊤_ℓ' - r _ℓ^⊤_ℓ' + (_ℓ^⊤_ℓ')^2 = (r^2 - 2r) _ℓ^⊤_ℓ' + (_ℓ^⊤_ℓ')^2 = (r^2 - 2r) |e_ℓ∩ e_ℓ'| + |e_ℓ∩ e_ℓ'|^2. Now using the bound (<ref>), we get (nr)^-1/2 L_H_n() - (nr)^-1/2 L_H_n(')_F^2 = 1/n r N∑_ℓ, ℓ' ( - ')_ℓ ( - ')_ℓ' (Q̂_ℓQ̂_ℓ') = 1/n r N∑_ℓ, ℓ' ( - ')_ℓ ( - ')_ℓ' [(r^2 - 2r) |e_ℓ∩ e_ℓ'| + |e_ℓ∩ e_ℓ'|^2 ] = 1/n r N∑_s = 0^r ((r^2 - 2r) s + s^2) ∑_ℓ, ℓ' : |e_ℓ∩ e_ℓ'|= s ( - ')_ℓ ( - ')_ℓ' ≤[1/n r N∑_s = 0^r ((r^2 - 2r) s + s^2) rsn - rr - s] - '^2 = Γ_n,r^2 - '^2. This proves (ii). Finally, we prove (iii). 
Notice that L̃_H_n() = ( H_n() )/r - 1 - H_n() = 1/√(N)(r - 1)∑_ℓ_ℓ ( Q_ℓ) - 1/√(N)∑_ℓ_ℓ Q_ℓ = 1/√(N)∑_ℓ_ℓ( _ℓ) - 1/√(N)∑_ℓ_ℓ Q_ℓ = 1/√(N)∑_ℓ_ℓ ( ( _ℓ) - Q_ℓ) = 1/√(N)∑_ℓ_ℓ (2 ( _ℓ) - _ℓ_ℓ^⊤) = 1/√(N)∑_ℓ_ℓQ̃_ℓ, where Q̃_ℓ := 2 ( _ℓ) - _ℓ_ℓ^⊤. Note that (Q̃_ℓQ̃_ℓ' ) = ((2 ( _ℓ) - _ℓ_ℓ^⊤) (2 ( _ℓ') - _ℓ'_ℓ'^⊤ )) = (4 (_ℓ)(_ℓ') - 2 (_ℓ) _ℓ'_ℓ'^⊤ - 2 ( _ℓ') _ℓ_ℓ^⊤ + _ℓ_ℓ^⊤_ℓ'_ℓ'^⊤ ) = 4 _ℓ^⊤_ℓ' - 2 _ℓ^⊤_ℓ' - 2 _ℓ^⊤_ℓ' + ( _ℓ^⊤_ℓ' )^2 = ( _ℓ^⊤_ℓ' )^2 = |e_ℓ∩ e_ℓ'|^2. Now using the bound (<ref>), we get n^-1/2L̃_H_n() - n^-1/2L̃_H_n(')_F^2 = 1/nN∑_ℓ, ℓ' ( - ')_ℓ ( - ')_ℓ' (Q̃_ℓQ̃_ℓ') = 1/nN∑_ℓ, ℓ' ( - ')_ℓ ( - ')_ℓ' |e_ℓ∩ e_ℓ'|^2 = 1/nN∑_s = 0^r s^2 ∑_ℓ, ℓ' : |e_ℓ∩ e_ℓ'|= s ( - ')_ℓ ( - ')_ℓ' ≤[1/nN∑_s = 0^r s^2 rsn - rr - s] - '^2 = Ξ_n,r^2 - '^2. This proves (iii). For any r ≥ 2, we have the following estimates. (i) Δ_n, r^2 ≤r^2/n. (ii) Γ_n,r^2 ≤ r. (iii) Ξ_n,r^2 ≤r/r - 1 + r^2/n. Let X ∼(n, r, r). Then [X] = r^2/n, (X) = r^2/n·n - r/n·n - r/n - 1 = r^2/n(1 - r/n)^2 (1 - 1/n)^-1≤r^2/n(1 - r/n)^2. We note that 1/nr∑_s = 0^r (s^2 - s) rsn - rr - s =(X^2 - X) = (X) + ([X])^2 - [X] ≤r^2/n(1 - r/n)^2 + r^2/n(r^2/n - 1) = r^2/n(1 - 2r/n + r^2/n^2 + r^2/n - 1) = r^3/n^2(r/n + r - 2 ) ≤r^3(r - 1)/n^2. Therefore Δ_n, r^2 ≤nr/n N·r^3(r - 1)/n^2 = n (n-1)/nr (r - 1)·r^3(r - 1)/n^2≤r^2/n. This proves (i). For (ii), note that r Ξ_n, r^2 = nr/nN∑_s = 0^r (r - 1)^2 s rsn - rr - s/nr + Δ_n, r^2 = nr/nN (r - 1)^2 [X] +Δ_n, r^2 = (n - 1)/r(r - 1) (r - 1)^2 r^2/n + Δ_n, r^2 ≤ (n - 1) r^2/n + r^2/n = r^2. Finally, for (iii), we have Γ_n, r^2 = nr/nN∑_s = 0^r s rsn - rr - s/nr + Δ_n, r^2 = nr/nN[X] +Δ_n, r^2 = (n - 1)/r(r - 1)r^2/n + Δ_n, r^2 ≤r/r - 1 + r^2/n. This completes the proof. It is clear from the proof of Lemma <ref> that in fact, Δ_n, r = Θ(r/√(n)), Γ_n, r = Θ(√(r)), and Ξ_n, r = Θ(√(r/(r - 1) + r^2/n)). Now we will prove Corollary <ref>. Henceforth we will use the notation ⟨ f, μ⟩ := ∫ f dμ for brevity. The proof of the following result is standard (see <cit.>). Suppose that all the entries of = (Y_ℓ)_1 ≤ℓ≤ M satisfy _κ for some κ > 0. There are universal constants C_j, Ĉ_j, C̃_j > 0, j = 1, 2, such that for any 1-Lipschitz function f, we have for any t > 0, (i) (|⟨ f, μ_n^-1/2H_n()⟩ - ⟨ f, _n^-1/2H_n()⟩| > t) ≤ C_1 exp(-C_2 n^2 t^2/κ r^2); (ii) (|⟨ f, μ_(nr)^-1/2 L_H_n()⟩ - ⟨ f, _(nr)^-1/2 L_H_n()⟩| > t) ≤Ĉ_1 exp(-Ĉ_2 n t^2/κ r); (ii) (|⟨ f, μ_n^-1/2L̃_H_n()⟩ - ⟨ f, _n^-1/2L̃_H_n()⟩| > t) ≤C̃_1 exp(-C̃_2 min{n, n^2 / r^2} t^2/κ). For any 1-Lipschitz function f, |⟨ f, μ_n^-1/2H_n()⟩ - ⟨ f, μ_n^-1/2H_n(')⟩| ≤1/n∑_i = 1^n |f(λ_i) - f(λ_i')| ≤1/n∑_i = 1^n |λ_i - λ_i'|. Using the Cauchy-Schwartz and Hoffman-Wielandt inequalities, the last quantity can be bounded by (1/n∑_i = 1^n (λ_i - λ_i')^2)^1/2 ≤(1/nn^-1/2 H_n() - n^-1/2 H_n(')_F^2 )^1/2 ≤Δ_n, r/√(n) - ' ≤r/n - '. Thus the map ↦⟨ f, μ_n^-1/2H_n()⟩ is r/n-Lipschitz. If all entries (Y_ℓ)_1 ≤ℓ≤ M satisfy _κ, then by the tensorization property of LSI, their joint law also satisfies _κ (see, e.g., <cit.>). Then by concentration of Lipschitz functions for measures satisfying LSI, we have (|⟨ f, μ_n^-1/2H_n()⟩ - ⟨ f, _n^-1/2H_n()⟩| > t) = (|⟨ f, μ_n^-1/2H_n()⟩ - ⟨ f, μ_n^-1/2H_n()⟩| > t) ≤ C_1 exp(-C_2 n^2 t^2/κ r^2). This proves (i). The proofs of (ii) and (iii) are similar. For (iii), we just note that the Lipschitz constant of the map ↦⟨ f, μ_n^-1/2L̃_H_n()⟩ is Ξ_n, r/√(n) = √(r/n(r - 1) + r^2/n^2) = Θ(max{1/√(n), r/n}). This completes the proof. Similarly, using Talagrand's inequality, one can prove the following. 
Suppose the entries of = (Y_ℓ)_1 ≤ℓ≤ M are uniformly bounded by K > 0. There exist absolute constants C_j, Ĉ_j, C̃_j > 0, j = 3, 4, 5, 6, such that with δ_1(n) = C_5 K/n, δ̂_1(n) = Ĉ_5 K/n, δ̃_1(n) = C̃_5 K/n and Q = C_6 K, Q̂ = Ĉ_6 K, Q̃ = C̃_6 K we have for any 1-Lipschitz function f and any t, t̂, t̃ > 0 satisfying t > (C_3 (Q + √(t)) δ_1(n))^2/5, t̂ > (Ĉ_3 (Q̂ + √(t̂))δ̂_1(n))^2/5 and t̃ > (C̃_3 (Q̃ + √(t̃))δ̃_1(n))^2/5, that (i) (|⟨ f, μ_n^-1/2H_n()⟩ - ⟨ f, _n^-1/2H_n()⟩| > t) ≤C_3 (Q + √(t))/t^3/2exp(- C_4 n^2/r^2·(t^5/2/C_3 (Q + √(t)) - δ_1(n))^2); (ii) (|⟨ f, μ_(nr)^-1/2 L_H_n()⟩ - ⟨ f, _(nr)^-1/2 L_H_n()⟩| > t̂) ≤Ĉ_3 (Q̂ + √(t̂))/t̂^3/2exp(- Ĉ_4 n/r·(t̂^5/2/Ĉ_3 (Q̂ + √(t̂)) - δ̂_1(n))^2); (iii) (|⟨ f, μ_n^-1/2L̃_H_n()⟩ - ⟨ f, _n^-1/2L̃_H_n()⟩| > t) ≤C̃_3 (Q̃ + √(t̃))/t̃^3/2exp(- C̃_4 ·min{n, n^2/r^2}·(t̃^5/2/C̃_3 (Q̃ + √(t̃)) - δ̃_1(n))^2). Using a standard covering argument (see the proofs of Theorems 1.3 and 1.4 in <cit.>), we can obtain from Lemmas <ref> and <ref> the inequalities given in (i)-(vi). We skip the details. §.§ Proofs of Theorems <ref> and <ref> Define G_n and G'_n as in (<ref>) and (<ref>), respectively and recall that H_n and G'_n have the same distribution. Also, n^-1/2 Z_n is a GOE random matrix. Note that by Weyl's inequality, max_1 ≤ i ≤ n|λ_i(G_n) - λ_i(G'_n)| ≤(G_n)_ = max_i |G_n,ii| = O_P(√(log n)). Write P_n = α_n U ^⊤ + β_n (^⊤ + ^⊤). Then max_1 ≤ i ≤ n|λ_i(G_n) - λ_i(P_n)| ≤θ_n Z_n_ = O_P(√(n)). Thus, it is enough to prove (<ref>) by replacing H_n with P_n and (<ref>) by replacing H_n with G_n. Notice that , are almost surely linearly independent. Consider an ordered basis ℬ = {, , _3, _4, …, _n} of ^n, where {_3, _4, …, _n} is a linearly independent set which is orthogonal to both and . In the basis ℬ, P_n has the following matrix representation: [ nα_n U + nβ_n V̅ nα_n UV̅ +β_n ^2 0 0 ⋯ 0; nβ_n nβ_n V̅ 0 0 ⋯ 0; 0 0 0 0 ⋯ 0; ⋮ ⋮ ⋮ ⋮ ⋮; 0 0 0 0 ⋯ 0 ]. From this representation, one can easily compute the eigenvalues of P_n. Let s^2 := 1/n∑_i=1^n (V_i - V̅)^2. Then λ_1(P_n)/n = α_n/2 U + β_n V̅ + √((α_n/2U + β_n V̅)^2 + β_n^2 s^2), λ_n(P_n)/n = α_n/2U + β_n V̅ -√((α_n/2U + β_n V̅)^2 + β_n^2 s^2), λ_k(P_n) = 0 for k = 2, 3, …, n - 1. Note that V̅ 0 and s^2 1. Therefore it is natural to compare λ_1(P_n) to α_n/2 U + √(α_n^2/4U^2 + β_n^2). To that end, note that √(n)| λ_1(P_n)/n - c_n/2U - √(c_n^2/4U^2 + c_n(1-c_n))| = √(n)|1/2(α_n - c_n)U +β_n V̅ + (α_n/2U +β_n V̅)^2 +β_n^2 s^2 -c_n^2/4U^2 - c_n(1-c_n)/√((α_n/2U +β_n V̅)^2 + β_n^2 s^2) + √(c_n^2/4U^2 + c_n(1-c_n))| ≤1/2√(n)|α_n - c_n| · |U| + β_n √(n) |V̅| + |α_n U + c_n U + 2β_n V̅| · (√(n)|α_n - c_n| |U| + 2β_n √(n) |V̅|)/4√(c_n (1-c_n)) + √(n)|β_n^2 s^2 -c_n(1-c_n)|/√(c_n(1-c_n)). Observe that √(n)|α_n - c_n| = O(1/√(n)). Also, since √(n)V̅∼ N(0, 1), β_n √(n)V̅ = O_P(√(c)_n). Further, since √(n)(s^2 - 1) N(0, 2), √(n) |β_n^2 s^2 -c_n(1-c_n)| ≤√(n) s^2|β_n^2 - c_n(1-c_n)| + c_n (1-c_n) √(n)|s^2 -1| = O_P(1/√(n)) + O_P(c_n), Finally, notice that 1/√(c_n(1-c_n)) = O(1/√(c)_n). Combining these we get that √(n)(λ_1(P_n)/n - c_n/2U - √(c_n^2/4U^2 + c_n(1-c_n))) = O_P(1). From these the desired result follows for λ_1(P_n) and hence for λ_1(H_n). Similarly, one can tackle λ_n(H_n). This completes the proofs of (<ref>) and (<ref>). Now we prove (ii). Suppose that c_n → 0 and r →∞. From (<ref>) it follows that sup_1 ≤ i ≤ n|λ_i(G_n)/√(nr) - λ_i(P_n)/√(nr)| = O_P(1/√(r)). 
Further note that λ_1(P_n)/√(nr) = 1/√(c_n)·λ_1(P_n)/n = 1/√(c_n)[c_n/2U + √(c_n^2/4U^2 + c_n(1-c_n)) + O_P(1/√(n))] = √(c_n)/2U + √(c_n/4U^2 + (1-c_n)) + O_P(1/√(r)), from which it follows that λ_1(P_n)/√(nr) 1, and similarly, λ_n(P_n)/√(nr) -1. Now we consider (iii), the regime where r is fixed. In this regime, we have to consider G_n itself. We think of G_n as a low rank deformation of a scaled GOE matrix: G_n = P_n + θ_n Z_n. Since r is fixed, it is clear that nα_n = O(1), √(n)β_n →√(r-2) and θ_n → 1. Now, multiplying both sides of (<ref>) and (<ref>) by √(n) and using the facts that 0 and s^2 1, we get that λ_1(P_n)/√(n)√(r-2) and λ_n(P_n)/√(n) -√(r-2). Suppose _1 and _n are a orthonormal pair of eigenvectors corresponding to λ_1 and λ_n, respectively. Then P_n has the representation P_n = λ_1(P_n) _1 _1^⊤ + λ_n(P_n) _n _n^⊤. Define P̃_n = √(n(r-2))(_1 _1^⊤ - _n _n^⊤). Now, by virtue of (<ref>), we may conclude that 1/√(n)P_n - P̃_n_ 0. Therefore by Weyl's inequality, it is enough to consider 1/√(n)G̃_n instead of 1/√(n) G_n, where G̃_n is defined as G̃_n = P̃_n + θ_n Z_n. Now note that P̃_n and Z_n are independent and Z_n is orthogonally invariant. Further, the LSD of θ_n Z_n is the standard semi-circle law. Hence one may apply Theorem 2.1 of <cit.> on G̃_n to conclude that λ_1(G̃_̃ñ)/√(n) √(r-2) + 1/√(r-2) if √(r-2) > 1, i.e. if r > 3, 2 if √(r-2)≤ 1, i.e. if r ≤ 3. Similarly, λ_n(G̃_n)/√(n) - √(r-2) - 1/√(r-2) if √(r-2) > 1, i.e. if r > 3, -2 if √(r-2)≤ 1, i.e. if r ≤ 3. This gives us the desired result. To prove (iv), notice that for k ≥ 1, by Lemma <ref>, for any i≤ k+1, i' ≥ k+1, λ_i'(θ_n Z_n) + λ_n+k+1-i'(P_n) ≤λ_k+1(G_n) ≤λ_i(θ_n Z_n) + λ_k+2-i (P_n). Taking i=k and i' = k+2 and noticing that λ_2(P_n) = λ_n-1(P_n) =0, we get θ_n λ_k+2(Z_n) ≤λ_k+1 (G_n) ≤θ_n λ_k (Z_n). As a consequence of Tracy and Widom's seminal result on the fluctuations of extreme eigenvalues of the GOE (see <cit.>), n^2/3(λ_k(Z_n) -2) and n^2/3 (λ_k+2(Z_n) - 2) are O_P(1), implying that n^2/3 (λ_k+1(G_n) - 2 θ_n) = O_P(1). Therefore, n^2/3 (λ_k+1(G_n) - 2(1-c)) = O_P(1). This proves (<ref>). The proof of (<ref>) is similar to that of (<ref>), thus skipped. Here, we prove (<ref>), (<ref>), (<ref>), (<ref>), (<ref>). Then, (<ref>), (<ref>), (<ref>), (<ref>), (<ref>) can be proved following the same line of argument with minor modifications, so we skip them. First, we prove part (A). Since L_G_n = L_G'_n, it suffices to consider L_G_n. Since B ↦ L_B is a linear map, L_G_n = L_P_n + θ_n L_Z_n = D_P_n - P_n + θ_n L_Z_n. Using Theorem 1.2 of <cit.>, we have L_Z_n_ = O_P(√(n log n)). Note that D_P_n = n α_n U I + n β_n ( ) + n β_n V̅ I. Now, n α_n U I_ = O_P(n), n V̅ I _ = O_P(√(n)), and P_n_ = O_P(n). Combining these, we have by Weyl's inequality, |λ_k(L_G_n) - n β_n λ_k(())| ≤L_G_n - n β_n ()_ = O_P(n). Notice that λ_k(()) is nothing but the k-th largest order statistic of n i.i.d. standard Gaussians. Thus, using Lemma <ref>, we get log n(λ_k(())/√(2 log n) - 1 +loglog n/4 log n) = O_P(1). Taking into account the error terms, we can conclude that √(log n) ( λ_k(L_G_n)/n√(2 log n) -β_n ) =λ_k(L_G_n)-n β_nλ_k(())/√(2) n + β_n √(log n)(λ_k(())/√(2 log n)-1-loglog n/4 log n) + β_n loglog n/4 √(log n) =O_P(1). which yields (<ref>). Now we prove parts (B) and (C). Notice that L̃_G'_n - L̃_G'_n_op = 1/rmax_i |G_n,ii| = O_P(√(log n)/r). We thus see that (<ref>), (<ref>), (<ref>), (<ref>) hold for L̃_G_n if and only if they hold for L̃_G'_n. Thus, it is enough to work with L̃_G_n. 
Note that L̃_G_n can be expressed as L̃_G_n = 1/r-1D_P_n - P_n +θ_n/r-1 D_Z_n -θ_n Z_n. Using Lemma <ref>, we obtain the following estimates: D_P_n_ ≤ n α_n |U| + n β_n ( )_ + n β_n V̅_ = O_P(n√(log n)), D_Z_n_ = √(n)max_i |1/√(n)∑_j Z_n,ij| =O_P(√(n log n)). Therefore |λ_1(L̃_G_n) - λ_1(P_n)| ≤1/r-1D_P_n_ +θ_n/r-1D_Z_n_ +θ_n Z_n_ = O_P(n√(log n)/r) + O_P(√(n log n)/r) + O_P(√(n)) = O_P(n√(log n)/r) + O_P(√(n)). Using the proof of (<ref>), |λ_1(L̃_G_n)/n - c_n/2U -√(c_n^2/4U +c_n(1-c_n))| ≤1/n |λ_1(L̃_G_n) - λ_1(P_n)| + |λ_1(P_n)/n -c_n/2U -√(c_n^2/4U +c_n(1-c_n))| =O_P(√(log n)/r) + O_P(1/√(n)). This proves (<ref>). In order to prove (<ref>), notice that |λ_1(L̃_G_n) - 1/r-1λ_1(D_P_n)| ≤P_n_ +θ_n/r-1D_Z_n_ +θ_n Z_n_ = O_P(n), which, together with the proof of (<ref>), implies that |(r-1) λ_1(L̃_G_n)/n√(2 log n) -√(c_n(1-c_n))| ≤|(r-1) λ_1(L̃_G_n)- λ_1(D_P_n)/n√(2 log n)| + |λ_1(D_P_n)/n√(2 log n) - β_n| + |β_n - √(c_n(1-c_n))| = |-(r-1)P_n + θ_n D_Z_n - (r-1)θ_n Z_n /n √(2 log n)|+O_P(1/√(log n)) + O_P(1/n) = O_P(r/√(log n)) + O_P(1/√(n)) + O_P(r/√(n log n)) + O_P(1/√(log n)) + O_P(1/n) = O_P(r/√(log n)). This completes the proof of (B). For (<ref>) and (<ref>), using Weyl's inequality as in the proof of (<ref>), it is enough to consider the matrix L̃'_G_n := L̃_G_n - P_n = 1/r-1 D_P_n +θ_n/r-1 D_Z_n + θ_n Z_n. Note that L̃'_G_n - 1/r-1 D_P_n_ = O_P(√(n log n)/r) +O_P(√(n)), L̃'_G_n - θ_n Z_n_ = O_P(n √(log n)/r). Therefore if r ≪√(n log n), then |(r-1) λ_k(L̃'_G_n)/n√(2 log n) -√(c_n(1-c_n))| ≤|(r-1) λ_k(L̃'_G_n)- λ_k(D_P_n)/n√(2 log n)| + |λ_k(D_P_n)/n√(2 log n) - c_n(1-c_n)| = O_P(1/√(log n)) + O_P(r/√(n log n)), which proves (<ref>). On the other hand, if r ≫√(n log n), |λ_k(L̃'_G_n)/√(n) -2 (1-c)| ≤1/√(n)L̃'_G_n - θ_n Z_n_ + | θ_n λ_k(1/√(n)Z_n) - 2(1-c) | = O_P(√(n log n)/r) + O_P(1/n^2/3) = O_P(√(n log n)/r) proving (<ref>). This completes the proof of the theorem. § ACKNOWLEDGEMENTS We thank Arijit Chakrabarty for helpful discussions. SSM was partially supported by the INSPIRE research grant DST/INSPIRE/04/2018/002193 from the Dept. of Science and Technology, Govt. of India, and a Start-Up Grant from Indian Statistical Institute. alpha § AUXILIARY RESULTS Here we collect lemmas and results borrrowed from the literature. First we define some notations. _n() := The set of all n × n matrices with complex entries. For A ∈_n(), define the Frobenius norm of A by A_F := √(∑_1 ≤ i ≤ j ≤ n|A_ij|^2). For x ∈^n, let x = √(∑_i = 1^n x^2_i). The Operator norm of A is defined as A_ := sup_x = 1Ax. For a random matrix A with eigenvalues λ_1, …, λ_n, let F_A(x) := 1/n∑_i = 1^n (λ_i ≤ x) be the empirical distribution function associated with the eigenvalues. Let _n denotes the set of all permutations of the set {1, 2, …, n}. Let A,B ∈_n() are two normal matrices, with eigenvalues λ_1(A),λ_2(A), …, λ_n(A) and λ_1(B),λ_2(B), …, λ_n(B) respectively. Then we have min_σ∈_n∑_i = 1^n | λ_i(A) - λ_σ(i)(B)|^2 ≤A - B ^2_F. An immediate consequence of this is that d_W_2(μ_A, μ_B)^2 ≤A - B^2/n. Let A, B ∈_n() are two Hermitian matrices. Then, sup_x ∈ |F_A(x) - F_B(x)| ≤rank(A - B)/n. Let A, B ∈_n() be two Hermitian matrices with decreasing sequence of eigenvalues λ_1(A), λ_2(A), …, λ_n(A) and λ_1(B), λ_2(B), …, λ_n(B), respectively. Then, for i ∈ [n], λ_j'(A) + λ_i-j'+n(B) ≤λ_i(A+B) ≤λ_j(A) + λ_i-j+1(B) for any j≤ i and j' ≥ i. A consequence of this is that for any 1 ≤ i ≤ n, |λ_i(A + B) - λ_i(A)| ≤max{|λ_1(B)|, |λ_n(B)|} = B_. The following lemma is an amalgamation of Theorems 1.5.3 and  2.2.2 from <cit.>. 
If ξ_1, ξ_2, …, ξ_n are i.i.d. standard normal variables, then for k ∈, [√(2 log n)(ξ_(k) - √(2 log n) + loglog n + log (4π)/2 √(2 log n))≤ t ] → e^-e^-x∑_j=0^k-1e^-jx/j!, where ξ_(k) denotes the k-th largest order statistic of ξ_1, ξ_2, …, ξ_n. In particular, 2 log n(ξ_(k)/√(2 log n)-1 +loglog n/4 log n) = O_P(1). Since the ξ_i's are symmetric, a similar result holds for ξ_n+1-k, modulo the obvious sign change in the centering parameter. § MISCELLANEOUS RESULTS AND PROOFS We can adapt Lemma 4.12 of <cit.> to our setting. Let Ŵ_n = √(n(r - 2))/r - 1() + (Z_n)/r - 1 - Z_n and W̃_n = √(n(r - 2))/r - 1() + √(n + 1) ()/r - 1 + Z_n, where n^-1/2 Z_n is a GOE random matrix and and are vectors of i.i.d. standard Gaussian random variables. Further, , and Z_n are independent. Then, for fixed r and every k ∈, lim_n →∞ n^-(k + 1) [ (Ŵ_n^2k) - (W̃_n^2k)] = 0. Let = _n and _n + 1 = [ '_n; V_n + 1 ] where '_n is an i.i.d copy of _n and V_n + 1 is a standard Gaussian independent of '_n. Let Z_n + 1 be the matrix defined by Z_n + 1 = [ Z'_n ; ^⊤ Z^1 ], where Z'_n is an i.i.d copy of Z_n, is an n × 1 vector with i.i.d. Gaussian entries and Z^1 is a mean 0 variance 2 Gaussian random variable. Now consider the matrix W'_n = √(n(r - 2))/r - 1('_n) + (Z'_n )/r - 1 - Z_n. Then W'_n and W̃_n has same distribution. So, it is enough to prove that lim_n →∞ n^-(k + 1) [ (Ŵ_n^2k) - ((W'_n)^2k)] = 0. Now [(Ŵ_n^2k) - ((W'_n)^2k)] = ∑_π [Ŵ_π - W'_π]', where the sum is over all circuits π : {0, …, 2k }→ [n] with π(0) = π(2k) and Ŵ_π = ∏_i = 1^2kŴ_π(i - 1), π(i). A word w of length k is a sequence of numbers of length k, e.g., 12234 is a word of length 5. Set each word w of length 2k to be a circuit assigning w[0] = w[2k]. We associate a word with each circuit of length 2k via the relation w[i] = π(i), 1 ≤ i ≤ 2k. Let Π(w) denote the collection of circuits π such that the distinct letters of w are in a one-to-one correspondence with the distinct values of π. Let v(w) be the number of distinct letters in the word w. We note that #Π(w) ≤ n^v(w). Define f_n(w) := Ŵ_w - W'_w. We show that, for any word w, there exits a constant C_w > 0 such that for all n ≥ 1, |f_n(w)| = |Ŵ_w - W'_w| ≤ C_w(n/r - 1 + 3)^k - v(w) + 1/2. Let q = q(w) be the number of indices 1 ≤ i ≤ 2k for which w[i] = w[i - 1]. If q(w) = 0 then Ŵ_w is product of only off-diagonal entries of Ŵ_n and off-diagonal entries of Ŵ_n and W'_n are same. Thus f_n(w) = 0 and f_n(w) ≠ 0 only if q(w) ≥ 1. Let _w be the graph with vertex set as the distinct letters of w and edge set { (w[i - 1], w[i] ) : 1 ≤ i ≤ 2k }. Let u = u(w) be the number of edges of _w with distinct endpoints in w, which appear exactly once along the circuit w, e.g u(12234) = 4. Then by independence of the entries of W'_n we have that W'_w = 0 as soon as u(w) ≥ 1. For u(w) > q(w), let there are b number of off-diagonal entries in Ŵ_w and ( 2k - b) diagonal entries. Then b > 2k - b. Since u(w) > 1, among these b off-diagonal entries there will be at least one entry, say Ŵ_ij such that the edge {i,j} is traversed exactly once in _w. By Wick's formula Ŵ_w = 0. Thus, to prove (<ref>) it is enough to consider q(w) ≥ u(w). It is easy to check that excluding the q loop-edges (each vertex connecting to itself), there are at most k + ⌊ (u - q)/2 ⌋ distinct edges in w. These distinct edges form a connected path through v(w) vertices, which for u ≥ 1 must also be a circuit. From the proof of Lemma 4.12 of <cit.>, we have v(w) ≤ k + ( u(w) = 0) + ⌊ (u(w) - q(w) )/2 ⌋≤ k. 
Proceeding to bound |f_n(w)|, suppose first that u ≥ 1, in which case f_n(w) = Ŵ_w. To compute this expectation we employ Wick's formula. Note that if we match u off-diagonal entries with the u diagonal entries with which they are correlated, and remaining ( q - u ) diagonal entries are either self matched, or they can match among themselves, then only contribution to the expectation will be non-zero. Other matching configurations will result in 0 due to the independence of the entries, e.g for the word w = 11223333, we have Ŵ_w = [Ŵ_11Ŵ_12] [Ŵ_22Ŵ_23] [Ŵ_33Ŵ_33] [Ŵ_33Ŵ_31]. Further, observe that covariance terms, i.e [ Ŵ_iiŴ_ij ] = 1/r - 1, [ Ŵ_ijŴ_jj] = 1/( r - 1)^2 and [Ŵ^2_ii] = n/r - 1 + 1/(r - 1)^2 + 2 - 4/r - 1 < n/r - 1 + 3, for any 1 ≤ i ≤ n. Then |f_n(w)| ≤ C^3_w(1/r - 1)^u [ C^1_w( 1/(r - 1)^2)^( q - u )/2 + C^2_w(n + r - 3/r - 1 + ( r - 2/r - 1)^2)^( q - u )/2] ≤ C_w [ 1 + C'_w ( n/r - 1 + 2 )^( q - u )/2]. By our bound (<ref>) on v(w), this implies that (<ref>) holds. Consider next words w for which u(w) = 0. There will be some diagonal entries and some off-diagonal entries which can appear twice or more in f_n ( w). Let a_1, …, a_q be the q vertices for which { a_i , a_i } is an edge of _w. Then Ŵ_w = ∏_i = 1^q Ŵ_a_i, a_iV̂_w, where V_w is the product of ( 2k - q ) off-diagonal entries of Ŵ_n that correspond to the edges of w that are in _w. Similarly, we can write W'_w = ∏_i = 1^q W'_a_i , a_i V'_w. Since off-diagonal entries of Ŵ_n are W'_n are same, we can replace V̂_w by V'_w in Ŵ_w. Let Ŵ_ii = D̂_i + Ŝ_i and W'_ii = D'_i + S'_i, for i = 1,…, 2k, where D̂_i = √(n(r - 2))/r - 1_i + 1/r - 1∑_j = 1^2k Z_ij - Z_ii, D'_i = √(n(r - 2))/r - 1'_i + 1/r - 1∑_j = 1^2k Z'_ij - Z_ii and Ŝ_i = ∑_j = 2k + 1^n Z_ij and S'_i = ∑_j = 2k + 1^n Z'_ij. Note that we may and shall replace each Ŝ_i by S'_i without altering Ŵ_w, as Z_ij and Z'_ij have same distribution. f_n(w) = [ V'_w [ ∏_i = 1^q ( D̂_a_i + S'_a_i) - ∏_i = 1^q (D'_a_i + S'_a_i) ]] = ∑_i = 1^q [ V'_w ( D_a_i - D'_a_i ) ∏_j = 1^i - 1Ŵ_a_j, a_j∏_j = i + 1^q W'_a_j, a_j] ≤∑_i = 1^q ([V'^4_w])^1/4[( D_a_i - D'_a_i )^4 ]^1/4√([ ∏_j = 1^i - 1Ŵ^2_a_j, a_j∏_j = i + 1^q W'^2_a_j, a_j]). Note that V'_w is product of some i.i.d standard Gaussians, so [V'^4_w] is constant. For each i, D_a_i - D'_a_i is a Gaussian random variable with mean 0 variance 2n(r - 2)/( r - 1)^2 + 4k/(r - 1)^2 = O(n) , so [( D_a_i - D'_a_i )^4 ]^1/4 = O( √(n)). We know [ W'^2_ii] = n/r - 1 + 1/(r - 1)^2 + 2 < n/r - 1 + 3 and [Ŵ^2_ii] = n/r - 1 + 1/(r - 1)^2 + 2 - 4/r - 1 < n/r - 1 + 3 and [ Ŵ_ii W_jj] = - 2/r - 1 + 2. By Wick's formula, we have [ ∏_j = 1^i - 1Ŵ^2_a_j, a_j∏_j = i + 1^q W'^2_a_j, a_j] = O(( n/r - 1 + 3 )^q). Hence f_n(w) ≤ C_w √(n)( n/r - 1 + 3 )^q/2. It is clear that (<ref>) implies that (<ref>) holds and hence the proof of the lemma is complete. Let g() = Ĝ_n(). As before, we have to control λ_2(g) and λ_3(g). We have the identities: ∂ g() /∂ x_ℓ = - 1/n^3 / 2 r^1/2(∂ L_H_n()/∂ x_ℓR̂_n^2()), ∂^2 g()/∂ x^2_ℓ = 2/n^2 r(∂ L_H_n()/∂ x_ℓR̂_n() ∂ L_H_n()/∂ x_ℓR̂_n^2()), ∂^3 g() /∂ x^3_ℓ = - 6/n^5/2 r^3/2(∂ L_H_n()/∂ x_ℓR̂_n() ∂ L_H_n()/∂ x_ℓR̂_n() ∂ L_H_n()/∂ x_ℓR̂_n^2()). Note that (H_n() ) = (1/√(N)∑_ℓ = 1^M x_ℓ Q_ℓ) = 1/√(N)∑_ℓ = 1^M x_ℓ (Q_ℓ). Hence ∂ L_H_n()/∂ x_ℓ = 1/√(N)((Q_ℓ) - Q_ℓ) = 1/√(N)[ r I_r - J_r 0; 0 0 ], where we have assumed (after a relabeling of nodes if necessary) that Q_ℓ = [ J_r - I_r 0; 0 0 ]. 
Now |(∂ L_H_n()/∂ x_ℓR̂_n^2())| ≤1/v^2∑_i,j| ( ∂ L_H_n()/∂ x_ℓ)_ij| = 1/v^22 r (r - 1)/√(N)≤1/v^22 r^2/√(N), and a fortiori, ∂ g() /∂ x_ℓ_∞≤2/v^2r^3/2/n^3/2√(N). From (<ref>) we also deduce that ∂ L_H_n()/∂ x_ℓ^2_F = r^2(r - 1)/N≤r^3/N and ∂ L_H_n()/∂ x_ℓ_ = r/√(N). Therefore (∂ L_H_n()/∂ x_ℓR̂_n() ∂ L_H_n()/∂ x_ℓR̂_n^2()) ≤∂ L_H_n()/∂ x_ℓ^2_F·R̂_n()^3_≤r^3/N1/v^3 and so ∂^2 g()/∂ x^2_ℓ_∞≤2r^2/n^2Nv^3. Finally, (∂ L_H_n()/∂ x_ℓR̂_n() ∂ L_H_n()/∂ x_ℓR̂_n() ∂ L_H_n()/∂ x_ℓR̂_n^2()) ≤∂ L_H_n()/∂ x_ℓ_·∂ L_H_n()/∂ x_ℓ_F^2 ·R̂_n()_^4 ≤r/√(N)·r^3/N·1/v^4 and therefore ∂^3 g() /∂ x^3_ℓ_∞≤6 r^5 / 2/n^5/2 N^3/2 v^4. We conclude that λ_2(g) ≤ 4 max( v^-4, v^-3) r^3/n^2 N and λ_3(g) ≤ 8 max( v^-6, v^-9/2, v^-4) r^9/2/n^5/2 N^3/2. Hence |(Ĝ_n (𝐘)) - (Ĝ_n(𝐙))| ≤ 8 max(v^-3, v^-4) r^3/n^2 N∑_ℓ = 1^M [ [Y^2_ℓ(|Y_ℓ| > K)] + [Z^2_ℓ(|Z_ℓ| > K)] ] + 16 max(v^-6, v^-9/2, v^-4) r^9/2/n^5/2 N^3/2∑_ℓ = 1^M [ [|Y_ℓ|^3(|Y_ℓ| ≤ K)] + [|Z_ℓ|^3(|Z_ℓ| ≤ K)] ]. This completes the proof. Let g() = G̃_n(). As before, we have to control λ_2(g) and λ_3(g). We have the identities: ∂ g() /∂ x_ℓ = - 1/n^3/2(∂L̃_H_n()/∂ x_ℓR̃_n^2()), ∂^2 g()/∂ x^2_ℓ = 2/n^2( ∂L̃_H_n()/∂ x_ℓR̃_n() ∂L̃_H_n()/∂ x_ℓR̃_n^2()), ∂^3 g() /∂ x^3_ℓ = - 6/n^5/2(∂L̃_H_n()/∂ x_ℓR̃_n() ∂L̃_H_n()/∂ x_ℓR̃_n() ∂L̃_H_n()/∂ x_ℓR̃_n^2()). Note that (H_n() ) = (1/√(N)∑_ℓ = 1^M x_ℓ Q_ℓ) = 1/√(N)∑_ℓ = 1^M x_ℓ (Q_ℓ). Hence ∂L̃_H_n()/∂ x_ℓ = 1/√(N)( (Q_ℓ)/r - 1 - Q_ℓ) = 1/√(N)[ 2I_r - J_r 0; 0 0 ], where we have assumed (after a relabeling of nodes if necessary) that Q_ℓ = [ J_r - I_r 0; 0 0 ]. Now |(∂L̃_H_n()/∂ x_ℓR̃_n^2())| ≤1/v^2∑_i,j| ( ∂L̃_H_n()/∂ x_ℓ)_ij| =1/v^2r^2/√(N), and a fortiori, ∂ g() /∂ x_ℓ_∞≤1/v^2r^2/n^3/2√(N). From (<ref>) we also deduce that ∂L̃_H_n()/∂ x_ℓ^2_F = r^2/N and ∂L̃_H_n()/∂ x_ℓ_ = max{2, r - 2}/√(N)≤r/√(N). Therefore (∂L̃_H_n()/∂ x_ℓR̃_n() ∂L̃_H_n()/∂ x_ℓR̃_n^2()) ≤∂L̃_H_n()/∂ x_ℓ^2_F·R̃_n()^3_≤r^2/N1/v^3 and so ∂^2 g()/∂ x^2_ℓ_∞≤2r^2/n^2Nv^3. Finally, (∂L̃_H_n()/∂ x_ℓR̃_n() ∂L̃_H_n()/∂ x_ℓR̃_n() ∂L̃_H_n()/∂ x_ℓR̃_n^2()) ≤∂L̃_H_n()/∂ x_ℓ_·∂L̃_H_n()/∂ x_ℓ_F^2 ·R̃_n()_^4 ≤r/√(N)·r^2/N·1/v^4 and therefore ∂^3 g() /∂ x^3_ℓ_∞≤6 r^3/n^5/2 N^3/2 v^4. We conclude that λ_2(g) ≤ 2max( v^-4, v^-3) r^4/n^2 N and λ_3(g) ≤ 6 max( v^-6, v^-9/2, v^-4) r^6/n^5/2 N^3/2. Hence |(G̃_n (𝐘)) - (G̃_n(𝐙))| ≤ 4 max(v^-3, v^-4) r^4/n^2 N∑_ℓ = 1^M [ [Y^2_ℓ(|Y_ℓ| > K)] + [Z^2_ℓ(|Z_ℓ| > K)] ] + 12 max(v^-6, v^-9/2, v^-4) r^6/n^5/2 N^3/2∑_ℓ = 1^M [ [|Y_ℓ|^3(|Y_ℓ| ≤ K)] + [|Z_ℓ|^3(|Z_ℓ| ≤ K)] ]. This completes the proof.
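The norm and trace computations in the proofs above reduce to a few exact identities for the blocks Q_ℓ, Q̂_ℓ and Q̃_ℓ. The short sketch below (sizes and hyperedges chosen arbitrarily, helper names ours) verifies them numerically; it is only a check of the algebra, not part of the argument.

```python
import numpy as np

n, r = 12, 4
e1, e2 = [0, 1, 2, 3], [2, 3, 4, 5]             # two hyperedges with overlap s = 2
s = len(set(e1) & set(e2))

def ind(e):
    u = np.zeros(n); u[e] = 1.0
    return u

u1, u2 = ind(e1), ind(e2)
Q  = lambda u: np.outer(u, u) - np.diag(u)      # adjacency block Q_l
Qh = lambda u: r * np.diag(u) - np.outer(u, u)  # block appearing in L_{H_n}
Qt = lambda u: 2 * np.diag(u) - np.outer(u, u)  # block appearing in tilde L_{H_n}

checks = [
    (np.trace(Q(u1) @ Q(u2)),   s**2 - s),                  # tr(Q Q') = s^2 - s
    (np.trace(Qh(u1) @ Qh(u2)), (r**2 - 2 * r) * s + s**2), # tr(Qhat Qhat')
    (np.trace(Qt(u1) @ Qt(u2)), s**2),                      # tr(Qtilde Qtilde') = s^2
    (np.linalg.norm(Qh(u1), 'fro')**2, r**2 * (r - 1)),     # Frobenius norms used above
    (np.linalg.norm(Qt(u1), 'fro')**2, r**2),
    (np.linalg.norm(Qh(u1), 2), r),                         # operator norms used above
    (np.linalg.norm(Qt(u1), 2), max(2, r - 2)),
]
for got, expected in checks:
    print(f"{got:8.3f}  vs  {expected}")
```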
http://arxiv.org/abs/2409.02786v1
20240904150151
Slope of Magnetic Field-Density Relation as An Indicator of Magnetic Dominance
[ "Mengke Zhao", "Guang-Xing Li", "Keping Qiu" ]
astro-ph.GA
[ "astro-ph.GA", "physics.comp-ph", "physics.plasm-ph" ]
0000-0003-0596-6608]Mengke Zhao School of Astronomy and Space Science, Nanjing University, 163 Xianlin Avenue, Nanjing 210023, Jiangsu, People’s Republic of China 0000-0003-3144-1952]Guang-Xing Li South-Western Institute for Astronomy Research, Yunnan University, Kunming 650091, People’s Republic of China 0000-0002-5093-5088]Keping Qiu School of Astronomy and Space Science, Nanjing University, 163 Xianlin Avenue, Nanjing 210023, Jiangsu, People’s Republic of China Key Laboratory of Modern Astronomy and Astrophysics (Nanjing University), Ministry of Education, Nanjing 210023, Jiangsu, People’s Republic of China § ABSTRACT The electromagnetic field is a fundamental force in nature that regulates the formation of stars in the universe. Despite decades of efforts, a reliable assessment of the importance of the magnetic fields in star formation relations remains missing. In star-formation research, our acknowledgment of the importance of magnetic field is best summarized by the B-ρ relation log B(ρ) /G = { -5, if ρ ≲ 10^-20 g cm^-3 2/3· logρ + logρ_0, if ρ ≳ 10^-20 g cm^-3 . whose interpretation remains controversial <cit.>. The relation is either interpreted as proof of the importance of a magnetic field in the collapse <cit.>, or the result of self-similar collapse where the role of the magnetic is secondary to gravity <cit.>. Using simulations, we find a fundamental relation, M_ A-k_B-ρ(slope of B-ρ relation) relation: M_ A/ M_ A,c = k_B-ρ^ K≈ M_ A/7.5≈ k_B-ρ^1.7± 0.15. This fundamental B-ρ-slope relation enables one to measure the Alfvénic Mach number, a direct indicator of the importance of the magnetic field, using the distribution of data in the B-ρ plane. It allows us to drive the following empirical B-ρ relation B/B_c = exp((γ/ K)^-1( ρ/ρ_c)^γ/ K) ≈B/10^-6.3 G≈ exp(9 (ρ/10^-16.1 g cm^-3)^0.11) , which offers an excellent fit to the Cruther et al. data, where we assume M_ A-ρ relation (M_ A/ M_ A,c = (ρ/ρ_c)^γ≈ M_ A/7.5≈(ρ/10^-16.1 g cm^-3)^ 0.19). The foundational M_ A- k_B-ρ relation provides an independent way to measure the importance of magnetic field against the kinematic motion using multiple magnetic field measurements. Our approach offers a new interpretation of the <cit.>, where a gradual decrease in the importance of B at higher densities is implied. § BACKGROUND The magnetic field is a fundamental force that plays a crucial role in the evolution of interstellar gas and star formation <cit.> through magnetic force and it also changes the initial condition <cit.> by affecting the transport of cosmic rays. Despite decades of research, constraints on the importance of magnetic fields remain sparse. There are two approaches. The first one involves the measurement of dimension numbers which reflects the importance of the magnetic fields. One example is the mass-to-flux ratio, which is the ratio between mass and magnetic flux <cit.>. The second approach is to derive scaling relations between the magnetic field and density<cit.>, which serve as bridges between theory and observations. The empirical magnetic field density relation (B-ρ relation) is an empirical relation describing the magnetic field strength in the interstellar constructed using a sample of regions with Zeeman observations <cit.>. The relation can be described using different functions, such as log B(ρ) /G = { -5, if ρ ≲ 10^-20 g cm^-3 2/3· logρ + logρ_0, if ρ ≳ 10^-20 g cm^-3 . The classical B-ρ relation is an empirical relation, which does not explain how the B-field evolves with density growing. 
Existing models explain part of the classical B-ρ relation, with the 2/3 power-law exponent being attributed to gravitational collapse during star formation. This exponent of the B-ρ relation in star-forming regions can be linked to the density profile of the region, where B∝ρ^1/2 is related to a density profile of ρ∼ r^-2 <cit.>. The B-ρ relation is also affected transiently by the galactic magnetic field morphology <cit.>. The magnetic field-density relation, although widely cited, is non-trivial to interpret because both the magnetic field and the gravitational force are long-range forces for which the scale information is critical <cit.>. For example, the energy balance of a cloud reads B^2 r^3 ≈ G ρ^2 r^5, where the scale information is an essential part. To address the energy balance, one should perform joint analyses of the magnetic field-density relation and the density-scale relation <cit.>. A B-ρ relation derived from several observations targeting a single region can provide crucial insight into the importance of the B-field <cit.>. When targeting individual regions, one eliminates the uncertainties in the scale, so a B-ρ relation constructed in this way better reflects the interplay between turbulence, gravity, and the B-field. The scope of this paper is to offer a direct interpretation of the slope of the B-ρ relation as an indicator of the importance of the magnetic field. Compared to the mass-to-flux ratio, which is an indirect diagnostic and can be hard to measure, the Alfvén Mach number is a direct indicator of this importance and is directly measurable using the slope. Our approach also allows for re-interpreting the <cit.> relation. Using numerical simulations, we establish a fundamental link between the slope, d(log B)/d(logρ), and the importance of the magnetic field measured in terms of the Alfvén Mach number M_ A. This allows us to evaluate the importance of the magnetic field directly from observations and to derive a fundamental B-ρ relation. § ALFVÉNIC MACH NUMBER & FUNDAMENTAL B-Ρ RELATION §.§ Slope of B-ρ Relation: An Indicator of M_ A To establish the relation between the B-ρ slope and M_ A, we select a numerical magnetohydrodynamic (MHD) simulation <cit.>. The initial conditions are generated by a PPML code without self-gravity <cit.>, after which the main simulation was performed using the AMR code Enzo <cit.>, where self-gravity is included. In both simulations, the equation of state is isothermal. The relatively short simulation time (0.8 Myr) means the effect of self-gravity is only noticeable in the regions of the highest density [The simulations were terminated at around 0.76 Myr. The density above which self-gravity can have an effect is around 2.6×10^-20 g cm^-3 (estimated using ρ_* = 1/(t_*^2· G), where t_* is the time at which the simulation terminates), which is far above the mean density (3.8×10^-21 g cm^-3) of the simulations.]. The observed B-ρ relation thus mostly reflects the interplay between the magnetic field and the turbulence cascade, which is the main focus of this work. We choose three simulations with different initial degrees of magnetization, β_0 = 8π c_s^2 ρ_0/B_0^2 = 0.2, 2, 20. The slopes of the B-ρ relation can be derived after some averaging (Method <ref>). The Alfvén Mach number M_ A in each simulation can be calculated from the ratio between the total kinetic energy E_ k and the total magnetic energy E_B: M_ A = √(4πρ)σ_v/B = (0.5ρσ_v^2/(B^2/8π))^1/2 = (∑ e_ k/∑ e_B)^1/2 , where the M_ A in the strong, medium, and weak magnetic field states is estimated as 0.31, 0.49, and 1.29, respectively.
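As an illustration of this measurement, here is a minimal Python sketch of estimating a global M_ A from gridded simulation data via the kinetic-to-magnetic energy ratio (array names are hypothetical stand-ins for the Enzo cubes; this uses the standard definition above, and the published values may differ slightly depending on how the averages are taken):

import numpy as np

def global_alfven_mach(rho, sigma_v, b_field):
    """Volume-summed Alfven Mach number estimate from per-cell quantities.

    rho      : gas density per cell [g cm^-3]
    sigma_v  : velocity dispersion per cell [cm s^-1]
    b_field  : magnetic field strength per cell [G]
    """
    e_kin = 0.5 * rho * sigma_v**2          # kinetic energy density per cell
    e_mag = b_field**2 / (8.0 * np.pi)      # magnetic energy density per cell
    return np.sqrt(e_kin.sum() / e_mag.sum())

# toy example with synthetic fields standing in for a simulation snapshot
rng = np.random.default_rng(0)
rho = 10 ** rng.normal(-21.0, 0.5, size=(32, 32, 32))
sigma_v = np.full_like(rho, 1.0e5)          # ~1 km/s
b = np.full_like(rho, 1.0e-5)               # ~10 microgauss
print(global_alfven_mach(rho, sigma_v, b))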
The Alfvénic Mach numbers M_ A are measured at the time of the snapshots (Method <ref>), which differs from the Alfvénic Mach numbers inferred from the initial conditions due to the dissipation of turbulence and the amplification of the magnetic field. We then plot the Mach number against the measured slope of the B-ρ relation (see Fig. <ref>) and reveal a surprisingly simple relation between the slope of the B-ρ relation (k_B-ρ = d (log_10 B) / d (log_10 ρ)) and the Alfvén Mach number M_ A, M_ A/ M_ A,c = k_B-ρ^ K , where K = 1.7±0.15 and M_ A,c is the characteristic M_ A (∼ 7.5±1.3, see Fig. <ref>). The degree of magnetization quantified using M_ A thus determines the power-law exponent between density and magnetic field. The M_ A- k_B-ρ relation could be a fundamental relation in nature. An intuitive explanation for the correlation between a stronger magnetic field and a shallower magnetic field-density relation is that a strong field can regulate the way collapse occurs, such that matter accumulates along the field lines, weakening the correlation between B and ρ. §.§ Determining the Importance of B-field using the M_ A- k_B-ρ relation M_ A is a fundamental physical quantity that directly characterizes the importance of the magnetic field. The fact that this quantity is directly related to the slope of the B-ρ relation provides a new way to study the importance of the magnetic field (see Sect. <ref>). However, to construct such single-region B-ρ relations, we require multi-scale, multi-density observations towards single regions (Methods <ref>). One case is the central region of NGC 6334, where the magnetic field strength has been measured at multiple scales. Another case is the outer region of the Taurus molecular cloud, where multiple constraints on the magnetic field strength using Zeeman splitting <cit.> are available. We reveal vastly different roles of the magnetic field (see Fig. <ref>) by applying our results to both clouds. Toward the Taurus outer region, using Zeeman observations <cit.>, we estimate a M_ A of 0.8, indicative of a relatively strong (dynamically significant) magnetic field. Towards the NGC 6334 center region, the M_ A is estimated as 3.0, implying a dynamically weak magnetic field. Our assessment of the importance of the magnetic field is consistent with the behavior of these regions as reported in the literature <cit.>: The strong magnetic field observed towards the outer region of the Taurus molecular cloud, where M_ A (∼ 0.4), indicates that the B-field is strong enough to dominate the gas evolution. This is consistent with conclusions from previous studies <cit.>. In the Taurus outer region, striations have been observed and are taken as signs of the dominance of MHD waves <cit.> – a phenomenon that occurs only if the B-field is strong enough. Towards the dense part that our observations target, it is found that the denser gas has lost a significant amount of magnetic flux as compared to the diffuse envelope <cit.>; this critical transition agrees well with our estimate, where M_ A∝√(E_ k / E_B). Towards the central region of NGC 6334, we find that the kinetic energy is far above the magnetic energy (M_ A≈1.6), i.e. a dynamically insignificant magnetic field. This is consistent with NGC 6334 being one of the most active star-forming regions with a high fraction of dense gas <cit.>. Using the fundamental M_ A- k_B-ρ relation (Eq. <ref>), we can thus use the slope of the B-ρ relation to estimate the Alfvén Mach number M_ A, which determines the importance of the B-field.
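A minimal numerical illustration of this step in Python (the constants M_ A,c≈7.5 and K≈1.7 are the fitted values above; the slope values are the ones measured for the three simulations in the Method section):

M_A_C = 7.5      # characteristic Alfven Mach number from the fit
K_EXP = 1.7      # exponent of the M_A - k_{B-rho} relation

def mach_from_slope(k_b_rho):
    """Infer the Alfven Mach number from the slope of a single-region B-rho relation."""
    return M_A_C * k_b_rho ** K_EXP

# slopes measured for the strong, medium, and weak field simulations
for slope in (0.15, 0.21, 0.35):
    print(f"k = {slope:.2f}  ->  M_A ~ {mach_from_slope(slope):.2f}")

This approximately recovers the simulation values M_ A = 0.31, 0.49, 1.29 quoted above, to within the stated uncertainties on K and M_ A,c.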
§.§ Fundamental B-ρ relation In the past, a convenient way to study the magnetic field was to plot the field strength against the gas density, forming the magnetic field-density relation <cit.>. This multi-region magnetic field-density relation is constructed from observations of different regions which have different sizes and geometries. The plural nature of this multi-region magnetic field-density relation makes it not particularly useful in estimating M_ A. However, we can decipher this relation using a minimum of assumptions. We start from the fundamental M_ A- k_B-ρ relation (Eq. <ref>), and assume another relation between the Alfvén Mach number M_ A and the density ρ as a power law: M_ A/ M_ A,c = (ρ/ρ_c)^γ . The fundamental B-ρ relation can be derived by combining the fundamental M_ A-k_B-ρ relation (Eq. <ref>) with the empirical M_ A-ρ relation (Eq. <ref>): B/B_c = exp((γ/ K)^-1( ρ/ρ_c)^γ/ K) , and by fitting Eq. <ref> to the observational data from <cit.>, assuming K = 1.7±0.15 (Sec. <ref>), we find γ = 0.19 ± 0.07, log_10ρ_ c / (g cm^-3) = -16.09 ± 1.02, and log_10 B_ c/ (G) = -6.29 ± 0.65. This relation (Eq. <ref>) passes through all the Zeeman observations and the three MHD simulations and agrees with the classical B-ρ relation <cit.>. In the diffuse regime, the B-field grows slowly with increasing density. In the high-density regime (ρ ∼ 10^-18 g cm^-3), the slope of the B-ρ relation rises up to 2/3, which is caused by star formation <cit.>. Our fitting implies M_ A/ M_ A,c = (ρ/ρ_c)^0.19 ± 0.07 , which is a general relation connecting the degrees of magnetization of clouds in the Milky Way with the density. At low density (10^-22 g cm^-3), the magnetic field is non-negligible. As the density increases, the importance of the magnetic field gradually diminishes, and the high-density regions are mostly kinematically dominated. This flat M_ A-ρ relation (M_ A ∝ ρ^0.19 ± 0.07) also agrees with actual observations (see Fig. 2 of <cit.>). This gradual decrease in the importance of the magnetic field is consistent with observational data. At the low-density end, the outer regions of the Taurus <cit.> and Musca <cit.> molecular clouds show striations (signatures of MHD waves and thus of a strong magnetic field state), indicative of a strong, dynamically important magnetic field. At intermediate densities, the increase of the gas density can be accompanied by an increase of the mass-to-flux ratio and a gradual decrease of the importance of the magnetic field, as observed in the L1544 region <cit.>, and most high-density regions appear to be kinematically dominated, such as NGC 6334 <cit.>, Ser-emb 8 <cit.>, and W51 <cit.>. § CONCLUSION We find that the slope of the magnetic field-density relation, k_B-ρ = d (log_10 B)/ d (log_10ρ), is linked to the Alfvén Mach number M_ A using numerical simulation results, M_ A/ M_ A,c = k_B-ρ^ K≈ M_ A/7.5≈ k_B-ρ^ 1.7 ± 0.15, where K = 1.7±0.15 and M_ A,c is the characteristic M_ A (∼ 7.5±1.3). This fundamental relation results from the interplay between the turbulence cascade and the magnetic field in MHD turbulence. This fundamental relation allows us to estimate M_ A using the slope of the B-ρ relation. We apply this technique to several regions and find that the estimated M_ A values exhibit a clear correlation with the observed star formation activities. We find that the empirical B-ρ relation can be explained using our fundamental relation, together with a relation between M_ A and the gas density.
The M_ A-ρ relation reads M_ A/ M_ A,c = (ρ/ρ_c)^γ≈ M_ A/7.5≈(ρ/10^-16.1 g cm^-3)^ 0.19± 0.07 , and the B-ρ relation takes the following form B/B_c = exp((γ/ K)^-1( ρ/ρ_c)^γ/ K) ≈B/10^-6.3 G≈ exp(9 (ρ/10^-16.1 g cm^-3)^0.11) , which provides a good fit to existing observational data. This fitting result reveals a general trend where a general decrease in the importance of the magnetic field accompanies the increase in the gas density. The fundamental M_ A- k_B-ρ relation provides an independent way to estimate the importance of the magnetic field, and it can be widely applied to further observations. The decreasing importance of magnetic field at higher density appears consistent with most of the existing observations and will be tested against future results. Method § SIMULATION The numerical simulation of molecular cloud selected in this work is identified within the three β_0 simulations applied by the constrained transport MHD option in Enzo (MHDCT) code <cit.>. The simulation conducted in this study analyzed the impact of self-gravity and magnetic fields on supersonic turbulence in isothermal molecular clouds, using high-resolution simulations and adaptive mesh refinement techniques. which detail in shown in <cit.>. The Enzo simulation with three dimensionless parameters provides us with massive information on physical processes: M_s = v_ rms/c_s = 9 α_vir = 5 v_ rms^2 / 3 G ρ_0 L_0^2 = 1 β_0 = 8π c_s^2 ρ_0/B_0^2 = 0.2, 2, 20 where v_rms is the rms velocity fluctuation, c_s is sonic speed, ρ_0 is mean density, and B_0 is the mean magnetic field strength. With the same sonic Mach number M_ s and viral parameter α_vir in initial conditions, the difference only exists in initial magnetic pressure β-0 with various B-field strength, The β_0 as 0.2, 2, and 20 present the strong B-field state, medium B-field, and weak B-field state. The clouds are in three different evolutionary stages started with self-gravity existing at 0.6t_ ff(∼ 0.76 Myr), where t_ ff represents 1.1((n_H / 10^3)^-1/2 Myr <cit.>. Due to the short timescale of evolution, the gravitational collapse could barely affect the B-ρ relation in the simulation The size of these molecular clouds is around 4.6^3 pc (256^3 pixels), with one-pixel size of approximately 0.018 pc <cit.>. Fig. <ref> show the distribution of B-field and density with three β in the timescale of 0.6 t_ ff (∼ 0.76 Myr). In this Enzo simulation, the thermal energy is located at the low state (M_s = 9). Their timescale of evolution stage is 0.6 t_ ff, around 0.76 Myr, which the timescale starts from when interstellar media collapse. Due to the short evolutionary time (∼ 3.85 × 10^-21 g cm^-3, or 821 cm^-3), the gravitational energy less affects the system in simulation. With the similar mean density ρ_0 in these simulations, the different β_0 (∼0.2, 2, 20) present the different magnetic field states, strong B-field, stronger B-field (weaker than strong B-field but stronger than weak B-field), and weak B-field. The energy ratio between kinetic energy E_ k and magnetic energy E_ B E_ k/E_B in the evolutionary timescale of 0.6 t_ff (∼ 0.76 Myr) is calculated: E_ k/E_ B = 1/2ρ v_ rms^2/B_0^2/8π The E_ k/E_B with stronger, medium, and weak B-field states are 0.09, 0.24, and 3.37, respectively. 
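For reference, the fitted M_ A-ρ and B-ρ relations quoted in the Conclusion above can be evaluated directly from the fitted constants; a minimal Python sketch (constants rounded from the fit in this work):

import numpy as np

GAMMA = 0.19                 # exponent of the M_A - rho relation
K_EXP = 1.7                  # exponent of the M_A - k_{B-rho} relation
RHO_C = 10 ** -16.1          # g cm^-3, characteristic density
B_C = 10 ** -6.3             # G, characteristic field strength
M_A_C = 7.5                  # characteristic Alfven Mach number

def mach_of_rho(rho):
    """Empirical M_A(rho): M_A / M_A_c = (rho / rho_c)^gamma."""
    return M_A_C * (rho / RHO_C) ** GAMMA

def b_of_rho(rho):
    """Empirical B(rho): B / B_c = exp((K/gamma) * (rho / rho_c)^(gamma/K))."""
    return B_C * np.exp((K_EXP / GAMMA) * (rho / RHO_C) ** (GAMMA / K_EXP))

rho_grid = np.logspace(-23, -16, 8)          # g cm^-3
for rho, ma, b in zip(rho_grid, mach_of_rho(rho_grid), b_of_rho(rho_grid)):
    print(f"rho = {rho:.1e}  M_A = {ma:.2f}  B = {b:.2e} G")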
§ RE-SCALING METHOD The values obtained from numerical simulations are dimensionless, and the numerical values of the physical quantities are obtained during the interpretation phase, where the pre-factors are inserted such that the dimensionless parameters, such as the Mach number, stay unchanged. When applying the results of numerical simulations to reality, one is allowed to change the numerical values of the pre-factors, as long as the dimensionless controlling parameters stay unchanged. Each simulation has three dimensionless controlling parameters: the Mach number, the plasma β, and the virial parameter. We argue that the virial parameter has little influence on the slope of the magnetic field-density relation, because gravity is not important in determining the relation. This assertion is supported by the fact that the magnetic field-density relation does not evolve significantly during the collapse phase. We thus rescale our simulations using the following transform: B' = λ_B · B, ρ' = λ_ρ·ρ , where λ_B and λ_ρ are the rescaling factors, B and ρ are the B-field and density before rescaling, and B' and ρ' are the B-field and density after rescaling. In these simulations, the initial M_ A,0 is set by the plasma β_0, M_ A,0 = (ρ_0 v_ rms^2/B_0^2)^0.5 = (81 c_s^2 ρ_0 /B_0^2)^0.5 = (81 β_0/8π)^0.5. As the simulations evolve, an unchanged M_ A means that the plasma β is also unchanged. We thus require that this rescaling ensures that the Alfvénic Mach number M_ A of the simulations does not change. The density after rescaling, ρ', obeys the M_ A-ρ relation (see Eq. <ref>): ρ' = ( M_ A/ M_ A,c)^1/γ·ρ_c , where γ is the slope of the empirical M_ A-ρ relation and M_ A,c is the characteristic M_ A (see Eq. <ref>). The rescaling factor of the density is calculated as: λ_ρ = ρ'/ρ = ( M_ A/ M_ A,c)^1/γ·ρ_c/ρ , where ρ is the density before rescaling. The B-field strength after rescaling, B', obeys the B-ρ relation (see Eq. <ref>) and is calculated from the rescaled density ρ': B' = exp((γ/ K)^-1( ρ'/ρ_c)^γ/ K) · B_c , where the parameters B_c, ρ_c, γ, and K are the same as in Eq. <ref>. The B-field rescaling factor λ_B is defined as: λ_B = B'/B = exp((γ/ K)^-1( ρ'/ρ_c)^γ/ K) ·B_c/B , where B is the B-field before rescaling. The parameters before and after rescaling are shown in Tab. <ref>, and the rescaling factors are shown in Tab. <ref>. § FITTING THE SLOPE OF THE B-Ρ DISTRIBUTION Fig. <ref> shows the distribution between the B-field and the density in the simulations with M_ A = 0.31, 0.49, 1.29, respectively. Fitting the main structure of the B-ρ distribution over the main density range of each simulation in log-log space, the slope of the B-ρ relation, d (log_10 B)/ d (log_10ρ), in the simulations ( M_ A = 0.31, 0.49, 1.29) is derived as 0.15, 0.21, 0.35, respectively. § CALCULATING THE SLOPE OF THE B-Ρ RELATION IN OBSERVATIONS The slope of the B-ρ relation can be calculated from observations in two ways: i) a single source observed at various scales; ii) a single source with sub-regions of various densities. §.§ Single source with various scales The B-field B and density ρ of a single source are observed at various scales, as for NGC 6334 <cit.> (d (log_10 B)/ d (log_10ρ) = 0.41). §.§ Single source with various densities The region of a single source has sub-regions with various densities. For example, the diffuse region (N(H_2) ≲ 2× 10^22 cm^-2) of Taurus can host MHD waves <cit.>. As Fig. <ref> shows, the Taurus outer region has sub-areas with various densities, probed by 3C132, 3C133, and L1544 <cit.>.
The Zeeman source 3C132 is located in the MHD-wave region, while 3C133 and L1544 are located at the boundary of the MHD-wave region. <cit.> found that L1544 marks the transition point between the strong and weak magnetic field regimes. Fig. <ref> shows the distribution of the mean density and mean B-field of the three sources, whose mean ρ and B are taken from <cit.>. The slope of the B-ρ relation in the Taurus outer region is fitted as 0.17. § ENERGY SPECTRUM & VELOCITY DISPERSION The energy spectrum of the density ρ in the three simulations shows that the density structure is concentrated around the 6-pixel scale (see Fig. <ref>). We use 8 pixels as the sub-block size to calculate the velocity dispersion σ_v at each pixel of the simulation data; this is close to the 6-pixel scale and divides the full extent of the simulation data (256 pixels) without remainder: v_ijk, m = ∑ v_ijk·ρ/∑ρ , σ_v = (∑ (v_ijk - v_ijk, m)^2)^1/2 , where i, j, k index the x, y, and z axes. The velocity dispersion can be used to compute the kinetic energy density (e_ k = 1/2ρσ_v^2).
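A minimal Python sketch of this sub-block velocity-dispersion measurement (the cube names are placeholders for the simulation arrays; the density-weighted mean and the unnormalized summed squared deviations follow the expressions above):

import numpy as np

def block_velocity_dispersion(v, rho, block=8):
    """Density-weighted velocity dispersion in non-overlapping sub-blocks.

    v    : one velocity component on an (N, N, N) grid
    rho  : density on the same grid
    block: sub-block size in pixels (8 divides the 256-pixel cubes without remainder)
    """
    n = v.shape[0] // block
    v_b = v.reshape(n, block, n, block, n, block)
    r_b = rho.reshape(n, block, n, block, n, block)
    axes = (1, 3, 5)
    v_mean = (v_b * r_b).sum(axis=axes) / r_b.sum(axis=axes)   # weighted mean per block
    diff = v_b - v_mean[:, None, :, None, :, None]
    return np.sqrt((diff ** 2).sum(axis=axes))                 # dispersion per block

# smaller 64^3 toy grid standing in for a 256^3 snapshot component
rng = np.random.default_rng(1)
v = rng.normal(0.0, 1.0e5, size=(64, 64, 64)).astype(np.float32)
rho = np.full_like(v, 3.8e-21)
print(block_velocity_dispersion(v, rho).shape)   # (8, 8, 8) blocks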
http://arxiv.org/abs/2409.03166v1
20240905015154
Continual Skill and Task Learning via Dialogue
[ "Weiwei Gu", "Suresh Kondepudi", "Lixiao Huang", "Nakul Gopalan" ]
cs.RO
[ "cs.RO", "cs.AI", "cs.CL" ]
Bi-capacity Choquet Integral for Sensor Fusion with Label Uncertainty This material is based upon work supported by the National Science Foundation under Grant IIS-2153171-CRII: III: Explainable Multi-Source Data Integration with Uncertainty. Hersh Vakharia University of Michigan Ann Arbor, MI [email protected] Xiaoxiao Du University of Michigan Ann Arbor, MI [email protected] Accepted XXX. Received YYY; in original form ZZZ ======================================================================================================================================================================================================================================================== § ABSTRACT Continual and interactive robot learning is a challenging problem as the robot is present with human users who expect the robot to learn novel skills to solve novel tasks perpetually with sample efficiency. In this work we present a framework for robots to query and learn visuo-motor robot skills and task relevant information via natural language dialog interactions with human users. Previous approaches either focus on improving the performance of instruction following agents, or passively learn novel skills or concepts. Instead, we used dialog combined with a language-skill grounding embedding to query or confirm skills and/or tasks requested by a user. To achieve this goal, we developed and integrated three different components for our agent. Firstly, we propose a novel visual-motor control policy ACT with Low Rank Adaptation (ACT-LoRA), which enables the existing state-of-the-art Action Chunking Transformer <cit.> model to perform few-shot continual learning. Secondly, we develop an alignment model that projects demonstrations across skill embodiments into a shared embedding allowing us to know when to ask questions and/or demonstrations from users. Finally, we integrated an existing Large Language Model (LLM) to interact with a human user to perform grounded interactive continual skill learning to solve a task. Our ACT-LoRA model learns novel fine-tuned skills with a 100% accuracy when trained with only five demonstrations for a novel skill while still maintaining a 74.75% accuracy on pre-trained skills in the RLBench dataset where other models fall significantly short. We also performed a human-subjects study with 8 subjects to demonstrate the continual learning capabilities of our combined framework. We achieve a success rate of 75% in the task of sandwich making with the real robot learning from participant data demonstrating that robots can learn novel skills or task knowledge from dialogue with non-expert users using our approach. § INTRODUCTION define natural interaction as an interaction between a human and a robot that resembles the way of natural communication between human beings such as dialogues, gestures, etc. without requiring the human to have prior expertise in robotics. The capability of learning tasks and acquiring new skills from natural interactions is desirable for robots as they need to perform unique tasks for different users. One direction of this interaction channel is well studied as instruction following <cit.>, where the robot performs the tasks requested by the human via natural language. Our work focuses on the other side of this communication channel, where the robot starts the conversation with human when it needs their help. 
This reverse direction of communication plays an important role for robots to learn with non-expert human users as it enables robots to convey their lack of task knowledge to perform tasks in a way that non-expert users can understand. Furthermore, our framework can leverage the feedback from users and learn to perform the task. Human-Robot interaction via language is a well studied problem <cit.>. Robot agents have been able to interpret language instructions from the human users, and perform visual-motor policies to complete tasks <cit.>. These methods rely on the emergent behaviors of large models, and do not continually learn new skills or add to their task or skill knowledge. To address this issue, some works have proposed life-long learning for robot agents <cit.>. Some recent works learn neural visuo-motor skills in a continual setting <cit.>. However, these approaches are passive and do not query the user for novel skills that the agent might need to complete given tasks. We propose a framework that utilizes dialogue to enable the robot agent to express its need for new skill or task information actively. When encountering a novel task, our robot agent starts a conversation with the human user to learn to execute the task. Throughout the interaction, the robot agent specifies the help that it needs from the human user via natural language, such as a human enacting the skill to find a feasible skill within the existing set of skills to perform the task or requesting multiple robot demonstrations to learn a completely novel skill for this specific task. Our contributions are as follows: * We compare ACT-LoRA against the baseline ACT model on few-shot continual learning on RLBench dataset. Our model demonstrates its strong adaptability by achieving 100% success rate on the tasks that it finetuned on with only 5 demonstrations. Furthermore, it achieves an average success rate of 74.75% on the tasks that it is pre-trained on, showing that our policy is effective in preventing catastrophic forgetting. * We present a model that can determine whether a pair of demonstrations of different embodiments, in our case human enactment of a skill or a robot demonstration, are performing the same task. Our alignment model achieves an overall accuracy of 91.4% on the RH20T dataset on aligning demonstrations from humans and robot. * Finally, we conduct a novel two-phase human-subjects experiment with eight participants to show that our system is able to learn to reason over and perform novel skills by interacting with non-expert human users for the task of making a sandwich using dialog. We find a success rate of 75% in making sandwiches for participants where the participants taught a skill in a continual fashion to our robot enabling it to make a sandwich. § PROBLEM FORMULATION We formulate a task solving problem where both the robot and the human agent can take actions on their turns. There is a joint physical state s of the world shared by both the human and the robot. In each turn, n, either the human or the robot acts, one after the other. Each turn can take longer than one time step, t, and continues until the robot or the human indicates a turn to be over. The actions can be physical actions represented by a_h, and a_r for the human and the robot actions respectively, or speech acts l_h and l_r for the human and the robot speech respectively for the human-robot grounded dialog. The problem has an initial state s_0 and a task θ specified by the human using a speech act l_h^0. 
Each of these actions updates the joint physical state s of the world, and internal dialog state s^d of the robot. The dialog state is hidden from the human user, but the human receives speech observations for the same. Over multiple turns and actions taken by the human and the robot these physical and robot states update over time. The objective of this turn taking problem is to complete the task θ. We measure the task completion rates for this interaction problem. Moreover, in our specific instance of the problem the human also teaches behaviors to the robot, we also measure the success of the individual learned behaviors within the task in simulation. § METHODS The goal of our framework is a robot agent that 1) actively generalizes its known skills to novel tasks when it is applicable; 2) queries the user for unknown skills; and 3) learns new skills with only a few instances. When encountered a task θ, the robot agent first searches for a learned skill using semantic representation, which comes from the language embedding of the linguistic description of the skills and tasks. This is a challenging question as the robot needs to know what it does not know. This work is performed by our queryable skill library. If the agent fails to find any usable skill for the task based on the semantic information, it attempts to search for a learned skill using skill representations, which come from human demonstrations and robot trajectories. We developed a novel sample efficient continual skill learning approach ACT-LoRA for this task. The robot agent can directly execute the task τ whenever it finds a learned skill that aligns with τ in either the semantic space or the skill space, and learns a novel skill to execute the task otherwise. During this whole process, the robot agent needs to interact with the human user based on information from our queryable skill library for which we use an LLM. §.§ How to Know What the Robot Does not Know a.k.a. a Queryable Skill Library The skill library consists of four parts - a text encoder E_text; a human demonstration encoder E_human; a robot trajectory encoder E_robot; and a set of learned skills 𝒮 = {S_1,…, S_k}. Each learned skill S_i is a tuple of a linguistic description and a robot trajectory, denoted as S_i = (l_i, τ_i). The linguistic representation r^l_i and skill representation r^s_i of skill S_i can be obtained by encoding l_i and τ_i with the corresponding encoder, denoted as r^l_i = E_text(l_i), and r^s_i = E_robot(τ_i) respectively. Finding a usable skill from the skill library. The skill library is provided two inputs to find an appropriate skill to execute the task θ, the linguistic description l_θ and a human demonstration d_θ for the task. We obtain the linguistic or semantic representation and skill representation for the task by encoding the linguistic description and the human demonstration with the corresponding encoders. We then compute two sets of similarity scores between the task θ and any known skill S_i for both the linguistic representation and the skill representation. The state machine within the interaction module of the agent decides the skill to use to execute the task θ based on these scores. We use a pre-trained CLIP as a text encoder E_text. For E_robot and E_human, we first extract features from each frame using a Resnet-18 <cit.>, and then encode the sequence using a transformer encoder <cit.>. 
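To make the lookup concrete, here is a minimal PyTorch-style sketch (with hypothetical names; the embeddings stand in for the outputs of E_text, E_human, and E_robot described above, and the threshold value is illustrative) of matching a task query against the stored skills by cosine similarity:

import torch
import torch.nn.functional as F

def find_matching_skill(task_embedding, skill_embeddings, threshold=0.95):
    """Return (best_index, best_score) if a stored skill clears the threshold, else (None, best_score).

    task_embedding  : (d,) embedding of the query (from E_text or E_human)
    skill_embeddings: (k, d) embeddings of the stored skills (from E_text or E_robot)
    """
    scores = F.cosine_similarity(task_embedding.unsqueeze(0), skill_embeddings, dim=-1)
    best_score, best_index = scores.max(dim=0)
    if best_score.item() >= threshold:
        return best_index.item(), best_score.item()
    return None, best_score.item()   # no match -> the agent queries the user for help

# toy usage with random embeddings standing in for encoder outputs
torch.manual_seed(0)
library = F.normalize(torch.randn(5, 512), dim=-1)
query = library[2] + 0.01 * torch.randn(512)
print(find_matching_skill(query, library))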
The robot trajectory encoder E_robot and the human demonstration encoder E_human are trained to encode the human demonstrations and robot trajectories into the same latent space. The two encoders are jointly trained with tuples (d, τ, y), where d denotes human demonstration videos, τ denotes robot trajectories, and y is the label of whether the human demonstration and the robot trajectory is in the same task. We use a cosine similarity loss to learn this embedding with a hyperparamter ϵ to act as a margin to declare a human demonstration to be the same skill as a skill the robot knows. More details about learning this embedding space are in the Appendix <ref>. §.§ Interaction Module using a Large Language Model (LLM) The dialog state s^d in our pipeline is maintained with an internal state machine. The state machine uses an LLM, ChatGPT 4 <cit.>, as the natural language generator to produce speech acts for the robot agent. This state machine with the LLM has two major functionalities. Firstly, it interacts with the human user to asks for demonstrations or explanations based on the checks from our queryable skill library. Secondly, the interaction module also interprets the user's language feedback to update the dialog state s^d. The interaction module is given the autonomy to continue the dialogue with the user until that it acquires the designated information for the agent. The module can also explain the dialog state s^d with language to the user explaining the robot's confusion. §.§ ACT-LoRA as Visual-motor Policy Combining Low-Rank Adaptor with Action Chunking.Adapter-based methods <cit.> have exhibited promising capabilities of light-weight and data-efficient fine-tuning of neural networks across various domains such as NLP <cit.>, and computer vision <cit.>. <cit.> extend Low-Rank Adaptor(LoRA) into robotics with TAIL, enabling a simulated robot to continually adapt to novel tasks without forgetting the old ones. Unfortunately in our experiments TAIL <cit.> fails to provide high precision control on the robot leading to a lot of failures in even short skills. On the other hand, Action Chunking Transformer(ACT) <cit.> is capable of performing fine-grained tasks with high precision, but cannot be directly used for continual learning due to catastrophic forgetting. Therefore, we introduce LoRA adaptor to the ACT model, obtaining both the precision from action chunking and the capability of continual learning from the LoRA adaptor. We want to point out that we are using TAIL <cit.> as the baseline in this work as it is the closest continual learning agent to our approach. We also want to point out that TAIL still requires our queryable skill library to function as a baseline. Continual Imitation Learning. Our policy needs to continually learn new skills from demonstrations throughout the agent's lifespan. The robot agent is initially equipped with K skills {S_1, …, S_K}. Whenever the robot agent encounters a task that requires a novel skill 𝒮_n, n>K, it needs to adapt its existing policy π to the novel skill without forgetting any of the existing skills S∈{S_1, …, S_n-1}. Provided a number of demonstration trajectories for each skill, the continual learning policy of the robot agent can then be optimized with a behavior cloning loss, which in this case we use L_1 loss for action chunks following <cit.>. On top of the policy of the vanilla ACT model π_ϕ, the LoRA adaptor introduce a small set of additional low-rank parameters ϕ_i for each skill S_i. 
During the pre-training phase, the additional parameters ϕ_1,…, ϕ_K for skills S_1, …, S_K are jointly trained with the model's parameter ϕ. When we are finetuning with a skill S_n, n>K, we freeze the model's original parameters ϕ, and only allow gradient updates to the parameters from the task-specific adaptor ϕ_n. Such finetuning strategy prevents the policy from catastrophic forgetting the skills that it already possessed when adapting to novel skills. §.§ Human-Subjects Experiment §.§.§ Robotics Domain for Sandwich Making Our human subjects' experiment was on a sandwich making robot domain where the robot does not know all the skills required to make a butter, cheese and lettuce sandwich. Specifically, the robot does not know how to slice cheese. The robotic setup includes a Franka FR3 Robot and three Realsense D435 cameras. We 3D printed tools needed to complete the tasks with planners capable of picking the tools of knife and spatula on demand. We use a 6D Spacemouse to collect data from human participants. Figure <ref> demonstrates our sandwich making domain. We used the sandwich-making task for two reasons. Firstly, the sandwich-making task includes a lot of contact-rich and dynamic sub-tasks, such as applying butter and slicing cheese. Secondly, sandwich-making is a multi-step process, allowing the robot agent and the participants to have multiple rounds of conversations. Fake food was used as our ingredients for environmental reasons. §.§.§ Study Design and Measures Participants interacted with the robot in two phases. During the first phase - the interaction phase, participants interact with the robot and teach the robot novel skills and task knowledge as they interact with it, this includes dialogue, human demonstration, and robot demonstration. In the second phase - the evaluation phase, participants request the robot to perform the same tasks and evaluate the performance of the robot agent. We needed a two-phase study because we wanted to collect data for skill learning in the first phase and then run a learned policy on the agent in the second phase. The participants came in for another session at least one day apart, allowing 5 hours of time to train novel skills using user demonstrations. This makes our study two separate 3 × 1 between-subjects experiment to measure our framework's ability to learn novel skills and task knowledge by interacting with non-expert human users. We do not compare any tasks between the two phases as they were performed on different days and their subjective metrics might be different depending on the subjects' memory of the experience. The objective metrics we used for the human-subjects experiment are as follows. We measured the overall success rate (SR) of completing the entire sandwich and the success rate for completing each independent sub-task. We make a distinction in the evaluation phase for skills that were taught by the participant vs pre-existing skills in our skill library. This demonstrates that we can add new skill without loss of performance to our existing skills. In the post-study survey, we administered the Godspeed Likability sub-scale <cit.>, System Usability Scale (SUS)<cit.>, and the NASA TLX <cit.>. §.§.§ Procedure The procedure of the study is as follows. Participants first filled out the consent form and a pre-study survey. Then, we handed out a general introduction of the experiment and administered the two phases sequentially. 
Before each phase, a demonstration video and the instructions for the corresponding phase were provided to the participant. The anonymized instruction manual and the link to these videos are provided in the Appendix and the associated webpage[https://sites.google.com/view/corl-24-dialog]. The interaction phase:- Here the participant requested the robot agent to make a type of sandwich. During the process, the robot agent asked the participant for task knowledge, human demonstrations, or robot demonstrations through dialogues. The participant answered task knowledge-relevant questions directly with their language responses and provided human demonstrations or robot demonstrations on request. We recorded all replies from the participants in audio and converted them to text using audio-to-text tools. At the end of the interaction phase, the participant was asked to fill out a survey to evaluate the experience with the robot. The three conditions that the participant observed here were: 1) Doing - the participant performed the skill that the robot does not know; 2) Learning - The participant provides a demonstration to the robot so it can learn the skill for the evaluation phase; 3) Dumb - The dialog state machine acts randomly and is not guaranteed to complete the sandwich. The participants observe all these conditions in random order. The evaluation phase:- The participant comes back and asks the robot agent to make the exact same sandwich as the one requested in the interaction phase. The participant watches the robot finish the task while helping in certain conditions, finally filling a survey to evaluate the robot's performance. The three conditions observed in this phase are - 1) Doing - the participant performed the skill that the robot does not know; 2) ACT-LoRA - the robot makes the whole sandwich based on the participants data from the interaction phase with our ACT-LoRA model; 3) TAIL - the robot makes the sandwich with participant data but with the TAIL <cit.> model. §.§.§ Hypotheses In the human-subjects study, we aim to verify the following hypotheses: Hypothesis 1 - Our framework allows the robot to learn skills continually with human data while performing better than the existing framework of TAIL. Hypothesis 2 - Our framework allows the robot to complete the task autonomously when compared to the existing baseline of TAIL. Hypothesis 3 - Our framework is preferred by the users when compared against a baseline of the robot asking for help and expecting the human to complete the unknown skill. § EXPERIMENTAL RESULTS 0.45 0.45! Model Pre-trained Skills(SR) Few-shot Skills(SR) ACT-LoRA 74.75 100.0 ACT 1.5 100.0 TAIL 0.25 5.0 Experimental results on RLBench simulator. Pre-trained skills(SR) measures the policies' average success rate on the 8 skills that policies are pre-trained on. Few-shot skills(SR) measures the policies' average success rate on the 6 new skills that they are finetuned on over 50 rollouts. Detailed results of each skill are in Appendix <ref>. In this section, we present three sets of experimental results. Firstly, we present the results of our policy on few-shot continual imitation learning in the simulated RLBench environment <cit.>. These experiment results show that our behavior cloning model is able to continually to learn novel skills with only few demonstrations and avoid catastrophic forgetting. 
Then, we evaluate our demonstration alignment model on a subset of the RH20T dataset <cit.>, and demonstrate that we are able to project demonstrations from different embodiments into the same latent space. Lastly, we present the results for the human subject study, which demonstrate that our framework can learn symbols and skills to execute tasks by interacting with real human users. §.§ Experiments on Continual Imitation Learning We evaluate our policy on few-shot continual imitation learning using the RLBench environment <cit.>. A total of 14 skills are chosen from the pre-defined skills of the environment, 8 for pre-training and 6 for continual training. We use 1000 demonstrations for each of the pre-training skills during the pre-training phase, and 5 demonstrations for each of the continual training skills in the continual training phase. The SoTA visual policy model ACT <cit.> and the SoTA continual policy learning model TAIL <cit.> were chosen as the baselines for comparison against our model. We evaluate all models 50 times on each of the 14 skills to measure the success rate. Our model learns novel skills with 100% accuracy while maintaining its pre-trained performance at 74.75%, demonstrating its suitability for continual learning. We observed TAIL <cit.> to fail in tasks which require precision, and ACT to fail to remember older skills. We evaluated these models with just one random seed due to computational challenges. §.§ Experiments with our alignment model Our alignment model is evaluated on a subset of the RH20T dataset <cit.>, which includes robot trajectories for a diverse range of tasks and their corresponding human demonstration videos. Our alignment model achieves 91.4% overall accuracy in distinguishing whether a pair of demonstrations are performing the same task. Detailed results are in Appendix <ref>. This high-performing alignment model allows our robot to ask demonstration queries in our user study. §.§ Human Subject Study Results We conducted an IRB-approved study with 8 participants and 6 pilot subjects. Only one of the subjects was female (12.5% of the user study). The age demographic of our users is 25.25 ± 3.059. The subjects spent 120 minutes in the interaction phase and then another 75 minutes in the evaluation phase. They were compensated with a $35 Amazon gift card. We present our objective success rate results in Tables <ref>,  <ref>. Our Hypothesis 1 is supported by Table 2, as our ACT-LoRA, when compared to TAIL <cit.>, learns the novel skills (100% vs 5%) and existing skills (74.75% vs 0.25%) at a higher success rate. This allows better few-shot acquisition of novel skills in a real robotics domain. We expected TAIL to perform better, but we noticed it had issues with long-horizon skills that require precision. Note that the doing model does not learn any new skills. Our Hypothesis 2 is supported by Table 2, demonstrating that our ACT-LoRA model completes the entire sandwich at a higher rate than TAIL <cit.>, the other learning-based approach. Notice that the Doing model has a similar completion rate: even though the human can help with some tasks, the robot might still fail at skills it has learned previously, such as grasping bread. We reject Hypothesis 3, as we did not notice any preferences in Workload, System Usability, or Likability between our learning agent using ACT-LoRA and the Doing agent that always asks for help from users.
While we thought that users would prefer an autonomous agent, subjects actually preferred helping the agent as humans can make a sandwich with trivial ease where as the robot performs the task at a much slower rate. § RELATED WORK Skill Discovery and Continual Learning. The area of visuo-motor continual learning is getting a lot of attention recently <cit.>. <cit.> discover new skills from segments of demonstrations by unsupervised incremental clustering. <cit.> learn the skill representation by aligning skills from different embodiments, and can re-compose the learned skills to complete a novel combination. <cit.> introduce task-specific adapters using low-rank adaptation techniques <cit.>, preventing the agent from forgetting the learned skills when learning the new skills. However, these frameworks assume the presence of the demonstrations for the new tasks, and only discover skills in a passive fashion. Our proposed framework actively reasons and requests the human users for the demonstrations of the unseen skills while performing the ones it knows. This reasoning is done in two stages: first the human enacts the behavior, once the robot has seen this behavior it decides if it can perform the enacted behavior or not. After this reasoning the robot can choose to source demonstrations from the human using a joystick. This is a more natural setup for a language enabled continual learning agent in the real world. Furthermore, our agent requires less than ten demonstrations from the user to discover the new task without forgetting any of the learned skills which is an improvement over existing passive continual learning methods <cit.>. Human-Robot Dialogue. Human-Robot dialog is a mature problem <cit.>. Traditional methods use statistical algorithms with a pre-defined grammar, such as semantic parsing <cit.>, to connect the semantics of the dialogue to the environment's perceptual inputs. On the other hand, recent advancements in natural language processing (NLP) have led to Large Language Models (LLMs) that process natural language in free form. Grounded with perceptive inputs from the environment, these LLMs have been used in robotics research generate executable plans <cit.>. Furthermore, <cit.> and <cit.> use LLMs to ask for human feedback for the robot agents demonstrating the importance of dialog. However, these approaches leverage planning with LLMs where as we are attempting to learn continuous visuo-motor skills on the robot by asking for help. Active Learning. Our work is related to active learning, where a learning agent actively improves its skills by asking a human for demonstrations <cit.>. Defining an appropriate metric that triggers the request for assistance or information gathering becomes the key research problem in this domain. <cit.> measure the semantic similarity between a newly introduced concept and the known concepts to ask for classifier labels. <cit.> train a confidence classifier conditioned on the current state of the agent, and request expert demonstrations when the confidence score does not meet a pre-defined threshold. <cit.> use the uncertainty of Gaussian Processes(GPs) as the metric to trigger the request for assistance. These existing methods reason over the semantic information in a task such as the goal condition or features of classifiers that identify the goal condition. We use a cosine distance metric to measure similarity for both the semantic information from language and the behavior information of a skill. 
§ LIMITATIONS We present an approach to teach skills to robots using techniques from active learning and continual learning while using language as a modality to query and reason over the skills known to the agent. We acknowledge that we need to conduct a wider user study with more subjects over a larger number of cooking tasks using our approach. The turn-taking in our framework is tightly controlled, and not dynamic. Our ACT-LoRA approach while being sample efficient has been observed to have issues with heterogeneous demonstrations. We also want to compare such continual learning approaches with pre-trained policy approaches such as RT <cit.> to scale up the policy learning approach while maintaining sample efficiency allowing for novice users to personalize skills for their robots. § CONCLUSION In conclusion, we present a novel framework for robot agents to learn task relevant knowledge and skills from interactions with human users. To the best of our knowledge this is the first work to demonstrate skill learning while querying a user with dialog to express doubt. By maintaining metrics in semantic and skill similarity, our agent can actively interact with human users and adapt its known skills to novel tasks. Furthermore, our framework is able to learn a completely new visual-motor skill (at 100%) with only a few robot demonstrations, without affecting the performance of any existing skills (at 74.75%)fulfilling continual learning requirements in robotics. Finally, we conducted a human-subjects experiment to demonstrate our framework's ability to complete tasks such as sandwich making from interactions with participants at a 75% task success rate. If a paper is accepted, the final camera-ready version will (and probably should) include acknowledgments. All acknowledgments go at the end of the paper, including thanks to reviewers who gave useful comments, to colleagues who contributed to the ideas, and to funding agencies and corporate sponsors that provided financial support. § APPENDIX § EXPERIMENT DETAILS 0.55 0.55! Model Precision Recall F_1 Accuracy Resnet + Transformer 88.0± 2.2 95.9 ± 2.0 91.8 ± 2.0 91.4 ± 2.11 Experimental results of our alignment model on aligning human videos with robot trajectories on subset of the RH20T dataset <cit.>. §.§ Detailed results of the alignment model on RH20T We present the detailed results of the alignment model in Table <ref>. We conduct five-split evaluation on the dataset, and report the mean score and standard deviation of each metric. Each model is trained on 80% of the trajectories and evaluated on the other 20%. In total, we use 1240 robot trajectories and 1193 human demonstrations across 98 tasks of the RH20T dataset configuration 5. As shown in Table <ref>, our model achieves 91.8% on the F_1 metric, and 91.4% on the overall accuracy metric. This strong performance of the alignment model enables the robot agent to actively adapt learned skills to perform novel task, or to understand that it needs to learn a novel skill from seeing a single human demonstration. §.§ Detailed results of the continual learning policy on each task of the RLBench We present the per-task success rate of the policies in the RLBench simulator. Table <ref> shows the performance of the three policies on each pre-trained task after fine-tuning, and Table <ref> demonstrates the performance of the policies on the tasks that they are finetuned on. All the three models are trained to predict joint positions for the same number of gradient steps. 
During the pre-train phase, each model is trained with 1000 robot demonstrations from each pre-train task for 1000 epochs. In the fine-tune phase, each model is trained with 5 robot demonstrations from each fine-tune task for 20000 epochs. Notice that due to the limitation of the visual-motor policies, we use a static location for all the finetune tasks during both training and evaluations. However, for all the pre-train tasks, we use randomized locations during both training and evaluation. As presented in the tables, TAIL achieves a near 0% success rate on majority of the tasks except for close fridge. This is because that close fridge is a relatively easier task in the environment, and the agent has a non-trivial chance to accidentally hit the fridge door and close it even if it is doing random behaviors. On the other hand, the baseline ACT model achieves a strong 100% success rate on the tasks that it is fine-tuned on, demonstrating its strong capability of learning fine-grained control. However, it also achieves a extremely low success rate on all the pre-train tasks after fine-tuning. This shows that ACT suffers from catastrophic forgetting and can no longer perform the pre-train tasks after fine-tuning. In comparison, ACT-LoRA achieves a 100% success rate on the fine-tune tasks, while still being able to perform on all the pre-train tasks with an overall success rate of 74.5%. This experiment result demonstrates that ACT-LoRA inherited the capability of fine-grained controls from ACT, and the ability to prevent catastrophic forgetting from the additional Low-Rank Adaptors, and hence is suitable for the use case where fine-grained control and continual learning are needed. Limitation. ACT-LoRA does not generalize well to situations when the objects are present in completely novel positions. Similarly learning tasks with heterogeneous demonstrations is difficult for ACT-LoRA as there is not encoding for the possibility of heterogeneous demonstrations. These limitations exist with our baselines as well. Diffusion based policies can solve this problem but do not work well with fewer than five demonstrations in our experience. §.§ Detailed results of the human-subjects study §.§.§ Study details We describe the details of the human-subjects study. Our human-subjects study is approved by the Institutional Review Board(IRB) of the university. We tested the study with 5 pilots before conducting the experiments on the participants. We fixed the issues of unclear instructions, short execution times for the learned skills and ambiguous phrases when the LLM was asking questions. We had to finetune the prompts of the LLMs a lot so the robot asked questions pertinant to the task of sandwich making. For the actual study a total of 10 participants were recruited through campus advertisements. We rejected the 2 of these users for the following reasons respectively. One user was over-excited to interact with the talking robot agent, and requested the robot to perform tasks that are not in our instructions forcing us to stop the study. These tasks were impossible to complete using the configurations of the study. The other user did not fully understand the instructions, and crashed the robot into the table forcing us to stop the study. The study is composed of two separate phases, the interaction phase that takes 150 minutes and the evaluation phase that takes 60 minutes, with a voluntary participation. The participants, including the pilots, are compensated with $35 Amazon gift card for their time. 
We designed the two-phase study for two major reasons. Firstly, the learning agent requires five hours to train for the novel skill. Secondly, we want to demonstrate a thorough comparison for the workload between our learning agent and the doing agent in the two phases. The learning agent requires the users to remotely control the robot arm to perform the task in the interaction phase, and is fully automated in the evaluation phase, whereas the doing agent behaves the same in both phases by requesting the users to directly perform the task that it does not know. We hypothesize that the users experience higher workload for our learning agent than the doing agent in the interaction phase, and a lower workload for the learning agent than the doing agent in the evaluation phase because we consider that for remotely controlling the robot arm to complete the task requires higher workload than directly completing the task themselves for the users, and the fully automated robot agent requests the least workload. We reject our hypothesis and accept the null hypothesis of – there is no difference in the users's perception of workload, likeability, anthropomorphism, and perceived intelligence between our method and a baseline where the robot just asks for help. There are two major reasons for this result. Firstly, the human users take much shorter time to complete the task than the robot agent. From the users' perspective, despite being fully automated, the agent is not saving their time by doing the task for them because they have to sit and watch the agent doing the task. This issue can be addressed by assigning distractor tasks for the human users <cit.>, which could not be achieved due to the limited time. Another reason is that a robot that asks for help seems more intelligent and human like compared to an autonomous sandwich making robot. Future studies will have to weigh these design decisions more carefully. We do perform better than the baseline of TAIL here but TAIL generally fails at completing the task itself which makes the result unsurprising. A more elaborate study with more choices and more participants can reduce the variance in the results. §.§.§ Detailed procedure We describe the detailed procedure for the study as follows. Interaction Phase. Participants first filled out the consent form and a pre-study survey. Then, we handed out a general introduction of the experiment. The participants were then asked to read the instructions for the interaction phase, and watch a demonstration video. The demonstration video introduces how the robot agent requests for different types of help differently, and how to answer different requests from the robot agent. We use a completely different domain(Opening a washing-machine) as example in the demonstration video. The instruction introduces domain relevant information, such as the configuration of the robot's workspace, the sandwich to make, and the steps to make the sandwich. The anonymized instructions and the video can be found on the associated webpage. Then, the participants interacted with the three agents, the dumb agent, the doing agent, and the learning agent, in a random order. The dumb agent never interacts with the users except for getting the initial instruction set from the user. The doing agent always asks the human users for help when it encounters any task that it is uncertain with. 
The learning agent interacts with the human users by asking task-relevant questions, asking for human demonstration, and asking for robot demonstrations. The details of how the learning agent behaves when encountered an unknown task can be found in Appendix <ref>. After interacting with each system, the participants were asked to fill-out a post-survey, including questions from NASA-TLX <cit.>, SUS <cit.>, and 4 sub-scales from the GodSpeed Questionnaire Series <cit.>(Likability, Animacy, Natural, Perceived Intelligence). After the participants finished the interaction phase, we fine-tuned the ACT-LoRA policy and the TAIL policy with the robot demonstrations collected from the users. Evaluation Phase. Participants came back to the lab. We handed the same instructions to the participants for them to ask the robot to make the same sandwich. The participants interacted with three robot agents, the doing agent, the learning agent with TAIL, and the learning agent with ACT-LoRA. All the three agents remember the instructions to make the sandwich provided by the participants from the interaction phase. The doing agent did not learn from the robot demonstrations from the interaction phase, and asked for help from the human users on the same task. Both learning agents learned the novel skill from the demonstration in the interaction phase, and did not interact with the human users except for the initial interactions. After watching each agent, the participants were asked to fill out the same post-survey for the system. After watching all the three systems, the participants were asked to rank the three systems on 7 different description(helpful, useful, efficient, competent, uncooperative, inefficient, incompetent). §.§.§ Detailed study results The objective results of the study are presented in Table <ref> and Table <ref>, and the subjective results of the study are presented in Table <ref> and Table <ref>. Although no hypothesis can be accepted in the comparison of any metric between the learning agent with ACT-LoRA and the doing agent with ACT-LoRA in both the interaction phase and the evaluation phase, the basic hypothesis that the learning agent with ACT-LoRA has a higher success rate than the dumb agent holds, because that the dumb agent will behave randomly on the skill that it doesn't know, and is fated to fail. For a similar reason, the basic hypothesis that the learning agent with ACT-LoRA has a higher success rate than the learning agent with TAIL also holds. Furthermore, in the evaluation phase, the learning agent with ACT-LoRA is considered better than the learning agent with TAIL by users in the following metrics: Comparative. Conditions for normality were not met for the data points to run a t-test. Hence, we conducted a Wilcoxon Signed-Rank test to compare the learning agent with ACT-LoRA with the learning agent with TAIL in the comparative metric. Results from Wilcoxon Signed-Rank test suggest that the ACT-LoRA learning agent is preferred by user in the direct comparison with the TAIL learning agent with significance(p<0.01, Z=2.47). SUS. Results from Shapiro Wilk test suggest that our data in the SUS metric satisfies the condition for a parametric test(W=0.93, p=0.24). Results from the paired t-test suggest that the LoRA-ACT learning agent is considered better than the TAIL learning agent by users in the SUS metric with significance(p<0.01, t=4.15). Natural. Conditions for normality were not met for the data points to run a t-test. 
Hence, we conducted a Wilcoxon Signed-Rank test to compare the learning agent with ACT-LoRA against the learning agent with TAIL in the natural metric. Results from the Wilcoxon Signed-Rank test suggest that the ACT-LoRA learning agent is preferred by users in the direct comparison with the TAIL learning agent with significance (p<0.05, Z=2.25). Likability. Results from the Shapiro-Wilk test suggest that our data in the likability metric satisfies the condition for a parametric test (W=0.93, p=0.23). Results from the paired t-test suggest that the ACT-LoRA learning agent is considered better than the TAIL learning agent by users in the likability metric with significance (p<0.05, t=2.19). Animacy. Results from the Shapiro-Wilk test suggest that our data in the animacy metric satisfies the condition for a parametric test (W=0.93, p=0.27). Results from the paired t-test suggest that the ACT-LoRA learning agent is considered better than the TAIL learning agent by users in the animacy metric with significance (p<0.05, t=2.87). Perceived Intelligence. Results from the Shapiro-Wilk test suggest that our data in the perceived intelligence metric satisfies the condition for a parametric test (W=0.95, p=0.41). Results from the paired t-test suggest that the ACT-LoRA learning agent is considered better than the TAIL learning agent by users in the perceived intelligence metric with significance (p<0.01, t=3.39). §.§.§ Limitations of the study There are two major limitations of the human-subjects study. Firstly, we need a more carefully designed study that includes distractor tasks. Our current result only demonstrates success at making sandwiches, which does not mean that people prefer such a robot. Secondly, due to the limited time, the study was conducted on a single sandwich with a small number of participants. We would like to conduct a more elaborate study with more domains and participants in the future. § IMPLEMENTATION DETAILS §.§ Implementation details of the dialogue state machine We describe the details of the implementation of the dialogue state machine. Algorithm <ref> is the pseudocode of the dialogue state machine. The robot agent first initializes the conversation with the human user and repeatedly asks questions until it obtains a clear list of instructions from this initial conversation. Then, the agent attempts to execute the list of actions sequentially until all the instructions are finished. During execution of each task, if the agent finds that the task can be executed with one of the known skills, the agent directly executes the task with the corresponding policy. We use a very high threshold for the cosine similarity metric to simulate an exact match, preventing the robot agent from asking trivial questions in the user study. If the robot agent fails to directly find an executable skill for the task, it first searches for a usable skill in the semantic space. If it finds a skill with a similarity score above the threshold in the semantic space, it proposes this skill to the human user for executing the task and proceeds after obtaining the user's agreement. Otherwise, if the agent fails to find a usable skill, or the human user rejects the agent's proposed skill, the agent asks the human user for a human demonstration and attempts to find a usable skill in the skill space based on the human demonstration. The skill search in the skill space is similar to that in the semantic space.
If the agent finds a skill with a similarity score above the threshold in the skill space, it proposes the skill to the human user. If the human user agrees with the skill proposal, the agent learns that a known skill can be adapted to the new task and executes the task. Otherwise, if the agent fails to find an aligned skill or its proposal is rejected by the human user, it realizes that it does not possess the skill to execute the task and asks for several robot demonstrations to train a completely new skill for the task. The LLM serves as the interface between the robot agent and the human user. Whenever the robot agent is in a state in which it needs input from the human user, it prompts the LLM with the current state of the agent and the information needed from the human user. The LLM then initiates a dialogue with the human user and continues the dialogue until it retrieves the information needed by the robot agent. This shared autonomy between the state machine and the LLM is more reliable than fully relying on the LLM, while still fully exploiting the LLM's linguistic capability. §.§ Implementation details of the alignment model We describe the details of the implementation of the alignment model. Following the notation in the main paper, we use E_robot and E_human to denote the robot trajectory encoder and the human demonstration encoder, respectively. We also use ϵ_t and ϵ_v to denote the different thresholds for training and validation. To reduce the computational cost, we downsample all the human demonstrations and robot trajectories to 100 timesteps uniformly, and use image inputs from a single camera for both the human demonstrations and robot trajectories. We use a 6-layer transformer encoder with 8 heads for both the human demonstration encoder and the robot trajectory encoder. Both encoders use a resnet-18 feature extractor to extract features from the raw image inputs. The robot trajectory encoder also takes in proprioceptive inputs from each timestep. During training, we minimize the cosine embedding loss between the human demonstration and the robot trajectory with the training threshold ϵ_t, defined as follows: L(d, τ, y) = 1 - cos(E_human(d), E_robot(τ)) if y = 1, and L(d, τ, y) = max(0, cos(E_human(d), E_robot(τ)) - ϵ_t) if y = -1. During inference, two trajectories are said to represent the same skill if their cosine similarity is above the threshold ϵ_v. For the experiments on RH20T, we use ϵ_t = 0.5 and ϵ_v = 0.7, and train the alignment model for 10000 gradient steps with a batch size of 16. For the domain of the human-subjects study, we use ϵ_t = 0.5 and ϵ_v = 0.95. Furthermore, we freeze the pre-trained resnet-18 visual encoder and train the model for 3000 steps to prevent over-fitting due to the small number of data points in the real-world domain. §.§ Implementation details for ACT-LoRA We describe the details of our implementation of the ACT-LoRA policy. Following <cit.>, we train with a CVAE architecture and discard the additional encoder during inference. For both the CVAE encoder and the state encoder, we use a 4-layer transformer encoder with 8 heads. We extract features from the raw image inputs from multiple cameras using resnet-18. These visual features are fed to the transformer encoder along with the proprioceptive inputs. On the decoder side, we use a 6-layer transformer decoder with trainable embeddings. We also use a chunk size of 100 as it gives the best performance empirically <cit.>. The same configuration is also used for the baseline ACT model.
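To make the preceding description concrete, the following is a minimal, illustrative PyTorch sketch of an ACT-style policy skeleton as described above (4-layer transformer encoder over visual and proprioceptive tokens, 6-layer transformer decoder with trainable action-chunk queries, resnet-18 visual features, chunk size of 100). It is a hedged sketch rather than our exact implementation: the CVAE latent branch is omitted because it is discarded at inference, positional encodings are left out for brevity, and all dimensions (number of cameras, proprioceptive size, action size) are assumptions.

```python
import torch
import torch.nn as nn
import torchvision

class ActStylePolicy(nn.Module):
    """Sketch of an ACT-style chunked action policy (CVAE branch omitted)."""
    def __init__(self, num_cameras=3, proprio_dim=8, action_dim=8,
                 chunk_size=100, d_model=512):
        super().__init__()
        backbone = torchvision.models.resnet18(weights=None)
        backbone.fc = nn.Identity()                      # 512-d visual feature per camera
        self.backbone = backbone
        self.cam_proj = nn.Linear(512, d_model)
        self.proprio_proj = nn.Linear(proprio_dim, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=4)
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers=6)
        self.query_embed = nn.Embedding(chunk_size, d_model)   # trainable action queries
        self.action_head = nn.Linear(d_model, action_dim)

    def forward(self, images, proprio):
        # images: (B, num_cameras, 3, H, W); proprio: (B, proprio_dim)
        b, n, c, h, w = images.shape
        feats = self.backbone(images.flatten(0, 1)).view(b, n, -1)      # (B, n, 512)
        tokens = torch.cat([self.cam_proj(feats),
                            self.proprio_proj(proprio).unsqueeze(1)], dim=1)
        memory = self.encoder(tokens)                                   # (B, n+1, d_model)
        queries = self.query_embed.weight.unsqueeze(0).expand(b, -1, -1)
        decoded = self.decoder(queries, memory)                         # (B, chunk, d_model)
        return self.action_head(decoded)        # predicted chunk of future actions
```

In an ACT-LoRA variant such as the one discussed next, low-rank adaptor weights would be injected into the linear projections of a skeleton like this, with one adaptor set kept per skill.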
As for the configuration of the low-rank adaptors, we follow TAIL <cit.> and use a rank size of 8. For both the simulation experiments and the human-subjects study, each skill is associated with its own set of unique adaptor weights. §.§ Implementation details for TAIL As there is no publicly available source code for TAIL <cit.>, we made our best attempt to re-implement TAIL for a fair comparison. To reduce the computational cost of the original TAIL model, we use a transformer encoder in place of the GPT-2 temporal decoder to speed up the training process. Furthermore, due to the limited time, the LoRA weights are only introduced into the transformer encoder, and not into any pre-trained feature extractors, including the CLIP text encoder and the CLIP image encoder. Apart from these changes, we choose hyperparameters as close as possible to those in the original TAIL paper <cit.>. The TAIL model takes in linguistic task descriptions, image observations, and proprioceptive inputs over a history of timesteps. We first extract features from the raw image inputs and the linguistic task descriptions using the pretrained CLIP image and text encoders. Then, we use a FiLM layer to inject the linguistic features into the image features and the proprioceptive inputs. These inputs are treated as the input tokens of the transformer encoder. Finally, we use an MLP layer to project the encoded token into the parameters of a Gaussian Mixture Model (GMM); a minimal sketch of such a head is given at the end of this section. During training, the model is optimized by minimizing the negative log-likelihood loss of the ground-truth actions. During inference, we sample from the GMM distribution predicted by the model. §.§ Robot Setup for User Study and Data Collection The robotic setup includes a Franka robot and three Intel RealSense D435 cameras. The workspace includes a table with items curated for the system. We designed 3D-printed tools tailored to our task requirements as attachments for the Franka robot. We set up our cameras to provide a frontal view, a top-down view, and a wrist-mounted camera for a view from the robot's perspective. This configuration allows us to capture dense and diverse features for training our policy. Our data collection pipeline includes a 6D SpaceMouse from 3Dconnexion, which dictates the motion of the robot end effector. This facilitates the collection of dense data. Although limited by the data collection rate, this setup allows users to control the robot in task space with relative ease because of the intuitive nature of the SpaceMouse. Throughout the system's operation, picking and placing robot tools is done via pre-specified waypoints because grasping a tool is not our focus.
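Referring back to the GMM action head mentioned in the TAIL re-implementation above, the sketch below illustrates one plausible way to realise it in PyTorch: an encoded state token is projected by a linear layer into mixture weights, means, and standard deviations; training minimises the negative log-likelihood of ground-truth actions; and inference samples from the predicted mixture. This is an illustrative approximation, not the exact TAIL head; the number of mixture components, the dimensions, and the class and method names are assumptions.

```python
import torch
import torch.nn as nn

class GMMActionHead(nn.Module):
    """Projects an encoded token into the parameters of a K-component GMM over actions."""
    def __init__(self, d_model=512, action_dim=7, num_modes=5):
        super().__init__()
        self.action_dim, self.num_modes = action_dim, num_modes
        # Outputs per token: K mixture logits + K means + K log-stds (each of action_dim).
        self.proj = nn.Linear(d_model, num_modes * (1 + 2 * action_dim))

    def distribution(self, token):
        p = self.proj(token)
        logits = p[..., : self.num_modes]
        means, log_stds = p[..., self.num_modes :].chunk(2, dim=-1)
        shape = (*token.shape[:-1], self.num_modes, self.action_dim)
        means = means.reshape(shape)
        stds = log_stds.reshape(shape).exp().clamp(min=1e-4)
        mix = torch.distributions.Categorical(logits=logits)
        comp = torch.distributions.Independent(torch.distributions.Normal(means, stds), 1)
        return torch.distributions.MixtureSameFamily(mix, comp)

    def loss(self, token, actions):
        # Training objective: negative log-likelihood of the ground-truth actions.
        return -self.distribution(token).log_prob(actions).mean()

    def sample(self, token):
        # Inference: draw an action from the predicted mixture.
        return self.distribution(token).sample()
```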
http://arxiv.org/abs/2409.03067v1
20240904203814
A Comparative Study of Offline Models and Online LLMs in Fake News Detection
[ "Ruoyu Xu", "Gaoxiang Li" ]
cs.SI
[ "cs.SI" ]
A Comparative Study of Offline Models and Online LLMs in Fake News Detection 1st Ruoyu Xu Department of Computer Science Texas Tech University Lubbock, TX 79409, USA [email protected] 2nd Gaoxiang Li Department of Computer Science Texas Tech University City, Country [email protected] September 9, 2024 =============================================================================================================================================================================================================================== § ABSTRACT Fake news detection remains a critical challenge in today’s rapidly evolving digital landscape, where misinformation can spread faster than ever before. Traditional fake news detection models often rely on static datasets and auxiliary information, such as metadata or social media interactions, which limits their adaptability to real-time scenarios. Recent advancements in Large Language Models (LLMs) have demonstrated significant potential in addressing these challenges due to their extensive pre-trained knowledge and ability to analyze textual content without relying on auxiliary data. However, many of these LLM-based approaches are still rooted in static datasets, with limited exploration into their real-time processing capabilities. This paper presents a systematic evaluation of both traditional offline models and state-of-the-art LLMs for real-time fake news detection. We demonstrate the limitations of existing offline models, including their inability to adapt to dynamic misinformation patterns. Furthermore, we show that newer LLM models with online capabilities, such as GPT-4, Claude, and Gemini, are better suited for detecting emerging fake news in real-time contexts. Our findings emphasize the importance of transitioning from offline to online LLM models for real-time fake news detection. Additionally, the public accessibility of LLMs enhances their scalability and democratizes the tools needed to combat misinformation. By leveraging real-time data, our work marks a significant step toward more adaptive, effective, and scalable fake news detection systems. Fake news detection, LLMs § INTRODUCTION The exponential growth of the Internet and the widespread adoption of social media platforms such as Twitter and Facebook have revolutionized the dissemination of news, making it more decentralized and rapid than ever before. However, this shift has also turned these platforms into grounds for the spread of fake news, posing significant threats to public trust and societal stability. A 2023 report by the Reuters Institute for the Study of Journalism highlighted the persistent issue of misinformation, with social media playing a central role in its proliferation <cit.>. Despite ongoing efforts to curb the spread of false information, the sheer scale and speed at which fake news propagates remain formidable challenges. This underscores the urgent need for effective and adaptive detection methods capable of operating in real-time to counter the evolving nature of misinformation. Traditional approaches to fake news detection have primarily relied on offline models trained on historical datasets <cit.>. While these models have proven effective in identifying patterns associated with previously encountered fake news, they suffer from a critical limitation: their static nature prevents them from adapting to the rapidly changing landscape of misinformation. 
This limitation has been substantiated by our evaluations, which demonstrate that as fake news evolves in both content and dissemination strategies, offline models increasingly struggle to maintain their effectiveness. This challenge is particularly pronounced in real-time scenarios, where new narratives can emerge and spread at unprecedented speeds, further diminishing the accuracy and reliability of these models. A significant issue lies in the distributional shift between the static, historical data on which these models are trained and the dynamic, real-time data they encounter in practice. Offline models are typically built on datasets that capture past patterns of misinformation; however, as new events unfold and novel forms of fake news arise, the characteristics of these narratives may diverge substantially from those previously observed. This mismatch between the static training data and the constantly evolving nature of real-time news further aggravates the models' inability to adapt, leading to a decline in their detection accuracy when confronted with fresh and previously unseen misinformation. Motivated by the identified gap in offline models' performance on real-time news, we explored the potential of online Large Language Models (LLMs) as a viable solution. Real-time fake news detection presents a new paradigm in the fight against misinformation, requiring systems that can continuously learn and adapt to emerging data streams. Unlike traditional models, LLMs are designed to process vast amounts of information in real time and dynamically access credible online resources, enabling them to identify emerging patterns of misinformation with greater precision. Leveraging the advanced capabilities of LLMs, we aim to address the limitations of offline models, thereby enhancing the accuracy and responsiveness of fake news detection systems in today's fast-paced information environment. In recent years, Large Language Models (LLMs) such as ChatGPT[ChatGPT: <https://openai.com/chatgpt>], Claude[Claude: <https://claude.ai>], Llama[Llama: <https://llama.meta.com>], and Gemini[Gemini: <https://gemini.google.com>] have demonstrated remarkable performance across a wide range of natural language processing tasks, showcasing their potential to tackle complex challenges. Although the application of LLMs in real-time fake news detection has not been extensively explored, their ability to process and learn from real-time data suggests they may offer significant improvements in this domain. These models, trained on vast and diverse datasets, are capable of generating and understanding human-like text and of accessing online information, making them particularly well-suited for adapting to the constantly evolving landscape of online discourse. This adaptability positions LLMs as promising candidates for identifying new and emerging forms of fake news. Furthermore, their public accessibility, without requiring specialized technical expertise, enhances their potential for widespread use in fake news detection.
While LLMs demonstrate substantial potential, there is a pressing need to systematically evaluate their effectiveness in the context of real-time fake news detection, particularly in comparison to traditional models. Existing studies on LLMs for fake news detection <cit.> primarily focus on fine-tuning these models using static, historical datasets before deploying them for fake news detection. Although this approach can enhance performance for specific tasks, it does not fully leverage LLMs' inherent ability to dynamically process real-time information. Moreover, these studies typically use models like GPT-3.5 Turbo, which do not have online search capabilities, restricting their ability to incorporate the most up-to-date information when evaluating news. This reliance on static data poses a challenge in the real-time context, where misinformation can spread rapidly, and models need to adapt to new narratives as they emerge. In contrast, our work employs LLMs in their zero-shot capacity, without fine-tuning on task-specific datasets, allowing them to directly process and evaluate real-time news. This approach not only tests the models' adaptability to emerging narratives but also capitalizes on their ability to access up-to-date information, enabling more context-aware detection of fake news. By leveraging models with real-time web access, such as the latest versions of ChatGPT, Claude, and Gemini, we aim to overcome the limitations of static fine-tuned models. To our knowledge, this research is the first to rigorously test both LLMs and existing fake news detection models using real-time news datasets, offering a comprehensive evaluation that highlights the strengths and limitations of these approaches in handling real-time information. Our study makes several key contributions: * Identification of a research Gap: We identify a significant shortfall in the ability of existing models to effectively address real-time fake news, highlighting the inadequacy of current detection systems. * Comprehensive evaluation on real-time news: We provide a thorough comparison between LLMs and traditional offline models, elucidating their respective strengths and limitations in the context of real-time misinformation detection. * Insights for fake news detection: Our findings offer valuable insights that can guide the development of more robust and adaptable fake news detection solutions, capable of handling the evolving nature of misinformation. § RELATED WORK The automatic detection of fake news on social media has become a significant research area due to the proliferation of misinformation and its impact on public opinion. Various approaches have been proposed over the years, which can be broadly categorized into machine learning methods, deep learning techniques, and multimodal approaches. §.§ Machine Learning Methods Traditional machine learning techniques have been widely used in the initial stages of fake news detection. These methods rely on handcrafted features extracted from the textual content of news articles. Logistic regression, support vector machines (SVM), and random forests are commonly employed classifiers in this category. For instance, Castillo et al. <cit.> developed a decision-tree-based model using features from Twitter events. Similarly, Rubin et al. <cit.> utilized SVM combined with rhetorical structure theory and vector space modeling to classify news. 
Other researchers, such as Horne and Adali <cit.>, have investigated various linguistic features, including stylistic features and complexity measures, to differentiate between fake and real news. They used SVM and achieved notable performance improvements by incorporating these features. Zhou et al. <cit.> employed feature engineering techniques to extract stylistic and content-based features for fake news detection, demonstrating the effectiveness of logistic regression and random forests in this task. These models often utilize linguistic cues such as word n-grams, part-of-speech tags, readability scores, syntax patterns, and semantic inconsistencies to distinguish fake news from real news. The emotional tone of a piece of news can also serve as an indicator of its veracity, with fake news often exhibiting exaggerated sentiment compared to factual news. Additionally, assessing the credibility of the news source, including its history of publishing fake news and its overall reputation, can significantly enhance the accuracy of fake news detection models. §.§ Deep Learning Techniques Recent advancements have seen the application of deep learning models, which can automatically learn complex features from large datasets, significantly improving detection accuracy. Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and transformer-based models like BERT (Bidirectional Encoder Representations from Transformers) have been particularly effective. Wang <cit.> introduced the LIAR dataset and used various models including CNNs and Bi-LSTMs to analyze linguistic patterns in the data, with CNNs outperforming others. Kaliyar et al. <cit.> proposed a transformer-based model named FakeBERT, which fine-tunes the BERT model for fake news detection, achieving state-of-the-art results on multiple benchmarks. Ni et al. <cit.> created a model, called MVAN (Multi-View Attention Networks), based on deep learning to spot fake news on an early basis. They merged the text semantic attention and the propagation structure attention in the model in order to simultaneously gather important hidden cues from the originating tweet’s dissemination structure. These models leverage the powerful language representation capabilities of deep learning techniques to capture local textual features, long-term dependencies, and semantic relationships in the data, significantly improving the performance of fake news detection systems. Hybrid models that combine different deep learning techniques have also been proposed. Ruchansky et al. <cit.> developed the CSI (Capture, Score, and Integrate) model, which incorporates text, reaction, and source characteristics to detect fake news. Ajao et al. <cit.> created a hybrid model based on LSTM and CNN for Twitter data. Nasir et al. <cit.> developed a hybrid model based on deep learning for detecting fake news that uses recurrent and convolutional neural networks. These hybrid models combine the strengths of CNNs in extracting spatial features and LSTMs in capturing temporal sequences, providing a comprehensive approach to analyzing the complex nature of fake news. §.§ Multimodal Approaches Given the rise of multimedia content on social media, multimodal approaches that combine text and images have gained attention. Models integrating textual and visual information can provide a more comprehensive analysis of news content. Wang et al. 
<cit.> proposed EANN (Event Adversarial Neural Networks), which can learn event-invariant features from multimodal data to detect fake news, using adversarial learning to ensure that the learned features are robust across different events. Khattar et al. <cit.> introduced the MVAE (Multimodal Variational Autoencoder) for fake news detection, which uses variational autoencoders to capture the joint distribution of text and images, improving robustness and accuracy in fake news detection. Giachanou et al. <cit.> created a multimodal multi-image system that integrates text, visual, and semantic components to detect fake news. Their approach utilizes BERT for textual representation, VGG-16 for visual representation, and cosine similarity between text and image tags for semantic representation. Segura-Bedmar and Alonso-Bartolome <cit.> developed a multimodal fake news detection method that combines text and image data using a Convolutional Neural Network (CNN) architecture. These models leverage the complementary strengths of different modalities to provide a more holistic understanding of the news content, improving the accuracy and reliability of fake news detection systems. §.§ LLM-Based Fake News Detection The integration of large language models has significantly advanced the sophistication of fake news detection methods. Traditional approaches often rely on auxiliary data in addition to the article’s text. For example, detection systems like Grover <cit.> use metadata such as author details, publishers, and publication dates to assess the authenticity of articles. Similarly, DeClare verifies the credibility of statements by comparing them against information gathered from web-searched articles <cit.>. And platforms like Defend analyze social media interactions with news articles to assist in fake news detection <cit.>. Additionally, methods such as those demonstrated by Zhang et al. in 2021 focus on extracting emotional and semantic features from texts to distinguish between real and fake news <cit.>. These approaches, while effective, can be limited by the need for auxiliary data, which is not always readily available. Recent studies have shifted towards using LLMs like GPT-3.5 for detecting fake news without the need for extensive auxiliary data. These models focus on analyzing the content of news articles themselves, utilizing their vast pre-trained knowledge to identify falsehoods more efficiently. In particular, fine-tuned versions of these models have shown strong performance on benchmark datasets. Models like FactAgent <cit.> propose advanced fact-checking workflows that enable the model to systematically decompose complex news claims into smaller tasks before verifying the veracity of each sub-claim. However, despite its innovative approach, the reliance on static datasets limits its adaptability in detecting misinformation in rapidly evolving real-time contexts. Similarly, HiSS (Hierarchical Step-by-Step Prompting) <cit.> employs a prompt-engineering strategy, where LLMs are guided to break down a claim into smaller sub-claims for verification using external evidence. Recent work by Hu et al. <cit.> explores the role of LLMs in fake news detection, finding that while LLMs like GPT-3.5 provide useful multi-perspective rationales, they often underperform in directly detecting fake news compared to fine-tuned small language models (SLMs). 
To address this, the authors propose an Adaptive Rationale Guidance (ARG) network, which leverages the strengths of both LLMs and SLMs, achieving improved performance. However, their approach remains focused on static datasets and does not fully explore the potential of LLMs in real-time news detection, a gap our work seeks to address. While these techniques improve the LLM's performance by structuring the fact-checking process, they do not address the challenges posed by real-time news, as they remain limited by the static data used during training. A key distinction between earlier models like GPT-3.5 and the most current LLMs lies in their online capabilities. GPT-3.5 operates as an offline model, relying solely on static, pre-trained knowledge with no ability to access or retrieve live, real-time information from the web. In contrast, modern LLMs such as ChatGPT-4, Gemini, Claude, and Llama are designed with online access features, allowing them to dynamically update their knowledge and verify information in real-time. Given the increasing speed and sophistication of fake news dissemination, it is a natural progression to shift from offline models to these online-enabled models. This transition allows for a more adaptive and responsive approach to combating misinformation, making real-time fake news detection more efficient and scalable. Despite these advancements, the potential of online LLMs for real-time fake news detection remains underexplored. Most existing models are evaluated on static, historical datasets and do not fully exploit the adaptability and real-time processing capabilities of LLMs. In contrast, our work takes a step further by evaluating LLMs in a real-time fake news detection context, without relying on extensive pre-training or task-specific fine-tuning. Furthermore, the public accessibility of original LLMs, without requiring specialized technical expertise, enhances their potential for widespread use in fake news detection. Unlike many fine-tuned or task-specific models, which often require domain-specific data and technical resources to build, general-purpose LLMs are easily accessible to the general public. These models offer a flexible, scalable solution to identifying misinformation without the need for specialized knowledge in AI model fine-tuning. This underscores the importance of evaluating LLMs in the context of real-time fake news detection, as their potential reach far surpasses that of fine-tuned models. §.§ Challenges and Research Gaps Developing effective fake news detection systems involves several challenges. The evolving nature of fake news and the rapid dissemination of information on social media platforms require models that can adapt to new and emerging patterns of misinformation dynamically <cit.>. Traditional models often struggle with real-time detection because they are trained on historical data and cannot easily incorporate new information. A significant challenge is the need for models to process and analyze real-time data to identify fake news as it emerges. This requires the development of adaptive models that can learn from new data continuously and update their knowledge base. Additionally, there is a need to explore new datasets that better represent the current landscape of fake news, as existing datasets may become outdated quickly. Due to these challenges and gaps, we aim to find robust and adaptable models that can handle the real-time nature of fake news. 
We want to explore the capabilities of LLMs like ChatGPT, Claude, Llama, and Gemini in detecting fake news. These models can process and learn from real-time data, potentially providing a more effective approach to identifying fake news as it appears. § LIMITATIONS OF OFFLINE MODELS §.§ Challenges in Real-Time News Detection Traditional fake news detection models are typically offline, meaning they are trained on historical datasets and do not adapt to new information in real-time. This static nature presents significant limitations, as fake news continuously evolves, with new stories and formats regularly emerging. Consequently, these offline models become less effective in identifying the latest misinformation, leading to decreased accuracy and relevance. The effectiveness of a machine learning model is highly dependent on the quality of its training data. Generally, when training a model, it is assumed that the training dataset and the test/validation dataset follow the same or similar distribution. This assumption is crucial for the model to generalize well to new, unseen data. However, in the context of fake news detection, this assumption often does not hold. Real-time news can differ significantly from the historical data on which offline models are trained. If the validation dataset is not correlated to the training dataset, the trained model will struggle to perform effectively. For instance, during the COVID-19 pandemic, an excessive amount of fake news stories emerged about the virus, vaccines, and treatments. Offline models trained on pre-pandemic data struggled to detect these new types of misinformation. An example includes false claims about the efficacy of certain treatments like hydroxychloroquine, which spread rapidly across social media platforms. Since offline models were not trained on such content, they often failed to identify these new falsehoods. Another example is the spread of misinformation during significant political events, such as the 2020 US Presidential Election. Fake news articles and social media posts claiming election fraud or manipulating voting processes emerged quickly. Offline models, which did not have training data reflecting these specific events, showed limitations in accurately detecting and flagging such misinformation. Additionally, offline models lack the flexibility to process and learn from new data dynamically. They require periodic retraining with updated datasets, which is both time-consuming and computationally expensive. This inflexibility further hinders their ability to stay current with the rapidly changing news landscape. For instance, the rapid evolution of fake news regarding new technologies, such as 5G networks causing health issues, posed a challenge to offline models that were not updated with the latest information. §.§ Experimental Validation of Offline Models on Real-Time News §.§.§ Data Collection To evaluate the performance of LLMs and existing fake news detection models, we build a continuously updated dataset of real-time news. This dataset includes news articles posted in 2023 and 2024 from social media platforms like Twitter (X) and fact-checking websites such as PolitiFact. While each evaluation snapshot is static, the dataset itself remains dynamic, as we continuously incorporate the most recent news, ensuring it reflects the evolving nature of misinformation and real-time events. The data collection process involves both automated and manual labeling to ensure accuracy and relevance. 
By regularly updating the dataset with fresh content from live online platforms, we simulate the conditions faced by real-time fake news detection systems. This dynamic nature allows us to assess the adaptability of models to new and emerging misinformation, differentiating our approach from prior studies that rely on outdated static datasets. Moving forward, we aim to further enhance this process by integrating live data feeds, enabling real-time testing environments. §.§.§ Models for Evaluation To evaluate the effectiveness of traditional fake news detection models in a real-time context, we selected three pre-trained models that do not require additional training. Although one of these models, ChatGPT-3.5 Turbo, is an LLM, its offline nature (it cannot access real-time data or updates) makes it suitable for comparison alongside other traditional offline models. The chosen models represent state-of-the-art approaches for detecting fake news across different domains and modalities: * MDFEND (Multi-domain Fake News Detection) <cit.>: Released in 2021, MDFEND is designed to tackle the challenges of detecting fake news across multiple domains. MDFEND pre-trains a RoBERTa model on the Weibo21 dataset, which includes news from nine different domains, ensuring the model can adapt to a variety of domains. * Multimodal Fake News Detection <cit.>: This model explores both unimodal and multimodal approaches for fake news detection using the Fakeddit dataset. The unimodal approaches include CNN, BiLSTM, and BERT, which focus solely on text data. * ChatGPT-3.5 Turbo: A variant of OpenAI's GPT-3.5 architecture, known for its efficiency in generating human-like text based on pre-trained data. Like other models in the GPT-3 series, it operates as an offline model, meaning that it does not have access to real-time data or the internet for live information retrieval. Instead, ChatGPT-3.5 Turbo generates responses based on the static dataset it was trained on, which contains information up until a certain cut-off date (typically 2021). Although ChatGPT-3.5 Turbo is a Large Language Model (LLM), its lack of real-time data access places it in a similar category to traditional offline models. While it is acknowledged that a broader comparison involving more benchmark methods would offer greater validation, several constraints influenced the selection process. Some existing methods require more detailed or specific information about the news content, such as user interactions or metadata, which were not available or applicable in our real-time dataset. Additionally, other state-of-the-art methods have not released their code or an accessible application, limiting their inclusion in this study. As a result, MDFEND, the Multimodal Fake News Detection model, and ChatGPT-3.5 Turbo were chosen for their accessibility and relevance, allowing for a focused yet insightful evaluation of offline models in the context of real-time news detection. §.§.§ Experimental Setup and Results To assess the performance of these offline models on real-time news, we conducted experiments using a dataset specifically designed to reflect the dynamic nature of contemporary misinformation. The models were evaluated across several key performance metrics: accuracy, precision, recall, and F1-score. As summarized in Table <ref>, while these models demonstrated strong performance on historical datasets, their effectiveness notably decreased when applied to real-time news.
This decline in performance highlights the inherent limitations of offline models in adapting to the rapid evolution of fake news narratives. The experimental results clearly underscore the challenges that offline models face in detecting fake news within a real-time context. Despite their strong performance on historical datasets, these models struggle to maintain accuracy and relevance when faced with new, emerging information. This significant performance drop reveals the static nature of offline models, which lack the ability to adapt to the constantly evolving landscape of fake news. These findings underscore the critical need for more adaptive and dynamic approaches to fake news detection, paving the way for the integration of online LLMs, which can process and learn from real-time data to offer a more robust and responsive solution. § ONLINE LARGE LANGUAGE MODELS FOR FAKE NEWS DETECTION In contrast, LLMs like ChatGPT and Llama have shown great potential in various natural language processing tasks due to their extensive pre-training on vast datasets and fine-tuning capabilities. These models can understand and generate human-like text, making them highly adaptable to new information and contexts. Moreover, online LLMs can access and process real-time data, providing a significant advantage over traditional models in the context of fake news detection. Their ability to continuously learn and update based on new information allows them to detect emerging fake news patterns more effectively. This real-time processing capability makes LLMs well-suited for the dynamic and fast-paced nature of fake news. Existing comparative studies have demonstrated the superior performance of LLMs over traditional models in various NLP tasks, including sentiment analysis, language translation, and text generation. However, there is limited research specifically focused on evaluating LLMs for real-time fake news detection. Comparative studies that do exist highlight both the strengths and challenges of using LLMs for this purpose. While LLMs offer improved accuracy and adaptability, they also come with challenges such as ensuring the reliability of real-time data and managing the computational overhead associated with processing large volumes of data continuously. The current state of research indicates a clear need for real-time fake news detection models. Traditional offline models, although useful, are limited by their reliance on outdated data and lack of adaptability to new information. Therefore, LLMs, with their ability to process and learn from real-time data, present a promising alternative. This study aims to bridge the gap in existing research by providing a comprehensive evaluation of LLMs for real-time fake news detection, highlighting their advantages and addressing potential challenges. Additionally, our work focuses solely on detecting fake news based on plain news text, without requiring any additional resources such as user interactions or multimedia content. This approach ensures that the models are evaluated purely on their ability to process and understand textual information, providing a clear comparison between traditional models and large language models in a real-time news detection context. §.§ LLM Models for Evaluation Large Language Models: * ChatGPT (4o): ChatGPT-4o is the latest version of the ChatGPT model developed by OpenAI, released in May 2024. This cutting-edge LLM is renowned for its conversational abilities and extensive pre-training on diverse datasets. 
It is designed to process real-time data efficiently, making it highly effective for detecting fake news. Its conversational nature allows it to understand and generate human-like text, enhancing its capability to identify misinformation. * Claude (3.5 Sonnet): Claude-3.5 Sonnet , developed by Anthropic, was released in June 2024. Claude focuses on delivering accurate and context-aware responses and it has been pre-trained on a wide range of information, making it adept at identifying fake news in real-time scenarios. Claude's strength lies in its detailed and nuanced understanding of context, which is crucial for discerning misinformation. * Llama (3.1-405B): Llama-3.1 <cit.>, developed by Meta AI, was released in July 2024. Llama is known for its adaptability and efficiency in processing large volumes of text and it benefits from continuous learning and real-time data access, enhancing its fake news detection capabilities. Llama's design focuses on scalability and responsiveness, making it an excellent tool for handling the dynamic nature of real-time news. * Gemini (1.5 Pro): Gemini-1.5 Pro is developed by Google DeepMind, released in June 2024. Gemini excels in natural language understanding and generation, and its continuous learning ability from new data sources and frequent updates makes it well-suited for real-time fake news detection tasks. Gemini's robust architecture ensures it stays current with evolving news patterns, providing accurate and timely responses. These LLM models, all of which are the latest versions available at the time of the experiment, will be evaluated on their ability to accurately and efficiently detect fake news in a real-time context. The comparison aims to highlight the strengths and weaknesses of traditional models versus LLMs. Using the latest versions ensures that we leverage the most advanced capabilities and improvements, providing a clearer picture of the current state-of-the-art in fake news detection. This evaluation will offer insights into the effectiveness and practicality of using LLMs for this critical task, especially in dynamically changing environments. Table <ref> displays an overview of the leading large language models (LLMs). §.§ Evaluation Framework The ability of LLMs to detect real-time fake news was evaluated using the same real-time news dataset used to evaluate existing models. This section details the experimental setup, including the zero-shot approach employed for the evaluation of LLMs. Given the exploratory nature of this research, we adopted a zero-shot evaluation approach for the LLMs, which allows these models to be tested on tasks without explicit prior training on similar datasets. The LLMs used in this study, including ChatGPT, Claude, Llama, and Gemini, were evaluated by posing them the task directly: “I will give you some news; please determine whether it is real news or fake news.” This approach mirrors real-world applications where LLMs are often employed without fine-tuning on specific datasets, relying instead on their broad, pre-trained, and fresh online knowledge. During the evaluation, LLMs occasionally responded with phrases such as "highly likely to be fake" or “highly likely to be true.” For the purposes of this study, responses indicating a high likelihood of being fake were treated as “fake,” while those indicating a high likelihood of being true were treated as “true.” In instances where the LLM responded with uncertainty, such as “I don't know,” this was treated as a failed response. 
The rationale behind this approach is that an inability to definitively classify the news reflects a failure to detect the fake news, which is crucial in the context of real-time detection. The performance of both the traditional models and LLMs was evaluated using several key metrics: accuracy, precision, recall, and F1-score. These metrics provide a comprehensive view of each model’s ability to correctly identify fake news while minimizing false positives and false negatives. The results from these evaluations are discussed in detail in the subsequent sections, highlighting the strengths and limitations of each approach within the context of real-time news detection. §.§.§ Experimental Results The experimental evaluation was conducted to assess the performance of four LLMs (i.e., ChatGPT, Claude, Llama, and Gemini) on real-time fake news detection tasks. These models were evaluated using key performance metrics, including accuracy, precision, recall, and F1-score. The overall performance results are summarized in Table <ref>. The results indicate that Llama achieved the highest overall accuracy at 0.949, followed closely by ChatGPT at 0.946, with both models demonstrating strong performance across all metrics. Claude, while achieving the highest precision at 0.930, displayed a trade-off with a lower recall of 0.778, suggesting that it may miss some instances of fake news. Gemini, although showing potential, exhibited lower performance metrics overall, with an F1-score of 0.675, highlighting the challenges it faces in this task. Llama's performance was particularly notable in terms of recall, achieving a score of 0.947, indicating its robustness in detecting a wide range of fake news instances. On the other hand, ChatGPT offered a balanced performance with an F1-score of 0.903, making it a well-rounded option for real-time detection scenarios. Claude, despite its precision, had a lower F1-score of 0.829 due to its comparatively lower recall. These findings underscore the varying strengths and trade-offs among the models, which will be further examined in the discussion section. In addition to overall performance, we also evaluated the models' effectiveness across different domains, including Politics, Crime, Health, Entertainment, Economics, Sports, International Affairs, and Science. The domain-specific performance, measured by F1-score, is presented in Table <ref>. During the evaluation, Gemini exhibited challenges when tasked with detecting fake news in the Politics and Crime domains. Specifically, the model often responded with statements such as “I can't help with responses on elections and political figures right now. While I would never deliberately share something that's inaccurate, I can make mistakes. So, while I work on improving, you can try Google Search.” As a result, we did not list the performance of Gemini under these two domains. This limitation reflects the model’s current restrictions in handling certain sensitive topics, which impacts its overall effectiveness in those areas. These detailed domain-specific results highlight the varying capabilities of LLMs in detecting fake news across different content areas. This analysis provides a nuanced understanding of where each model excels and where improvements may be needed, offering valuable insights for the development of more specialized and effective real-time detection systems. 
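To make the zero-shot evaluation protocol described in this section easier to follow, the sketch below outlines it in Python: each news item is wrapped in the prompt quoted above, the free-form answer is mapped to a label (treating hedged answers such as "highly likely to be fake/true" as fake/real), and standard metrics are computed. Scoring an undecided answer as a misclassification is one way to operationalise the "failed response" treatment described above. The `query_llm` callable, the keyword heuristics, the choice of "fake" as the positive class, and the dataset format are placeholders and assumptions, not details of our actual tooling.

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

PROMPT = ("I will give you some news; please determine whether it is real news "
          "or fake news.\n\nNews: {news}")

def map_response_to_label(response: str) -> str:
    """Map a free-form LLM answer to 'fake', 'real', or 'failed' (simplified heuristic)."""
    text = response.lower()
    if "fake" in text or "false" in text or "misleading" in text:
        return "fake"
    if "real" in text or "true" in text or "accurate" in text:
        return "real"
    return "failed"   # e.g. "I don't know" or a refusal

def evaluate(news_items, gold_labels, query_llm):
    preds = []
    for news, gold in zip(news_items, gold_labels):
        answer = query_llm(PROMPT.format(news=news))
        label = map_response_to_label(answer)
        if label == "failed":
            # An undecided answer is scored as a misclassification,
            # reflecting a failure to detect.
            label = "real" if gold == "fake" else "fake"
        preds.append(label)
    # "fake" is treated as the positive class (an assumption).
    return {
        "accuracy": accuracy_score(gold_labels, preds),
        "precision": precision_score(gold_labels, preds, pos_label="fake"),
        "recall": recall_score(gold_labels, preds, pos_label="fake"),
        "f1": f1_score(gold_labels, preds, pos_label="fake"),
    }
```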
§ DISCUSSION The results of our experimental evaluation offer insights into the limitations of traditional offline models and the advantages of LLMs in real-time fake news detection. This section explores these findings, emphasizing the areas where LLMs excel and their potential for advancing misinformation detection. Our study revealed substantial limitations in the ability of traditional offline models, such as MDFEND and the Multimodal Fake News Detection model, to adapt to the rapidly evolving nature of real-time news. These models were originally designed to perform well on static datasets, where the characteristics of the data remain relatively consistent over time. However, when applied to real-time news, where new narratives and misinformation patterns can emerge unpredictably, these models struggle to maintain accuracy and relevance. This challenge is particularly pronounced due to the distributional shift between the static training data and the dynamic, real-time data that these models encounter in practice. As fake news evolves in both content and dissemination tactics, the static nature of offline models becomes a significant drawback. Lacking the ability to continuously learn from new data streams, these models are less effective at detecting emerging forms of misinformation that deviate from the patterns observed during training. This limitation was evident in the lower performance metrics observed during our evaluation, particularly when contrasted with the more adaptive capabilities of LLMs. In contrast, LLMs demonstrated a robust ability to manage the complexities of real-time fake news detection. The zero-shot evaluation approach employed in this study allowed us to assess the inherent capabilities of LLMs without the need for task-specific fine-tuning. The results showed that LLMs like ChatGPT, Claude, Llama, and even Gemini, despite some limitations, can achieve high accuracy and F1-scores, effectively handling novel and evolving information. One of the key advantages of LLMs is their ability to provide nuanced and context-aware responses, unlike traditional models which are typically restricted to binary outputs—classifying news as either true or false. LLMs, on the other hand, can offer more detailed assessments. For instance, in evaluating the claim that "The 9th Circuit Court of Appeals just ruled Covid vax mandates unconstitutional," an LLM like ChatGPT not only identified the claim as false or misleading but also provided a detailed analysis. The model highlighted specific reasons for its conclusion, such as the lack of recent rulings, the nuanced nature of past court decisions, and the misleading use of terms like "just ruled." Additionally, the LLM flagged red flags indicating the news item’s likely falsehood, including oversimplification of legal issues, lack of specific details, and the absence of corroboration from credible sources. This level of detail and context-aware assessment goes beyond what traditional models can offer, providing users with a richer understanding of why a piece of news might be considered false. Moreover, LLMs can leverage their extensive pre-training on diverse datasets to understand and analyze the context of the news, considering factors such as the source, the language used, and the broader socio-political environment in which the news is situated. This enables LLMs to detect more subtle forms of misinformation that may not be immediately apparent to traditional models. 
For example, an LLM might recognize that a piece of news is highly likely to be fake based on its resemblance to previously debunked stories, even if the specific content is new. Another significant advantage of LLMs is their ability to generate explanations for their classifications. In our experiments, LLMs occasionally provided reasoning behind their predictions, such as pointing out inconsistencies in the narrative or referencing known facts that contradict the news content. This explanatory capability is a powerful tool for users who need to understand not just whether news is fake, but why it might be considered so. The ability to generate such explanations not only aids in enhancing user trust but also contributes to the broader effort of improving the transparency and accountability of AI systems in sensitive tasks like misinformation detection. However, it is important to note the limitations faced by certain LLMs, such as Gemini, which exhibited challenges when dealing with politically sensitive topics. The model's tendency to respond with disclaimers, such as its inability to provide assistance with political content, highlights the need for further refinement in handling a broader range of topics, especially those that are critical in the context of misinformation. In summary, while traditional offline models face significant challenges in adapting to the real-time detection of fake news, LLMs present a promising alternative with their adaptability, nuanced understanding, and ability to provide context-aware responses. These findings suggest that LLMs could play a pivotal role in advancing the field of misinformation detection, particularly in real-time applications where the ability to quickly and accurately identify false information is crucial. § CONCLUSION The findings from this study suggest that LLMs hold significant promise for enhancing real-time fake news detection. Their adaptability, contextual awareness, and ability to provide detailed and nuanced assessments make them a valuable tool in the fight against misinformation. However, it is also important to recognize that while LLMs performed well in this study, their application in real-world scenarios will require careful consideration of factors such as computational resources, deployment costs, and the need for continuous updates to maintain their effectiveness. Future research should explore the integration of LLMs with traditional models, leveraging the strengths of both approaches to create hybrid systems that can provide both high accuracy and contextual explanations. Additionally, there is a need for ongoing evaluation of LLMs in diverse and rapidly changing information environments to ensure that they continue to perform effectively as new forms of misinformation emerge. In conclusion, while traditional offline models face significant challenges in adapting to real-time fake news detection, LLMs offer a promising alternative that combines high performance with the ability to provide richer and more informative outputs. As the landscape of misinformation continues to evolve, the deployment of LLMs in this context could play a crucial role in mitigating the spread of fake news and enhancing the quality of information available to the public. § ACKNOWLEDGMENT
http://arxiv.org/abs/2409.03405v1
20240905104227
Continuous risk assessment in secure DevOps
[ "Ricardo M. Czekster" ]
cs.SE
[ "cs.SE", "cs.CR" ]
Continuous risk assessment in secure DevOps Ricardo M. Czekster ========================================================================================= § ABSTRACT DevOps (development and operations) has significantly changed the way organisations overcome deficiencies in delivering high-quality software to production environments. Recent years have witnessed increased interest in embedding cybersecurity into DevOps in an approach dubbed secure DevOps. However, as the practices and guidance mature, teams must consider them within a broader risk context. We argue here how secure DevOps could profit from engaging with risk-related activities within organisations. We focus on combining Risk Assessment (RA), particularly Threat Modelling (TM), with security considerations applied early in the software life-cycle. Our contribution provides a roadmap for enacting secure DevOps alongside risk objectives, devising informed ways to improve TM, and establishing effective security underpinnings in organisations focusing on software products and services. We aim to outline proven methods from the literature on the subject, discussing case studies, technologies, and tools. The paper presents a case study of a real-world-inspired organisation employing the proposed approach, followed by a discussion. Enforcing these novel mechanisms centred on security requires investment, training, and stakeholder engagement. It requires understanding the actual benefits of automation in light of Continuous Integration/Continuous Delivery settings that improve the overall quality of software solutions reaching the market. § INTRODUCTION The modern Software Development Life-Cycle (SDLC) evolved from the strict Waterfall methodology, or specialisations like the V-model (which focused on testing at each phase), to modern Agile approaches, namely eXtreme Programming (XP), Scrum, Kanban <cit.>, and hybridisations <cit.> where organisations `cherry pick' practices they believe contribute to productivity. Stakeholders must ensure an adequate end-product level-of-service reaching customers through stringent observance of quality properties that prevent serious defects or protect users from malicious behaviours. Examples are performance, usability/intuitiveness, energy efficiency, and other “-ities/-ilities”, ie, availability, reliability, integrity, maintainability, and agility, in a non-exhaustive list <cit.>. We focus here on another important quality property: security. This concern stands out as a crucial “non-requirement” <cit.>, ie, everyone simply assumes that a system or service is automatically safe and secure. Security (used interchangeably here with cybersecurity)[This work entails offering protections to users at any level, ie, cyber-physical, encompassing any malicious attempt not anticipated by requirements.] entails observing the so-called CIA triad: Confidentiality, Integrity, and Availability <cit.>. Developers do not wish to sit down and read risk-related documentation at a higher abstraction level (ie, management, governance, leadership). They want to start thinking about the solution straight away, usually by means of Threat Modelling (TM) of systems and services, using what they know, eg, Data Flow Diagrams (DFD), Attack/Threat Trees, Unified Modelling Language (UML) diagrams (Interaction Diagrams, and a host of other diagrams), and so on. A modern approach to the SDLC is DevOps <cit.>, the idea of crafting solutions that span development to deployment (operations).
The four agreed principles are Culture, Automation, Measurement, and Sharing (CAMS) <cit.>. One distinct variant is secure DevOps that aims to seamlessly embed security into the SDLC through early TM and Shift-Left Security <cit.>, also called Start-Left <cit.>[Note that these buzzwords have been abused by vendors, thus significantly losing its core meaning, which is to embed security principles early on.]. Secure DevOps undoubtedly poses challenges for adoption in software teams as discussed by <cit.> and <cit.>. Risk Management (RM) and Risk Assessment (RA) cannot be detached from these considerations as they belong to a broader risk-based approach within organisations that allocate resources to combat potential threats, remediate cyber-attacks, and make sure security controls are working effectively, among other tasks. We argue here, however, the need for understanding the context and motivations on how RM/ RA intertwines with TM. We provide an overview of the general process for understanding the interplay among RM, RA and TM in Figure <ref>. It outlines how the approaches are inter-related and the geographically distributed institutions behind the initiatives (where leaders are currently the US, the UK, and the European Union). TM can be seen as a RA technique within a broader RM context. It has gained significant traction in software development community as it provides means and tools to understands most-likely threats to systems and services. As motivation, we have identified inherent difficulties of working with domain experts across fields when enhancing the cybersecurity posture of software-based solutions. For instance, one setting may have risk analysts (broader risk panorama in an organisational level), programmers/testers, software architects, governance, compliance and security officers (some with coding skills, others only with security/privacy domain expertise), and so on, in heterogeneous settings in terms of skill-sets and capabilities. We outline next our contributions: * Thorough discussion highlighting how secure DevOps practitioners should incorporate Shift-Left Security practices in a broader risk context and how these elements are intermixed altogether. * Review on how RM/RA approaches mixes with secure DevOps and how developers and risk analysts could combine efforts while embracing Continuous Integration/Continuous Delivery-Deployment (CI/CD). * Showcase the importance of considering risk management, risk assessment and threat modelling approaches in software development life-cycle as proposed in secure DevOps contexts. * Perspective on joint team/ management/ customer about the potential overhead imposed by new features entering the Product Backlog and the inherent issues to take into account in terms of effort and feasibility aligned with budgetary constraints. * Comprehensive examination of beneficial practices to enhance the overall software quality requirements with focus on cybersecurity of features reaching production environment within CI/CD mechanisms and Application Security Testing (AST) tools and methods. To the extent of the literature survey presented here, we were unable to find work outlining the desired cross-fertilisation of RM/RA with TM in secure DevOps contexts with effective practices and recommendations. One work aligned with this one was proposed by Dupont et al. (2022) <cit.>, where they provided a discussion of mixing RA with secure DevOps, however, the approach discussed here goes further in providing context for RM/RA altogether. 
Another closely aligned work was discussed by Zografopoulos et al. (2021) <cit.>, where they mixed RA and TM in a cyber-physical energy systems context. The sheer amount of documentation, best practices, compliance, recommendations, guidance on TM/RA, etc., hinders productivity and slows feature development/deployment into production environments. We stress that this paper does not aim to be an authoritative cybersecurity “must-follow” account of how to best employ RM, RA, and TM within organisations. It serves as a compilation of related documents or as guidance to provoke further in-depth risk related investigations and matching approaches to organisational objectives. As is the case for many security publications, there is no `silver bullet', `panacea', enforced checklist[The Open Worldwide Application Security Project (OWASP) does provide a broad and interesting list of so-called cheat sheets: <https://cheatsheetseries.owasp.org/>, with useful steps to follow.], or general rule-set to blindly follow and expect systems and services to be automatically secure and protected from threat agents. This work is organised as follows. Section <ref> sets the context and Section <ref> discusses how to incorporate effective TM/RA in secure DevOps. In Section <ref> we present our recommendations and guidelines with practical implications. Finally, Section <ref> delineates our conclusion, a roadmap for adopting secure DevOps altogether, and future work. § CONTEXT: TACKLING RISKS Basic security principles point out to practitioners that any attempt to abuse a system or service relates to the so-called CIA triad: Confidentiality, Integrity, and Availability. Modern authors <cit.> include Authenticity, Accountability, Privacy, Trustworthiness, and Non-Repudiation, to factor in requirements for auditing, forensics, and uniquely identifying individuals. The sheer amount of risk and security related guidance, recommendations, and standards for risk management (RM) and risk assessment (RA) is overwhelming, especially for inexperienced stakeholders. Focusing on the most significant ones, there are ISO 31000:2018 tackling risk management (and the accompanying ISO/IEC 31010:2019 – risk assessment techniques), ISO 27005:2022 for information security risk management, the European Network and Information Security Agency (ENISA) RM/RA and interoperability documents, NIST SP 800-37 for the Risk Management Framework for Information Systems and organisations, NIST SP 800-30 <cit.>, a Guide for Conducting Risk Assessments from the NIST Risk Management Framework (RMF), and NIST SP 800-53, Security and Privacy Controls for Information Systems and organisations. RM and RA methodologies share concepts; for example, Figure <ref> shows a juxtaposition among ISO 31000, OCTAVE <cit.>, NIST 800.30r1, and the Process of Attack Simulation and Threat Analysis (PASTA) <cit.>. In ISO 31000:2018, risk is the effect (ie, deviation from expected) of uncertainty on objectives. It focuses on Framework (governance, leadership, commitment, improvement), Principles (value creation & protection), and Process (context, risk communication, risk assessment – risk identification, analysis, evaluation – risk treatment, and reporting/monitoring risk in a continuous fashion). The document defines risk sources, events, consequences, likelihood, impact, and controls. Additionally, NIST has conceived the Cybersecurity Framework[Link: <https://www.nist.gov/cyberframework>.]
(CSF 2.0 – Feb/2024), a document that provides guidance for managing cybersecurity risks that could be integrated with previous NIST related RA methodologies. The CSF core outlines functions such as Govern, Identify, Protect, Detect, Respond, and Recover. As a noticeable difference from previous versions, it now recognises governance as a dimension: “The GOVERN Function supports organisational risk communication with executives. Executives’ discussions involve strategy, particularly how cybersecurity-related uncertainties might affect the achievement of organisational objectives”. The UK's National Cyber Security Centre (NCSC) and the newly created (2024) National Protective Security Authority (NPSA) offer RM/RA recommendations. NCSC offers guidance on RM that is inspired on ISO 27005 with the provision of a so-called RM Toolbox encompassing RM Information, RA Approaches (system-driven and component-driven), Assurance, and Tools & Methods (which lists Attack Trees, TM, and Scenarios). It offers a basic guidance on RA for people without experience in risk analysis and also on basic TM and Attack Tree modelling. It has published the NCSC Cyber Assessment Framework (CAF) guidance as embracing UK's National Cyber Strategy, a new initiative for improving government cyber security. NPSA has devised a document called Protective Security Risk Management, a two-page guidance outlining eight steps for conducting a broad RA on assets and information management systems. For NPSA, “risks are identified threats or vulnerabilities, aligned to assets, that have been assessed for their likelihood (of the threat event occurring) and impact.” Figure <ref> conveys concepts and notions explained herein tangent to RM, RA, and TM, among other (it expands the key notions as presented in Figure <ref>). In terms of RA methodologies and risk computation formulas, <cit.> discuss classes and characteristics of selected approaches. It focuses on the following RA methodologies: * Expression des Besoins et Identification des Objectifs de Sécurité (EBIOS) * MEthod for Harmonized Analysis of RIsk (MEHARI) * Operationally Critical Threat and Vulnerability Evaluation (OCTAVE) and variants (OCTAVE Allegro, OCTAVE-S) * IT-Grundschutz * Metodología de Análisis y Gestión de Riesgos de los Sistemas de Información (MAGERIT) * Central Computing and Telecommunications Agency Risk Analysis and Management Method (CRAMM) * Harmonized Threat Risk Assessment (HTRA) * NIST.SP 800 * RiskSafe * CORAS It is worth mentioning that some methods are obsolete, as they have not been used in any modern risk assessment application domains in recent years, the case for example of CRAMM. More recently, <cit.> proposed a RA approach entitled Yet Another Cybersecurity Risk Assessment Framework (Yacraf). Its objective is to help organisations with more decision support additionally offering a risk calculation formalisation.Another framework left out the analysis was the Factor Analysis of Information Risk (FAIR™) <cit.>[Additionally, consult Open FAIR™  Risk Analysis Process Guide, Version 1.1 at <https://publications.opengroup.org/g180>, published by the Open Group.], that aims to provide quantitative means for risk assessment tailored for information security. Another interesting outcome in <cit.> is an attempt to amalgamate RA's phases altogether within four general items: 1. Preparation (ISO 31000:2018 calls this Context), 2. Risk Identification, 3. Risk Analysis, and 4. Risk Evaluation. 
As a matter of fact, a myriad of proposed RA throughout the years presents these four phases, perhaps with minimal differences. OWASP has proposed a Risk Rating Methodology for computing risk-based on guidance available in NIST 800-30, HTRA, and Mozilla's Risk Assessment Summary and Rapid Risk Assessment. It is applicable to online applications requiring CIA considerations and uses the standard risk computation given by[For additional formulas and discussion, please consult <cit.>.]: Risk = Likelihood * Impact This formula serves as the basis for many quantitative risk analysis in the literature. A hybrid approach is Threat Analysis and Risk Assessment (TARA) widely used in the automotive industry alongside ISO 21434:2021 (Road vehicles – Cybersecurity engineering) <cit.>. Not to confuse with Threat Assessment and Remediation Analysis (TARA) by MITRE Corporation[Link: <https://www.mitre.org/news-insights/publication/threat-assessment-and-remediation-analysis-tara>.] that specialises in the identification and ranking of attack vectors based on assessed risk <cit.>. This approach has been used in the US under military, air force, and naval applications with success. Intel has come up with an approach called Threat Agent Risk Assessment (TARA), however, there is inconsistent bibliography about it, so we will not focus on explaining in here. RA has been applied to several contexts and applications, for instance, to industrial contexts <cit.>, Smart Homes <cit.> and Smart Buildings <cit.>, critical infrastructure <cit.>, and energy systems <cit.>. For addressing risk severity, a host of techniques are available, for instance, Common Vulnerability Scoring System (CVSS), EPSS (Exploit Prediction Scoring System), Known Exploited Vulnerabilities (KEV), and Cyber Risk Scoring (CRS). The Software Engineering Institute (SEI) at Carnegie Mellon University/US has produced a White Paper outlining 12 available methods for TM[Link: <https://insights.sei.cmu.edu/blog/threat-modelling-12-available-methods/>.]: STRIDE, PASTA, LINDDUN, CVSS[Observe that CVSS is not considered a RA method, but a risk severity scoring system.], Attack Trees, Persona non Grata, Security Cards, hTMM (Hybrid TM Method), Quantitative TMM, Trike, VAST (Visual, Agile, and Simple Threat) modelling, and OCTAVE[Note that LINDDUN focuses on privacy related threats whereas OCTAVE is a full-fledged and established RM/RA approach, and listing it as TM begs further discussion.]. More recently, an addition to STRIDE was the STRIDE-LM (Lateral Movement) Threat Model (to account for LM being defined as a way of “expanding control over the target network beyond the initial point of compromise”.)[Link: <https://csf.tools/reference/stride-lm/>.] The Common Criteria for Information Technology Security Evaluation (known simply as Common Criteria or CC) is an international standard (ISO/IEC 15408) for cybersecurity[Link: <https://www.commoncriteriaportal.org/index.cfm>.]. Figure <ref> shows an adaptation of CC presenting major actors (Owners, Threat Agents), and relationship with Risks and Assets. The figure helps understanding cybersecurity concepts by non-experts as it outlines the key relationships among Owners, Threat Agents, and Threats leading to Risks over Assets. It also highlights how developers might introduce vulnerabilities into systems and services, ie, when patching and updating software, among other tasks. 
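To make the Likelihood * Impact computation above concrete, the following minimal Python sketch rates a single threat in the spirit of the OWASP Risk Rating Methodology; the factor names, the 0–9 ordinal scales, and the severity cut-offs are illustrative assumptions rather than the normative OWASP tables.

from statistics import mean

# Illustrative 0-9 ordinal scales and thresholds in the spirit of the OWASP
# Risk Rating Methodology; factor names and cut-offs are assumptions, not the
# normative OWASP tables.
def band(score):
    if score < 3:
        return "LOW"
    if score < 6:
        return "MEDIUM"
    return "HIGH"

def rate_risk(likelihood_factors, impact_factors):
    likelihood = mean(likelihood_factors.values())
    impact = mean(impact_factors.values())
    return {
        "likelihood": (round(likelihood, 1), band(likelihood)),
        "impact": (round(impact, 1), band(impact)),
        "risk_score": round(likelihood * impact, 1),  # Risk = Likelihood * Impact
        "severity": f"{band(likelihood)} likelihood x {band(impact)} impact",
    }

if __name__ == "__main__":
    print(rate_risk(
        likelihood_factors={"skill_required": 4, "opportunity": 7, "ease_of_discovery": 6},
        impact_factors={"loss_of_confidentiality": 8, "reputation_damage": 5},
    ))

In practice, teams would replace the illustrative factors and thresholds with whatever their chosen RA methodology mandates; the point is that the rating becomes repeatable and auditable once it is expressed as code.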
§.§ Discussion One aspect shared by many of these methodologies, frameworks, and standards is that they attempt to be generally applicable to a host of situations requiring close attention to risks, threats, and vulnerabilities over assets. It is a known fact that the security community has grown considerably throughout the years and different ways of tackling risks have emerged, as seen in ISO/IEC, NIST/US, NCSC/UK, ENISA/EU, and many other collaborations out of research groups, companies, and individuals. To help our readers choose methodologies and approaches for risk, we offer the following guidance: * If one is simply overwhelmed by the sheer amount of available documentation: start with the CC, which is useful to understand the tasks for tackling risks over assets under threats in organisations. The work by Tarandach and Coles (2020) <cit.> also provides a simple view of security terminology aligned with the core ideas behind the CC (adding the concepts of system, data, value, and functionality). * At an organisational level, one will need to select a proper RM methodology out of the available options. * The ISO 31000:2018 standard strives for conciseness: it has about 17 pages outlining risk and surrounding concepts in a simple-to-understand fashion. * For choosing a RA methodology, start by selecting the geographic region aligned to your objectives and budget. Options are 1. US (NIST); 2. UK (NCSC); 3. Europe (ENISA) – as listed, there are approaches based on Spain, France, and Germany, among others. They have free and ready-to-use guidance – mind that some RA methodologies have licensing and/or support costs (for instance, OCTAVE, IT-Grundschutz, MAGERIT, CRAMM, and RiskSafe). § INTERMIXING RM, RA, AND TM IN SECURE DEVOPS Secure DevOps rapidly organised itself around DevOps as the need to insert security into software as early as possible became clear. The community has come up with the DevSecOps Manifesto[Link: <https://www.devsecops.org/>.] outlining practices and recommendations. Similar approaches have also competed for attention, as is the case of SecDevOps <cit.>, secure DevOps <cit.> (which is the terminology followed here), or even Rugged DevOps[Link: <http://ruggedsoftware.org/>.]. The work by Lombardi and Fanton (2023) <cit.> goes one step further, arguing that secure DevOps is not enough: teams require effective shift-left security practices in an approach they refer to as CyberDevOps. There is a myriad of challenges when working with secure DevOps and TM, as discussed by Jayakody and Wijayanayake (2021) <cit.>, who outlined in a literature review: organisational culture changes to effectively absorb secure DevOps ideas, difficulties when finding experienced professionals, lack of management support, adopting the process, changing the mindset required for secure DevOps, issues for replicating complex technology environments, lack of collaboration, concerns when establishing a development culture, inherent complexities in software development and mismatch with secure DevOps, legacy systems, and the increased project costs associated with it. The work by <cit.> outlined issues in keeping up project delivery velocity whilst maintaining TM, and the associated challenges of updating models to better scale them with automation and traceability features in secure DevOps. Incorporating security into software engineering has been a concern discussed early in computing <cit.>, with ramifications for system design practices and mechanisms for building systems “the right way” from the onset of projects <cit.>.
More recently, attention has been diverted into modern approaches, usually led by secure DevOps <cit.> and incorporating it with TM practices and resources <cit.>. The work by Battina (2017) <cit.> discussed best practices for incorporating security into DevOps, highlighting: 1. integrating with identity and access management and (secure) code review, 2. fitting it with governance, 3. effective vulnerability management, 4. automation, 5. validation, 6. network segmentation (more technical), and 7. use least privilege approaches. The book by Shostack (2014)<cit.> has thoroughly discussed TM in practical terms and how to design systems for security with applications to real-world systems. One outcome was to discuss the overall TM process and propose four questions that each process must answer: * What are we working on? * What can go wrong? * What are we going to do about it? * Did we do a good enough job? This has become a key tenet of the TM Manifesto[Link: <https://www.threatmodellingmanifesto.org/>.], among other substantial elements for establishing the necessary context to sustain a TM-based process within organisations. In a high-level perspective, note that the same questions could be asked by risk-based stakeholders without any loss; only the scope is broader. Tarandach and Coles (2020) <cit.> compiled a hands-on approach to TM, discussing the principles behind the approach and general applicability. For the authors, TM is “the process of analysing a system to look for weaknesses that come from less-desirable design choices”. The idea is to consider these deficiencies before developers append features to systems. Within the TM Manifesto, more recently there has been discussions on establishing secure DevOps in organisations through the use of so-called TM Capability[Link: <https://www.threatmodellingmanifesto.org/capabilities/>.]. It consists of devising a modular approach to establish a TM program in organisations, by describing measurable and provable practices that translates to actionable objectives. Another approach that aims to align practical secure DevOps outcomes and make them a reality within software companies is the DevOps Research and Assessment (DORA)[Link: <https://dora.dev/>.], or simply “DORA Metrics”, outlined in <cit.> that investigated metrics for high performance teams. Secure DevOps metrics were also investigated by Prates et al. (2019) <cit.>, where authors identified the following ones: 1. Defect density, 2. Defect burn rate, 3. Critical risk profiling, 4. Top vulnerability types, 5. Number of adversaries per application, 6. Adversary return rate, 7. Point of risk per device, 8. Number of CD cycles per month, and 9. Number of issues during Red Teaming drills. In many organisations there is a mismatch between software development teams and business managers <cit.>. Going beyond this point we posit that this disconnect is even bigger, as risk analysts are also not participating in the overall quality of products being delivered to customers. We expand these gaps as discussed in so-called “continuous software engineering” <cit.> to append continuous RM/RA activities that are deemed crucial for aligning business objectives to product delivery, as depicted in Figure <ref>. We argue that the software engineering community is (rightfully) interested in pushing systems towards innovation through experimentation, however, there are additional concerns that must be enforced to ensure customer assurances to their security and privacy. 
That is why in the figure there is the “Continuous caution” recommendation, as paraphrasing <cit.>: “In our experience, developers live at the speed of deployment. Architects set the speed of progress. Security people run at the speed of their caution” (emphasis added). The notion of tackling continuous tasks is crucial in the SDLC, so we clarify secure DevOps with respect to CI/CD activities <cit.>: * Continuous Integration: crucial activity identified in eXtreme Programming that is triggered by series of interconnected phases, ie, compilation, unit, acceptance, or integration tests, code coverage analysis, adherence to code style conventions, and building solutions. It refers to having releasable software artefacts and deploying them to some environment (eg, pre-stage, testing, etc.) not necessarily to customers. * Continuous Deployment: consists on releasing software builds to users automatically. * Continuous Testing: integrate testing related activities as close as possible to coding, quickly fixing errors and defects while integrating code bases. The process is automated, and practitioners assign test cases prioritisation to speed up the overall process. * Continuous Experimentation: this is the cornerstone for quickly understanding deficiencies in designs and learning `what works' and `what doesn't work'. Inasmuch as one must focus on principles not on technologies as stressed out by the security community as a plethora of approaches, tools, and mechanisms are in place to help stakeholders, we shall here embark on a technical discussion pointing out how to harden applications altogether. §.§ Case study for framing risk in a proof-of-concept: ACME Corp. We turn our attention to devising a proof-of-concept with a potential working example outlining our proposed approach. Note that we have taken this route because organisations will not disclose their risk related activities (nor they should) due to security reasons. That is the main reason as to why we are devising an exercise on risk that captures the fundamental elements described herein to showcase our RM/RA plus TM approach. As a disclaimer, we represent here a fraction of (what should be) a comprehensive RM/RA approach, to highlight our proposition's benefits to stakeholders. We are focusing on technologies and innovation as major business activity, and purposely neglecting broader risk relevant tasks (that should be addressed as well) such as natural events, general theft, among other. Description: ACME Corp. (a fictitious enterprise) is a for-profit hardware/software organisation with global outreach with offices in London, New York, and Singapore. The company provides wearable devices (ie, Internet-of-Things – IoT) tailored to Industry 4.0 (I4.0). It has 1,000 employees among staff, analysts, developers, and hardware/software (hw/sw) architects. The company's risk appetite is high; they want to disrupt the wearables market with their products and are willing to take risks. Recent successful cyber-attacks perpetrated against their virtual infrastructure has pushed management to professionalise their security underpinnings altogether. After thorough analysis on the attacks they discovered malicious activities performed by competitors posing as insider developers with IT-level credentials. They have hired additional team of experts across sectors (management, operations, research, and development) and started taking security seriously with continuous assessments and training (just to start). 
ACME would like to go one step further and raise awareness on a combined risk approach encompassing different company sectors, where risk analysts inform solutions development through systemic threat modelling. Figure <ref> shows a simplified view of ACME Corp. detailing sectors and risk related roles. It depicts some internal entities tailored for tackling risk oversight that ensures and establishes controls for addressing threats and vulnerabilities over assets. In broader terms, here are the TOP 5 issues they chose to focus and, for each, a mitigation option to reduce the impact of the issue: * Industrial espionage by foreign and domestic agents. * Zero-copy policy on premises. No drives on machines. No personal computers. Limited Internet. * Insider threats leading to lost in revenue or illegal sharing of corporate secrets. * Vetting prospective candidates exhaustively. Enacting whistle-blower policies and rewards. * Loosing staff members to competition (lacking staff retention mechanisms). * Offer competitive salaries above the market and participation in company's dividends. * Secure SDLC issues giving rise to vulnerabilities. * Strong enforcement of Secure Code Review. * Massive dataset generation overwhelming or impairing analysis (ie, data deluge). * Effective tooling and employing advanced data analysis technique with high-skilled experts. Risk management overview: The management team opted for a hybrid ISO 31000:2018 and NIST 800:30r1 approach to RM/RA. They want to structure their approach and understand (or at least map) most likely uncertainties as they see might occur. Starting with RM, they compiled the following ideas: * Value: innovative (cutting-edge) solutions in hardware (embedded computing, wearables/IoT) and software (management systems, web interfaces, and micro-controllers) customised for I4.0. Intellectual property and patents, both granted and prospected. * Assets: researchers, developers, equipment, HW/SW designs, wearable/IoT on-site (being prepared) and in-customers (deployed). * Framework: as leadership, we identify the CEO, CTO, CSO (Chief Scientific Officer), and CISO (Chief Information Security Officer). These roles will enforce the commitment towards risk related issues and propagate the required protections to their business model. Evaluation, Implementation, and Design will be constantly updated every month, in a so-called `Risk Orientation Meeting', with key management roles attending with invited personnel depending on agenda. * Principles: everyone is accountable for each one's respective products (hw or sw). They must account for security measures in place to protect the staff under their responsibility as well as machinery, explaining and teaching on novel approaches to thwart attacks or newest mitigation strategies across the board (involving all sectors). As they want to protect their innovations, managers are to develop and explain controls to retain staff, attract new talent, factor in potential insiders (industrial espionage), and establish measures for malicious detection. * Process: they want to be able to map, understand, and communicate risk in a fast-pace fashion. That is why they used their own patented wearable/IoT technologies deploying it into staff's uniforms and equipment. They want to be able to track and cope with uncertainties and prevent issues from happening based on lessons learned and thorough risk analysis using quantitative and qualitative measures. Enacting continuous Threat Modelling. 
Controls positioned over the infrastructure capture noticeable events storing them in information systems (SIEM, Firewall, Intrusion Detection System, Issue Tracker, and Event Loggers), complementary to application logging. Additionally, they have established an Oversight Risk Board that analyses risks and come up with revisions on the process adapting it to modern times by inspecting latest Advanced Persistent Threats (APT) incursions and motivations particular to this industry. They respond to upper management, ie, CEO, CTO, CSO, and CISO. This mechanism will have at most four appointed members (according to expertise) to expedite decisions and achieve faster consensus on every activity involving risk within organisation's sectors. Risk assessment: As mentioned, the approach is hybrid: ISO 31000:2018 ([A]) and NIST 800.30 ([B]). * Process [B]: this step will map threats using common techniques and modelling. * Prepare [A] & Identification [B]: align business objectives with overall RA process. Understand major sources of risks (as mentioned, ie, espionage, insiders, staff retention, Secure SDLC principles, and data deluge as major risks, other risks and uncertainties could follow). * Conduct [B]: produce a list of security risks with prioritisation to inform decisions. Perform threat and vulnerability analysis, identifying impact, likelihood, and associated uncertainties. Conduct Threat Modelling to map relevant events. Identify crucial systems and security controls in place. * Communicate [A,B]: share results and relevant information from previous phase (conduct) to key stakeholders to guide decision making process. * Maintain [A,B]: keep risk assessment current and updated with latest information about threats and vulnerabilities. * Model [B]: Map out systems, services, throughout the organisation listing threats and vulnerabilities as discovered in previous phases, aligning with business objectives and understanding risk prioritisation over assets. * Factors [B] & Analysis [A]: the organisation conducts data-driven analysis over identified threats and vulnerabilities in assets, according to likelihood, impact, severity, and priority. Assess events that could occur to assets. * Assets [A]: map current assets within the organisation. * Vulnerabilities [A,B]: understand potential weaknesses and places where attacks can take place. * Threats [A,B]: analyse most likely threats with respect to assets. * Event likelihood [A,B]: determine how likely the event is bound to happen. * Impact [A]: address how the event might impact the organisation. * Approach [B]: data-driven (quantitative) with subjective judgement on non-quantifiable (or intangible) data (qualitative). For checking out the SDLC and adherence to Shift-Left Security practices, combined with the use of DORA Metrics to measure teams' performance and response attributes related to mitigation efforts. * Evaluation [A]: make risk related decisions on available information regarding the assessment, comparing with previous analysis, setting additional actions to conform, conduct risk treatment steps to tackle identified risks, and review security controls in place to account for reducing risk to assets. Threat Modelling: this step is intertwined with previous one with respect to identifying most likely threats and vulnerabilities aligned to business objectives. It is crucial to employ modelling that teams with different backgrounds can understand and communicate to others. 
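Before turning to how ACME's security team frames its threat modelling in detail, the hybrid [A]/[B] assessment steps above can be illustrated with a minimal risk-register sketch in Python; the assets, threats, and 1–5 scales below are invented for ACME and do not come from any of the cited standards.

from dataclasses import dataclass

@dataclass
class RiskEntry:
    asset: str          # Assets [A]
    threat: str         # Threats [A,B]
    vulnerability: str  # Vulnerabilities [A,B]
    likelihood: int     # Event likelihood [A,B]: 1 (rare) .. 5 (almost certain)
    impact: int         # Impact [A]: 1 (negligible) .. 5 (severe)
    treatment: str = "to be decided"

    @property
    def risk(self) -> int:
        # Standard Risk = Likelihood * Impact on the illustrative 1-5 scales.
        return self.likelihood * self.impact

# Hypothetical entries for ACME; real registers are produced in the Conduct phase.
register = [
    RiskEntry("HW/SW designs", "Industrial espionage", "Weak access control", 4, 5),
    RiskEntry("Source repository", "Insider threat", "Excessive privileges", 3, 4),
    RiskEntry("Wearable firmware", "Supply-chain tampering", "Unvetted third-party API", 2, 5),
]

# Communicate [A,B]: report the highest risks first to guide decision making.
for entry in sorted(register, key=lambda e: e.risk, reverse=True):
    print(f"{entry.risk:>2}  {entry.asset:<20} {entry.threat} ({entry.vulnerability})")

Such a register is deliberately simple: it records the outputs of the identification and analysis steps in a form that both risk analysts and developers can read, version, and revisit at every `Risk Orientation Meeting'.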
Security team strongly suggests adopting TM that provides value to teams and focus on high-level descriptions of systems and possible abuses. Priority is given to products reaching end-customers on critical environments (eg, healthcare), however, security is viewed as everyone's problem, so issues concerning this element are deployed throughout the portfolio, internally and externally. Specialised teams will perform application decomposition analysis over (crucial) HW/SW products to map out specific threats permeating the solutions offered by the organisation. Following the risk proposition outlined in this work, the organisation wants stakeholders to understand business objectives and align all RA and TM altogether for achieving better results. Teams working on any product (at any stage, conception, design, usability, testing, production, etc.) are encouraged to propose new modelling methods or incorporate different existing TM or concepts into the mix. These meetings are open for any person at the organisation, and they are also free to invite any stakeholder with specific expertise to help the TM effort. Teams are expected to tackle CIA+ aspects for any product, system, or service in ACME's portfolio as well as document their processes and update the models altogether. Specific to this case study, the organisation decided to focus on DFD, Attack Trees, and STRIDE for SDLC related activities (they serve as possible indicators of threat hunting activities, no team member should embrace one technique over the other or push teams to adopt one). The software tool-chain (versioning, issue ticketing, security systems, event logging) and supporting systems have entries with cybersecurity notes and links to latest incursions and vulnerabilities catalogues, prioritising problems, and determining severity of each noticeable task to tackle next. As stated, the organisation expects collaborators to keep an open mind about approaches, focusing on potential attack venues instead of specific technologies and modelling alternatives. Dynamics: Enforcing security through risk informed activities throughout organisational sectors: * Alignment to organisational objectives (summary): focused on i) industrial espionage, ii) insider threats, iii) staff retention, iv) Secure SDLC, and v) data deluge (except for iii), activities are highly technological, and security controls must be enacted accordingly, observing CIA+ objectives). * Map current attack surface effectively by means of information systems combined with secure logging. * Controlling data within organisation: authentication and authorisation mechanisms in place to understand access and perform auditing and forensics if ever required. * Shift-Left Security by involving developers and architects in secure programming and design practices from the onset of activities. Match User Stories to Security User Stories that map CIA underpinnings directly into the process. * Test coverage: it continuously tests how much of the code is being tested. * Secure Code Review: experts review all HW/SW code and allow only secure tested code and vouched API to enter the repository. * Observability and secure logging: identifying and alerting malicious activities (as perpetrated by insiders). * Data analytics: de-duplication, processing, treating, analysing, converting, and analysing data, making sure data in transit and in rest are confidential. 
* Quick detection, alerting, and mitigation: allocating response teams if any threat appears and establishing actions to minimise damage and secure the operation. Upper management understands the importance of risk within the organisation and identifies opportunities for continuously improving the overall RM/RA process integrated with TM. Risk analysts are expected to engage across sectors for communicating risks across the board. Security officers engage with risk analysts to align objectives altogether. SDLC developers and architects engage with security officers and risk analysts for improved Shift-Left Security practice. § RECOMMENDATIONS AND GUIDANCE Because of its broad scope, RM indicates risk strategy on an organisational level, outlining leadership, roles, and responsibilities. Acting more locally, RA informs tactics for threats. TM tackles working with devising ways of abusing systems and technologies. §.§ From potential features to concrete features in production Before they become actual features executing in production, raw ideas and wished for functionalities exist only in customer's wish lists or resting in (non-prioritised) backlogs. They must add value to the software solution, have an associated cost and require equally costly resources to be allocated. Figure <ref> showcases this process and outlines the major concerns and decisions required by stakeholders when deciding whether these undefined ideas should become full-fledged User Stories (features) and enter the Product Backlog. And because secure DevOps and Shift-Left Security is supposedly enforced into the process, additional concerns (potentially leading to delays) are also in play. Under such context, team leaders and managers must thoroughly study the feature's impact on their operation before any development takes place. We outline next the journey of such raw ideas until they become operational in production to understand the required effort and budget concerns to factor in. Before we take on this task, let's devise the following assumptions: * As stressed here, teams must understand the RM/RA context to guide their security-based decisions altogether. Relating RA and TM within secure DevOps entails understanding architectural details for software solutions and mapping inherent risks associated with implementation and operations embedding it all with security requirements. * Secure DevOps presumes the productive existence of continuous feed-back loops among stakeholders, ie, customers, team members, security officers, domain experts, and managers. * Project adopted an incremental approach with frequent releases and aiming stakeholder feedback throughout phases. * The team has decided to use, as software development process, Agile approaches, perhaps even considering Scrum, XP or Kanban (to mention some). Under this, decision teams will work with a list of User Stories (appending them into a Product Backlog), breaking them, adding them to Sprint Backlog, and assigning required effort. * Listing vulnerability catalogues to track, observe, and align with the SDLC. * Use of a Versioning system (Git based) employing Trunk Based Development where main branch is the latest software version. * Secure code review in place: security experts review code before they are committed into the versioning system. * Shift-Left Security and early TM: security is under consideration since requirements/plan/design phases and run throughout the SDLC, especially in early phases. 
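As a concrete illustration of the early threat modelling assumed above, the following Python sketch enumerates candidate threats using a simplified STRIDE-per-element mapping over a toy data-flow description; the element names are hypothetical ACME components and the applicability table is a common simplification, not a complete methodology.

# Simplified STRIDE-per-element applicability table; a common simplification,
# not a complete methodology.
STRIDE_BY_ELEMENT = {
    "external_entity": ["Spoofing", "Repudiation"],
    "process": ["Spoofing", "Tampering", "Repudiation", "Information disclosure",
                "Denial of service", "Elevation of privilege"],
    "data_store": ["Tampering", "Repudiation", "Information disclosure",
                   "Denial of service"],
    "data_flow": ["Tampering", "Information disclosure", "Denial of service"],
}

# Hypothetical ACME elements; in practice these come from the DFD drafted
# during the plan/design phase.
dfd_elements = [
    ("Wearable device", "external_entity"),
    ("Telemetry ingestion API", "process"),
    ("Measurement database", "data_store"),
    ("Device -> API channel", "data_flow"),
]

def enumerate_candidate_threats(elements):
    for name, kind in elements:
        for threat in STRIDE_BY_ELEMENT[kind]:
            yield {"element": name, "threat": threat, "status": "to review"}

for candidate in enumerate_candidate_threats(dfd_elements):
    print(f"{candidate['element']:<26} {candidate['threat']}")

The value of such a sketch is not the list itself but that it can be regenerated from versioned design artefacts whenever the architecture changes, which is precisely the kind of automated, early TM assumed in the list above.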
Figure <ref> illustrates the overall process encompassing secure DevOps with respect to SDLC and addressing RA, TM, and Shift-Left Security altogether. Next stage consists of moving forwards into a Secure SDLC approach intermixed with secure DevOps cycle, for instance: * Requirements: identifying, mapping, refining, and reasoning about implementation effort and usefulness (value). * Plan and Design: understanding which modules will be required for change, which teams they might impact for extra testing and integration. * Risk Assessment and Threat modelling on early designs: as mentioned, RA and TM tasks can kick-start early, with drafts of architectural documentation and application mocks (user interfaces) for devising DFD or other models that inform better design decisions to stakeholders. * Implementation (branching from versioning system) and integration with (other) team's source code. * Testing: creating and updating Unit Tests (ie, automated validation), involving team commitment for devising and reviewing through additional implementation. * Application Security Testing (AST): focus on SAST, linting, adherence to code conventions (as set by the team), and Fuzzy Testing. * Software Composition Analysis (SCA): checking external API/libraries, versions, and licensing, as well as checking for vulnerabilities. * Automated Threat modelling: using software artefacts as available in projects to compose TM amenable for analysis and guiding software teams developing high-quality code and adherence to security. * Secure Code Review: security experts comment on code and uphold constraints, suggest changes, and check whether Unit Tests were updated to meet feature's objectives, and compliance to regulation and standards. * Integration: merging code, solving conflicts, re-running tests, integration testing. * Delivery: commit to main branch (deliverable version of feature or set of features) * Internal pentesting: mind that this is a biased approach as one is familiar with the system and might only test what is working, not a comprehensive test suite aiming at finding active vulnerabilities. * Deployment: feature set is pushed into test, pre-production, staging or production environments. * DAST: executing software in a specific environment to track potential sources of vulnerabilities. As mentioned, this process is highly time-consuming and it involves the allocation of costly resources, where the gains and insights are sometimes disputed by teams as this process could be replaced by SAST and testing. * External pentesting: conducted by third-parties (unbiased) where the team is given a report for addressing issues found in the process. * Monitoring: secure DevOps Metrics (DORA based), tracking features, network usage, application logs, system statistics * Maintenance and evolution: overseeing system under execution for quality improvements. §.§ Approaches: Start with Security, Shift-Left Security, or Start-Left The Federal Trade Commission (FTC) in the US devised a White Paper in 2015 <cit.> inviting organisations to “Start with Security”, listing a series of guidance such as 1. Control access to data, 2. Enforce secure passwords and authentication, 3. Store sensitive information securely (in rest and in transit), 4. Segment your network, 5. Grant secure remote access to your network, 6. Apply sound security practices, 7. Make sure your service providers meet your secure standards, 8. Keep security current and address vulnerabilities, and 9. Secure paper, physical media and devices. 
This document is a blueprint for addressing security concerns early in systems design, discussing physical security elements and cyber related concerns. The practice of Shift-Left Security (SLS) is inherently collaborative <cit.>, encompassing a range of stakeholders concerned with the security posture of the organisation. The work by Lombardi and Fanton (2023) <cit.> discussed the need for effective SLS in secure DevOps. The authors argue that their approach dubbed CyberDevOps[Patrick Debois has discussed the “Shades of DevOps Roles” in a post available at <https://www.jedi.be/blog/2022/02/11/shades-of-devops-roles/> (2022), where he outlines how hyped and charged the term DevOps has become.] is more effective than straightforward secure DevOps because they incorporated SCA to deal with nondeterministic environments and tackled vulnerability assessment and compliance by adding another pipeline step in the process. The work by Jimenez et al. (2019) <cit.> discussed SLS in an industrial case study. They discuss a framework for automating a deployment setting for distributed software systems and components. For instance, Fisher (2023) <cit.> advocates employing SLS for preventing costly maintenance and code rewrites that may or may not introduce additional vulnerabilities into applications. The author comments on the benefits of this approach and cite examples for achieving success, for instance, making good choices on the right architecture and other design decisions early whilst taking security into consideration. It includes using institutions trusted by industry for instance, OWASP, that foments training and guidance when hardening applications. Other interesting ways to consider include protecting data flows and database technologies, eliciting requirements on encryption strategy with visible documentation throughout stakeholders, among other suggestions. Figure <ref> shows the SDLC coordinated with Shift-Left Security approaches plus secure DevOps cycle, AST, and tooling. §.§ Additional quality attributes required by software projects One could posit that other software quality properties could also be shift-left in design and planning phases, for instance, usability, maintainability, scalability, modularity, performance, and more recently, authors recognised energy efficiency as an attribute <cit.>. The issue in software development is that teams are almost never sure which dimension is more important than the other, and it requires experience to “get things right the first time”. The decision on which one to focus could impact project's speed and productivity altogether, so they must carefully analysed and aligned with other project stakeholders. These properties are trends and buzzwords that should be taken seriously by software developers, ie, the case of a project's objective to be robust, reliable, resilient, adaptable, survivable, or sustainable. Serious projects address selected quality attributes with actionable tasks that effectively ensures that it addresses the issues and that it sustains it with measurements and metrics. Although this is intrinsically related to software architecture, it has ramifications in secure DevOps <cit.>. The work by Alnafessah et al. (2021) <cit.> discussed quality-aware DevOps, identifying research challenges to improve architectural design, modelling and Infrastructure as Code, CI/CD, testing and verification, and runtime management. 
The latter also recognises the intermixing with emerging trends of Artificial Intelligence (AI) for DevOps, approaches they expect to dominate research in the years to come. When teams are modelling software they must work alongside with software architects that share the product's design and allow security issues to be raised by security officers. With this documentation they could kick-start TM right away and inform development teams of potential vulnerabilities and points to focus on. Usual culprits are input handling, communication, data in rest and in transit, session management, and cryptography, to mention some concerns. These decisions and guidance are related to (future) maintainability and secure code review, where more experienced programmers advise on `dirty tricks' that could lead to unintended exposure, data leaks, or weaknesses. Modern software solutions are inherently complex, dependent on multiple auxiliary libraries and Application Programming Interfaces (API) to allow the seamlessly provision of the solution in testing and production environments. These quality related notions circle back to addressing software complexity, scalability (at the right moment), premature refactoring for performance, and multiple API integration in projects (software inter-dependencies). In terms of scalability, teams must estimate application usage and adjust the architecture to meet this demand. There is a need to break solutions down into more manageable parts by means of so-called application decomposition, helping out establishing adequate modularity with implications on TM and testing. Beck's book <cit.> thoroughly discussed coupling/cohesion, explaining how software design functions and how to address making large changes in software through small incremental steps. With respect to risk and TM, PASTA <cit.> has a step devoted to application decomposition in its threat modelling process. Finally, software evolution is related to maintainability and addressing technical debt <cit.>, ie, a situation where software developers prioritise speed to push changes that are easy (low hanging fruit) instead of really fixing the issue, that invariably takes more time and resources. §.§ Technological guidance and tooling Figure <ref> shows a non-exhaustive list of technologies and practices enabling secure DevOps that improves productivity. It focuses on the ones that are deemed useful by software teams over the host of results over the years as commented out earlier. Embracing secure DevOps in software organisations demand a cultural change that encompasses different sectors and departments, ie, risk analysts, developers, domain experts, and customers involved in decisions. The risk faced by many software organisations is to have unmaintainable bloatware full of unknown vulnerabilities that are susceptible to costly cyber-attacks. To make things worse, if the team does not fully embark in the security oriented process, overtly testing (white,gray,black-boxes) and updating both tests and (threat, data) models, the secure DevOps initiative might fail in the medium to long term. That is why it is important to set up expectations from the beginning, trying to remove any (cultural, communication, ego, etc.) barriers within development teams, aiming a successful operation. 
Additionally, it is expected to align these secure DevOps practices with other stakeholders, namely upper management (CEO, CSO, CTO, CISO[Respectively Chief Executive Officer, Chief Scientific Officer, Chief Technology Officer, and Chief Information Security Officer.]) with risk analysts, domain experts, development/testing teams, and customers (inadequate involvement is detrimental to project success). Note that secure DevOps imposes extra activities upon stakeholders, for instance, setting up tool-chains (choosing most appropriate tools), and configurations, on top of hiring security experts that are able to communicate ideas with members. The project documents should encompass not only architectural and design notes and observations but also RM/RA approaches and methodologies. Finally, note the intentional omission of DAST in the figure. This technique is seen as a time-consuming task and does not yield significant outcomes for secure DevOps practitioners <cit.>. In the secure DevOps community there are recommendations for using tools to automate security readily incorporated into the CI/CD pipeline. However, teams must exercise caution for understanding the need for frequently attesting its `freshness', ie, whether or not they remain being useful in the project. Additionally, it is recommended to use community vouched mechanisms that adds value to the project, and one example is provided by OWASP's cheat sheet series, that addresses multiple concerns for incorporating security in varied scope projects. § CONCLUSION AND ROADMAP FOR INTEGRATING RISK ASSESSMENT IN SECURE DEVOPS Broader risk informed approaches are valuable for understanding and mapping most-likely adversaries and organisational focus on value creation. The motivation for the discussion presented here is due to considering security as an afterthought and simply assume projects seriously consider it through and throughout. The seminal work by Fitzgerald and Stol (2017) <cit.> commenting on continuous-* in software engineering very shyly mentions security, trust, and privacy, focusing instead on development and totally disregarding risk and threat modelling altogether. We stress the need to combine risk approaches and align it with business strategy and objectives in SDLC where secure DevOps and CI/CD must be thoroughly considered in a RA and TM context. Unfortunately, for many organisations, security considerations are not the focus of the approach, and many times are just an afterthought. It is often taken serious whenever subject to actual attacks with negative repercussions. The usual proportion within companies is 90:9:1, meaning a ratio of 90 developers, 9 IT/operation, and 1 security officer or someone interested in cybersecurity. Security personnel should not consider RM and RA independently from TM: these are complementary approaches aimed at ensuring high end-product quality in systems and services to customers. As discussed throughout this contribution, the vast body of knowledge of cybersecurity invites practitioners to focus and not waste time on adopting unproven approaches. That is the main reason why it offers sets of recommendations, good practices, lessons learned, and guidance so security experts approach risks and threats over assets by aligning the investigation with each one's organisational objectives. Secure DevOps is suited for Agile as in Waterfall there is the egregious approach to finish projects and only then start to think about security (“let's do security now” mindset). 
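As a small illustration of the automation discussed in this section, the sketch below shows how a team might wire blocking security gates into a CI stage; the commands are placeholders (plain echo calls) standing in for whatever SAST, SCA, and secret-scanning tools the team has vouched for, since real invocations depend entirely on the chosen tool-chain.

import subprocess
import sys

# Placeholder commands standing in for whatever SAST, SCA, and secret-scanning
# tools the team has vouched for; replace each echo with the real invocation.
GATES = {
    "sast": ["echo", "running static analysis"],
    "sca": ["echo", "checking third-party components and licences"],
    "secrets": ["echo", "scanning for hard-coded credentials"],
}

def run_gate(name, command):
    # A gate passes only if the underlying tool exits with status 0.
    print(f"[gate:{name}] {' '.join(command)}")
    return subprocess.run(command, check=False).returncode == 0

def main():
    failed = [name for name, command in GATES.items() if not run_gate(name, command)]
    if failed:
        print(f"Blocking delivery: failed gates -> {', '.join(failed)}")
        return 1
    print("All security gates passed; the artefact may proceed to delivery.")
    return 0

if __name__ == "__main__":
    sys.exit(main())

The design choice worth noting is that every gate is blocking by default; relaxing a gate should be an explicit, reviewed decision rather than silent configuration drift, otherwise the pipeline degenerates into the `security theatre' criticised later in this work.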
This is why this method is detrimental to modern SDLC as teams append security concerns too late, a practice leading to additional costs. Experienced managers know that adding security into any software project is costly and it slows down team velocity. This is compounded by the team choosing to adopt a complex technology stack that adds up to risk as additional software gets associated, sometimes with minimal security testing. The key is to train your team to quickly adapt to the new secure DevOps reality in a seamless fashion, making them understand the need for establishing the secure oriented culture and the projected future gains of this choosing. The work by Grigorieva et al. (2024) <cit.> discussed five pillars of secure DevOps: 1. CI/CD, 2. Automation and vulnerability scanning, 3. Secure coding practices, 4. Container security, and 5. Shift-Left Security whereas Alnafessah et al. (2021) <cit.> complement the list with infrastructure as code, verification, and runtime management. These authors point out the challenges for it to become a reality in organisations, for instance, cultural shift, tool integration, compliance alignment with regulatory standards, skills gap and professional shortage, and speed vs security balance, ie, security trade-offs in projects. We next complement this discussion with extra points to observe: * Training staff for security: making stakeholders understand that “security is everyone's problem” <cit.> and also identifying potential champions to push forward this `vision' across the project. * Avoid “security theatre”: this notion is for when organisations state that they care deeply about security, however they put few measures in place or allocate small budget to effectively tackle security issues encompassing their attack surface and services portfolio. * Building team trust: establishing a relationship among team members, even if virtually. * Mixing cybersecurity professionals in heterogeneous teams (coders, testers, managers, domain experts): they have a broad understanding of security issues arising in software projects and are able to consider it from the onset of the design and implementation phase. * As described in the literature <cit.>, usual development teams are formed with assigning one security champion to absorb all security related work (sometimes it is not his/her expertise) – when vulnerabilities are found (by any method), they get overwhelmed. * Mapping potential threat actors according to your organisation: which adversaries are most likely to target your setting? What are the common venues employed by these cyber-attackers in your domain? How to deal/track insider threats? * Frequent update of RA and TM: it aims to quickly absorb changes in infrastructure, architecture, and novel attack-venues as sophistication increases. * Understand what you are building: Domain Driven Design (DDD) <cit.>, bringing stakeholders to the development, shared ubiquitous language and problem understanding. * Organically considering Secure by Design (SbD) <cit.> thinking within the development team, exploring more secure alternatives that could result in less maintenance effort in future changes (related to secure code review). * Established secure code review practice: cybersecurity experts review code in search for potential vulnerabilities, future maintenance issues, and `code smells' that could turn into serious defects or costly refactoring. 
* Managerial, ie, related to project management: * Avoid micromanaging software teams: adopt full-Agile approach where the team is aware of its responsibilities without the figure of a `Scrum Master' or `Product Owner' that sometimes delay team's speed[This is highly contentious as the Agile community has been discussing means for effectively embracing a true Agile instance and how the Scrum framework and similar interventions in teams are detrimental to software teams. It is out of the scope of this work to discuss these issues.]. * Effective meetings: pre-defining action items and assigning responsibilities and points for discussion; maximum recommended time spent is 10-15 min per meeting. * Developers need (large) chunks of focus intensive time to be productive. * Core Software Engineering: * Understanding the effects of cohesion and coupling: reasoning how to adequately decompose systems into parts (modules), and how much interaction they should sustain is critical for system's designs that scale (ie, from monoliths to microservices architectures). * Code smells: there is a need to identify good programming practices and differentiate them from bad design choices very early in the process, in combination with code review from more experienced developers and security experts, as mentioned. * Implementation choices (technical): * Trunk-based development: the main branch has the stable version that has passed secure code review successfully. * Effective, validated refactoring (ie, supported by unit testing): making code improvements without changing functionality has been happening for a long time, the difference is doing this without introducing defects into the repository. * Network segmentation as a security feature: it isolates services or features altogether, increasing protections against users. * Speed/Security trade-offs: consider the renowned issue of asymmetry in cybersecurity research <cit.>, where attackers only need to identify one single vulnerability to exploit a resource whilst defenders must seek to prevent or block all vulnerabilities. Now compound with the fact that there is an abysmal difference in terms of the number of a. programmers, b. IT professionals (operations), and c. security experts. * Teams and management should strike a balancing act in their projects when coping with these shortcomings, aiming at operational sustainability to keep the business thriving. * Dealing with teams: note that clever developers will try to by-pass your guidance and constraints, for various reasons, ie, perceived delay/speed, provoke security officers, or even seeing the issue as a technical challenge (or just showing off). * Observability: inspecting logs, traces, and measurements, tools or human-in-the-loop counterparts can seamlessly understand attack progression and devise mechanisms to thwart malicious incursions before they spill over other systems and services <cit.>. * Establishing multiple data sources and then applying automated de-duplication tools (to remove duplicated entries) could help identifying Advanced Persistent Threats (APT) Living Off The Land (LOTL) and insider threats within organisations. Recent advances in AI providing coding assistants and thorough lessons learned and analysis from other projects may change the DevOps landscape altogether. At the time of writing, projects wishing to adopt secure DevOps require humans-in-the-loop for determining success, refine prompts, and review automated responses for accuracy/veracity. 
It is a fact that security concerns impose additional guard-rails on any development process. If this process could be cautiously automated to take into account security requirements that managers and team members trust, it would have profound effects on productivity. Security officers want systems and services to be secure (sacrificing velocity), in a context where developers want flexibility when developing solutions. Ideally, project team members strike a balance that allows everyone to be productive and as secure as possible. As future work we aim to apply the considerations discussed herein to a practical software project and evaluate their implications in real-world settings. We firmly believe that secure DevOps practices integrated with risk assessment will mature in the community, together with a broader understanding of how to intermix these concepts.
http://arxiv.org/abs/2409.03326v1
20240905075555
Enhancing User-Centric Privacy Protection: An Interactive Framework through Diffusion Models and Machine Unlearning
[ "Huaxi Huang", "Xin Yuan", "Qiyu Liao", "Dadong Wang", "Tongliang Liu" ]
cs.CV
[ "cs.CV" ]
Enhancing User-Centric Privacy Protection: An Interactive Framework through Diffusion Models and Machine Unlearning Huaxi Huang, Xin Yuan, Qiyu Liao, Dadong Wang, Tongliang Liu September 9, 2024 ========================================================================================================================================== § ABSTRACT In the realm of multimedia data analysis, the extensive use of image datasets has escalated concerns over privacy protection within such data. Current research predominantly focuses on privacy protection either in data sharing or upon the release of trained machine learning models. Our study pioneers a comprehensive privacy protection framework that safeguards image data privacy concurrently during data sharing and model publication. We propose an interactive image privacy protection framework that utilizes generative machine learning models to modify image information at the attribute level and employs machine unlearning algorithms for the privacy preservation of model parameters. This user-interactive framework allows for adjustments in privacy protection intensity based on user feedback on generated images, striking a balance between maximal privacy safeguarding and maintaining model performance. Within this framework, we instantiate two modules: a differential privacy diffusion model for protecting attribute information in images and a feature unlearning algorithm for efficient updates of the trained model on the revised image dataset. Our approach demonstrates superiority over existing methods on facial datasets across various attribute classifications.
§ INTRODUCTION Images inherently contain a wealth of private information, such as gender, race, and age in facial datasets to license plate numbers and vehicle types in car photographs. The extensive use of such images across social networks, government databases, and industrial applications has significantly raised privacy risks, leading to profound public concerns <cit.>. This growing reliance on image data brings privacy issues to the forefront of global discussions and legislation. Countries around the world have responded by implementing stringent privacy laws, with the European General Data Protection Regulation (GDPR) <cit.> and the Australian Privacy Act 1988 <cit.> being prime examples. These laws emphasize the safeguarding of “personal data,” which includes any information linked to a specific or identifiable individual. Given this definition, images fall under the category of personal data due to their frequent inclusion of personal sensitive attributes such as faces, textual content, and license plates. This situation highlights the urgent need for effective methods to protect image data privacy. Contemporary multimedia research has focused on dataset publication and model design, with a key challenge being privacy protection. Traditional methods like pixel-level image obfuscation <cit.> are ineffective against advanced deep learning techniques, while recent approaches using generative adversarial networks (GANs) <cit.> suffer from instability and poor image quality. Moreover, despite progress in preventing unauthorized access to sensitive model information, a unified solution addressing privacy in both dataset sharing and model deployment is lacking. This challenge is particularly relevant for companies providing facial image applications, such as facial recognition and emotion detection services, which require comprehensive privacy solutions to manage user requests for the removal of private data, requiring adjustments to both datasets and models. Recognizing these theoretical and practical challenges, this paper proposes an integrated framework that bridges the gap by effectively managing privacy concerns in image data sharing and secure machine learning model release. Developed from published datasets, these models are designed to prioritize privacy protection and can efficiently unlearn specific private information upon user requests. This approach ensures a balance between privacy safeguarding and the utility and performance of the data and models. Fig. <ref> provides a schematic overview of this framework, illustrating the interactive, user-centric process from customer input to model refinement, underscoring our strategy for dynamic privacy management in both data and models. Specifically, our framework addresses the privacy preservation challenge in image sharing by introducing differential privacy (DP). To overcome the instability in GAN training and issues with image quality, we have drawn inspiration from diffusion models <cit.> and developed a novel approach to differential image privacy. This method employs diffusion models for both the extraction of intermediate features and the generation of new images. The fundamental advantage of diffusion models lies in their process, which primarily involves adding noise to images and optimizing the subsequent denoising steps, circumventing the adversarial training intrinsic to GAN models and leading to more stable model training. 
Existing research <cit.> indicates that diffusion models can achieve superior image quality compared to GANs. We design the Diffusion Differential Privacy model (Diffusion-DP). By utilizing a diffusion autoencoder model <cit.>, we train an auxiliary classifier to disentangle intermediate features, allowing precise control over target face images by injecting DP noises. This enhances the privacy protection of the shared images while maintaining their utility. To address the challenge of updating models in response to dataset changes, especially in erasing privacy information, our framework incorporates an advanced machine unlearning (AMU) module. This module is designed for scenarios requiring the removal of specific private information from trained models due to dataset updates. Unlike traditional methods of retraining models from scratch, characterized by inefficiency and high costs, our method utilizes cutting-edge machine unlearning techniques <cit.>. We have adapted the machine feature unlearning algorithm <cit.> specifically for our task, allowing for rapid fine-tuning of models with minimal data input. This enables the models to efficiently unlearn specific information, significantly reducing the resources and time required while ensuring high performance. Integrated with the Diffusion-DP module and the AMU module, our framework offers an efficient solution for adapting models to dataset changes, rigorously safeguarding user privacy, and maintaining the utility and effectiveness of the models. In summary, this paper makes the following contributions: * We propose a user-centric interactive image privacy protection framework designed to safeguard user privacy concurrently during image data sharing and model release phases. This framework is interactive, enabling user-driven adjustments for personalized privacy considerations. * We instantiate two specific modules within our broader framework to separately address the processing of private information in data sharing and model updates. The Diffusion-DP module focuses on editing and generating image data with an emphasis on privacy preservation, while the AMU module enables efficient and rapid model adjustments in response to dataset updates, ensuring privacy compliance. * Comprehensive experiments are conducted to evaluate the effectiveness of our proposed framework. The results demonstrate that our approach not only achieves significant improvements in privacy protection but also maintains high utility and performance. The rest of this paper is organized as follows. Section <ref> introduces related works. Section <ref> presents the proposed framework. In Section <ref>, we evaluate the proposed method on widely-used facial datasets. The conclusion is discussed in Section <ref>. § RELATED WORK §.§ Privacy Protection Methods for Image Data Traditional privacy-preserving methods like pixel-level image obfuscation are often ineffective against advanced deep learning techniques <cit.>, as they can significantly degrade the utility of the image and fail to protect against sophisticated attacks <cit.>. Recent research has proposed using deep GANs for image privacy protection, where DP noise is infused into intermediate features extracted from the encoder, which are then used by the GAN's image generator to produce new images <cit.>. 
While this approach offers a more robust solution, it faces challenges such as GAN training instability, susceptibility to mode collapse, and the need for further improvement in the quality of generated images <cit.>. This challenge extends to contemporary multimedia research, which focuses on both dataset publication <cit.> and specific model design <cit.>. Privacy protection remains a critical concern, particularly in the publication and sharing of datasets. Although substantial progress has been made in developing privacy-preserving techniques for both datasets and models, a unified approach that addresses privacy concerns in both domains is still lacking. This gap is particularly evident in practical scenarios faced by companies specializing in facial image applications <cit.>, where there is a growing demand for solutions that safeguard privacy while maintaining the functionality of AI services based on facial data, like facial recognition <cit.>, age estimation <cit.>, gender identification <cit.>, and emotion detection <cit.>. When users request the retraction of their private data to prevent privacy violations, these companies must carefully adjust both datasets and corresponding AI models, underscoring the complexity and importance of a comprehensive privacy-preserving strategy. §.§ Machine Unlearning In the field of machine learning, both exact and approximate machine unlearning methods have been explored to address the challenge of completely removing the influence of certain data segments from trained models. Exact machine unlearning aims to fully eliminate data influence, often necessitating some degree of model retraining. Early work by Cao et al. <cit.> introduced a heuristic approach to convert machine learning algorithms into a summation format, facilitating the removal of data lineage. Subsequent methods, such as Sharded, Isolated, Sliced, and Aggregated training (SISA) by Bourtoule et al. <cit.>, proposed a sharded, isolated, sliced, and aggregated training approach to enhance retraining efficiency. DeltaGrad <cit.>, another exact method, accelerates retraining by counteracting the data designated for deletion, though it is limited to specific algorithms and cannot manage mini-batch sizes. On the other hand, approximate machine unlearning focuses on minimizing the differences between models before and after data removal, aiming to preserve model performance. Notable methods include the certified-removal approach by Guo et al. <cit.>, which uses a Newton step to erase the influence of data points in L_2-regularized linear models, and a scrubbing method for deep neural networks proposed by Golatkar et al. <cit.>. These approaches often incorporate differential privacy mechanisms to obscure residual information. Additionally, specialized methods, such as those by Ginart et al. <cit.> for K-means, highlight the development of model-specific unlearning techniques. While approximate methods offer privacy safeguards and performance preservation, they face challenges in verifying implementation and aligning with legal requirements like the “right to be forgotten.” § USER-CENTRIC INTERACTIVE IMAGE PRIVACY PROTECTION FRAMEWORK In this paper, we propose a user-centric interactive image privacy protection framework that converts a potentially risky model into a safe one by unlearning specific attribute features, as depicted in Fig. <ref>. The framework begins with a training dataset labeled as “risky” due to its inclusion of sensitive information. 
This dataset is used to train a risky model. The process involves identifying a sensitive subset within the training data, guided by a resource specification step, which isolates sensitive attributes or individuals. Concurrently, a transferring dataset, deemed “safe”, undergoes distribution matching to ensure alignment with the sensitive subset. This allows for the safe transfer of attribute features, producing an “attribute transferred sensitive subset” that mitigates privacy risks. The core of this framework is the advanced machine unlearning (AMU) module, which employs a refined machine feature unlearning algorithm to efficiently remove sensitive attributes from the model without the need for costly retraining. To ensure privacy preservation, the framework integrates a Diffusion-DP module, providing DP guarantees by training on the transformed sensitive subset. The result is a “safe model” that can perform accurate inferences while safeguarding sensitive information. This approach not only mitigates the risks associated with sensitive data exposure but also maintains high model performance and utility, making it an efficient solution for adapting to dataset changes while rigorously protecting user privacy. In what follows, we provide detailed insights into each module of the framework. §.§ Differential Privacy For an (ϵ, δ)-DP mechanism, ϵ > 0 denotes the distinguishable bound of all outputs on adjacent datasets 𝒟 and 𝒟'[Two datasets, 𝒟 and 𝒟', are adjacent if 𝒟' can be built by adding or removing a single training example from 𝒟.], and δ denotes the probability that the ratio of the probabilities of two adjacent datasets 𝒟 and 𝒟' cannot be bounded by exp(ϵ) after adding a privacy-preserving mechanism <cit.>. The specific definition of DP is provided as follows. A randomized mechanism ℳ: 𝒳→ℛ with domain 𝒳 and range ℛ satisfies (ϵ, δ)-DP if Pr[ℳ(𝒟) ∈𝒮] ≤ e^ϵ Pr[ℳ(𝒟') ∈𝒮] + δ, for all measurable sets 𝒮⊆ℛ and for any two adjacent datasets 𝒟, 𝒟' ∈𝒳. A randomized mechanism ℳ: 𝒳→ℛ with domain 𝒳 and range ℛ satisfies (α, ϵ)-RDP if D_α(ℳ(𝒟) ‖ℳ(𝒟')) ≤ϵ for any two adjacent datasets 𝒟, 𝒟' ∈𝒳. If a mechanism ℳ: 𝒳→ℛ satisfies (α, ϵ)-RDP, it also satisfies (ϵ + log(1/δ)/(α - 1), δ)-DP for any 0 < δ < 1. Moreover, ℳ satisfies pure ϵ-DP. We analyze the sensitivity and privacy performance of the proposed time-varying DP noise perturbation mechanism. We use the ℓ_2-norm sensitivity, given by Δ s = max_𝒟,𝒟'‖ s(𝒟) - s(𝒟') ‖_2, where s(·) is a general function of 𝒟. §.§ Diffusion Differential Privacy Model To implement DP for images, we consider a diffusion mechanism that generates images while incorporating noise to ensure privacy. Assume 𝐗, 𝐗' ∈ℝ^d and let ℳ(𝐗) be the mechanism, a random function taking 𝐗 as input and returning ℳ(𝐗) at the (t+1)-th iteration: X_t+1(i) = s_t+1(X_t(i)) + w^t_c(i) · n_t+1(i), i.e., X_t+1 = Decoder(Encoder(X_t) + w^t_c(i) · n_t+1(i)), where s_t+1(·) is the contractive map, 𝐰^t_c = [w^t_c(1), ⋯, w^t_c(i), ⋯, w^t_c(d)] is the weight vector obtained from an N-class classifier, and 𝐧_t+1∼ OU(θ, ρ) at the (t+1)-th iteration, i.e., 𝐧_t = 𝒩(e^-θ t x, (ρ^2/θ)(1-e^-2θ t) 𝕀_d). Let ℳ(𝐗') be obtained from 𝐗' under the same mapping. For α≥ 1, ℳ(𝐗) satisfies D_α(ℳ(𝐗) ‖ℳ(𝐗')) ≤α‖𝐗 - 𝐗'‖^2 / (2T (w^t_c(i)/∑_i=1^N w^t_c(i)) · (ρ^2/θ)(e^2θ t - 1)).
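As a concrete illustration of the noise-injection step above, the following sketch perturbs a latent code with classifier-derived per-dimension weights and Gaussian (OU-style) noise before decoding. The encoder, decoder, and weight vector are placeholders standing in for the diffusion autoencoder and auxiliary classifier, so this is a minimal sketch of the mechanism under stated assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def ou_noise(d, t, theta=0.5, rho=1.0):
    """Zero-mean Gaussian noise whose variance grows as
    (rho^2/theta) * (1 - exp(-2*theta*t)), as in the OU term above."""
    var = (rho ** 2 / theta) * (1.0 - np.exp(-2.0 * theta * t))
    return rng.normal(0.0, np.sqrt(var), size=d)

def privatize_latent(z, w_c, t, theta=0.5, rho=1.0):
    """Inject weighted OU noise into a latent code z (the X_{t+1} update)."""
    return z + w_c * ou_noise(z.shape[0], t, theta, rho)

# Placeholder encoder/decoder (random linear maps) and classifier weights.
W_enc = rng.normal(size=(64, 16))   # image -> 16-d latent
W_dec = rng.normal(size=(16, 64))   # latent -> image
encode = lambda x: x @ W_enc
decode = lambda z: z @ W_dec
w_c = rng.uniform(0.0, 1.0, size=16)   # per-dimension weights from a classifier

x = rng.normal(size=64)                # a toy "image"
x_priv = decode(privatize_latent(encode(x), w_c, t=1.0))
print(np.linalg.norm(x - x_priv))      # distortion introduced by the mechanism
```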
Suppose an input image 𝐗_0 and its corresponding output image 𝐗_T, where s(·) maps the image into its latent (feature) space and s'(·) is 1-Lipschitz. Let s(·) have global L_2-sensitivity Δ_s and 𝐏 = (P_t)_t ≥ 0 be the Ornstein-Uhlenbeck (OU) process with parameters θ and ρ. For any α > 1 and t > 0, the OU mechanism ℳ_t^s(𝐗_t) = P_t(s(𝐗_t)) satisfies (α, αθΔ_s^2 / (2T (w^t_c(i)/∑_i=1^N w^t_c(i)) ρ^2 (e^2θ t - 1)))-RDP. According to Definition <ref>, for α≥ 1, ℳ(s(𝐗)) satisfies D_α(ℳ(s(𝐗)) ‖ℳ(s(𝐗'))) ≤α‖ s(𝐗) - s(𝐗') ‖^2 / (2T (w^t_c(i)/∑_i=1^N w^t_c(i)) · (ρ^2/θ)(e^2θ t - 1)) ≤αmax_𝐗,𝐗'‖ s(𝐗) - s(𝐗') ‖_2^2 / (2T (w^t_c(i)/∑_i=1^N w^t_c(i)) · (ρ^2/θ)(e^2θ t - 1)) = αΔ^2_s / (2T (w^t_c(i)/∑_i=1^N w^t_c(i)) · (ρ^2/θ)(e^2θ t - 1)), where (<ref>) is obtained based on (<ref>). Based on the definition of (α,ϵ)-RDP, we have D_α(ℳ(𝐗) ‖ℳ(𝐗')) ≤ϵ. Therefore, the OU mechanism ℳ_t^s(𝐗_t) = P_t(s(𝐗_t)) satisfies (α, αθΔ_s^2 / (2T (w^t_c(i)/∑_i=1^N w^t_c(i)) ρ^2 (e^2θ t - 1)))-RDP. §.§ Advanced Machine Unlearning This section presents our methodology for unlearning specific attributes from input images using first-order and second-order optimization techniques, aiming to remove the influence of specific attributes from the model while preserving the overall performance of other features. §.§.§ First-Order Update To unlearn data, we aim to find an update Δ(𝐗, 𝐗̃) for our model w^*, where 𝐗 and 𝐗̃ are the original data and its perturbed version. If the loss ℒ is differentiable, we compute the first-order update as: Δ(𝐗, 𝐗̃) = -η( ∑_x̃∈𝐗̃∇_w ℒ(x̃, w^*) - ∑_x ∈𝐗∇_w ℒ(x, w^*) ), where η is the unlearning rate. This update shifts the model parameters to minimize the loss on x̃ while removing the information in x. It differs from gradient descent by moving the model based on the gradient difference between the original and perturbed data. The gradients can be computed in 𝒪(p), with p being the number of parameters in the learning model <cit.>. §.§.§ Second-Order Update The unlearning rate η can be eliminated if ℒ is twice differentiable and strictly convex. The influence of a single data point is approximated by: ∂ w^*_ϕ, x →x̃/∂ϕ |_ϕ=0 = -H_w^*^-1( ∇_w ℒ(x̃, w^*) - ∇_w ℒ(x, w^*) ), where H_w^*^-1 is the inverse Hessian of the loss at w^*. This leads to a linear approximation, given by w^*_x →x̃≈ w^* - H_w^*^-1( ∇_w ℒ(x̃, w^*) - ∇_w ℒ(x, w^*) ). Extending this to multiple data points gives the second-order update: Δ(𝐗, 𝐗̃) = -H_w^*^-1( ∑_x̃∈𝐗̃∇_w ℒ(x̃, w^*) - ∑_x ∈𝐗∇_w ℒ(x, w^*) ). This update does not require parameter calibration, as it derives from the inverse Hessian of the loss function. The second-order update is preferred for unlearning in models with strongly convex and twice differentiable loss functions, and can be easily calculated with common machine learning frameworks.
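To make the first- and second-order updates concrete, here is a minimal, self-contained sketch on a strictly convex toy problem (L2-regularised logistic regression); the model, data, and unlearning rate are illustrative assumptions, not the paper's architecture or hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, lam = 200, 5, 0.1

X = rng.normal(size=(n, d))
y = (rng.random(n) < 1 / (1 + np.exp(-X @ rng.normal(size=d)))).astype(float)

def grad(w, Xb, yb):
    """Gradient of the summed logistic loss plus L2 regulariser."""
    p = 1 / (1 + np.exp(-Xb @ w))
    return Xb.T @ (p - yb) + lam * w

def hessian(w, Xb):
    p = 1 / (1 + np.exp(-Xb @ w))
    return (Xb * (p * (1 - p))[:, None]).T @ Xb + lam * np.eye(d)

# w_star: model trained on the original data (a few Newton steps suffice).
w_star = np.zeros(d)
for _ in range(20):
    w_star -= np.linalg.solve(hessian(w_star, X), grad(w_star, X, y))

# Replace the first 10 rows by perturbed ("attribute transferred") versions.
idx = np.arange(10)
X_old, X_new = X[idx], X[idx] + rng.normal(scale=0.5, size=(10, d))

# Gradient difference between perturbed and original points at w_star.
g_diff = grad(w_star, X_new, y[idx]) - grad(w_star, X_old, y[idx])

w_first = w_star - 0.01 * g_diff                                  # first-order update
w_second = w_star - np.linalg.solve(hessian(w_star, X), g_diff)   # second-order update
print(np.linalg.norm(w_first - w_star), np.linalg.norm(w_second - w_star))
```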
§ EXPERIMENT We verify the effectiveness of our proposed method on two manually corrupted datasets: the CelebA-HQ dataset <cit.> and the CelebA dataset <cit.>. CelebA-HQ contains 30k high-resolution celebrity images with 40 attributes; we use it to train our diffusion model. CelebA has 202,599 images with the same 40 attributes as CelebA-HQ; we use it to test our proposed method. §.§ Comparison Methods We compare our method against several SOTA methods. The image-manipulation methods involve processing the input images (224x224 pixels) using different techniques: (i) Random central removal of 100x100 pixels, which randomly removes a central portion of the image; (ii) Whole-image mosaic, which applies a mosaic filter to the entire image, blurring detailed features; and (iii) our proposed diffusion-based method, which leverages diffusion processes to selectively alter specific attributes in the images. For fine-tuning methods, we implement three unlearning approaches: (i) Retraining, which involves training the model from scratch on a modified dataset; (ii) First-order unlearning, which adjusts the model parameters using gradient descent based on the attribute-specific loss; and (iii) Second-order unlearning, our proposed method, which employs both gradient and curvature information to more effectively remove specific attributes from the model. §.§ Evaluation Metrics In our experiments, conducted on the CelebA dataset, we employed several evaluation metrics to assess the performance of attribute modification and image realism. Specifically, we focused on two primary metrics: Attribute Classification Accuracy (ACC) and the Structural Similarity Index (SSIM). For the assessment of attribute modification, we utilized the ACC. In this approach, we trained a linear binary classifier on the training set for each of the 40 attributes. This allowed us to evaluate the accuracy with which each attribute was correctly modified or maintained. The effectiveness of the attribute modifications across the entire image was further quantified by calculating the average ACC across all 40 attributes. This average ACC provides a comprehensive measure of the system's performance in handling multiple attributes simultaneously, ensuring that modifications are not only accurate but also consistent across the dataset. In addition to ACC, we also assessed the realism of the generated images using the SSIM. Following the methodologies presented in <cit.>, SSIM was chosen for its ability to measure the perceptual quality of images. SSIM evaluates the similarity between two images based on luminance, contrast, and structural information, producing a similarity score that ranges from -1 to 1. Higher SSIM values indicate a greater degree of similarity between the images, reflecting the extent to which the modified images retain their visual fidelity and align with human visual perception. Conversely, lower SSIM values suggest a decline in image quality or realism. Together, these metrics provide a robust framework for evaluating both the technical accuracy of attribute modifications and the visual quality of the resulting images. The combination of ACC and SSIM ensures that our approach is not only effective in modifying attributes but also capable of generating realistic and visually pleasing images, which is critical for applications that require high levels of visual authenticity. §.§ Experimental Results Table <ref> provides an extensive comparison of various image manipulation methods and their impact on attribute classification performance under different levels of data change (i.e., 1%, 3%, and 5%). The methods compared include `Mosaic', `Random', and `Ours', with each method's performance evaluated across multiple facial attributes. Table <ref> presents the results for attributes categorized under different percentages of data changes.
Notably, in the case of 1% data change, our method achieves the highest average accuracy of 89.82% across 40 attributes, outperforming both the Mosaic' (82.38%) and `Random' (84.32%) methods. Our method also shows strong performance in specific attributes, such as and , indicating its effectiveness in handling minimal data changes. As the data change increases to 3% and 5%, the `Ours' method consistently demonstrates superior performance, maintaining high accuracy in most attributes. For example, under 3% data change, our method reaches an accuracy of 89.07%, compared to 88.96% and 89.28% for `Mosaic' and `Random', respectively. This trend continues under the 5% data change scenario, where our method achieves an accuracy of 89.05%, again outperforming Mosaic (88.96%) and Random (88.91%). Overall, the results indicate that our method is particularly robust and adaptable, offering high accuracy even as the data change percentage increases. This consistent performance across different attributes and levels of data manipulation highlights the method's potential for robust attribute classification in dynamic datasets. Such reliability is crucial for real-world applications where data may vary significantly, and accurate attribute classification is essential. Our proposed method's superior results in most categories, especially under higher data changes, highlight its effectiveness and utility in improving attribute classification performance. Table <ref> provides a detailed comparison of various unlearning methods and their impact on attribute classification performance under different levels of data change (3%, 5%, and 10%). The methods evaluated include `Retraining', `First_order', and `Second_order', with their performance assessed across multiple attributes such as , , , , and . The differences from the `clean' baseline are highlighted, with all selected attributes having an initial accuracy of less than 90%. In the clean data scenario, the `Retraining' method method achieves the highest average accuracy of 89.75% across 40 attributes, followed closely by `First_order' (89.03%) and `Second_order' (89.01%). As the data change increases to 3%, 5%, and 10%, the `First_order' and `Second_order' methods exhibit competitive performance, often surpassing `Retraining' in specific attributes. For instance, under a 3% data change, `Second_order' attains notable accuracies in attributes like (98.29%) and (94.00%), with values close to those of the clean scenario, demonstrating robustness against minor data alterations. As the data change reaches 5% and 10%, the `First_order' and `Second_order' methods continue to demonstrate robust performance. Specifically, at a 10% data change, `Second_order' maintains an average accuracy of 88.99%, excelling in challenging attributes like (98.04%) and (91.15%). The `First_order' method, although slightly lower in accuracy, still achieves a respectable average accuracy of 88.64%, showing reasonable adaptability in high data change scenarios. The method's ability to maintain high accuracy under significant data changes underscores its robustness and effectiveness. Overall, these results demonstrate the efficacy of the `First_order' and `Second_order' unlearning methods in maintaining high attribute classification performance across varying levels of data change. 
The `Second_order' method, in particular, shows superior adaptability and accuracy in more challenging scenarios, suggesting its potential as a reliable unlearning technique for robust attribute classification in dynamic and diverse datasets. These insights are valuable for selecting appropriate unlearning methods to enhance attribute classification performance in practical applications. Table <ref> compares the performance of various image generation models on the CelebA-HQ dataset, measured by the SSIM metric. Diff-AE leads with a near-perfect SSIM of 0.991, indicating exceptional retention of image details. Our method, with an SSIM of 0.951, ranks second, showcasing its strong capability to preserve image realism, highlighting its competitiveness and effectiveness in generating high-quality images after attribute modifications. Table <ref> summarizes the training times for different unlearning methods. The `Clean' method takes 13,322 seconds, while the `Retraining' method, which likely involves comprehensive retraining, required the most time at 15,185 seconds. In contrast, the `First_order' and `Second_order' methods drastically reduce the training times to 174.41 seconds and 506.99 seconds, respectively. Notably, the `Second_order' method, while not as fast as `First_order', still offers a substantial reduction in time compared to `Clean' and `Retraining'. This additional computational time for the `Second_order' method suggests it may involve a more detailed unlearning process, potentially leading to better accuracy, as indicated in Table <ref>. This efficiency, coupled with the possibility of improved accuracy, makes the `Second_order' method particularly valuable for applications where balancing computational resources and maintaining high performance is critical. The sharp decrease in training times with both `First_order' and `Second_order' methods demonstrates their potential utility in scenarios necessitating quick updates or modifications, while the slight increase in computational time for the `Second_order' method could indicate a trade-off that favors accuracy without significantly compromising efficiency. § CONCLUSION This paper introduces a privacy protection framework for image data, designed to address challenges in both data sharing and model publication. By integrating a diffusion model for attribute modification with machine unlearning algorithms to secure model parameters, our approach successfully balances the dual objectives of privacy protection and model performance. The framework’s user-interactive design enables customized privacy settings, ensuring that high-quality image generation and precise attribute modifications are maintained. Our results demonstrate that this framework not only preserves image realism but also enhances accuracy across facial datasets, representing a significant advancement in privacy-preserving machine learning.
http://arxiv.org/abs/2409.03042v1
20240904192413
Parameter Analysis in Continuous Data Assimilation for Various Turbulence Models
[ "Debora A. F. Albanez", "Maicon Jose Benvenutti", "Samuel Little", "Jing Tian" ]
physics.flu-dyn
[ "physics.flu-dyn", "cs.NA", "math-ph", "math.MP", "math.NA" ]
http://arxiv.org/abs/2409.02217v1
20240903184106
Quantifying Liveness and Safety of Avalanche's Snowball
[ "Quentin Kniep", "Maxime Laval", "Jakub Sliwinski", "Roger Wattenhofer" ]
cs.DC
[ "cs.DC" ]
Q. Kniep et al. ETH Zurich Zurich, Switzerland {qkniep,jsliwinski,wattenhofer}@ethz.ch EPFL Lausanne, Switzerland [email protected] Quantifying Liveness and Safety of Avalanche's Snowball Quentin Kniep1 Maxime Laval2 Jakub Sliwinski1 Roger Wattenhofer1 September 9, 2024 ==================================================================== § ABSTRACT This work examines the resilience properties of the Snowball and Avalanche protocols that underlie the popular Avalanche blockchain. We experimentally quantify the resilience of Snowball using a simulation implemented in Rust, where the adversary strategically rebalances the network to delay termination. We show that in a network of n nodes of equal stake, the adversary is able to break liveness when controlling Ω(√(n)) nodes. Specifically, for n=2000, a simple adversary controlling 5.2% of stake can successfully attack liveness. When the adversary is given additional information about the state of the network (without any communication or other advantages), the stake needed for a successful attack is as little as 2.8%. We show that the adversary can break safety in time exponentially dependent on their stake, and inversely linearly related to the size of the network, e.g. in 265 rounds in expectation when the adversary controls 25% of a network of 3000. We conclude that Snowball and Avalanche are akin to Byzantine reliable broadcast protocols as opposed to consensus. § INTRODUCTION The Avalanche protocol <cit.> advertises exceptional performance in terms of transaction throughput and latency. The Avalanche blockchain based on the protocol has certainly gained significant attention and support within the cryptocurrency community, as evidenced by the remarkable market capitalization of its native token amounting to $10B[<https://coinmarketcap.com> (Accessed: June 19 2024)]. The media prominence and monetary value firmly place Avalanche among the most popular and successful blockchain systems. The protocol is built on a simple mechanism that operates by repeatedly sampling random nodes of the network in order to gauge the system's support of a given decision and confirm transactions. Conceptually, the underlying Snowball protocol can be compared to a voting process for a binary choice concerning a transaction. The protocol description promises to swiftly converge to a final decision from a network state initially divided equally between two alternatives. With the aid of a directed acyclic graph (DAG), Avalanche forms a partial order of transactions instead of the total order that is established by usual blockchain protocols, like Bitcoin <cit.> or Ethereum <cit.>. Thus when transactions on Avalanche are accepted, validators are able to execute them in different orders based on their current view of the DAG, as long as those transactions are not causally dependent on each other. In theory, such structure can allow for a higher degree of parallelism in the transaction confirmation process, which can lead to a higher throughput than traditional blockchain protocols. The ideas that form Avalanche stand in contrast to other Proof-of-Stake and BFT-based consensus protocols such as Ethereum 2.0. While the whitepaper <cit.> claims excellent resilience, it only proves the protocol's liveness in presence of up to 𝒪(√(n)) malicious parties, where n represents the total number of validators (or stake supply). However, usually Proof-of-Stake protocols ensure the upper bound resilience of n/3 in partial synchrony. 
Another detail that stands out in the description of Avalanche, is how it defines its guarantees with respect to “virtuous” transactions, i.e. assuming there's no conflicting alternative in the system. Remarkably, broadcast-based payment systems <cit.> are inherently reliant on such an assumption, and as such are fundamentally weaker than consensus protocols. The lack of clarity about the Avalanche family of protocols begs the question: how resilient Snowball and Avalanche really are? Does the unusual consideration of “virtuous” transactions indicate a fundamental limitation? §.§.§ Our Contribution We examine the resilience properties of the Snowball and Avalanche protocols. We experimentally exhibit the resilience of Snowball against attacks from adversarial nodes. Our simulation showcases that in a system of n nodes (or stake supply), the adversary can indefinitely halt the Snowball protocol when controlling a stake of Ω(√(n)), or less than 2% in some experimental scenarios. Furthermore, we examine a strategy for an adversary to violate safety by getting a single validator to finalize an output distinct from the rest of the network. The expected duration of the safety attack depends exponentially on the stake controlled by the adversary, and is inversely linear to the size of the network. For example, at 25% adversarial stake in a network of 3000, safety can be violated after 265 rounds in expectation. We discuss how these considerations translate to the Avalanche protocol based on Snowball. Finally, we draw parallels between Avalanche and broadcast-based payment systems, and conclude that Avalanche is fundamentally weaker than usual consensus protocols. § BACKGROUND Avalanche's blockchain platform consists of three distinct built-in blockchains: The Exchange Chain (X-Chain), the Contract Chain (C-Chain) and the Platform Chain (P-Chain) <cit.>. The X-Chain is responsible for processing simple transactions on the network, such as transfers of the native AVAX token. It is based on the Avalanche protocol with the DAG that runs multiple instances of the Snowball algorithm and only partially orders transactions. The C-Chain is responsible for executing general smart contracts compatible with the Ethereum Virtual Machine (EVM). In contrast to the X-Chain, C-Chain uses the Snowman protocol which ensures a total order of all transactions. The P-Chain processes various platform-level operations, such as creation of new blockchains and sub-networks, validator (de-)registration, or staking operations. It also uses Snowman. The Avalanche protocol introduced in the Ava Labs whitepaper, which is also mainly marketed and presented in online materials, is used as the basis of X-Chain. Interestingly, the Snowman protocol, which supports the C-Chain and P-Chain, is almost absent from documentation and marketing, and remains outside the scope of this work. §.§ Validators Participants in the Avalanche protocol are called validators or nodes. Validators following the protocol are called honest. As a blockchain protocol, Avalanche aims to be resilient to validators deviating from the protocol, which are called malicious, or collectively as the adversary. Avalanche employs a Proof-of-Stake mechanism to control the ability of malicious validators joining the system. Validators need to acquire AVAX tokens (2,000 minimum) and deposit them using the Avalanche platform to actively participate in the agreement process. 
Validators are associated with, and weighted by, the amounts of deposited tokens, called their stake. Typically, Proof-of-Stake blockchains aim to be resilient to the adversary that is able to acquire a stake smaller than 1/3 of the total tokens (which is the theoretical maximum in harsh network conditions). §.§ UTXO Model Avalanche uses the Unspent Transaction Output (UTXO) model, as initially introduced in Bitcoin <cit.>. In the model, a transaction contains a set of inputs, a set of outputs, and a digital signature. Each input of a transaction corresponds to a specific output from a previous transaction. Transactions are issued by users, processed by the system, and as a result are accepted or rejected by the system. Two transactions including the same input are conflicting, and only one transaction from such a pair can be accepted by the system. The balance of a user is determined by the set of outputs transferred to that user in previously accepted transactions and not yet used as inputs for newer transactions. A valid transaction is also signed with keys corresponding to the relevant inputs. In contrast to most blockchains such as Bitcoin, Avalanche does not necessitate a total order of all transactions. Instead, transactions in Avalanche form a directed acyclic graph (DAG) resulting in a partial order. A transaction tx' depends on tx if tx^' consumes an output of tx. In this case, every validator needs to process tx before processing tx^'. Validators can execute transactions that are not dependent on each other in any order. § SNOWBALL The Snowball protocol serves as the foundational component of the Avalanche blockchain. It is based on continuously querying random sets of k validators regarding their current “approval” regarding a transaction, denoted as T. When performing a query on k = 20 nodes within a Snowball instance, the selection probability of a node is proportional to the stake of the node. Intuitively, the influence of validators in validating transactions, quantified by the probability of them being queried, is determined by their stake. Validators maintain a confidence value for each binary choice: Blue if they prefer to accept transaction T, Red if they reject transaction T. When a validator queried k other nodes and saw at least α for either Red or Blue, we say that this color received a chit, and the confidence value for that color is incremented by one. When queried, a validator will either respond Blue if the confidence value for Blue is higher, or Red if the confidence value for Red is higher. A color is accepted by a node if for at least β consecutive rounds of querying it received a chit. The logic of Snowball is illustrated in <Ref>. §.§ Safety Intuitively, safety properties can be understood as “bad” things not happening. In our context, the main safety property is ensuring that two honest nodes cannot perceive two conflicting transactions as accepted. The Avalanche whitepaper outlines the definition of safety as follows: P1. Safety: When decisions are made by any two honest nodes, they decide on conflicting transactions with negligible probability (≤ε). Here ε represents the safety failure probability, with the specific value dependent on the maximum number f of adversarial nodes, which is not explicitly stated in the formal definition provided by the Avalanche whitepaper. §.§ Liveness Liveness refers to the continued operation of the system. 
In our context, liveness mainly refers to ensuring that all honest nodes eventually decide to accept or reject a transaction within a reasonable time frame. According to the whitepaper, Avalanche has the following liveness guarantees: P2. Liveness (Upper Bound): Snow protocols terminate with a strictly positive probability within t_max rounds. P3. Liveness (Strong Form): If f ∈𝒪(√(n)), then the snow protocol terminates with high probability (≥ 1 - ε) in 𝒪(log(n)) rounds. However, it is specified later in the whitepaper that P2 holds only under the assumption that initially, one proposal has at least α/k support in the network, for which there is no guarantee. § SIMULATION To test resilience of Snowball, we perform a local simulation of the protocol using a Rust implementation <cit.>. As the base implementation of Snowball (c.f. <ref>) we use the Rust library[https://crates.io/crates/avalanche-consensus], which is a translation of the Snowball Go code that is part of the official Avalanche implementation[https://github.com/ava-labs/avalanchego] and is maintained by the Ava Labs team. The simulation involves a network of multiple honest nodes executing the protocol correctly, aiming to achieve agreement on a binary decision. Malicious nodes collude to perform the considered attacks. In our experimental scenarios, the stake is equally divided among validators. §.§ Network Assumptions Distributed protocols might require various network reliability assumptions to work correctly. Many blockchain protocols guarantee safety in harsh network conditions, such as those of partially synchronous models, where messages can be greatly delayed. In our simulation we make the strongest network reliability assumptions possible, where every message arrives with the same, known latency. We deny the adversary any communication advantage whatsoever, including advantages often practically achievable by an attacker in the real world, such as performing queries faster, or performing more queries. In synchronous rounds, nodes query other nodes, as described by the Snowball protocol. Between rounds, the nodes update their preferred color with which they respond to the queries. §.§ Adversary Information To perform the attacks, the adversary needs information about the other nodes' preferred color. We call the adversary naive if the adversary simply queries honest nodes in line with the protocol and updates his estimation of the colors preferred by the honest nodes according to the query results. We also consider an adversary that possesses accurate information about the numbers of nodes preferring Red/Blue in the current round, and call that adversary informed. §.§ Liveness Attack When attacking the liveness property, the adversary aims to delay the decision of honest nodes by keeping the network split equally between Red and Blue. The attack strategy we consider is straightforward. When the adversary is queried, it responds with the color that is less preferred among all honest validators. By doing so, the adversary aims to bring the network split between the Red and Blue decisions closer to the even 50-50 split. This is shown in <ref>. For a given experimental scenario, we consider the attack successful if in more than 5 out of 10 simulation runs, no validator has terminated with a decision after 100,000 rounds. We note that if a round of querying took about 1 second, 100,000 rounds would correspond to over a day. 
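For concreteness, the sketch below models one round of such a simulation: honest nodes run the Snowball update (query k nodes, count an α-majority as a chit, require β consecutive chits to decide), while adversarial nodes always answer with the colour currently preferred by fewer honest nodes. It is a deliberately simplified, synchronous model with illustrative parameter values (k = 20, α = 15, β = 20) and uniform stake, not the Rust simulator used for the experiments.

```python
import random

K, ALPHA, BETA = 20, 15, 20

class HonestNode:
    def __init__(self, color):
        self.pref = color                    # currently preferred colour ("R" or "B")
        self.conf = {"R": 0, "B": 0}         # Snowball confidence counters
        self.streak, self.last = 0, None     # consecutive-chit counter and its colour
        self.decided = None

    def update(self, votes):                 # votes: list of K colours
        for c in "RB":
            if votes.count(c) >= ALPHA:      # alpha-majority -> chit for colour c
                self.conf[c] += 1
                self.pref = max(self.conf, key=self.conf.get)
                self.streak = self.streak + 1 if c == self.last else 1
                self.last = c
                if self.streak >= BETA and c == self.pref:
                    self.decided = c
                return
        self.streak, self.last = 0, None     # no alpha-majority: reset the streak

def run_round(honest, n_adv):
    minority = min("RB", key=lambda c: sum(h.pref == c for h in honest))
    population = honest + ["ADV"] * n_adv
    for node in honest:
        sample = random.choices(population, k=K)               # uniform stake
        votes = [minority if s == "ADV" else s.pref for s in sample]
        node.update(votes)

# 2000 nodes in total, 5.2% adversarial, honest nodes split evenly.
honest = [HonestNode("R") for _ in range(948)] + [HonestNode("B") for _ in range(948)]
for r in range(300):
    run_round(honest, n_adv=104)
    if any(h.decided for h in honest):
        print("first decision in round", r)
        break
else:
    print("no validator decided within 300 rounds")
```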
We perform binary search with respect to the adversary stake to find the minimal fraction of total stake for which the adversary is successful. <ref> shows the minimum percentage of stake the adversary needs to attack the liveness of the protocol. It can be seen to decrease significantly with increasing number of total nodes in the network, showing the sub-linear security bound. §.§ Safety Attack In this scenario, the adversary aims to break the safety property of the protocol by causing some honest nodes to accept conflicting transactions. The adversarial strategy we consider starts with maintaining a modified liveness attack. While the honest nodes are divided between Red and Blue, denote by μ the fraction of honest nodes that prefer Red. Then, the fraction of honest nodes that prefer Blue is 1-μ. The adversary attempts to keep the numbers of honest nodes preferring Red and Blue close to the μ:Red, 1-μ:Blue split by replying to queries with colors that sway the honest nodes towards this split. Additionally, the adversary chooses a set of honest nodes to queries of which the adversary responds exclusively with Red. By employing this approach, the attacker can significantly increase the likelihood of the targeted nodes finalizing with the color Red after some time, while at the same time keeping the rest of the network from deciding in either direction. Once some targeted node accepts Red, the adversary replies to all queries with the color Blue, such that the rest of the network accepts Blue. As a result, the adversary produces a safety violation, as the targeted node decides differently to the rest of the network. <ref> describes the adversarial strategy for the considered safety attack. §.§ Safety Attack Analysis Consider the attack where a single node is targeted. Denote the number of adversarial nodes by f and the number of nodes in total by n. Assuming that the adversary can maintain the honest nodes split of μ:Red, 1-μ:Blue, in expectation we observe the following: when the targeted node queries, it receives a percentage of μ (1-f/n) + f/n responses for Red, while other nodes receive ≥μ (1-f/n) fraction of responses for Red, depending on the adversary. For example, with 30% adversary stake and a split of 69.4% Red and 30.6% Blue among honest nodes, the targeted node has a probability p = 0.694 · 0.7 + 0.3 = 0.7858 of receiving a Red response when querying. Consequently, the targeted node can finalize Red with some probability, which eventually occurs. Once this happens, the adversary replies to all queries with Blue. Since μ (1-f/n) = 0.694 · 0.7 = 0.4858 < 0.5, it is very likely that all honest nodes flip to Blue and later accept Blue. In general, if μ (1-f/n) < 0.5, the adversary still has the ability to sway the network towards accepting Blue. We now compute the probability that the targeted node converges to Red, given that it sees an average proportion of μ (1-f/n) + f/n of responses in favor of Red when querying. Recall that when querying k = 20 other nodes, a validator increments its successive success counter, denoted as “counter”, only if a color receives at least α = 15 votes, and if this color is the same as the currently preferred color. Otherwise, the success counter is reset to 0. Let the random variable X denote the number of participants who prefer Red in a sample of size k = 20. We want to calculate the probability distribution P(X ≥α) = 1 - P(X < 15). 
We can model this using a binomial distribution with parameters p = μ (1-f/n) + f/n for the targeted node and p = μ (1-f/n) for the other nodes. Thus, we have: P(X ≥α) = 1 - P(X < α) = 1 - F(α - 1, k, p) = 1 - ∑_i=0^α-1\binom{k}{i} p^i (1-p)^{k-i}. Here, F(α - 1, k, p) represents the cumulative distribution function (CDF) of the binomial distribution. For our previous example, where μ = 0.694, for honest nodes other than the attack target, we can calculate the probability P(X ≥α), which represents the chance of reaching the α majority threshold for the color Red when querying. Plugging in the probability to receive a response supporting Red in a single round, p = 0.4858 from above, we get P(X ≥α) = 1 - F(14, 20, 0.4858) ≈ 0.015. On the other hand, the targeted node has a probability p = 0.7858 of receiving a Red response, and so p_α = P(X ≥α) = 1 - F(14, 20, 0.7858) ≈ 0.756. This means that our targeted node has a p_α≈ 75.6% chance of reaching the α majority for Red when querying k = 20 other nodes, whereas other honest nodes only have a 1.5% chance of the same. Consider the expected number of iterations needed to obtain β = 20 consecutive successes of reaching the α = 15 majority for a color. Let X_β represent the number of trials required to achieve β consecutive successes, with the probability of one success being p_α. From <cit.>, we can use the following formulas: 𝔼[X_β] = (1 - p_α^β) / ((1 - p_α) p_α^β), 𝕍ar[X_β] = (1 - (2β + 1)(1 - p_α) p_α^β - p_α^{2β + 1}) / ((1 - p_α)^2 p_α^{2β}). For the example where p_α = 0.756 for the targeted node, we obtain 𝔼[X_20] ≈ 1095 and σ = √(𝕍ar[X_20]) ≈ 1078. On average, the targeted node needs to query 1,095 times, with a standard deviation of 1,078. We confirmed the expected results experimentally. The effectiveness of the attack can be greatly increased by targeting a large number of nodes rather than just one. Let X_β^k be the number of trials required for any target node among k targeted nodes to achieve β consecutive successes. Assuming the β successes are equally likely to conclude in every round after 19, and assuming the constant network split maintained by the adversary, the expected number 𝔼[X_β^k] is (𝔼[X_β] - 19)/k + 19. We have simulated some experimental scenarios, such as targeting 1000 nodes with n = 3000 and an adversarial stake of f = 750, where the results matched our expectation. While in our experiments we successfully targeted over 0.3n of n = 3000 nodes, future work is needed to understand how big the share of targeted nodes can be in an optimal strategy and with increasing n. In summary, the strength of the attack corresponds exponentially to the share of the adversary stake. On the other hand, the expected required duration of the attack is inversely linear in the overall number of nodes n, as the number of targeted nodes can increase roughly linearly with n. <ref> summarizes the effectiveness of the safety attack for adversaries of different strengths.
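The numbers above can be reproduced with a few lines of code. The sketch below evaluates the binomial tail probability p_α and the expected number of rounds (and standard deviation) until β consecutive successes, using the values from the example (k = 20, α = 15, β = 20, 30% adversarial stake, a 69.4%/30.6% honest split); it is a back-of-the-envelope check of the formulas, not part of the simulator.

```python
from scipy.stats import binom

k, alpha, beta = 20, 15, 20
f_frac, mu = 0.30, 0.694                      # adversarial stake, honest Red share

p_target = mu * (1 - f_frac) + f_frac         # Red-vote probability seen by the target
p_other = mu * (1 - f_frac)                   # ... seen by any other honest node

# Probability of reaching the alpha-majority for Red in one query round.
p_alpha_target = binom.sf(alpha - 1, k, p_target)   # ~0.756
p_alpha_other = binom.sf(alpha - 1, k, p_other)     # ~0.015

def consecutive_success_stats(p, b):
    """Mean and std. dev. of the number of trials until b consecutive successes."""
    mean = (1 - p**b) / ((1 - p) * p**b)
    var = (1 - (2*b + 1) * (1 - p) * p**b - p**(2*b + 1)) / ((1 - p)**2 * p**(2*b))
    return mean, var**0.5

mean, std = consecutive_success_stats(p_alpha_target, beta)
print(f"p_alpha(target)={p_alpha_target:.3f}, p_alpha(other)={p_alpha_other:.3f}")
print(f"E[X_beta]={mean:.0f}, sigma={std:.0f}")      # ~1095 and ~1078
```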
§ AVALANCHE PROTOCOL In this section, we explain how the Avalanche protocol builds on Snowball to incorporate optimizations and additional features. §.§ DAG To enhance throughput and enable parallel processing of transactions, the Avalanche protocol builds a directed acyclic graph (DAG) over transactions, instead of a linear chain. Each transaction is represented as a node in the DAG. Furthermore, transactions in the DAG are interconnected through parent-child relationships: a transaction T refers to older transactions known as its parents 𝖯𝖺𝗋𝖾𝗇𝗍𝗌(T). We denote the parent relation T^'∈𝖯𝖺𝗋𝖾𝗇𝗍𝗌(T) by T^'← T. If T" is reachable by parent links from T, we say that T" is an ancestor of T, or T"∈𝖠𝗇𝖼𝖾𝗌𝗍𝗈𝗋𝗌(T), and that T is a descendant of T", or T ∈𝖣𝖾𝗌𝖼𝖾𝗇𝖽𝖺𝗇𝗍𝗌(T"). §.§ Vertex In order to limit the coordination overhead, a node in the Avalanche DAG is not an individual transaction but rather a batch of transactions known as a vertex. A vote for a vertex is considered a vote for all transactions contained within that vertex. This allows Avalanche to facilitate efficient queries, while still maintaining confidence levels and a conflict set for each individual transaction. When a vertex is accepted, all transactions within it are accepted. When a vertex is rejected, valid transactions in that vertex may be batched into a new vertex, by removing the non-preferred transactions that caused the vertex to be rejected. When a node creates a vertex V, it chooses parents for V that are currently preferred. When a user submits a payload transaction tx, a node creates a transaction T ⟨ tx,𝒟⟩ for that payload. It includes the payload tx, along with the set of UTXO IDs that will be consumed if the transaction is accepted, and the list 𝒟 of dependencies on which this transaction relies. Each dependency must be accepted before this transaction can be accepted. The node then batches this transaction T ⟨ tx,𝒟⟩ with other pending transactions into a vertex. The node assigns one or more parents to this vertex, allowing it to be added to the DAG. We define an Avalanche transaction T as preferred if it is the preferred transaction in its conflict set P_T, i.e., if transaction T has the highest confidence among the conflicting transactions. Each node u calculates a confidence value for each transaction T, denoted by d_u[T]. This confidence value is defined as the sum of the chits received by T and all its descendants <cit.>: d[T] = ∑_T^'∈𝒯_u : T^'∈𝖣𝖾𝗌𝖼𝖾𝗇𝖽𝖺𝗇𝗍𝗌(T) ∪{T} c_u,T^'. Here, 𝒯_u represents all the transactions currently known by node u in its view of the DAG, and c_u,T^' represents the chit received by transaction T^'. c_u,T can only take two values: 0 or 1. Node u queries transaction T only once, as the votes on the descendants of T also serve as queries and votes on T. Specifically, c_u,T = 1 if transaction T received a chit when u queried for it, and c_u,T = 0 otherwise. As a reminder, receiving a chit for transaction T means that node u received an approval rate of at least α = 15 when it queried k = 20 other nodes to determine whether T was their preferred transaction. The confidence value of T (and thus its status as accepted or rejected) is then updated based on the queries made on its descendants. We say that a transaction T is strongly preferred if T is preferred and all its ancestors are also preferred in their respective conflict sets. An Avalanche transaction T is considered virtuous if it conflicts with no other transactions or if it is strongly preferred. Consequently, a virtuous vertex is a vertex where all its transactions are virtuous. Similarly, a preferred or strongly preferred vertex is one where all its transactions are preferred or strongly preferred, respectively.
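The confidence computation can be illustrated with a small sketch: given a DAG with chits recorded from past queries, d[T] sums the chit of T and the chits of every vertex that reaches T via parent links. The toy graph and chit values below are made up for illustration and are not actual Avalanche code.

```python
# Toy DAG: each key lists its parents (edges point from child to parent).
parents = {
    "A": [],            # genesis vertex
    "B": ["A"],
    "C": ["A"],
    "D": ["B", "C"],
}
chit = {"A": 1, "B": 1, "C": 0, "D": 1}   # chits recorded from past queries

def ancestors(v):
    """All vertices reachable from v by following parent links."""
    seen, stack = set(), list(parents[v])
    while stack:
        p = stack.pop()
        if p not in seen:
            seen.add(p)
            stack.extend(parents[p])
    return seen

def descendants(t):
    """All vertices that have t as an ancestor, i.e. t's descendants."""
    return {v for v in parents if t in ancestors(v)}

def confidence(t):
    """d[t]: chit of t plus the chits of all of t's descendants."""
    return chit[t] + sum(chit[v] for v in descendants(t))

print({t: confidence(t) for t in parents})
# A accumulates the chits of B, C, and D as well; D counts only its own.
```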
The parents of a vertex are randomly chosen from the virtuous frontier set 𝒱ℱ, which consists of the vertices at the frontier of the DAG that are considered virtuous: 𝒱ℱ = { T ∈𝒯 |𝗏𝗂𝗋𝗍𝗎𝗈𝗎𝗌(T) 𝗏𝗂𝗋𝗍𝗎𝗈𝗎𝗌(T^') ∀ T^'∈𝒯 : T ← T^'} The notation 𝗏𝗂𝗋𝗍𝗎𝗈𝗎𝗌(T) indicates that T is virtuous. In other words, 𝒱ℱ is the set of vertices that are virtuous, and have no virtuous children. §.§ From Snowball to Avalanche The Avalanche protocol runs a Snowball instance on the conflict set of each transaction T once a node hears about a new transaction that gets appended to the DAG. This means that when a new transaction T is received, a validator will query k other random nodes to determine if T is their preferred transaction. The queried nodes will respond positively only if transaction T and its ancestors in the DAG are also their preferred transactions within their respective conflict sets. Instead of querying a Snowball instance for each individual transaction, Avalanche batches transactions into a vertex and instantiates a Snowball instance for that vertex, checking if all the transactions within that vertex and its ancestors are valid. When a node is queried about the preference of transaction T and its ancestors, it provides not just a binary vote as in Snowball, but rather responds with its entire virtuous frontier 𝒱ℱ based on its local view. This allows the respondents to specify which ancestors are not preferred if T is not strongly preferred. The querying node u collects the virtuous frontier of the k queried nodes. For each virtuous frontier 𝒱ℱ^' sent by a node w as a vote, we add the transactions T^' from 𝒱ℱ^' and the ancestors of T^' to a set 𝒢[T,w], which represents the positively reported transactions of w when asked to vote for T. We then count how many times node w, when queried for T, has acknowledged a transaction T^' as virtuous, and store this in the counter ack[T,T^']. We then run a Snowball instance for every ack[T,T^']: If ack[T,T^'] received more than α votes it indicates that the α majority of the k queried validator agree that T^' is preferred. We then increase the consecutive counter for T^' if it was also the preferred transaction in the last vote. The above procedure of voting on a vertex containing a single transaction can be generalized for vertices containing multiple transactions. Finally, there are two ways in which a vertex V, and consequently all the transactions it contains T ∈ V, can be accepted, provided that all the ancestors of V have also been accepted. The first way is if none of its transactions T ∈ V conflict with any other transactions, and the vertex V received β_1 consecutive successes. In this case, the vertex and all its transactions are accepted by node u. The second way is if some transactions T ∈ V have other transactions in their conflict sets, and the vertex V receives β_2 consecutive successes. In this case, the node accepts the vertex V and all its transactions. The Avalanche protocol denotes β_1 as and β_2 as , and naturally β_1 < β_2. §.§ Liveness Attack Suppose that two transactions (both with accepted virtuous ancestors) T batched in vertex V and T^' batched in vertex V^' are conflicting. Recall that the Snowball liveness attack consisted of a strategy where the adversary tried to ensure that the split between Red and Blue was always close enough to 50% each. 
§.§ Liveness Attack Suppose that two transactions (both with accepted virtuous ancestors) T batched in vertex V and T^' batched in vertex V^' are conflicting. Recall that the Snowball liveness attack consisted of a strategy where the adversary tried to ensure that the split between Red and Blue was always close enough to 50% each. Here, the approach is similar, except that we have to ensure that, on average, 50% of the network has V in their virtuous frontier or as an ancestor of their virtuous frontier, and the other 50% of the nodes have V^' in their virtuous frontier or as an ancestor of their virtuous frontier. The binary attack can be transposed to one where the adversary responds with the virtuous frontier 𝒱ℱ, with V an ancestor of the vertices in 𝒱ℱ, or responds with the virtuous frontier 𝒱ℱ^', with V^' an ancestor of the vertices in 𝒱ℱ^'. The intuition behind this attack is that half of the nodes will adopt a virtuous frontier that contains vertex V as a virtuous vertex, and the other half of the nodes will adopt a virtuous frontier that contains V^' as a virtuous vertex. At every iteration of the loop, the adversary needs to maintain those two conflicting virtuous frontiers 𝒱ℱ and 𝒱ℱ^', grow the DAG such that some new valid vertices are appended to the conflicting 𝒱ℱ and 𝒱ℱ^', and respond accordingly with either one of the virtuous frontiers using the same technique that was used for the Snowball liveness attack. §.§ Safety Attack The safety attack from Snowball to Avalanche can be transposed in the same way as was explained above for the liveness attack. Similarly, we will try to maintain a network split that does not converge: a fraction μ of the nodes will prefer a virtuous frontier 𝒱ℱ that contains v as an ancestor, and a fraction ν of the nodes will prefer a virtuous frontier 𝒱ℱ^' that contains v^' as an ancestor. For one targeted node, the adversary will respond exclusively with the virtuous frontier 𝒱ℱ instead of trying to maintain a split. This way, we can make the targeted node accept vertex v while all the other nodes are still undecided. Once this is done, the adversary can unanimously respond with the virtuous frontier 𝒱ℱ^' to make the rest of the nodes accept v^' in order to break safety. Such an attack can be instantiated by any adversary that creates conflicting (double spending) transactions T and T^', batches them in vertices v and v^', and conducts the attack to make some nodes accept T and some other nodes accept T^', thus resulting in a successful double spending.
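The balancing rule at the heart of both attacks can be sketched in a few lines of Python. This is a toy model of ours, not the simulation code used for the experiments: the adversary simply reinforces whichever conflicting side currently has fewer supporters, and honest and adversarial node ids are assumed to be distinct.

import random

def adversary_response(supporters_v, supporters_v_prime, frontier_v, frontier_v_prime):
    # Report the frontier of the currently weaker side, so that neither conflicting
    # vertex ever accumulates beta_2 consecutive alpha-majorities.
    return frontier_v if supporters_v <= supporters_v_prime else frontier_v_prime

def one_query(honest_prefs, adversary_ids, frontier_v, frontier_v_prime, k=20):
    # One query round as seen by a single honest node sampling k peers.
    # honest_prefs maps an honest node id to "V" or "V'"; adversary_ids is a set of ids.
    n_v = sum(1 for side in honest_prefs.values() if side == "V")
    n_v_prime = len(honest_prefs) - n_v
    sample = random.sample(list(honest_prefs) + list(adversary_ids), k)
    votes = []
    for peer in sample:
        if peer in adversary_ids:
            frontier = adversary_response(n_v, n_v_prime, frontier_v, frontier_v_prime)
            votes.append("V" if frontier is frontier_v else "V'")
        else:
            votes.append(honest_prefs[peer])
    return votes  # the querying node then checks whether either side reached alpha = 15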
§ CONSENSUS OR BROADCAST Consensus is a property that allows multiple parties to reach agreement on transactions, either accepting or rejecting them. In the context of blockchain systems, consensus can be defined by the following set of properties: Each honest validator observes some transaction from a set of conflicting transactions {t_0, t_1, …}. Consensus satisfies the following properties:
* Totality: If some honest validator accepts a transaction, every honest validator will eventually accept the same transaction.
* Agreement: No two honest validators accept conflicting transactions.
* Validity: If every honest validator observes the same transaction (there are no conflicting transactions), this transaction will be accepted by all honest validators.
* Termination: Some transaction from the set will eventually be accepted by honest validators.
As implied by Agreement and Termination, a consensus protocol enables nodes to reach an agreement on conflicting transactions, where multiple valid transactions consuming the same input are involved. In such cases, all nodes should unanimously accept one of the conflicting transactions. As we have established, the Avalanche protocol features a relatively weak, sublinear resilience to liveness attacks involving conflicting transactions. To address this issue, Avalanche introduces the notion of virtuous transactions, which can enjoy better guarantees. In other words, even for a relatively small adversary, Avalanche does not satisfy the Termination property, and only guarantees termination if the Validity condition is also met: all honest validators observe just one valid transaction and no conflicting ones. The termination property becomes crucial in scenarios involving smart contracts, where conflicting transactions may arise, such as two users attempting to purchase the same product. To address this limitation, the Avalanche team introduced a different solution for the C-Chain and P-Chain, specifically designed to execute smart contracts required for such blockchain applications. As described by <cit.>, consensus is not necessary for payment systems, and indeed there exist payment systems providing similar guarantees to Avalanche while being unable to support general applications such as smart contracts: broadcast-based payment systems <cit.>. The provided guarantees of a Byzantine reliable broadcast can be defined as follows: Each honest validator observes some transaction from a set of conflicting transactions {t_0, t_1, …}. Byzantine reliable broadcast satisfies the properties of Consensus, without the Termination property. Thus, referring to Avalanche as a consensus protocol can be misleading, as it is more akin to broadcast-based payment systems. While the performance of Avalanche is given prominence, a different solution has been used for the C-Chain and P-Chain, where it is required. § RELATED WORK Amores-Sesar et al. <cit.> analyze the Avalanche protocol. They explain the protocol with pseudocode and introduce a property of Avalanche that was omitted here: no-op transactions, stateless transactions that nodes add to the DAG to make sure progress is always made on the finalization of older transactions. The paper introduces a liveness attack (different from ours) that could be possible if the naive way of voting for a transaction and its ancestors with just a binary yes/no vote was implemented. However, this is not the case, as emphasized at the end of the paper with the pseudocode involving the virtuous frontier concept. Ash Ketchum and Misty Williams <cit.> raise concerns similar to ours in their recent write-up, namely that Avalanche is not a consensus protocol. Most recently, a follow-up analysis by Amores-Sesar et al. <cit.> formalized the need for at least Ω(log n + β) rounds for consensus with the Snow family of protocols. They then proposed a specific modification of Snowflake and Snowball implementing this change.
Through our analysis, we have demonstrated that the Snowball protocol, the foundation of Avalanche, is vulnerable when conflicting transactions are present. This weak resilience in the presence of conflicting transactions is a critical limitation, as it makes the protocol unable to support general smart contracts. This explains why Avalanche actually uses a different protocol, called Snowman, which uses a linear blockchain (instead of a DAG) in order to totally order those transactions, unlike what is done for payments <cit.>. §.§.§ Future Work The basis of our attacks relies on the presence of conflicting transactions. Future work could analyze how Avalanche distinguishes unique transactions, and determine the feasibility for an adversary to arbitrarily create conflicting transactions from another transaction T broadcast by an honest node, for example, by creating a copy with different parents in the DAG.
http://arxiv.org/abs/2409.03054v1
20240904200415
Context-Aware Image Descriptions for Web Accessibility
[ "Ananya Gubbi Mohanbabu", "Amy Pavel" ]
cs.HC
[ "cs.HC" ]
The University of Texas at Austin Austin, TX USA [email protected] The University of Texas at Austin Austin, TX USA [email protected] § ABSTRACT Blind and low vision (BLV) internet users access images on the web via text descriptions. New vision-to-language models such as GPT-V, Gemini, and LLaVa can now provide detailed image descriptions on-demand. While prior research and guidelines state that BLV audiences' information preferences depend on the context of the image, existing tools for accessing vision-to-language models provide only context-free image descriptions by generating descriptions for the image alone without considering the surrounding webpage context. To explore how to integrate image context into image descriptions, we designed a Chrome Extension that automatically extracts webpage context to inform GPT-4V-generated image descriptions. We gained feedback from 12 BLV participants in a user study comparing typical context-free image descriptions to context-aware image descriptions. We then further evaluated our context-informed image descriptions with a technical evaluation. Our user evaluation demonstrates that BLV participants frequently prefer context-aware descriptions to context-free descriptions. BLV participants also rate context-aware descriptions significantly higher in quality, imaginability, relevance, and plausibility. All participants shared that they wanted to use context-aware descriptions in the future and highlighted the potential for use in online shopping, social media, news, and personal interest blogs. [500]Human-centered computing → Accessibility → Accessibility systems and tools [Teaser figure] Our system provides context-aware descriptions for images by considering the image along with extracted details from the image source to craft a description that uses correct visual terminology (e.g., chenille texture rather than velvet) and focuses on the relevant item (e.g., the sofa rather than the room). From left to right, the figure shows the e-commerce webpage where the image appears, the image the user selects, the context-free description generated by GPT-4V, and the context-aware description produced from the selected image together with the extracted webpage context.
Context-Aware Image Descriptions for Web Accessibility Amy Pavel September 9, 2024 ====================================================== § INTRODUCTION Images are a primary way that people share information online. For example, people post selfies on social media, product photos on shopping sites, and scenic landscapes on travel websites. Blind and low vision (BLV) web users often use text descriptions to understand the content of images <cit.>, but images online often lack high-quality text alternatives such that the vast majority of images remain inaccessible <cit.>. Recent vision-to-language models such as GPT-V <cit.>, Gemini <cit.>, and LLaVa <cit.> can now convert images to detailed text descriptions on-demand, and new applications including an NVDA OpenAI plugin <cit.> and Be My AI <cit.> provide access to such descriptions. Existing applications generate descriptions for the image alone, ignoring the surrounding context of the webpage where the image appears. In contrast, humans change what they describe in an image based on the image context <cit.>. For example, a human description of the same room may be different for an AirBnB listing (e.g., number of seats, amenities, views) compared to a sofa listing (e.g., details of the sofa style). Existing image description applications miss an opportunity to tailor their descriptions to specific webpage contexts. Prior work and image description guidelines have established that the context of the image, including where it appears, and its purpose, should inform the image description <cit.>. For example, Stangl et al. identified that the image source and information goal inform the information wants of BLV audience members <cit.> and guidelines for image descriptions state that “context is key” <cit.> for deciding the content and terminology for image descriptions. Computer Vision researchers have explored using the surrounding context of an image to change the style of descriptions <cit.> and improve the specificity of descriptions <cit.> (e.g., replacing named entities <cit.>, tailoring alt text to Twitter post text <cit.>). These prior approaches improve the description accuracy and specificity (e.g., by replacing “woman” with “Mira”), but all prior approaches were created for an earlier class of vision-to-language models and thus were limited to simple context input (e.g., a short post text, a sentiment) and generated short descriptions. Prior work has not yet explored using rich webpage context to inform image descriptions or explored the impact of adding context for long descriptions provided. Such prior work has also not yet collected feedback from BLV audience members on automatically generated context-informed descriptions. Motivated by a rich history of accessibility research <cit.> and guidance <cit.> on the importance of context in descriptions, we build on such knowledge to explore how to automatically tailor image descriptions to their webpage context. In this work, we present a system to automatically provide on-demand context-aware image descriptions for BLV users browsing the web. We prototype our system as a Chrome Extension that describes images on-demand as the user browses the web. In particular, when a user selects an image to describe, the extension sends an image with the webpage to our pipeline. Our pipeline first extracts relevant webpage context, including the webpage text, title, and URL, along with the image and its alt text. 
Our pipeline scores all webpage text according to its relevance to the image (e.g., based on position and content similarity), then provides visually relevant details from the extracted context to GPT-V to produce a context-aware image description. The user receives context-aware short and long descriptions in our Google Chrome Extension such that they can flexibly access additional detail. We evaluated our system accuracy with a pipeline evaluation to assess risks of hallucinations and subjectivity for context-aware descriptions and conducted a user study with 12 BLV participants who frequently used AI description tools comparing context-aware and context-free descriptions. All participants wanted to use our system in the future and reported enthusiasm for context-aware descriptions. Participants rated context-aware descriptions significantly higher than context-free descriptions across all metrics we measured (quality, imaginability, relevance, and plausibility). Participants reported the context-aware descriptions to be focused on more relevant details than context-free descriptions (especially for online shopping images) and appreciated the inclusion of concrete terminology from the surrounding text (especially for news images). Participants highlighted that the system provided relevant visual details tailored to the webpage audience and thus would be useful in areas of expertise (e.g., car details on a car blog). While prior work established a need for context-aware descriptions <cit.>, participants in our study also highlighted risks of new potential automated descriptions (e.g., specific details may produce unwarranted trust, privacy concerns). In summary, we contribute: * An approach for automatically generating rich context-aware image descriptions meeting an existing demand. * A technical evaluation assessing the risks of context-aware descriptions. * A user study with 12 BLV participants comparing context-aware to context-free image descriptions. § RELATED WORK As we aim to create context-informed image descriptions for images encountered on the web, our work relates to prior work in describing images on the web and considering context in image descriptions. §.§ Describing Images on the Web Images are a primary medium for online communication across social media, blogs, tutorials, and more. BLV web users often access such images by reading text descriptions of the image content with screen readers or Braille displays. The Web Content Accessibility Guidelines (WCAG) thus request that creators “provide text alternatives for any non-text content” <cit.> — and text descriptions have been the standard since 1995 <cit.>. Still, creators and platforms often fail to provide image descriptions such that most images online lack high-quality alternative text <cit.>. To make images accessible, prior research proposed human-powered <cit.>, automated <cit.>, and hybrid approaches <cit.> to craft descriptions for images. Existing approaches for automated and hybrid authored descriptions recognize the potential for inaccuracies of automated methods and mitigate this risk with human authoring. For example, WebInSight <cit.> and TwitterA11y <cit.> generated descriptions but fell back to human-produced captions when automation failed, while Mack et al. <cit.> and Singh et al. <cit.> explored human-AI co-created alt text. The recent progress of vision-to-language models has sparked a new era of image accessibility tools that can produce detailed and high-quality descriptions on-demand. 
BLV users can use tools like OpenAI's GPT-V <cit.> and Google's Gemini <cit.> directly to upload images and receive descriptions, or use screen reader plugins (e.g., an NVDA plugin <cit.>) to get in-place descriptions while browsing the web. While such tools now produce more accurate and detailed descriptions, they lack the context afforded to human image description authors (e.g., where and why the image appears in a website), and thus miss the opportunity to deliver context-relevant details. We explore how to integrate context into such descriptions. As image descriptions are static, prior research has also explored alternative formats to enable gaining more information on demand by accessing a location in the image <cit.>, gaining information through progressive detail or information type <cit.>, selecting a description modality <cit.>, or by asking questions about the image <cit.>. These interactive techniques let BLV audiences get information on demand but require extra effort to access additional information. We investigate how context can improve the information that BLV audiences initially receive. §.§ Why Context Matters for Descriptions An image description should share what the BLV reader needs to know to understand the page content. Accessibility guidelines offer high-level guidance on how to write high-quality image descriptions <cit.> by encouraging writers to be succinct <cit.> and to consider the context of the image <cit.> as the optimal content, tone, and terminology for an image description depend on the use of the image. For example, a photo of a woman crossing a street may be described as “Mari smiles while crossing a crosswalk in New York.” on Mari's social media, or “A woman wears a short-sleeve chiffon knee-length sundress with a pastel pink and blue floral pattern.” on a dress shopping website. As image description requirements vary based on context, prior research and guidelines explored information wants across different types of image contexts (e.g., for social media posts <cit.>, memes <cit.>, red carpet looks <cit.>, journalism <cit.>, and generating images <cit.>). Prior studies specifically considered the role of context in BLV audience members information wants <cit.> and preferences <cit.> for image descriptions. Stangl et al. identified that information wants change for images encountered across different sources (e.g., wanted appearance details for dating profile images) <cit.>, and even for the same image encountered in different contexts (e.g., wanted attributes of a bazaar for news but details of shirts in the bazaar for e-commerce) <cit.>. Kreiss et al. further found that BLV audience members consider context relevance in their image description quality ratings <cit.>. These studies confirmed that image context is crucial to determine what information to include in image descriptions, but vision-to-language model tools do not yet consider the context of the image when providing image descriptions. Our work seeks to explore the potential of context-aware image descriptions by contributing a technical approach for creating them and conducting interviews with blind and low-vision users who compare automatically generated context-aware descriptions to context-free descriptions. §.§ Augmenting Descriptions with Context Prior work also considered adding external text context beyond the image to inform the linguistic style <cit.> and content <cit.> of automated descriptions. 
For example, to guide linguistic style, Computer Vision researchers have explored providing captions based on positive or negative sentiment <cit.>, personality type <cit.>, style <cit.> (e.g., humorous, romantic), and personal history <cit.>. Prior work has also investigated informing the content of descriptions by gaining information from the surrounding context <cit.>. Everingham et al. identified visual characters in movie frames based on movie scripts <cit.> and Biten et al. used named entities from news articles to identify out-of-vocabulary entities for news image captions <cit.> (e.g., replace “woman” with “Mira”, or “farm” with “Cherryville Lane”), but such work was not aimed at accessibility and thus did not motivate or evaluate how context-rich descriptions may impact blind users. Further, such work investigates changing the named entities in the description rather than the description focus itself. Our work is most related to Srivatsan et al.'s work that creates alternative text for Twitter images considering the associated Tweet text <cit.>. While such prior work supports images with short plain text context (e.g., 240-character Tweet text <cit.>, sentiment labels <cit.>), we support web images with long complex context (the entire webpage). Thus, we explore how to prioritize relevant context details, ignore irrelevant context details, and consider a variety of types of context (e.g., alt text, webpage title, surrounding text) when crafting descriptions. All prior work also used earlier vision-to-language models that produced short single-sentence descriptions (e.g., BLIP-2 <cit.>), such that our work is the first to investigate rich adaptation of multi-sentence descriptions to context (e.g., adding rich details, introducing and defining terminology). We also uniquely seek feedback from BLV audience members who are frequent users of AI description tools to learn about the trade-offs of automatic context-tailored descriptions. § DESIGNING CONTEXT-AWARE IMAGE DESCRIPTIONS To design a system to provide context-informed image descriptions for images encountered on webpages, we first examined what type of context appeared around images on webpages to determine what type of context a system may consider, and then we reviewed image description guidelines and prior literature to inform our technical pipeline. We synthesize these activities as system design goals. §.§ How Webpage Context Informs Descriptions Prior work identified that the context of the image at a high level (e.g., source, purpose) might inform image descriptions <cit.> and that for social media posts, the post text can inform image captions <cit.>. However, the context on the web is complex (Figure <ref>), and users may not want to manually enter information goals for each image. As high-level guidelines suggest that context is important for deciding what to describe <cit.>, we aim to surface how low-level interpretations of context may impact image descriptions, as required for adding context to descriptions, in order to inform system design. To broadly understand what type of context on the web we may extract to inform context-aware image descriptions, we examined webpages across a variety of contexts (examples in Figure <ref>). We noted types of webpage context and potential impacts on descriptions (Table <ref>). Webpage context can guide description focus as the focus of the description depends on the purpose the image serves on the webpage <cit.>.
For example, in the shopping example, the webpage title and adjacent text suggest the focus of the page is the 3 piece furniture set, and thus the furniture should be described in detail instead of the windows (Figure 2, Shopping). Webpage context may also guide description tone and terminology as the tone and terms in the image description should be appropriate for the audience viewing the description <cit.>, the surrounding website text provides tone and terminology that may be used in the image description. For example, identifying Stephen Curry in the image may be useful for social media followers of Stephen Curry (Figure 2, Social Media). Terms in the article may help resolve potentially ambiguous elements in the image to improve description accuracy. For example, if the image displays a partially submerged vehicle the caption may help clarify that the vehicle is a Ford Explorer rather than a Honda CRV (Figure <ref>, News). Finally, the image on the page alone and relative to other media and content may guide level of detail and presence of the description. For example, the purpose of a main image on an e-commerce page may be informative <cit.> and thus should receive detailed descriptions while small thumbnail images may be primarily for navigation and thus do not receive a detailed description <cit.> (e.g., Figure 2, Shopping). Purely decorative images based on the surrounding context may also not receive a description <cit.>. §.§ Design Goals of Context-Aware Descriptions In crafting descriptions for BLV audience members we aimed to add context to descriptions while preserving existing image description guidelines. Based on prior literature and guidelines, we surface 5 key design goals: * D1. Descriptions should be objective <cit.>. * D2. Descriptions should be as concise as possible <cit.>. * D3. Prioritize information in descriptions to fit the context (e.g., main topic, purpose) <cit.>. * D4. Language in descriptions should fit the context (e.g., names, places, objects) <cit.>. * D5. The level of description provided should be informed by the context (e.g., decorative, informative) <cit.>. In our system, we address D1-D4 but leave D5 as an interesting avenue for future work. We select D1-D4 as we intend to provide descriptions on-demand rather than for all images to save API costs. As users query for descriptions, we will assume that they would like enough detail to understand what is in the image. When context-aware descriptions become platform-supported, we may extend this work to D5 to run our pipeline on less important decorative images. For D1, we aim to provide objectivity and reduce hallucinations in our technical pipeline (e.g., by focusing descriptions on details with visual evidence). For D2, we aim to provide concise descriptions that are succinct rather than unnecessarily wordy (e.g., repetitive or redundant). We provide concise descriptions at two levels of detail (short and long) to reflect that users have different preferences for description lengths <cit.>. For D3 and D4, we prioritize discussing visual details that are important to the context using language appropriate for the target audience. § PROTOTYPE SYSTEM To gain feedback from blind and low vision users on automated context-aware descriptions, we built a prototype system to address guidelines D1-D4. Our prototype provides both context-free (existing) and context-informed (new) image descriptions at multiple levels of detail (Figure <ref>). 
To provide users access to descriptions, we created a Google Chrome Extension[https://github.com/UT-CS-HCI/context-aware-image-descriptions] that enables users to select an image as they browse the web to get context-aware descriptions for that image (Figure <ref>). §.§ Image Description Interface We created a Chrome Extension that users can install and activate for a webpage by clicking a button. Our system opens a new extension window to provide descriptions. While prior systems replaced alternative text directly in-place rather than in a separate window <cit.>, our system preserves existing alternative text (Figure <ref>) and enables flexible access to context-aware descriptions. Users can click an image to receive descriptions for that image in the extension window (Figure <ref>). The extension window provides both short and long context-aware and context-free descriptions to mimic existing “alt text” and image descriptions (or “long desc”) respectively. We provide the short description for both context-aware and context-free descriptions first, and users can optionally expand the longer descriptions on-demand. We include both context-aware and context-free descriptions to enable users to potentially recognize errors through transparency <cit.>. §.§ Image Description Pipeline When a user clicks an image, our system initiates a pipeline to provide context-aware descriptions (Figure <ref>). In particular, we extract relevant context from the webpage HTML including the webpage title, webpage URL, webpage text, and the alt text of the selected image. We then process this extracted context to distill information most likely related to the image and then we compose the final context-aware descriptions (short and long). §.§.§ Extracting Webpage Context As text on the webpage serves as the surrounding context for the image, we extract potentially relevant text from the webpage using the webpage's HTML. We extract the webpage title by using the title tag in the page, as the title appears as the browser tab title and often communicates the key purpose of the webpage. We also extract the URL of the page and the webpage text. To extract the webpage text, we select all text elements whose HTML tags are typically used for text, such as paragraph and heading tags. We also extract the existing image alt text for the clicked image (from its img tag), as the alternative text occasionally contains a useful short description of key image content that context-aware descriptions can further expand upon.
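The extension performs this extraction in JavaScript inside the browser; purely for illustration, a rough Python equivalent could look as follows. This is our own sketch with a guessed set of text tags, not the extension's actual code.

from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

TEXT_TAGS = ["p", "span", "div", "h1", "h2", "h3", "h4", "h5", "h6"]  # guessed tag set, not the paper's exact list

def extract_webpage_context(url: str, image_src: str) -> dict:
    # Collect the title, candidate text segments, and the selected image's alt text.
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")

    title = soup.title.get_text(strip=True) if soup.title else ""

    # Candidate text segments; this simple pass may duplicate text from nested containers.
    segments = [tag.get_text(" ", strip=True) for tag in soup.find_all(TEXT_TAGS)
                if tag.get_text(strip=True)]

    # Alt text of the selected image, if the page provides one.
    alt_text = ""
    for img in soup.find_all("img"):
        if urljoin(url, img.get("src", "")) == image_src:
            alt_text = img.get("alt", "") or ""
            break

    return {"url": url, "title": title, "segments": segments, "alt_text": alt_text}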
§.§.§ Processing Webpage Context We initially attempted to add the raw HTML or extracted webpage context with the image directly to a vision language model (GPT-4V <cit.>) to craft a context-aware description. However, the model tended to repeat the webpage HTML or extracted text itself rather than describing the provided image. For example, GPT-4V summarized a news article rather than describing the provided image associated with the news article. Thus, to encourage the model to describe the image rather than the webpage context directly, we further processed the webpage context to surface potentially relevant details. First, we provided an image relevance score for each extracted text segment on the webpage that considers spatial and content relevance. Specifically, the image relevance score considers the relative position (proximity and layout) of each text segment to the image on the page and the content similarity between the image content and text segment content. We compute proximity by first computing the distance from the center of the image to the closest edge of the text segment. We then normalize the proximity to achieve a score between 0 and 1 by dividing the computed proximity by the maximum possible proximity, i.e. the distance between the image and the furthest text segment. We compute the layout score by first providing a `top', `bottom', `left', or `right' property to describe the position of the closest edge of the text segment to the center of the image. We then assign a layout score of 0.8 for `top' and `bottom' or a layout score of 0.9 for `left' and `right'. We assign a higher score for left and right as left and right positioning was less common overall (e.g., on e-commerce rather than blogs or news articles), but was more likely to be used specifically for image details (e.g., e-commerce details to the left and right). The specific scores were intentionally close together such that they would primarily be used for breaking ties in proximity. We finally determine content similarity using the CLIP score <cit.> between the text segment and image. We truncate text segments to 77 tokens to input them into CLIP to match CLIP's maximum input length. We immediately filter out all text elements with CLIP scores lower than 0.001 as such text segments are highly unlikely to be related to the image (e.g., often navigation bar links or advertisements) and use remaining text segments for the remainder of the pipeline. We compute the final image relevance score for the text segment as imageRelevanceScore = 0.55*proximityScore + 0.1*layoutScore + 0.35*similarityScore. We determined the score weights empirically in early testing to acknowledge the relative importance of position and content factors.
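A Python sketch of this scoring is shown below. The weights and thresholds are the ones stated above, while the CLIP checkpoint and the exact normalization of the similarity score are our assumptions rather than details given in the text.

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

W_PROXIMITY, W_LAYOUT, W_SIMILARITY = 0.55, 0.10, 0.35  # weights from the formula above
CLIP_FILTER_THRESHOLD = 0.001                            # segments below this similarity are discarded

_model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")        # assumed checkpoint
_processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_similarity(image: Image.Image, text: str) -> float:
    # Cosine similarity between CLIP embeddings of the image and the (truncated) text segment.
    inputs = _processor(text=[text], images=image, return_tensors="pt",
                        padding=True, truncation=True, max_length=77)
    with torch.no_grad():
        out = _model(**inputs)
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    return float((img * txt).sum())

def layout_score(side: str) -> float:
    return 0.9 if side in ("left", "right") else 0.8

def image_relevance_score(distance_px: float, max_distance_px: float,
                          side: str, similarity: float) -> float:
    proximity = distance_px / max_distance_px  # normalized to [0, 1] as described above
    return W_PROXIMITY * proximity + W_LAYOUT * layout_score(side) + W_SIMILARITY * similarity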
To focus our descriptions on the website purpose, we also obtain a website purpose descriptor by extracting the URL of the webpage and prompting GPT-4 to “Identify the domain of the web link, determine the category of the webpage in [ecommerce, news, educational...] and the purpose of the website in short.” (see A.1 for the full prompt[We include select prompt details throughout this section for clarity, and include the full prompts with further instructions, formatting details, and sample outputs in the appendix. We refer to the relevant appendix section A.1-B.3 to indicate prompt location for each part of the pipeline.]). We then provided the image and initial website purpose to GPT-4V to obtain an initial context-aware image description with the prompt: “Describe the visual details of the element(s) in focus in the image for blind and low vision users to reinforce the purpose of the webpage” (A.2). The initial context-aware description tailors the description of visual details from the image but lacks specific details from the surrounding webpage context, so we next extract relevant visually concrete text from the webpage. §.§.§ Extracting Visually Concrete Text. To encourage our final description to attend to parts of the context that are relevant to the visual content in the image, we provide the initial context-aware image description, the alt text, the title, and the extracted context text with image relevance scores along with the image to GPT-4V to extract visually concrete text — i.e. words or phrases from the initial image description (A.3) and context text segments (A.4) that can be clearly seen in the image — and the elements in the image they are associated with. For example, a visually concrete term in the context text may be “Rose Garden” and the associated visible element in the image may be “Flowers in the background”. To remove redundant visual concepts extracted from different parts of the context, we merge together visually concrete text (A.5) for similar visual concepts and again filter out visually concrete text (A.6) not present in the image to avoid hallucinations with GPT-4V. We replace all names in the visually concrete text with placeholders (e.g., person A, person B) using GPT-4V (A.7). §.§.§ Generating Context-Aware and Context-Free Descriptions To generate context-aware descriptions we instruct GPT-4V to create a description based on the image and the visually concrete text we extracted (A.8). We include image relevance scores for the visually concrete text extracted from the website text to encourage the model to attend to website content that is more likely to be related to the image. We get a description back and replace all name placeholders with names using GPT-4V (A.9). As the model occasionally ignores the names, we also run this step 3 times and then select a description by prompting GPT-4V to: “Choose the best description in [long context-aware descriptions] array based on the highest number of visual details, named entities such as names of people, location, objects, and objectivity”. This process results in our final long context-aware image description, and we also generate a description from the image alone to achieve a long context-free image description (B.1). For each description, we ask GPT-4V to make the descriptions more concise to obtain the short context-aware description (A.10) and short context-free description (B.3).
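The generation step can be summarized with the following Python sketch. The call_gpt4v function is a placeholder rather than a real API wrapper, and the prompts are abbreviated paraphrases of the ones quoted above.

def generate_descriptions(image, visually_concrete, call_gpt4v):
    # visually_concrete: list of (phrase, image_relevance_score) pairs with names already
    # replaced by placeholders (person A, person B, ...).
    evidence = "\n".join(f"- {phrase} (relevance {score:.2f})" for phrase, score in visually_concrete)

    candidates = []
    for _ in range(3):  # three candidates are generated and the best one is kept
        draft = call_gpt4v(image=image,
                           prompt="Describe the image for blind and low vision users, grounded in "
                                  "these visually concrete details:\n" + evidence)
        candidates.append(call_gpt4v(image=image,
                                     prompt="Replace the name placeholders with the real names:\n" + draft))

    long_desc = call_gpt4v(image=image,
                           prompt="Choose the best description based on the highest number of visual "
                                  "details, named entities, and objectivity:\n" + "\n---\n".join(candidates))
    short_desc = call_gpt4v(image=image, prompt="Make this description more concise:\n" + long_desc)
    return long_desc, short_desc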
§.§.§ Implementation Our Chrome Extension interface was implemented using JavaScript. For the backend, we used a Python Flask server with a real-time Firebase <cit.> database to log context-aware and context-free descriptions for the user-selected images. To save on inference costs, we cache descriptions. Specifically, we use a Firebase real-time database to log the image, website, and description details. If the user revisits an image on a webpage, we retrieve a cached description directly from the database and display it. § PIPELINE EVALUATION While we designed our pipeline to reflect our design guidelines, adding context to descriptions comes with the risk of adding hallucinations, subjective information, or irrelevant details into the descriptions. For example, our approach may capture a subjective description of a “jaw-dropping house” or a false claim of a “basketball hoop” from a vacation rental listing and then add these details to the description. Our approach may also pick up details that are not relevant (e.g., a lamp is listed in suggested purchases, and it begins to describe the lamp in a couch listing). Thus, before we gathered user feedback, we first conducted a pipeline evaluation to assess the accuracy, objectivity, and relevance of our context-aware descriptions against several baseline descriptions. Specifically, we evaluated short and long context-aware image descriptions from our pipeline and two baselines for accuracy, objectivity, and relevance. §.§ Dataset & Models We selected a set of 24 images from the web. These images were selected from four categories of websites: e-commerce, news, social media, and blogs. We selected e-commerce, news, and social media from prior work <cit.> and then selected blogs to generally cover other informational content (2 food, 2 travel, and 2 lifestyle). We selected 6 different websites from each of the four categories to represent a range of website structures, and selected one image from each website. The webpages were on average 12170 characters long (σ = 11487) and had 1620 words (σ = 1592 words). We selected the websites and images such that the images had varying levels of alt text from no alt text to high-quality alt text. We ran our system on each image to generate a context-free long and short description (i.e. GPT-4V with no context) and a context-aware long and short description (i.e. GPT-4V with our pipeline to extract context). We also created a context-HTML baseline that provided GPT-4V the full HTML of the page as context, as HTML would contain relevant semantic context details (e.g., title, alt text, web text) (i.e. GPT-4V with HTML as context). Across all methods, the long descriptions were around 2.5x the length of the short descriptions. Long descriptions were on average 172 words (σ = 51) for context-free, 136 words (σ = 49) for context-HTML, and 131 words (σ = 44) for context-aware. Short descriptions were on average 56 words (σ = 14) for context-free, 49 words (σ = 16) for context-HTML, and 57 words (σ = 20) for context-aware. For each of the 24 images we evaluated 6 descriptions for a total of 144 descriptions (708 sentences in total). §.§ Analysis Two researchers, unaware of the description source, evaluated each sentence across all descriptions for accuracy (accurate, inaccurate) and objectivity (objective, subjective). Accuracy refers to whether each sentence contained a hallucination (inaccurate) or not (accurate), to assess whether our context-aware descriptions added hallucinations from the context (e.g., the descriptions mention that a person is wearing a hat, but no hat is present in the image). We considered a statement to be accurate if it contained zero errors and inaccurate if it had at least one error. An error is any text without matching visual evidence (e.g., “4 shoes” for 3 shoes, “a hat” for no hat). This strict binary measure is a lower bound on accuracy. Objectivity assesses whether image descriptions contain subjective details without evidence in the picture (e.g., “the cups are tastefully arranged on the table” vs. “the cups are on the table”), as image description guidelines suggest remaining objective <cit.>, to recognize if context-aware descriptions added subjective details. Relevance refers to the relevance of the sentence to the image given the context. For example, the color of a floor lamp in the background may be typically irrelevant on a dress shopping website. While sighted people often provide image descriptions, BLV and sighted raters may disagree on relevance based on their lived experience, so we also assess relevance in a user evaluation with BLV participants. Two researchers created the codebook by iteratively reviewing the data and refining codebook definitions. The researchers then both coded descriptions for a randomly sampled subset of the images (3 images with 99 total sentences, >10% of the data) and achieved a moderate to substantial agreement across all codes (Cohen's κ=0.53-0.79). Then the researchers split the remaining descriptions to code independently. For the full codebook, see Supplementary Materials. Finally, we ran a named entity detector <cit.> across all descriptions to assess how often named entities (i.e. objects, people, locations, organizations that can be denoted with a proper name) were included in the descriptions.
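For illustration, the agreement and named-entity counts described here could be computed with a few lines of Python; scikit-learn and spaCy are our assumed tooling, since the excerpt cites a named entity detector without naming it.

import spacy
from sklearn.metrics import cohen_kappa_score

ENTITY_LABELS = {"PERSON", "GPE", "LOC", "ORG", "FAC", "WORK_OF_ART"}  # proper-noun entity types of interest

def inter_rater_agreement(labels_rater_a, labels_rater_b):
    # Cohen's kappa over the doubly coded subset of sentences (e.g., "accurate"/"inaccurate").
    return cohen_kappa_score(labels_rater_a, labels_rater_b)

def named_entity_counts(descriptions):
    # Count named entities per description; requires `python -m spacy download en_core_web_sm`.
    nlp = spacy.load("en_core_web_sm")
    return [sum(1 for ent in nlp(text).ents if ent.label_ in ENTITY_LABELS) for text in descriptions]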
§.§ Results Context-aware descriptions had a similar percentage of accurate, objective, and relevant sentences compared to the baseline context-free and context-HTML descriptions (Figure <ref>). Thus, adding context did not increase hallucinations, subjective statements, or irrelevant details. We also performed an error analysis of errors across all models to assess in what scenarios hallucinations occur. Across all models, we saw several types of errors: plausible but not present visual objects (e.g., stating there is a factory near a parking lot of new cars, stating there is a small dog in a garden scene), plausible but inaccurate visual adjectives (e.g. identifying a shiny pleated dress as striped due to lighting, describing a dark door as an open door), incorrect counts (e.g., a cabinet features 3 doors when it actually features 4 doors), and incorrect positioning (e.g., incorrectly stating a person is walking behind another person, stating the water level has reached a window when it is below the window). We specifically reviewed all named people extracted from the context with our context-aware descriptions to see if the entities had been misapplied by our name replacement step, but we did not find any errors in named entities. We further reviewed sentences that lacked visual evidence in the image (subjective sentences). Often the model provided statements without grounding in the image for both context-aware and context-free descriptions. Common types of ungrounded statements included: adding subjective adjectives (e.g., calling a blue and red pattern “harmonious”), providing subjective interpretation of the image as a whole (e.g., “the overall impression is one of opulence and cultural heritige”), providing a guess about the image setting based on the colors (e.g., “the golden hue suggests this image may have been taken during golden hour”) and adding explanations of image content not grounded in the image (e.g., “typical of Indian bridal wear”). For context-aware images, we observed fewer subjective statements overall (our pipeline features multiple steps to assess visual grounding), but sometimes the subjective statement was uniquely derived from the context. For example, in an image of Billie Eilish on her Twitter with a post about her recent brand transformation and rise as a global phenomenon, the context-aware description includes the statement “The golden gleam of the awards [...] symbolizes both a brand transformation and her rise as a global phenomenon.” whereas the context-free description features less specific interpretation “holding multiple gold gramophone trophies [...] conveying a sense of accomplishment and pride in their music industry achievements”. Overall, our context-aware descriptions contain more named entities (e.g., proper nouns of people and places) than other methods (Figure <ref>). Our approach excelled at including named entities for news and social media — categories where specific names are often important. Our model selectively excluded named entities for e-commerce where specific names of people or places are typically not relevant. The blogs in our dataset consisted of educational content (2 food, 2 lifestyle, and 2 travel) such that named entities were important for some blogs (travel) but not others (food). Short descriptions obtained a higher number of named entities than long descriptions demonstrating the named entities were typically preserved during long to short description summarization for all description approaches. 
§ USER EVALUATION We then conducted a study with 12 blind and low vision AI description tool users to compare context-free to context-aware descriptions (on 6 images selected by us, and 2 images selected by participants) and to provide open-ended feedback about benefits and risks of context-aware descriptions. §.§ Method Our user evaluation invited BLV participants to directly compare context-free and context-aware descriptions for 6 pre-selected images and 2 participant-selected images, and then provided an opportunity for open-ended feedback about risks and benefits of context-aware descriptions in a semi-structured interview. The study was an hour long, conducted in a 1:1 session via Zoom, and approved by our institution's IRB. We compensated participants with $25 USD for their participation. §.§.§ Participants We recruited 12 BLV participants who use screen readers to access information on the web and have experience using AI tools to describe images (Table <ref>). We recruited participants using mailing lists. Participants used a variety of screen readers (NVDA, Talkback, Jaws, VoiceOver) and AI tools (e.g., Google Chrome Descriptions, Seeing AI, Be My AI, Picture Smart AI). Participants were totally blind (7 participants) or had some light perception (5 participants). §.§.§ Materials. We drew the 6 pre-selected website and image pairs for the study from our dataset collected in the pipeline evaluation to capture a variety of description types. We selected 3 images that contained people and 3 images that did not contain people across a range of categories (two images from both news and e-commerce and one image from blog and social media categories). For the 2 participant-selected images we invited participants to select their own image from a website of their choice. Participants chose images from blogs (tech, automobile, personal), news, educational articles, and more. For the 6 pre-selected images we used the descriptions obtained in the pipeline evaluation to keep the descriptions consistent across participants. For the participant-selected images, we installed our Google Chrome Extension on our computer to generate the image descriptions. See Appendix B for the full list of images and short descriptions for Task 1. §.§.§ Procedure We first asked participants a series of demographic and background questions about their strategies and challenges with understanding image content while browsing the web. Participants then completed two tasks: a controlled pre-selected image task, and an open-ended participant-selected image task. Task 1. In the first task, participants rated, selected, and provided open-ended feedback on context-aware and context-free descriptions for 6 pre-selected images. For each image, we first invited the participant to browse the corresponding website and image for 2 minutes to understand the image context including the image alt text if present. Then, we provided one short description (either context-free or context-aware) with an option for users to extend the description to gain the long version of the description. We asked participants to rate on a 5-point scale (from 1-low to 5-high) the quality, imaginability, relevance, and plausibility of the description (evaluation measures selected from prior work <cit.>). We selected definitions of quality, imaginability, and relevance from prior work <cit.> (see Appendix E for metric questions).
We used plausibility <cit.> to assess the likelihood that a participant thinks a statement is true (similar to trust <cit.>; we use the terms interchangeably). We then provided the participant with the other description for the image (either context-aware or context-free) and repeated the rating questions. We randomized the sequence of images. We randomized and counterbalanced the order of context-aware and context-free descriptions for each participant and across participants for each image to mitigate ordering effects. We did not provide information about what type of description the participant was viewing during this stage to mitigate bias. After participants provided per-description ratings, we asked them to select which description they preferred (the first or second one they saw for the image) and then provide open-ended feedback. Task 2. In the second task, we mimicked real-life use of the extension by inviting participants to provide an image from each of two websites of their choice. We provided participants context-aware and context-free descriptions for each image. We randomized the order to mitigate ordering effects. The descriptions were labelled (context-aware or context-free) to mimic real extension use. For each image, we initially provided users both context-aware and context-free short descriptions that they could optionally extend to see the longer version. We then asked participants to express their preference between the two descriptions on a 5-point Likert scale (from 1 - Strongly Prefer Context-Free to 5 - Strongly Prefer Context-Aware) and then provide open-ended feedback about their preference. Semi-Structured Interview. We concluded with a semi-structured interview asking participants about their perspectives on the benefits and drawbacks of context-aware descriptions and about potential future use of context-aware descriptions. See supplementary material for the full list of questions. §.§.§ Analysis. We recorded and transcribed the interviews. To examine participants' feedback on the context-aware image descriptions, one researcher read interview transcripts to derive themes through affinity mapping. §.§ Results Overall, all participants reported they would want to use the Chrome Extension in the future to get context-aware descriptions about images on the web. Participants preferred context-aware descriptions to context-free descriptions 76% of the time in the first task (Figure <ref>) and 67% of the time in the second task (Figure <ref>). Participants in the first task also rated context-aware descriptions significantly higher than context-free descriptions (GPT-4V) across all metrics (quality, imaginability, relevance, and plausibility) (Figure <ref>). §.§.§ Using Context Specific Terms for Visual Concepts All 12 participants stated that they found context-specific terms in context-aware descriptions to be useful for understanding the image. Participants frequently cited the context-specific terms in explaining why they chose a context-aware description over a context-free description (Figure <ref> and <ref>). While participants' existing visual description tools and context-free descriptions lacked context-specific terms, the context-aware descriptions often provided contextual details (e.g., “Himalayas” instead of “Mountain Range” in Image 6).
Participants also highlighted that while their existing tools often left out names of people (e.g., GPT-4V provides “I'm sorry, but I can't identify...the people in the image.” for images of people) our context-aware descriptions included the names of people (e.g., “Billie Eilish” instead of “a woman” in Image 1). P9 explained how context-aware descriptions improved imaginability: “I was immersed because its giving me the names, my brain took a second parsing when it said `a man', `woman' in the other description, but [the context-aware descriptions] gives me names and I could easily relate. It tells me about the White house, more specifics and details". P8 noted that the context-specific terms were particularly helpful when browsing an image he selected in the second task from his area of expertise: “I work for automotive industries, its got the key details very accurately, second one [context-aware description] is way way better.” (Figure <ref>). While context-specific terms were beneficial when participants were familiar with the terms, participants noted difficulty interpreting terms they were not familiar with. P4 and P7 both mentioned that they did not know what “Grammy” trophies looked like so they noted they would prefer a combination of the context-aware description “multiple golden Grammy trophies against a backdrop of blurred Grammy trophies” with the context-free description “multiple gold gramophone trophies”. The long context-aware description included a similar explanation “multiple golden Grammy trophies, which are shaped like gramophones” and future work may also explore enabling participants to define a term on demand <cit.>. As we pre-selected 6 images in the first task, participants occasionally fell outside of the target audience of the page, such that the prevalence of context-specific terms could impede understanding. For example, P2 highlighted for a dress description that they appreciated that the context-aware description highlighted all the features of the dress (e.g., thin straps, hem) so that they could get a mental image, but these descriptions did not work well for P9 who mentioned: “I personally don't know what these mean, but I'm sure someone who shopped for dresses would like this description better.” §.§.§ Including Relevant vs. Irrelevant Details Participants rated context-aware descriptions as more relevant to the source than context-free descriptions (Table <ref>). All participants highlighted that the context-aware descriptions were particularly strong at highlighting relevant details in the e-commerce images (Image 3 and Image 5). For example, in sofa listing (Image 5), P2 mentioned that while the context-free description explained the room, the context-aware description “focused on the sofa, that was what the image was about, I don't care bout the light coming in, table, or the rug! It went into details about the sofa, the wood color, back being tufted.” While 10 of 12 participants preferred the context-aware description for the sofa due to the concentration on relevant details (ignoring irrelevant details), the remaining two participants appreciated that the context-free descriptions provided more information about the relevant context as it could also inform their buying decision, as P11 described: “it gave me some detail about the product but it gave more about context, sofa was able to accept the table, how tall it was and how it could match against other pieces of furniture”. 
Beyond e-commerce, participants appreciated the focus on relevant over irrelevant details for topic-focused articles. For example, for an image on a food blog that P7 selected in the second task, the participant appreciated the focus on the food rather than the background: “It's much more descriptive, it describes the food first, which is ideal on the food blog, its accompaniments, and then the setting. Food is the main focus. It is also easy to imagine this way, seems more organized.” Even when the context-aware and context-free descriptions included a similar distribution in the topics they described, the context-aware description often provided more specific details such that participants perceived the context-aware description as more relevant. For example, P10 selected a painting for a description and preferred the context-aware description as it provided a higher quantity of relevant details (e.g., the context-aware description mentioned “Wrinkles mark his forehead” whereas the context-free description left this out, and highlighted a “warm and indistinct” background rather than just “warm”) and included the subject's name (e.g., “an oil painting of Rabindranath Tagore”). §.§.§ Subjective vs. Objective Details Participants appreciated the level of objectivity in the details in the context-aware descriptions. Such objective details let participants form a mental image, as P9 reported: “It leaves no room for confusion, it tells me everything.” However, users preferred context-free descriptions when context-aware descriptions omitted useful subjective details present in the context-free descriptions. For instance, for a news article on Prince Harry's “protective” gesture towards Meghan Markle (Task 1, Image 2), P3 and P4 expressed that they were able to imagine the image better with the context-free description because it provided hints about their relationship: “a man and a woman walk side by side, both with serious expressions”. The context-aware description stated their names but lacked subjective details about their relationship or interaction. The action in the image was ambiguous (e.g., walking or standing). P5 described that the context-aware description “misses some of the details that they are walking side by side, which the [context-free] description tells us. So, both of them are okay. But if both of them were combined, then they could make a comprehensive description.” §.§.§ Trust Participants found the context-aware descriptions to be more plausible compared to the context-free descriptions (Table <ref>). Context-aware descriptions described the images with terms from the image's context, which assured the participants that the descriptions were likely to be accurate. P7 said, “I trust it higher because some of the details it said on the webpage, it also said in the description.” Using the same terms as the website also made them less verbose, unlike context-free descriptions, which do not account for context while describing an image. P7, P9, and P10 also reported that for e-commerce websites, their trust in context-free descriptions was lower because context-free descriptions provided a high-level overview of the product with a focus on the background or details of the model showing the product, rather than product details. In contrast, one participant expressed that they find it hard to trust detailed product descriptions: “Love the outfit description!
but the more details it gets into, the less trust I have, because I automatically think that it is hallucinating and that's just my issues.” (P7). P7 still rated the description generated by our system higher than the context-free description. P5 also mentioned that the descriptions from both the context-free and context-aware models for products were surprisingly detailed and thus they would suspect potential hallucinations based on prior experience with AI models. §.§.§ Future Use and Improvements All participants said they wanted descriptions to consider context in the future. All participants expressed their interest in using our extension in the future and stated that it would improve their general web browsing experience. Seven participants specifically expressed their interest in using context-aware descriptions for online shopping. Five participants reported that context-aware descriptions would be useful for seeing images of family and friends on social media to identify people, understand their expressions, and learn about their activities. Participants noted a range of other specific uses such as news articles with people, news articles about events, and automobile blogs. P12 mentioned that beyond web browsing, “It is really hard to find images for presentations that fit the context, I would use this tool for that too”. 4 participants wanted context-aware descriptions to also support chat, 4 participants mentioned that they wanted better support for text or graphical images (e.g., diagrams), and 2 participants mentioned that they wanted a central database to query for descriptions people already obtained (our extension supports caching, but this functionality could be extended). For text images and diagrams, our context-aware descriptions provided more details than context-free descriptions that only provided a high-level overview. However, the details provided were not well-organized, such that they were difficult to understand. §.§.§ Current Practice & Comparison to Existing Approaches Participants reported that they came across images on social media, blogs, news, and CAPTCHAs, and used them for programming, shopping, and analytics. Participants estimated that the images they come across online have alt text 30% of the time on average (ranging from 5-40% of the time), and only a maximum of 20% of those images had alt text that adequately described the image. While all participants frequently used AI tools for obtaining image descriptions, some participants mentioned that obtaining descriptions for online images was time-consuming with their current approach. Current applications required participants to download the images first, a step that is tedious and occasionally impossible with a screen reader. Participants found automated descriptions helpful, but for complex images participants reported that current AI-generated descriptions contained hallucinations and often required the assistance of sighted people to verify description accuracy and obtain minor details. We asked participants how the descriptions across the study compared to their day-to-day descriptions: 8 participants reported the descriptions in the study were clearly better, while P2, P3, P5 and P12 reported they were about the same overall, but that the context-aware descriptions were augmented with more useful details (e.g., names, places, object details). 
P2, P5, P8, P12 suggested that more descriptions were always better such that they wanted access to context-aware descriptions as a new technique. § DISCUSSION Our work contributed an approach for context-aware image descriptions on the web. Our work was motivated by an existing gap in tailoring image descriptions to their context as highlighted by the accessibility community and existing guidelines <cit.>. Our technical pipeline represents the result of an iterative process to achieve relevant and context-aware descriptions with respect to this goal, contributing the first system to provide context-aware descriptions with modern vision to language models. The modern vision language models provide the capability for rich context input and multi sentence descriptions that can adapt to the webpage context in content and terminology use. Our technical evaluation demonstrates the technical feasibility of our approach for achieving context-aware descriptions. Our study demonstrates that BLV participants who already use AI image description tools are excited about the potential of integrating context into their image descriptions. Our study also reveals benefits (e.g., a focus on relevant details, use of context-specific terms) and potential risks (e.g., increase plausibility may increase trust) of automated context-aware descriptions. We discuss several key trade-offs, limitations, and opportunities for future work. §.§ Context-Aware Descriptions for Expertise Current descriptions are one-size-fits-all by default, but description guidelines suggest tailoring not only content <cit.> but also tone and terminology to the audience to match their knowledge and interests (e.g., college vs. grade school science class) <cit.>. As our context-aware descriptions incorporate details from the surrounding context to shape the description, the descriptions follow the tone and terminology of the article. We originally intended to use the visual concepts from the context to improve specificity and accuracy (e.g., augment “mountain range” with “Himilayas”), similar to Biten et al. improving vocabulary in short image captions <cit.>. However, the impact of using terminology that matched the context for long descriptions meant that for general audience news articles the approach often made the image easier to imagine. For websites with more specific audiences (e.g., a dress shopping store, a car blog) the descriptions became rich with context-specific terms such that the descriptions may be more enjoyable for experts to consume (e.g., the auto hobbyist reading an auto blog) and more challenging for novices to consume (e.g., a non-dress buyer reading about specific dress features). Over time, context-specific descriptions may have the potential to support furthering expertise in a domain of interest. Compared to our context-aware short descriptions that aim for conciseness, our context-aware long descriptions often included explanations of visual concepts. However, participants rarely accessed the long descriptions. To support ease of access for new visual terminology, we will explore combining context-free with context-aware descriptions such that a user may click on a term in the context-aware description to gain a context-free description of how the visual concept appears in the image. We will also take participant suggestions to integrate context-aware descriptions into chat such that participants can ask follow-up questions on demand. 
Future work may also explore personalization to adapt the terminology use to prior description history or specific information goals of a browsing session <cit.>. §.§ Risks and Trade-Offs of Context-Awareness Our prototype provides an opportunity to examine potential drawbacks of using context-aware descriptions in practice. §.§.§ Trust and errors. All vision to language models have some hallucinations, and our technical evaluation uncovered fewer hallucinations and subjective statements in context-aware descriptions compared to context-free descriptions. Participants also rated the context-aware descriptions as significantly more plausible than context-free descriptions. A potential risk is that even though context-aware descriptions generate fewer errors, the errors may be more believable when they may come from the webpage context, such that the errors produced by context-aware descriptions may be more risky. We attempted to reduce context errors in the pipeline as much as possible by requesting that any extracted webpage context corresponded to image visual content. Thus, most context-related errors we observed were in subjective interpretation rather than complete hallucinations (e.g., stating an object that was not present). Participants expressed mixed opinions on how details impacted their perceptions of plausibility. Most participants stated that more specific details made them trust the descriptions more, while other participants stated they trusted the detailed descriptions less (e.g., because the details went beyond their expected capabilities of AI models). Future deployments of context-aware descriptions may benefit from explanations of potential errors, and removing errors or making errors easier to detect by running the model multiple times so that the system or users could compare results <cit.>. §.§.§ Privacy. Potential privacy concerns may also arise from adding context-aware descriptions to images. Participants expressed existing discomfort with needing to provide their images to an external server (e.g., OpenAI) to receive high-quality descriptions from new generation vision to language models. In our current implementation, the context offers an additional piece of information to an external server (e.g., the webpage where the image was viewed). While participants may not see a risk for mundane publicly accessible webpages, extending our tool to support image description in spaces with personal information in the context (e.g., in a family photo album) may raise additional concerns. Participants frequently expressed a preference for on-device models for both context-aware and context-free descriptions. As our system typically takes 30 seconds to 1 minute to produce a description and requires an OpenAI API key, one participant expressed that they wanted descriptions saved for future participants. We currently have this feature implemented, but in practice such a feature should be opt-in by people using context-aware descriptions. §.§.§ Person identification. Our system provides identification of people in an image when models (e.g., OpenAI, Gemini) explicitly prohibit identification of all people (e.g., for privacy), which participants noted disrupts their ability to imagine the image. Participants appreciated the ability of context-aware descriptions to identify people in articles across pre-selected images (common celebrities) and personally selected images (cricket players, Grace Hopper, a blogger). 
Context-aware descriptions may provide a unique opportunity to let BLV audience members access the identity of people in images without privacy risk, as the only names that can be included in context-aware descriptions are already included in the context itself. We did not observe named entity errors in our dataset, but such errors may be present in images with more identified people (e.g., an image of a large group with many names in the context) due to vision language model capabilities and restrictions. §.§.§ Limits to generalizability. Our current pipeline works best for images that have relevant text context, but the performance degrades in cases where the images lack text context (a photographer's page with only images) and in cases where the text content is unrelated or only loosely related to the images (a services page with stock photos). Our system also performs poorly in cases where vision language models tend to perform poorly (e.g., structured image understanding <cit.>). While our work created a proof-of-concept system and gained audience feedback on context-aware descriptions, future work can further explore limits to generalizability with larger-scale evaluations and pipeline ablations. A fixed approach for context-aware descriptions is also not likely to fit user preferences across all users and contexts. For example, our system omitted subjective details (D1), but the diverse and context-specific user preferences about subjective details suggest the need for future work. We will explore how to let users customize the pipeline and prompts to set global and context-specific description preferences. §.§ Limitations and Future Work Our user study has several limitations. First, we recruited participants who were frequent users of AI image description tools to gain expert feedback. However, such users may have more knowledge about AI-generated hallucinations than others. Second, the study was short-term, so we did not examine the impact of reading context-aware descriptions over time. Long-term use may also allow users to learn context-relevant visual concepts or tire of visual details. Finally, participants did not use the Google Chrome extension themselves, to avoid installing the extension for a single short study. While the research team ensured the interface was usable with a screen reader, future work may further gain feedback on use of the extension in practice (similar to Gleason et al. <cit.>). Our technical approach and technical evaluation also offer some clear opportunities for extensions. First, we did not adapt the level of detail in our descriptions to fit the context, and we instead provided both long and short descriptions for every request. In the future, we will explore adapting the level of detail in the description according to other aspects of the context (e.g., size, visibility, complexity). Second, our technical evaluation used a strict binary measure to evaluate the accuracy of all statements in the pipeline evaluation. It may be valuable but non-trivial to evaluate major vs. minor errors, as the severity of an error may depend on its impact (e.g., the importance of the detail, or the likelihood of misleading the user). In addition, we require the user to query for descriptions from an image by clicking on it to save on API costs, but in the future we will explore providing descriptions for all images at once. 
In this scenario, future work may explore how to provide other images as context for the current image (e.g., if the image is in a set of three of a t-shirt, how does that change the descriptions?). Finally, our extension will offer more customization options in the future (e.g. setting short or long descriptions as default). § CONCLUSION We present a system to generate context-aware descriptions for images encountered on web using relevant context from the webpage HTML. We evaluated the effectiveness of our system through a technical evaluation of description accuracy, objectivity, and relevance, and a user study with 12 BLV participants. In the user study, participants reported that the context-aware descriptions were more detailed, relevant, and plausible than context-free descriptions. All participants stated they wanted to use our system in the future. We aim to motivate future work to support people with disabilities performing everyday tasks on the web. § ACKNOWLEDGEMENTS We thank Dr. Earl Huff Jr. from the School of Information at The University of Texas at Austin for his expert guidance during the initial stages of this research. His valuable insights informed our system design. We also thank our study participants for their time and valuable feedback on this work. ACM-Reference-Format § SYSTEM PIPELINE PROMPTS We include prompts, input descriptions, and truncated sample outputs below, and include full sample outputs in supplemental material. §.§ Webpage Purpose and Category We use the following prompt to determine the website purpose and category (categories are obtained from prior work <cit.>): “Identify the domain of the web link, determine the category of the webpage in [ecommerce, news, educational, social media, entertainment, lifestyle, dating, job portals, or services] and the purpose of the website in short. Return the result only in a JSON format of '"website": "name of the website", "category": "name of category", "purpose": "purpose of the website" ' with no additional text." [Input: Webpage link] Example Output: §.§ Initial Context-Aware Image Description We use the following prompt and the website purpose to extract all the visual concepts of the elements in the image. “Describe the visual details of the element(s) in focus in the image for blind and low-vision users to reinforce the purpose of the webpage.” [Input: Selected Image] Example Output: The image shows four people standing in front of cherry blossoms in full bloom ... On the far left stands a young woman wearing a sleeveless dress with a blue base and adorned with a varying pattern of tiny dots ... modest neckline and a flared skirt, and she has her arm around another person next to her ... To her right, a man stands with his arm comfortably around the young woman on his right ... wearing a classic dark suit with a light colored shirt and a dark tie ... His attire is formal, and he exhibits a polished look with his hair neatly trimmed ... §.§ Visually Concrete Text from Alt Text, Page Title, and Visual Description of the Image To extract the visually concrete texts and the elements in the image they refer to, we use the following prompt: “Identify all the visually concrete words and their attributes from the text. Verify if the visually concrete words can be associated with elements in the image. Return the result only in an array of JSON, in the format of [vcw: "visually concrete word", element: "element associated with the visually concrete word"] with no additional text such as starting with ”'json”'. 
If no visually concrete words are present, return an empty JSON." [Input: Image with Alt text, Webpage Title, Initial Context-Aware Description] Example Output: Visual concrete text from visual description: Example Output: Visual concrete text from alt text: Example Output: Visual concrete text from page title: §.§ Visually Concrete Text from Context To extract the visually concrete texts and the elements in the image they refer to, we use the following prompt: “Identify all the visually concrete words and their associated elements from the "text" field in the given JSON. If there are people/named entities present in the image, obtain their names from the highest "final_score" in the JSON. Verify if the visually concrete words can be associated with elements in the image. The score of the visually concrete word is the "final_score" field from which it is derived. Return the result only in JSON object in format of '[vcw: "visually concrete word", element: "element associated with the visually concrete word", score: "final_score"]' with no additional text. If no visually concrete words are present, return an empty JSON.” [Input: Image, Text from Webpage] Example Output: §.§ Combining and Merging all Visually Concrete Text We combine and merge all the visually concrete text from alt text, webpage title, initial context-aware description, and the context. We retain the scores associated with elements from the visually concerete text from Context. “Combine the visually concrete words that are associated with same elements, retain the score for the element if any entry for that element has a score. Keep all the named entities used to describe the elements. Return the result only in an array of JSON, with no additional text such as starting with ”'json”'. If no similar elements are present, return the original JSON." [Input: Image, JSON of Visually Concrete Texts from Alt Text, Webpage Title, Initial Context-Aware-Description, and Context] Example Output: §.§ Filtering Visually Concrete Text In the combined JSON of visually concrete text, we filter the elements that are not visible in the image. “Generate a new JSON object from the given JSON by discarding entries whose "element" field is "none" or "not present". Return only the JSON with no additional text such as starting with ”'json”'" [Input: Image, Merged JSON of Visually Concrete Text] §.§ Replacing Name(s) of Person(s) with Placeholders We replace the names of the people (if present and if known) in the image using letters as placeholders. “If the names of person/people are known, only then assign M, N, O, P... (depending on the number of people in the image) to every person and return a JSON in the following structure: ["placeholder": letter assigned to the name, "name": name of the person replaced] with no additional texts. If there are no people, return an empty JSON." [Input: Filtered JSON of Visually Concrete Text] Example Output: §.§ Long Context-Aware Image Description We generate a long context-aware description that is specific, detailed, relevant, and objective. “Describe the elements in focus in the image and their visual details for blind and low-vision users using all their visually concrete words (vcw) from the given JSON. If there is/are person/people in the image, refer to them in the description with the placeholder letters as given. 
If there are no people in the image or their names are not present in the JSON, return the image description as is.`+ JSON.stringify(peopleVCW) +` Use the "scores" field to determine the priority of elements in the image to describe, higher score means higher priority to describe the element with its details. The goal is to make the image description specific and relevant. Return only the image description." [Input: Image, JSON of Visually Concrete Text] Example Output: In the image, a group of four individuals ... with cherry blossoms on the trees in the background ... M is wearing a sleeveless blue dress with a polka-dot pattern ... N is clad in a teal dress... P is on the right, wearing a color-blocked dress with a coral top, a yellow skirt ... the Rose Garden, with the White House partially visible in the background. §.§ Adding Names to People in the Image (if available from Context) We replace the placeholders back with the names of the people in the image that we had retained initially. “If there is/are person/people in the image, replace the "placeholder" letters in the description with the corresponding "name" from the JSON. Ensure that the description is semantically and grammatically correct and return only the description. If there are no people in the image or their names are not present in the JSON, return the image description as is." [Input: JSON of Names and their Placeholders] Example Output: In the image, a group of four individuals ... Malia is wearing a sleeveless blue dress with a polka-dot pattern ... Michelle is clad in a teal dress ... Sasha is on the right, wearing a color-blocked dress with a coral top, a yellow skirt ... the Rose Garden, with the White House partially visible in the background. §.§ Short Context-Aware Image Description “Refine the image description to make it more concise. If there is/are person/people in the image, replace the "placeholder" letters in the description with the corresponding "name" from the JSON. Ensure that the description is semantically and grammatically correct and return only the description. If there are no people in the image or their names are not present in the JSON, return the image description as is." [Input: Long Context Aware Description, JSON of Names and their Placeholders] Example Output: Four people pose for a photo amidst cherry blossoms. On the left, Malia is in a blue sleeveless dress with polka dots. Next to her, Michelle sports a teal dress. Barack stands in the center in a dark suit, white shirt, and gray tie. On the right, Sasha wears a color-blocked dress with a coral top and yellow skirt, accented by a white belt. They all smile, and the White House is visible behind them in what seems to be the Rose Garden. §.§ Choosing the Best Long Context-Aware Description We choose the most objective, detailed, and context-aware description based on the number of named entities and visual details `Choose the best description in [long context-aware descriptions] array based on highest number of visual details, named entities such as names of people, location, objects, and objectivity. Return only the index number of the description once selected." [Input: Array of Long Context Aware Descriptions] § BASELINE PROMPTS §.§ Long Context Free Baseline Description We use the following prompt to obtain the long context-free baseline image description from GPT-4V, “Describe the image for blind and low-vision users." 
[Input: Selected Image] §.§ Long Context HTML Baseline Description The following prompt is used to get the long context-HTML baseline description. “Describe the image for blind and low-vision users using the context." [Input: Image, HTML of Webpage Text] §.§ Short Baselines We used the given prompt to obtain concise versions of long context-free and context-HTML baseline descriptions. “Refine the image description to make it more concise." [Input: Long Baseline] § IMAGES FOR USER STUDY IN TASK 1 § DESCRIPTIONS OF IMAGES IN TASK 1 § USER STUDY EVALUATION METRICS QUESTIONS * Overall Quality How good is the description for overall nonvisual accessibility? * Imaginability How well can you imagine this image in your mind? * Relevance How well does the description capture the relevant aspects of the image? * Plausibility How much do you trust that the image description is correct?
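To make the chaining of the appendix prompts concrete, the sketch below strings together the first two stages of the pipeline (webpage purpose extraction and the initial context-aware description) around a generic model call. This is a minimal illustration rather than the extension's actual implementation (the deployed system is a browser extension built against the OpenAI API); the helper name call_gpt4v, its signature, and the abbreviated prompt texts are assumptions made here for readability.

```python
import json

def call_gpt4v(prompt: str, image_url: str | None = None) -> str:
    """Placeholder for a vision-language model request. The real extension sends
    the prompt (and, when needed, the image) to GPT-4V and returns the raw text."""
    raise NotImplementedError("wire this up to your vision-language model of choice")

def describe_image(page_url: str, image_url: str) -> dict:
    # Stage 1: webpage purpose and category, requested as JSON-only output.
    purpose_raw = call_gpt4v(
        "Identify the domain of the web link, determine the category of the webpage "
        "and the purpose of the website in short. Return the result only as JSON "
        f"with keys website, category, purpose. Link: {page_url}"
    )
    purpose = json.loads(purpose_raw)

    # Stage 2: initial context-aware description conditioned on the extracted purpose.
    initial_description = call_gpt4v(
        "Describe the visual details of the element(s) in focus in the image for blind "
        "and low-vision users to reinforce the purpose of the webpage: "
        + purpose["purpose"],
        image_url=image_url,
    )
    return {"purpose": purpose, "initial_description": initial_description}
```

The later stages (visually concrete word extraction, merging, filtering, and the long/short rewrites) follow the same pattern: each prompt consumes the JSON produced by the previous stage, which is why the prompts above insist on JSON-only outputs.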
http://arxiv.org/abs/2409.02659v1
20240904123355
Dynamics of drug trafficking: Results from a simple compartmental model
[ "Nuno Crokidakis" ]
physics.soc-ph
[ "physics.soc-ph", "cs.SI" ]
Dynamics of drug trafficking: Results from a simple compartmental model Nuno Crokidakis^* Instituto de Física, Universidade Federal Fluminense Niterói - Rio de Janeiro, Brazil ^* [email protected] September 4, 2024 ======================================================================= § ABSTRACT In this work we propose a simple model for the emergence of drug dealers. For this purpose, we built a compartmental model considering four subpopulations, namely susceptibles, passive supporters, drug dealers and arrested drug dealers. The aim is to study the influence of the passive supporters on the long-time prevalence of drug dealers. Passive supporters are people who are passively consenting to the drug trafficking cause. First we consider the model on a fully-connected network, in such a way that we can write a rate equation for each subpopulation. Our analytical and numerical results show that the emergence of drug dealers is a consequence of the rapid increase in the number of passive supporters. Such an increase is associated with a nonequilibrium active-absorbing phase transition. After that, we consider the model on a two-dimensional square lattice, in order to compare the results in the presence of a simple social network with the previous results. The Monte Carlo simulation results suggest a similar behavior in comparison with the fully-connected network case, but the location of the critical point of the transition is distinct, due to the neighbors' correlations introduced by the presence of the lattice. PACS Nos.: 05.10.-a, 05.70.Jk, 87.23.Ge, 89.75.Fb § INTRODUCTION The study of criminality has been a subject of interest for sciences like mathematics in recent years (for recent reviews, see <cit.>). Usual methods range from partial differential equations and self-exciting point processes to agent-based models, evolutionary game theory and network science <cit.>. Recent reviews on mathematical models of social contagion and criminality can be found in Refs. <cit.>. Considering the particular problem of drug trafficking, a dynamic model was proposed to analyze the proliferation of drugs, based on the presence of two classes of individuals, namely dealers and producers. The model shows that an equilibrium with no dealers and producers is fairly difficult to reach. The model predicts a nonzero steady-state stable equilibrium, in which the number of dealers and producers can be kept at low levels if the repression against these activities focuses on their mutually reinforcing interaction. In this case, a reliable policy against drug proliferation should simultaneously contemplate the sides of supply and demand. The possibility of an equilibrium steady state with the total population being users or producers is excluded <cit.>. In countries like Brazil, drug trafficking is a huge problem. A recent work suggested an increase in incarceration rates in the Brazilian southern state of Rio Grande do Sul during the last decade, especially due to drug trafficking offenses. The authors attributed this increase to two factors: the increase in the number of trafficking and criminal gangs in the state observed after 2005, and the changes in the law that proposed distinct sanctions for users and traffickers <cit.>. Concerning mathematical analysis of criminality, some authors considered compartmental models in order to study the spreading of crime. 
The authors in <cit.> considered a nonlinear mathematical model to study the effect of police force in controlling crime in a society with variable population size. The authors show that the model has only one equilibrium, namely crime persistent equilibrium. This equilibrium always exists and is locally as well as globally stable under certain conditions. The authors in <cit.> present a mathematical model of a criminal-prone self-protected society that is divided into socio-economical classes. They studied the effect of a non-null crime rate on a free-of-criminals society which is taken as a reference system. A relevant conclusion that can be derived from the study is that the kind of systems under consideration are criminal-prone, in the sense that criminal-free steady states are unstable under small perturbations in the socio-economical context <cit.>. The authors in <cit.> discussed that not all kind of crimes can be eradicated. Thus, they proposed a compartmental model in order to study the eradication of unemployment-related crimes in the developing countries. The results suggest that vocational training and employment strategies are more effective in combating crime when applied simultaneously <cit.>. Another compartmental model incorporated education programs as tools to assess the population-level impact on the spread of crime. With no compliance, the authors observed a high level of active criminal population, and if the compliance rates are significantly improved, the active population level decreases <cit.>. The impact of legal and illegal guns on the growth of violent crimes was also analyzed through mathematical models in recent papers <cit.>. Another work considered a two-dimensional lattice model for residential burglary, where each site is characterized by a dynamic attractiveness variable, and where each criminal is represented as a random walker <cit.>. The authors considered that the dynamics of criminals and of the attractiveness field are coupled to each other via specific biasing and feedback mechanisms. They concluded that, depending on parameter choices, several regimes of aggregation, including hotspots of high criminal activity, can be described by the simple model. One can also mention a work where the authors considered a model based on the predator-prey problem, in order to study the interaction between criminal population and non-criminal population. Considering a law enforcement term in the model's equations, the authors discussed that the criminal minded population exist as long as coefficient of enforcement does not cross a threshold value and after this value the criminal minded population extinct <cit.>. An important concept concerning criminal activities is the idea of passive supporters. The passive supporters were introduced in mathematical models in the context of spreading of terrorism <cit.>. The passive supporters do not oppose a terrorist act. They go unnoticeable and most of them reject the violent aspect of the terrorist action. They only share in part their cause <cit.>. The mentioned works <cit.> showed that the presence of such individuals, passive supporters, are a key feature to understand the spreading of terrorism. Taking this idea in mind, we consider the presence of passive supporters in the dynamics of drug trafficking. We will discuss in this manuscript how even a small fraction of such passive supporters can be effective in the emergence of drug dealers in a population where they do not exist at the beginning. 
For this purpose, we built a mathematical compartmental model consisting of four subpopulations. Our analytical and numerical results show that the emergence of drug dealers is a consequence of the rapid increase in the number of passive supporters. Such an increase can be associated with a nonequilibrium phase transition in the language of the physics of critical phenomena <cit.>. § MODEL Let us consider a population of N agents or individuals. Each individual can be in one of four possible states or compartments, namely: (1) susceptible individual (S), that is, an individual who was never a drug dealer, or who was one in the past and quit; (2) passive supporter of drug trafficking (P), an agent who is not a drug dealer but who is passively consenting to the drug trafficking cause. This can occur, for example, due to the perception that if the government allows the drug trafficking, fewer people will die in the “war” (police x drug dealers) [The mentioned war has been the cause of a lot of deaths in cities where drug trafficking is a great social problem, like Rio de Janeiro, Brazil <cit.>.]; (3) drug dealer (D), that is, an agent who carries out the activity of selling drugs; and (4) arrested drug dealer (A), a drug dealer who was arrested by the police. The following microscopic rules control the dynamics of the population: * S β→ P: a susceptible agent (S) becomes a passive supporter (P) with probability β if he/she is in contact with passive supporters (P); * P δ→ D: a passive supporter (P) becomes a drug dealer (D) with probability δ if he/she is in contact with drug dealers (D); * P σ→ D: a passive supporter (P) can also become a drug dealer spontaneously, by his/her own free will, with probability σ; * D γ→ A: a drug dealer (D) can be arrested by the police and he/she becomes an arrested drug dealer (A) with probability γ; * A α→ D: an arrested drug dealer (A), after leaving prison, can come back to the criminal activity (drug trafficking), i.e., he/she becomes a drug dealer (D) again with probability α; * A 1-α→ S: an arrested drug dealer (A), after leaving prison, can also abandon the criminal activity and he/she becomes a susceptible agent (S) with the complementary probability 1-α; Let us briefly discuss the transition probabilities. The probability β is a contagion probability; it quantifies the influence of passive supporters over susceptible individuals. Passive supporters do not act as drug dealers, but they are not against the criminal activity. Thus, passive supporters can engage susceptible agents to also become passive supporters of drug trafficking. The probability δ also models a social contagion, in this case the influence of drug dealers over passive supporters. Since passive supporters are favorable to drug trafficking, they can also become drug dealers under the influence of drug dealers with probability δ, or can turn into drug dealers spontaneously with probability σ. It is not unusual in Brazil that some middle-class individuals become drug dealers due to social contacts with drug dealers. Indeed, two such individuals, Pedro Dom and Playboy, have histories that led to the creation of TV series and movies <cit.>. As we will show in the sequel, the probability σ is a key parameter of the model for the long-run survival of drug trafficking. Since drug dealers are practicing a criminal activity, they can be arrested by the police with probability γ. 
An arrested drug dealer, after leaving the prison, can choose one of two possibilities: return to the criminal activity (with probability α) or abandon such criminal activities and come back to the susceptible population (probability 1-α). In the next section we will discuss our analytical and numerical results. § RESULTS §.§ Fully-connected population If we consider that the N individuals in the population are fully mixed, we can write the mean-field rate equations for the time evolution of the subpopulations of susceptible agents S(t), the passive supporters P(t), the drug dealers D(t) and the arrested drug dealers A(t). Defining the four subpopulation densities, namely s(t)=S(t)/N, p(t)=P(t)/N, d(t)=D(t)/N and a(t)=A(t)/N, the rate equations can be written as follows: d/dts(t) = -β s(t) p(t) + (1-α) a(t) , d/dtp(t) = β s(t) p(t) - σ p(t) - δ p(t) d(t) , d/dtd(t) = σ p(t) + δ p(t) d(t) - γ d(t) + α a(t) , d/dta(t) = γ d(t) - a(t)  . For simplicity, we considered a fixed population, thus we have the normalization condition, s(t)+p(t)+d(t)+a(t) = 1  , valid at each time step t. Let us start considering the behavior of the model for short times. As it is usual in contagion epidemic models, we consider as initial condition the introduction of a single “infected” individual. In our case this means one passive supporter, i.e., our initial conditions are given by the densities p(0)=1/N, s(0)=1-1/N and d(0)=a(0)=0. In such a case, one can linearize Eq. (<ref>) to obtain <cit.> d/dtp(t) = (β-σ) p(t)  , that can be directly integrated to obtain p(t)=p(0) e^σ (R_o-1) t, where p(0)=p(t=0) and one can obtain the basic reproductive number, R_o = β/σ . As it is standard in epidemic models <cit.>, we will see an outbreak and the persistence of the “disease” (drug trafficking [Since the passive supporters are responsible for the emergence of drug dealers, the occurrence of an outbreak in p(t) will lead to the persistence of drug dealers in the long-time limit.]) in the long-time run if R_o > 1, i.e., for β > σ. In other words, for the initial time evolution of the population, the contagion probability β and the spontaneous transition probability σ governs the dynamics. As pointed in section II, the parameter σ is a key parameter in the dynamics of the model. Indeed, looking for the above-mentioned initial condition, namely p(0)=1/N, s(0)=1-1/N and d(0)=a(0)=0, we see that we start the model with no drug dealers D and no arrested drug dealers A. Thus, the transition P → D will not occur in the initial times due to social contagion among drug dealers and passive supporters. If σ=0 the spontaneous transition P→ D will not yet occur in the initial times. If the dynamics evolves in time, we wil observe in the long-time run the population in the state p=1 and s=d=a=0. In the language of Nonequilibrium Statistical Physics, it is an absorbing state since there are only passive supporters (P) in the population, and the dynamics becomes frozen since no transitions will occur anymore <cit.>. Such equilibrium solution (s,p,d,a)=(0,1,0,0) for σ=0 was confirmed through the numerical integration of Eqs. (<ref>) - (<ref>) (not shown). In addition, if we look for Eq. (<ref>) with σ=0, we can see that the fraction p(t) will grow exponentially in the form p(t)=p(0) e^β t. Thus, in the following we will consider the model always with σ≠ 0. Now we can analyze the time evolution of the subpopulation densities s(t), p(t), d(t) and a(t). We fixed the parameters δ=0.05, α=0.30 and σ=0.07 and varied the parameters β and γ. 
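As a concrete reading aid for this setup, the sketch below integrates the four rate equations with an off-the-shelf ODE solver for the fixed parameters just quoted (δ=0.05, α=0.30, σ=0.07) and one choice of β and γ, starting from the single-passive-supporter initial condition p(0)=1/N. It is not the authors' original code; the time horizon, solver tolerances, and the value N=10^4 used for the initial condition are our own illustrative choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, beta, delta, sigma, gamma, alpha):
    # Right-hand side of the mean-field rate equations for (s, p, d, a).
    s, p, d, a = y
    ds = -beta * s * p + (1.0 - alpha) * a
    dp = beta * s * p - sigma * p - delta * p * d
    dd = sigma * p + delta * p * d - gamma * d + alpha * a
    da = gamma * d - a
    return [ds, dp, dd, da]

delta, alpha, sigma = 0.05, 0.30, 0.07   # fixed parameters from the text
beta, gamma = 0.10, 0.20                 # one of the (beta, gamma) pairs that is varied
N = 10_000                               # population size, used only for the initial condition
y0 = [1.0 - 1.0 / N, 1.0 / N, 0.0, 0.0]  # s(0), p(0), d(0), a(0)

sol = solve_ivp(rhs, (0.0, 2000.0), y0, args=(beta, delta, sigma, gamma, alpha),
                t_eval=np.linspace(0.0, 2000.0, 2001), rtol=1e-8, atol=1e-10)
s_inf, p_inf, d_inf, a_inf = sol.y[:, -1]
print(f"long-time densities: s={s_inf:.3f}, p={p_inf:.3f}, d={d_inf:.3f}, a={a_inf:.3f}")
```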
Results are exhibited in Fig. <ref>. For fixed γ=0.20, one can see in Fig. <ref> (a) that a small value of the contagion probability β like β=0.05 leads the population to evolve to a state where the populations p, d and a disappear from the system after a long time, and there will be only susceptible individuals in the population. This represents an absorbing state, since the dynamics will become frozen with the extinction of the three subpopulations p, d and a. This result will be discussed in detail analytically in the following. Keeping γ=0.20 and increasing β to β=0.10, the mentioned absorbing state does not occur anymore and we can see the coexistence of the four subpopulations (panel (b)). Looking now at fixed γ=0.10 and increasing the contagion probability β, we see an increase of the populations p, d and a and a decrease of s (panels (c) and (d)). Since the contagion probability leading to the transition S → P increases, we observe an increase of the passive supporters P that leads to an increase of the drug dealers D. With more drug dealers, we observe an increase of the number of arrested drug dealers A. Taking the long-time limit t→∞ to obtain the stationary densities of the model, s=s(t→∞), p=p(t→∞), d=d(t→∞) and a=a(t→∞), and considering the density p as the order parameter of the model, we can find a phase transition at a critical point (see Appendix) β_c = σ. As also discussed in the appendix, for β≤β_c the population is in an absorbing phase where there are only susceptible individuals in the stationary states, which leads to the first solution of the model: s = 1, p = 0, d = 0, a = 0. In addition, for β> β_c the four subpopulations coexist in the steady states, and we have a second solution for the model: s = (δ d+σ)/β, p = (1-α) γ d/(σ+δ d), a = γ d, where d is given by d = [c_2/(2 c_1)] {-1 + √(1-4 c_1 c_3/c_2^2)} and the coefficients c_1, c_2 and c_3 are given by (see the Appendix) c_1 = δ [δ+(1+γ) β], c_2 = 2 σ δ + [(1-α) γ+(1+γ) σ-δ] β, c_3 = σ (σ-β). An illustration of the long-time behavior of the population is exhibited in Fig. <ref>, where we plot the stationary densities s, p, d and a as functions of the contagion probability β for fixed parameters γ=0.10, δ=0.05, α=0.30 and σ=0.07. For such values, we have β_c=0.07. The lines were obtained from the numerical integration of Eqs. (<ref>) - (<ref>), and agree very well with the analytical solutions: for β<β_c we have the solution given by Eqs. (<ref>) - (<ref>), and for β>β_c the solution is given by Eqs. (<ref>) - (<ref>). The increase of the contagion probability β leads to an increase of the passive supporter subpopulation. Since such passive supporters are agents who are not drug dealers but who are passively consenting to the drug trafficking cause, they are susceptible to the social influence of drug dealers (see the social interaction P+D → D + D). Thus, the increase of the passive supporters leads to an increase of the population of drug dealers in the stationary states, as we see in Fig. <ref>. For the considered parameters in Fig. <ref>, one can see that for β greater than approximately 0.26 the stationary population of drug dealers becomes the majority subpopulation in the system. Also, we observe that the population of susceptible agents does not disappear in the steady state, even for the limiting case β=1.0. In order to analyze the effects of the parameters on the population of drug dealers, we exhibit in Fig. <ref> the stationary values of the density of drug dealers as functions of β, for fixed parameter δ=0.05. 
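Before turning to that figure, note that the coexistence solution above is straightforward to evaluate directly. The short sketch below computes the stationary drug-dealer density d from the quadratic coefficients c_1, c_2, c_3 for the parameter set used here (γ=0.10, δ=0.05, α=0.30, σ=0.07), taking the '+' root as the physically relevant one, as argued in the Appendix; it is only a reading aid for the formulas, not code from the original work.

```python
import numpy as np

def stationary_d(beta, gamma=0.10, delta=0.05, alpha=0.30, sigma=0.07):
    """Stationary density of drug dealers from c1*d^2 + c2*d + c3 = 0 ('+' root)."""
    if beta <= sigma:          # absorbing phase: beta_c = sigma
        return 0.0
    c1 = delta * (delta + (1.0 + gamma) * beta)
    c2 = 2.0 * sigma * delta + ((1.0 - alpha) * gamma + (1.0 + gamma) * sigma - delta) * beta
    c3 = sigma * (sigma - beta)
    return c2 / (2.0 * c1) * (-1.0 + np.sqrt(1.0 - 4.0 * c1 * c3 / c2 ** 2))

for beta in (0.05, 0.10, 0.30, 1.00):
    print(f"beta = {beta:.2f}  ->  d = {stationary_d(beta):.3f}")
```

For β=0.30 and γ=0.10 this evaluates to d≈0.37, consistent with the values quoted in the discussion that follows.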
The lines were obtained from the numerical integration of Eqs. (<ref>) - (<ref>). In Fig. <ref> (a) we fixed α=0.3, σ=0.07 and the arrest probability γ is varied. We can see that the increase of such probability can effectively decrease the stationary values of drug dealers. Considering for example β=0.3, we have d≈ 0.373 for γ=0.1, d=0.186 for γ=0.3 and d=0.122 for γ=0.5. On the other hand, in Fig. <ref> (b) we fixed γ=0.1, σ=0.07 and the probability α that a drug dealer comes back to crime after being released is varied. We can see that, in such a case, the values of d do not change as rapidly as in the previous case, where we varied γ. Looking again at fixed β=0.3, we have d≈ 0.422 for α=0.5, d≈ 0.373 for α=0.3 and d≈ 0.332 for α=0.1. However, the values of d increase when we increase α, i.e., if the majority of drug dealers come back to their criminal activities after leaving the prison, the number of drug dealers tends to increase. Looking at references <cit.>, we can see that 70% of prisoners released in 2012 in the USA were arrested again within five years, according to data from the Bureau of Justice Statistics (BJS) <cit.>. The recidivism rate is over 70 - 80% for prisoners with juvenile records. Similar percentages are observed in Brazil <cit.>. Finally, another illustration of the key importance of the parameter σ in the model, which is responsible for the spontaneous transition P→ D, is shown in Fig. <ref> (c), where we fixed γ=0.1 and α=0.3. We can see that this parameter determines the position of the critical point β_c, as predicted in Eq. (<ref>), and it also leads the number of drug dealers to increase rapidly. §.§ Two-dimensional square lattice In this subsection we present some results of Monte Carlo simulations of the model on two-dimensional grids (square lattices). Thus, we built an agent-based formulation of the compartmental model proposed in the last subsection. For this purpose, the algorithm to simulate the model is defined as follows: * we generate an L x L grid or square lattice with a population size N=L^2 and periodic boundary conditions; * given an initial condition S(0), P(0), D(0) and A(0), we randomly distribute these agents in the lattice sites; * at each time step, every lattice site is visited in a sequential order; * if a given agent i is in S state, we choose at random one of his/her nearest neighbors, say j. If such neighbor j is in P state, we generate a random number r in the range [0,1]. If r<β, the agent i changes to state P; * if a given agent i is in P state, we choose at random one of his/her nearest neighbors, say j. If such neighbor j is not in D state, we generate a random number r in the range [0,1]. If r<σ, the agent i changes to state D (spontaneous P→ D transition). On the other hand, if the neighbor j is in D state, we generate a random number r in the range [0,1]. If r<(δ+σ), the agent i changes to state D [This rule takes into account both the spontaneous P→ D transition (probability σ) and the P→ D transition due to social pressure of D individuals over P ones (probability δ).]; * if a given agent i is in D state, we generate a random number r in the range [0,1]. If r<γ, the agent i changes to state A; * if a given agent i is in A state, we generate a random number r in the range [0,1]. If r<α, the agent i changes to state D. Otherwise, if r>α, the agent i changes to state S. One time step is defined by the visit of all lattice sites. 
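The following sketch is one possible implementation of this update scheme in plain Python/NumPy. It applies the rules above with synchronous (parallel) updating, i.e., all new states in a sweep are computed from the configuration at the start of that sweep. The lattice size, number of sweeps, random seed, and the single-passive-supporter initial condition are illustrative choices and not taken from the paper, whose simulations distribute the initial compartments randomly and average over 100 independent runs.

```python
import numpy as np

S, P, D, A = 0, 1, 2, 3
rng = np.random.default_rng(0)

def random_neighbor(i, j, L):
    """Pick one of the four nearest neighbors with periodic boundary conditions."""
    di, dj = ((1, 0), (-1, 0), (0, 1), (0, -1))[rng.integers(4)]
    return (i + di) % L, (j + dj) % L

def sweep(lattice, beta, delta, sigma, gamma, alpha):
    """One time step: every site is visited; new states are computed from the
    current configuration and applied only after the full sweep (parallel update)."""
    L = lattice.shape[0]
    new = lattice.copy()
    for i in range(L):
        for j in range(L):
            state, r = lattice[i, j], rng.random()
            if state == S:
                ni, nj = random_neighbor(i, j, L)
                if lattice[ni, nj] == P and r < beta:
                    new[i, j] = P
            elif state == P:
                ni, nj = random_neighbor(i, j, L)
                prob = delta + sigma if lattice[ni, nj] == D else sigma
                if r < prob:
                    new[i, j] = D
            elif state == D:
                if r < gamma:
                    new[i, j] = A
            else:  # state == A
                new[i, j] = D if r < alpha else S
    return new

# Small illustrative run (kept small because this pure-Python loop is not optimized).
L, steps = 50, 500
lattice = np.full((L, L), S)
lattice[L // 2, L // 2] = P
for _ in range(steps):
    lattice = sweep(lattice, beta=0.20, delta=0.05, sigma=0.07, gamma=0.10, alpha=0.30)
s, p, d, a = [(lattice == k).mean() for k in (S, P, D, A)]
print(f"densities after {steps} sweeps: s={s:.3f}, p={p:.3f}, d={d:.3f}, a={a:.3f}")
```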
The agents' states were updated synchronously, i.e., we considered parallel updating, as is standard in probabilistic cellular automata, in order to avoid correlations between consecutive steps <cit.>. In addition, all results are averaged over 100 independent simulations. Considering the same parameters of the previous subsection, namely δ=0.05, α=0.30 and σ=0.07, and fixing γ=0.10, we exhibit in Fig. <ref> the time evolution of the population on a square lattice of linear size L=100 for typical values of β. We can see a behavior similar to that observed in the fully-connected case. For example, for sufficiently small values of the contagion probability β we see a small outbreak for the subpopulations P, D and A, but they disappear from the system at long times, and we will observe only S individuals (see Fig. <ref> (a)). For higher values of β, we observe the coexistence phase where the four compartments survive in the stationary states. In general, the times to achieve such stationary states are longer than in the fully-connected network case, as is usual in agent-based models defined on lattices <cit.>. We can also measure the stationary densities s, p, d and a in the simulations in order to compare them with the fully-connected network results. In Fig. <ref> we exhibit the stationary values of the four subpopulations as functions of the probability β for γ=0.10, δ=0.05, α=0.30 and σ=0.07, considering a square lattice with linear size L=100. We observe similar behaviors for such quantities in comparison with the analytical calculations presented in the previous subsection. However, as is usual in lattice models, we observe a distinct location of the critical point β_c in comparison with the analytical mean-field calculations presented in the previous subsection. For the same set of parameters, looking at the data for the square lattice case we have β_c≈ 0.063, whereas in the previous subsection we observed β_c=0.07. This suggests that, in the presence of the lattice, a smaller value of the contagion probability β is sufficient for the spreading of drug trafficking in the population. The differences can be attributed to the correlations generated by the lattice structure, which are absent in the fully-connected case. For better visualization of the lattice and the subpopulations, we plot in Fig. <ref> some snapshots of the steady states of the model. For this figure we considered the lattice size L=50, the same fixed parameters γ=0.10, δ=0.05, α=0.30 and σ=0.07, and we plot four distinct values of β. The distinct colors represent the subpopulations S (red), P (black), D (green) and A (blue). For small values of β like β=0.05, we observe the discussed absorbing phase where the subpopulations P, D and A disappear from the population after a long time, and only the Susceptible (S) subpopulation survives (upper panel, left side). If we increase β to β=0.08, we observe that the subpopulations P, D and A survive and coexist with the S population, but the Susceptibles are still the majority in the population (upper panel, right side). For β=0.20 the drug dealers are the dominant subpopulation in the grid. Finally, for β=0.50 the subpopulation S appears in small numbers, and most of the passive supporters (P) have turned into drug dealers. Notice also that the drug dealers D (green squares) appear close to passive supporters P (black squares), which reminds us of the importance of the parameter σ, which is responsible for the spontaneous P → D transition. 
§ FINAL REMARKS In this work we propose a mathematical model in order to study the emergence of drug trafficking in a population. For this purpose, we considered the concept of passive supporters, individuals who do not oppose a drug dealing act (the selling of drugs). They go unnoticed and most of them reject the violent aspect of the drug trafficking action (robberies, confrontation with the police force, and so on). The population is divided into four subpopulations, namely susceptibles (S), passive supporters (P), drug dealers (D) and arrested drug dealers (A). As is standard in contagion epidemic-like models, the transitions among such subpopulations are ruled by probabilities. First, we considered the model in a fully-connected population. Thus, we wrote the rate equations for the evolution of the four subpopulations. Analytical results were obtained for the equilibrium solutions of the model. Such solutions, together with the numerical integration of the model's equations, reveal that there are two distinct collective states or phases in the stationary states of the model, depending on the range of parameters: (I) a phase where only the S subpopulation survives and (II) a region where the four subpopulations S, P, D and A coexist. We showed that the emergence of drug dealers in the population is a consequence of the presence of passive supporters in the system, and the higher the passive supporter subpopulation, the higher the fraction of drug dealers. We also verified that the police action, modelled by an arrest probability γ, can be effective in controlling the spreading of drug dealers. In addition, we also presented in this work results of agent-based Monte Carlo simulations of the model on two-dimensional square lattices. The results show a similar behavior in comparison with the fully-connected case, and we also verified the emergence of drug trafficking as a nonequilibrium phase transition. However, due to the presence of the lattice, a smaller value of the contagion probability β is sufficient for the spreading of drug trafficking in the population. Finally, the times for the system to achieve steady states are longer in comparison with the fully-connected case. The emergence of criminality has also been associated with phase transitions in other contexts <cit.>. § ACKNOWLEDGMENTS The author acknowledges financial support from the Brazilian scientific funding agencies Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq, Grant 308643/2023-2) and Fundação de Amparo à Pesquisa do Estado do Rio de Janeiro (FAPERJ, Grant 203.217/2017). § ANALYTICAL CALCULATIONS: MODEL ON A FULLY-CONNECTED NETWORK In this appendix we detail some of the analytical calculations for the model defined in a fully-connected population, presented in section 3.A. Let us start with the limit t→∞ in Eq. (<ref>). Taking da/dt = 0, we obtain a = γ d. For Eq. (<ref>), taking dp/dt = 0 we obtain two solutions, p = 0 or s = (δ d+σ)/β. From Eq. (<ref>), we obtain a = [β/(1-α)] p s. If the solution p=0, Eq. (<ref>), is valid, from Eq. (<ref>) we have a=0, which also leads to d=0 from Eq. (<ref>). Thus, from the normalization condition, Eq. (<ref>), we have s=1. This solution (s,p,d,a)=(1,0,0,0) defines the absorbing state of the model. From Eq. (<ref>), the stationary state gives us p = (1-α) γ d/(δ d+σ), where we considered a=γ d from Eq. (<ref>) to simplify the result. Substituting Eqs. (<ref>), (<ref>) and (<ref>) in the normalization condition Eq. 
(<ref>), we found a second-order polynomial for d, namely c_1d^2 + c_2d + c_3 = 0, where c_1 = δ [δ+(1+γ) β] c_2 = 2 σ δ + [(1-α) γ+(1+γ) σ-δ] β c_3 = σ (σ-β) which solution is d = c_2/2 c_1 {-1 ±√(1-4 c_1 c_3/c_2^2)} We verified numerically that the solution of Eq. (<ref>) with the plus signal is the relevant one for the problem, leading to d>0. We also can verify that we have d=0 from Eq. (<ref>) for β=σ, which defines the critical point of the model, β_c = σ that separates the above-mentioned absorbing phase, for β≤β_c, from the coexistence phase, for β>β_c, where the four subpopulations s, p, d and a coexist in the stationary states. The stationary densities s, p and a in such coexistence phase can be found considering the result of Eq. (<ref>) (with the plus signal, as discussed above) in Eqs. (<ref>), (<ref>) and (<ref>), respectively. The stationary solutions can be also analyzed from the Jacobian matrix point of view. Thus, the stability of the stationary solutions, Eqs. (<ref>) - (<ref>) can be analyzed from the eigenvalues of the Jacobian matrix of the system. A given solution is locally asymptotically stable if all eigenvalues of J have negative real parts <cit.>. The eigenvalues λ can be obtained from det(J-λ I)=0, where I is the identity matrix. The Jacobian matrix of Eqs. (<ref>) - (<ref>) is given by J = [ -β p -β s 0 1-α; β p β s-σ-δ d -δ p 0; 0 σ+δ d δ p-γ α; 0 0 γ -1 ] For the absorbing state given by (s,p,d,a) = (1,0,0,0), the eigenvelues of the Jacobian matrix are λ_1=0, λ_2=β-σ, λ_3=-γ and λ_4=-1. Clearly we have λ_3<0 and λ_4<0. Finally, we have λ_2<0 for β<σ, which is the condition for the validity of the absorbing state solution, as discussed in the text. The coexistence solution given by Eqs. (<ref>) - (<ref>) is the stable for β>σ. It defines the critical point β_c=σ. elsarticle-num-names 00 review M. R. D'Orsogna, M. Perc, Statistical physics of crime: A review, Physics of Life Reviews 12 (2015) 1-21. donnay K. Donnay, Why interdisciplinary research enriches the study of crime, Physics of Life Reviews 12 (2015) 26-27. pacheco J. M. Pacheco, Crime as a complex system, Physics of Life Reviews 12 (2015) 32-33. sooknanan J. Sooknanan, T. A. R. Seemungal, Criminals and their models - a review of epidemiological models describing criminal behaviour, App. Math. Comput. 458 (2023) 128212. sooknanan2 J. Sooknanan, D. M. G. Comissiong, When behaviour turns contagious: the use of deterministic epidemiological models in modeling social contagion phenomena, Int. J. Dynam. Control 5 (2017) 1046-1050. araujo R. A. Araujo, T. B. S. Moreira, A dynamic model of production and traffic of drugs, Economics Letters 82 (2004) 371-376. ornell F. Ornell et. al., High rates of incarceration due to drug trafficking in the last decade in southern Brazil, Trends Psychiatry Psychother 42(2), 153-160 (2020). misra A. K. Misra, Modeling the effect of police deterrence on the prevalence of crime in the society, App. Math. Comput. 237 (2014) 531-545. nuno J. C. Nuño, M. A. Herrero, M. Primicerio, A mathematical model of a criminal-prone society, Discrete and Continuous Dynamical Systems - S, 2011, 4(1): 193-207. mataru B. Mataru, O. J. Abonyo, D. Malonza, Mathematical Model for Crimes in Developing Countries with Some Control Strategies, Journal of Applied Mathematics Volume 2023, Article ID 8699882 (2023). kwofie T. Kwofie, M. Dogbatsey, S. E. Moore, Curtailing crime dynamics: A mathematical approach, Front. Appl. Math. Stat. 8:1086745 (2022). monteiro L. H. A. Monteiro, More guns, less crime? 
A dynamical systems approach, App. Math. Comput. 369 (2020) 124804. meu N. Crokidakis, Modeling the impact of civilian firearm ownership in the evolution of violent crimes, App. Math. Comput. 429 (2022) 127256. short M. B. Short et. al., A Statistical model of criminal behavior, Mathematical Models and Methods in Applied Sciences 18 (2008) 1249-1267. abbas S. Abbas, J. P. Tripathi, A. A. Neha, Dynamical analysis of a model of social behavior: Criminal vs non-criminal population, Chaos, Solitons & Fractals 98 (2017) 121-129. galam_epjb_2002 S. Galam, The September 11 attack: A percolation of individual passive support, Eur. Phys. J. B 26 (2002) 269-272. galam_2003 S. Galam, On reducing terrorism power: a hint from physics, Physica A 323 (2003) 695-704. galam_2003_2 S. Galam, Global physics: from percolation to terrorism, guerilla warfare and clandestine activities, Physica A 330 (2003) 139-149. galam_2023 S. Galam, Identifying a would-be terrorist: An ineradicable error in the data processing?, Chaos, Solitons & Fractals 168 (2023) 113119. marro2005nonequilibrium Marro, R. Dickman, Nonequilibrium phase transitions in lattice models (Cambridge University Press, 2005). jstor M. H. M. Alves, P. Evanson, C. P. de Faria, J. V. P. Vilches, Living in the Crossfire: Favela Residents, Drug Dealers, and Police Violence in Rio de Janeiro (Temple University Press, 2011), https://doi.org/10.2307/j.ctt14bt4r0. bailey N. T. Bailey, The Mathematical Theory of Infectious Diseases and Its Applications (Charles Griffin & Company Ltd, 5a Crendon Street, High Wycombe, Bucks HP13 6LE, 1975). hp1 Dom: Of family, feud, drugs and crime, https://www.telegraphindia.com/entertainment/interview-the-makers-of-brazilian-series-dom-featuring-on-amazon-prime-engages-in-a-candid-chat-with-the-telegraph/cid/1819602 hp2 What Does the Police Killing of a Local Drug Lord Mean for Rio de Janeiro?, https://www.vice.com/en/article/qbx3wm/the-police-killing-of-local-drug-lord-playboy-in-rio-de-janeiro-818 hinrichsen2000non H. Hinrichsen, Non-equilibrium critical phenomena and phase transitions into absorbing states, Advances in Physics 49 (2000) 815-958. usa https://usafacts.org/articles/how-common-is-it-for-released-prisoners-to-re-offend/ brazil https://www.jusbrasil.com.br/noticias/no-brasil-sete-em-cada-dez-ex-presidiarios-voltam-ao-crime-diz-presidente-do-stf/2828503 mf_keom S. Biswas, Mean-field solutions of kinetic-exchange opinion models, Phys. Rev. E 84, 056106 (2011). roos B. Schonfisch, A. de Roos, Synchronous and asynchronous updating in cellular automata, Biosystems 51, 123-143 (1999). religion N. Crokidakis, Nonequilibrium phase transitions and absorbing states in a model for the dynamics of religious affiliation, Physica A 643, 129820 (2024). pmco D. Stauffer, S. M. Moss de Oliveira, P. M. C. de Oliveira, J. S. Sá Martins, Biology, Sociology, Geology by Computational Physicists (Elsevier, 2006). tax1 R. M. Brum, N. Crokidakis, Dynamics of tax evasion through an epidemic-like model, Int. J. Mod. Phys. C 28, 1750023 (2017). tax2 N. Crokidakis, A simple mechanism leading to first-order phase transitions in a model of tax evasion, Int. J. Mod. Phys. C 33, 2250075 (2022). radical N. Crokidakis, Radicalization phenomena: Phase transitions, extinction processes and control of violent activities, Int. J. Mod. Phys. C 34, 2350100 (2023). violent N. 
Crokidakis, Recent violent political extremist events in Brazil and epidemic modeling: The role of a SIS-like model on the understanding of spreading and control of radicalism, Int. J. Mod. Phys. C 35, 2450015 (2024). rw A. E. Matouk, Dynamical analysis, feedback control and synchronization of Liu dynamical system, Nonlinear Analysis 69 (2008) 2113-3224.
http://arxiv.org/abs/2409.02904v1
20240904174718
From Finite to Continuous Phenotypes in (Visco-)Elastic Tissue Growth Models
[ "Tomasz Dębiec", "Mainak Mandal", "Markus Schmidtchen" ]
math.AP
[ "math.AP", "35K57, 47N60, 35B45, 35K55, 35K65, 35Q92" ]
§ ABSTRACT In this study, we explore a mathematical model for tissue growth focusing on the interplay between multiple cell subpopulations with distinct phenotypic characteristics. The model addresses the dynamics of tissue growth influenced by phenotype-dependent growth rates and collective population pressure, governed by Brinkman's law. We examine two primary objectives: the joint limit where viscosity tends to zero while the number of species approaches infinity, yielding an inviscid Darcy-type model with a continuous phenotype variable, and the continuous phenotype limit where the number of species becomes infinite with a fixed viscosity, resulting in a novel viscoelastic tissue growth model. In this sense, this paper provides a comprehensive framework that elucidates the relationships between four different modelling paradigms in tissue growth. 2010 Mathematics Subject Classification. 35K57, 47N60, 35B45, 35K55, 35K65, 35Q92; Keywords and phrases. Continuous Phenotype Limit, Inviscid Limit, Brinkman-to-Darcy Limit, Phenotypic Heterogeneity, Structured Population Models, Porous Medium, Tissue Growth, Parabolic-Hyperbolic Cross-Diffusion Systems. § INTRODUCTION In the study of tissue growth and multi-cell aggregates, understanding the interplay between different species or phenotypes as well as their influence on the collective behaviour is crucial. A key aspect of this interplay involves how various subpopulations, each with distinct characteristics, respond to and, concurrently, influence the overall population density as well as the overall growth dynamics within a given tissue. Mathematical models play an important role in unraveling these complex interactions and predicting the behaviour of biological systems under various conditions. Structured models, which account for the heterogeneity within cell populations, provide a more accurate representation of biological systems. The resulting models can capture the dynamics of different cell types, their interactions, and their responses to environmental stimuli such as species-dependent growth rates. Moreover, structured models help shape the way we design tumour therapy <cit.> and are a fundamental theoretical tool in the study of the development of drug resistance <cit.>. The fundamental building block in phenotype-structured models often comprises systems of Lotka-Volterra type of the form ∂_t n(t; a) - Δ_a n(t; a) = R(n; a), where the density of cells, n=n(t; a), is assumed to be well-mixed in space (spatial homogeneity) and labelled by a continuous phenotype variable a <cit.>, and references therein. In this equation, the linear diffusion term models mutation and the phenotype-dependent right-hand side models selection and growth dynamics of each subpopulation.
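As a quick illustration (ours, not taken from the paper), the following Python sketch discretizes the selection-mutation equation above with an assumed logistic-type selection term R(n;a) = n(t;a)(r(a) - ρ(t)), where ρ(t) = ∫ n(t;a) da models competition for a shared resource; the fitness profile r(a) and all parameter values are hypothetical and only meant to show the typical behaviour (mass concentrating on the fittest trait while mutation keeps a spread).

```python
import numpy as np

# Illustrative (not from the paper): selection-mutation dynamics
#   d/dt n(t;a) = Laplacian_a n + R(n;a),  R(n;a) = n * (r(a) - rho(t)),
# where rho(t) is the total mass of n over the phenotype variable a.
A, dt, T = 101, 2e-5, 0.5                 # phenotype grid points, time step, horizon
a = np.linspace(0.0, 1.0, A)
da = a[1] - a[0]
r = 1.0 + a                                # assumed fitness: larger a is fitter
n = np.exp(-((a - 0.2) / 0.05) ** 2)       # initial density concentrated near a = 0.2

for _ in range(int(T / dt)):
    lap = np.zeros_like(n)
    lap[1:-1] = (n[2:] - 2.0 * n[1:-1] + n[:-2]) / da**2
    lap[0], lap[-1] = lap[1], lap[-2]      # crude no-flux boundary in the trait variable
    rho = np.trapz(n, a)                   # logistic competition through the total mass
    n = np.maximum(n + dt * (lap + n * (r - rho)), 0.0)

print("total mass:", np.trapz(n, a), "mean trait:", np.trapz(a * n, a) / np.trapz(n, a))
```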
Multiple variations of this model exist, including spatial resolution <cit.> in the form of linear random dispersal with trait-dependent mobility (by incorporating “D(a) Δ n”) <cit.> or a nonlinear response to the pressure, p, within the tissue (by incorporating “D(a) ∇· (n(x,t;a) ∇ p )”) <cit.>. In recent years, there has been significant progress in comprehending fluid-based models of tissue growth with variations of the continuity equation ∂_t n(x,t) + ∇· (n(x,t) v(x,t)) = f(x,t; n), at their core <cit.>, and references therein. In this context, the population density, n=n(x,t), at a point x and time t, is influenced by a velocity field and subject to certain growth dynamics. Then, tissue growth arises from the interplay of these two effects: cell division driven by growth dynamics, f, increases the pressure, which, subsequently, accelerates dispersal as modelled by the velocity field v. Predominantly, there are three rheological laws in the literature relating the velocity and the pressure, p: Darcy's law, Brinkman's law, and the Navier-Stokes equations. In alignment with the focus of our paper, let us briefly review the literature on elastic models (Darcy coupling) and viscoelastic models (Brinkman coupling). Darcy's Law One of the simplest relationships between velocity and pressure is encapsulated in Darcy's law, v = - ∇ p(n). Under this law, Eq. (<ref>) transforms into a semi-linear porous medium equation <cit.>, which was initially proposed in the context of tumour growth in <cit.> and later studied analytically in <cit.> for the constitutive law p(n) = n^γ and in <cit.> for the constitutive law p(n) = ϵ n / (1-n). Historically, two-species variants of this Darcy model were introduced in <cit.>, featuring a joint population pressure generated by the presence of all cells, regardless of their type. More recently, under less restrictive conditions on the initial data and growth dynamics, this system has been studied in <cit.>, with a formal derivation of an N-species model presented in <cit.>. Finally, <cit.> considered a Darcy model with a continuous phenotype variable. However, a rigorous connection between models with a finite number of phenotypes N < ∞ and those with a continuous trait has, to the best of our knowledge, never been shown, and this will be addressed as part of this work. Brinkman's Law When viscoelastic effects are incorporated, the velocity in Eq. (<ref>) is related to the pressure via Brinkman's law, i.e., v = - ∇ W, where - νΔ W + W = p, with ν >0 being the viscosity constant. This model was proposed and studied in <cit.>. In <cit.>, the authors considered a three-compartment model for proliferative cells, quiescent cells, and dead cells, with Brinkman coupling. These equations are coupled with a linear diffusion equation for nutrient distribution and drug distribution, respectively. Finally, a two-species version featuring sharp interfaces between the two phenotypes was proposed and studied analytically in <cit.>. Inviscid Limit Formally, it can be observed that as ν→ 0, W → p. In this case, we recover Darcy's law in the inviscid limit. This limit was rigorously established in <cit.> for the constitutive law p(n)=n, and using the same technique, the inviscid limit was extended to the power law case p(n) = n^γ in <cit.>, and to a certain class of systems involving interacting species in <cit.>. Phenotype Limit Models featuring N ∈ℕ distinct phenotypes quickly become numerically intractable.
Moreover, in cases such as tumours, the diversity of phenotypes is so vast that it is often more effective to represent the phenotype space as a continuum, as mentioned earlier. This approach not only reduces the complexity of the mathematical model but also maintains biological accuracy. Goal of this work As the starting point of this work, we consider an N-species system where n^(i) = n^(i)(x,t) denotes the density of the ith subpopulation at location x∈ℝ^d and time t≥ 0, where i∈{1,…,N}. Each species responds to the collective population pressure via Brinkman's law, and their growth dynamics are governed by phenotype-dependent growth rates: ∂_t n^(i) = ∇· (n^(i)∇ W) + n^(i) G^(i)((n^(j))_j=1^N), coupled through Brinkman's law -νΔ W + W = 1/N∑_j=1^N n^(j), and equipped with nonnegative initial data n^(i)(x,0) = n^(i),in(x), for i=1, …, N. In this paper, we study this stratified tissue growth model, focusing on the behaviour of multiple species, (n^(i))_i=1^N, under different scaling limits. Specifically, the primary objectives of this paper are twofold (the accompanying figure relating the four limiting models is omitted here): * Joint Limit: We study the transition from a nonlocal, viscoelastic to an inviscid, local response in the tissue (ν→ 0) while simultaneously letting N→∞, to obtain an inviscid Darcy-type model with a continuous phenotype variable. * Continuous Phenotype Limit: Keeping the viscosity coefficient ν>0 fixed, we let the number of distinct species tend to infinity. This limit provides a novel viscoelastic tissue growth model with a continuous phenotype variable. The paper provides several novel regularity results which are stable under these different limits. Specifically, we establish a continuous-phenotype entropy inequality, we introduce the notion of weak solutions, and we derive entropy inequalities that play a crucial role in ensuring the stability and convergence of the solutions. The rest of this paper is organised as follows. In Section <ref> we introduce the precise assumptions on the growth rates and the initial data. We introduce the notion of solutions, and present the main theorems. Subsequently, in Section <ref>, we establish the existence of weak solutions to the N-species viscoelastic model and we prove several uniform estimates culminating in an entropy dissipation inequality which is a centrepiece in the joint limit. Section <ref> is dedicated to the joint limit ν→ 0 and N→∞. In Section <ref> we establish the continuous-phenotype limit, N→∞, while keeping ν > 0 fixed and when ν =0. We conclude in Section <ref> with some remarks and future avenues. § NOTATION, DEFINITIONS, AND MAIN RESULTS This section establishes the notation used throughout this work. We introduce the main assumptions regarding the initial data and growth rates of each species. Additionally, we define our concept of weak solutions for the respective systems. Finally, we present the main results of this paper. §.§ Notation Throughout, let N∈ℕ, with N≥ 2, denote the total number of distinct species. Growth rates. We begin by addressing the assumptions on the growth rates. In the continuous-structure limit, we expect a growth rate function parameterised by the phenotype variable. Therefore, we consider G: [0,∞) × [0,1] →ℝ, (n, a) ↦ G(n; a), and we define G_N^(i)(n) := G(n; i N^-1), for i=1, …, N and n ≥ 0, with a C^1∩ L^∞-continuation on the negative half line.
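Before stating the structural assumptions on G, the following rough numerical sketch (our own discretization, not the paper's scheme) indicates how the N-species Brinkman system above can be explored in one space dimension; the growth rate G(n;a) = (1+a)(1-n), the viscosity, the grid, and the initial data are all illustrative assumptions.

```python
import numpy as np

# Rough 1-D sketch of  d/dt n_i = d/dx( n_i * dW/dx ) + n_i * G(nbar; a_i),
# with  -nu * W'' + W = nbar  and the assumed growth rate G(n; a) = (1 + a)(1 - n).
N, M, nu, dt, T = 8, 256, 1e-2, 2e-4, 0.5
x = np.linspace(0.0, 1.0, M, endpoint=False); dx = x[1] - x[0]
a = np.arange(1, N + 1) / N                          # phenotypes a_i = i/N
k = 2.0 * np.pi * np.fft.fftfreq(M, d=dx)
n = np.array([0.5 * np.exp(-50.0 * (x - 0.5) ** 2) for _ in range(N)])  # initial data

def ddx(f):                                          # spectral derivative (periodic)
    return np.real(np.fft.ifft(1j * k * np.fft.fft(f)))

for _ in range(int(T / dt)):
    nbar = n.mean(axis=0)                            # (1/N) * sum_j n_j
    W = np.real(np.fft.ifft(np.fft.fft(nbar) / (1.0 + nu * k**2)))  # Brinkman solve
    Wx = ddx(W)
    for i in range(N):
        growth = (1.0 + a[i]) * (1.0 - nbar)         # assumed G(nbar; a_i)
        n[i] = n[i] + dt * (ddx(n[i] * Wx) + n[i] * growth)
    n = np.maximum(n, 0.0)

print("masses per phenotype:", np.round(np.trapz(n, x, axis=1), 3))
```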
We make the following assumptions on G (G1[hyp:G1]) regularity: G ∈ C^1 (× [0,1]), (G2[hyp:G2]) monotonicity: max_a ∈ [0,1]∂_n G(· ; a) ≤ - α <0 for some α > 0, (G3[hyp:G3]) homeostatic pressure: ∀ a ∈ [0,1] there is n^⋆(a) >0 such that G(n^⋆(a); a) =0 and n̅:=sup_a ∈ [0,1] n^⋆(a) < ∞. Initial data. We define the initial data in a similar fashion. Let n^in: × [0,1] → [0,∞), (x, a) ↦ n^in(x;a), be a Carathéodory function such that for all a∈[0,1] n^in(· ; a) ∈ L^1∩ L^∞(), as well as n^in(x; a) ≤n̅, for all a∈[0,1] and almost every x ∈. We moreover assume that sup_a∈[0,1]∫_ n^in(x; a)|x|^2 x < ∞. Then, we set n^(i),in_N(x) n^in(x; iN^-1), almost everywhere on . Starting point and further notation. Having introduced all the necessary notation, we are now ready to present the starting point of our endeavour – an N-species system, where n_ν,N^(i)=n_ν,N^(i)(x,t) denotes the number density of the ith subpopulation at location x and time t, for i=1, …, N. Each species responds to the collective population pressure via Brinkman's law to avoid overcrowding, and the growth dynamics are governed by the phenotype-dependent growth rates introduced earlier. Altogether, the system's dynamics are expressed as follows: {[ nı - ÷*nı W = nı Gı_N(n ), ; nı(x,0) = n^(i),in_N(x), ; -ν W +W = n, ]. where the (rescaled) total population density is defined as n(x,t) = 1/N∑_i=1^N nı(x,t). Let us introduce the interpolated density as n(x,t; a) ∑_i=1^N nı(x,t) _(i-1/N,i/N](a), along with the interpolated initial data n_N^in(x; a) ∑_i=1^N n^(i),in_N(x) _(i-1/N,i/N](a). Similarly, we introduce the interpolated growth rate: G_N(n;a) ∑_i=1^N G_Nı(n)_(i-1/N,i/N](a). The advantage of introducing the interpolated quantities is that they naturally embed into function spaces continuous in space, time, and phenotype that play a crucial role in the continuous-phenotype limit. Indeed, using the interpolated quantities, System (<ref>) can be expressed as {[ n - ÷*n W = n G_N(n; a), ; n(x,0;a) = n^in_N(x;a), ; -ν W +W = n, ]. for all a ∈ [0,1]. Upon observing that n = 1/N∑_i=1^N nı = ∑_i=1^N nı∫_i/N^i+1/N 1 a = ∫_0^1 n(x,t;a)a, we find that the rescaled total population density satisfies {[ n - ÷*n W = ∫_0^1 n G_N(n;a)a, ; nı(x,0) = n_N^in(x), ; -ν W +W = n, ]. with n^in_N := ∫_0^1 n_N^in a. Limiting systems. Based on System (<ref>) we are interested in two limits — the continuous phenotype limit (N→∞) and the inviscid limit (ν→ 0). Regarding the first, letting N→∞, and assuming all limit objects exists, we formally obtain a phenotypically stratified viscoelastic tissue growth model of the form {[ n_ν,∞ - ÷*n_ν,∞ W_ν,∞ = n_ν,∞ G(n_ν,∞;a),; n_ν,∞(x,0;a) = n^in(x;a), ; -ν W_ν,∞ +W_ν,∞ = n_ν,∞, ]. where n_ν,∞ := ∫_0^1 n_ν,∞ a. Conversely, letting ν→ 0, we obtain the N-species inviscid system, similarly to <cit.>, {[ n_0,N - ÷*n_0,Nn_0,N = n_0,N G_N(n_0,N;a), ; n(x,0;a) = n^in_N(x;a), ; ]. where n_0, N := ∫_0^1 n_0, N a. However, the second result of this work is to obtain compactness sufficient to pass to the joint limit and obtain the phenotypically stratified inviscid system {[ n_0,∞ - ÷*n_0,∞n_0,∞ = n_0,∞ G(n_0,∞;a),; n_0,∞(x,0;a) = n^in(x;a), ; ]. where, as before, n_0,∞ := ∫_0^1 n_0, ∞ a. Below we introduce the notion of weak solutions that we will be working with in this paper. 
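Before doing so, it may help to record one concrete growth rate satisfying (G1)-(G3); the choice below is ours, for illustration only, and is not singled out anywhere in the paper.

```latex
% Illustrative example (not from the paper): an admissible growth rate.
G(n;a) = (1+a)\,(1-n), \qquad (n,a) \in \mathbb{R} \times [0,1].
% (G1): G is a polynomial in (n,a), hence G \in C^1(\mathbb{R}\times[0,1]).
% (G2): \partial_n G(n;a) = -(1+a) \le -1 < 0, so the monotonicity assumption holds with \alpha = 1.
% (G3): G(n^\star(a);a) = 0 gives n^\star(a) \equiv 1, hence \bar n = \sup_{a\in[0,1]} n^\star(a) = 1 < \infty.
```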
We say that the pair (n,W) is a weak solution to System (<ref>) with nonnegative initial data n^in∈ L^1()∩ L^∞() if, for almost every a ∈ [0,1], n(· ; a) ∈ L^∞(0,T; L^1() ∩ L^∞()) is nonnegative and there holds ∫_0^T∫_ nφ x t - ∫_0^T ∫_ n∇ W·∇φ x t = -∫_0^T ∫_φ n G_N(n; a) x t -∫_φ(x,0)n^in(x; a) x , for almost every a∈[0,1] and any test function φ∈ C_c^∞(× [0,T)), as well as - ν W + W = n, almost everywhere in ×(0,T). When N=∞, the same properties define a weak solution to System (<ref>) (with the convention that G_∞ = G). At this stage, let us recall that any solution of Brinkman's equation can be expressed as the convolution with the fundamental solution, denoted by K_ν, , the solution of - νΔ W + W = n, can be represented as W = K_ν⋆n, where K_ν(x) = 1/4π∫_0^∞exp*-π|x|^2/4sν - 4s/πs^-d/2 s. In particular K_ν≥ 0 and ∫ K_ν = 1. Regarding the inviscid limit, let us now introduce the notion of weak solution to the limiting Darcy models, System (<ref>) and System (<ref>). We call n_0,N≥ 0 a weak solution to System (<ref>) with nonnegative initial data n_0,N^in∈ L^1()∩ L^∞() if, for almost every a∈ [0,1], n_0,N(· ; a) ∈ L^∞(0,T; L^1()∩ L^∞()) and n_0,N∈ L^2(0,T; H^1()), and there holds ∫_0^T ∫_ n_0,Nφ x t - ∫_0^T ∫_ n_0,N∇n_0,N·∇φ x t =- ∫_0^T ∫_φ n_0,N G_N(n_0,N; a) x t - ∫_φ(x,0) n_0,N^in(x; a) x, for almost every a∈[0,1] and any test function φ∈ C_c^∞(×[0,T)). When N=∞, the same properties define a weak solution to System (<ref>) (with the convention that G_∞ = G). §.§ Main results Our first result concerns the existence of weak solutions to the N-species viscoelastic system, System (<ref>) as well as establishing an entropy inequality satisfied by the rescaled total population density. There exists a weak solution (nı,W)_i=1^N of System (<ref>) such that * nı∈ L^∞(0,T;L^1∩ L^∞()), uniformly in ν and N, * ∃ C>0 such that 0 ≤ nı≤ C, uniformly in ν and N, * W∈ L^∞(0,T;L^1∩ L^∞()), uniformly in ν and N, * W∈ L^∞(0,T;W^1,q()) ∩ L^∞(0,T;W^2,r()), for 1≤ q ≤∞ and 1 < r < ∞, uniformly in N, and such that n as defined in Eq. (<ref>) satisfies System (<ref>) in the sense of Definition <ref>. Let (nı,W)_i=1^N be the weak solution constructed in Lemma <ref>. Then, the following entropy inequality holds: [n](T) - [n^in] - ∫_0^T ∫_n Wxt ≤∫_0^T ∫_logn∫_0^1 n G_N(n;a)a x t, where [f](t) ∫_ f(x, t)(log f(x, t) - 1) x. Let ν>0 be fixed. For N∈, let (nı, W)_i=1^N be the weak solution constructed in Lemma <ref>. Then, there exists a function n_ν, ∞: × [0,T] × [0,1] → [0, ∞) with n_ν, ∞(·, ·, a)∈ L^∞(0,T; L^1() ∩ L^∞()), for almost every a ∈ [0,1], such that, up to a subsequence, n_ν, N ⋆⇀ n_ν, ∞ in L^∞(0,T;L^p()), 1≤ p≤∞, n_ν, N →n_ν,∞ in L^2(0,T;L^2()), W_ν, N → W_ν,∞:=K_ν⋆n_ν,∞ in L^2(0,T;H^1_loc()). The limit (n_ν,∞, W_ν,∞) is a weak solution to System (<ref>) in the sense of Definition <ref>. For N∈ and ν >0, let (nı,W)_i=1^N be the weak solution constructed in Lemma <ref>. Then, there exists a function n_0, ∞: × [0,T] × [0,1] → [0, ∞) with n_0, ∞(·, · ; a)∈ L^∞(0,T; L^1() ∩ L^∞()), for almost every a ∈ [0,1], and n_0, ∞∈ L^2(0,T;H^1()), such that, up to a subsequence, n_ν, N ⋆⇀ n_0,∞ in L^∞(0,T;L^p()), 1≤ p≤∞, n_ν, N →n_0,∞ in L^2(0,T;L^2()), W_ν, N →n_0,∞ in L^2(0,T;H^1()). The limit function n_0,∞ is a weak solution to System (<ref>) in the sense of Definition <ref>. § WEAK SOLUTIONS AND THE ENTROPY INEQUALITY This section is devoted to proving Lemma <ref> and Lemma <ref>. 
To this end, we first introduce a parabolic regularisation and, in a series of auxiliary propositions, derive the desired uniform bounds. Then, we prove that the solutions to the regularised system converge strongly to weak solutions of System (<ref>). Finally, we establish the entropy inequality. §.§ Approximate system Consider the following parabolic regularisation of System (<ref>) {[ nı - ϵ nı = ÷*nı W + nı Gı_N(n ), i = 1, …, N,; nı(x,0) = n^(i),in_ϵ,N(x), ; -ν W +W = n. ]. As in the previous section, we introduce the piecewise constant interpolation in phenotype n_ϵ,ν,N(x,t;a) := ∑_i=1^N nı_ϵ,ν,N(x,t) _(i-1/N,i/N](a), and rewrite System (<ref>) as {[ n- ϵ n = ÷*n W + n G_N(n; a), ∀ a ∈ [0,1],; n(x,0;a) = n^in_ϵ,N(x;a), ; -ν W +W = n. ]. Regularised initial data. The initial data for the above regularised system is given as follows. Let n^in_ϵ∈ C^∞_c(^d × [0,1]) approximate n^in such that n_ϵ^in(· ; a)_L^p(^d)≲n^in(· ; a)_L^p(^d), for any 1≤ p≤∞ and all a∈[0,1], and n_ϵ^in(· ; a) ⟶ n^in(· ; a), strongly in L^1() for each a∈[0,1] as well as almost everywhere in ^d × [0,1]. Finally, n^(i),in_ϵ,N, n^in_ϵ,N, and n_ϵ,N^in are defined in the same way as in the previous section. Let us point out that the condition ∫_ n^in|x|^2 x < ∞ can be guaranteed to be preserved by the approximation of the initial data, i.e., we have sup_ϵ>0 ∫_ n_ϵ^in|x|^2 x < ∞. Indeed, let R>0 be fixed. Then, for ϵ small enough we have ∫_ n_ϵ^in|x|^2_B_R(0) x ≤ 1 + ∫_ n^in|x|^2 x. Passing to the limit R→∞, we obtain (<ref>). Existence of solutions. Following the construction of <cit.>, we obtain a nonnegative solution nı_ϵ,ν,N which is bounded uniformly in ϵ in the following sense nı_ϵ,ν,N∈ L^∞(0,T;L^1∩ L^∞(^d)), ∂_t nı_ϵ,ν,N ∈ L^2(0,T;H^-1(^d)). Moreover, the following regularity holds for each ϵ>0 nı_ϵ,ν,N∈ L^2(0,T;H^1(^d)), ∂_t nı_ϵ,ν,N∈ L^2(0,T;L^2(^d)). §.§ Uniform estimates in ν and N Before proving strong convergence of the regularised densities towards a weak solution of the Brinkman system, let us discuss some properties of the regularised rescaled total population density, n, which satisfies the equation {[ n - ϵn = ∇· (n∇ W); + ∫_0^1 n G_N(n;a)a, in ×(0,T)×[0,1],; n(x,0) = n_ϵ,N^in(x)∫_0^1 n_ϵ,N^in(x;a)a. ]. There holds 0 ≤n≤n̅, almost everywhere. The uniform L^∞-bound is obtained by a standard argument following <cit.>. To this end, we assume (x^⋆, t^⋆) is a maximum point for n and n(x^⋆, t^⋆) > n̅. In such a point, there holds 0 = n(x^⋆, t^⋆) = ϵΔn(x^⋆, t^⋆) + ∇n(x^⋆, t^⋆) ·∇ W(x^⋆, t^⋆) + n(x^⋆, t^⋆) Δ W(x^⋆, t^⋆) + ∫_0^1 n(x^⋆, t^⋆) G(n(x^⋆, t^⋆); a) a < ν^-1n(x^⋆, t^⋆)(W(x^⋆, t^⋆) - n(x^⋆, t^⋆)) ≤ 0, which is a contradiction. Hence, 0 ≤n≤n̅. As an immediate consequence, let us integrate Equation (<ref>) to obtain nı_L^1()≤G_L^∞([0,n̅] × [0,1])nı_L^1(), so that, by Gronwall's lemma, we deduce that nı∈ L^∞(0,T;L^1()), uniformly in ϵ, ν, and N. Using the uniform L^∞-control on the rescaled total population we also get the following result. For N∈, ν>0 and ϵ>0, let (nı)_i=1^N be the solution to Eq. (<ref>). Then, there holds 0 ≤ nı≤ C, where the constant C>0 is independent of N, ν, and ϵ. Furthermore, there holds 0 ≤ n(x,t;a) ≤ C, almost everywhere with the same constant C. We define the quantity Ψ(x, t) := αn (x, t) e^2 β t, where α = 1/n̅ n_ϵ, N^in_L^∞(ℝ^d × [0,1]) and β = G _L^∞([0,n̅] × [0,1]) . Let |u|_- and _-(u) we denote the negative part and negative sign of the function u, i.e., |u|_- := -u, for u<0, 0, for u≥ 0, and _-(u):= -1, for u<0, 0, for u≥ 0. 
Then, we consider ∫_| Ψ(x, t) - n(x, t; a) |_- x = ∫_sign_- ( Ψ - n) ( ∂_t Ψ - ∂_t n) x = ∫_sign_-( Ψ - n) ( 2 βΨ + α e^2 β t∂_t n - ∂_t n) x = ∫_sign_-( Ψ - n) ∇·*( Ψ - n) ∇ W x + ∫_sign_-( Ψ - n) ( 2 βΨ + α e^2 β t∫_0^1 n G_N(n; ã) d ã - n G_N(n;a) ) x ≤∫_sign_-( Ψ - n) ( βΨ - n G_N(n;a) ) x ≤β∫_| Ψ - n|_- x. By Gronwall's Lemma, we obtain 0 ≤sup_t ∈ [0,T]sup_a ∈ [0,1]∫_| Ψ - n|_- x ≤ 0, and therefore 0 ≤ n(x, t; a) ≤ n_ϵ, N^in_L^∞() e^2 G _L^∞([0,n̅] × [0,1])T. The result follows by observing that n_ϵ, N^in_L^∞()≲n^in_L^∞(). Having established the uniform L^p-bounds, we derive further regularity results. First, we prove a bound on the gradient of the solution to Eq. (<ref>). Let (n)_ϵ>0 be the family of solutions to Eq. (<ref>). Then, there holds √(ϵ)*∇n_L^2(0,T; L^2())≤ C, for some constant C>0 independent of ϵ, ν, and N. Testing Eq. (<ref>) by the solution and integrating by parts, we obtain 1/2 t ∫_n^2 x + ϵ∫_n^2 x = 1/2∫_n^2 W x + ∫_^n*∫_0^1 n G_N(n;a) ax ≤1/2∫_n^2 W x + G_L^∞([0,n̅] × [0,1])∫_n^2 x. We now observe that the first term on the right-hand side has a sign: ∫_n^2 W x = 1/ν∫_n^2 *W-n x ≤1/νn_L^3()^2W_L^3() - 1/ν∫_n^3 x ≤1/νn_L^3()^3K_ν_L^1() - 1/ν∫_n^3 x ≤ 0, recalling that K_ν_L^1()=1. Using this computation in (<ref>) and applying Gronwall's lemma yields the statement. Let (n)_ϵ > 0 be the family of solutions to Eq. (<ref>). Then, ∫_0^T∫_^n* W^2xt≤ C, where C is independent of ϵ, ν, and N. Multiplying Eq. (<ref>) by W and integrating in space, we obtain 1/2∫_ W n x = ϵ∫_nΔ W x -∫_n*∇ W^2 x + ∫_ W[ ∫_0^1 n G_N(n;a)a] x. For the reaction term, we have ∫_ W[ ∫_0^1 n G_N(n;a)a] x ≤G_L^∞([0,n̅] × [0,1])∫_^ W nx, while proceeding similarly as in (<ref>), we see that ϵ∫_nΔ W x ≤ 0. Thus, we obtain 1/2∫_ W n x + ∫_n*∇ W^2 x ≤ C ∫_^ Wnx. The result follows from Gronwall's lemma. Let (n)_ϵ>0 be the family of solutions to Eq. (<ref>). Then, there holds ∂_t n∈ L^2(0,T;H^-1()), uniformly in ϵ, ν, and N. For a test function φ∈ L^2(0,T;H^1()), we have *∫_0^T∫_∂_t nφ x t ≤ϵ∫_0^T∫_ |∇n||∇ϕ| x t + ∫_0^T∫_n|∇ W||∇ϕ| x t + G_L^∞([0,n̅] × [0,1])∫_0^T∫_n|ϕ| x t ≤ C∇φ_L^2(0,T;L^2()) + Cφ_L^2(0,T;L^2()), where the constants are independent of any parameters by Propositions <ref> and <ref>. Let n be the solution to Eq. (<ref>). Assume that there is a constant C>0 such that sup_ϵ >0∫_n_ϵ, N^in|x|^2 x ≤ C. Then, the second moment remains bounded and there holds sup_t∈ [0,T]∫_n(x,t) x^2 x≤ C, for some constant independent of ϵ, ν, and N. We compute 1/2∫_nx^2 x = ϵ d ∫_n x - ∫_n x ·∇ W x + 1/2∫_x^2*∫_0^1 n G_N(n;a)a x ≤ϵ d n_L^∞(0,T; L^1()) +1/2* 1 + sup_a∈ [0,1]G_L^∞([0,n̅]×[0,1])∫_n|x|^2 x + 1/2∫_n |∇ W|^2 x ≤ C + C ∫_nx^2 x, and, again, we conclude by Gronwall's lemma. Let us point out that the above proposition implies also uniform second-moment control for n(x,t;a) for all phenotypes a∈ [0,1]. Let n be the solution to Eq. (<ref>). Then, there holds sup_t∈[0,T]∫_n (x,t) logn (x,t) x ≤ C, for some constant independent of ϵ, ν, and N. Let us consider ∫_nlogn x = ∫_n≥ 1nlogn x - ∫_n < 1nlogn x ≤n_L^∞(0,T; L^∞())n_L^∞(0,T;L^1()) + J, where J - ∫_n < 1nlogn x. In order to estimate J, let 𝒩(x) denote the standard normal Gaussian. Then, we have J = - ∫_n < 1nlognx = - ∫_n_n < 1/𝒩(x)log*n_n < 1/𝒩(x)𝒩(x) x + ∫_n_n < 1*x^2 x ≤ - ∫_n_n < 1/𝒩(x)log*n_n < 1/𝒩(x)𝒩(x) x + C, having used the second-order moment bound from Proposition <ref>. 
Applying Jensen's inequality to the first term, we observe - ∫_ n_n < 1/𝒩(x)log*n_n < 1/𝒩(x)𝒩(x) x ≤ - *∫_n_n < 1/𝒩(x)(x) 𝒩(x) xlog*∫_n_n < 1/𝒩(x)𝒩 x ≤ e^-1, as s↦ s log(s) is convex. In conclusion, we obtain J ≤ C + e^-1, whence ∫_nlogn x ≤n_L^∞(0,T; L^∞())n_L^∞(0,T;L^1()) + C, which concludes the proof. §.§ Compactness of solutions to the regularised equation Let (_h)_0<h<1 be a family of nonnegative functions such that _h ⊂ B_2(0), _h ∈ C^∞(^d∖ B_1(0)), _h(x) = 1/(|x|^2+h^2)^d/2 for |x|≤ 1. The following lemma was proved in <cit.>. Let (u_k) be a bounded sequence in L^p(^d×(0,T)) for some 1≤ p<∞. If (∂_t u_k) is uniformly bounded in L^r(0,T;W^-1,r) for some r≥ 1, and if lim_h→0 lim sup_k→∞ |log h|^-1 ∫_0^T∫_^2d_h(x-y)*u_k(x,t)-u_k(y,t)^p x y t = 0, then (u_k) is compact in L^p_loc(^d×(0,T)). Conversely, if (u_k) is globally compact in L^p, then the above limit holds. The sequence (W)_ϵ>0 is compact in L^1(0,T;L^1()). Consequently lim_h→0 lim sup_ϵ→0 |log h|^-1 ∫_0^T∫_^2d_h(x-y)*W(x) - W(y) x y t = 0. From the Brinkman equation we know that W is uniformly bounded in any L^∞(0,T;W^1,q()) for q∈[1,∞]. Moreover, since ∂_t W = K_ν⋆∂_t n, we have ∂_t W∈ L^2(0,T;H^-1(^d)) uniformly, using Proposition <ref>. Hence, by the Aubin-Lions lemma, W is compact in L^1(0,T;L^1_loc()). To obtain global compactness we argue that the sequence is equi-tight. Indeed, testing the Brinkman equation with 1/2|x|^2 and integrating by parts, we see -ν d ∫_ W x + 1/2∫_ |x|^2 W x = 1/2∫_ |x|^2 n x. Using Proposition <ref>, we deduce that W has a uniformly finite second moment, implying global compactness. Finally, Eq. (<ref>) holds by Lemma <ref>. The family (n)_ϵ > 0 is compact in L^1(0,T;L^1()). Let us denote Q_h(t):=∫_^2d_h(x-y)*n(x) - n(y) x y, and Q_h(t) := ∫_^2d_h(x-y)∫_0^1*n(x) - n(y) a x y. Using Eq (<ref>), we derive t *n(x) - n(y) - ∇_x·*∇ W(x)*n(x) - n(y) - ∇_y·*∇ W(y)*n(x) - n(y) + 1/2*Δ W(x)+Δ W(y)*n(x) - n(y) - 1/2*Δ W(x)-Δ W(y)*n(x) + n(y)σ - ϵ*Δ_x+Δ_y*n(x) - n(y) ≤*n(x)G(n(x)) - n(x)G(n(x))σ, where σ = σ(x,t;a):=(n(x,t;a)-n(y,t;a)). Multiplying by _h(x-y) and integrating, we obtain, using the symmetry of , ∫_^2d_h(x-y)∫_0^1*n(x) - n(y) a x y ≤ - ∫_^2d_h(x-y)Δ W(x)∫_0^1*n(x) - n(y) a x y +∫_^2d_h(x-y)*Δ W(x)-Δ W(y)∫_0^1 n(x)σ a x y -2∫_^2d∇_h(x-y)·*∇ W(x)-∇ W(y)∫_0^1*n(x) - n(y) a x y +2ϵ∫_^2dΔ_h(x-y)∫_0^1*n(x) - n(y) a x y +∫_^2d_h(x-y)∫_0^1 G_N(n(x))*n(x) - n(y) a x y +∫_^2d_h(x-y)∫_0^1 n(y)*G_N(n(x))-G_N(n(y))σ a x y =: ℐ_1 + … + ℐ_6. We now observe the following bounds: ℐ_2 ≤*∫_^2d_h(x-y)*Δ W(x)-Δ W(y)∫_0^1 n(x)σ a x y ≤1/ν*n_L^∞(0,T; L^∞())∫_^2d_h(x-y)*W(x)-W(y) x y + 1/ν∫_^2d_h(x-y)*n(x)-n(y)n(x) x y, having used Brinkman's law, and ℐ_4 ≤ 2ϵ∫_^2dΔ_h(x-y)∫_0^1*n(x) - n(y) a x y ≤ C*n_L^∞(0,T; L^∞())ϵ/h^2, from directly estimating the Laplacian of the kernel _h. For the terms involving the growth rates, we have ℐ_5 = ∫_^2d_h(x-y)∫_0^1 G_N(n(x))*n(x) - n(y) a x y ≤sup_a∈[0,1]sup_n∈[0,n̅] |G(n;a)| ∫_^2d_h(x-y)∫_0^1 *n(x) - n(y) a x y, and ℐ_6 = ∫_^2d_h(x-y)∫_0^1 n(y)*G_N(n(x))-G_N(n(y))σ a x y ≤sup_a∈[0,1]sup_n∈[0,n̅] |∂_n G(n;a)| ∫_^2d_h(x-y)n(y)*n(x)-n(y) x y ≤α*n_L^∞(0,T;L^∞())∫_^2d_h(x-y)*n(x)-n(y) x y. Finally, we consider the main commutator ℐ_3 = ∫_^2d∇_h(x-y)·*∇ W(x)-∇ W(y)∫_0^1*n(x) - n(y) a x y. The careful treatment of commutators of this form is one of the main contributions of the papers <cit.>. Since our case is covered by these results, we omit the details, and refer the reader to the aforementioned papers for full proof. 
Applying <cit.>, we deduce ℐ_3 ≤ C_1D^2 W_L^∞(0,T; L^2())*log h^1/2 + C_2*Δ W_L^∞(0,T;L^∞(^d))∫_^2d_h(x-y)∫_0^1*n(x) - n(y) a x y, where the constant C_1 depends on the norms of n. Putting all the estimates together, we have ∫_^2d_h(x-y)∫_0^1*n(x) - n(y) a x y ≤ C∫_^2d_h(x-y)∫_0^1*n(x) - n(y) a x y + C∫_^2d_h(x-y)*n(x) - n(y) x y +C∫_^2d_h(x-y)*W(x) - W(y) x y + Cϵ/h^2 + C*log h^1/2, where we stress that the constants may depend unfavourably on ν>0 but are independent of N and ϵ. Proceeding in the exact same way with the equation for n, we obtain ∫_^2d_h(x-y)*n(x) - n(y) x y ≤ +C∫_^2d_h(x-y)∫_0^1*n(x) - n(y) a x y + C∫_^2d_h(x-y)*n(x) - n(y) x y +C∫_^2d_h(x-y)*W(x) - W(y) x y +Cϵ/h^2 + C*log h^1/2. Hence, the sum of the two compactness quantities satisfies *Q_h(t) + Q_h(t) ≤ Cϵ/h^2 + C*log h^1/2 + C*Q_h(t) + Q_h(t) +C∫_^2d_h(x-y)*W(x) - W(y) x y. Applying Gronwall's lemma, we have Q_h(t) + Q_h(t) ≤ Cϵ/h^2 + C*log h^1/2 + C*Q_h(0) + Q_h(0) +C∫_0^T∫_^2d_h(x-y)*W(x) - W(y) x y t. Multiplying by |log h|, taking the limit superior over ϵ>0, and letting h→0, we deduce that lim_h→0 lim sup_ϵ→0 |log h|^-1 sup_t∈[0,T] Q_h(t) = 0, where we used the fact that the initial data is compact in ϵ by construction, and (<ref>) from Proposition <ref>. Using Lemma <ref>, we conclude that the sequence (n)_ϵ>0 is compact in L^1(0,T;L^1_loc(^d)). The same limit is true for Q_h. Observing that Q_h(t) = ∫_^2d_h(x-y)∫_0^1*n(x) - n(y) a x y =∫_0^1∫_^2d_h(x-y)*n(x) - n(y) x y a = ∫_0^1∫_^2d_h(x-y)∑_i=1^N*nı(x) - nı(y)_(i-1/N,i/N](a) x y a = 1/N∑_i=1^N∫_^2d_h(x-y)*nı(x) - nı(y) x y, we have, for each i=1,…,N, lim_h→0 lim sup_ϵ→0 |log h|^-1 sup_t∈[0,T] ∫_^2d_h(x-y)*nı(x) - nı(y) x y = 0, which implies local compactness of each of the densities nı as ϵ→0. Global compactness is obtained by virtue of the second-moment bound from Proposition <ref>. As a consequence of the above proposition in conjunction with the uniform L^∞ bounds, we deduce that the densities nı are compact in L^q(0,T,L^p()) for all p,q ∈ [1,∞). Moreover, since * W - W_L^q(0,T;L^p())≤ K_ν_L^1()n- n_L^q(0,T;L^p()), the same range of strong convergence is true for ∇ W. Putting together the results obtained above, we can formulate the following summary: Upon passing to a subsequence, we can find nı∈ L^∞(0,T;L^1∩ L^∞()) such that * nı⋆⇀ nı in L^∞(0,T;L^p()), for all 1≤ p ≤∞, * nı→ nı in L^q(0,T,L^p()), for all p,q ∈ [1,∞), and, by their definition, * n(·, · ; a) ⋆⇀ n_(·, · ; a) in L^∞(0,T;L^p()), for all 1≤ p ≤∞ and a.e. a∈ [0,1], * n→ n in L^q(0,T,L^p()), for all p,q ∈ [1,∞), * n⋆⇀n_ in L^∞(0,T;L^p()), for all 1≤ p ≤∞, * n→n in L^q(0,T,L^p()), for all p,q ∈ [1,∞), * n→n almost everywhere in ×[0,T]. * ∂_tn⇀∂_tn_, in L^2(0,T;H^-1()), as well as * W→ W in L^q(0,T,W^1,p()), for all p,q ∈ [1,∞). With these convergence results it is now trivial to pass to the limit in the weak form of System (<ref>) and deduce that the tuple (nı, W)_i=1^N is a weak solution of System (<ref>). Thus, the proof of Lemma <ref> is complete. We conclude this subsection with the following observation regarding continuity in time of the solutions. For each ν>0, the function n_ν, N constructed in the previous section belongs to C([0,T]; L^2()), after possibly changing it on a set of measure zero. Moreover, we have n_ϵ, ν, N(t) →n_ν, N(t), for every t∈[0,T], and max_t∈[0,T]n_ϵ, ν, N(t)_L^2()≤ C, where C>0 is independent of ϵ, ν, and N. The argument is based on a generalised Arzelà-Ascoli theorem <cit.>. 
First, let us observe that we can establish the uniform-in-time L^2-control, by considering 1/2n_ϵ, ν, N_L^2()^2 ≤ C n_ϵ, ν, N_L^2()^2, where the constant C>0 is independent of ν, ϵ, and N. An application of Gronwall's lemma yields sup_0 ≤ t ≤ Tn_ϵ, ν, N_L^2()^2 ≤ C. In addition, with the fact that sup_t∈[0,T] |log h|^-1∬_^2d_h(x-y) |n_ϵ, ν, N(x,t) - n_ϵ, ν, N(y,t)|^2 x y ≤ C sup_t∈[0,T] |log h|^-1∬_^2d_h(x-y) |n_ϵ, ν, N(x,t) - n_ϵ, ν, N(y,t)| x y → 0, by Proposition <ref>, we can conclude that the set (n_ϵ, ν, N(t))_ϵ>0 is relatively compact in L^2() for each t∈[0,T]. It remains to show equi-continuity in C([0,T];L^2()). To this end, we consider n_ϵ, ν, N(t+h) - n_ϵ, ν, N(t)_L^2()^2 ≲n_ϵ, ν, N, α(t+h) - n_ϵ, ν, N(t+h)_L^2()^2 + n_ϵ, ν, N, α(t+h) - n_ϵ, ν, N, α(t)_L^2()^2 + n_ϵ, ν, N, α(t) - n_ϵ, ν, N(t)_L^2()^2, where we used the triangular inequality with the mollified densities n_ϵ, ν, N, α(t) := _α⋆_x n_ϵ, ν, N(t), for every t∈[0,T]. Here, the kernel _α is the one introduced in Lemma <ref>, normalised to have unit mass. Now, we observe that n_ϵ, ν, N, α(t) - n_ϵ, ν, N(t)_L^2()^2 ≤∫_*∫__α(x-y)(n_ϵ, ν, N(y, t) - n_ϵ, ν, N(x, t) ) y^2 x ≤∬_^2d_α(x-y)*n_ϵ, ν, N(y, t) - n_ϵ, ν, N(x, t)^2 y x ≤ C _α_L^1()Q_α(t), having used Jensen's inequality, the uniform L^∞-bounds, and the notation from Proposition <ref>. Similarly, we obtain n_ϵ, ν, N, α(t+h) - n_ϵ, ν, N(t+h)_L^2()^2 ≤ C _α_L^1()Q_α(t+h). Finally, let us address the second term. We find n_ϵ, ν, N, α(t+h) - n_ϵ, ν, N, α(t)_L^2()^2 = ∫_n_ϵ, ν, N, α(t+h) - n_ϵ, ν, N, α(t)∫_t^t+h∂_s n_ϵ, ν, N, α(s) s x = ∫_t^t+h∫_n_ϵ, ν, N, α(t+h) - n_ϵ, ν, N, α(t)∂_s n_ϵ, ν, N, α(s) x s ≤ C n_ϵ, ν, N, α_L^∞(0,T;H^1())∂_tn_ϵ, ν, N, α_L^2(t,t+h;H^-1()) ≤ C n_ϵ, ν, N_L^∞(0,T;L^2())∂_tn_ϵ, ν, N, α_L^2(0,T;H^-1())√(h)/α ≤ C √(h)/α. Thus, we conclude that n_ϵ, ν, N(t+h) - n_ϵ, ν, N(t)_L^2()^2 ≤ Csup_ϵ>0sup_t∈[0,T]|logα|^-1Q_α(t) + C√(h)/α. Choosing α=h^1/4, we have that n_ϵ, ν, N(t+h) - n_ϵ, ν, N(t)_L^2()^2 → 0, as h→ 0 uniformly in ϵ>0. Now, by the Arzelà-Ascoli theorem, there exists a function g ∈ C([0,T];L^2()) and a subsequence up to which n_ϵ, ν, N(t) → g(t), for all t∈[0,T]. Thus g is the time-continuous representative of n_ν, N and, henceforth, we identify n_ν, N with g. In particular, we have n_ϵ, ν, N(T) →n_ν, N(T). For the subsequent analysis, let us point out that the continuity of n_ν, N in C([0,T];L^2()) and the uniform boundedness in L^∞(0,T;L^1(, (1 + |x|^2) x) implies that n_ν, N∈ C([0,T];L^1()) and n_ν, N(T)_L^1(, (1+|x|^2) x)≤ C, uniformly in ν and N. §.§ The entropy inequality We are now in a position to prove the entropy inequality. The limit n of the sequence (n)_ϵ>0 constructed in the previous section satisfies the entropy inequality ℋ[n](T) - ℋ[n_N^in] - ∫_0^T ∫_n W x t ≤∫_0^T ∫_logn∫_0^1 n G_N(n;a)a x t. Let δ > 0 and ϕ∈ C^∞_c (^d), ϕ≥ 0. We consider the following regularised form of the entropy functional ℋ_δ^ϕ[n](t) ∫_^d^( n(x,t) + δ)(log(n(x,t)+δ)-1) ϕ(x) x, and by ℋ^ϕ[f] we denote the above functional with δ=0. Given the regularity of n, the weak form of Eq. (<ref>) can be formulated as ∫_0^T ∫_nφ(x,t) + nφ(x,t) · Wxt + ϵ∫_0^T ∫_n·φ(x,t)xt = ∫_0^T ∫_φ(x,t) ∫_0^1 n G_N(n;a) axt, for any φ∈ L^2(0,T;H^1(^d)). Choosing φ(x,t) log(n + δ) ϕ(x), we obtain ∫_0^T ddtℋ^ϕ_δ [n](t) t+ I^ϵ, δ_1 + I^ϵ, δ_2 + I^ϵ, δ_3 + I^ϵ, δ_4 = I^ϵ, δ_5, where I^ϵ, δ_1 =∫_0^T ∫_^d^n/n+ δn· Wϕ x t, I^ϵ, δ_2 = ∫_0^T ∫_nlog(n+δ)ϕ· W x t, I^ϵ, δ_3 = ϵ∫_0^T ∫_n^2/n+δϕ x t, I^ϵ, δ_4 = ϵ∫_0^T ∫_log(n +δ)ϕ·n x t, I^ϵ, δ_5 = ∫_0^T ∫_log(n+δ) ϕ∫_0^1 n G_N(n;a) axt. 
Now we investigate each term individually. Starting with I_1^ϵ,δ we see I_1^ϵ,δ = ∫_0^T ∫_^d^n/n+ δn· Wϕ x t, = ∫_0^T ∫_^d^n· Wϕ x t- δ∫_0^T ∫_^d^n· W/n+ δϕ x t, = -∫_0^T ∫_n Wϕ x t- ∫_0^T∫_n∇ W·∇ϕ x t + δ∫_0^T ∫_^d^log(n + δ) Wϕ x t +δ∫_0^T ∫_log(n + δ) W·ϕ x t. Passing to the limit ϵ→ 0, we readily obtain I_1^ϵ,δ→ I_1^δ = -∫_0^T ∫_n Wϕ x t- ∫_0^T∫_n∇ W·∇ϕ x t + δ∫_0^T ∫_^d^log(n + δ) Wϕ x t+ δ∫_0^T ∫_log(n + δ) W·ϕ x t. For the last two integrals, we observe that δ∫_0^T ∫_^d|log(n + δ)| | W| ϕ x t + δ∫_0^T ∫_log(n + δ) | W·ϕ| x t≤ C δlogδ. Thus, when ϵ→ 0 and δ→ 0, I_1^ϵ,δ→ -∫_0^T ∫_n Wϕ x t- ∫_0^T∫_n∇ W·∇ϕ x t. Next, the convergence I_2^ϵ, δ→∫_0^T ∫_nlog( n) ϕ· W x t follows easily from the dominated convergence theorem. The next term from Eq. (<ref>), I_3^ϵ,δ, is nonnegative and can be dropped in the limit. The fourth term from Eq. (<ref>) is estimated by *I_4^ϵ,δ ≤ϵ∫_0^T ∫_ϕn*logδ + n x t ≤ϵ*n_L^∞(0,T;L^∞()) + |logδ|∇ϕ_L^2()∇n_L^2(0,T;L^2()) ≤ C√(ϵ)log(δ), having used Lemma <ref>. Thus, as ϵ→ 0, and then δ→ 0 we have I_4^ϵ,δ→ 0. The remaining term of Eq. (<ref>) is given by I^ϵ, δ_5 = ∫_0^T ∫_log(n+δ)ϕ(x) ∫_0^1n G_N(n;a)axt. Notice that n G_N(n;a) converges strongly in L^2 to n G_N(n;a) for every a∈[0,1] and *n G_N(n;a)≤G_C([0,n̅]× [0,1])n̅. Therefore ∫_0^1n G_N(n;a)a→∫_0^1n G_N(n;a)a in L^2(×(0,T)). It follows that, as ϵ→ 0 I^ϵ, δ_5→∫_0^T ∫_log(n+δ)ϕ(x) ∫_0^1n G_N(n;a)axt. Then, as δ→ 0, we deduce I^ϵ, δ_5→∫_0^T ∫_log(n)ϕ(x) ∫_0^1n G_N(n;a)axt, using the dominated convergence theorem. Finally, we consider the term involving the time derivative. Since n∈ C([0,T];L^2()), the mapping t ↦ℋ^ϕ_δ [n(t)] is continuous in [0,T]. We therefore have ∫_0^T ddtℋ^ϕ_δ [n] t= ℋ^ϕ_δ [n](T) - lim_s→ 0ℋ^ϕ_δ [n](s) = ℋ^ϕ_δ [n](T) - ℋ^ϕ_δ [n_ϵ, N^in]. Then, since n(T)→n(T) in L^2(), we have as ϵ→ 0, ℋ^ϕ_δ [n](T)-ℋ^ϕ_δ[n_ϵ,N^in] →ℋ^ϕ_δ [n](T)-ℋ^ϕ_δ[n_N^in]. Now, by the dominated convergence theorem, we obtain ℋ^ϕ_δ [n](T) = ∫_n(T)+δlog(n(T)+δ)-1ϕ x →∫_n(T)logn(T)-1ϕ x, as δ→0. We have thus shown that, in the limit ϵ→ 0 and, subsequently δ→ 0, Eq. (<ref>) becomes ℋ^ϕ[n](T) - ℋ^ϕ[n_N^in] - ∫_0^T ∫_n Wϕ x t- ∫_0^T ∫_n W·ϕ x t + ∫_0^T ∫_nlognϕ· W x t ≤∫_0^T ∫_ϕlogn∫_0^1 n G_N(n;a) a x t, for any ϕ∈ C_c^∞(), ϕ≥ 0. Let us now choose ϕ=χ_R, where χ_R is a sequence of smooth cut-off functions such that |∇χ_R|≲ R^-1. Then, using the L^∞ L^1-control of nlogn from Proposition <ref>, we can pass to the limit R→∞ by the monotone convergence theorem to obtain Eq. (<ref>). § THE JOINT LIMIT Before we start discussing the joint limit let us prove a short convergence result of the initial data and the growth rates. The initial data of the N-system converges in the following sense: * For each a∈[0,1], n^in_N(· ;a) → n^in(· ;a) in L^p(^d), 1≤ p < ∞, * n^in_N →n^in:=∫_0^1 n^in a in L^p(), 1≤ p < ∞, * up to a subsequence, ℋ[n_N^in]→ℋ[n^in]. Likewise, the growth rate G_N converges in the following sense: * For each a∈ [0,1], G_N(· ;a)→ G(· ;a) uniformly on I for each compact set I⊂, * let (m_N)_N be a sequence of functions such that 0 ≤ m_N≤n̅ and such that m_N converges to m in L^p(×(0,T)). Then G_N(m_N;a) converges to G(m;a) in L^q(×(0,T)) for any p≤ q<∞, for any a∈[0,1]. Ad (1). Let a∈[0,1] be fixed and note that there is an integer i(N) such that a∈(i(N)-1/N,i(N)/N], for each each N. Now, let a_N:=i(N)/N and observe that then |a_N-a|<1/N, so that a_N→ a. Next, consider *n_N^in(· ;a)-n^in(· ;a)_L^p() = *n_N^(i(N)),in(·)-n^in(· ;a)_L^p() = *n^in(· ;a_N)-n^in(· ;a)_L^p(). 
Let δ>0 and choose a compact set K_δ⊂ such that *n^in(· ;a_N)-n^in(· ;a)_L^p() ≤*n^in(· ;a_N)-n^in(· ;a)_L^p(K_δ) + *n^in(· ;a_N)-n^in(· ;a)_L^p(∖ K_δ) ≤*n^in(· ;a_N)-n^in(· ;a)_L^p(K_δ) + δ/3, where we used the uniform moment bound for n^in and the L^∞-bound n^in≤n̅. Regarding the first term, we use the continuity of n^in in the a-variable, Property (<ref>), and the dominated convergence theorem. Ad (2). For n^in_N = ∫_0^1 n^in_N a, we use Minkowski's inequality to estimate *n^in_N-n^in_L^p()≤∫_0^1 *n^in_N(· ;a)-n^in(· ;a)_L^p() a. The integrand converges for each a and is bounded by n^in_N(· ;a)-n^in(· ;a)_L^p()≤ 2sup_a∈[0,1]n^in(· ; a)_L^p()≤ C. Therefore, by the dominated convergence theorem, we deduce n^in_N →n^in, in L^p(). Ad (3). Recall that ℋ[n_N^in] = ∫_n_N^inlogn_N^in - n_N^in x. To show convergence towards ℋ[n^in], we investigate the logarithmic term. Since n_N^in is bounded in L^∞() and converges to n^in in L^1(), it follows from the dominated convergence theorem that, for a subsequence, n_N^inlogn_N^in converges strongly to n^inlogn_N^in in L^2. Indeed, there is a subsequence such that n_N^in converges almost everywhere on ^d and a function h_1∈ L^1() such that n_N^in≤ h_1 almost everywhere. Since n_N^in converges also in L^2(), we can choose a further subsequence and a function h_2 ∈ L^2() with n_N^in≤ h_2 almost everywhere. Note that *n_N^inlogn_N^in≤√(n_N^in) + (n_N^in)^2 ≤√(h_1) + *n_N^in_L^∞() h_2. Since √(h_1)+h_2 ∈ L^2(), we obtain the desired convergence in L^2, by dominated convergence. Now, using the second-moment control, we deduce convergence in L^1 as follows. Let 1>δ>0 be given and let K_δ = B_R(δ)(0) ⊂^d be a ball such that sup_N≥ 2 ∫_∖ K_δn_N^in*logn_N^in x < δ/3. Indeed, this is possible which can be established as follows. First, let us observe that ∫_∖ K_δn_N^in |logn_N^in| x ≤ d ∫_∖ K_δ*n_N^in^d+1/d+3 x + ∫_∖ K_δ*n_N^in^2 x ≤ d ∫_∖ K_δ*n_N^in^d+1/d+3 x + C/R(δ)^2. Note that ∫_∖ K_δ*n_N^in^d+1/d+3 x = ∫_∖ K_δ*n_N^in^d+1/d+3|x|^2(d+1)/d+3/|x|^2(d+1)/d+3 x ≤( ∫_∖ K_δ1/|x|^d+1 x )^2/d+3( ∫_∖ K_δn_N^in|x|^2 x )^d+1/d+3, where the second term is bounded by the uniform moment estimate. The first term can be estimated by ( ∫_∖ K_δ1/|x|^d+1 x )^2/d+3 = (∫_|x|> R(δ) |x|^-d-1 x)^2/d+3≈ R(δ)^-2/(d+3). Choosing R(δ)≈δ^-d+3/2, we deduce (<ref>). Now, in conjunction with the L^2-convergence above, we find N_0 ∈ N such that n_N^inlogn_N^in - n^inlogn^in_L^2() < δ/3 |K_δ|^1/2, for any N≥ N_0. Then, n_N^inlogn_N^in - n^inlogn^in_L^1() ≤n_N^inlogn_N^in - n^inlogn^in_L^1(K_δ) + 2δ/3 ≤ |K_δ|^1/2n_N^inlogn_N^in - n^inlogn^in_L^2() + 2δ/3 <δ, which proves the claim. Ad (4). Let us now consider the growth rates. Let n∈ I for some compact set I⊂. For a∈[0,1], let a_N be a sequence constructed as before. Then max_n∈ I*G_N(n;a) - G(n;a) = max_n∈ I*G(n;a_N) - G(n;a)≤∂_a G_L^∞(I × [0,1]) |a_N-a| → 0. Ad (5). Finally, let (m_N)_N be a sequence strongly converging to m in L^p. Then G_N(m_N;a) - G(m;a)_L^p≤G_N(m_N;a) - G_N(m;a)_L^p + G_N(m;a) - G(m;a)_L^p. The last term converges to zero by item (4) of this proof. For the remaining term we write G_N(m_N;a) - G_N(m;a)_L^p≤αm_n-m_L^p→ 0, which gives the desired convergence in L^p. Interpolating with the L^∞ bound (since 0 ≤ m_N≤n̅), we conclude the proof. §.§ Compactness of the rescaled total population density For the remainder of this section let ν = ν_N be any sequence such that ν_N→ 0 as N →∞. Let us stress that we impose no conditions on the speed or monotonicity of the convergence ν_N→0. 
The uniform bounds in L^∞(0,T;L^1∩ L^∞()) obtained in the proof of Lemma <ref> allow us to extract subsequences of n and n which converge weakly in L^∞(0,T,L^p()), 1≤ p ≤∞, namely n⇀n , and n(·,· ;a) ⇀ n(·,· ;a), for every a∈[0,1]. Clearly, we have n = ∫_0^1 n a. The following convergence holds: W→n, in L^2(0,T,L^2()). In particular, n∈ L^2(0,T;H^1()). The local compactness of W is established using the Aubin-Lions lemma. We begin by proving space regularity, using the Brinkman equation ∫_0^T∫_ W^2xt = -∫_0^T∫_ W Wxt, = - ∫_0^T∫_^n Wxt + ∫_0^T∫_^(n-W) Wxt, = - ∫_0^T∫_^n Wxt - ν∫_0^T∫_^ W^2xt, ≤- ∫_0^T∫_^n Wxt. Now, rearranging the entropy inequality, Lemma <ref>, we obtain - ∫_0^T ∫_n W x t ≤ℋ[n_N^in] -ℋ[n](T) + ∫_0^T ∫_logn∫_0^1 n G_N(n;a)a x t≤ C, which follows from the entropy bound in Proposition <ref>. Therefore, W is uniformly bounded in L^2(0,T;L^2()). Next, we prove time regularity. For any φ∈ L^2(0,T;H^1()), we have *⟨∂_t W, φ⟩ = *⟨∂_t n, K_ν_N⋆φ⟩ ≤∂_t n_L^2(0,T; H^-1())K_ν_N⋆φ_L^2(0,T;H^1()) ≤ Cφ_L^2(0,T;H^1()), so that ∂_t W is uniformly bounded in L^2(0,T;H^-1()). This gives us the local compactness of W in L^2(0,T,L^2()). Now we show that the second moment of W is bounded, ∫_0^T∫_ Wx^2xt = ∫_0^T∫_nx^2xt + ν∫_0^T∫_ Wx^2xt, = ∫_0^T∫_nx^2xt + 2 d ν∫_0^T∫_ Wxt, which is bounded by Proposition <ref>. We thus deduce global compactness of W. Identifying the limit of W as n follows easily from the weak convergence n⇀n and the representation W = K_ν_N⋆n. The rescaled total population density n converges strongly to n in L^2(0,T;L^2()). Let us write n - n_L^2(0,T;L^2())≤n - W_L^2(0,T;L^2()) + n -W_L^2(0,T;L^2()). It only remains to estimate the first term: n-W_L^2(0,T;L^2())^2 = - ν∫_0^T∫_^( n -W) Wxt = - ν∫_0^T∫_^n Wxt - ν∫_0^T∫_^ W^2xt ≤- ν∫_0^T∫_^n Wxt. Applying the entropy inequality, Lemma <ref>, as before we get - ν∫_0^T∫_n Wxt≤ Cν_N. Hence, n - n_L^2(0,T;L^2())≤ C√(ν_N) + n -W_L^2(0,T;L^2()), which goes to zero as N →∞ using Lemma <ref>. §.§ The limit equation for the rescaled total population density Having obtained the strong compactness of n and the weak compactness of ∇ W, we can now pass to the limit in Eq. (<ref>) to obtain, in the weak sense, ∂n/∂ t - ÷(nn) = ∫_0^1 n G(n;a)a. Testing the limiting equation by log(n_0,∞+ δ) ϕ and following an argument similar to the derivation of the entropy inequality, we obtain ℋ[n](T) - ℋ[n^in] + ∫_0^T ∫_n^2 x t = ∫_0^T ∫_logn∫_0^1 n G_N(n;a)a x t. §.§ Strong convergence of the velocity In this subsection we will show that the L^2-norm of the velocity ∇ W converges to the L^2-norm of the limit velocity ∇n. Combining this fact with the weak convergence of the velocities from Lemma <ref>, we will deduce strong convergence. We can compare the entropy equality, Eq. (<ref>), and the entropy inequality,. Eq. (<ref>), to deduce -∫_0^T∫_ n Wxt ≤∫_0^T∫_^*n^2xt +ℋ[n](T) -ℋ[n](T) +ℋ[n_N^in] -ℋ[n^in] +React(n,n) -React(n,n), where React(n,n) ∫_0^T ∫_logn∫_0^1 n G_N(n;a)a x t. We now wish to pass to the limit N→∞ in (<ref>). To this end, we investigate the terms on the right-hand side in pairs. From Proposition <ref> we have lim_N→∞ℋ[n_N^in] = ℋ[n^in]. To compare the final-time entropies, we first state the following proposition. Up to extracting a subsequence it holds that n(T) ⇀n(T) weakly in L^1(). In light of Remark <ref> and the uniform L^2-control of n_ν_N, N(T)_L^2(), we may apply the Dunford-Pettis theorem to infer the existence of some χ∈ L^1() and a subsequence such that n_ν_N, N⇀χ, as N→∞. 
This limit can be identified using the equation for n. Let δ>0 and let ξ_δ be a sequence of smooth nonnegative functions approximating the continuous nonnegative piecewise linear function ξ_δ(s):= 1 for s∈[0,T-2δ), (T-δ)-s/δ for s∈ [T-2δ,T-δ), 0 for s∈ (T-δ,T]. Then, taking ϕ∈ C_c^∞() and using ξ_δ(t)ϕ(x) as test function in the weak formulation of System (<ref>), we have ∫_0^T∫_ n ξ_δ' ϕ x t - ∫_0^T ξ_δ(t) ∫_n∇ W·∇ϕ x t = -∫_0^T ξ_δ(t) ∫_∫_0^1 ϕ n G_N(n; a) a x t -∫_ϕ(x)n^in_N(x) x. Passing to the limit δ→ 0, we obtain ∫_ n(T)ϕ x - ∫_0^T ∫_n∇ W·∇ϕ x t = -∫_0^T ∫_∫_0^1 ϕ n G_N(n; a) a x t -∫_ϕ(x)n^in_N(x) x, and then passing to the limit N→∞, we have ∫_ χϕ x - ∫_0^T ∫_n∇n·∇ϕ x t = -∫_0^T ∫_∫_0^1 ϕ n G(n; a) a x t -∫_ϕ(x)n^in(x) x, where we have used Proposition <ref>, Proposition <ref> and the weak convergences ∇ W⇀ W, and n_ν_N, N(T) ⇀χ. However, testing the weak form of the Darcy system (<ref>) with the same test function ξ_δ(t)ϕ(x) and passing to the limit δ→ 0, we obtain ∫_ n(T)ϕ x - ∫_0^T ∫_n∇n·∇ϕ x t = -∫_0^T ∫_∫_0^1 ϕ n G(n; a) a x t -∫_ϕ(x)n^in(x) x. Juxtaposing the last two equalities, we easily deduce that χ = n(T) a.e. in . By convexity of the entropy functional ℋ[·] and Proposition <ref>, we have lim sup_ν→ 0^( ℋ[n](T) - ℋ[n](T) ) ≤ 0, It remains to treat the reaction terms. By the uniform L^1-control on nlogn from Proposition <ref>, for every δ > 0 there exists a compact set K=K_δ⊂, such that sup_N ≥ 2 ∫_0^T ∫_K^cn*logn x t ≤δ. Then, since the growth rate is uniformly bounded, *∫_0^T ∫_K^clogn∫_0^1 n G_N(n;a)a x t≤ Cδ, for each N ≥ 2 as well as for the limit quantity at N=∞. For the integral on K, we apply Egorov's Theorem to find A⊂ K with μ(A) ≤δ such that n→n uniformly in K∖ A. Let 0 < β < 1/e. By uniform convergence, on K∖ A we have (for N large enough) n≥β/2n≥β/4 and n < β/2n < β. We now partition the compact domain K as follows, K = (K ∖ A ∩n≥β/2) ∪ (K ∖ A ∩n<β/2) ∪ A ≡ K_1∪ K_2∪ A. On K_1 the sequence logn converges strongly in L^p. Thus, together with the weak convergence of n and the strong convergence of G_N(n,a) (cf. Proposition <ref>), we readily obtain ∫_0^T ∫_K_1logn ∫_0^1 n G_N(n;a)a x t →∫_0^T ∫_K_1logn∫_0^1 n G(n;a)a x t. On the set K_2, using the uniform bound for G_N and n < β, we have |∫_0^T ∫_K_2logn∫_0^1 n G_N(n;a)a x t| ≤ Cmax_[0,1]× [0,n̅]*G(n;a)μ(K)β*logβ. Lastly, on the set A | ∫_0^T ∫_Alogn∫_0^1 n G_N(n;a)a x t| ≤ C δ. Let us stress the the last two bounds hold for N≥ 2 and for N=∞. Combining all the above results, we conclude that lim_N→∞*React(n,n) - React(n,n)≤ C(δ + μ(K_δ)β|logβ|). Sending β→ 0 and then δ→0, we deduce that lim_N→∞React(n,n) = React(n,n). Finally, let us recall from the proof of Proposition <ref> that -∫_0^T∫_n Wxt≥∫_0^T∫_* W^2xt. Therefore, using (<ref>), and results (<ref>), (<ref>), and (<ref>), we can pass to the limit N→∞ in Eq. (<ref>) to obtain lim sup_ν→ 0^∫_0^T∫_* W^2xt≤∫_0^T∫_^*n^2xt, combining which with the weak convergence ∇ W⇀∇ n, yields strong convergence of the velocity ∇ W. §.§ Conclusion of the proof Using the weak convergence (<ref>) of the interpolated density n, the strong convergence of the rescaled population density n, the strong convergence of the velocity ∇ W→∇n, and the results of Proposition <ref> (for the convergence of the initial data and the reaction term), it is straightforward to pass to the limit in the weak formulation of System (<ref>) to obtain that n satisfies System (<ref>), in the weak sense, with velocity ∇n. Thus, the proof of Theorem <ref> is complete. 
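The Brinkman-to-Darcy mechanism at the core of this joint limit can be illustrated with an elementary one-dimensional computation (ours, not part of the proof): for a fixed smooth density profile, solve Brinkman's relation on a periodic grid and observe the velocity potential approaching the density as the viscosity vanishes. The profile and grid below are arbitrary assumptions.

```python
import numpy as np

# Elementary illustration of W -> n as nu -> 0: solve  -nu * W'' + W = n
# on a 1-D periodic grid via FFT (the operator is diagonal in Fourier space).
M = 256
x = np.linspace(0.0, 1.0, M, endpoint=False)
n = np.exp(-60.0 * (x - 0.5) ** 2)              # a smooth stand-in density (assumption)
k = 2.0 * np.pi * np.fft.fftfreq(M, d=1.0 / M)  # wave numbers

for nu in [1.0, 1e-1, 1e-2, 1e-3, 1e-4]:
    W = np.real(np.fft.ifft(np.fft.fft(n) / (1.0 + nu * k**2)))
    print(f"nu = {nu:7.0e}   max |W - n| = {np.max(np.abs(W - n)):.3e}")
```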
§ THE LIMIT N→∞ This section is devoted to proving Theorem <ref>. To this end, let ν>0 be fixed and let (nı, W)_i=1^N be the weak solution constructed in Lemma <ref> as the limit of the approximate sequence (nı, W)_i=1^N. With the uniform bounds guaranteed by Lemma <ref>, we can extract a subsequence such that n(·,· ;a) ⋆⇀ n_ν,∞(·,· ;a) weakly-* in L^∞(0,T;L^p()) for any p∈[1,∞]. Defining n_ν,∞:=∫_0^1 n_ν,∞ a, we immediately obtain that n converges weakly to n_ν,∞ in the same range of topologies. Now define W_ν,∞ as the solution to the Brinkman equation -νΔ W_ν,∞ + W_ν,∞ = n_ν,∞. Clearly W converges weakly to W_ν,∞. To pass to the limit N→∞ in the weak formulation of System (<ref>), we will demonstrate that the sequences (n)_N and (∇ W)_N converge strongly. The sequence (W)_N converges, as N→∞, to W_ν,∞ strongly in L^1(0,T;L^1(^d)). This convergence is obtained similarly as in Proposition <ref>. The gradient ∇ W is uniformly bounded in L^∞(0,T;L^p()) for any 1≤ p ≤∞, the time derivative ∂_t W = ∂_t n⋆ K_ν is uniformly bounded in L^2(0,T;H^-1()) using the uniform bound on ∂_t n from Corollary <ref>, and the second-moment control in Proposition <ref> provides equi-tightness. The sequence (∇ W)_N converges to ∇ W_ν,∞ strongly in L^2(0,T;L^2_loc()). This follows from the Aubin-Lions lemma. From the Brinkman equation, we have D^2 W bounded in L^2(0,T;L^2()). Since ν is fixed, the bound for the time derivative is obtained from ∂_t∂_x_k W = ∂_x_kK_ν⋆∂_t n and Corollary <ref> again. §.§ Compactness of the rescaled total population density We now revisit the proof of Proposition <ref> to deduce strong convergence of the rescaled total population density. The sequence (n)_N converges strongly in L^1(×(0,T)). Recall from the proof of Proposition <ref> (cf. Eq. (<ref>)) that ∫_^2d_h(x-y)*n(x) - n(y) x y ≤ Cϵ/h^2 + C*log h^1/2 + C∫_0^T∫_^2d_h(x-y)*W(x) - W(y) x y t + C∫_^2d_h(x-y)∫_0^1*n_ϵ,N^in(x) - n_ϵ,N^in(y) a x y + C∫_^2d_h(x-y)*n_ϵ,N^in(x) - n_ϵ,N^in(y) x y, where all the constants are independent of ϵ and N. However, we already know that n converges to n in L^1(× (0,T)), as ϵ→0. Therefore, passing to the limit in ϵ, we obtain ∫_^2d_h(x-y)*n(x) - n(y) x y ≤ C*log h^1/2 + C∫_0^T∫_^2d_h(x-y)*W(x) - W(y) x y t + C∫_^2d_h(x-y)∫_0^1*n_N^in(x) - n_N^in(y) a x y + C∫_^2d_h(x-y)*n_N^in(x) - n_N^in(y) x y. Subsequently, using Proposition <ref> and Proposition <ref>, we infer lim_h→0 lim sup_N→∞ |log h|^-1 ∫_0^T∫_^2d_h(x-y)*n(x) - n(y) x y t = 0. Applying Lemma <ref>, we deduce the result. §.§ Passing to the limit We are now ready to perform the limit passage in the weak formulation of Eq. (<ref>), see Definition <ref>. Note that to pass to the limit in the reaction term we use Proposition <ref>. We can also pass to the limit in the Brinkman equation to obtain -νΔ W_ν,∞ + W_ν,∞ = n_ν,∞, almost everywhere in ×(0,T). We have thus obtained a weak solution (n_ν,∞, W_ν,∞) of System (<ref>) as the limit of weak solutions (n, W) of System (<ref>), which concludes the proof of Theorem <ref>. We conclude this section by observing that the entropy inequality holds also for System (<ref>). The weak solution (n_ν,∞, W_ν,∞) of System (<ref>) obtained above satisfies the following entropy inequality ℋ[n_ν,∞](T) - ℋ[n^in] - ∫_0^T ∫_n_ν,∞ W_ν,∞ x t ≤∫_0^T ∫_logn_ν,∞∫_0^1 n_ν,∞ G(n_ν,∞;a)a x t. This follows from taking the limit N→∞ in the inequality for the localised entropies, Eq. (<ref>). For the entropy terms we use the strong convergence property derived in Proposition <ref> and Proposition <ref>, respectively. 
The critical term is the reaction term, since we only have weak convergence of n. However, using the same approach as in Section <ref> (Egorov's theorem and partition of the domain ϕ), we can show that ∫_0^T ∫_ϕlogn∫_0^1 n G_N(n;a) a x t →∫_0^T ∫_ϕlogn_ν,∞∫_0^1 n_ν,∞ G(n_ν,∞;a)a x t. Considering a sequence of cut-off converging to unity, we remove the localisation and deduce Eq.  (<ref>). With the above entropy inequality and the uniform bounds from Lemma <ref>, we can easily follow the strategy explained in Section <ref> to deduce the following result on the inviscid limit at the continuous phenotype level. Let (n_ν,∞, W_ν,∞) be the weak solution of System (<ref>) obtained above. Then, there exists a function n_0, ∞: × [0,T] × [0,1] → [0, ∞) with n_0, ∞(·, · ; a)∈ L^∞(0,T; L^1 ∩ L^∞()), for almost every a ∈ [0,1], and n_0, ∞∈ L^2(0,T;H^1()), such that, up to a subsequence, n_ν, ∞ ⋆⇀ n_0,∞ in L^∞(0,T;L^p()), 1≤ p≤∞, n_ν, ∞ →n_0,∞ in L^2(0,T;L^2()), W_ν, ∞ →n_0,∞ in L^2(0,T;H^1()), The limit function n_0,∞ is a weak solution to System (<ref>) in the sense of Definition <ref>. §.§ The phenotype limit at the Darcy level We conclude this section with a short discussion of the limit N→∞ for the inviscid system (<ref>). Let (n)_N be a sequence of nonnegative weak solutions of System (<ref>). The existence of such solutions can be obtained by taking the limit ν→0 in the family of solutions (n)_ν to the Brinkman system (<ref>). In particular, these solutions are uniformly bounded in L^∞(0,T;L^1∩ L^∞()). Hence, up to a subsequence, we have n(·,·;a)⇀ n(·,·;a) in L^2(0,T;L^2()), for each a∈[0,1], for some function n∈ L^∞(0,T;L^1∩ L^∞()). We then have n := ∫_0^1 n a ⇀∫_0^1 n a =: n, weakly in L^2(0,T;L^2()). The rescaled total density n satisfies the following equation: ∂n/∂ t - ÷(nn) = ∫_0^1 n G(n;a)a. Testing this equation by logn and bounding the entropy terms, we obtain ∫_0^T ∫_n^2 x t ≤ℋ[n](T) + ℋ[n^in] + ∫_0^T ∫_logn∫_0^1 n G_N(n;a)a x t ≤ C, where the constant is independent of N. This implies that ∇n is uniformly bounded in L^2(0,T;L^2()). In conjunction with the uniform bound ∂_tn∈ L^2(0,T;H^-1()), we deduce that there exists a subsequence such that n converges to n in the L^2-norm. Along the same subsequence we also have ∇n⇀∇n in L^2(0,T;L^2()). These convergences are sufficient to pass to the limit in the weak formulation of System (<ref>) to deduce that n satisfies the weak formulation of System (<ref>). § CONCLUDING REMARKS In this work, we proposed tissue growth models featuring N subpopulations governed by viscoelastic interactions through Brinkman's law. Our investigation focused on two distinct limit processes. First, in the joint limit as ν→ 0 and N →∞, we recovered the inviscid tissue growth model with a continuous phenotype variable, as introduced in <cit.>. The second limit considered the number of phenotype traits approaching infinity (N →∞) for both ν > 0 and ν = 0. For ν > 0, we derived a viscoelastic tissue growth model with a continuous phenotype variable. In the case of ν = 0, we again obtained the inviscid tissue growth model with continuous traits, as studied in <cit.>. Additionally, the inviscid limit ν→ 0 for a fixed finite number of phenotypes is a straightforward generalisation of <cit.>. Thus, our work provides a comprehensive framework that elucidates the relationships between these four modelling paradigms, as depicted in the diagram presented in the introduction. 
To the best of our knowledge, our results constitute the first rigorous `phenotype-to-infinity' limits in this context, opening up exciting future research avenues. Acknowledgement T.D. acknowledges the support of the Polish National Agency for Academic Exchange (NAWA), agreement no. BPN/BDE/2023/1/00011/U/00001. M.M. and M.S. would like to acknowledge the support of the German Academic Exchange Service (DAAD) Project ID 57699543.
http://arxiv.org/abs/2409.03020v1
20240904182556
Online Scheduling via Gradient Descent for Weighted Flow Time Minimization
[ "Qingyun Chen", "Sungjin Im", "Aditya Petety" ]
cs.DS
[ "cs.DS" ]
Online Scheduling via Gradient Descent for Weighted Flow Time Minimization Qingyun Chen Electrical Engineering and Computer Science, University of California, 5200 N. Lake Road, Merced CA 95344. Sungjin Im Electrical Engineering and Computer Science, University of California, 5200 N. Lake Road, Merced CA 95344. Aditya Petety Electrical Engineering and Computer Science, University of California, 5200 N. Lake Road, Merced CA 95344. § ABSTRACT In this paper, we explore how a natural generalization of Shortest Remaining Processing Time (SRPT) can be a powerful meta-algorithm for online scheduling. The meta-algorithm processes jobs to maximally reduce the objective of the corresponding offline scheduling problem of the remaining jobs: minimizing their total weighted completion time (the residual optimum). We show that it achieves scalability for minimizing total weighted flow time when the residual optimum exhibits supermodularity. Scalability here means it is O(1)-competitive with an arbitrarily small speed augmentation advantage over the adversary, representing the best possible outcome achievable for various scheduling problems. Thanks to this finding, our approach does not require the residual optimum to have a closed mathematical form. Consequently, we can obtain the schedule by solving a linear program, which makes our approach readily applicable to a rich body of applications. Furthermore, by establishing a novel connection to substitute valuations in Walrasian markets, we show how to achieve supermodularity, thereby obtaining scalable algorithms for various scheduling problems, such as matroid scheduling, generalized network flow, and generalized arbitrary speed-up curves, and this is the first non-trivial or scalable algorithm for many of them. § INTRODUCTION Scheduling is a fundamental algorithmic problem because it is at the heart of any resource allocation process. Because of its wide appearance in practice, scheduling has been a central field in many disciplines such as operations research and computer science. Further, scheduling problems have provided arenas where theoretical computer science has developed and tested new theory. Scheduling theory has also become richer from various points of view, such as approximation, online, stochastic, and game theory. In the modern era, scheduling is posing increasing challenges and opportunities as more powerful computing resources are becoming available at low cost for novel applications. Online scheduling studies dynamic scheduling settings and is found in various applications. In the online setting, the algorithm must make scheduling decisions without knowing jobs arriving in the future.
Competitive analysis is commonly used to measure the quality of an online algorithm in contrast to the optimal offline algorithm <cit.>. Unfortunately, in competitive analysis, online algorithms are often too weak compared to their offline adversaries that know the whole input sequence from the beginning. Particularly, the effect of online scheduling algorithms making sub-optimal decisions can accrue over time, often rendering their competitive analysis pointless. To overcome this pessimistic landscape, in the mid-'90s, a resource augmentation model was introduced to online scheduling <cit.>. For example, if the augmented resource is speed, which is the most commonly augmented resource, the scheduling algorithm is compared to the adversary that can process jobs slightly slower. This beyond-worst-case analysis enabled meaningful study of a large number of scheduling problems, which would otherwise not be possible <cit.>. New analysis tools have been developed for online scheduling, such as potential functions <cit.> and dual fitting <cit.>, which have resulted in a plethora of new exciting results for various online scheduling problems under the resource augmentation model. Despite all this success, our understanding of online scheduling can be considerably improved. First of all, we only have limited ways to design online scheduling algorithms systematically. For example, the powerful online primal-dual framework <cit.> is mainly applicable for monotone packing and covering problems but a large number of scheduling problems are not as such. Dual fitting tries to analyze a candidate algorithm by setting the primal and dual LP variables according to the algorithm's behavior. Thus, designing online scheduling algorithms still requires much creativity, often non-trivially combining commonly used scheduling strategies via repeated trial and error. There is a need to find new algorithm design strategies that can keep pace with increasingly complicated scheduling environments. Up to date, Proportional Fairness (PF) has been the only online meta scheduling algorithm known. The algorithm seeks to maximize the product of how fast jobs get processed at each time (subject to the scheduling constraints), and it generalizes Round-Robin.[In Round-Robin scheduling for a single machine, jobs are processed in turns after each unit of processing time. We assume that each unit of time is infinitesimally small. Under this assumption, the algorithm is equivalent to Processor Sharing, where all active jobs are processed at an equal rate at all times. In the single machine, PF solves max∏_j = 1^n z_j s.t. ∑_j z_j ≤ 1 and ≥ 0, thus yields the same assignment as Round-Robin.] It was only recently shown that PF is O(1/ )-competitive for a large class of scheduling problems, given (e+)-speed over the adversary <cit.>. Specifically, it was shown that if we view jobs as agents and how fast they get processed as their utility, PF finds a fair resource allocation that leads to a competitive schedule for the total flow time objective. The algorithm was not new, but the main contribution was to demonstrate its effectiveness as a meta scheduling algorithm—the guarantee holds true for various problems as long as the allocation is monotone under PF.[This means that when another job is added to the PF allocation, the share or speed allocated to each existing job decreases. ] In this paper we aim to discover more meta scheduling algorithms that work for various scheduling environments. 
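For intuition, Proportional Fairness has a simple closed form on a single machine: maximizing the weighted log-product of the rates puts each job at a rate proportional to its weight, and with equal weights this is exactly Round-Robin. The following Python sketch is ours, not from the paper; the weight vector and the brute-force sampling check are illustrative assumptions.

```python
# A minimal sketch (not from the paper) of the Proportional Fairness (PF)
# program on a single machine: maximize sum_j w_j * log(z_j) subject to
# sum_j z_j <= 1, z >= 0.  The optimum has the closed form z_j = w_j / sum(w),
# which reduces to Round-Robin (equal rates) when all weights are equal.
import numpy as np

def pf_rates(weights):
    """Closed-form PF allocation on a single machine."""
    w = np.asarray(weights, dtype=float)
    return w / w.sum()

def pf_objective(z, w):
    return float(np.sum(w * np.log(z)))

rng = np.random.default_rng(0)
w = np.array([3.0, 1.0, 2.0])          # job weights (hypothetical)
z_star = pf_rates(w)

# Sanity check: no random feasible rate vector beats the closed form.
for _ in range(10000):
    z = rng.random(3)
    z = z / z.sum()                     # random point on the simplex
    assert pf_objective(z, w) <= pf_objective(z_star, w) + 1e-9

print("PF rates:", z_star)              # -> [0.5, 0.1666..., 0.3333...]
```

The closed form z_j = w_j / sum_k w_k follows from the first-order conditions of this concave program; the random sampling merely sanity-checks it on one instance.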
We focus on perhaps the most widely used metric, namely average (weighted) flow time, which measures the average time a job stays in the system from its arrival until completion. Roadmap. Due to the general applicability of our approach, we will employ somewhat heavy notation. Further, we use concepts from market equilibria, which are not commonly used in the online scheduling/algorithm literature. To make the paper more accessible, we first describe our results informally (Section <ref>) and then illustrate them using a single machine scheduling example (Section <ref>). Readers unfamiliar with online scheduling are strongly recommended to read Section <ref> before Section <ref>. There is some intentional overlap between these sections. We then formally define our problem and introduce preliminary concepts about online scheduling and substitute valuations in Section <ref>. The linear substitute results and examples discussed in Section <ref> are, to our best knowledge, new. We formally present our algorithms and theorems in Section <ref>. Section <ref> is devoted to proving the main theorems. Section <ref> demonstrates how these theorems are applied to the applications discussed in Section <ref>. Other related work is discussed in Section <ref>. Appendix <ref> discusses solving the residual optimum problem efficiently and implementing our meta algorithm efficiently. All missing proofs are provided in the Appendix. §.§ Informal Description of Our Results and Contributions We seek to find a meta scheduling algorithm that works for a large class of problems that fall into the polytope scheduling, which encompasses numerous scheduling problems <cit.>. Polytope Scheduling. In the Polytope Scheduling Problem (PSP), we are given a set of constraints that form a downward-closed[If 0 ≤' ≤ and ∈, then ' ∈ P.] polytope (more generally a convex body), which constrains the feasible processing rate vector at each time: each job j can be processed at a rate of z_j if and only if := {z_j}_j∈. For example, we have = { | ∑_j z_j ≤ 1 ≥ 0} in the single machine scheduling, which says jobs can be processed at a rate of up to 1 in total. We assume that remains fixed over time. In the online setting, each job j arrives at time r_j with (original) processing size p_j and weight w_j. Note that the online scheduler learns about job j, including p_j and w_j, only when it arrives. Preemption is allowed and the goal is to minimize total (weighted) flow time. To develop a general approach to this problem, we get an inspiration from the Shortest Remaining Processing Time (SRPT): The SRPT algorithm, which prioritizes jobs with the shortest remaining time, is well-known for optimally minimizing the average (unweighted) flow time of jobs on a single machine. Interestingly, the scheduling literature implicitly suggests a possible interpretation of SRPT as a form of gradient descent <cit.>. The residual optimum <cit.>, informally, is the min cost for completing all remaining jobs, assuming no more jobs arrive. Taking gradient descent on the residual objective, meaning that we process the jobs that decrease the residual optimum the most, is natural because we complete all the remaining jobs when the residual optimum becomes zero. Further, the spirit of gradient descent is present in the analysis based on potential functions <cit.>. Inspired by this, we explore gradient descent as a meta algorithm for online scheduling. 
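To make the polytope view concrete, the following sketch (ours, not from the paper; the coefficient vector c is a made-up stand-in for a gradient) encodes two PSP instances as polytopes over processing-rate vectors and optimizes a linear objective over them—the single primitive a gradient-descent step needs at each time.

```python
# A minimal sketch (not from the paper): two PSP instances encoded as
# polytopes over processing-rate vectors z, and a helper that optimizes a
# linear objective over them -- the basic primitive a gradient-descent
# meta-algorithm needs at each time step.
import numpy as np
from scipy.optimize import linprog

def single_machine(n):
    # { z >= 0 : sum_j z_j <= 1 }
    return np.ones((1, n)), np.array([1.0])

def identical_machines(n, m):
    # { z >= 0 : sum_j z_j <= m, z_j <= 1 for all j }
    A = np.vstack([np.ones((1, n)), np.eye(n)])
    b = np.concatenate([[float(m)], np.ones(n)])
    return A, b

def best_rate_vector(c, A_ub, b_ub):
    """argmin over the polytope of c . z (for gradient descent, c is the gradient)."""
    n = A_ub.shape[1]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * n)
    return res.x

# Example: 4 jobs, gradient-like coefficients (more negative = more urgent).
c = np.array([-4.0, -1.0, -3.0, -2.0])
A1, b1 = single_machine(4)
A2, b2 = identical_machines(4, m=2)
print(best_rate_vector(c, A1, b1))   # puts all rate on job 0
print(best_rate_vector(c, A2, b2))   # rate 1 on jobs 0 and 2
```

Only the feasibility constraints change between scheduling environments; the gradient step itself is problem-agnostic.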
Our approach is general and streamlined: i) Algorithm Execution: Solve an LP relaxation of the offline residual optimum and perform gradient descent; and ii) Performance Guarantee: If a valuation function coming from the scheduling constraints is a substitute function, gradient descent is competitive. Below, we outline our results and contributions. * First Scalable Meta-algorithm. We demonstrate the effectiveness of gradient descent as a meta-algorithm. We use gradient descent to give the first scalable algorithm for various scheduling problems, such as matroid scheduling, generalized flow scheduling, generalized arbitrary speed-up curves, etc.; see Section <ref> for the problem definition. An algorithm is called scalable if it is O(1_)-competitive with (1+) factor more speed given over the adversary. Scalable algorithms are essentially the best one can hope for problems that have strong lower bounds on competitive ratio, and all the above problems are as such. Since the aforementioned PF (Proportional Fairness) requires at least (2+)-speed to be O(1_)-competitive <cit.>, our result gives the first scalable meta algorithm. We also note that we give the first non-trivial results for generalized flow scheduling and generalized arbitrary speed-up curves. For a comparison of our results to prior work, please see Table <ref>. As discussed earlier, the concept of gradient descent is implicitly embedded within the online scheduling literature. In this paper, we present a general approach to using gradient descent and establish a sufficient condition that ensures its competitiveness. Rather than showcasing gradient descent's applicability to several scheduling problems by case-by-case analysis, we seek a general approach that can be used to solve a wide range of scheduling problems. * Reduction to Obtaining an Offline (LP) Relaxation with Supermodularity. We reduce designing online scheduling algorithms to showing supermodularity of the offline residual optimum problem. The residual optimum is the minimum weighted completion time objective to complete the remaining jobs, assuming no more jobs arrive. * The (approximate) residual optimum need not have a closed-form mathematical expression. Instead, it can be computed using mathematical programming, such as time-indexed linear programming (LP). This allows us to directly derive a gradient descent-based schedule. Consequently, our approach has a wide range of potential applications. In contrast, traditional potential function analysis relies on finding closed-form expressions, which has been a major hurdle in designing and analyzing online scheduling algorithms <cit.>. Additionally, dual fitting is primarily an analysis tool, rather than a method for algorithm design. * We show that supermodularity of the residual optimum is a sufficient condition for gradient descent to be scalable. Intuitively, supermodularity implies adding a new job can only increase the residual optimum more when there are more existing jobs. Towards this end, we generalize the beautiful potential function introduced by <cit.> and crystallize the analysis by identifying supermodularity as the key to the analysis. However, it is in general challenging to show the residual optimum is supermodular. We address this issue by showing that it suffices to check the constraints that characterize the scheduling problem for a large class of problems. 
For example, for scheduling m identical parallel machines (we can process up to m jobs at a rate of one at each time), we only need to take a close look at the valuation function v() = max∑_0 ≤≤a· s.t. || ||_1 ≤ m and ||||_∞≤ 1 for some vector a≥ 0. Intuitively, the valuation function means this: Given a set of jobs J', what is the maximum value we can get by choosing m jobs from J' when each job j has a value of a_j? Obviously, it is obtained by choosing m jobs with the highest value of a_j. Alternatively, one can view the valuation as the weighted rank function of a uniform matroid of rank m. * Substitutes Empower Gradient Descent. We show that if maximizing a linear function subject to the constraints is a substitute function, then the residual optimum is supermodular. A valuation function is called gross/linear substitutes (GS/LS) <cit.> if increasing a resource's price does not reduce the demand for other resources. Consider the above valuation function v() for m identical machines: Items are individually priced at and valued at a by you. If you buy m items (say 1, 2, …, m) with the maximum value of a_j - q_j (so, your utility is the valuation of the items you buy minus the payment for that), increasing item 1's price might make you buy a different item instead, but you would still want items 2,3, …, m. This shows that v() is a gross substitute; linear substitute is a continuous extension of this concept. Substitute valuations play a crucial role in understanding market equilibria in Walrasian markets where agents aim to maximize their utility, defined as the difference between their valuation of purchased items and the price paid. Notably, Walrasian markets admit an equilibrium when all agents have GS valuations. An equilibrium is an allocation of items to agents alongside with a price vector where every agent gets an allocation maximizing their utility and all items are allocated. Furthermore, LP duality implies that this equilibrium maximizes the total valuation of all agents, also known as social surplus. We view the residual optimum problem (with a sign flip) as the social surplus maximization problem in Walrasian markets. Conceptually, we introduce a phantom agent for each time slot, where each job's unit processing is a distinct item. If the valuation function is gross/linear substitutes (GS/LS), we prove that the residual optimum is supermodular. This leverages the fact that GS/LS valuations are a subclass of submodular functions and closed under max-convolution <cit.>. This sufficient condition simplifies supermodularity checks. For example, in matroid scheduling, showing the valuation function (essentially the weighted rank) is GS/LS is easier than proving supermodularity for the entire residual optimum. Many scheduling scenarios involve jobs with processing rates dependent on allocated resources. To address this heterogeneity, linear substitutes (LS) <cit.> become more relevant. Linear substitutes provide a natural continuous extension of GS. Less studied than GS, LS exhibits distinct behavior. We present novel results for LS, including Theorem <ref>, which shows a concave closure of a GS valuation function is LS and Theorem <ref>, which shows generalized flow being LS. These findings hold independent interest. See Sections <ref> and <ref> for more details and formal definitions. Our work builds upon and extends numerous groundbreaking results that have been established throughout the history of scheduling research, e.g. <cit.>. 
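As an illustration of the substitutes condition in the last bullet, the following brute-force sketch (ours, not from the paper; the instances are randomly generated) checks the gross-substitutes property of the m-identical-machines valuation: after raising one item's price, the previously demanded items other than that one can still be demanded.

```python
# A minimal sketch (not from the paper) that brute-forces the gross-substitutes
# (GS) condition for the m-identical-machines valuation
#   v(S) = sum of the m largest a_j over j in S,
# i.e. the agent buys the (at most m) items maximizing a_j - q_j.
from itertools import combinations, chain
import random

def subsets(items, m):
    return chain.from_iterable(combinations(items, r) for r in range(m + 1))

def demand_sets(a, q, m):
    """All utility-maximizing bundles of size <= m under prices q."""
    items = range(len(a))
    util = lambda S: sum(a[j] - q[j] for j in S)
    best = max(util(S) for S in subsets(items, m))
    return [set(S) for S in subsets(items, m) if abs(util(S) - best) < 1e-12]

def gs_holds(a, q, q_new, m):
    unchanged = {j for j in range(len(a)) if q[j] == q_new[j]}
    for X in demand_sets(a, q, m):
        keep = X & unchanged
        if not any(keep <= Y for Y in demand_sets(a, q_new, m)):
            return False
    return True

random.seed(1)
for _ in range(200):
    n, m = 5, 2
    a = [random.randint(0, 9) for _ in range(n)]
    q = [random.randint(0, 9) for _ in range(n)]
    k = random.randrange(n)
    q_new = q[:]; q_new[k] += random.randint(1, 5)   # raise one price
    assert gs_holds(a, q, q_new, m)
print("GS condition held on all sampled instances.")
```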
As a consequence, we demonstrate the efficacy of gradient descent as a meta-algorithm, which has perhaps been considered a possibility by many. This work perhaps suggests that SRPT, a common topic covered in algorithms and operating systems courses, should be taught alongside its connection to gradient descent. §.§ Overview of Our Approach We illustrate our approach and its benefits using the single machine scheduling problem as a running example. Single machine scheduling. A set of n jobs arrive over time to be processed on a single machine/processor and the scheduler is not aware of jobs that have not arrived yet. Job j has a processing time (size) p_j and arrives at time r_j. The scheduler is clairvoyant, meaning that it learns p_j upon j's arrival. The single machine can process at most one job or equivalently one unit of jobs at any point in time. Thus, letting z_j denote the rate we process job j at a time, feasible schedules are constrained by ∈ = {≥ 0 | ∑_j z_j ≤ 1}.[If is fractional, we can implement it by preemptive time sharing.] A job j is completed when it gets processed for p_j units of time in total. Preemption is allowed, meaning that the scheduler can preempt a job being processed and resume it from where it left off. If we denote j's completion time as C_j, its flow time[Flow time is also known as response time or waiting time.] is defined as F_j := C_j - r_j. Suppose the objective is to find an online schedule that minimizes total (or equivalently average) flow time, i.e., ∑_j F_j. For this simple problem, as mentioned before, at each time t, SRPT (Shortest Remaining Processing Time) processes the job j with the minimum p_j(t), which denotes j's remaining size at time t, that is, p_j minus the amount of time for which it has been processed until time t. SRPT optimally minimizes the total flow time—intuitively, it minimizes the overall delay of job processing by processing the job it can complete the earliest. §.§.§ Supermodularity Makes Gradient Descent Work (Theorem <ref>) Let us first interpret SRPT as a form of gradient descent (GD). Suppose the current time is t. Since we would like to minimize the total flow time, ∑_j (C_j - r_j), by ignoring the cost of each job j already incurred (how long it has waited), t - r_j, pretending the current time is 0, and assuming no more jobs arrive in the future, let us try to measure the minimum cost we should pay, which is termed the residual optimum. The residual optimum then becomes the minimum total completion time of jobs with sizes {p_j(t)}_j: f((t)) = n p_π(1)(t)+ (n-1) p_π(2)(t) + (n-2) p_π(3)(t) + ⋯ + 1 p_π(n)(t), where jobs are ordered so π(1) has the smallest remaining size and π(n) has the largest; we hide π's dependency on t for notational convenience. Note that computing f((t)) is a purely offline problem. In general, f() does not depend on a specific algorithm but only on the scheduling problem specified by the constraints, . Given the constraint that ∑_j p_j(t) can be reduced by one at a time, GD processes job π(1), because it decreases f((t)) the most. This coincides with SRPT. Thus, GD decreases the residual objective by exactly n, the number of jobs currently alive, at a time. Note that the algorithm designer only needs to know the residual objective to derive GD. While f((t)) has a nice closed form in the single machine case, it is often NP-hard to compute exactly. In such cases, we can use linear or convex programming (Sections <ref> and <ref>). 
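The following sketch (ours, not from the paper; the remaining sizes are hypothetical) checks both claims of the running example numerically: the gradient step on the single-machine residual optimum coincides with SRPT, and it decreases the residual optimum at a rate equal to the number of alive jobs.

```python
# A minimal sketch (not from the paper): on a single machine, the residual
# optimum is f(p) = n*p_(1) + (n-1)*p_(2) + ... + 1*p_(n) (sizes sorted
# increasingly), and the gradient step that decreases f the fastest is SRPT.
import numpy as np

def residual_optimum(p):
    """Min total completion time of jobs with remaining sizes p (SRPT order)."""
    p = np.sort(np.asarray(p, dtype=float))
    n = len(p)
    return float(np.sum((n - np.arange(n)) * p))

p = np.array([3.0, 1.0, 4.0, 2.0])      # remaining sizes (hypothetical)
n = len(p)
dt = 1e-3                                # a small amount of work

# Decrease of f when we dedicate the machine to job j for dt time units.
decrease = []
for j in range(n):
    q = p.copy(); q[j] -= dt
    decrease.append((residual_optimum(p) - residual_optimum(q)) / dt)

print(decrease)                          # largest decrease: job with min p_j
assert np.argmax(decrease) == np.argmin(p)
assert abs(max(decrease) - n) < 1e-9     # GD decreases f at rate |A(t)| = n
```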
Let p^O(t) denote the remaining job sizes under the adversary's schedule. Let A denote the GD algorithm (in this case, SRPT) and O the adversary. Similarly, A(t) and O(t) denote the sets of jobs currently alive at time t in their respective schedules; we may drop t for notational brevity. For analysis, we use the following generic potential function Φ(t), which is directly derived from f in a black-box manner: Φ(t) := 2/ε ( f(p(t)) - 1/2 f(p(t) || p^O(t)) ), where p^O(t) := {p^O_j(t)}_j denotes the vector of remaining job sizes in O's schedule and || denotes the concatenation of two vectors. In other words, f(p(t) || p^O(t)) computes the residual optimum pretending that each job exists in two separate copies, with remaining sizes p_j(t) and p^O_j(t), respectively. As mentioned before, this abstracts away the beautiful potential function used in <cit.> for unrelated machines.[The potential function in <cit.> looks somewhat different at first sight as it measures the aggregate increase of the residual optimum when each job in A is added to O, and vice versa. But, by rearranging terms carefully, one can observe that our potential function is essentially equivalent to theirs in the setting of unrelated machines.] It is known <cit.> that an online algorithm can be shown to be O(1)-competitive (with some speed augmentation) if the potential satisfies the following: * (P1). Φ does not increase when a job arrives or is completed by A or O. * (P2). d/dt Φ(t) ≤ - |A(t)| + O(1) |O(t)| at all other times. This is because Φ(t = 0) = Φ(t = ∞) = 0 (either no jobs have arrived or all jobs have been completed) and, due to (P1) and (P2), we know that 0 ≤ ∫_t = 0^∞ (d/dt) Φ(t) dt ≤ - ∫_t = 0^∞ |A(t)| dt + O(1) · ∫_t = 0^∞ |O(t)| dt (the integral is taken over the whole time horizon except when jobs arrive or are completed). Note that ∫_t = 0^∞ |A(t)| dt is A's objective since every job alive at time t incurs a unit cost at that time. Similarly, ∫_t = 0^∞ |O(t)| dt is O's objective. Thus we can establish A's O(1)-competitiveness. Unlike <cit.>, which repeats trial and error to find a potential function satisfying these properties in order to analyze an online scheduling algorithm that a creative algorithm designer came up with, our approach reduces the whole algorithm design and analysis to checking whether f satisfies supermodularity. More formally, let g(Y) := f(x ⊙ 1_Y)[The operation ⊙ represents component-wise multiplication.] be the residual optimum considering only jobs in Y when each job j's (remaining) size is x_j. Supermodularity means g(X ∪ {j}) - g(X) ≥ g(Y ∪ {j}) - g(Y) for any j and X ⊇ Y. In other words, it implies that the increase in the residual optimum caused by the addition of a new job can only be larger if there are more jobs. We show that we can obtain (P1) from the supermodularity of f by crystallizing the analysis of <cit.>. Further, (P2) follows from the nature of GD. In general, we can show that GD simply follows the residual optimum schedule (until a new job arrives) and therefore decreases every alive job's projected completion time (counted from now) at a unit rate—or at rate s when given speed s. Thus, the residual optimum (the minimum total completion time) decreases at a rate of s |A(t)| when GD is given speed s > 1, i.e., (d/dt) f(p(t)) = - s |A(t)|.[Speed augmentation is a common beyond-worst-case model where the algorithm with slightly higher speed is compared to the adversary with speed 1.] By the nature of GD, this is also the maximum possible decrease rate, which is the speed times the number of jobs alive.
By thinking of the processing being done on (t) || ^O(t) as a feasible schedule with speed s+1 (combining GD's speed s and the adversary's speed 1), we have / t f( (t) || ^O(t)) ≥ - (s+1) (|A(t)| + |O(t)|). Thus, (P2) follows with s = 1+. In summary, we demonstrated that GD can be naturally derived from the residual optimum and its competitiveness follows from the residual optimum's supermodularity. This argument extends to an arbitrary polytope scheduling problem (Theorem <ref>). §.§.§ Substitutes Empower Gradient Descent (Theorems <ref> and <ref>) Theorem <ref> establishes supermodularity as a crucial structural property that ensures the effectiveness of GD. However, demonstrating supermodularity of the residual optimum is challenging for arbitrary problems, while trivial for the single-machine case. In fact, computing the residual optimum is NP-hard for many scheduling scenarios. Proving supermodularity without access to an efficient value oracle for the function appears practically impossible. To address these challenges, we can first consider an approximate residual optimum, which can be efficiently obtained by solving a time-indexed LP. Time-indexed LPs are widely used in the literature, e.g. <cit.>. We maintain our focus on single machine scheduling for illustration. min∑_j, t̃≥ 1t̃/p_j z_j t̃ ∑_j z_j t̃≤ 1 ∀t̃≥ 1; ∑_t̃≥ 1 z_j t̃≥ p_j(t) ∀ j; ≥ 0 Here, we use t̃ in the LP to distinguish the time variables[Here, a unit time is assumed to be sufficiently small.] from actual time t. Variable z_jt̃ implies that job j is processed at a rate of z_jt̃ at time t̃. The second constraint ensures that all jobs must be eventually completed. In general, the first constraint can be replaced with {z_j t̃}_j ∈, which encodes each scheduling problem's constraints. The objective is fractional since when job j is processed at time t̃ by one unit, we pretend 1/p_j fraction of it is completed by the algorithm. It is well known that an algorithm that is scalable for the fractional objective can be converted into one that is scalable for the integer objective (Lemma <ref>). Thus, it suffices to derive GD from the fractional residual optimum. We show that we can implement GD as follows: Solve the LP and follow the solution, i.e., process job j at a rate of z_jt̃ at time t + t̃, until a new job arrives. We show how to make the LP compact in Appendix <ref> to make GD run in polynomial time. However, we now face another challenge: The optimum LP solution lacks a nice structure. Despite this, how can we show the supermodularity of the optimum LP objective as a set function of jobs? To show supermodularity, we make a connection to market equilibria. To this end, by flipping the sign (and adding some large constants B), we can convert the problem into a maximization problem as follows. Now the goal becomes showing submodularity of the objective. max∑_t̃≥ 1∑_j (B - t̃/p_j) z_j t̃ ∑_j z_j t̃≤ 1 ∀t̃≥ 1; ∑_t̃≥ 1 z_j t̃≤ p_j(t) ∀ j; ≥ 0 We consider the following (Walrasian) market: Items correspond to the remaining job sizes. Each item j exists in p_j(t) units. Each time t̃ has an agent with a valuation function, v_t̃() := ∑_j (B - t̃/p_j) x_j, where the function is defined over all ≥ 0 such that ||||_1 ≤ 1.[Technically we need to extend the domain of v_t̃ to encompass all non-negative vectors ≥ 0. However, we will disregard this for the sake of illustration in this section.] Suppose the items are priced at (each unit of job j is priced at q_j). 
To maximize her quasi-linear utility (valuation minus payment), the agent at time t̃ buys one unit of the item/job with the maximum value of B - t̃ / p_j - q_j. A market equilibrium is an allocation of items to agents along with a price vector where every agent gets an assignment maximizing their utility and all items are sold out. The maximum total valuation of all agents is called the maximum social surplus, and it is well known that an equilibrium results in the maximum social surplus from LP duality. With this interpretation, the question becomes whether the max social surplus is submodular as a set function of items/jobs. Gross substitute (GS) valuations come to the rescue here. As mentioned before, a valuation is GS if increasing one item's price does not decrease the demand for other items in the agent's optimal choice. It is trivial to see v_t̃ is GS: The agent buys the item j = max_j' B - t̃ / p_j' -q_j' and increasing any other job's price does not affect her choice; the GS condition trivially holds if we increase j's price. The beauty of GS lies in the fact that if every agent has a GS valuation, then the market equilibrium exists and can be found by a tatonnement process where prices are monotonically increased from zero ensuring every item is demanded by at least one agent. In such a market, the agents with GS valuations collectively behave as a single agent with GS valuation. This aligns with the fact that GS functions are closed under max-convolution <cit.>, resulting in the max social surplus being GS. Since GS is a subclass of submodular functions (but not vice versa) <cit.>, we can conclude that the max social surplus is submodular as a set function of items/jobs, as desired. §.§ Applications We outline some applications of GD. We give scalable algorithms for all these applications—by using Theorem <ref> for the first two and by using Theorem <ref> for the third. The last application uses Theorems <ref> and <ref> with a small tweak. The details, including how the scheduling problems are captured by the polytope scheduling (or equivalently multidimensional scheduling) are deferred to Section <ref>. No scalable algorithms were known for any applications below except unrelated machines, prior to our work. In particular, no positive algorithms existed for generalized flow scheduling and speed-up curves on partitioned resources. Matroid Scheduling <cit.>. In this problem, we are given a matroid M = (J, I) where I is a collection of job sets. The pair M = (J, I) is called a matroid if it satisfies: if I' ⊆ I and I ∈, then I' ∈; and if |I| < |I'| and I, I' ∈, then there exist j ∈ I' ∖ I such that I ∪{j'}∈. For numerous examples of matroids, see Chapter 8 of <cit.>. At any time instant t, a feasible schedule is an independent set I in . If we schedule an independent set I of jobs, then each job in I gets processed at a rate of 1. We give a (1+)-speed O(1 / ^2)-competitive algorithm. Previously, <cit.> gave a (e+)-speed O(1 / ^2)-competitive algorithm. Matroid scheduling encompasses many scheduling problems. As discussed earlier, a uniform matroid of rank m can model parallel identical machines (at every time step, we can process up to m jobs). A gammoid captures single-source routing in a directed graph with a single source where jobs are processed when there are vertex-disjoint paths from the source to their corresponding nodes. A cographic matroid captures edge maintenance scheduling in an undirected graph. When performing service on an edge, it becomes unusable at the moment. 
However, concurrent maintenance work on some edges is possible if the remaining graph stays connected without those edges. In this scenario, maintenance jobs arrive at edges. Extended Matroid Scheduling. This problem extends the matroid scheduling problem, we are given k matroids M_k = (J_k, I_k) defined the same as above. At any time instant t, a feasible schedule now is the union of the independent set I_k drawn from each matroid M_k. Generalized Flow Scheduling. This generalizes the gammoid scheduling, a special case of the matroid scheduling. We are given a network G=(V,E), where J ⊆ V is the set of source nodes. Arc e has capacity u_e and flow gain factor γ_e; a flow of value f becomes γ_e f after flowing through e. Gain factors distinguish generalized flow from standard flow and are extremely useful for modeling heterogeneity, as we can have a different factor for each edge. Job j gets processed at a rate equal to its (outgoing) net flow. It is critical for cloud computing to manage heterogeneous computing resources connected via a network; see <cit.>. A job gets processed by communicating with the resources and the heterogeneous benefit between jobs and resources can be captured by edge gain factors, e.g. <cit.>. We give a (1+)-speed O(1 / ^2)-competitive algorithm. No prior work exists for generalized flow scheduling. *Speed-up Curves on Partitioned Resources. This generalizes the arbitrary speed-up curves model on uniform processors <cit.>, which elegantly models how the parallelization effect degrades as we use more processors to speed-up processing a job. There are D different divisible resources, where the ith resource exists in m_i units. At every time, resources are allocated to jobs. Each job has a valuation function of the following form: Resources are partitioned into k disjoint groups G = G_1, ..., G_k; partitioning can be different for each job. Each group G_i ∈G is associated with a monotone univariate concave function g_i: _+ →_+ with g_i(0) = 0. Then, the processing rate of job j is v_j(_j) = ∑_i=1^k g_i(∑_d ∈ G_i a_jd x_jd), where a_jd≥ 0, when j receives _j = (x_j1, x_j2, ..., x_jD) resources. Concave functions are used to model the diminishing returns of parallelization as we add more computing resources. The former arbitrary speed-up curves is a special case of this model when D = 1. We give a (1+)-speed O(1 / ^2)-competitive algorithm. No prior work exists for this problem. To see why this general model is useful, suppose D = 2 and consider three types of valuation functions: x_j1^.5, x_j2^.75, and (x_j1+ x_j2)^.5, for an example. The first means the job can only use processors in G_1, and the second means the job can only use those in G_2, but the third can use any processors. This can model the restriction of processors available for each job's processing. CPU partitioning is common in practice <cit.>. Unrelated Machines <cit.>. There is a set of parallel machines. Each job j gets processed at a rate of λ_ij when scheduled on machine i; this is equivalent to j having a processing time p_j / λ_ij if scheduled only on machine i. Each job can be processed by at most one machine at any point in time. We first reproduce the seminal result in <cit.>, which gave a (1+)-speed O(1/ )-competitive algorithm for minimizing total weighted flow time (Section <ref>). We use the same algorithm of <cit.>, which immediately dispatches an arriving job to one of the machines and keeps the job on the same machine until its completion. 
Perhaps, the analysis of <cit.> becomes more transparent through the lens of our framework, supermodularity and gradient descent. While <cit.> gave essentially the best possible result and we can recover it, the work left an intriguing question from a technical point of view. Although immediate-dispatch and non-migration are highly desirable, they were strongly required in the analysis; it was not known whether we could still achieve a competitive algorithm if we migrated jobs in the middle of the execution for better load balancing. This has been puzzling in the research of online scheduling algorithms. Our approach answers this question positively by providing a scalable algorithm that changes job assignments when a new job arrives to achieve better load balancing. Furthermore, our algorithm is purely Markovian, meaning it only needs to remember the amount of jobs processed, eliminating the need to track job assignments. Specifically, we present a (1+)-speed, O(1 / )-competitive algorithm for unweighted total flow time (Section <ref>) and a (1+)-speed, O(1 / ^3)-competitive algorithm for weighted total flow time (Section <ref>). While this result has a worse competitive ratio for the weighted case and is migratory, we believe the Markovian property has the potential for broader applications; see Section <ref>. §.§ Comparison with the Previous Work As mentioned, Proportional Fairness (PF) <cit.> is the only online scheduling meta-algorithm known other than our gradient descent (GD). PF finds a fair allocation in Fisher markets, where each agent (job j) starts with an endowment (job j's weight w_j) and buys some resources to maximize its valuation (how fast it gets processed at the moment). PF solves max∑_j w_j log v_j(_j) s.t. ∑_j_j ≤, ≥ 0 where j has valuation v(_j) when it receives _j resources from the supply vector of divisible resources.[Equivalently, the scheduling problem can be expressed as a polytope that describes how fast jobs can be processed, i.e., := { x_j = v_j(_j) | ∑_j _j ≤ 1, ≥ 0}.] The solution can be viewed as an equilibrium where the market is cleared under the corresponding dual prices of the resources. Note that PF finds an instantaneous fair allocation without factoring in job sizes, hence is non-clairvoyant. The main contribution of <cit.> lies in showing that PF is O(1_)-competitive with (e+ )-speed for the total weighted flow time objective for scheduling problems when the allocation under PF is monotone—a job's speed/share can only decrease when other jobs arrive. Although PF is a remarkable meta-algorithm generalizing RR (Round-Robin), it exhibits limitations from an algorithm design point of view. It is because we lose all the nice properties of PF when we deviate from the equilibrium it finds. Thus, it is not clear how to reduce the speed requirement of the algorithm; even RR requires (2+)-speed over the adversary for competitiveness. Moreover, it disregards job sizes in scheduling decisions, a crucial aspect for designing machine learning-augmented algorithms that leverage job size estimates, a topic of recent significant interest <cit.>. Driven by these considerations, we investigate GD as a meta-algorithm, as the spirit of GD is often ingrained in online scheduling. Furthermore, unlike PF, GD offers flexibility. Implementing GD requires an estimate of the remaining cost. 
One common method of obtaining this estimate is to first fix a candidate algorithm and derive a reasonably accurate estimate of its remaining cost, which is converted into a potential function of a nice mathematical form <cit.>. However, this approach demands algorithmic creativity, which involves a lot of trial and error. Alternatively, one can utilize the (approximately) optimal remaining cost, also known as the residual optimum. Note that the residual optimum is not tied to specific algorithms. However, computing the residual optimum is often NP-hard, rendering it challenging to understand its evolution. One approach is employing an LP relaxation, but this results in an LP solution, which commonly lacks structure. Our work draws upon critical ideas from the seminal work of <cit.>, which presented the first scalable algorithm for unrelated machines; see <ref> for more details. They introduced an elegant potential function, and our meta-potential function is its generalization. Generalizing their work poses challenges. Their algorithm executes SRPT (or its weighted version) on each machine following the initial assignment, and SRPT admits a closed-form expression for the residual optimum. Their potential function computes the residual optimum for a mixture of jobs from the algorithm and the adversary. However, determining how to proceed when the residual optimum lacks a convenient closed-form mathematical expression remains unclear. We address the challenges as follows. First, we abstract away the potential function argument presented in <cit.> and demonstrate that their potential function can be generalized to work solely with supermodularity when gradient descent is employed. Furthermore, by establishing a novel connection to Walrasian markets, we simplify the task of verifying whether the residual optimum is supermodular. The connection we make between gradient descent to Walrasian markets is less obvious than PF's connection to the Fisher markets. It is well-known that PF finds a market-clearing equilibrium in the Fisher market, but it is unclear how markets are related to the residual optimum. In Walrasian markets, agents have no endowments and they try to maximize their quasi-linear utility, which is their valuation minus the payment for the resources bought. We use gross substitutes[It is important to note that they have different definitions in Fisher and Walrasian markets.] and linear substitutes to find sufficient conditions to obtain supermodularity. Finally, PF and GD are somewhat complementary to each other. PF makes scheduling decisions without knowing job sizes but uses (e+) speed to be O(1)-competitive. On the other hand, GD only needs (1+) speed augmentation. Further, there are problems for which one is O(1)-competitive (with O(1)-speed augmentation), but not the other. For example, for broadcast scheduling PF is O(1)-competitive, but not gradient descent (Appendix <ref>). For unrelated machines, PF is believed not to be O(1)-competitive <cit.>, but we show gradient descent is (Sections <ref>, <ref>, and <ref>). §.§ Future Work In this paper, we show a reduction from designing online scheduling problems to the offline problem of finding an LP relaxation whose optimum satisfies supermodularity. In particular, we show if the underlying valuation function subject to the polytope characterizing the scheduling constraints at each time satisfies substitute properties, gradient descent is scalable. We discuss several open problems below. 
* Gradient descent does not yield an O(1)-competitive algorithm for all scheduling problems as mentioned above (Appendix <ref>). Can we find other sufficient conditions for GD to be O(1)-competitive? For example, can we design algorithms for the intersection of two matroids? The matroid intersection is known not to be a gross substitute <cit.>. Here, it may increase the required speed or competitive ratio. * Can we extend gradient descent to ℓ_k-norms of flow time? Although a naive extension of our framework to ℓ_k-norms seems to create an undesirable increase of the potential due to job ages, we believe it should not be a major roadblock. No meta-algorithms have been considered for ℓ_k-norms of flow. We have some preliminary results for this objective. * Is there an online algorithm for minimizing ℓ_k-norms of flow time where the competitive ratio has no dependency on k? For unrelated machines, there exist (1+)-speed O(k)-competitive immediate-dispatch algorithms <cit.>. However, it is known that any immediate dispatch algorithm must have a linear dependency on k <cit.>. Since there exists a scalable algorithm for the maximum unweighted flow time with no dependency on k, it is plausible to remove the dependency by considering migratory algorithms <cit.>. As discussed, our algorithm allows migration, and thus has the potential to solve this problem. * Can we develop another nonclairvoyant meta-algorithm besides PF? PF generalizes Round Robin, which requires at least (2+)-speed to be O(1)-competitive. However, Shortest Elapsed Time First (SETF)[In the single machine scheduling, SETF processes the job that has been processed the least.] is known to be scalable in the single-machine setting and uses information about how much jobs have been processed. Can we generalize SETF in a similar way that SRPT and Round Robin were generalized to gradient descent and PF? § PRELIMINARIES §.§ Common Notation We use _X to denote a binary vector corresponding to X. We use bold fonts for vectors. For two vectors ,, ∧ takes the component-wise min of and . Similarly, ∨ takes the component-wise max of and . We let ⊙ denote the component-wise multiplication of and . When v is a set function, we will interchangeably use v(X) and v(_X) for a set X. §.§ Problem Definition and Scheduling Primitives For easy reference, we reproduce the definition of the polytope scheduling below. All scheduling problems we will study fall into the polytope scheduling problem, or equivalently multidimensional scheduling. Polytope Scheduling (Feasible Processing Rate View). Most preemptive scheduling problems with fixed computing resources can be represented as an instance of the polytope scheduling problem (PSP) <cit.>. In the PSP, we are given a set of constraints that form a downward-closed[If ' ≤ and ∈, then ' ∈.] polytope (more generally a convex body), which encodes the feasible processing rate vector at each time: each job j can be processed at a rate of z_j if and only if := {z_j}_j∈. For example, we have = { | ∑_j z_j ≤ 1 ≥ 0} in the single machine scheduling, which says jobs can be processed at a rate of up to 1 in total. We assume that remains fixed over time. Multi-dimensional Scheduling (Resource View). Equivalently, the PSP can be viewed as multidimensional scheduling where each dimension corresponds to a distinct resource. There exists a set of D divisible resources or a continuous supply vector ,[We can assume wlog that every resource exists by one unit by scaling.] which are replenished at each time. 
Each job j is associated with a concave valuation function, u_j: [0, 1]^D→_+, which denotes how fast it gets processed. A feasible allocation is constrained by ∑_j _j ≤: Job j gets assigned a resource vector _j and gets processed at a rate of u_j(_j). We will call this resource view and the above feasible processing rate view. We consider the clairvoyant scheduling setting: Each job j arrives at time r_j with (original) processing size p_j and weight w_j. Note that the online scheduler learns about job j, including w_j and p_j, only when it arrives. A job is completed when it has been processed by p_j units since its arrival time r_j. Preemption is allowed, meaning a job can be paused and resumed later. Our goal is to minimize the following objective function. Total Weighted Flow Time. For integral flow time objective, an individual job j's flow-time is defined as F_j = C_j - r_j which is the difference between its arrival time r_j and completion time C_j; we may add σ to C_j as superscript to make clear the schedule σ considered. Note that C_j is defined the min time t' such that ∫_t = r_j^t' z_j(t) t = p_j, where z_j(t) is the processing rate of job j at time t in the schedule σ. Then, the total weighted integral flow time is ∑_j ∈ J w_j F_j, where w_j is the weight of job j and J is the entire set of jobs arriving to be completed. Speed Augmentation. A large number of scheduling problems do not admit O(1)-competitive algorithms. In such cases, speed augmentation is commonly used for beyond-worst-case analysis. Here, the online scheduling algorithm runs with extra speed compared to the optimal scheduler. If an algorithm can process s >1 times more than the adversary can process at a time and has an objective at most c times than the adversary does, we say that the algorithm is s-speed c-competitive. Intuitively, this implies that the algorithm can handle 1 / s-fraction of what the optimum solution can handle <cit.>. Thus, an algorithm that is O(1_)-competitive with (1 + )-speed for any > 0 is of fundamental importance and termed scalable. Total Fractional Weighted Flow Time. In many scenarios, we will consider the fractional flow time which is a relaxation of integral flow time. The fractional weighted flow time objective is defined to be ∑_j ∈ J∫_t ≥ 0w_j/p_j t z_j(t) t. In other words, we view {z_j(t) / p_j}_t as a distribution when the job is completed. By definition, it is obvious that a job's fractional flow time is no greater than its integer flow time in any fixed schedule. It is easier to minimize fractional flow time than integer flow time. Fortunately, with an extra speed augmentation, we can convert an algorithm that is good for fractional flow time into one that is good for integer flow time. For any polytope scheduling problem, we can convert an online algorithm whose total fractional weighted flow time is C into an online algorithm with (integer) weighted flow time O(C / ) given (1+)-speed. Therefore, if there is an online scheduling algorithm that is c-competitive with s > 1 speed for total fractional weighted flow time, then for any > 0, there exists an algorithm that is s(1+)-speed O(c / )-competitive for total (integral) weighted flow time. For a fixed schedule σ, we will let σ(t) := {j | r_j ≤ t < C^σ_j} denote the set of jobs alive at time t in σ. We will often use A and O to denote our algorithm('s schedule) and the adversary('s schedule), respectively. 
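The two objectives can be computed side by side from any fixed schedule. The following sketch (ours, not from the paper; the instance and the highest-density-first rule are arbitrary illustrative choices) simulates a single machine in small time steps, accumulating at each step the total weight of alive jobs and the same sum scaled by each job's remaining fraction; the fractional objective never exceeds the integral one for the same schedule.

```python
# A minimal sketch (not from the paper): simulate a single-machine schedule in
# small time steps and compute the integral objective (total weight of alive
# jobs, integrated over time) and the fractional objective (alive weight
# scaled by each job's remaining fraction, integrated over time).
import numpy as np

jobs = [  # (release time r_j, size p_j, weight w_j) -- hypothetical instance
    (0.0, 2.0, 1.0),
    (0.5, 1.0, 3.0),
    (1.0, 3.0, 2.0),
]
dt, T = 1e-3, 20.0
remaining = np.array([p for _, p, _ in jobs])
integral = fractional = 0.0

for step in range(int(T / dt)):
    t = step * dt
    alive = [j for j, (r, p, w) in enumerate(jobs)
             if r <= t and remaining[j] > 1e-12]
    if alive:
        # Highest-density-first (a weighted-SRPT-like rule) on the single machine.
        j = max(alive, key=lambda j: jobs[j][2] / remaining[j])
        remaining[j] -= dt
    integral += dt * sum(jobs[j][2] for j in alive)
    fractional += dt * sum(jobs[j][2] * remaining[j] / jobs[j][1] for j in alive)

print(f"integral weighted flow time   ~ {integral:.3f}")
print(f"fractional weighted flow time ~ {fractional:.3f}")
assert fractional <= integral + 1e-6
```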
As an alive job j adds w_j at each time to the total weighted flow time objective, we immediately have the following. We have ∫_t = 0^∞ W^σ(t) t = ∑_j w_j (C^σ_j - r_j), where W^σ(t) := ∑_j ∈σ(t) w_j is the total weight of jobs alive at time t in σ. This observation also extends to the fractional objective. Let p^σ_j(t) be j's remaining size at time t in schedule σ. We have ∫_t = 0^∞W̃^σ(t) t = ∑_j w_j ∫_t = r_j^∞z_j^σ(t)/p_j t t, the total fractional weighted flow time of σ, where W̃^σ(t) := ∑_j ∈σ(t) w_j p_j^σ(t)/p_j is the total fractional weight of jobs alive at time t in σ. §.§ Gross Substitutes for Indivisible Resources We first consider valuation functions for indivisible items. Note that we consider the Walrasian market model throughout this paper, which is different from the Fisher market, as discussed in Section <ref>. Formally, the Walrasian market is defined as follows. A Walrasian market consists of a set of n indivisible items and m agents where each agent i ∈ [m] has a valuation function v_i: 2^[n]→. Each item j ∈ [n] is priced at q_j ∈_+. Then, the market gives a disjoint allocation { S_i}_i∈[m] of items over agents. We define the social surplus of the market as ∑_i ∈[m] v_i(S_i). In the Walrasian market, each agent has a preference for bundles of items specified by the quasi-linear utility function, which can be formally defined by the demand correspondence as follows. Given a set function v : 2^[n]→ and a price vector ∈_+^n, we define the demand correspondence (or set) D(v, ) as the family of sets that maximize the quasi-linear utility of the agent: D(v, ) := max_Xv(X) - ·_X. In other words, D(v, ) represents the set of item subsets that maximize our total happiness (the difference between our valuation and the total cost) under the price vector , given our valuation function v. The Walrasian equilibrium asks that every agent maximizes its utility function and all items are allocated. Formally, the Walrasian equilibrium is defined as follows. Given a Walrasian market with n indivisible items and m agents with valuations {v_i}_i ∈ [m], a Walrasian equilibrium consists of a partition { S_i}_i∈[m] of n and a price vector ∈^n_+ such that S_i ∈D(v_i,) for all i ∈ [m]. It is known that an equilibrium exists if all agents' valuation function satisfies the GS property. GS is an important class of valuation functions <cit.>. A valuation function v : 2^[n]→ is gross-substitute (GS) if for any price vectors ' ≥≥ 0 and any X ∈D(v, ), there is a set Y ∈D(v, ') such that X ∩j : q_j = q'_j⊆ Y. In other words, a GS valuation means that if we increase an item j's price, we can update an optimum allocation without dropping any other items. Intuitively, this is possible because j can be substituted with other items unlike some items behaving like a bundle, e.g. milk and cereal. It is known that GS is a strict subclass of submodular functions <cit.>. A valuation function v: 2^[n]→ is submodular if for all X, Y ⊆ [n], v(X) + v(Y) ≥ v(X ∩ Y) + v(X ∪ Y). Alternatively, it is submodular if v(X ∪{e}) - v(X) ≤ v(Y ∪{e}) - v(Y) for all X ⊇ Y and e ∈ [n]. Symmetrically, v is supermodular if -v is submodular. A valuation function v that satisfies the GS property is submodular. Furthermore, Lehmann, Lehmann and Nisan <cit.> showed GS valuations are closed under convolution, meaning that the aggregate max-valuation of GS valuations is GS. 
The class of gross-substitute valuations is closed under convolution, i.e., for any two gross-substitute valuations v_1 and v_2, v_1 ⊕ v_2 is gross substitutes, where v_1 ⊕ v_2 (S) = max_T ⊆ Sv_1(T) + v_2(S ∖ T) It is known that a Walrasian equilibrium exists when all agents have GS valuations, and a Walrasian equilbrium results in the max social surplus <cit.>. Thus, the lemma can also be interpreted as stating that the social surplus of a Walrasian market exhibits GS if all agents have GS valuations. §.§.§ Examples We state a few examples of GS valuation functions. For a comprehensive list, see <cit.>. * Symmetric concave functions: f: 0, 1^n→∪{∞} is a symmetric concave function if there exists a concave function φ: _+ → s.t f() = φ(∑_i=1^nx_i ). * Laminar concave functions: A set family 𝒯⊆ 2^N is laminar if X ∩ Y = ∅, X ⊆ Y or X ⊇ Y for every X ≠ Y ⊆𝒯. Define φ_Y: _+→ to be a univariate concave function. Then the function given by f(x) = ∑_Y ∈𝒯φ_Y(∑_i ∈ Y x_i ) is called a laminar concave function. * Maximum weight bipartite matching: Let G=(U, W; E) be a weighted bipartite graph with an edge cost function w : E →. Define f : 0, 1^n →∪-∞ to be the maximum weight bipartite matching w.r.t a vertex set X ⊆ U by f(_X) = max{∑_e ∈ M w(e) | M ⊆ E is a matching in G covers all vertices in X} * Weighted rank functions of matroids: Let M = (E, ℐ) be a matroid (see Section <ref> for definition), where E is a universe of n elements, and I ⊆ 2^E is a collection of independent sets. Let w : E →_+ be a weight function. Then define f: 0, 1^n →_+ to be the weighted rank function w.r.t an element set X ⊆ E, i.e., f(_X) = max{∑_i ∈ Yw(i) | Y ⊆ X, Y ∈ℐ} §.§ Linear Substitutes for Divisible Resources Since we consider preemptive schedules where jobs can be processed by fractional amounts at a time and we will handle heterogeneous resources, we need a continuous extension of GS and Walrasian markets where resources to be allocated are divisible. Continuous definitions of gross substitutes, termed linear substitutes (LS), were introduced by Milgrom and Strulovici <cit.>. Interestingly, although the continuous setting does not inherit all properties that are guaranteed in the discrete setting <cit.>, LS still offers some useful properties we can leverage. In particular, LS valuations also are closed under convolution (or equivalently called aggregation) and are a subclass of submodular functions.[For a continuous function v, submodularity is defined analogously to Definition <ref>: v() + v() ≥ v(∨) + v(∧). Further, v is supermodular iff -v is submodular.] v: ∏_i ∈ [n] [0, L_i] → is a linear-substitute valuation if for any price vector and ', then whenever q_j ≤ q'_j, q_k = q'_k for all k ≠ j and X ∈D(v, ), there is a Y ∈D(v, ') such that Y_k ≥ X_k for all k ≠ j. The class of linear-substitute valuations is closed under aggregation, i.e., for any linear-substitute valuations v_1, ..., v_n, v is linear substitutes, where v() = max∑_i ∈ [n]v_i(_i) : ∑_i ∈ [n]_i = , _i ≥ 0 ∀ i ∈ [n] Similar to GS, this lemma implies that the social surplus is LS if all the agents have LS valuations in the Walrasian market considered. To verify the LS property, we can check the submodularity of its dual profit function. v : ^n→ is a linear-substitute valuation iff π is submodular, where π() := max_∈^n v() - · We can easily extend the proof of GS implying submodularity (Lemma 5 in <cit.>) to LS implying submodularity; for completeness, we give the proof in Appendix <ref>. Let v : ^n → be a concave linear-substitute valuation. 
Then, v() is submodular. §.§.§ Examples We discuss some examples of linear substitutes, which will be used as building blocks in applying our theorems. Compared to gross substitutes, relatively fewer examples are known in the literature. We show that the concave closure of GS and generalized flow satisfy LS, which could be of independent interest; the proofs are deferred to Appendix <ref>. For more examples, see Section 5 of <cit.>. [Concave Closure of Gross Substitutes is LS]theoremThmGsClosure Given a gross-substitute valuation v : 0, 1^n →, let v^+() := max∑_S v(S) λ_S : ∑_Sλ_S _S = , ∑_Sλ_S = 1, λ_S ≥ 0 be the concave closure of v. Then, v^+(): [0, 1]^n →_+ is linear substitute. To keep the flow of presentation, we defer the formal definition of generalized flow to Section <ref>. For any cost vector , v() := max_∈· for the in Eqn. (<ref>), which is defined for generalized flow, is LS. § FORMAL DESCRIPTION OF OUR RESULTS §.§ Gradient Descent on the Residual Objective We begin by formally defining the residual objective at each time t, namely the minimum projected cost of the remaining schedule. Let A(t) := { j | r_j ≤ t < C_j} be the set of jobs alive at time t in our schedule. Let p_j(t) denote j's remaining size at time t; by definition, p_j(t) = 0 for all t ≥ C_j. Since the objective we consider is a linear combination of jobs' flow time, we can observe that w_j (C_j - t), is exactly how much j will contribute to the objective from time t until it is completed at time C_j as j gets processed by p_j(t) units by the algorithm during [t, C_j]. So, if we shift time by t for easy indexing, we have the following residual scheduling problem: Each alive job that has arrived has a (remaining) size p_j(t) and has arrival time 0 (in the shifted time horizon), and weight w_j. Noticing that completion time is equivalent to flow time when jobs arrive at time 0, the residual optimum can be expressed as follows. f() := min∑_j ∈ A(t) w_j C̃_j s.t. ∫_t̃≥ 0^C̃_j z_j(t̃) t̃ = x_j ∀ j ∈ A(t) (t̃) ∈ ∀t̃≥ 0, where we set x_j = p_j(t) for all j ∈ A(t). Note that we added tilde to C_j and t to make it clear that we are considering the shifted time horizon. The first constraint ensures that every job j must be processed by x_j units, and the second guarantees feasible scheduling at every time. Let ^_() denote the objective of ^_ for . In other words, ^_() is the extra cost we should pay if we follow from time t, i.e., process each job j at a rate of _j(t̃) at time t + t̃, assuming no more jobs arrive. If is infeasible, then let ^_() = ∞. Let f() := min_^_(), which we will term algorithm A's (integral) residual optimum at time t. We are now ready to formally define our gradient descent algorithm: At each time t we try to decrease f() the most, i.e., min_∇ f() |_ = (t) · ∈ We will let ^() denote the decrease of the integral residual optimum due to the gradient descent, i.e., the optimum to Eqn. (<ref>). However, we will see that if (t̃) for t̃≥ 0 is an optimum solution that achieves f((t)), to implement GD, we can simply process jobs according to (0) at time t. In fact, we will show that we can exactly follow the residual optimum schedule until a new job arrives (Lemma <ref>). For brevity, we may omit from notation. Notably, we take this time-indexed view since it explicitly describes the residual optimum solution at each time. 
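As a quick numerical sanity check of this definition (ours, not from the paper; the instance is hypothetical), consider the weighted single-machine case, where the residual optimum is obtained by Smith's rule on the remaining sizes: the gradient step dedicates the machine to the highest-density job, and the residual optimum decreases at a rate equal to the total weight of the alive jobs.

```python
# A minimal sketch (not from the paper): the weighted single-machine residual
# optimum (min sum_j w_j * C_j for the remaining sizes, obtained by Smith's
# rule, i.e. decreasing w_j/x_j) and a numerical gradient-descent step.
# Dedicating the machine to the highest-density job decreases the residual
# optimum the fastest, at a rate equal to the total alive weight.
import numpy as np

def residual_optimum(x, w):
    """min sum_j w_j * completion_time_j on one machine (Smith's rule)."""
    order = sorted(range(len(x)), key=lambda j: -w[j] / x[j])
    t, total = 0.0, 0.0
    for j in order:
        t += x[j]
        total += w[j] * t
    return total

x = np.array([2.0, 1.0, 5.0])        # remaining sizes (hypothetical)
w = np.array([1.0, 5.0, 2.0])        # weights
eps = 1e-4

rates = []
for j in range(len(x)):
    y = x.copy(); y[j] -= eps
    rates.append((residual_optimum(x, w) - residual_optimum(y, w)) / eps)

print(rates)
best = int(np.argmax(rates))
assert best == int(np.argmax(w / x))          # GD step = highest density first
assert abs(rates[best] - w.sum()) < 1e-6      # decrease rate = total weight
```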
Let us illustrate this with a single machine scheduling example we discussed in Section <ref>: After solving ^_(t) we obtain: z_π(j)(t̃) = 1 for t̃∈ [0, p_π(1)(t)) only for j = 1 (0 for all other jobs), z_π(j)(t̃) = 1 for t̃∈ [p_π(1)(t), p_π(1)(t)+ p_π(2)(t)) only for j = 2, and so forth. By solving Eqn.(<ref>), we obtain such that z_π(1) = 1 and z_π(j) = 0 for all j ≠ 1. Note that is identical to (0). This is not a coincidence. Lemma <ref> says that we can follow (t̃) to implement GD: process job π(1) during time [t, t+ p_π(1)(t)) and job π(2) during [t+p_π(1)(t), t+ p_π(1)(t)+ p_π(2)(t)), and so forth. This is exactly what SRPT does. In the following, we define the key desired property we use in our paper: for any fixed remaining job sizes , the residual optimum viewed as a set function is supermodular. There is an extended notion of submodularity/supermodularity in the continuous setting which is needed for linear substitutes (Section <ref>). However, this notion of supermodularity is sufficient for our purpose here. For brevity, we may simply say supermodularity without explicitly mentioning `discrete.' We say a function g: ^|J|_≥ 0→ is (discrete-)supermodular if for any fixed , g̅_(Z) := g(⊙_Z): 2^J→ is supermodular, i.e., g̅(U) + g̅(V) ≤g̅(U ∩ V) + g̅(U ∪ V) for all U, V ⊆ J, where _Z is the binary characteristic vector of Z for any Z ⊆ J and ⊙ denotes component-wise multiplication. We identify supermodularity as a key sufficient condition for the gradient descent on the optimum integral residual objective to give a scalable online algorithm by the following theorem. [Gradient Descent Desiderata for Integral Objective]theoremthmone For a polytope scheduling problem with polytope , suppose f() := min_^_() is discrete-supermodular. Then, for any > 0, GD is (1+)-speed O(1 / )-competitive for minimizing weighted integral flow time for PSP . §.§ Substitutes Empower Gradient Descent Unfortunately, a large number of scheduling problems are NP-hard to optimize for the integral completion objective. Further, it is non-trivial to check if the residual optimum objective is supermodular. Thus, to make the gradient descent framework more readily applicable, we consider a fractional version of the residual objective and show it satisfies supermodularity for a large class of scheduling problems related to substitute valuation functions. Recall that we can use a standard conversion to translate an algorithm that is scalable for fractional weighted flow into one that is scalable for integer weighted flow; see Lemma <ref>. As discussed in Section <ref>, we can take the feasible processing rate view (polytope scheduling) or equivalently the resource view (multi-dimensional scheduling). In the following, we consider both views. §.§.§ Feasible Processing Rate View The following is a well-known time-indexed LP <cit.>, which we will use to measure our fractional residual objective. As before, we use t̃ to distinguish the time used in the residual schedule from the global time t. f():= min∑_j ∈ A(t) w_j/p_j∫_t̃≥ 0t̃· z_j(t̃) t̃ ∫_t̃≥ 0 z_j(t̃) t̃ = x_j ∀ j ∈ A(t) (t̃) ∈ ∀t̃≥ 0 This LP tries to minimize total weighted fractional completion time. Gradient descent on f() := min_^_() solves min∇ f() |_ = (t)· subject to ∈ and processes jobs at time t + t̃ according to (t̃), but as before, we will show that we can follow the LP solution until a new job arrives. 
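The following sketch (ours, not from the paper) discretizes this time-indexed LP into unit slots for a single machine, solves it with an off-the-shelf LP solver, and brute-force checks discrete supermodularity of its optimal value over subsets of a small, hypothetical job set—the key property identified above.

```python
# A minimal sketch (not from the paper): a unit-slot discretization of the
# time-indexed LP for the fractional residual optimum on a single machine,
# solved with scipy, plus a brute-force check that its optimal value is
# (discretely) supermodular over subsets of jobs on this small instance.
from itertools import combinations
import numpy as np
from scipy.optimize import linprog

def residual_lp(sizes, weights, T):
    """Fractional residual optimum: unit time slots 1..T, one machine."""
    n = len(sizes)
    if n == 0:
        return 0.0
    # variable z[j, t] flattened as index j*T + t
    c = np.array([(weights[j] / sizes[j]) * (t + 1)
                  for j in range(n) for t in range(T)])
    A_ub = np.zeros((T, n * T))          # machine capacity per slot
    for t in range(T):
        A_ub[t, [j * T + t for j in range(n)]] = 1.0
    b_ub = np.ones(T)
    A_eq = np.zeros((n, n * T))          # each job fully processed
    for j in range(n):
        A_eq[j, j * T: (j + 1) * T] = 1.0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=sizes,
                  bounds=[(0, None)] * (n * T))
    return res.fun

sizes   = [2.0, 1.0, 3.0]                # hypothetical remaining sizes
weights = [1.0, 4.0, 2.0]
T = int(sum(sizes)) + 1
jobs = range(len(sizes))

g = {S: residual_lp([sizes[j] for j in S], [weights[j] for j in S], T)
     for r in range(len(sizes) + 1) for S in combinations(jobs, r)}

# Supermodularity: adding a job to a larger set increases the optimum at least
# as much as adding it to a smaller subset.
for X in g:
    for Y in g:
        if set(X) <= set(Y):
            for j in jobs:
                if j not in Y:
                    Xj = tuple(sorted(set(X) | {j}))
                    Yj = tuple(sorted(set(Y) | {j}))
                    assert g[Yj] - g[Y] >= g[Xj] - g[X] - 1e-6
print("supermodularity held on this instance")
```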
[Gradient Descent Works for Linear Substitutes in Feasible Processing Rate View]theoremthmtwo For a polytope scheduling problem with polytope , suppose that for any ∈_+^|J|, v() = max_0 ≤≤, ∈· is linear substitute. Then, for any > 0, the GD gives a (1 + )-speed O(1 / )-competitive online algorithm for minimizing total fractional weighted flow time; thus, there exists a (1+)-speed O(1 / ^2)-competitive algorithm for total weighted flow time. The theorem can be proved separately for gross substitutes (GS) for indivisible resources. However, we only state the theorem for linear substitutes (LS) because we can show that the concave closure of GS is LS (Theorem <ref>). The conceptual contribution of the theorem lies in connecting GD to substitute valuations. The time-indexed LP could be exponentially large. In Appendix <ref>, we show that we can consider an approximate residual optimum to obtain a polynomial-time algorithm, assuming that we have an efficient oracle to test whether ' ∈ or not. §.§.§ Resource View We now consider the other equivalent view where each job gets some resources at each time from a D-dimensional supply resource vector , which is replenished at every time. Recall that each job j is associated with a valuation function u_j. Let _jt denote the resource vector job j receives at time t. Job j then gets processed at a rate of u_j(_jt). We will keep reserving t̃ to refer to times in the residual schedule. The LP we solve is essentially identical to ^_, but has more explicit constraints that resources cannot be over-allocated. Concretely, we use the following linear program. f() := min∑_j w_j/p_j∫_t̃≥ 0t̃· u_j(_jt̃) t̃ ∫_t̃≥ 0 u_j(_jt̃) t̃ = x_j ∀ j ∑_j_j t̃ ≤ ∀t̃≥ 0 We prove the following theorem. [Gradient Descent Works for Linear Substitutes in Resource View]theoremthmthree For a multi-dimensional scheduling problem with a supply vector of divisible resources, suppose that for every job j, u_j : [0, 1]^D→_+ is strictly monotone and strictly concave[Under this assumption, the demand set is a singleton set.] linear substitute and further satisfies the following:
* For any ∈D(u_j, ) and λ≥ 1, there exists ' ∈D(u_j, λ) such that ≥'.
* For any ∈D(u_j, ) and ' ≤, there exists ' ∈D(u_j, ') such that u_j(') ≥ u_j().
Then, for any > 0, the GD applied to _(t) yields a (1 + )-speed O(1 / )-competitive online algorithm for minimizing total fractional weighted flow time; thus, there exists a (1+)-speed O(1 / ^2)-competitive algorithm for integer weighted flow time. In the theorem, in addition to every job j having an LS valuation function, we require two additional properties. The first says that if we increase all prices by the same factor, then each job demands only fewer resources. The second says that if we increase resource prices, each job's optimal demand maximizing its quasi-linear utility should have a lower valuation. While these two properties are intuitive, not all LS valuations satisfy them[For example, u() := max∑_i λ_i y'_i s.t. ||'||_1 ≤ 1 and ||'||_1 ≤ ||||_1 is LS, but does not satisfy the first property.]. To prove the theorem, we show that the valuation functions considered here induce an LS function in the feasible processing rate view, which allows us to use Theorem <ref>. The high-level proof idea is as follows. Consider v() = max_0 ≤≤, ∈· for a fixed . We want to show the valuation v is LS. Let be the job prices per unit. By processing job j by one unit, we obtain value c_j, yet have to pay the price q_j.
Finding the demand set for v can be viewed as finding the maximum social surplus when each job (agent) j has valuation (c_j - q_j)u_j in the resource view; let be the equilibrium prices of the D resources. Increasing q_j lowers (c_j - q_j)u_j, which has the same effect of increasing the resource prices for j by a certain factor λ > 1. Thus it results in resources being under-utilized due to the first property. Then, it is known that the equilibrium prices drop <cit.>. All jobs' valuation functions (c_i - q_i) u_i, except for j, remain the same. Therefore, they can only get processed more due to the second property. § PROOFS OF MAIN THEOREMS §.§ Proof of Theorem <ref> It is highly important to note that in the proof when a job j arrives, we pretend that GD and the optimum schedule each are given a distinct copy of j. Therefore, if A(t) and O(t) denote the sets of jobs that have arrived yet have not been completed at time t by GD and by the optimum schedule respectively, then A(t) ∩ O(t) = ∅ always. We will use A and O to denote GD's schedule and the optimum schedule, respectively. We use || to denote the concatenation of two vectors and . Let p^A_j(t) and p^O_j(t) denote the remaining size of job j under our algorithm GD and a fixed optimum schedule respectively. For brevity we let ^A(t) := {p^A_j(t)}_j ∈ A(t) and ^O(t) := {p^O_j(t)}_j ∈ O(t). For analysis, we will use the following potential function. Φ(t) = 2/(f(^A(t)) - 1/2f(^A(t) || ^O(t))) Following the standard potential function argument in online scheduling <cit.> we will show the following: * Boundary Conditions: Φ(0) = Φ(∞) = 0. In other words, at the beginning when no jobs have arrived, and at the end when all jobs have been completed, the potential is 0. This is obvious because the residual optimum is 0 when there are no jobs to be processed and it is only concerned with processing jobs that are currently alive. Further, it's worth noting that jobs of zero remaining sizes have no effect on the residual optimum. * Discrete Changes: Φ does not increase when a job arrives or is completed by GD or the optimum schedule. To this end, we will crucially use supermodularity. * Continuous Changes: Φ(t)/ t≤ - W^A(t) + O(1/) W^O(t), when A is given 1+ speed and O 1-speed, where W^A(t) : = ∑_j ∈ A(t) w_j and W^O(t) := ∑_j ∈ O(t) w_j denote the total weight of jobs alive at time t in GD's schedule and the optimum's, respectively. Recall that ^() := - min_∈∇ f() · denotes the change rate of the residual optimum for remaining sizes due to GD's processing. As mentioned earlier, the can be dropped for convenience, so () denotes the same concept. If GD uses speed s, we denote it as _s(). Note that GD with s speed can process jobs at rates s if ∈. For any s >0, we have _s() = - s(). The following lemma proves that the change in the optimal residual objective value due to GD is precisely equal to the total weight of the uncompleted jobs. The intuition is simple: the residual optimal schedule remains optimal for minimizing total weighted completion time until a new job arrives. Consequently, every job's completion time in the residual schedule decreases by one in each time step. This establishes the “≤" direction. The other direction can be proven similarly. When the residual optimum is f() := min_^_(), we have () = - ∑_x_j>0w_j. Let (τ), τ∈ [0, +∞) be the optimal residual schedule for remaining size , i.e., = min_'^_('). Consider the decrease of the residual optimum in t units of time when we run GD. 
Let t be an infinitesimally small time step such that no job is completed during time [0, t). Recall that GD schedules (τ), τ∈ [0, t). Let (τ') := ∫_τ”≥ 0^τ'(τ”) τ”. Let ' = - ( t) be the remaining sizes after processing in t time. Note that '(τ) := (τ + t), τ∈ [0, ∞) is a feasible schedule, for remaining job sizes '. Then we have f(') ≤_'^(') because f(') minimizes ^_' w.r.t. remaining sizes ', and ' is a feasible schedule w.r.t. the same vector '. Thus we have f() - f(') ≥^_()- ^_'(') = t ∑_x_j>0w_j This is because every job j s.t. x_j > 0 has a t units larger completion time in than in schedule ' and is still alive in the infinitesimally time step. Formally, j completes in at the maximum (supremum) time c such that z_cj > 0, and therefore it completes at time c - t in ' since (c) = '(c - t). To show the other direction, suppose GD schedules ”(τ) for τ∈ [0, t) and let ' be the resulting remaining job sizes. Let ' be the optimal residual schedule for '; thus we have f(') = ^_'('). Let be the schedule that first has ” for t units of time and then exactly follows '. Since is a valid residual schedule for remaining sizes , we have f() ≤^_(). Therefore, f(') - f() ≥^_'(') - ^_() = - t ∑_x_j>0w_j As above, the equality holds as each job j's completion time (such that x_j > 0) differs exactly by t in the two schedules and '. This implies GD decreases the residual objective by at most t ∑_x_j>0w_j. Thus we have () = - ∑_x_j>0w_j, as desired. The next lemma demonstrates that we can implement GD more efficiently by following the optimal residual schedule until a new job arrives. Upon a new job arrival, we recompute a new optimal residual schedule. The lemma is unnecessary if we are willing to compute the optimal residual schedule at every time step. Let denote the optimum residual schedule for remaining sizes (t). Suppose no jobs arrive during the time interval [t, t']. Then, we can assume wlog that GD processes jobs at rates (t̃) at each time t + t̃ for all t̃∈ [0, t' - t]. The proof of this lemma is implicit in that of Lemma <ref>. Eqn. (<ref>) shows that processing along , until a job is completed, is as effective as GD, which by definition is the most effective in decreasing the residual objective. Note that the remaining schedule must be an optimum residual schedule for the rest of the jobs. Then, we can recurse on the remaining sizes, until time t' when a new job arrives and we recompute the optimum residual schedule. The following lemma bounds the potential change when a job arrives or is completed. This is where we use supermodularity. If Φ(t) changes discontinuously due to a job's arrival or completion, the change is non-positive. If there is no arrival and completion, thanks to Lemma <ref>, it is straightforward to see Φ(t) is continuous in t and differentiable. First, we observe that job completion in A's schedule or the optimum solution's schedule has no effect on f and therefore on Φ(t). This is because once a job j's remaining size becomes 0, then the residual optimum schedule does not process it. Thus, we only need to consider job arrival. Suppose a new job j arrives at a fixed time t. For brevity, we drop t in the following. Let _j^A denote the unit vector that has 1 only the position corresponding to j in A's schedule. Define _j^O analogously for the other copy of j in O's schedule. Here j^A and j^O refer to j's copy in the schedule of A and O, resp. 
Dropping t from the notation for brevity, f(^A || ^O) increases by: f(^A || p_j _j^A || ^O || p_j _j^O) - f(^A || ^O) = g̅_(A ∪ j^A ∪ O ∪ j^O) - g̅_(A ∪ O) where = ^A || p_j _j^A || ^O || p_j _j^O. Recall that the function g̅_(S) computes f assuming that each j' ∈ S has size ·_j' and every job not in S has zero size. Notably, we view f as a set function g̅ by fixing each job's (remaining) size. In the remaining proof, we use g instead of g̅_ for notational brevity. By supermodularity of g (due to the theorem condition), we have, g(A ∪ j^A ∪ O ∪ j^O) - g(A ∪ O) ≥ g(A ∪ j^A ∪ j^O) - g(A) = g(A ∪ j^A ∪ j^O) - g(A ∪ j^A ) + g(A ∪ j^A) - g(A) ≥ g(A ∪ j^O) - g(A) + g(A ∪ j^A) - g(A) = 2 (g(A ∪ j^A) - g(A)), where the last inequality follows from the fact that j^O and j^A both have the same size p_j when j arrives. Now, we shift to bounding the change of f(^A), which increases by exactly f(^A || p_j _j^A) - f(^A) = g(A ∪ j^A) - g(A) Therefore, Φ(t)'s change is at most 2/( g(A ∪ j^A) - g(A) - 1/2· 2(g(A ∪ j^A) - g(A))) = 0, as desired. Consider any time t when no job arrival or completion occurs. If A and O are given s-speed and 1-speed respectively, / t f(^A(t) || ^O(t)) ≥ - (s + 1) ( W^A(t) + W^O(t) ). We observe that processing jobs in A and O can be thought as a feasible schedule with speed s+1 (A and O are given speeds s and 1 respectively). How much can one decrease f(^A(t) || ^O(t)) when given s+1-speed? Lemma <ref>, together with Claim <ref>, implies that even GD, which is the most effective in decreasing the residual objective, can decrease at a rate up to the total weight of all jobs (of non-zero remaining sizes) times s+1 when given s+1 speed. Thus, the lemma follows. Consider any time t when no job arrival or completion occurs. Then, / tΦ(t) ≤ -W^A(t) + (1 + 2/) W^O(t) when GD is given 1+-speed. From Lemmas <ref> and <ref>, we have / tΦ(t) = / t2/(f(^A(t)) - 1/2f(^A(t) || ^O(t))) ≤2/ ( - sW^A(t) + s+1/2 (W^A(t) + W^O(t))) = -W^A(t) + (1 + 2/) W^O(t), where s = 1+. By Lemma <ref> and the fact that Φ(T) = Φ(0) = 0, we have ∫_t=0^∞/ tΦ(t) t ≥ 0. By Corollary <ref>, we know that ∫_t=0^∞ W^A(t) t ≤2 + /∫_t=0^∞ W^O(t) t. Since the LHS and RHS are the total integral weighted flow time of GD and the optimum schedule respectively (Proposition <ref>), we have Theorem <ref>. §.§ Proof of Theorem <ref> We continue to use the same notation ^A(t) and ^O(t) as we used in the previous section. We use the same meta-potential function. But, here we use fractional completion time for the residual optimum, so we have f() := min_^_() (see Eqn. (<ref>)). Φ(t) = 2/(f(^A(t)) - 1/2f(^A(t) || ^O(t))) As before, each job j has two distinct copies, which each appear in ^A and ^O. We first establish f's supermodularity. f() := min_^_() is (discrete-)supermodular. Showing the lemma is equivalent to showing - f() is submodular. Then, the objective in Eqn. (<ref>) becomes max∑_j ∈ A(t)∫_t̃≥ 0 - w_j/p_jt̃· z_j(t̃) t̃. This is equivalent to max∑_j ∈ A(t)∫_t̃≥ 0 (B - w_j/p_jt̃) z_j(t̃) t̃ for any constant B > 0 due to the constraint ∫_t̃≥ 0 z_j(t̃) t̃ = x_j for all j ∈ A(t); also adding a constant does not change a function's submodularity. Further, if we let B be a sufficiently large constant, we obtain the following equivalent formulation. max∑_j ∈ A(t) ∫_t̃≥ 0(B - w_j t̃/p_j) z_j(t̃) t̃ ∫_t̃≥ 0 z_j(t̃) t̃ ≤ x_j ∀ j ∈ A(t) (t̃) ∈ ∀t̃≥ 0 Note that although the first constraint does not require equality, due to the big constant B, the optimum solution to the above LP must process j to the full x_j units. 
We now create a valuation function v_t̃ for each time t̃. v_t̃((t̃)) := max_0 ≤≤(t̃), ∈∑_j ∈ A(t)(B - w_j t̃/p_j) y_j In other words, here we view each time t̃ as an agent[Technically, achieving this result requires discretizing the time horizon into a finite set of sufficiently small unit times and then taking the limit. However, to maintain clarity in the presentation, we omit this technical detail as it is straightforward. ] with valuation function v_t̃ and view each job j's size/processing as a divisible resource, which exists by x_j units. When agent t̃ receives processing units (t̃), it can process each job j and it would like to maximize an objective linear in subject to ∈ and ≤ z(t̃). By the precondition of the theorem, v_t̃ is LS. Then max LP objective is equivalent to max∫ v_t̃ (z(t̃)) t̃∫_t̃≥ 0 z(t̃) t̃≤ Since this is a convolution of LS valuations, thanks to Lemma <ref>, we have the max LP optimum is LS w.r.t. . From Lemma <ref>, we know that it is submodular. Thus, we have shown f is supermodular. It is an easy exercise to show that this means f's discrete-supermodularity (see Definition <ref>) using the fact that _A ∪ B⊙ = (_A⊙) ∨ (_B⊙) and _A ∩ B⊙ = (_A⊙) ∧ (_B⊙). The remaining analysis is almost identical to the proof of Theorem <ref>. We start the analysis by considering Φ(t)'s discontinuous changes. The proof of the following lemma is identical to that of Lemma <ref>, which crucially used f's supermodularity. If Φ(t) changes discontinuously, the change is non-positive. Let W̃^A(t) := ∑_j ∈ A(t)w_j/p_jp^A_j(t) denote the total fractional weight of jobs at time t in A's schedule. Let W̃^O(t) denote the corresponding quantity in O's schedule. We will assume that GD follows the optimum residual schedule and show that no algorithm can decrease the residual optimum more effectively than GD.[In the proof of Theorem <ref> we defined GD to be the most effective algorithm for decreasing the residual optimum, and we showed following the residual optimum schedule is as effective as GD. We make this small change, as we now know GD is really nothing but following the residual optimum.] The following lemma quantifies the decrease rate of the (fractional) residual optimum. For any time t when no job arrival or completion occurs, we have / t f(^A(t)) = - s W̃^A(t) when we run GD with s speed. Furthermore, for any algorithm with s speed, we have / t f(^A(t)) ≥ - s W̃^A(t). Fix a time t and consider how much f(^A(t)) changes during [t, t+ t) when we run GD with s-speed. Let = min_'_^A(t)('). Note that j contributes to the objective f(^A(t)) by ∫_τ≥ 0w_j/p_jτ z_j(τ) τ. GD schedules (τ), τ∈ [0, s t) during time interval [t, t+ t) as it is given s speed. Note that (τ), τ∈ [s t, ∞) is a feasible schedule (with 1-speed) for remaining job sizes ^A(t) - (s t) where (τ') := ∫^τ'_τ”≥ 0(τ”) τ”. Therefore, ', where '(τ) = (s t+ τ), is a feasible solution to _^A(t) - (s t). Thus, f(^A(t)) decreases by f(^A(t)) - f(^A(t)- (s t)) ≥ f(^A(t)) - _^A(t) - (s t)(') = ∑_j (∫_τ≥ 0w_j/p_jτ z_j(τ) τ - ∫_τ≥ s tw_j/p_j (τ - s t) z_j(τ) τ) ≥ s t ∑_j w_j/p_j∫_τ≥ s dt z_j(τ) τ = s t (W̃^A(t) - ∑_j w_j/p_j Z_j(s t)), which implies / t f(^A(t)) ≤ - s W̃^A(t). Similarly, we can show the other direction. Consider any feasible s-speed processing ' during the time interval; so, j is processed by Z'_j units. Let = min_'_^A(t) - '(') denote the optimum residual schedule for the remaining job sizes. Let ' ∘ denote the 1-speed schedule where we first process ' for s t time units and subsequently schedule . 
Note that ' ∘ is a feasible solution to _^A(t). Therefore, f(^A(t)) decreases by f(^A(t)) - f(^A(t)- ') ≤ _^A(t)(' ∘) - _^A(t)- '() ≤ ∑_j (∫_τ≥ 0w_j/p_j (τ + s t) z_j(τ) τ - ∫_τ≥ 0w_j/p_jτ z_j(τ) τ) + ∑_j t w_j/p_j Z'_j ≤ s t ∑_j w_j/p_j∫_τ≥ 0 z_j(τ) τ + t ∑_j w_j/p_j Z'_j = s t W̃^A(t) + t ∑_j w_j/p_j Z'_j, which implies / t f(^A(t)) ≥ - s W̃^A(t), as desired. This lemma allows us to bound the continuous change of the second term of Φ(t). The proof is essentially identical to that of Lemma <ref>: it follows from the observation that we can view A and O's combined processing as a s+1-speed schedule and the residual objective can change at a rate up to the total fractional weight of jobs in the system, times s+1. Consider any time t when no job arrival or completion occurs. If A and O are given s-speed and 1-speed respectively, / t f(^A(t) || ^O(t)) ≥ - (s + 1) ( W̃^A(t) + W̃^O(t)). Combining the two lemmas we obtain the following corollary. Consider any time t when no job arrival or completion occurs. When GD is given 1+-speed, we have d/dtΦ(t) ≤ -W̃^A(t) + 2 + /W̃^O(t). By Lemma <ref> and the fact that Φ(T) = Φ(0) = 0, it must be the case that ∫_t = 0^∞/ tΦ(t) t ≥ 0. By Corollary <ref> we know that ∫_t = 0^∞W̃^A(t) t ≤2 + /∫_t = 0^∞W̃^O(t) t where the LHS and RHS are exactly the total weighted fractional flow time of GD and the optimum schedule (Proposition <ref>). Thus, we have Theorem <ref> for total fractional weighted flow time. Lemma <ref> allows us to obtain an algorithm that is (1+)-speed O(1 / ^2)-competitive for the integer objective. §.§ Proof of Theorem <ref> This section is devoted to proving Theorem <ref>. The high-level idea is to show that when we take the feasible processing rate view, we satisfy the condition of Theorem <ref>, i.e., v() := max_0 ≤≤, ∈· is linear substitute. Note that ∈ if and only if _j = u_j(_j) for all j for some non-negative vectors {_j}_j ∈ J such that ∑_j _j ≤. For any price vector , let () denote the unique solution to: max_'∑_i ∈ J (c_i - q_i) x'_i ' ∈ To see that this is well defined, take the resource view of the CP. Recall in the resource view, each job i is associated with a valuation function u_i and the divisible resources [0, 1]^d are allocated to the jobs. If _i denotes the resources job i receives, we have the following equivalent CP. max∑_i ∈ J (c_i - q_i) u_i(_i) ∑_i ∈ J_i ≤ _i ≥0 ∀ i ∈ J It is straightforward to see that this CP has a unique solution due to the theorem condition that u_i is strictly concave for all jobs i. Without loss of generality, we assume <. This is because jobs i with c_i - q_i ≤ 0 do not play any roles and therefore we can pretend that they do not exist. Now consider the Walrasian market where each job/agent i has a valuation function (c_i - q_i) u_i for divisible resources [0, 1]^D. Because u_i is LS for all i, which is a precondition of Theorem <ref>, (c_i - q_i) u_i is LS for all i (LS is closed under a positive scalar multiplication). Thus, the CP formulates the social surplus maximization problem where all agents have LS valuations. In the following and ' are uniquely defined as u_i is assumed to be strictly monotone and strictly concave. Consider any ≤' ∈^|J|. Let be the equilibrium price vector for the market with valuations (c_i - q_i) u_i, i ∈ J along with divisible resources [0, 1]^d. Similarly, let ' be the equilibrium price vector for (c_i - q'_i) u_i, i ∈ J. We have ' ≤. 
To prove this lemma, we use the tatonnement process described in <cit.>.[Although the process is described for indivisible resources that exist in multiple units, the same proof extends to divisible resources. The proof can be much simplified if we assume valuation functions are strictly concave, as shown in <cit.>.] The tatonnement process is governed by a price vector (τ); the vector monotonically changes as τ increases. We say that all resources are under-demanded[The over-demanded case is symmetric.] under the current price vector ” if there exists an allocation ” that maximizes everyone's (here, every job i's) quasi-linear utility and ∑_i ”_i ≤. Here, if job i's utility is (c_i - q_i) u_i, then ”_i ∈D((c_i - q_i) u_i, ”). For brevity, we will equivalently say that ” is an under-demand price vector. It is shown in <cit.> that starting with any under-demand price vector (0), we can monotonically decrease (τ) to an under-demand equilibrium price, which yields the maximum social surplus. Let be the unique allocation achieving the unique equilibrium with the price vector . Define ' analogously for (c_i - q_i')u_i with the price vector . Since for all i, c_i - q_i ≥ c_i - q_i' and _i' ∈D((c_i - q_i') u_i, ) = D( (c_i - q_i) u_i, c_i - q_i/ c_i - q_i' ), we have _i' ≤_i due to the first property of Theorem <ref>. Thus, ∑_i '_i ≤, meaning is an under-demand price vector when each i has valuation (c_i - q'_i)u_i. Thanks to the above tatonnement process, we conclude ' ≤. Now we are ready to prove that v() is LS. Say q'_j > q_j and q'_i = q_i for all i ≠ j, so we only increased job j's price. Consider any i ≠ j. We use and ' as we defined in the proof of Lemma <ref>. Since u_i is strictly concave, we have, {_i} = D((c_i - q_i)u_i, ) = D(u_i, / (c_i - q_i)) {'_i} = D((c_i - q_i') u_i, ') = D((c_i - q_i) u_i, ') = D(u_i, ' / (c_i - q_i)) From Lemma <ref>, we know ' ≤. Thus, due to the second property stated in Theorem <ref>, we have u_i(_i') ≥ u_i(_i). Knowing that x_i() = u_i(_i) and x_i(') = u_i(_i'), we conclude x_i(') ≥ x_i() If > 0, then the demand set 𝒟(v, ) is a singleton set. Further, we have 𝒟(v, ) = {()}. The quasi-linear utility, v(' ) - ·' is maximized only when ' ∈ since > 0. Then, the quasi-linear utility maximization can be written as max_'·' - ·' s.t. ' ∈ which has a unique solution as discussed above. The solution is () by definition. Thus, if > 0, then we have ' > 0 as well. Due to Lemma <ref> and Eqn. (<ref>), we have shown that v is LS. Therefore, we have Theorem <ref> from Theorem <ref>. We extend the proof to an arbitrary ≥ 0. Let J^0 denote the set of jobs j with q_j = 0, and J^+ the other jobs. It is an easy exercise to show that D(v, ) = {() + ∑_j ∈ J^0λ_j _j | λ_j ≥ 0 ∀ j ∈ J^0}. In other words, the quasi-linear utility is maximized by purchasing any amount of free jobs on top of (). Consider any ∈D(v, ) and increase job j's price, so we have ' such that q'_j > q_j and q'_i = q_i for all i ≠ j. For any i ∈ J^0 such that i ≠ j, it is trivial to see that we can find ' ∈D(v, ') such that x'_i ≥ x_i as job i is free. If i ∈ J^+, we have x_i(') ≥ x_i() = x_i, where the inequality follows from Eqn. (<ref>) and the equality follows from Eqn. (<ref>) with the facts that , () ∈D (v, ) and i ∈ J^+. § APPLICATIONS OF THE THEOREMS In this section, we discuss some problems captured by our theorems. See Section <ref> for their definition. 
§.§ Matroid Scheduling We can formulate the matroid scheduling problem as an instance of the PSP as follows: = { | ∑_j ∈ S z_j ≤ r(S) ∀ S ⊆ J } where r is the rank function of the given matroid M = (J, I) and z_j is the processing rate of job j. For a cost vector , v̅(_S) = max_≤_S, ∈ 2^J, ∈· is equivalent to the weighted rank function of a matroid, which is known to be GS (Section <ref>). Note that v̅(_S) = v(_S) for all S ⊆ J because the polytope is integral, where v() = max_≤, ∈· (see Theorem <ref>). To show v is LS, we prove that the concave closure of v̅ is equivalent to v. For the concave closure definition, see Theorem <ref>. Note that here we used v̅ to distinguish the discrete valuation from the continuous valuation v. For ∈ [0,1]^J, v() = v̅^+(). First, we show the “≤" direction. Let ∈ such that v() = v(). Since is integral, we can express as a convex combination of the subsets of J, so = ∑_S ∈ℐλ_S ·_S, where ∑_S ∈ℐλ_S = 1. Then, v() = · = ∑_S ∈ℐλ_S (·_S) = ∑_S ∈ℐλ_S  v̅(_S) ≤ v^+(). To show the other direction, let v̅^+() = ∑_S v̅(_S) λ_S for some {λ_S}_S ⊆ J with ∑_S λ_S = 1 and ∑_S λ_S _S =. Let π(S) be S' ⊆ S such that _S'∈ and v̅(_S') = v̅(_S). We then have v̅^+() = ∑_S v̅(_π(S)) λ_S = ∑_S λ_S  ·_π(S) Let ' := ∑_S λ_S_π(S). Note that ' ≤. Further, ' ∈ due to 's integrality. By definition of v, we know v is monotone. Thus, we have, v() ≥ v(') = ·' = ∑_S λ_S  ·_π(S) From Eqn. (<ref>) and (<ref>), we conclude v() ≥v̅^+(), as desired. Since v̅^+ is LS, so is v. Therefore, Theorem <ref> establishes GD as a scalable algorithm. Finally, if {λ_S }_S ∈ℐ is the convex combination of independent sets that achieves v^+(), we can implement our algorithm by preemptively scheduling each independent set S in proportional to λ_S. §.§ Generalized Flow Scheduling Recall that we are given a network G=(V,E), where J ⊆ V is the set of source nodes. Arc e has capacity u_e and flow gain factor γ_e; a flow of value f becomes γ_e f after flowing through e. Job j gets processed at a rate equal to its (outgoing) net flow. Formally, node v has excess capacity b_v. Sources have non-negative excess capacities. A job j gets processed at a rate of z_j, when its net flow is z_j for ∈, which is defined as follows. = = ⟨∑_w (f_jw - γ_wj f_wj) ⟩_j ∈ J | ∑_w (f_vw - γ_wv f_wv)≤ b_v ∀ v ∈ V; f_e ≤ u_e ∀ e ∈ E; ≥ 0 Theorem <ref> shows v() = max_∈· is LS. Note that our network can ensure ≤ by setting the excess capacity of the node that represents job j to be x_j. Thus, we immediately have a scalable algorithm due to Theorem <ref>. §.§ Speed-up Curves on Partitioned Resources We consider this problem from the Resource View. Then we need to show that the utility u_j is linear-substitute and satisfies two desired properties in Theorem <ref>. Without loss of generality, we can assume the utility is determined by only one univariate function that is u_j() = g(∑_d∈ [D] a_d x_d). We drop j for brevity henceforth. We begin by showing a simple greedy algorithm being optimal. The proof follows from a simple swapping argument. For any price vector on resources, the greedy algorithm selects elements fractionally in decreasing order of a_d / q_d until no marginal increment, finds a solution maximizing the quasi-linear utility, i.e., a solution in D(u, ). Fix some price vector on resources. Assume that a_1 / q_1 ≥ a_2 / q_2 ≥ ...≥ a_D / q_D. Suppose we have an optimum solution where x_i < 1 and x_j > 0 for some i < j. 
Then, we maximally increase x_i by a_j δ / a_i and decrease x_j by δ for some δ >0 until x_i becomes 1 or x_j becomes 0. This preserves the value of ∑_d a_d x_d without increasing ·. By repeating this, we have the lemma. First, we prove that u satisfies linear-substitute. Although the claim is known to be true (see Section 5 in <cit.>), we include it as it is a nice warm-up before proving the subsequent claims. u is linear-substitute. Consider the greedy algorithm in Lemma <ref>. Now we increase the price of resource d. In the new greedy order, the resource d can only be considered later; other elements keep the same relative order. Thus ^*_i can not decrease for all i ≠ d because they are processed earlier than the original greedy order. Therefore, we have the claim. The following claim shows that when we increase the price vector simultaneously for each coordinate at the same scale, u can only demand less. For any ∈D(u, ) and λ≥ 1, there exists ' ∈D(u, λ) such that ≥'. Consider the greedy algorithm in Lemma <ref>. Note that increasing the price vector to λ does not change the order of a_d / q_d. Thus the stopping point in the execution of the greedy algorithm comes earlier as the marginal increment becomes zero with fewer resources in the order, which gives the proof. Further, we show that when we increase the price vector , the utility would not increase. For any ∈D(u, ) and ' ≥, there exists ' ∈D(u, ') such that u() ≥ u('). For the sake of contradiction, suppose u() < u ('). Let ' be the solution constructed by the greedy algorithm described in the proof of Lemma <ref> with price vector '. Define analogously with . Because of the assumption u() < u (') and g's monotonicity, it must be the case that ∑_i a_i y_i < ∑_i a_i y_i'. Let ” be the “prefix” of ' such that z :=∑_i a_i y_i = ∑_i a_i y_i”. In other words, we run the greedy with ' but stop when z = ∑_i a_i y_i”. Let k be the smallest such that y_k' > y_k”. The greedy with respect to ' further increases ” since the marginal gain g(z + a_k) - g(z) - '_k > 0 for a sufficiently small >0. Therefore, if _k < 1, the greedy must have continued increasing since g(z + a_k) - g(z) - q_k ≥ g(z + a_k) - g(z) - '_k > 0. Since the greedy with respect to considers resources in the order of 1, 2, …, it means _1 = _2 = … = _k = 1; yet ”_k < 1 and ”_k+1 = ”_k+2 = … = ”_D = 0. This is a contradiction to ∑_i a_i y_i < ∑_i a_i y_i'. Finally, we discuss why it is wlog to assume that u is strictly monotone and strictly concave. To obtain a strictly concave function that is arbitrarily close to u, we replace each x_d with x_d^1 - δ for an arbitrary small δ > 0. Further, we replace g with a strictly monotone function that is arbitrarily close to g. It is an easy exercise to show this new function is strictly concave and arbitrarily close to the original function u on [0, 1]^D. We then pretend an infinitesimal unit of each resource as a distinct item and order these infinitesimal-sized items in decreasing order of a_d (( k)^1 - δ - ((k-1))^ 1 - δ) / (p_d ), which corresponds to the effectiveness of the k-th -sized copy of resource d. It is straightforward to check that the whole analysis remains unchanged. Further, if u does not depend on some resources, then for each of such resources d we can add λ_d x_d^1-δ for a sufficiently small λ_d > 0, so this additional term essentially has no effect on u while ensuring u being strictly monotone. Thus, this application admits a scalable algorithm due to Theorem <ref>. 
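The greedy demand oracle from the lemma above can be sketched numerically as follows. It assumes strictly positive prices and a concave, increasing g, and replaces the fractional greedy by small increments of size eps; the function name and the square-root example are illustrative only, not part of the formal model.

import math

def greedy_demand(a, q, g, eps=1e-3):
    """Toy demand oracle for a speed-up-curve utility u(y) = g(sum_d a_d * y_d).

    Approximately maximizes the quasi-linear utility g(sum_d a_d * y_d) - q . y
    over [0, 1]^D by adding resources in decreasing a_d / q_d order, in small
    increments of size eps, while the marginal gain stays positive.  Requires
    q_d > 0 for all d and g concave and increasing.
    """
    D = len(a)
    y = [0.0] * D
    s = 0.0                                   # current value of sum_d a_d * y_d
    for d in sorted(range(D), key=lambda d: a[d] / q[d], reverse=True):
        while y[d] + eps <= 1.0:
            gain = g(s + a[d] * eps) - g(s) - q[d] * eps
            if gain <= 0:
                return y                      # by concavity and the ordering,
                                              # no later resource helps either
            y[d] += eps
            s += a[d] * eps
    return y

# example: a square-root speed-up curve
y = greedy_demand(a=[2.0, 1.0, 0.5], q=[1.0, 1.0, 2.0], g=math.sqrt)
print(y)

With g = sqrt the greedy stops once the marginal value of the cheapest resource drops below its price, here around y ≈ (0.5, 0, 0).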
§.§ Unrelated Machines for Unweighted Jobs In this section we show how to obtain a (1+)-speed O(1 / )-competitive algorithm for total unweighted flow time using gradient descent; the weighted case is presented in the following section. Our algorithm is migratory, unlike the algorithms in <cit.>. (We also reproduce the non-migratory result in <cit.> in Section <ref>, focusing on supermodularity and gradient descent.) Unfortunately, we can not directly use Theorem <ref> for the following reason: the problem we consider allows migration of jobs across machines. The residual optimum of a migratory schedule does not seem to be LS. Thus, we use the non-migratory residual optimum, which turns out to be GS. Interestingly, our resulting gradient descent algorithm is migratory. Nevertheless, the analysis will be very similar to the proof of Theorem <ref>. Thus, we will adopt the same notation throughout this section. We first describe the residual optimum for this problem. Let f() denote (the cost of) the min-cost non-migratory schedule of jobs, where each job j has (remaining) size x_j. It is well known that it can be computed by min-cost bipartite matching. For completeness, we describe it here: Consider a bipartite graph G = (J, M'; E) where there is a unique node, indexed by (i, k), for each pair of machine i and k ∈_> 0. There is an edge between j ∈ J and (i, k) ∈ M'. Choosing the edge means scheduling job j on machine i in the k-th reverse order non-preemptively, thus the edge has cost k · p_ij, where p_ij = x_j / λ_ij. Intuitively, this means there are k jobs behind job j on the machine, including itself—thus j delays k jobs—when following SRPT on each machine. The residual optimum f() is the min cost of a bipartite matching for this instance. g̅_(J') := f(⊙_J') is (discrete)-supermodular. From Section <ref>, we know that maximum weight bipartite matching is GS. Therefore, - g̅_ is GS and submodular. As before, we use () to denote the max decrease rate of the residual optimum. We will see that following the optimum residual schedule until a new job arrives is as effective as the true GD; that is, it decreases the residual optimum at a rate equal to (). Since the analysis is very similar to the proof of Theorem <ref>, we only highlight the differences. We use the same potential function. Lemma <ref> gives supermodularity of the residual optimum. Thus, we can show all discrete changes of Φ is non-positive following the same argument in the proof of Lemma <ref>. When the residual optimum is f(), we have () = -∑_x_j>0 1. Proving () ≤ -Σ_x_j>0 1 is identical to the first part proof of Lemma <ref>. In fact, we can decrease f() at a rate of at least Σ_x_j>0 1 by following the residual optimum schedule. Showing the other direction needs special care because while the residual optimum schedule is non-migratory, changing the job assignment could help decrease the residual optimum. Assume wlog that > 0, because we can ignore jobs j such that x_j = 0. Suppose GD processes j(i) on machine i for a sufficiently small unit time t, so no job completes during the time slot. Let ' be the resulting remaining job sizes and σ' be the optimal non-migratory schedule for ', where j is scheduled on machine ψ(j). Let σ be the non-migratory schedule obtained from σ' by increasing each job's remaining size from x'_j to x_j, without changing the assignment ψ and the job ordering on each machine. So, σ is a feasible non-migratory schedule for , but not necessarily optimum. 
Let (σ) denote the total completion time of schedule σ. Clearly we have, () · t ≥(σ') - (σ). Thus, it suffices to lower bound (σ') - (σ). For notational brevity, let x'_ij := x'_j / λ_ij, which is j's processing time on i when its remaining size is x'_j. Similarly, let x_ij := x_j / λ_ij. Define P_i, < k := ∑_j': x'_ij' < x'_ik x'_ij' to be the total remaining processing time of jobs smaller than job k on machine i w.r.t. ' in σ', breaking ties in an arbitrary but fixed order. Let N_i, ≥ k := ∑_j': x'_ij'≥ x'_ik 1 denote the number of jobs of remaining sizes no smaller than job k on machine i w.r.t. ' in σ'. Since j(i) gets processed on machine i, we have x_j(i) - x'_j(i) = λ_i, j(i) t. The job j(i) is on machine ψ(j(i)), and the number of jobs behind it[Note that on each machine i jobs are ordered following the SRPT order, i.e. in decreasing order of x_ij.], including itself, is exactly N_ψ(j(i)), ≥ j(i). Thus, due to x'_j(i) increasing to x_j(i), j(i)'s contribution to (σ') - (σ) is exactly, N_ψ(j(i)), ≥ j(i) x'_ψ(j(i)),j(i) - N_ψ(j(i)), ≥ j(i) x_ψ(j(i)), j(i) = - N_ψ(j(i)), ≥ j(i)λ_i, j(i)/λ_ψ(j(i)), j(i) t Therefore, we have () ≥ - ∑_i N_ψ(j(i)), ≥ j(i)λ_i, j(i)/λ_ψ(j(i)), j(i) t We complete the proof by showing N_i ≥ N_ψ(j(i)), ≥ j(i)λ_i, j(i)/λ_ψ(j(i)), j(i) for all i, where N_i := |ψ^-1(i)| is the number of jobs assigned to machine i. To see this, observe the following: P_i, < j(i) + N_i, ≥ j(i)x'_j(i)/λ_i, j(i)≥ P_ψ(j(i)), < j(i) + N_ψ(j(i)), ≥ j(i)x'_j(i)/λ_ψ(j(i)), j(i), This is because the LHS and RHS measure the increase of the objective due to j(i)'s presence on machines i and ψ(j(i)), respectively (in other words, the LHS is the increase of the residual objective on machine i when we move j(i) from ψ(j(i)) to i, and the RHS is the decrease of the residual objective on machine ψ(j(i)) for that move): The first term is j(i)'s start time and the second is the number of jobs delayed by j(i), including itself, multiplied by j(i)'s remaining size on the machine. Thus, the inequality is a necessary condition for σ' being the optimal residual schedule for '. Multiplying both sides in Eqn. (<ref>) by λ_i, j(i)/x'_j(i) = 1/x'_i,j(i), we have P_i, < j(i)/x'_i, j(i) + N_i, ≥ j(i)≥P_ψ(j(i)), < j(i)/x'_j /λ_i, j(i) + N_ψ(j(i)), ≥ j(i)λ_i, j(i)/λ_ψ(j(i)), j(i). For the first term, we have P_i, < j(i)/x'_i, j(i) = ∑_j': x'_ij' < x'_i, j(i) x'_ij' / x'_i, j(i)≤∑_j': x'_ij' < x'_i, j(i) 1. Thus, it is at most the number of jobs that have smaller processing times than j(i) on machine i. Thus, the LHS is at most the number of jobs assigned to machine i, i.e., N_i, as desired. As mentioned in the above proof, following the non-migratory residual optimum schedule decreases the residual as much as GD. Further, it is an easy exercise to show that we do not have to compute a residual optimum until a new job arrives. All the remaining analysis remains the same and is omitted. §.§ Unrelated Machines for Weighted Jobs In this section, we extend the GD algorithm to handle weighted jobs and give an algorithm that is (1+)-speed O(1 / ^3)-competitive algorithm for total weighted flow time. One may try the previous residual LP (Eqn. (<ref>)) where = { z_j(t̃) = ∑_iλ_ij y_ij(t̃) | ∑_i y_ij(t̃) ≤ 1 ∀ j, t̃ ; ∑_j y_ij(t̃) ≤ 1 ∀ i, t̃} Unfortunately, this objective does not seem to be LS. Instead, we consider essentially the same LP that was used in <cit.> for their dual fitting analysis. However, our algorithm is very different from the immediate-dispatch algorithm that was given in <cit.>. 
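Before turning to the weighted LP, here is a small sketch of the bipartite-matching residual optimum used in the previous subsection, computed with the Hungarian method via scipy.optimize.linear_sum_assignment. It assumes every machine can process every job at a positive rate, and the function name is an illustrative choice of ours.

import numpy as np
from scipy.optimize import linear_sum_assignment

def residual_optimum_unrelated(x, lam):
    """Non-migratory residual optimum f(x) for unweighted unrelated machines.

    x[j]: remaining size of job j; lam[i][j]: rate of machine i on job j (> 0).
    Column (i, k) means "run on machine i in the k-th position from the back",
    with cost k * x[j] / lam[i][j]; f(x) is then a min-cost bipartite matching.
    """
    x = np.asarray(x, dtype=float)
    lam = np.asarray(lam, dtype=float)
    m, n = lam.shape
    # cost[j, i * n + (k - 1)] = k * x[j] / lam[i, j], for k = 1..n
    cost = np.empty((n, m * n))
    for i in range(m):
        for k in range(1, n + 1):
            cost[:, i * n + (k - 1)] = k * x / lam[i, :]
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].sum()

# two machines, three unit jobs; machine 0 is twice as fast on every job
print(residual_optimum_unrelated(x=[1.0, 1.0, 1.0],
                                 lam=[[2.0, 2.0, 2.0], [1.0, 1.0, 1.0]]))

On this example the matching puts two jobs on the fast machine and one on the slow machine, giving a residual optimum of 2.5.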
For notational simplicity, we may use p_ij := p_j / λ_ij to denote the processing time of job j on machine i when j is processed on machine i only. f() := min∑_i,jw_j λ_ij/p_j∫_t̃≥ 0t̃· z_ij(t̃) t̃_(*) + ∑_i,j w_j ∫_t̃≥ 0 z_ij(t̃) t̃_(**) ∑_j z_ij(t̃) ≤ 1 ∀ i, t̃ ∑_i∫_t̃≥ 0λ_ij z_ij(t̃)t̃ = x_j ∀ j This new objective will be shown to be supermodular from the LS property (see Lemma <ref>). Notice that this LP does not force a job to be processed on at most one machine; thus (t̃) may not be a feasible schedule. For this reason, we cannot directly apply the GD algorithm in Section <ref>, which schedules (t̃ = 0). To handle this issue, we consider the following approximate GD: For each machine i, we only process one unit of jobs that appear the earliest in (t̃), ensuring that each job j gets processed up to the fraction of the job assigned to the machine. Formally, let t̃_i be the earliest time that ∑_j ∫_t̃=0^t̃_iz_ij(t̃)/p_ijt̃ = 1. Then we process ẑ_ij := ∫_t̃=0^t̃_iz_ij(t̃)/p_ijt̃ fraction of job j on machine i. We continue this schedule until the fraction of a job on some machine is completed. For brevity, we may refer to this approximate GD simply as GD. If we use the true GD algorithm, which decreases the residual optimum objective the most, we can still show the same guarantee. However, for the sake of analysis, we will consider the approximate GD. We first verify that is a feasible schedule. By definition of the algorithm, we have ∑_jẑ_ij≤ 1 for all i. This, together with the following claim, shows that ẑ_ij is a fractional matching between jobs and machines. Since a fractional bipartite matching can be expressed as a convex combination of integral matchings, we have that ẑ_ij is indeed a feasible schedule. For any job j, we have ∑_iẑ_ij≤ 1. From the second constraint of the LP, we have ∑_iẑ_ij = ∑_i ∫_t̃=0^t̃_iz_ij(t̃)/p_ijt̃≤∑_i ∫_t̃≥ 0^t̃_iλ_ijz_ij(t̃)/p_jt̃ = x_j/p_j≤ 1. For simplicity, we assume wlog that ∑_i ẑ_ij = 1 and ∑_j ẑ_ij = 1, by adding zero-weight jobs. For the analysis, we use the following potential function. Φ(t) = 2/( f(^A(t)) - f (^A(t) || (1/ - 1)^O(t))) Here we note that f (^A(t) || (1/ - 1)^O(t)) is the optimum objective of when we have one copy of each job j in A(t) with remaining size p^A_j(t) and (1 / - 1) copies of each job j in O(t), each with remaining size p^O_j(t). §.§.§ Analysis Overview Before diving into the analysis, we first provide a high-level overview. Since the LP solution does not directly give us a time-indexed feasible schedule, we compromise on true gradient descent. Ideally, the algorithm would process exactly what the LP solution processes at time t̃ = 0. Unfortunately, as discussed before, this may not be feasible. To intuitively understand our algorithm, consider a single machine i where jobs 1, 2, 3, … are processed in this order, each by a half fraction. For simplicity, let us drop i from the notation. Then, the LP solution on the machine processes each job j for p_j / 2 time steps. As mentioned before, fully processing job 1 on the machine may result in an infeasible schedule if job 1 is scheduled before other jobs on another machine in the LP solution. Our algorithm processes job 1 and job 2, each by half on the machine (these two jobs are the first unit fraction of jobs), to ensure a feasible schedule. This is surely not as effective as fully processing job 1 in decreasing (*). However, for all jobs except 1 and 2, our algorithm is equally effective! Further, for the deficit due to jobs 1 and 2, (**) comes to the rescue.
So, if W̃_(1) is the total fractional weight processed by our algorithm, and we let W̃_(2) be the other total fractional weight, our algorithm gets W̃_(1) credit from (**) and W̃_(2) credit from (*). Thus, f(^A(t))) decreases at a rate of at least W̃_(1) + W̃_(2), which is the total fractional weight of the remaining jobs in our algorithm's schedule. To show the algorithm is scalable, we create (1/ - 1) copies of each job in the adversary's schedule and due to this we lose another 1 / factor in the competitive ratio, but this is a minor technicality. The critical observation is that the approximate GD is as effective as GD with the aid of the decrease of (**). As before we will consider the discrete changes and continuous changes separately. §.§.§ Analysis: Discrete Changes The proof of discrete changes of Φ is very similar to Lemma <ref>, so we only highlight the differences. First, we will use generalized flow to show the supermodularity of our new residual objective. f(x) is (discrete-)supermodular. We follow the same strategy as we used to prove Lemma <ref>. Consider the following valuation v_t̃((t̃)) := max∑_i,j(B λ_ij - w_j t̃/p_ij - 1 ) z_ij s.t. 0 ≤∑_i λ_ij z_ij≤ x_j(t̃), ∑_j z_ij≤ 1 for each time t̃. It is easy to see that the LP objective by flipping the sign and adding a big constant is equivalent to max∫_t̃≥ 0 v_t̃((t̃)) t̃ s.t. ∫_t̃≥ 0(t̃) t̃≤ To show f(x) is supermodular, it is sufficient to show that v_t̃ is LS for each time t̃. Toward this end, we can interpret the constraints in (<ref>) in the generalized flow setting as follows. For each machine i, node i has excess capacity 1. For each job j, node j has excess capacity x_j(t̃). Then add an arc from j to i with gain factor 1 / λ_ij for each job j and machine i. Thus the constraints fall in the generalized flow polytope (<ref>). Then we let the cost from j to i be (B λ_ij - w_j t̃/p_ij - 1). From Theorem <ref>, we know v_t̃ is LS which leads to the lemma. For the sake of completeness, we give the proof of the following lemma that bounds the change of jobs' arrival or completion. The change of Φ(t) due to the job's completion or arrival is non-positive. The proof is almost identical to Lemma <ref> although we use a slightly different potential function. Thus we only point out the differences when a new job j arrives. Then f (^A || (1/ - 1)^O) increases by at least: f(^A || p_j _j^A || (1/ - 1 )^O || (1/ - 1 ) p_j _j^O) - f(^A || (1/ - 1 ) ^O) = g_(A ∪ j^A ∪ (1/ - 1 ) O ∪ (1/ - 1 ) j^O) - g_(A ∪ (1/ - 1 ) O) where = ^A || p_j _j^A || (1/ - 1 ) ^O || (1/ - 1 ) p_j _j^O. By supermodularity of g and using the fact that p^A(r_j) = p^O(r_j) = p_j, we have g_(A ∪ j^A ∪ (1/ - 1 ) O ∪ (1/ - 1 ) j^O ) - g_(A ∪ (1/ - 1 ) O) ≥ g_(A ∪ j^A ∪ (1/ - 1 ) j^O) - g_(A) =g_(A ∪ ( 1/ ) j^A ) - g_(A) Then we can decompose it to a telescopic sum and apply the supermodularity as follows. = (g_(A ∪ (1 / ) j^A) )- g_(A ∪ (1 / - 1) j^A ) )+ ... + (g_(A ∪ j^A) - g_(A)) ≥1/ ( g_(A ∪ j^A) - g_(A)) Therefore, Φ(t)'s increase is at most 2/(g_(A ∪ j^A) - g_y(A) - ·1/ ( g_(A ∪ j^A) - g_(A)) ) = 0, as desired. §.§.§ Analysis: Continuous Changes As before, we let W̃^A(t) = ∑_j w_j/p_j p^A_j(t) be the total remaining fractional weight of our algorithm A at time t. Then, let W̃^A_(1)(t) be the fractional weight of jobs processed by the algorithm at t, i.e., W̃^A_(1)(t) := ∑_i,j w_j ∫_t̃ = 0^t̃_iz_ij(t̃)/p_ijt̃ = ∑_i,j w_j ẑ_ij. Let W̃^A_(2)(t) := W̃^A(t) - W̃^A_(1)(t). 
For brevity, in all of the following lemmas, we omit that we consider a time t when there is no jobs' completion and arrival. The following lemma shows that the approximate GD still decreases the new residual optimum effectively. / t f(^A(t)) ≤ -s W̃^A(t). Fix any time t and consider the change during time [t, t + t). Let be the optimum solution of f(^A(t)), i.e, = min_'_^A(t)('). Recall that our algorithm schedules during time interval [t, t + t) and it is given s-speed, i.e., GD processes a job j on machine i by s t ·ẑ_ij fraction, where ẑ_ij := ∫_t̃=0^t̃_iz_ij(t̃)/p_ijt̃. Now following the same strategy in Lemma <ref>, we need to find a feasible schedule ' (with 1-speed) for remaining job sizes ^A(t) - s t where Ẑ_j := ∑_i λ_ijẑ_ij. Then, the construction of ' is as follows. First, we remove s t · z_ij(t̃) / p_ij fraction from at local time t̃≤t̃_i for job j on machine i. Note that this empties out s t units of space on the machine because ∑_j ∫_t̃=0^t̃_iz_ij(t̃)/p_ijt̃ = 1. We obtain ' by removing the empty spaces on each machine. Let's fix machine i and consider the change of (*) in the residual LP (<ref>). For any t̃ > t̃_i, since s t ·∑_j ẑ_ij = s t fraction of jobs completed before time t̃_i, the contribution of z_ij(t̃) in _^A(t + t)(') becomes at most w_j/p_ij(t̃ - s t) z_ij(t̃). Thus, the decrease of f(^A(t)) from (*) is by at least ∑_i,jw_j/p_ij∫_t̃≥t̃_i(t̃ - (t̃ - s t )) z_ij(t̃) t̃ = s t ·∑_i,jw_j/p_ij∫_t̃≥t̃_i z_ij(t̃) t̃ = s t ·W̃^A_(2)(t) Since job j gets processed by s t ·ẑ_ij fraction on machine i, the change of f(^A(t)) from (**) is exactly ∑_i,j w_j ( ∫_t̃≥ 0 z_ij(t̃) t̃ - ∫_t̃≥ 0 z'_ij(t̃) t̃) = ∑_i,j w_j · s t ·ẑ_ij = s t ·W̃^A_(1)(t) Therefore, the lemma follows by putting two bounds together. In the following |_GD and |_ mean that we consider the change due to GD's processing and 's processing alone respectively. -/ t f (^A(t) || (1/ - 1)^O(t)) |_GD≤ 2s W̃^A(t) + s(1/ - 1) W̃^O(t). Fix a time t and consider the change during [t, t + t), where no job arrives or completes assuming that only GD processes jobs. For notational simplicity, let (t):= ^A(t) || (1/ - 1)^O(t). To achieve our goal of upper bounding the following quantity, f((t)) - f((t + t)), we construct a feasible solution ” to _(t). Let ' be an optimum solution of f((t + t)), i.e, ' = min_”'_(t + t)(”'). Let ” be the time-indexed schedule which first has ẑ_ij for s t units of time and then exactly follow ' on each machine i. Clearly, we have f((t)) - f((t + t)) ≤_(t) (”) - f((t + t)) Let's first consider the change of (**) in the LP objective. Since GD processes jobs according to the distribution {ẑ_ij}_j during [t, t+ t), (**) decreases by ∑_i,j w_j ẑ_ij s t = W̃^A_(1) s t, as we observed in the proof of Lemma <ref>. We now turn our attention to the change of (*), which is the fractional weighted completion time of jobs. By scheduling ẑ with speed s, every terms coefficient changes from w_j/ p_ijt̃ to w_j/ p_ij (t̃ - s t). Thus, the decrease is at most the total fractional weight of all jobs times s t, which is (W̃^A(t) + (1 / - 1) W̃^O(t)) s t By adding up the upper bounds on the change of (*) and (**) and using the fact that W̃_(1)^A(t) ≤W̃^A(t), we obtain the lemma. We now consider the change due to the adversary's processing. Note that the RHS consists of W̃^A(t) := ∑_j ∈ A(t) w_jp^A_j(t)/p_j and W^O(t) := ∑_j ∈ O(t) w_j, which denote the total fractional remaining weight of jobs in A(t) and the total integral weight of jobs in O(t), respectively. 
-/ t f (^A(t) || (1/ - 1)^O(t)) |_≤ (1/ - 1) W̃^A(t) + (1/ ) (1/ - 1) W^O(t). The proof of this lemma is very similar to that of Lemma <ref>. We keep the same notation for and '. Since the adversary processing job j with 1 speed is equivalent to processing all (1 / - 1) copies of the same job at the same rate, we will pretend that all the copies are distinct jobs and the adversary has (1/ -1)-speed. Clearly, this only gives more power to the adversary. For simplicity we assume that the adversary schedules an integral matching during [t, t + t), since the extension of the analysis to fractional matching is straightforward. That is, the adversary processes exactly one job j(i) on machine i during [t, t + t) and processes it on no other machines. Let ẑ^* denote this schedule. So, z^*_i, j(i) = 1 but z^*_i, j' = 0 for all j' ≠ j(i). Let ” be a schedule that has ẑ^* for (1/ - 1) t time steps, followed by '. We note that (**) decreases by ∑_i,j w_j ẑ^*_ij (1 / -1) t = ∑_i w_j(i) (1 / -1) t ≤ (1 / -1) W^O(t) t, where the last inequality follows from the fact that j(i) ≠ j(i') for all i ≠ i'. By observing that the total fractional weight in ^A(t) || (1/ - 1)^O(t) is W̃^A(t) + (1 / - 1)W̃^O(t), we can upper bound the change of (*) by (1/ - 1) t (W̃^A(t) + (1 / - 1)W̃^O(t)) We obtain the lemma by adding the two upper bounds and using the fact that W̃^O(t) ≤ W^O(t). Consider any time step t that no arrival or completion of jobs occurs. For < 1/8 and A is given s = 1 + 2-speed, we have / tΦ(t) ≤ - W̃^A(t) + O(1 / ^2) W̃^O(t). From Lemma <ref>, Lemma <ref>, and Lemma <ref>, we have / tΦ(t) ≤2/( -s W̃^A(t) + ( (2s + 1/ - 1) W̃^A(t) + (s + 1/)(1/ - 1) W^O(t) ) ) = 2/( (- + 4^2) W̃^A(t) + O(1 / ) W^O(t) )) ≤ -W̃^A(t) + O(1 / ^2) W^O(t) By Lemma <ref> and Φ(T) = Φ(0) = 0, we have ∫_t ≥ 0/ tΦ(t) ≥ 0. By Lemma <ref>, we have ∫_t ≥ 0W̃^A(t) ≤ O(1 / ^2) ∫_t ≥ 0 W^O(t). LHS is the total fractional weighted flow time of the algorithm and RHS is the total integral weighted flow time of the optimum schedule. Therefore, by Lemma <ref>, we obtain a (1+)-speed, O(1/^3)-competitive algorithm for the integral objective. §.§ Unrelated Machines via Immediate Dispatch In this section, we reproduce the result in <cit.>, which gave a (1+)-speed O(1/ )-competitive algorithm for minimizing total weighted flow time. The analysis of <cit.> was based on dual fitting unlike <cit.>, which gave a (1+)-speed O(1/ ^2)-competitive algorithm using potential functions. We show that potential functions can also achieve an O(1/)-competitive ratio. At the high level, <cit.> considered integral residual optimum unlike <cit.> that used fractional residual optimum. Since we will use exactly the same algorithm as <cit.> and obtain the same competitive ratio of O(1 /), we will only sketch the analysis, focusing on how our approach leverages supermodularity and gradient descent explicitly. The algorithm we consider is non-migratory and immediate-dispatch. Let A_i(t) be the jobs assigned to machine i. Note that {A_i(t)}_i will be a partition of A(t). Let ^A_i(t) be the (vector of the) remaining sizes of jobs assigned to machine i at time t in our algorithm and define ^O_i(t) analogously for the optimum schedule. We define f(^A_i(t)) as the residual optimum at time t of the jobs assigned to machine i. Formally, letting p_ij(t) = p_j(t) / λ_ij, f(^A_i(t)) = ∑_j w_j ∑_j' ≤ j s.t. 
w_j' / p_ij'(t) ≥ w_j /p_ij(t) p_ij'(t) is the minimum total weighted completion time of jobs in A_i(t) pretending that each job j ∈ A_i(t) has a remaining size p_j(t), weight w_j, arrival time 0; ties are broken in an arbitrary but fixed order. Since f is concerned with the single machine scheduling, it is straightforward to show its supermodularity. Then we let f̅(^A(t)) := ∑_i f(^A_i(t)). We can restate <cit.> algorithm as follows: When a job j arrives at time t, we permanently assign the job j to the machine i that incurs the minimum increment of the residual optimum f̅(^A(t^-)). Then, we process the jobs on each machine using gradient descent, i.e. follow the residual optimum schedule on the machine. For the analysis, we use the following potential function, where f̅(^A(t) || ^O(t)) is defined analogously for jobs in A and O together. Φ(t) = 2/(f̅(^A(t)) - 1/2f̅(^A(t) || ^O(t) )) While we only consider the non-migratory optimum for brevity, it can be extended to the migratory case, following the approach described in <cit.>. Continuous Changes. If we focus on machine i, we only need to consider the change of 2/( f(^A_i(t)) - 1/2 f(^A_i(t) || ^O_i(t) )). This is exactly the same as the single machine scheduling where we can pretend that we only have jobs in A_i(t) and the adversary has jobs in O_i(t). Thus, using Corollary <ref>, we have that / tΦ(t) restricted to machine i is at most - ∑_j ∈ A_i(t) w_j + (1 + 2/) ∑_j ∈ O_i(t) w_j. Summing over all machines we have the following lemma. Consider any time t when no job arrives or is completed by our algorithm or the adversary. When GD is given 1+-speed, / tΦ(t) ≤ -W^A(t) + (1 + 2/) W^O(t). Discrete Changes. The following lemma bounds the potential change due to a job's arrival or completion. Unlike the proofs in the previous section, we take a closer look at the machine to which the job is assigned. If Φ(t) changes discontinuously due to a job's arrival or completion, the change is non-positive. For the same reason we used in the proof of Lemma <ref>, we only need to focus on the arrival case. Suppose a new job j arrives at time t. We use the same notations in Lemma <ref>. So j^A and j^O are job j's copy in the schedule of A and O respectively. We assume wlog that job j^A is assigned to machine 1 by the algorithm. If j^O is assigned to the same machine 1, the change of f̅(^A || ^O) is at least f(^A_1 || p_1j_j^A || ^O_1 || p_1j_j^O) - f(^A_1 || ^O_1) ≥ 2 (f(^A_1 || p_1j_j^A) - f(^A_1) ) where the inequality holds due to the supermodularity of f. Suppose j^O is assigned to another machine, say machine 2. Since the change only occurs on machines 1 and 2, the increase of f̅(^A || ^O) is at least f(^A_1 || p_1j_j^A || ^O_1) - f(^A_1 || ^O_1) + f(^A_2 || ^O_2 || p_2j_j^O) - f(^A_2 || ^O_2) ≥ f(^A_1 || p_1j_j^A) - f(^A_1) + f(^A_2 || p_2j_j^O) - f(^A_2) , where the inequality follows from f's supermodularity. Recall that the algorithm assigns job j to the machine i that gives the minimum increment of f. Thus we have f(^A_1 || p_1j_j^A) - f(^A_1) ≤ f(^A_2 || p_2j_j^O) - f(^A_2) Then, Eqn.(<ref>) is at least 2 (f(_1^A || p_1j_j^A) - f(_1^A) ). Thus, in both cases, we have shown that f̅(^A || ^O) increases by at least 2 (f(_1^A || p_1j_j^A) - f(_1^A) ). Since the change of f̅(^A) is exactly f(^A_1 || p_1j_j^A) - f(^A_1), we can conclude that Φ(t)'s change due to job j's arrival is non-positive. We combine Lemmas <ref> and <ref> as we combined Lemma <ref> and Corollary <ref> at the end of Section <ref>. 
As a result, we can show that the online algorithm is (1+)-speed O(1/)-competitive for the total weighted flow time objective. § OTHER RELATED WORK Gross substitutes valuations were introduced by Kelso and Crawford in economics <cit.>. Interestingly, the same valuations were introduced in other areas under different names, such as M^♮-concave functions <cit.>, matroidal maps <cit.>, and valuated matroids <cit.>. Thus, there are several equivalent ways to characterize them. The interested reader is referred to the nice survey <cit.>, the tutorial talk given in ACM EC 2018 <cit.>, and the extensive survey <cit.>. We discuss two assumptions we make in this paper: clairvoyance and free preemption. Clairvoyant algorithms are assumed to know a job's size upon its arrival; in contrast, non-clairvoyant algorithms do not until completing the job. See <cit.> for some examples of non-clairvoyant algorithms. There has recently been work on designing online algorithms when we are given job sizes that are not completely accurate <cit.>, where the goal is to obtain a schedule almost as good as what the best clairvoyant algorithm can offer when the (predicted) sizes are not far from the actual sizes. In this paper we allow online algorithms to preempt jobs for free. In fact, flow time minimization becomes computationally very hard without preemption. Thus, non-preemptive scheduling has been studied with resource augmentation even in the offline setting <cit.>. In this work we assume there are no precedence constraints among jobs. For online precedence-constrained scheduling for total (weighted) flow time, see <cit.>. There exists a large literature on the “delay” variants of online problems, where the online algorithm can procrastinate its decisions, incurring delay cost, e.g., <cit.>. The main difference is that in our problem, we have capacitated resources that we can use to process jobs at each time, while the online algorithms pay extra cost for taking an action such as buying a set in their setting. Deng <cit.> study Walrasian equilibrium for a market where jobs act as agents bidding for CPU time. Their work considers the single-machine setting, where jobs have different valuations depending on their own completion time. The focus is on the existence and computability of the equilibrium. § MISSING PROOFS FOR VALUATION FUNCTIONS §.§ Proof of Lemma <ref> We have v() = min_≥ 0π() + (see <cit.>, Theorem 26; the concavity implies this). Let f(, ) = π() + ·. We can show · is supermodular in and when , ≥ 0, using on each coordinate the fact that a b + a' b' ≤max{a, a'}max{b, b'} + min{a, a'}min{b, b'} for any scalars a, a', b, b' ≥ 0. Then, we have f(-, ) = π() - ·, which is submodular in and . By taking the min over all , we know v(-) is submodular for the following reason. Let g(, ) := f(-, ). Let and ' be such that v(-) = g(, ) and v(-') = g(', '). We then have v(-) + v(-') = g(, ) + g(', ') ≥ g( ∨', ∨') + g( ∧', ∧') ≥ v( - (∨')) + v( - (∧')), which proves v(-)'s submodularity in . So, we have v() + v() = v(- (-)) + v(- (-)) ≥ v(- ((-) ∨ (-))) + v(- ((-) ∧ (-))) = v( ∧) + v( ∨) §.§ Concave Closure of GS: Proof of Theorem <ref> Consider a fixed gross-substitute valuation v:0, 1^n →.[We interchangeably use v(S) and v(_S)] We first observe that for any rational number δ > 0, we can approximate v by a function v':0, 1^n → such that |v(S) - v'(S)| ≤δ for any S ⊆ [n].
This is because due to <cit.> that shows that v is GS if and only if it satisfies a finite number of linear inequalities, we can set up a linear programming with variables {v'(S)}_S ⊆ [n] satisfying the inequalities together with the approximation requirement for all S. The LP has a feasible solution, and particularly a rational number solution since δ∈. Thus, we assume v:0, 1^n → henceforth. We show that for any ∈^n ∩ [0, 1]^n, we can view v^+() as a convolution of multiples copies of v. The proof is not difficult, but we include it for completeness. Define ṽ^+ : ^n ∩ [0, 1]^n → such that ṽ^+() = sup_ = 1 / k! : k ∈_≥ 0max_ / = _1 + ... + _1 / ∑_i = 1^ 1 / v(_i) where _1, ..., _1 /∈0, 1^n. Then, for any ∈^n ∩ [0, 1]^n, ṽ^+() := v^+() where v^+() = max∑_S v(S) λ_S : ∑_Sλ_S _S = , ∑_Sλ_S = 1, λ_S ≥ 0 is the concave closure of v. Fix a ∈^n ∩ [0, 1]^n. Let λ_S_S be an optimal solution of v^+(); note that λ_S ∈ for all S. Consider an such that λ_S is an integer multiple of for all S. To show ṽ^+ () ≥ v^+(), we assign _S to λ_S / copies of v. Then, ṽ^+ () ≥∑_S (λ_S / ) v(_S) = v^+ (). Conversely, fix an > 0 such that 1 / is an integer, together with any _1, _2, …, _1 / such that / = ∑_i=1^1 / _i. Then, by setting λ_S = |_i = _S_i|, one can immediately show v^+() ≥ṽ^+(). Finally, it is easy to see that ṽ() is well defined as in the definition of ṽ^+(), the quantity monotonically increases as k →∞. Strictly speaking, when we create 1/ copies of v, we also create 1 / copies of each item. Each copy of v is assumed to have no benefit from having more than one copy of any item. It is easy to check that this valuation function is GS over all the item copies. The following corollary is immediate from the proof of the above lemma. For any ∈^n ∩ [0, 1]^n, there exists > 0 such that v^+() = ṽ^+() = v^+(), where v^+() = max_ / = _1 + ... + _1 / ∑_i = 1^ 1 / v(_i) From Lemma <ref>, it is sufficient to show the dual profit of v^+ is submodular. The dual profit π^+() = max_∈^n v^+() - · is submodular. We first assume ∈^n. When ∈^n, we know = max_' ∈^n v^+(') - ·' is in ^n. Thus, from the previous lemma and corollary, we have v^+() = ṽ^+(). Further, for some > 0, we have v^+() = v^+(). Then, we have, π^+() = max_∈_+^n v^+() - · = max_∈_+^nmax_ / = _1 + ... + _1 / ∑_i = 1^ 1 / v(_i) - ·_i = max__1, ..., _1/∈0, 1^n ∑_i = 1^1 / v(_i) - ·_i = ( 1/ ) π() = π() Thus, we have shown that π^+ coincides with π on all rational vectors . Due to continuity, it does on all real vectors . Since v is a gross-substitute valuation, π is submodular. Therefore, π^+ is submodular, as desired. §.§ Generalized Flow is Linear Substitute To show that v is LS, from Lemma <ref>, we know that it is equivalent to showing its dual profit π() is submodular w.r.t price vector ∈^n. The following lemma is due to Fleischer <cit.> based on LP primal-dual. For a combinatorial proof, see <cit.>. Let g(S) denote the (cost of the) min-cost flow with non-negative excess at the set of nodes S ⊆ V ∖ V^-, that is the min-cost flow with excess vector b' defined as b'_v = b_v for v ∈ S ∪ V^-; and b'_v = 0 for the other nodes. Then, g is a supermodular. Intuitively, g measures the cost of the min-cost flow of the underlying network G when we `switch' on a subset S of sources V^-; all the other sources do not generate any flow. As we add more sources, they create more congestion, which causes supermodularity. We show that Lemma <ref> implies the valuation function we consider is LS. The proof idea is relatively simple. 
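Before giving it, we note that the concave closure v^+ used above can be made concrete for a small ground set by solving the defining linear program over all subsets. The following brute-force Python sketch (exponential in n; all names and the additive example are our own illustrative choices) is only meant to make the definition v^+(x) = max{ ∑_S λ_S v(S) : ∑_S λ_S 1_S = x, ∑_S λ_S = 1, λ ≥ 0 } concrete and is not part of the argument.

# Brute-force evaluation of the concave closure v^+ at a fractional point x.
from itertools import combinations
from scipy.optimize import linprog

def concave_closure(v, x):
    """v: dict frozenset(S) -> value for every subset S of [n]; x: list in [0,1]^n."""
    n = len(x)
    subsets = [frozenset(c) for r in range(n + 1) for c in combinations(range(n), r)]
    c = [-float(v[S]) for S in subsets]           # maximize -> minimize the negative
    A_eq = [[1.0 if i in S else 0.0 for S in subsets] for i in range(n)]
    A_eq.append([1.0] * len(subsets))             # the lambda_S sum to one
    b_eq = list(x) + [1.0]
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0.0, None))
    return -res.fun

# Example: an additive valuation v(S); its concave closure is linear in x.
vals = [1.0, 2.0, 3.0]
v = {frozenset(S): sum(vals[i] for i in S)
     for r in range(4) for S in combinations(range(3), r)}
print(concave_closure(v, [0.5, 0.0, 1.0]))        # expect 0.5*1 + 1*3 = 3.5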
Due to Lemma <ref>, it suffices to show the dual profit function is submodular. Towards that end, we simulate increasing the price of a job's processing unit by adding/activating new source nodes associated with the job. We first show submodularity when the prices are rational numbers, i.e., ∈^n, for any > 0 such that 1 / is an integer, and extend it to arbitrary prices. For any ∈ 1 / _+, π^+() = max_∈_+^n v() - · is submodular where ∈_+^n. We construct an auxiliary network G̃ such that the min-cost flow of G̃ captures the dual profit -π^ as follows. We extend G to G̃ by adding nodes and arcs as follows. Without declaration, the flow gain γ_e is 1 and the capacity u_e is ∞ for all new arcs e, and the excess b_v are ∞ for all new nodes. Consider the source node for job j in G. Let q_j be its price. First, add a new node j_0 to G and connect it to j by an arc (j_0,j) with cost - c_j. Then we add a series of arcs: for every j ∈_+, we add a new node j_i, and connect it to j_i-1 with cost . Finally, we set the cost of each arc in G to be 0. Then we let g(S) denote the min-cost flow with nonnegative excess at the set of nodes S, that is the excess b' defined as b_v' = b_v for all v ∈ S ∪ V^-; and b_v' = 0 for the other nodes. Now fix any price vector ∈_+^n. Let S_ be a set of nodes that contains j_q_j / , j_q_j / + 1, ... for each j ∈ J. Then we notice that -π^+() is equal to g(S_). Consider two price vectors , ' ∈_+^n and ≤', and a job j. Note that π^+( + _j) - π^+() = - g(S_ + _j) + g(S_) and S_ + _j + j_q_j / = S_. Because ≤', we have S_'⊆ S_. Then the lemma follows from the supermodularity of g due to Lemma <ref>. The dual profit π : _+^n → is submodular. For any feasible of v, we can write π() = max_ ( - ) ·. Then, it is clear that π is continuous. Suppose that π is not submodular. Let , ∈_+^n be the price vectors that violate the submodularity. Then, we have π() + π() < π(∨) + π(∧) Let Δ be the difference of the above inequality. From the continuity of π, for some infinitesimally small ' > 0, we have |π() - π(')|, |π() - π(')|, |π(∨) - π(' ∨')| and |π(' ∧) - π(' ∧')| are less than ', for some δ such that - '_1 < δ and - '_1 < δ where ',' ∈_+^n for some ∈ 1 / _+. Since Δ - 4 ' > 0, we have π^+(') + π^+(') < π^+(' ∨') + π^+(' ∧') This would contradict the submodularity of π^+ from Lemma <ref>, and thus we have the lemma. § GRADIENT DESCENT IS NOT A PANACEA In this section, we briefly show that gradient descent is not O(1)-competitive with any O(1)-speed. In the (pull-based) broadcast scheduling <cit.>, the server stores some n pages of useful information. At each time some jobs arrive, each asking for a page. For simplicity, assume all pages are unit-sized. When the server broadcasts a page, it satisfies all jobs requesting the same page and it can broadcast only one page at a time. Scalable algorithms are known for this problem <cit.>. Suppose gradient descent has s-speed and s is an integer. Consider the following instance. There are special pages, e_1, e_2, …, e_s. At each time, a total of 2s jobs arrive, two jobs asking for each special page. Further, in every s+1 time steps, one job arrives asking for a distinct page. Suppose the instance ends at time (s+1)T. At each time, gradient descent processes the 2s jobs requesting the special pages, having no time to work on any other jobs. Thus, at time T, it has Ω(T) jobs alive, which have Ω(T) flow time on average. Therefore, gradient descent's total flow time is Ω(T^2). 
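To see the Ω(T^2) bound concretely, the following toy simulation (a sketch with arbitrary parameters; the accounting conventions are ours) replays the instance just described from gradient descent's side: with s-speed, every broadcast goes to a special page, so the unique-page requests are never served and accumulate linearly.

# Toy simulation of the lower-bound instance: with s-speed, gradient descent
# spends every broadcast on the s special pages and never serves the
# unique-page requests, so the number of alive requests grows linearly.

def simulate_gd(s, T):
    alive_unique = []           # arrival times of still-unserved unique requests
    flow_time = 0.0
    for t in range(1, (s + 1) * T + 1):
        # 2s special-page requests arrive each step; every s+1 steps a unique one arrives
        if t % (s + 1) == 0:
            alive_unique.append(t)
        # GD's s broadcasts all go to the s special pages (largest backlog),
        # clearing exactly the 2s special requests that arrived this step.
        flow_time += 2 * s * 1          # each special request waits one step
        flow_time += len(alive_unique)  # unserved unique requests keep accruing flow time
    return len(alive_unique), flow_time

for T in (50, 100, 200):
    n_alive, ft = simulate_gd(s=3, T=T)
    print(T, n_alive, ft)   # alive requests grow like T, total flow time like T^2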
On the other hand, the adversary can repeat the following: Broadcast each of the special s pages and then the unique other page requested. It is easy to see that it satisfies every job within O(1)-time step. Therefore, its total flow time is O(T). § MAKING GRADIENT DESCENT RUN IN POLYNOMIAL TIME The high-level idea is to replace time slots with intervals of exponentially increasing lengths to reduce the number of variables to consider. This idea is commonly used in approximation algorithms to make a time-indexed LP compact. While it is straightforward to see it only losing 1+ factor in the approximation ratio, here we should be more careful as we need to show the change of the approximate LP optimum (residual optimum) is 1+-approximate as opposed to what we obtain with the exact residual optimum. We define time intervals := { I_h := [a_h, a_h+1) | h ≥ 1}, where a_1 < a_2 < … are the integers in [10/ρ^2] ∪{⌊10/ρ^2 (1+ρ)^l ⌋ | l ≥ 1 }, in increasing order, for some small constant ρ > 0. We will set ρ later and we will assume 1/ ρ is an integer. Let |I| denote the length of the interval I; more precisely, the number of integer time steps in I. The following properties of are immediate from the construction. The collection of intervals has the following properties. Let I_H' be the first interval of length greater than 1. * For all h ≥ 1, |I_h | ≤ |I_h+1|. * All intervals I_1, I_2, … I_H'-1 have length 1. * For all h ≥ H', |I_h| ≥1/1+2ρ |I_h+1|. In the residual optimum, we will pretend that time steps in each interval in are all identical. Towards this end, define g(t) to be a_h where I_h := [a_h, a_h+1) is the unique interval in including t. So, we will consider the following LP. f():= min∑_j ∈ A(t) w_j/p_j∑_t̃≥ 1 g(t̃) · z_j(t̃) ∑_t̃≥ 1 z_j(t̃) t̃ = x_j ∀ j ∈ A(t) (t̃) ∈ ∀t̃≥ 1 Note that in reality, g(t̃) is the same for all t̃∈ I_h, so we can aggregate the variables {z_j(t̃)}_t̃∈ I_h into one; this approximate LP is of compact size. However, we take this full expanded view for the sake of analysis, assuming wlog that z_j(t̃) is the same for all t̃ in each I_h, for any fixed job j. It is easy to see that replacing t̃ with g(t̃) does not change the residual objective's supermodularity and monotonicity. The only properties we need to check is that the residual objective's change rate remains the same within 1+ factor, under GD and the adversary, which can be easily offset by extra 1+O()-speed augmentation. So, henceforth we aim to show it. GD remains effective in decreasing the residual optimum. We first show that processing following the optimum solution to the above LP in the first time step decreases the objective by an amount that is almost equal to the total fractional weight of all jobs. Let := ⟨_1, _2, …, ⟩ be the optimum solution to _. We may use inequalities in the subscript to consider a partial schedule of ; for example, _≥ 2 :=⟨_2, _3, …, ⟩. Let := ⟨_1, _2, …, ⟩ be the optimum solution to _. Then, f() - f( - _1) ≥_(_≥ 1) - _ - _1(_≥ 2) ≥1/1+2ρW̃(t). Before beginning proving the lemma, we first make a simple observation, which will be useful throughout this section. Intuitively, one wants to complete more fractional weights earlier to minimize the objective, which is formally stated below. Let V(τ) :=∑_j w_j/p_j z_j(τ). We have V(a_h) ≥ V(a_h+1). Otherwise, swapping (a_h) and (a_h+1) decreases the LP objective, contradicting 's optimality. The first inequality, f() - f( - _1) ≥_(_≥ 1) - _ - _1(_≥ 2), follows from the observation that _≥ 2 is a feasible schedule to _ - _1. 
Let I_H be the last interval where z_j(a_H) has a non-zero value for some j. We then have, _(_≥ 1) - _ - _1(_≥ 2) = ∑_j w_j/p_j∑_h ∈ [H] (a_h - a_h-1) z_j(a_h) = ∑_j w_j/p_j∑_h ∈ [H] |I_h-1| z_j(a_h) ≥ ∑_j w_j/p_j∑_h ≤ H' |I_h-1| z_j(a_h) + ∑_j w_j/p_j∑_H' < h ≤ H |I_h-1| z_j(a_h) ≥ ∑_j ∑_h ≤ H'w_j/p_j z_j(a_h) + ∑_j w_j/p_j∑_H' < h < H1/1+2ρ|I_h| z_j(a_h) = ∑_τ < a_H' V(τ) + 1/|I_H'|∑_a_H'≤τ < a_H'+1 V(τ) + 1/1+2ρ∑_τ≥ a_H'+1 V(τ) ≥ 1/1+2ρ∑_τ V(τ) = 1/1+2ρW̃(t) The penultimate inequality follows from Claim  <ref> and |I_H'| ≤ 2ρ a_H'. By setting ρ small enough (and using slightly more speed augmentation), we will argue that GD effectively decreases the residual optimum. No need to recompute the residual optimum until a new job arrives. In the above, we only showed the GD effectively decreases the residual objective for one time step. Here, we further argue that we can just process _1, _2, ...., _t̃ until a new job arrives at time t+ t̃. To this end, we pretend that the residual changes by _ - _≤τ -1(_≥τ) - _ - _≤τ(_≥τ + 1) when we process z_τ at time t+τ. This is because until a new job arrives at time τ', the residual decreases by at least _ - _≤ 0(_≥ 1) - _ - _≤τ'(_≥τ' + 1), and we can decompose it as a telescopic sum, where we pretend that the residual decreases by the amount shown in Eqn. (<ref>) at each time τ. No algorithm can decrease the residual optimum much more than GD. We need to argue that no algorithm can decrease the residual optimum by more than W̃(t) (with speed 1). To see this, fix an algorithm and suppose it processes _1 in the first time slot, and the optimum schedule for the remaining jobs sizes, - _1, is ⟨_2, _3, …, ⟩. Our goal is to upper bound, f() - _ - _1 (_≥ 2) ≤ _ - _0 (_≥ 1) - _ - _1 (_≥ 2) ≤ ∑_j w_j/p_j∑_h ∈ [H] (a_h - a_h-1) z_j(a_h) = ∑_j w_j/p_j∑_h ∈ [H] |I_h-1| z_j(a_h) ≤ ∑_j w_j/p_j∑_h ∈ [H] |I_h| z_j(a_h) = ∑_τ V(τ) = W̃(t) No algorithm can decrease the residual optimum by more than W̃(t) at a time t. Putting the Pieces Together. We use the same potential function we used in the proof of Theorem <ref>, except that f is replaced with the approximate function we consider in Eqn. (<ref>). All the other properties such as supermodularity remain to be satisfied with the approximate residual LP. Thus, the former analysis for discrete events such as jobs arrival or completion continues to hold true (Lemmas <ref> and <ref>). The bound in Lemma <ref> becomes / t f(^A(t)) = - s/1+2ρW̃^A(t) thanks to Lemma <ref>. The bound in Lemma <ref> remains unchanged due to Lemma <ref>. Then, assuming the online algorithm is given (1+)(1+2ρ)-speed, the bound in Corollary <ref> becomes d/dtΦ(t) ≤ -W̃^A(t) + 1/((1+)(1+2ρ) + 2) W̃^O(t). We can set ρ to to obtain an algorithm that is (1+)(1+2)-speed O(1/ )-competitive for the fractional objective. By scaling appropriately, we have Theorem <ref>, even with the approximate residual optimum. Thus, we have shown polynomial time algorithms achieving the theorem—more precisely, assuming that we have a poly-time separation oracle for whether ∈ or not. As a result, We also have polynomial time algorithms for Theorem <ref> as the proof of the theorem reduces the resource view to the processing rate view. alpha
http://arxiv.org/abs/2409.02435v1
20240904042559
Uniform-in-time propagation of chaos for second order interacting particle systems
[ "Yun Gong", "Zhenfu Wang", "Pengzhi Xie" ]
math.AP
[ "math.AP", "math.PR" ]
Uniform-in-time propagation of chaos for second order interacting particle systems Yun Gong [Beijing International Center for Mathematical Research, Peking University, Beijing 100871, China. [email protected].] Zhenfu Wang [Beijing International Center for Mathematical Research, Peking University, Beijing 100871, China. [email protected].] Pengzhi Xie [Department of Finance and Control Sciences, School of Mathematical Sciences, Fudan University, Shanghai 200433, China. [email protected].] September 9, 2024 We study the long time behavior of second order particle systems interacting through global Lipschitz kernels. Combining the hypocoercivity method in <cit.> and the relative entropy method in <cit.>, we are able to overcome the degeneracy of the diffusion in the position direction by controlling the relative entropy and the relative Fisher information together. This implies the uniform-in-time propagation of chaos through the strong convergence of all marginals. Our method works at the level of the Liouville equation and relies on the log-Sobolev inequality of the equilibrium of the Vlasov-Fokker-Planck equation. § INTRODUCTION §.§ Framework In this article, we consider the stochastic second order particle system for N indistinguishable point-particles, subject to a confining external force - ∇ V and an interaction kernel K, { dx_i(t) = v_i(t) dt, dv_i(t) = - ∇ V(x_i(t)) dt + 1/N∑_j ≠ i K(x_i(t) - x_j(t)) dt - γ v_i(t) dt + √(2 σ) dB_t^i, . where i = 1,2,...,N. We take the position x_i in Ω, which may be the whole space ℝ^d or the periodic torus 𝕋^d, while each velocity v_i lies in ℝ^d. Note that { (B_·^i) }_i=1^N denotes N independent copies of Wiener processes on ℝ^d. We take the diffusion coefficient √(2 σ) before the Brownian motions as a constant for simplicity. Usually one can take σ = γβ^-1, where the constant γ > 0 denotes the friction parameter and β is the inverse temperature. In more general models, σ may depend on the number of particles N, the positions of the particles x_i, etc. In many important models, the kernel is given by K = - ∇ W, where W is an interaction potential. A well-known example for W is the Coulomb potential. We recall that the Hamiltonian energy H: (Ω×ℝ^d)^N → ℝ associated to the particle system (<ref>) reads as H(x_1,v_1,...,x_N,v_N) = 1/2∑_i=1^N |v_i|^2 + ∑_i=1^N V(x_i) + 1/2N∑_i=1^N ∑_j ≠ i W(x_i - x_j), which is the sum of the kinetic energy 1/2∑_i=1^N |v_i|^2 and the potential energy U(x_1,...,x_N) defined by U(x_1,...,x_N):= ∑_i=1^N V(x_i) + 1/2N∑_i=1^N ∑_j ≠ i W(x_i - x_j). We recall the Liouville equation as in <cit.>, which describes the joint distribution f_N of the particle system (<ref>) on (Ω×ℝ^d)^N, ∂_t f_N + ∑_i=1^N v_i ·∇_x_i f_N + ∑_i=1^N ( - ∇_x_i V(x_i) + 1/N∑_j ≠ i K(x_i - x_j) ) ·∇_v_i f_N = σ∑_i=1^N Δ_v_i f_N + γ∑_i=1^N ∇_v_i· (v_i f_N). We define the associated Liouville operator as L_N f_N = ∑_i=1^N v_i ·∇_x_i f_N + ∑_i=1^N ( -∇_x_i V(x_i) + 1/N∑_j ≠ i K(x_i - x_j) ) ·∇_v_i f_N - σ∑_i=1^N Δ_v_i f_N - γ∑_i=1^N ∇_v_i· (v_i f_N).
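As a concrete illustration of the particle system (<ref>) (not part of the analysis; the kernel, confining potential and parameter values below are our own illustrative choices), a simple semi-implicit Euler-Maruyama discretization can be written in a few lines of NumPy.

# Euler-Maruyama sketch of the second order particle system with the
# illustrative choices V(x) = |x|^2/2 and W(x) = |x|^2/2, so K(x) = -x.
import numpy as np

def simulate(N=200, d=2, gamma=1.0, sigma=0.5, dt=1e-2, steps=2000, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(N, d))
    v = rng.normal(size=(N, d))
    for _ in range(steps):
        diff = x[:, None, :] - x[None, :, :]      # x_i - x_j, shape (N, N, d)
        K = -diff                                  # K(x_i - x_j) = -(x_i - x_j)
        interaction = K.sum(axis=1) / N            # (1/N) sum_{j != i}; the j = i term is 0
        grad_V = x                                 # gradient of |x|^2/2
        drift = -grad_V + interaction - gamma * v
        v = v + drift * dt + np.sqrt(2 * sigma * dt) * rng.normal(size=(N, d))
        x = x + v * dt
    return x, v

x, v = simulate()
print(x.mean(axis=0), (v**2).sum(axis=1).mean())   # rough center of mass and mean kinetic energy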
As the mean field theory indicates, when N →∞, the limiting behavior of any single particle in (<ref>) is described by the McKean-Vlasov SDE { dx(t) = v(t) dt, dv(t) = ( - ∇ V(x(t)) + K ∗ρ(x(t)) ) dt - γ v(t) dt + √(2σ) dB_t, . where (x,v) ∈Ω×ℝ^d and (B_·) denotes a standard Brownian motion on ℝ^d. We then denote its phase space density by f_t = Law(x(t), v(t)) and the spatial density by ρ(t,x) = ∫_ℝ^d f(t,x,v) dv. The law f_t further satisfies the mean-field equation or the nonlinear Fokker-Planck equation ∂_t f + v ·∇_x f - ∇ V ·∇_v f + (K ∗ρ) ·∇_v f = σΔ_v f + γ∇_v · (vf). If σ = γβ^-1, one can check that its nonlinear equilibrium satisfies the following equation f_∞ = 1/Z e^- β (V(x) + 1/2|v|^2 + W ∗ρ_∞), where Z is the constant that makes f_∞ a probability density. In the literature, one may use the “formal equilibrium" of Eq.(<ref>) at time t to refer to f̂_t = 1/Ẑ e^- β (V(x) + 1/2|v|^2 + W ∗ρ_t), again where Ẑ is the constant that makes f̂_t a probability density. It is well-known that the large N limit of the particle system (<ref>) is mathematically formalized by the notion of propagation of chaos originating in <cit.>. See also for instance the surveys <cit.>. Recently, many important works have quantified propagation of chaos for different kinds of first order interacting particle systems with singular kernels. See for instance <cit.> for detailed descriptions of recent developments. However, for 2nd order systems, i.e. Newton's dynamics for interacting particles, the natural question is to consider the mean field limit of a purely deterministic problem, that is σ=0 in Eq. (<ref>), or, as in our setting, with the diffusion acting only on the velocity variables. Thus the limiting equation (<ref>) has the Laplacian term only in its velocity variable. Consequently, we cannot treat more general kernels K and long time propagation of chaos for 2nd order systems by exploiting entropy dissipation as easily as in the first-order setting, for instance as in <cit.>. The main purposes of this article are two-fold. Firstly, we study the long time convergence of the solution to the N-particle Liouville equation (<ref>) toward its unique equilibrium (i.e. the Gibbs measure) uniformly in N. Secondly, we establish propagation of chaos of (<ref>) (or (<ref>)) toward (<ref>) (or (<ref>)) uniformly in time. We combine the entropy method developed in <cit.> with the hypocoercivity method refined in <cit.> to overcome the degeneracy of the diffusion in the position direction. Here we deal with the cases where the interaction kernel is globally Lipschitz and the strength of the diffusion σ is large enough. Uniform-in-time propagation of chaos cannot hold in full generality; one of the most critical issues is that the non-linear equilibrium of (<ref>) might not be unique. Furthermore, for systems (<ref>) with singular interactions K, for instance the Coulomb interactions, even the mean-field limit/propagation of chaos for general initial data on a fixed finite time horizon, for instance t ∈ [0, T], is still widely open. The recent progress can be found for instance in <cit.>. Uniform-in-N convergence of (<ref>) toward its equilibrium is seldom investigated for systems even with very mild singularities. Those will be the objects of our further study. Let us briefly describe how we combine the relative entropy with hypocoercivity to establish propagation of chaos. To illustrate the ideas, we pretend all solutions are classical and there is no regularity issue when we take derivatives.
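The nonlinear equilibrium above is characterized by a self-consistency relation: since the velocity marginal is Gaussian, the spatial marginal satisfies ρ_∞ ∝ e^{-β(V + W∗ρ_∞)}. The following one-dimensional Python sketch (illustrative V, W and grid of our choosing; the damped fixed-point iteration is only guaranteed to converge for weak coupling, consistent with the smallness or convexity assumptions used later) makes this relation concrete.

# Fixed-point sketch of the self-consistency relation rho ∝ exp(-beta (V + W * rho)).
import numpy as np

beta = 1.0
xs = np.linspace(-6, 6, 601)
dx = xs[1] - xs[0]
V = xs**2 / 2                                     # confining potential
W = lambda r: 0.2 * r**2 / 2                      # weak quadratic interaction potential

rho = np.exp(-beta * V); rho /= rho.sum() * dx    # initial guess
for _ in range(200):
    conv = np.array([(W(x - xs) * rho).sum() * dx for x in xs])   # (W * rho)(x)
    new = np.exp(-beta * (V + conv)); new /= new.sum() * dx
    if np.abs(new - rho).max() < 1e-10:
        break
    rho = 0.5 * rho + 0.5 * new                   # damped iteration
print(rho.max(), (xs * rho).sum() * dx)           # peak height and mean of rho_infty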
We first recall the evolution of relative entropy in <cit.> for systems with bounded kernel K, / t H_N(f^t_N|f_t^⊗ N) ≤ - 1/N∫_(Ω×^d)^N f^t_N R_N Z - σ/N∫_(Ω×^d)^N f^t_N | ∇_V logf^t_N/f_t^⊗ N|^2 Z, where H_N(f^t_N|f_t^⊗ N) is the normalized relative entropy defined as H_N(f^t_N|f_t^⊗ N) = 1/N∫_(Ω×^d)^N f^t_N logf^t_N/f_t^⊗ N Z, and R_N is the error term defined as R_N = ∑_i=1^N ∇_v_ilog f_t(x_i,v_i) ·{1/N∑_j,j≠ i K(x_i - x_j) - K ∗ρ_t(x_i) }. Hereinafter we write that Z= (x_1, v_1, x_2, v_2, ⋯, x_N, v_N) ∈ (Ω×^d)^N, X= (x_1, x_2, ⋯, x_N) ∈Ω^N and V= (v_1, v_2, ⋯, v_N) ∈ (ℝ^d)^N. Then by integration by parts, one can rewrite (<ref>) as / t H_N(f^t_N|f_t^⊗ N) = - 1/N∫_(Ω×^d)^N∇_V logf^t_N/f_t^⊗ N· R^0_N Z - σ/N∫_(Ω×^d)^N f^t_N | ∇_V logf^t_N/f_t^⊗ N|^2 Z, where R^0_N is a dN-dimensional vector defined as R^0_N = {1/N∑_j=1,j≠ i^N K(x_i - x_j) - K ∗ρ_t(x_i) }_i=1^N. Inspired by the strategy used in <cit.> to deal with the first order dynamics on torus, we expect to find the term - 1/N∫_(Ω×^d)^N f^t_N | ∇_X logf^t_N/f_t^⊗ N|^2 Z in some way to use log-Sobolev inequality for the second order system (<ref>). Fortunately, hypocoercivity in entropy sense in <cit.> motivates us to take the derivative of normalized relative Fisher information I^M_N(f^t_N|f_t^⊗ N), which is defined as I_N^M(f^t_N|f_t^⊗ N) = 1/N∫_(Ω×^d)^N f^t_N | √(M)∇logf^t_N/f_t^⊗ N|^2 Z, where M is a 2Nd × 2Nd positive defined matrix. If we choose the matrix M as the form M = ( [ E F; F G ]), where E,F,G are Nd × Nd positive definite matrices to be specified later, then we obtain / t I_N^M(f^t_N|f_t^⊗ N) ≤ - c_1/N∫_(Ω×^d)^N f^t_N |√(E)∇_X logf^t_N/f_t^⊗ N|^2 Z + c_2/N∫_(Ω×^d)^N f^t_N | √(G)∇_V logf^t_N/f_t^⊗ N|^2 Z + , where c_1, c_2 are two positive constants that depend on M, σ, γ, K and V. The concrete form of error terms will be given in Lemma <ref> and Lemma <ref>. Roughly speaking, time derivative of relative entropy only provides us dissipation in the direction of velocity and time derivative of relative Fisher information provides us further dissipation in the direction of position. The reference measure of (<ref>) and (<ref>) could be more general. For insatnce, if we replace f_t^⊗ N by f_N,∞, we can show the convergence from f^t_N towards f_N,∞ as t →∞. Now we combine these two quantities, relative entropy and Fisher information, to form a new “modulated energy" as ℰ^M_N(f^t_N|f̅_̅N̅) = H_N(f^t_N|f̅_̅N̅) + I^M_N(f^t_N|f̅_̅N̅), where f̅_̅N̅ may take f_t^⊗ N or f_N,∞. The remaining argument is to appropriately control error terms in (<ref>). When f̅_̅N̅ = f_N,∞, linear structure of the Liouville equation (<ref>) makes error terms vanish. When f̅_̅N̅ = f_t^⊗ N, error terms in (<ref>) can be written as -2 ∫_(Ω×^d)^N f^t_N ∇R_N, M ∇logf^t_N/f_t^⊗ N Z for suitable selection of M (See Corollary <ref>), here ∇ = (∇_X, ∇_V). However, the terms ∇_x ∇_v log f_t and ∇_v ∇_v log f_t are too difficult to control to obtain some uniform-in-time estimates for (<ref>) (See Remark <ref>). The key observation is that ∇_x ∇_v log f_t and ∇_v ∇_v log f_t are trivial terms if we replace f_t by its non-linear equilibrium, then we can show / tℰ^M_N(f^t_N|f_∞^⊗ N) ≤ - c'_1/N∫_(Ω×^d)^N f^t_N | ∇_X logf^t_N/f_∞^⊗ N|^2 Z - c'_2/N∫_(Ω×^d)^N f^t_N | ∇_V logf^t_N/f_∞^⊗ N|^2 Z + C/N, where c'_1, c'_2 > 0, C > 0 are some constants that only depend on M, σ, γ, K and V (See Theorem <ref>). To control the error between ℰ^M_N(f^t_N|f_t^⊗ N) and ℰ^M_N(f^t_N|f_∞^⊗ N), we use the convergence result from f_t to its equilibrium f_∞. 
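The error term R^0_N defined above is controlled by a law-of-large-numbers argument; this is where the O(1/N) contributions come from, and the quantitative version for Lipschitz K is the Law of Large Numbers estimate of Section 2.4. A quick Monte Carlo sanity check (a sketch with the illustrative choices K(z) = -z and ρ_t = N(0,1), so that K∗ρ_t(x) = -x; all names are ours) exhibits the expected 1/N decay.

# Monte Carlo illustration of the scaling of R_N^0: for i.i.d. x_1,...,x_N ~ rho
# and Lipschitz K, (1/N) sum_{j != i} K(x_i - x_j) - (K * rho)(x_i) has mean
# square of order 1/N.
import numpy as np

rng = np.random.default_rng(1)

def avg_sq_error(N, reps=500):
    total = 0.0
    for _ in range(reps):
        x = rng.normal(size=N)                        # x_i ~ N(0, 1)
        emp = -((N - 1) * x - (x.sum() - x)) / N      # (1/N) sum_{j != i} K(x_i - x_j)
        exact = -x                                    # (K * rho)(x_i) for K(z) = -z
        total += np.mean((emp - exact) ** 2)
    return total / reps

for N in (100, 400, 1600):
    print(N, avg_sq_error(N))    # decreases roughly like 1/N

Controlling this term uniformly in time is precisely what requires the convergence of f_t toward its equilibrium f_∞.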
There exist some results studying this type of convergence . In this article, we adapt the arguments as in <cit.> for K with small ∇ K _∞ and <cit.> for the case when K = - ∇ W and the interaction energy is convex to establish the following estimate H(f_t|f_∞) ≤ Ce^-c'_3t, where c'_3 > 0 depends on σ, γ, K and C > 0 depends on initial data f_0. We give the sketch of proof of (<ref>) in Theorem <ref> and Theorem <ref> in the appendix. In the end, we combine (<ref>) with (<ref>) through 2-Wasserstein distance to conclude. Finally, we give some comments about terms ∇_x ∇_v log f_t and ∇_v ∇_v log f_t. Unfortunately we cannot directly obtain uniform-in-time estimates about these terms, otherwise we would have / tℰ^M_N(f^t_N|f_t^⊗ N) ≤ - c'_1/N∫_(Ω×^d)^N f^t_N | ∇_X logf^t_N/f_t^⊗ N|^2 Z - c'_2/N∫_(Ω×^d)^N f^t_N | ∇_V logf^t_N/f_t^⊗ N|^2 Z + C/N, for some c_1', c_2', C > 0 and then finish the proof by Gronwall's inequality. Uniform-in-time propagation of chaos can only be expected when the limiting PDE (<ref>) has a unique equilibrium. Indeed, we obtain Theorem <ref> under the assumptions in Theorem <ref> and some assumption on W which prevents the existence of multiple equilibria of Eq.(<ref>). Similar arguments/estimates have also appeared in the first-order setting. For instance, the authors of <cit.> obtained Li-Yau type growth estimates of ∇^2 logρ_t with respect to the position x for any fixed time horizon in particular in the case of 2d Navier-Stokes equation, with a universal constant growing with time t. Those type estimates are very useful when one treats the mean field limit problem set on the whole space <cit.>. §.§ Main results and examples Let us fix some notations first. We denote z = (x,v) ∈Ω×^d as a phase space configuration of one particle, and X = (x_1,...,x_N), V = (v_1,...,v_N) and Z= (X, V) for configurations in position, velocity and phase space of N particles, respectively. We also use that z_i = (x_i,v_i) and Z = (X,V) = (z_1,...,z_N). As the same spirit, we write operator ∇_X = (∇_x_1,..., ∇_x_N), ∇_V = (∇_v_1,..., ∇_v_N) and Δ_V = ∑_i=1^NΔ_v_i. Those are operators on ℝ^dN. For two probability measures μ, ν on Ω×^d, we denote by Π(μ,ν) theset of all couplings between μ and ν. We define L^2 Wasserstein distance as 𝒲_2(μ,ν) = ( inf_π∈Π(μ,ν)∫_(Ω×^d)^2 |z_1 - z_2|^2 π(z_1,z_2) )^1/2. For a measure μ and a positive integrable function f such that ∫_Ω×^d f|log f| μ < ∞, we define the entropy of f with respect to μ as Ent_μ(f) = ∫_Ω×^d f log f μ - ( ∫_Ω×^d f μ) log( ∫_Ω×^d f μ). For two probability measures μ and ν, we define the relative entropy between μ and ν as H(μ|ν) = { ∫_Ω×^d h log h ν, μ << ν h = μ/ν, + ∞, , . and the normalized version of relative entropy for probability measures μ' and ν' on (Ω×^d)^N as H_N(μ'|ν') = 1/NH(μ'|ν'). We also define relative Fisher information between μ and ν as I^M(μ|ν) = { ∫_Ω×^d M ∇ h, ∇ h/hν, μ << ν h = μ/ν, + ∞ otherwise, . where M(z) is a positive definite matrix-valued function such that M(z) ≥κ I for some κ > 0 independent on z ∈Ω×^d. Similarly, the normalized version of relative Fisher information between two probability measures μ' and ν' on (Ω×^d)^N reads as I^M_N(μ'|ν') = 1/N I^M(μ'|ν'). We abuse the notation of probability measures f_N and f as probability densities if they have densities. Finally, we give definitions of some functional inequalities that we will use in the following. 
(log-Sobolev inequality) We call that the probability measure μ on Ω×^d satisfies the log-Sobolev inequality if there exists some constant ρ_ls(μ) > 0 such that for all smooth function g with ∫ g^2 μ = 1, it holds that Ent_μ(g^2) ≤ρ_ls(μ) ∫_Ω×^d (|∇_x g|^2 + |∇_v g|^2) μ. (Weighted log-Sobolev inequality) Let H(z) be a smooth function on ^d ×^d. We call that the probability measure μ on ^d ×^d satisfies the weighted log-Sobolev inequality with weight H if there exists some constant θ > 0, ρ_wls(μ) > 0 such that for all smooth function g with ∫ g^2 μ = 1, it holds that Ent_μ(g^2) ≤ρ_wls(μ) ∫_^d ×^d (H^-2θ|∇_x g|^2 + |∇_v g|^2) μ. In the following, we use ρ_ls (ρ_wls) to denote the (weighted) log-Sobolev inequality constant of the equilibrium measure f_∞ of the limiting PDE (<ref>) and ρ_LS to denote the uniform-in-N log-Sobolev inequality constant of the stationary measure f_N,∞ of the Liouville equation (<ref>). Now we detail assumptions about interaction potential W and confining potential V. Suppose that V(x) ∈ C^2(Ω) and there exist λ > 0, M > 0 such that V(x) ≥λ |x|^2 - M. The first condition means that V goes to infinity at infinity and is bounded below. It can be implied by 1/2∇ V (x) · x ≥ 6λ (V(x) + x^2/2) - A, x ∈Ω for some A > 0. This expression implies that the force -∇ V drags particles back to some compact set. Detailed proof can be found in <cit.>. The second assumption implies that the potential V grows at most quadratically on Ω. Suppose that V(x) ∈ C^2(Ω) and there exists C_V > 0 such that ∇^2 V _L^∞≤ C_V < ∞. We also treat more general confining potentials when Ω = ^d. Suppose that V(x) ∈ C^2(^d) and there exist θ > 0 and C_V^θ such that V^-2θ∇^2 V_L^∞≤ C_V^θ < ∞. Moreover, outside a compact domain on ^d, we assume that V satisfies (i) Δ V ≤κ_1 |∇ V|^2 for some κ_1 ∈ (0,1); (ii) |∇ V|^2 ≥κ_2 V^2θ + 1 for some positive constant κ_2 > 0. Those conditions in Assumption 3 have been explored in <cit.> for kinetic Langevin process with confining potentials greater than quadratic growth at infinity. The boundedness of V^-2θ∇^2 V_L^∞ extend the quadratic growth condition and the other two conditions guarantee the weighted log-Sobolev inequality of f_∞ (See Section <ref>). We also use multiplier method developed in <cit.> to deal with this type of confining potentials. Some important examples have been provided in <cit.>. The first kind of examples is V(x) = |x|^k, k ≥ 2. Then we have Δ_x V = (dk + k^2 - 2k)|x|^k-2, |∇_x V|^2 = k^2|x|^2k-2 and V^-2θ∇^2 V _L^∞∼ |x|^k-2kθ - 2. Finally, we take θ = 1/2 - 1/k. Then all conditions above can be satisfied. Another kind of examples is V(x) = e^a|x|^k provided in <cit.>, which shows that the limit growth of V must be below the exponential growth. Observing that Δ_x V ∼ a k^2|x|^2(k-1)e^a|x|^k, |∇_x V|^2 = a^2k^2e^2a|x|^k and V^-2θ∇^2 V _L^∞∼ e^a(1-2θ)|x|^k, the conditions above imply k < 1 and θ = 1/2. Suppose that W(x) ∈ C^2(Ω) and there exists 0 < C_K < 1/2λ such that ∇^2 W_L^∞≤ C_K < ∞. The mean field functional or the interaction energy F: 𝒫_2(^d) → defined as F(ρ) = ∫_Ω×Ω W(x-y) ρ(x) ρ(y) is functional convex, i.e. for every t ∈ [0,1] and every ν_1, ν_2 ∈𝒫_2(^d), F((1-t)ν_1 + tν_2) ≤ (1-t)F(ν_1) + t F(ν_2). The harmonic interaction potential W(x) = L_W/2|x|^2 with L_W ≤λ/2 is covered by Assumption <ref>. The mollified Coulomb potential when d=3, W(x) = a/(|x|^k + b^k)^1/k or W(x) = arctan(|x|/r_0) 1/|x| with some constant a, b>0 or r_0 > 0. The later form satisfies Assumption <ref> (See Section 3 in <cit.>). 
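For the example V(x) = |x|^k above, the three requirements of Assumption <ref> can be checked numerically from the radial formulas; the following small sketch (the dimension, exponent and test radii are arbitrary choices of ours) confirms that with θ = 1/2 - 1/k the ratios are bounded.

# Numerical sanity check of Assumption 3 for V(x) = |x|^k with theta = 1/2 - 1/k.
import numpy as np

d, k = 3, 4
theta = 0.5 - 1.0 / k
r = np.logspace(1, 4, 7)                         # radii "outside a compact set"

V       = r**k
grad_sq = k**2 * r**(2*k - 2)                    # |grad V|^2
lap     = k * (d + k - 2) * r**(k - 2)           # Laplacian of |x|^k
hess_op = k * (k - 1) * r**(k - 2)               # operator norm of the Hessian (radial eigenvalue)

print("Delta V / |grad V|^2      :", lap / grad_sq)               # -> 0, so (i) holds for large |x|
print("|grad V|^2 / V^(2theta+1) :", grad_sq / V**(2*theta + 1))  # constant k^2, so (ii) holds
print("V^(-2theta) |Hess V|      :", V**(-2*theta) * hess_op)     # constant k(k-1), hence bounded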
The harmonic interaction potential does not satisfy Assumption <ref>. Let us take F(ρ) = ∫_Ω×Ω (x-y)^2 ρ(x) ρ(y), then for t ∈ [0,1], F((1-t)ν_1 + tν_2) ≤ (1-t)F(ν_1) + t F(ν_2) ⟺ ∫_Ω×Ω (x-y)^2 (ν_1 - ν_2)^⊗ 2(x,y) ≥ 0 ⟺ 2 (∫_Ω x^2 ν_1 ) (∫_Ω x^2 ν_2 ) ≥(∫_Ω x^2 ν_1 )^2 + (∫_Ω x^2 ν_2 )^2, which does not hold in general. With those specific statement of assumptions as above, we can now state our main results. Suppose that V satisfies Assumption <ref> and <ref>, and W satisfies Assumption <ref> with C_K < 1. Then for initial data f_N^0 of the Liouville equation (<ref>) such that ℰ^M_N(f^0_N|f_N,∞) < ∞, we have ℰ^M_N(f^t_N|f_N,∞) ≤ e^-ctℰ^M_N(f^0_N|f_N,∞), where c = c(f_N,∞, M) > 0 is explicit and independent of N. We can choose matrix M as the following M = ( [ E F; F G ]), where E = diag{δ a^3,...,δ a^3}, F = diag{δ a^2,...,δ a^2}, G = diag{2δ a,...,2δ a}, and two constants a and δ satisfy a ≤2γ/C_K + C_V, δ≤σ/2(4 + 8a γ)^2. Then the constant c can be taken as c = 1/2(1 + ρ_LS)min{3/2δ a^2, σ/2}. By the selection of δ, we observe that c ∼σ, i.e. the larger diffusion strength we have, the faster convergence from f_N^t to f_N,∞. Our second contribution is the uniform-in-N exponential convergence from f_N to f_∞^⊗ N. Suppose that V and W satisfy one of the following two cases, (i) V satisfies Assumption <ref> and <ref>, W satisfies Assumption <ref>; (ii) V satisfies Assumption <ref> and <ref>, W satisfies Assumption <ref> and ∇ W _L^∞ < ∞. For the first case, we take M_1 as a constant matrix, then for initial data f_N^0 of Eq.(<ref>) such that ℰ_N^M_1(f^0_N|f^⊗ N_∞) < ∞ and σ≥σ^∗_1 > 0, we have H_N(f^t_N|f_∞^⊗ N) ≤ℰ_N^M_1(f^t_N|f^⊗ N_∞) ≤ e^-c_1tℰ_N^M_1(f^0_N|f^⊗ N_∞) + C_1/N, where c_1 = c_1(f_∞, M_1) > 0 and C_1 = C_1(f_∞, σ, γ, C_K, C_V) > 0 are explicit and independent of N. For the second case, we take M_2 as a matrix function on ^2d, then for initial data f_N^0 of Eq.(<ref>) such that ℰ_N^M_2(f^0_N|f^⊗ N_∞) < ∞ and σ≥σ^∗_2 > 0, we have H_N(f^t_N|f_∞^⊗ N) ≤ℰ_N^M_2(f^t_N|f^⊗ N_∞) ≤ e^-c_2tℰ_N^M_2(f^0_N|f^⊗ N_∞) + C_2/N, where c_2 = c_2(f_∞, M_2), C_2 = C_2(f_∞, σ, γ, C_K, C^θ_V) > 0 are explicit and independent of N. For the first case, we also choose M_1 as in Remark <ref>, but two constants a and δ now should satisfy { a ≤min{2 γ/C_K + C_V, 1/4C_K + 2, γ/5120 eρ_ls (C_K+1)^2}, δ≤σ/4[8 + a + 28aγ + 32a^2 γ^2], . then the constant c_1 can be taken as c_1 = δ a^2/16(ρ_ls + 1) which implies c_1 ∼σ, and the lower bound of diffusion constant σ^∗_1 satisfies σ_1^∗≥max{160[10 + 28 γ + 32 γ^2]ρ_lse/a^2 γ, 3200ρ_lseγ}C_K^2. For the second case, we choose M_2 as { E = diag{e(z_1)Id_d × d,...,e(z_N)Id_d × d}, F = diag{f(z_1)Id_d × d,...,f(z_N)Id_d × d}, G = diag{g(z_1)Id_d × d,...,g(z_N)Id_d × d}, . where E,F,G are Nd × Nd diagonal matrices. We choose e(z), f(z) and g(z) as e(z) = δ a^3(H(z))^-3θ, b(z) = δ a^2(H(z))^-2θ, c(z) = 2 δ a(H(z))^-θ, where H(z) = v^2/2 + V(x) + H_0, H_0 > 0, and two constants a and δ satisfy { a ≤min{1/4C_K + 6θ + 2, γ/C_V^θ + C_K, γ/6400 eρ_wls (C_K+1)^2}, δ≤3 σ/8 + 32C_K + m_2', . where m_2' = [4 + 6γ a + 4aθ(2γ + ∇ W_L^∞)]^2 + a[6γ + θ(2γ + ∇ W_L^∞)], then the constant c_2 can be taken as c_2 = δ a^2/16(ρ_wls + 1), which implies c_2 ∼σ, and the lower bound of diffusion constant σ^∗_2 satisfies σ_2^∗≥max{800(40+m_2”)ρ_wlse/a^2 γ, 3200ρ_wlseγ}·max{C_K^2, C_K^3}, where m_2” = [4 + 6γ + 4θ(2γ + ∇ W_L^∞)]^2 + [6γ + θ(2γ + ∇ W_L^∞)]. 
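Before stating this second result, let us record a quick numerical sanity check of the weight matrix M chosen in the remark above (a sketch only; the parameter values are arbitrary but respect the stated constraints on a and δ, and we read the rate as c = min{(3/2)δa^2, σ/2}/(2(1+ρ_LS))). Since E, F, G are scalar multiples of the identity, positive definiteness of M reduces to that of one 2d×2d block.

# Sketch: build one per-particle block of M = [[E, F], [F, G]] from the remark
# above and check positive definiteness; then evaluate the rate c.
import numpy as np

gamma, sigma, C_K, C_V, rho_LS, d = 1.0, 2.0, 0.3, 1.0, 5.0, 2

a     = 2 * gamma / (C_K + C_V)                   # a <= 2 gamma / (C_K + C_V)
delta = sigma / (2 * (4 + 8 * a * gamma) ** 2)    # delta <= sigma / (2 (4 + 8 a gamma)^2)

E, F, G = delta * a**3, delta * a**2, 2 * delta * a
block = np.block([[E * np.eye(d), F * np.eye(d)],
                  [F * np.eye(d), G * np.eye(d)]])

print(np.linalg.eigvalsh(block).min() > 0)                        # M is positive definite
print(min(1.5 * delta * a**2, sigma / 2) / (2 * (1 + rho_LS)))    # the rate c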
Theorem <ref> implies that second order particle system (<ref>) not only exponentially converges to its equilibrium, but also converges to the unique mean field equilibrium as N →∞. If we take t →∞, the results (<ref>) and (<ref>) imply that, H_N(f_N,∞|f^⊗ N_∞) ≤ℰ_N^M_i(f_N,∞|f^⊗ N_∞) ≤C/N, i =1,2, which offers us a kind of dynamical approach to prove the concentration of the Gibbs or stationary measure of the second order particle system around the nonlinear equilibrium of the limiting equation (<ref>). Combining the exponential convergence from f_t to f_∞, we could replace f_∞ by f_t in last theorem so that avoid the estimates about ∇log f_t and ∇^2 log f_t. Based on this observation, we establish the uniform-in-time propagation of chaos both in the sense of the Wasserstein distance and relative entropy. Suppose that V and W satisfy one of the following two cases, (i) V satisfies Assumption <ref> and <ref>, W satisfies Assumption <ref>. Moreover, we assume either C_K is small or interaction functional F satisfies Assumption <ref>. (ii) V satisfies Assumption <ref> and <ref>, W satisfies Assumption <ref> and ∇ W _L^∞ < ∞. Moreover, we assume either C_K is small or interaction functional F satisfies Assumption <ref>. For the first case, for initial data f_N^0 of Eq.(<ref>) such that ℰ_N^M_1(f^0_N|f^⊗ N_∞) < ∞, f^0 of Eq.(<ref>) such that ℰ^M(f_0|f̂_0) < ∞, and σ≥σ_1^∗, we have 𝒲_2^2(f_N,k, f^⊗ k) ≤ C_1'k e^-c'_1t + C_1 k/N, where c'_1 = min{c,c_1} > 0 and C'_1 = C'_1(f_N^0, f_0, f_∞, ρ_LS) > 0 are explicit and independent of N. For the second case, for initial data f_N^0 of Eq.(<ref>) such that ℰ_N^M_2(f^0_N|f^⊗ N_∞) < ∞, f^0 of Eq.(<ref>) such that ℰ^M_2'(f_0|f̂_0) < ∞, and σ≥σ_2^∗, we have 𝒲_2^2(f_N,k, f^⊗ k) ≤ C_2'ke^-c'_2t + C_2k/N, where c'_2 = min{c_2, c_2”} > 0 and C'_2 = C'_2(f_N^0, f_0, f_∞) > 0 are explicit and independent of N. The constants C_1' and C_2' are taken as C_1' = (1+ρ_LS)[ℰ_N^M_1(f^0_N|f^⊗ N_∞) + ℰ^M(f_0|f̂_0)], C_2' = ℰ_N^M_2(f^0_N|f^⊗ N_∞) + ℰ^M'_2(f_0|f̂_0), and c_2”, M_2' can be found in Theorem <ref>. The main difference of assumptions in Theorem <ref> compared with those in Theorem <ref> is that C_K is small or the interaction functional F is convex. Those two conditions make sure that f_t exponentially converges to f_∞. The small condition of C_K comes from Theorem 10 in <cit.>, which obtains the exponential convergence from f_t to f_∞ by uniform log Sobolev inequality of f_N,∞. The convexity condition of F is inspired by Theorem 2.1 in <cit.>, this kind of condition avoid the smallness assumption on C_K. We extend their result to more general confining potentials in the Appendix. The uniform-in-time propagation of chaos of second order particle system (<ref>) has been investigated in <cit.> and <cit.>. Compared with <cit.> that exploits the coupling method, our result applies to more general confining potentials. In terms of Theorem 2.3 in <cit.>, we do not need the uniform-in-m log-Sobolev inequality of measure proportional to e^-δ F/δ m(m,x), which is not very easy to verify. §.§ Related literature Hypocoercivity. Hypocoercivity is an important analytical tool to study the long time behavior of Langevin dynamics and the corresponding kinetic Fokker-Planck equation. It was initiated by Villani <cit.> and and then later advanced by Dolbeault, Mouhot and Schmeiser in <cit.> and <cit.>. However, those now well-known results are only restricted to the one particle dynamics without any interactions. 
For the N particle system given by the Liouville equation (<ref>), the natural stationary measure is simply the Gibbs measure given by the following form f_N,∞ = 1/Z_N,β e^-β H(z_1,...,z_N). A natural problem is that whether or not the convergence rate from f_N toward f_N,∞ depends on the number of particles N. Many researchers have contributed to this problem. Guillin, etc study the uniform in N functional inequalities in <cit.>. Guillin and Monmarché show uniform-in-N exponential decay rate in <cit.> and <cit.> by “Generalized Γ calculus" developed in <cit.> and uniform log-Sobolev inequality in <cit.>. Guillin, etc also use H^1 type norm to show the uniform-in-N exponential decay rate by hypocoercivity and uniform Poincaré inequality in <cit.>. These result are all restricted to potentials with smallness of ∇^2 U _L^∞. There are also some results that treat systems with singular potentials. Baudoin, Gordina and Herzog showed convergence to equilibrium by Gamma calculus in <cit.> with singular potentials. Lu and Mattingly constructed new Lyapunov function to show egodicity for systems (<ref>) with Coulomb potential in the sense of weighted total variation distance in <cit.>. However, the convergence rates, if they provides one, all depend on N. Propagation of chaos for kinetic Vlasov equation. The main result presented in this article is a further development of the relative entropy method introduced in <cit.>, where Jabin and and the 2nd author proved a quantitative propagation of chaos for Newton's system with bounded interaction kernel in terms of relative entropy. Lacker <cit.> then developed an approach based on the BBGKY hierarchy and the entropy dissipation to optimize the local convergence rate of k-marginals towards the limiting law. Bresch, Jabin and Soler <cit.> exploited the BBGKY hierarchy approach to firstly include the 2d Vlasov-Possion-Fokker-Planck case. More recently, Bresch, Jabin and Duerinckx <cit.> introduced a duality approach to cover the arbitrary square-integrable interaction forces at possibly vanishing temperature. Up to now, the mean field limit or the propagation of chaos results are still very limited for second order particle system with singular interaction forces. See also for the results in <cit.> and the review <cit.> for more detailed discussions. For long time propagation of chaos, Monmarché showed uniform-in-time propagation of chaos in the sense of Wasserstein distance of one marginal for systems with convex potentials, i.e. 𝒲_2(f_N,1,f) ≤C/N^α, for some contant α > 0, C > 0 independent on N and t. The sharp rate with α = 1/2 for the case W(x) = c|x|^2 has also been established there. Guillin and Monmarché <cit.> later improved the convergence result to all marginals but without optimality in terms of of α, i.e. 𝒲_2(f_N,k,f^⊗ k) ≤C √(k)/N^α, for some contant α > 0, C > 0 independent on N and t. Thanks to the reflection coupling method, Guillin, Bris and Monmarché <cit.> proved the optimal convergence rate of N for all marginals with convex or non-convex interaction potentials, i.e. 𝒲_2(f_N,k,f^⊗ k) ≤C √(k)/√(N), for some constant C > 0 independent on N and t, with the smallness assumption of the Lipschitz constant of interaction force K. Recently, Chen, Lin, Ren and Wang <cit.> showed uniform-in-time propagation of chaos with functional convexity condition. Even though they do not need smallness of ∇ K_L^∞ , they require some uniform-in-time Poincaré inequality for the solution of the limiting PDE (<ref>) . 
To the best of our knowledge, there is no result of uniform-in-time propagation of chaos for second order systems with singular interaction forces yet. We leave this topic for our further study. Equilibrium of VlasovFokker Planck equation. Uniform-in-time propagation of chaos cannot hold in general. One critical obstacle is that the non-linear Vlasov-Fokker-Planck equation (<ref>) may have multiple equilibria and hence exhibit phrase transition. The convergence from f_t towards f_∞ prevents the phrase transition or the presence of multiple equilibria of the limiting system. There are some results about this kind of convergence but with very limited conditions about potentials. See for instance <cit.> and the reference therein. Villani <cit.> proved that f_t converge to the Maxwellian 1/(2π)^d e^-βv^2/2 on 𝕋^d with any polynomial order in the sense of L^1 norm, which requires that W ∈ C^∞ and W _L^∞ is small enough. Guillin and Monmarché showed that f converges to f_∞ in the sense of “mean-field entropy" in <cit.>, which defines as H_W(ν) = E(ν) - inf_μ∈𝒫(Ω×^d) E(μ) for probability measure ν, where E(ν) = H(ν|α) + 1/2 F(ν) and α∝ e^-1/2v^2 -V(x). Baudoin, Feng and Li <cit.> established that f converges to f_∞ with exponential decay rate in the sense of free energy combining with “relative Fisher Information" (by our notation) ℰ^M(f_t|f̂_t) = ℱ(f_t) - ℱ(f_∞) + I^M(f_t|f̂_t), where M is a constant matrix and f̂∝ e^-v^2/2 - V(x) - W ∗ρ_t. The free energy they used is defined as ℱ(f) = 1/2∫_Ω×^d v^2 f x v + ∫_Ω×^d f log f x v + ∫_Ω×^d Vf x v + 1/2 F(f). They used Γ-calculus to overcome the dissipation degeneracy in x direction with convexity and smallness of ∇^2 V and ∇^2 W. Chen, Lin, Ren and Wang also exploited the quantity (<ref>) to prove ℰ^M(f_t|f̂_t) exponentially converges to 0 in <cit.> under conditions ∇^2 (W ∗ρ_t + V)_L^∞ < ∞ and the functional convexity of F (Assumption <ref>). These two groups both used the so called free energy to quantify the convergence from f_t to f_∞, i.e. H_W(f_t) = ℱ(f_t) - ℱ(f_∞). Finally, let us recall the convergence result in <cit.> and extend Theorem 2.1 in <cit.> to more general confining potentials. By <cit.>, we have Suppose that V satisfies Assumption <ref> and <ref>, W satisfies Assumption <ref>. Suppose f_N,∞ satisfies uniform log Sobolev inequality with constant ρ_LS. Then for the solution f_t of Eq.(<ref>) with initial data f_0 such that H_W(f_0) < ∞ and ∫_Ω×^d z^2 f_0(z) z < ∞, we have H_W(f_t) ≤ e^-ctH_W(f_0), and 𝒲_2^2(f_t, f_∞) ≤ρ_LS H_W(f_t) ≤ρ_LSℰ^M(f_0|f̂_0) e^-ct. where c>0 is the same as Theorem <ref>. We extend Theorem 2.1 in <cit.> to more general confining potentials, Suppose that V satisfies Assumption <ref> and <ref>, W satisfies Assumption <ref> and Assumption <ref>. Then for solution f_t of Eq.(<ref>) with initial data f_0 such that ℰ^M_2'(f_0|f̂_0) < ∞, we have 𝒲_2^2(f_t, f_∞) ≤ℰ^M(f_t|f̂_t) ≤ e^-c”_2tℰ^M(f_0|f̂_0), where c”_2 = δ a^2/16 + 16ρ_wls with some choice of M_2', δ and a (detailed in the Appendix). We will give the sketch of proofs of these two theorems in the Appendix. §.§ Outline of the article The paper is then organized as follow: In Section <ref>, we develop the basic tools we will use throughout this article. In Section 2.1, we introduce the normalized relative Fisher information and compute its time evolution under the kinetic dynamics (<ref>) and (<ref>). 
In Section 2.2, we select the nontrivial matrix M for relative Fisher information to deal with the confining potentials greater than a quadratic function at infinity, where the crucial idea “entropy multipliers" is inspired by the one particle case as in <cit.>. In Section 2.3, we introduce the weighted log-Sobolev inequality, which is essentially obtained with the entropy multiplier method. In Section 2.4, we prove a new Law of Large Number estimates for systems with Lipschitz interaction force K. In Section 3, we give the complete proof of our main results, Theorem <ref>, <ref> and <ref>. In the Appendix, we prove the convergence from f to f_∞ under some conditions on V and W. § PRELIMINARY Let us define some linear operators in (Ω×^d)^N we will use in this section. We denote A = (0, ∇_V) on (Ω×^d)^N and A_i = (0, ∇_v_i) on Ω×^d. The operator B collects all of one order part of the Liouville operator L_N in (<ref>), i.e. B = ∑_i=1^N B_i, B_i = v_i ·∇_x_i - ∇_x_iU ·∇_v_i - γ v_i ·∇_v_i, where U is defined in (<ref>). We write the infinitesimal generator of N-particle system (<ref>) as L^∗_N = B + σΔ_V. We recall the time evolution of the relative entropy as in <cit.>. Assume that f_N is a solution of Eq.(<ref>). Assume further that f(t,z) ∈ W^1, ∞(×Ω×^d) solves Eq.(<ref>) with f(t,z) > 0 and ∫_Ω×^d f(t,z)dz = 1. Then d/dt H_N(f_N|f̅_̅N̅) = - σ/N∫_(Ω×^d)^N f_N | ∇_V logf_N/f̅_̅N̅|^2 Z - 1/N∫_(Ω×^d)^N∇_V logf_N/f̅_̅N̅· R^0_N Z, where R^0_N is a dN-dimensional vector defined as R^0_N = {1/N∑_j=1,j≠ i^N K(x_i - x_j) - K ∗ρ_t(x_i) }_i=1^N if we take f̅_̅N̅ = f^⊗ N(t,Z), and the second line of (<ref>) vanishes if we take f̅_̅N̅ = f_N,∞. The computation is standard. We recommend <cit.> and <cit.> for the detailed computation when f_N = f_N,∞ and f^⊗ N. Using Young inequality when f_N = f^⊗ N, we also have d/dt H_N(f_N|f̅_̅N̅) ≤ - 3 σ/41/N∫_(Ω×^d)^N f_N | ∇_V logf_N/f̅_̅N̅|^2 Z + 4/σ1/N∫_(Ω×^d)^N |R^0_N|^2 Z. In the next, we turn to the argument about Fisher information. §.§ Hypocoercivity in entropy sense In this subsection, we extend hypocoercivity in entropy sense in <cit.> to N particle system with nontrivial interaction force. We also use more general reference measure in (Ω×^d)^N —– invariant measure f_N,∞ or N times tensor product of limiting measure f^⊗ N, corresponding to uniform egodicity problem and uniform-in-time propagation of chaos problem. In the following, we use notation h = f_N/f̅_̅N̅ and u = log h for convenience, f̅_̅N̅ may take f_N,∞ or f^⊗ N. Before tedious manipulations, we firstly derive the equation of u. Assume that f_N is a solution of Eq.(<ref>), and assume that f(t,z) ∈ W^2, ∞(×Ω×^d) solves Eq.(<ref>), then ∂_t u = -Bu - σ A^∗Au + σ∇_V log f_N · A u - R_N, where R_N takes R_N = ∑_i=1^N∇_v_ilog f(x_i,v_i) ·{1/N∑_j=1, j≠ i^N K(x_i-x_j) - K⋆ρ(x_i) }, if f̅_̅N̅ = f^⊗ N and R_N = 0 if f̅_̅N̅ = f_N,∞. The proof is direct computation. In terms of Eq.(<ref>), we have ∂_t log f_N = -B log f_N + σΔ_V f_N/f_N + γ Nd, and for Eq.(<ref>), we have ∂_t logf̅_̅N̅ = -B logf̅_̅N̅ + σΔ_V f̅_̅N̅/f̅_̅N̅ + γ Nd + R_N, we could understand R_N as the difference of drift part between particle system (<ref>) and McKean-Vlasov system (<ref>). 
Combine Eq.(<ref>) and Eq.(<ref>), we have ∂_t u = -Bu + σ{Δ_V f_N/f_N - Δ_V f̅_̅N̅/f̅_̅N̅} - R_N, using identity Δ_V F/F = Δ_V log F + |∇_V log F|^2, therefore, u satisfies the equation ∂_t u = -Bu + σΔ_V u + σ∇_V log(f_N f̅_̅N̅) ·∇_V u - R_N, recall A = (0,∇_V), now we regard f̅_̅N̅ as reference measure and use Proposition 3 of <cit.>, for vector function g: (Ω×^d)^N →^Nd, we have A^∗g = - A · g - ∇_V logf̅_̅N̅, g , now we rewrite Eq.(<ref>) as following ∂_t u = -Bu - σ A^∗Au + σ∇_V log f_N · A u - R_N, we complete the proof. For f̅_̅N̅ = f^⊗ N, the conjugation relationship (<ref>) is in a series of Hilbert space L^2(f̅_̅N̅) depends on time t whose associated measure satisfy Eq.(<ref>). If we take reference measure as equilibrium of Eq.(<ref>), then the equation about u is the same as (<ref>) but R_N = 0: ∂_t u = -Bu - σ A^∗Au + σ∇_V log f_N · A u, this is why we start our computation from Eq.(<ref>). Now let us compute the time derivation of relative Fisher Information. We omit the integration domain (Ω×^d)^N for convenience. Assume that f_N is a solution of Eq.(<ref>). Assume that f(t,z) ∈ W^2, ∞(×Ω×^d) solves Eq.(<ref>) with f(t,z) > 0 and ∫_Ω×^d f(t,z)dz = 1. Let B, C, C' be linear differential operators on (Ω×^d)^N, where B = ∑_i=1^N B_i, B_i = v_i ·∇_x_i - ∇_x_iU ·∇_v_i - γ v_i ·∇_v_i, and C, C' are to be confirmed, then d/dt{∫ f_N ⟨ Cu, C'u⟩} = ∫ f_N ⟨ [B, C]u, C'u ⟩ + ∫ f_N ⟨ Cu, [B, C']u ⟩ - ∫ f_N ⟨ C R_N, C'u ⟩ - ∫ f_N ⟨ Cu, C'R_N ⟩ - 2 σ∫ f_N CAu, C'Au + σ Q_C,A + σ Q_C',A, where Q_C,A = -∫ f_N [C,A^∗]Au, C'u - ∫ f_N [∇_V logf̅_̅N̅· A, C]u, C'u -∫ f_N [C,A]u, Au ⊗ C'u - ∫ f_N [A,C]u, C'Au , Q_C',A = -∫ f_N Cu, [C',A^∗]Au - ∫ f_N Cu, [∇_V logf̅_̅N̅· A, C']u - ∫ f_N Au ⊗ Cu, [C',A]u - ∫ f_N CAu, [A, C']u . B is a d-tuple differential operator, but C, C' are 2Nd-tuple differential operators. We denote C_i and C_i' as (C_i_j)_1 ≤ i ≤ N, 1 ≤ j ≤ d and (C'_i_j)_1 ≤ i ≤ N, 1 ≤ j ≤ d in the sense of coordinate z_j = (x_j,v_j). We omit the index j for convenience in the following, i.e. C = (C_i)_1 ≤ i ≤ N and C' = (C'_i)_1 ≤ i ≤ N. Each of them can be identified with a vector field D_i, in such a way that C_i f = D_i ·∇ f, so D can be seen as a map valued in (2Nd × 2Nd) matrix. The inner products above should be understood as Cu, C'u = ∑_i=1^N C_i u, C_i'u _^2d, CAu, C'Au = ∑_i,j=1^N C_i A_ju, C_i'A_ju _^2d × 2d. Let us explain the commutators we use. [B,C] is a 2Nd-tuple operator, understood as [B,C]_i_j = [B, C_i_j]. But [C,A] is a operator with 2Nd × 2Nd components, understood as [C,A]_ij = [C_i,A_j], and others follow. If C, C' are commutative with A, the only nontrivial operators are [C,A^∗] and [∇_V logf̅_̅N̅· A, C], we will compute them later. Step1. In this step, we claim d/dt∫ f_N Cu, C'u = ∫ f_N ⟨ [B, C]u, C'u ⟩ + ∫ f_N ⟨ Cu, [B, C']u ⟩ - ∫ f_N ⟨ C R_N, C'u ⟩ - ∫ f_N ⟨ Cu, C'R_N ⟩ + σ Q_σ, here Q_σ collects all terms with coefficient σ and reads as Q_σ = ∫ f_N Δ_V Cu, C'u - ∫ f_N CA^∗A u, C'u + ∫ f_N C(∇_V log f_N · Au), C'u - ∫ f_N Cu, C'A^∗A u + ∫ f_N Cu, C'(∇_V log f_N · Au). There terms comes from diffusion part of Eq.(<ref>) and Eq.(<ref>), we will deal with them in next step. We directly take derivative and split into three terms: d/dt∫ f_N ⟨ Cu, C'u⟩ = ∫∂_t f_N ⟨ Cu, C'u⟩ + ∫ f_N ⟨ C(∂_t u), C'u ⟩ + ∫ f_N ⟨ Cu, C'(∂_t u)⟩. For the first term, we use Eq.(<ref>) and integral by parts, ∫∂_t f_N ⟨ Cu, C'u ⟩ = ∫ -L_N f_N ⟨ Cu, C'u⟩ = ∫ f_N (B ⟨ Cu, C'u⟩) + σ∫ f_N (Δ_V ⟨ Cu, C'u⟩) = ∫ f_N BCu, C'u + ∫ f_N Cu, BC'u + σ∫ f_N (Δ_V ⟨ Cu, C'u⟩). 
For second term, ∫ f_N ⟨ C(∂_t u), C'u ⟩ = - ∫ f_N ⟨ C(B u), C'u ⟩ - ∫ f_N CR_N, C'u - σ∫ f_N CA^∗A u, C'u + σ∫ f_N C(∇_V log f_N · Au), C'u. Similarly, for third term, ∫ f_N ⟨ Cu, C'(∂_t u) ⟩ = - ∫ f_N ⟨ Cu, C'(Bu) ⟩ - ∫ f_N Cu, C'R_N - σ∫ f_N Cu, C'A^∗A u + σ∫ f_N Cu, C'(∇_V log f_N · Au). Combine (<ref>), (<ref>), (<ref>), we complete the claim in this step. Step2. In this step, we deal with the diffusion part, we claim Q_σ = - 2∫ f_N CAu, C'Au + Q_C,A + Q_C',A, where Q_C,A = -∫ f_N [C,A]u, Au ⊗ C'u - ∫ f_N [A,C]u, C'Au -∫ f_N [C,A^∗]Au, C'u - ∫ f_N [∇_V logf̅_̅N̅· A, C]u, C'u , Q_C',A = - ∫ f_N Au ⊗ Cu, [C',A]u - ∫ f_N CAu, [A, C']u -∫ f_N Cu, [C',A^∗]Au - ∫ f_N Cu, [∇_V logf̅_̅N̅· A, C']u . The terms Q_C,A and Q_C',A collect all terms including commutators, we will take suitable operators C and C' to simplify these commutators. For the first term of (<ref>), we use integral by parts with respect to Lebesgue measure, ∫Δ_V f_N Cu, C'u = - ∫ f_N A u · A Cu, C'u - ∫ f_N ∇_V logf̅_̅N̅· A Cu, C'u. Recall we denote A = (0,∇_V), we continue the first term of (<ref>) - ∫ f_N A u · A Cu, C'u = -∫ f_N ACu, Au ⊗ C'u - ∫ f_N Au ⊗ Cu, AC'u = -∫ f_N CAu, Au ⊗ C'u - ∫ f_N Au ⊗ Cu, C'Au -∫ f_N [C,A]u, Au ⊗ C'u - ∫ f_N Au ⊗ Cu, [C',A]u. Let us explain the notation we use: ACu, Au ⊗ C'u should be understood as ∑_i,j=1^N C_iA_ju, A_ju, C'_i u, and [C,A]u, Au ⊗ C'u = ∑_i,j=1^N [C_i,A_j]u, A_ju, C'_iu . Moreover, we take the conjugate operator of [C,A] w.r.t measure f̅_̅N̅, we have -∫ f_N [C,A]u, Au ⊗ C'u = - ∫ f_N [C,A]^∗ Au ⊗ C'u = - ∑_i,j=1^N ∫ f_N [C_i,A_j]^∗ A_ju, C'_iu . For the second term of (<ref>), we rewrite it as - ∫ f_N CA^∗A u, C'u = - ∫ f_N A^∗CAu, C'u - ∫ f_N [C,A^∗]Au, C'u, and - ∫ f_N A^∗CAu, C'u = - ∫f̅_̅N̅ A^∗CAu, C'h = - ∫f̅_̅N̅ CAu, AC'h = - ∫f̅_̅N̅ CAu, AhC'u = - ∫ f_N CAu, AC'u - ∫ f_N CAu, Au ⊗ C'u, then we have - ∫ f_N CA^∗A u, C'u = - ∫ f_N CAu, C'Au - ∫ f_N CAu, [A, C']u -∫ f_N [C,A^∗]Au, C'u - ∫ f_N CAu, Au ⊗ C'u. Similarly, for the fourth term of (<ref>), up to the exchange of C and C', we have - ∫ f_N Cu, C'A^∗Au = - ∫ f_N CAu, C'Au - ∫ f_N [A,C]u, C'Au -∫ f_N Cu, [C',A^∗]Au - ∫ f_N Au ⊗ Cu, C'Au. Finally, all of left terms are third and fifth term in (<ref>) and underlined term in (<ref>), we collect them as below, - ∫ f_N ∇_V logf̅_̅N̅· A Cu, C'u + ∫ f_N C(∇_V log f_N · Au), C'u + ∫ f_N Cu, C'(∇_V log f_N · Au). We deal with the first term of (<ref>) as following, - ∫ f_N ∇_V logf̅_̅N̅· A Cu, C'u = - ∫ f_N ∇_V logf̅_̅N̅· A (Cu), C'u - ∫ f_N Cu, ∇_V logf̅_̅N̅· A (C'u) = - ∫ f_N C(∇_V logf̅_̅N̅· Au), C'u - ∫ f_N [∇_V logf̅_̅N̅· A, C]u, C'u - ∫ f_N Cu, C'(∇_V logf̅_̅N̅· A u) - ∫ f_N Cu, [∇_V logf̅_̅N̅· A, C']u , combine with the last two term of (<ref>), we have - ∫ f_N ∇_V logf̅_̅N̅· A Cu, C'u + ∫ f_N C(∇_V log f_N · Au), C'u + ∫ f_N Cu, C'( ∇_V log f_N · Au) = ∫ f_N C|Au|^2, C'u + ∫ f_N Cu, C'|Au|^2 - ∫ f_N [∇_V logf̅_̅N̅· A, C]u, C'u - ∫ f_N Cu, [∇_V logf̅_̅N̅· A, C']u = 2∫ f_N CAu, Au ⊗ C'u + 2∫ f_N Au ⊗ Cu, C'Au - ∫ f_N [∇_V logf̅_̅N̅· A, C]u, C'u - ∫ f_N Cu, [∇_V logf̅_̅N̅· A, C']u . After gathering (<ref>), (<ref>), (<ref>) and (<ref>), we find - 2∫ f_N CAu, C'Au -∫ f_N [C,A]u, Au ⊗ C'u - ∫ f_N Au ⊗ Cu, [C',A]u - ∫ f_N [A,C]u, C'Au - ∫ f_N CAu, [A, C']u -∫ f_N [C,A^∗]Au, C'u -∫ f_N Cu, [C',A^∗]Au - ∫ f_N [∇_V logf̅_̅N̅· A, C]u, C'u - ∫ f_N Cu, [∇_V logf̅_̅N̅· A, C']u . Based on the conjugate rule (<ref>) for all similar terms, we complete the proof. 
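As an aside, the elementary identity Δ_V F/F = Δ_V log F + |∇_V log F|^2, used above when deriving the equation satisfied by u, can be sanity-checked symbolically; the following one-dimensional sketch (with our own variable names) verifies it with sympy.

# Symbolic check in one variable of Lap(F)/F = Lap(log F) + |grad log F|^2.
import sympy as sp

v = sp.symbols('v', real=True)
F = sp.Function('F')(v)

lhs = sp.diff(F, v, 2) / F
rhs = sp.diff(sp.log(F), v, 2) + sp.diff(sp.log(F), v) ** 2
print(sp.simplify(lhs - rhs))   # prints 0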
Let f̅_N = f^⊗ N, if A is commutative with C and C', we have Q_C,A = 2 ∑_i=1^N ∫ f_N (C_i ∇_v_ilog f) A_iu, C_iu_^2d, and Q_C',A = 2 ∑_i=1^N ∫ f_N (C'_i ∇_v_ilog f) A_iu, C'_iu_^2d. We recall that Q_C,A = -∫ f_N [C,A^∗]Au, C'u - ∫ f_N [∇_V logf̅_̅N̅· A, C]u, C'u . For the second term of (<ref>), we direct compute the commutator [∇_V logf̅_̅N̅· A, C], i.e. - ∫ f_N [∇_V logf̅_̅N̅· A, C]u, C'u = - ∑_i,j=1^N ∫ f_N [∇_v_jlogf̅_̅N̅· A_j, C_i]u, C'_i u = ∑_i,j=1^N ∫ f_N C_i ∇_v_jlogf̅_̅N̅, A_j u ⊗ C_i'u = ∑_i=1^N ∫ f_N (C_i ∇_v_ilog f) A_iu, C_iu_^2d, by C_i ∇_v_jlogf̅_̅N̅ = 0 if i ≠ j. For the first term of (<ref>), we understand the commutator [C,A^∗] as the row of operators [C,A^∗] = ([C_1,A^∗],...,[C_N,A^∗] ), then we have -∫ f_N [C,A^∗]Au, C'u = - ∑_i=1^N ∫ f_N [C_i,A^∗]Au, C_i'u = - ∑_i=1^N ∫ f_N C_i(A^∗Au)- A^∗(C_iAu), C'_iu , here we understand C_iAu as a N-tuple vector reads as (C_iA_1u, ... ,C_iA_Nu), which can be operated by A^∗. Recall A^∗g = - A · g - ∇_V logf̅_̅N̅, g , then we have -∫ f_N [C,A^∗]Au, C'u = ∑_i,j=1^N ∑_k,l=1^d ∫ f_N { C_i_lA_j_kA_j_ku + C_i_l{(∇_v_j_klogf̅_̅N̅) (A_j_ku)}, C'_i_lu} - ∑_i,j=1^N ∑_k,l=1^d ∫ f_N { A_j_kC_i_lA_j_ku + {(∇_v_j_klogf̅_̅N̅) (C_i_lA_j_ku)}, C'_i_lu } = ∑_i=1^N ∫ f_N (C_i ∇_v_ilog f) A_iu, C'_iu_^2d. Up to change the position of C and C', we complete the proof. §.§ Entropy multipliers In order to deal with more general potentials V and W, we develop the method of entropy multipliers for N-particle relative Fisher information. We recommend Part 1, section 8 in <cit.> and <cit.> for “one particle version” without interaction potentials, and they only consider the invariant measure of single particle Fokker Planck equation as reference measure. Now let us consider a 2Nd × 2Nd weight matrix M(t,Z) to distort the relative Fisher Information, i.e. 1/N∫ f_N Cu, M_t C'u dZ, we evolve this quantity along time t in the following lemma, many ideas of manipulation are similar with Lemma <ref>. Assume that f_N is a solution of Eq.(<ref>). Assume that f(t,z) ∈ W^1, ∞(×Ω×^d) solves Eq.(<ref>) with f(t,z) > 0 and ∫_Ω×^d f(t,z)dz = 1. Let B, C, C' be differential operators on (Ω×^d)^N, where B = ∑_i=1^N B_i, B_i = v_i ·∇_x_i - ∇_x_iU ·∇_v_i - γ v_i ·∇_v_i, and C, C' are to be confirmed. Let M_t(Z): × (Ω×^d)^N →^2Nd × 2Nd be a matrix valued function smooth enough for all variables, then d/dt{∫ f_N ⟨ Cu, M_tC'u⟩} = ∫ f_N ⟨ M_t[B, C]u, C'u ⟩ + ∫ f_N ⟨ Cu, M_t[B, C']u ⟩ - ∫ f_N ⟨ CR_N, M_tC'u ⟩ - ∫ f_N ⟨ M_tCu, C'R_N ⟩ - 2 σ∫ f_N CAu, M_tC'Au + σ Q_C,A + σ Q_C',A + ∫ f_N Cu, (∂_t M_t + BM_t + σΔ_V M_t)C'u , where Q_C,A = -∫ f_N [C,A^∗]Au, M_tC'u - ∫ f_N [∇_V logf̅_̅N̅· A, C]u, M_tC'u -∫ f_N [C,A]u, Au ⊗ M_tC'u - ∫ f_N M_t[A,C]u, C'Au + ∫ f_N [A,C]u, (AM_t)C'u , Q_C',A = -∫ f_N M_tCu, [C',A^∗]Au - ∫ f_N M_tCu, [∇_V logf̅_̅N̅· A, C']u - ∫ f_N Au ⊗ M_tCu, [C',A]u - ∫ f_N CAu, M_t[A, C']u + ∫ f_N (AM_t)Cu, [A,C']u . The notations of inner product and commutator operators appeared above are the same as in Remark <ref> and <ref>. Let us explain the notations associated with M_t we used above. We denote Cu,[B,C]u, CR_N are Nd-tuple vectors, then it is reasonable to multiply M_t with them. CAu, M_tC'Au should be understood as ∑_i,j,l=1^N C_i A_l u, M_i jC'_j A_lu_^2d in the sense of coordinate z_j ∈Ω×^d. The derivative of M_t (i.e. ∂_t M_t, AM_t, BM_t, Δ_V M_t) is taken componentwise. We use the similar argument with Lemma <ref>. 
We directly take derivative and obtain, d/dt∫ f_N ⟨ Cu, M_tC'u⟩ = ∫∂_t f_N ⟨ Cu, M_tC'u⟩ + ∫ f_N ⟨ C(∂_t u), M_tC'u ⟩ + ∫ f_N ⟨ Cu, M_tC'(∂_t u)⟩ + ∫ f_N Cu, (∂_t M_t)C'u. Observe that the last term ∫ f_N Cu, (∂_t M_t)C'u is new. Next we use the similar step in Lemma <ref> to go on. Step1. In this step, we claim the following d/dt∫ f_N Cu, M_tC'u = ∫ f_N ⟨ [B, C]u, M_tC'u ⟩ + ∫ f_N ⟨ Cu, M_t[B, C']u ⟩ - ∫ f_N ⟨ CR_N, M_tC'u ⟩ - ∫ f_N ⟨ Cu, M_tC'R_N ⟩ + σ Q_σ + ∫ f_N Cu, (∂_t M_t + BM_t)C'u, where Q_σ collects all terms with coefficient σ, which reads as Q_σ = ∫ f_N Δ_V Cu, M_tC'u - ∫ f_N CA^∗A u, M_tC'u + ∫ f_N C(∇_V log f_N · Au), M_tC'u - ∫ f_N Cu, M_tC'A^∗A u + ∫ f_N Cu, M_tC'(∇_V log f_N · Au). Let us explain some notations we use. Since [B,C']u, C'(A^∗Au) and C'(∇_V log f_N · Au) are all 2Nd-tuple vectors, it is reasonable to multiple them with M_t. After that, M_t[B,C']u, M_tC'(A^∗Au) and M_t C'(∇_V log f_N · Au) are 2Nd-tuple vectors. For the first term of (<ref>), recall Liouville operator (<ref>) we have ∫∂_t f_N ⟨ Cu, M_tC'u ⟩ = ∫ -L_N f_N ⟨ Cu, M_tC'u⟩ = ∫ f_N (B ⟨ Cu, M_tC'u⟩) + σ∫ f_N (Δ_V ⟨ Cu, M_tC'u⟩) = ∫ f_N BCu, M_tC'u + ∫ f_N Cu, M_tBC'u + σ∫ f_N (Δ_V ⟨ Cu, M_tC'u⟩) + ∫ f_N Cu,(BM_t)C'u. For the second term and third term of (<ref>), using Eq.(<ref>), we have ∫ f_N ⟨ C(∂_t u), M_tC'u ⟩ = - ∫ f_N ⟨ C(B u), M_tC'u ⟩ - ∫ f_N CR_N, M_tC'u - σ∫ f_N CA^∗A u, M_tC'u + σ∫ f_N C(∇_V log f_N · Au), M_tC'u. ∫ f_N ⟨ Cu, M_tC'(∂_t u) ⟩ = - ∫ f_N ⟨ Cu, M_tC'(Bu) ⟩ - ∫ f_N Cu, M_tC'R_N - σ∫ f_N Cu, M_tC'A^∗A u + σ∫ f_N Cu, M_tC'(∇_V log f_N · Au), Combine terms (<ref>),(<ref>) and (<ref>), we obtain the (<ref>). Step2. In this step, we focus on the term Q_σ as before, whose terms all come from diffusion. Now we claim, Q_σ = - 2∫ f_N CAu, M_tC'Au + Q_C,A + Q_C',A + ∫ f_N Cu, (Δ_V M_t)C'u , where Q_C,A = -∫ f_N [C,A^∗]Au, M_tC'u - ∫ f_N [∇_V logf̅_̅N̅· A, C]u, M_tC'u -∫ f_N [C,A]u, Au ⊗ M_tC'u - ∫ f_N M_t[A,C]u, C'Au + ∫ f_N [A,C]u, (AM_t)C'u , Q_C',A = -∫ f_N M_tCu, [C',A^∗]Au - ∫ f_N M_tCu, [∇_V logf̅_̅N̅· A, C']u - ∫ f_N Au ⊗ M_tCu, [C',A]u - ∫ f_N CAu, M_t[A, C']u + ∫ f_N (AM_t)Cu, [A,C']u , The terms Q_C,A and Q_C',A collect all terms including commutators, we will take suitable operators C and C' to simplify these commutators. For the first term of (<ref>), using integral by parts with respect to Lebesgue measure, ∫Δ_V f_N Cu, M_tC'u = - ∫ f_N A u · A Cu, M_tC'u - ∫ f_N ∇_V logf̅_̅N̅· A Cu, M_tC'u. Recall we denote A = (0, ∇_V), we continue the first term of (<ref>) - ∫ f_N A u · A Cu, M_tC'u = -∫ f_N ACu, Au ⊗ M_tC'u - ∫ f_N Au ⊗ M_tCu, AC'u - ∫ f_N Au · Cu, (AM_t)C'u = -∫ f_N CAu, Au ⊗ M_tC'u - ∫ f_N Au ⊗ M_tCu, C'Au + ∫ f_N [C,A]u, Au ⊗ M_tC'u + ∫ f_N Au ⊗ M_tCu, [C',A]u - ∫ f_N Au · Cu, (AM_t)C'u. For the second terms of (<ref>), we rewrite it as - ∫ f_N CA^∗A u, M_tC'u = - ∫ f_N A^∗CAu,M_tC'u - ∫ f_N [C,A^∗]Au, M_tC'u, and we continue the first term of (<ref>), - ∫ f_N A^∗CAu, M_tC'u = - ∫f̅_̅N̅ A^∗CAu, M_tC'h = - ∫f̅_̅N̅ CAu, M_t(AC'h) - ∫f̅_̅N̅ CAu, (AM_t)C'h = - ∫ f_N CAu, M_t(AC'u) - ∫f̅_̅N̅ CAu, Ah ⊗ M_tC'u - ∫f̅_̅N̅ CAu, (AM_t)C'h = - ∫ f_N CAu, M_t(AC'u) - ∫ f_N CAu, Au ⊗ M_tC'u - ∫ f_N CAu, (AM_t)C'u , now we have - ∫ f_N CA^∗A u, M_tC'u = - ∫ f_N CAu, M_t(C'Au) - ∫ f_N CAu, M_t[A, C']u -∫ f_N [C,A^∗]Au, M_tC'u - ∫ f_N CAu, Au ⊗ M_tC'u - ∫ f_N CAu, (AM_t)C'u . 
Similarly, for the fourth term of (<ref>), since M_t is symmetric, using its conjugation in ^dN, - ∫ f_N Cu, M_tC'A^∗Au = - ∫ f_N M_tCu, C'A^∗Au, up to the exchange of C and C', we have - ∫ f_N Cu, M_tC'A^∗Au = - ∫ f_N M_t(CAu), C'Au - ∫ f_N M_t[A,C]u, C'Au -∫ f_N M_tCu, [C',A^∗]Au - ∫ f_N Au ⊗ M_tCu, C'Au - ∫ f_N (AM_t)Cu, C'Au . Finally, all of left terms are third and fifth term in (<ref>), and underlined term in (<ref>), we collect them as below, - ∫ f_N ∇_V logf̅_̅N̅· A Cu, M_tC'u + ∫ f_N C(∇_V log f_N · Au), M_tC'u + ∫ f_N M_tCu, C'(∇_V log f_N · Au). We deal with the first term of (<ref>) as following, - ∫ f_N ∇_V logf̅_̅N̅· A Cu, M_tC'u = - ∫ f_N ∇_V logf̅_̅N̅· A (Cu), M_tC'u - ∫ f_N M_tCu, ∇_V logf̅_̅N̅· A (C'u) - ∫ f_N Cu, (∇_V log f_N · AM_t)C'u = - ∫ f_N C(∇_V logf̅_̅N̅· Au), M_tC'u - ∫ f_N [∇_V logf̅_̅N̅· A, C]u, M_tC'u - ∫ f_N M_tCu, C'(∇_V logf̅_̅N̅· A u) - ∫ f_N M_tCu, [∇_V logf̅_̅N̅· A, C']u - ∫ f_N Cu, (∇_V log f_N · AM_t)C'u , by similar argument with (<ref>) and (<ref>) in Lemma <ref>, we have 2∫ f_N CAu, Au ⊗ C'u + 2∫ f_N Au ⊗ Cu, C'Au - ∫ f_N [∇_V logf̅_̅N̅· A, C]u, C'u - ∫ f_N Cu, [∇_V logf̅_̅N̅· A, C']u - ∫ f_N Cu, (∇_V log f_N · AM_t)C'u . Until now, let us collect all terms don't appear in the Lemma <ref>: - ∫ f_N Au · Cu, (AM_t)C'u - ∫ f_N CAu, (AM_t)C'u - ∫ f_N (AM_t)Cu, C'Au -∫ f_N Cu, (∇_V logf̅_̅N̅· A M_t + ∂_t M_t + BM_t)C'u. We compute the first term more precisely, -∫ f_N Au · Cu, (AM_t)C'u = - ∫f̅_̅N̅ Ah · Cu, (AM_t)C'u, using the conjugate relationship w.r.t. measure f̅_N, we have - ∫f̅_̅N̅ Ah · Cu, (AM_t)C'u = - ∫ f_N A^∗ Cu, (AM_t)C'u, recall A^∗ = - A · g - ∇_V logf̅_̅N̅, g, then we have - ∫ f_N A^∗ Cu, (AM_t)C'u = ∑_i=1^N ∫ f_N A_i · Cu, (A_iM_t)C'u + ∫ f_N ∇_V logf̅_̅N̅· Cu, (AM_t)C'u = ∑_i=1^N∫ f_N A_iCu, (A_iM_t)C'u + ∑_i=1^N ∫ f_N (A_iM_t)Cu, A_iC'u + ∫ f_N Cu, (Δ_V M) C'u + ∫ f_N Cu, (∇_V logf̅_̅N̅· A M_t)C'u = ∫ f_N ACu, (AM_t)C'u + ∫ f_N (AM_t)Cu, AC'u + ∫ f_N Cu, (Δ_V M) C'u + ∫ f_N Cu, (∇_V logf̅_̅N̅· A M_t)C'u. Combine with last three terms of (<ref>), all of remaining terms compared with (<ref>) in the proof of Lemma <ref> are ∫ f_N Cu, (Δ_V M_t)C'u + ∫ f_N [A,C]u, (AM_t)C'u + ∫ f_N (AM_t)Cu, [A,C']u . Together with (<ref>) in Step1, we finish the proof. If we take M as a block positive defined matrix, i.e. M = ( [ E F; F G ]), where E,F,G: × (Ω×^d)^N →^Nd × Nd are matrix-valued functions smooth enough, and take C = C' = (∇_X, ∇_V), then we have d/dt{∫ f_N Cu, M_t Cu} = - ∫ Cu, S_tCu - 2 ∫ f_N CR_N, M_t Cu - 2 σ∫ f_N CAu, M_tC'Au , where S_t = S(t,Z) is a matrix reads as S_t = ( [ 2F - ∂_t E - L^∗_NE D_1 - ∂_t F - L^∗_NF - 2E ∇^2 U + 2 γ F; 2G - ∂_t F - L^∗_NF D_2 - ∂_t G - L^∗_N G - 2F ∇^2 U + 2 γ G ]) and D_1 = -4 σ (F ∇_V ∇_V logf̅_̅N̅ + E ∇_X ∇_V logf̅_̅N̅), D_2 = -4 σ (F ∇_X ∇_V logf̅_̅N̅ + G ∇_V ∇_V logf̅_̅N̅). A easy computation shows that ∇^2 U = [ ∇^2 V(x_1) + 1/N∑_j ≠ 1^N ∇^2 W(x_1 - x_j) ⋯ -1/N∇^2 W(x_1 - x_N); ⋮ ⋱ ⋮; -1/N∇^2 W(x_N - x_1) ⋯ ∇^2 V(x_N) + 1/N∑_j ≠ N^N ∇^2 W(x_N - x_j) ], which is a Nd × Nd matrix. The main idea is to analyse every term in (<ref>). Let us compute what commutator [B,C] is firstly. For each component of operator C = (∇_X,∇_V), we have [∇_X, B]_i = ∑_k=1^N [∇_x_i, B_k] = - ∑_k=1^N [∇_x_i, ∇_x_k U ·∇_v_k] = - ∑_k=1^N ∇^2_x_i,x_k U ·∇_v_k, and [∇_V, B]_i = ∑_k=1^N [∇_v_i, B_k] = ∑_k=1^N [∇_v_i, v_k ·∇_x_k - γ v_k ·∇_v_k] = ∑_k=1^Nδ_ik (∇_x_k - γ∇_v_k) = ∇_x_i - γ∇_v_i. In other word, we have [B,C]^⊤ = - (-∇^2 U ·∇_V, ∇_X - γ∇_V)^⊤ = ( [ 0 ∇^2 U; -Id γ Id ]) ( [ ∇_X; ∇_V ]). 
Then we regard M_t[B, C]u, Cu + Cu, M_t[B, C]u as a quadratic form with matrix ( [ -2F E ∇^2 U - γ F - G; E ∇^2 U - γ F - G 2F ∇^2 U - 2 γ G ]). Since M_t is a positive defined matrix, - CAu, M_tCAu must be negative, so we just keep it. Moreover, C, C' are commutative with A, only nontrivial terms in (<ref>) and (<ref>) are first line two terms. We recall [C,A^∗] and [∇_V logf̅_̅N̅· A, C] in Corollary <ref>, and we regard Q_C,A + Q_C',A as a quadratic form with following matrix, ( [ E F; F G ]) ( [ 0 4 σ∇_X ∇_V logf̅_̅N̅; 0 4 σ∇_V ∇_V logf̅_̅N̅ ]) = ( [ 0 4 σ (E ∇_X ∇_V logf̅_̅N̅ + F ∇_V ∇_V logf̅_̅N̅); 0 4 σ (F ∇_X ∇_V logf̅_̅N̅ + G ∇_V ∇_V logf̅_̅N̅) ]). For the last line in (<ref>), we recall duality of Liouville operator (<ref>), then we have ∂_t M_t + BM_t + σ M_t = ( [ ∂_t E + L_N^∗E ∂_t F + L_N^∗F; ∂_t F + L_N^∗F ∂_t G + L_N^∗G ]). Combining (<ref>),(<ref>) and (<ref>), we finish the proof. §.§ Weighted log Sobolev inequality In this section, we establish the weighted Log-Sobolev inequality for nonlinear equilibrium f_∞ defined by Eq.(<ref>), then we extend to the weighted N-particle version by verifying tensorized invariance of weighted Log-Sobolev inequality. Before that, let us talk about the situation of first order particle system. Guillin et al. consider the uniform-in-time propagation of chaos with the following limiting equation on 𝕋^d in <cit.>, ∂_t ρ + div_x(ρ K ∗ρ ) = σΔ_x ρ, x ∈𝕋^d, where ρ(t, x): ×Ω→ is a probability density. An very useful observation in <cit.> is that, if there exists some constant λ > 1 such that 1/λ≤ρ_0 ≤λ, for initial data ρ_0 of Eq.(<ref>) on 𝕋^d, then they propagate this property to all time t uniformly, i.e. 1/λ≤ρ(t) ≤λ. After controlling the upper bound of ∇^n ρ_L^∞ by standard energy estimates, they obtain the upper bound of ∇logρ_L^∞ and ∇^2 logρ_L^∞, which is essential for the proof of Theorem 1 in <cit.>. Another important observation in <cit.> is that ρ_t(x) satisfies Log-Sobolev inequality uniformly in t as a result of perturbation of uniform distribution by (<ref>) (See Proposition 5.1.6, <cit.>). These two facts help them obtain uniform-in-time propagation of chaos even for Biot-Savart kernel. But in the case of Vlasov-Fokker-Planck equation (<ref>), the situation becomes totally different. The best we can expect to initial data of Eq.(<ref>) is f_0(x,t) ≥ Ce^-av^2/2, (x,v) ∈Ω×^d for some constant C > 0, a > 0. The lack of positive lower bound of (<ref>) makes the uniform-in-time upper bound of ∇^2 log f _L^∞ fail by the same strategy in <cit.>, that is the reason why we replace the reference measure f by f_∞. Inspired by the argument of Gibbs measure for one particle in <cit.>, dμ = 1/Ze^- β H(x,v) x v, (x,v) ∈Ω×^d, where H(x,v) = v^2/2 + V(x) is the Hamiltonian defined on Ω×^d and Z is partition function. We omit the temperature constant β in the following and recall some related results in <cit.>. μ satisfies the following weighted Log-Sobolev inequality in ^d ×^d if there exists some constant ρ_wls(μ) > 0 s.t. for all smooth enough g with ∫ g^2 dμ = 1: Ent_μ(g^2) ≤ρ_wls(μ) ∫ (H^-2η|∇_x g|^2 + |∇_v g|^2) μ. The weighted Log-Sobolev inequality (<ref>) associates with a new second order operator L_η on ^d ×^d, L_η := H^-2ηΔ_x + Δ_v - H^-2η( 2 η∇_x H/H + ∇_x H ) ·∇_x - ∇_v H ·∇_v, which is symmetric in L^2(μ) and satisfies ∫ f (L_ηg)dμ = - ∫ (H^-2η∇_x f ·∇_x g + ∇_v f ·∇_v g) μ. The following theorem tells us how to verifying the weighted Log-Sobolev inequality for suitable condition of function H. 
Assume that H goes to infinity at infinity and that there exists a > 0 such that e^aH∈ L^1(μ). (1) If μ satisfies the weighted Log-Sobolev inequality (<ref>), then, there exists a Lyapunov function, i.e. a smooth function W such that W(x,y) ≥ w > 0 for all (x,v), two positive constant λ and b such that L_ηW ≤ - λ H W + b. (2) Conversely, assume that there exists a Lyapunov function satisfying (<ref>) and that |∇ H| ≥ c > 0 for |(x,y)| large enough. Define θ(r) = sup_z ∈∂ A_rmax_i,j=1,...,2d|∂^2 H/∂ z_i ∂ z_j| and assume that θ(r) ≤ ce^C_0r with some positive constants C_0 and c for r sufficiently large. Then μ satisfies the weighted Log-Sobolev inequality (<ref>). A natural choice of Lyapunov function is W(x,v) = e^α(v^2/2 + V(x)) with α∈ (0,1), we refer <cit.> and <cit.> to readers for more details. A simple perturbation argument in <cit.> could extend the weighted Log-Sobolev inequality to f_∞. Assume that μ satisfies the weighted Log-Sobolev inequality (<ref>), let μ_1 be a probability measure with density h with respect to μ such that 1/λ≤ h ≤λ for some constant λ > 0, then μ_1 satisfies Ent_μ_1(g^2) ≤ρ_wls(μ) λ^2 ∫ (H^-2η|∇_x g|^2 + |∇_v g|^2) μ_1. We use the following lemma in <cit.> (Page240, Lemma 5.1.7), Let ϕ: I → on some open interval I ⊂ be convex of class C^2. For every bounded or suitably integrable measurable function f: E → with values in I, ∫_E ϕ(f)dμ - ϕ( ∫_Ef dμ) = inf_r ∈ I∫_E [ϕ(f) - ϕ(r) - ϕ'(r)(f - r)] μ. For the function ϕ(r) = rlogr on I = (0, ∞), we can easily establish (<ref>) by Ent_μ_1(g^2) = inf_r ∈ I∫_E [ϕ(g^2) - ϕ(r) - ϕ'(r)(g^2 - r)] dμ_1 ≤λ Ent_μ(g^2), and λ Ent_μ(g^2) ≤ρ_wls(μ) λ∫ (H^-2η|∇_x g|^2 + |∇_v g|^2) μ ≤ρ_wls(μ) λ^2∫ (H^-2η|∇_x g|^2 + |∇_v g|^2) μ_1. Now we verify the invariance of weighted Log-Sobolev constant, by which we extend weighted Log-Sobolev inequality (<ref>) to the measure of tensorized form μ^⊗ N. Assume that μ_1, μ_2 satisfy the weighted Log-Sobolev inequality (<ref>) with constant ρ_wls(μ_1) and ρ_wls(μ_2), then μ_1 ⊗μ_2 satisfies Ent_μ_1 ⊗μ_2(f^2) ≤ρ_wls∑_i=1^2 ∫_E_1 × E_2 (H^-2η(z_i)|∇_x_i f|^2 + |∇_v_i f|^2) μ_1 ⊗μ_2 with ρ_wls = max{ρ_wls(μ_1), ρ_wls(μ_1)}. We denote g^2(z_1) = ∫_E_2 f^2(z_1,z_2) μ_2(z_2), and of course g ≥ 0, then Ent_μ_1 ⊗μ_2(f^2) = Ent_μ_1(g^2) + ∫_E_1( ∫_E_2 f^2(z_1,z_2) log f^2(z_1,z_2) μ_2(z_2) . . - ∫_E_2 f^2(z_1,z_2) μ_2(z_2) log∫_E_2f^2(z_1,z_2) μ_2(z_2) ) μ_1(z_1). Observe that Eut_μ_2(f^2) = ∫_E_2 f^2(z_1,z_2) log f^2(z_1,z_2) μ_2(z_2) - ∫_E_2 f^2(z_1,z_2) μ_2(z_2) log∫_E_2f^2(z_1,z_2) μ_2(z_2) ≤ ρ_wls(μ_2) ∫_E_2 (H^-2η(z_2 )|∇_x_2f|^2 + |∇_v_2f|^2) μ_2(z_2). Next we estimate Ent_μ_1(g^2), Ent_μ_1(g^2) ≤ρ_wls(μ_1) ∫_E_2 (H^-2η(z_1)|∇_x_1g|^2 + |∇_v_1g|^2) μ_1(z_1), for terms on the right hand side, observe the weight H(z_1) only concerns the first variable, |H^-η(z_1)∇_x_1g|^2 = | H^-η(z_1) ∇_x_1√(∫_E_2 f^2(z_1,z_2) μ(z_2))|^2 ≤(∫_E_2 f (H^-η(z_1) |∇_x_1f|) μ(z_2))^2/∫_E_2 f^2(z_1,z_2) μ(z_2), and use Cauchy-Schwarz inequality, we have |H^-η(z_1)∇_x_1g|^2 ≤∫_E_2 H^-2η(z_1)|∇_x_1f|^2 μ_2(z_2). Similarly, |∇_v_1g|^2 ≤∫_E_2 |∇_v_1f|^2 μ_2(z_2), then we finish the proof. Assume that V satisfies (i) of Assumption <ref>, and W satisfies Assumption <ref>, then f_∞ satisfies weighted Log-Sobolev inequality (<ref>). Moreover, f_∞^⊗ N satisfies weighted Log-Sobolev inequality (<ref>) with the same constant. 
According to Assumption <ref>, |∇ W(x)| ≤λ/2|x|, then we have |W(x)| ≤ |W(0)| + λ/2|x|^2, hence |W ∗ρ_∞(x)| ≤ |W(0)| + ∫ |y|^2 f_∞(y,v) y v + λ/2|x|^2 ≤ C + λ/2|x|^2, for some constant C > 0 with 𝔼_ρ_∞|x|^2 < ∞. By (i) of Assumption <ref> for V, we have a μ≤ f_∞≤ b μ, for some a, b > 0 and μ = 1/Z e^-V(x) - 1/2v^2. Using Proposition <ref>, we obtain the weighted Log-Sobolev inequality for f_∞, then Proposition <ref> makes sure the same fact about f_∞^⊗ N. §.§ Large deviation estimates In this subsection, we deal with the error terms we mentioned before. For relative entropy, we recall Lemma <ref> and the error term reads as, 4/σ∑_i=1^N ∫_(Ω×^d)^N f^t_N | 1/N∑_j,j≠ iK(x_i-x_j) - K⋆ρ_∞(x_i) |^2 Z. For relative fisher information, we recall Corollary <ref> and we estimate the following term, ∫_(Ω×^d)^N f^t_N CR_N, M_t Cu Z. Our main tool comes from Jabin-Wang's large derivation type estimates for propagation of chaos for singular kernels in series of paper <cit.>, <cit.>. We refer Theorem 3 in <cit.> or Theorem 3 and Theorem 4 in <cit.> to readers for more details to this kind of technique, now we apply it to our cases — bounded or lipschtiz kernels. The following lemma gives more precise analysis to error term (<ref>). Here we take C = (∇_X, ∇_V). Let M: Ω×^d →^2Nd × 2Nd be a positive defined matrix function which takes as (<ref>). Assume that E, F, G are block diagonal positive defined matrices, i.e. E = diag{E^1,...,E^N}, F = diag{F^1,...,F^N} and G = diag{G^1,...,G^N}, where E^i, F^i and G^i are d × d positive defined matrices. Then we have ∫ f_N ∇R_N, M ∇ u≤ ∑_i=1^N 4/σ∫ f_N |√(E^i) R_N,i^1|^2 + 4/σ∫ f_N |(F^i)^3/4R_N,i^1|^2 ∑_i=1^N γ^2/2σ^2∫ f_N |√(E^i)R^3_N,i|^2 + γ^2/2σ∫ f_N |√(F^i) R^3_N,i|^2 ∑_i=1^N γ^2/2σ^2∫ f_N |√(F^i) R^2_N,i|^2 + 4 γ^2/σ^3∫ f_N |G^i R^2_N,i|^2 + C_K ∫ f_N |√(E)∇_V u|^2 + (C_K + 1/2) ∫ f_N |√(F)∇_V u|^2 + σ/4∫ f_N |∇_V u|^2 + (C_K + 1/2) ∫ f_N |√(E)∇_X u|^2 + 1/4∫ f_N |√(F)∇_X u|^2 + ∑_i=1^N σ/16∫ f_N | √(E^i)∇_v_i∇_x_i u|_^2d^2 + σ/16∫ f_N |(F^i)^1/4∇_v_i∇_v_i u|_^2d^2 - ∑_i=1^N ∫ f_N ⟨ R^1_N,i, (∇_v_i E^i) ∇_x_i u ⟩_^2d - ∫ f_N ⟨ R^1_N,i, (∇_v_i F^i) ∇_v_i u ⟩_^2d. where σ is diffusion constant, C_K = ∇ K_L^∞ and R^1_N,i = 1/N∑_j≠ i∇ K(x_i-x_j) - ∇ K⋆ρ_∞(x_i), R^3_N,i = 1/N∑_j, j≠ i∇ K(x_j - x_i) · [v ∗ρ^v_∞(v_i) - (v_i - v_j)], ρ^v_∞(v) = ∫ f_∞ x, R^2_N,i = - 1/N∑_j≠ iK(x_i-x_j) - K⋆ρ_∞(x_i). Here we understand ∇ K or R^1_NX,i as a d × d matrix, ∇ K · v and K are d dimensional vectors, we omit the summation about these components for convenience. The meaning of this lemma is to divide the error term of relative Fisher Information into L^2 norm of R_N^i, i=1,2,3 and small terms we can absorb by negative part in (<ref>). We directly compute ∇R_N and use the Young inequality. For each component i in position direction, we have ∇_x_iR_N = ∇_v_ilog f(x_i, v_i) ·{1/N∑_j=1,j≠ i^N ∇ K(x_i - x_j) - ∇ K⋆ρ(x_i) } + ∇_x_i∇_v_ilog f(x_i,v_i) ·{1/N∑_j=1,j≠ i^N K(x_i-x_j) - K⋆ρ(x_i) } - 1/N∑_j=1,j≠ i^N ∇ K(x_j - x_i) ·∇_v_jlog f(x_j, v_j), we denote these three lines above by ∇_x_iR_N = ∇_v_ilog f(x_i,v_i) · R^1_N,i + ∇_x_i∇_v_ilog f(x_i,v_i) · R^2_N,i + γ/σ R^3_N,i, where R^1_N,i = {1/N∑_j=1,j≠ i^N ∇ K(x_i-x_j) - ∇ K⋆ρ_∞(x_i) } is a d × d matrix, R_N,i^2 = {1/N∑_j=1,j≠ i^N K(x_i-x_j) - K⋆ρ_∞(x_i) } is a d dimensional vector and R^3_N,i = - σ/γ1/N∑_j=1,j≠ i^N ∇ K(x_j - x_i) ·∇_v_jlog f(x_j, v_j) is a d dimensional vector. Recall that we take M as a block matrix M = ( [ E F; F G ]), and E, F, G are d × d diagonal matrices. 
Hence, we use integral by parts for each component of the first term of (<ref>), we have ∑_i=1^N ∫ f_N ⟨∇_v_ilog f · R_N,i^1, E^i ∇_x_i u ⟩ = - ∑_i=1^N ∫ f_N ⟨∇_v_i u · R_N,i^1, E^i ∇_x_i u ⟩ + ∑_i=1^N ∫ f_N ⟨∇_v_ilog f_N · R_N,i^1, E^i ∇_x_i u ⟩ = - ∑_i=1^N ∫ f_N ⟨∇_v_i u · R_N,i^1, E^i ∇_x_i u ⟩ + ∑_i=1^N ∫⟨∇_v_i f_N · R^1_N,i, E^i ∇_x_i u ⟩ = - ∑_i=1^N ∫ f_N ⟨∇_v_i u · R_N,i^1, E^i ∇_x_i u ⟩ - ∑_i=1^N ∫ f_N ⟨ R^1_N,i, E^i ∇_v_i∇_x_i u ⟩_^2d - ∑_i=1^N ∫ f_N ⟨ R^1_N,i, (∇_v_i E^i) ∇_x_i u ⟩_^2d. Using Young inequality, we have, ∑_i=1^N ∫ f_N ⟨∇_v_ilog f · R_N,i^1, E^i ∇_x_i u ⟩ ≤ C_K ∫ f_N |√(E)∇_V u|^2 + C_K ∫ f_N |√(E)∇_X u|^2 + ∑_i=1^N σ/16∫ f_N | √(E^i)∇_v_i∇_x_i u|_^2d^2 + ∑_i=1^N 4/σ∫ f_N |√(E^i) R_N,i^1|_^2d^2 - ∑_i=1^N ∫ f_N ⟨ R^1_N,i, (∇_v_i E^i) ∇_x_i u ⟩_^2d, similarly, ∑_i=1^N ∫ f_N ⟨∇_v_ilog f · R_N,i^1, F^i ∇_v_i u ⟩ ≤ C_K ∫ f_N |√(F)∇_V u|^2 + ∑_i=1^N σ/16∫ f_N | (F^i)^1/4∇_v_i∇_v_i u|_^2d^2 + ∑_i=1^N 4/σ∫ f_N |(F^i)^3/4 R_N,i^1|_^2d^2 - ∑_i=1^N ∫ f_N ⟨ R^1_N,i, (∇_v_i F^i) ∇_v_i u ⟩_^2d. Actually the computation above is the same if we take f = f_∞, then we finish the estimates. For the third term of (<ref>), by Young inequality, we have ∑_i=1^N ∫ f_N ⟨ R^3_N,i, E^i ∇_x_i u ⟩≤∑_i=1^N γ^2/2σ^2∫ f_N |√(E^i)R^3_N,i|^2 + 1/2∫ f_N |√(E)∇_X u|^2, ∑_i=1^N ∫ f_N ⟨ R^3_N,i, F^i ∇_v_i u ⟩≤∑_i=1^N γ^2/2σ^2∫ f_N |√(F^i) R^3_N,i|^2 + 1/2∫ f_N |√(F)∇_V u|^2, moreover, we rewrite R^3_N,i as R^3_N,i = 1/N∑_j=1, j ≠ i^N ∇ K(x_j - x_i) · [v ∗ρ_∞^v - (v_i - v_j)], ρ_∞^v(v) = ∫ f_∞dx, by ∇_v_ilog f_∞ = -γ/σ v_i. In the end, the second term ∇_x_i∇_v_ilog f · R^2_N,i disappears by ∇_x_i∇_v_ilog f_∞ = 0 if we take f = f_∞. Now we finish the estimates of all error terms in position direction. For each component i in velocity direction, we simply have only one term ∇_v_iR_N = ∇_v_i∇_v_ilog f ·{1/N∑_j≠ iK(x_i-x_j) - K⋆ρ(x_i) } = ∇_v_i∇_v_ilog f · R^2_N,i. We take ∇_v_i∇_v_ilog f(x_i,v_i) = ∇_v_i∇_v_ilog f_∞(x_i,v_i) = - γ/σ Id_d× d and use Young inequality, then we obtain ∑_i=1^N ∫ f_N ⟨∇_v_i∇_v_ilog f_∞· R^2_N,i, F^i ∇_x_i u ⟩ ≤ ∑_i=1^N ∫ f_N |∇_v_i∇_v_ilog f_∞·√(F^i) R^2_N,i|^2 + 1/4∫ f_N |√(F)∇_X u|^2, ∑_i=1^N ∫ f_N ⟨∇_v_i∇_v_ilog f_∞· R^2_N,i, G^i ∇_v_i u ⟩ ≤ ∑_i=1^N 1/σ∫ f_N |∇_v_i∇_v_ilog f_∞· G^i R^2_N,i|^2 + σ/4∫ f_N |∇_V u|^2, now we finish the estimates of error terms on velocity direction. Finally, we collect all terms of (<ref>), (<ref>), (<ref>), (<ref>), (<ref>) and(<ref>), we finish the proof. For error terms of relative Fisher Information, the most difficult part is to show the uniform-in-time estimates for the following terms, ∑_i,k=1^N ∫ f_N ⟨∇_v_i∇_v_ilog f · R^2_N,i, M_t^ik C_k u ⟩, ∑_i,k=1^N ∫ f_N ⟨∇_x_i∇_v_ilog f · R^2_N,i, M_t^ik C_k u ⟩. It is not easy to deal with the terms ∇_v_i∇_v_ilog f and ∇_x_i∇_v_ilog f by uniform-in-time way, that is one of the most important reason we select the reference measure as f_∞. It is natural to conjecture that |∇^2 log f| ≤ C(1 + 1/2v^2 + V(x))^k for some positive order k>0. We might explore how to control these terms better in further study. Now let us focus on three error terms in Lemma <ref>. 
We rewrite them as following as Theorem 3 in <cit.> has talked about, ∑_i=1^N ∫ f_N |R^1_N,i|^2 = ∫ f_N |1/√(N)∑_j,j≠ 1∇ K(x_1-x_j) - ∇ K⋆ρ_∞(x_1) |^2, ∑_i=1^N ∫ f_N |R^3_N,i|^2 = ∫ f_N | 1/√(N)∑_j, j≠ 1∇ K(x_j - x_1) · [v ∗ρ^v_∞(v_1) - (v_1 - v_j)]|^2, ∑_i=1^N ∫ f_N |R^2_N,i|^2 = ∫ f_N | 1/√(N)∑_j,j≠ 1K(x_1-x_j) - K⋆ρ_∞(x_1) |^2, for terms (<ref>), (<ref>) and (<ref>), we define ψ_1(x_1,x_j) = ∇ K(x_1-x_j) - ∇ K⋆ρ_∞(x_1), ψ_2(x_1,x_j) = K(x_1-x_j) - K⋆ρ_∞(x_1), ψ_3(v_1,v_j) = ∇ K(x_j - x_1) · [v ∗ρ^v_∞(v_1) - (v_1 - v_j)]. We also define the 2p moment of f_∞ for any p ≥ 1, M_p = ( ∫ f_∞ (1 + 1/2v^2 + 1/2x^2)^p )^1/p. By similar argument in Section 1.4 in <cit.>, we have sup_p M_p/p := Λ < 1/e λ, where λ satisfies ∫ f_∞ e^λ(1 + 1/2v^2 + 1/2x^2) z < ∞, We easily take λ∈ (0,β) in the following. By Lemma 1 in <cit.>, i.e. Gibbs inequality, we have 1/N∫ f_N |R^k_N|^2 dZ ≤1/νH_N(f_N| f̅_̅N̅) + 1/N∫f̅_̅N̅exp(ν |R^k_N|^2) Z for some ν > 0 and k = 1,2,3, it is sufficient to show that ∫f̅_̅N̅exp( ν/N∑_j_1,j_2=1^N ψ_i(x_1, x_j_1) ψ_i(x_1,x_j_2)) Z < ∞, i = 1,2, and ∫f̅_̅N̅exp( ν/N∑_j_1,j_2=1^N ψ_3(v_1, v_j_1) ψ_3(v_1,v_j_2)) Z < ∞ for some ν > 0. We observe that ψ_1(x_1,x_j) is bounded by Assumption <ref> and ∫ψ_1(z,x) dx = 0, which implies that we can directly use Theorem 3 in <cit.> for this type of ψ_1(x_1 - x_j). We recall the following proposition in <cit.>, Assume that ψ∈ L^∞ with ψ_L^∞ < 1/2e, and for any fixed y, ∫ψ(y,x) f_∞ z = 0, then ∫f̅_̅N̅exp( 1/N∑_j_1,j_2=1^N ψ(x_1, x_j_1) ψ(x_1,x_j_2)) Z < C, where f̅_̅N̅ = f_∞^⊗ N and C > 0 is independent of N. Theorem 3 of <cit.>. Now let ψ = νψ_1 for some ν > 0, we obtain the desired result. For the cases ψ_2 and ψ_3, we prove the following proposition: Assume that |ψ(m,n)| ≤ C(1 + |n|) with C ≤1/8e^2 Λ, and for any fixed m, ∫ψ(m,n) f_∞dz = 0. Here (m,n) takes position pair (y,x) ∈Ω×Ω or velocity pair (u,v) ∈^d ×^d, then we have ∫f̅_̅N̅exp ( 1/N∑_j_1,j_2=1^N ψ(m_1, n_j_1) ψ(m_1,n_j_2)) Z < C, where f̅_̅N̅ = f_∞^⊗ N and C > 0 is independent of N. We use (m,n) to denote position pair (y,x) ∈Ω×Ω or velocity pair (u,v), and we show the proof of these two cases together. Since exp(A) ≤exp(A) + exp(-A) = 2 ∑_k=0^∞1/(2k)! A^2k, it suffices only to bound the series with even terms, 2 ∑_k=0^∞1/(2k)!∫f̅_̅N̅( 1/N∑_j_1,j_2=1^N ψ(m_1, n_j_1) ψ(m_1,n_j_2) )^2k Z, where in general the k-th term can be expanded as 1/(2k)!∫f̅_̅N̅( 1/N∑_j_1,j_2=1^N ψ(m_1, n_j_1) ψ(m_1,n_j_2))^2k Z = 1/(2k)!1/N^2k∑_j_1,...,j_4k=1^N ∫f̅_̅N̅ψ(m_1,n_j_1)... ψ(m_1,n_j_4k) Z. Now we divide the proof in two different cases: where k is small compared to N and in the simpler case where k is comparable to or larger than N. Case: 3 ≤ 3k ≤ N. In this case, for each term of (<ref>), we have ∫f̅_̅N̅ψ(m_1,n_j_1)... ψ(m_1,n_j_4k)dZ ≤ C^4k∫f̅_̅N̅(1 + |n_j_1|)...(1 + |n_j_4k|) Z , since we have cancellation condition ∫ψ(m,n) f_∞dz = 0, the whole estimates relies on how many choices of multi-indices (j_1,...,j_4k) leading to a non-vanishing term. By notations of Theorem 3 and Theorem 4 in <cit.>, we could denote the set of multi-indices (j_1,...,j_4k) s.t. ∫f̅_̅N̅ψ(m_1,n_j_1)... ψ(m_1,n_j_4k) Z ≠ 0, by ℰ_N-1,4k since j_i ≠ 1 for all 1 ≤ i ≤ 4k, we also denote (a_1,...,a_N) the multiplicity for (j_1,...,j_4k), i.e. a_l = |{ν∈{1,...,4k}, j_ν = l}|. Actually, if there exists l ≠ 1 s.t. a_l = 1, then the variable x_l enters exactly once in the integration, assume for simplicity that j_1 = l then ∫f̅_̅N̅ψ(m_1,n_j_1)... ψ(m_1,n_j_4k) Z = ∫ψ(m_1,n_j_1)... 
ψ(m_1,n_j_4k)Π_i≠ l f_∞(z_i) ∫ψ(m_1,n_l)f_∞(z_l) = 0, by the assumption of vanishing condition for ψ, provided l = j_1 ≠ 1. Using these notations, we have 1/(2k)!1/N^2k∑_j_1,...,j_4k = 1^N ∫f̅_̅N̅ψ(m_1,n_j_1)... ψ(m_1,n_j_4k) Z = 1/(2k)!1/N^2k∑_(j_1,...,j_4k) ∈ℰ_N-1,4k∫f̅_̅N̅ψ(m_1,n_j_1)... ψ(m_1,n_j_4k) Z ≤ 1/(2k)!1/N^2k∑_(j_1,...,j_4k) ∈ℰ_N-1,4k( ∑_(j_2k+1,...,j_4k) ∈ℰ_N-1,4k 1 ) ∫f̅_̅N̅ |ψ(m_1,n_j_1)|^2... |ψ(m_1,n_j_2k)|^2 Z + 1/(2k)!1/N^2k∑_(j_1,...,j_4k) ∈ℰ_N-1,4k( ∑_(j_1,...,j_2k) ∈ℰ_N-1,4k 1 ) ∫f̅_̅N̅ |ψ(m_1,n_j_2k+1)|^2... |ψ(m_1,n_j_4k)|^2 Z, where ( ∑_(j_2k+1,...,j_4k) ∈ℰ_N-1,4k 1 ) for the first line, we mean that for each fixed former 2k indices in (j_1,...,j_4k) ∈ℰ_N-1,4k, we add all possible cases about later 2k indices in ℰ_N-1,4k, moreover, we have ∫f̅_̅N̅ |ψ(m_1, n_j_1)|^2...|ψ(m_1, n_j_2k)|^2dZ = C^4k∫f̅_̅N̅ [(1 + |n_2|)^2]^a_2...[(1 + |n_N|)^2]^a_N Z = C^4k M_a_1^a_1M_a_2^a_2...M_a_N^a_N≤ (CΛ)^2k a_1^a_1...a_N^a_N, with the convention that 0^0 = 1, then we have 1/(2k)!1/N^2k∑_(j_1,...,j_4k) ∈ℰ_N-1,4k( ∑_(j_2k+1,...,j_4k) ∈ℰ_N-1,4k 1 ) ∫f̅_̅N̅ |ψ(x_1, x_j_1)|^2...|ψ(x_1, x_j_2k)|^2 Z ≤ 1/(2k)!1/N^2k(2C^2Λ)^2k∑_l=1^2k∑_J_4k∈ℰ_N-1,4k, |S(J_2k)| = l( ∑_(j_2k+1,...,j_4k) ∈ℰ_N-1,4k 1 ) a_1^a_1...a_N^a_N, and ( ∑_(j_2k+1,...,j_4k) ∈ℰ_N-1,4k 1 ) ≤∑_a'_1 + ... + a'_l = 2k, a'_1 ≥ 1,...,a'_l ≥ 1(2k)!/a'_1!...a'_l!≤ l^2k, we denote ∑_l=1^2k∑_J_4k∈ℰ_N-1,4k, |S(J_2k)| = l l^2k a_1^a_1...a_N^a_N = ∑_l=1^2k l^2k C_N^l U^l_N,2k, where U_N,2k^l := ∑_a_1 +... + a_l = 2k, a_1 ≥ 1,...,a_l ≥ 1(2k)!/a_1!...a_l! a_1^a_1...a_l^a_l, using (2.13) and Lemma 6 of <cit.>, we have U_N,2k^l ≤ e^2k (2k)! ( ∑_a_1 +... + a_l = 2k, a_1 ≥ 1,...,a_l ≥ 1 1 ) ≤1/√(k) (2e)^2k(2k)!, Insert (<ref>) and (<ref>) into (<ref>), we obtain that 1/(2k)!1/N^2k∑_j_1,...,j_4k = 1^N ∫f̅_̅N̅ψ(m_1,n_j_1)... ψ(m_1,n_j_4k) Z ≤ (4eC^2Λ)^2k/(2k)!2/N^2k (2k)! 1/√(k)∑_l=1^2k C_N^l l^2k = 2/√(k) (8e^2C^2Λ)^2k∑_l=1^2k C_N^l l^2k N^-2k, furthermore, by Stirling's formula C_N^l l^2k/N^2k = l^2k-l/N^2k-ll^l/N^lN!/(N-l)! l!≤1/√(π l)√(N/N-l)(N/N-l)^N-l, and by assumption l < 3k ≤ N gives that N/N-l≤3/2, thus 1/(2k)!1/N^2k∑_j_1,...,j_4k = 1^N ∫f̅_̅N̅ψ(m_1,n_j_1)... ψ(m_1,n_j_4k) Z ≤ 4k (8e^2C^2Λ)^2k, we finish the proof of this case. Case: 3k > N. By Cauchy inequality, we have 1/N∑_j_1,j_2 = 1^N ψ(m_1, n_j_1) ψ(m_1, n_j_2) ≤C^2/N∑_j_1, j_2^N (1 + |x_j_1|)(1 + |x_j_2|) ≤ C^2 ∑_j=1^N (1 + |x_j|)^2, then we have 1/(2k)!1/N^2k∑_j_1,...,j_4k = 1^N ∫f̅_̅N̅ψ(m_1,n_j_1)... ψ(m_1,n_j_4k) Z ≤ C^4k/(2k)!∫f̅_̅N̅( ∑_j=1^N (1 + |x_j|)^2 )^2k Z ≤ (5e^2C^2 Λ)^2k, which follows the same argument of Proposition 4 of <cit.>. Finally, we combine (<ref>) and (<ref>), then we complete the proof. Now let us take ψ = νψ_2 or νψ_3, we obtain the desired bound O(1) of (<ref>) and (<ref>) by above proposition. § PROOF OF THE MAIN RESULTS In this section, we prove three main theorems we claimed in Section <ref>. We roughly divide all three proofs into two steps. ∙ Step one is show the uniform-in-N estimates for the time evolution of our new quantity ℰ_N(f_N|f̅_N). The key point is to select suitable weight matrix function M_t to estimate the lower bound of new matrix S_t on the right hand side of (<ref>), and the lower bound of matrix S_t should be independent on particle number N. In the case of uniform-in-time propagation of chaos, we also need to deal with the second term on the right hand side of (<ref>), -2 ∫_(Ω×^d)^N f_N ∇R_N, M ∇logf_N/f^⊗ N Z, this is exactly what we do in Lemma <ref>. 
∙ Step two is to establish the Gronwall type inequality for time evolution of our new quantity ℰ_N(f_N|f̅_N) by some functional inequalities developed in Section <ref> and large deviation estimates proved in Section <ref>. In the case of uniform-in-time propagation of chaos, we require Log-Sobolev inequality and weighted Log-Sobolev inequality for non-linear equilibrium f_∞. However, we require that the N particle invariant measure f_N,∞ satisfies uniform-in-N Log-Sobolev inequality for the long time convergence of Eq.(<ref>), which is more harder than the case of uniform in time propagation of chaos which takes the reference measure as the tensor form f_∞^⊗ N. We refer <cit.> for the study of uniform-in-N functional inequalities. §.§ Proof of Theorem <ref> Step1. Let us take f̅_N = f_N,∞ and R_N = 0. We observe that ∇_X ∇_V log f_N,∞ = ∇_V ∇_X log f_N,∞ = 0 and ∇_V ∇_V log f_N,∞ = - 1/2σ^-1γ Id_Nd × Id, then the matrix S in (<ref>) of Corollary <ref> reads as S = ( [ 2F - ∂_t E - L^∗_NE 4γ F - ∂_t F - L^∗_NF + 2E ∇^2 U; 2G - ∂_t F - L^∗_NF 4γ G - ∂_t G - L^∗_N G + 2F ∇^2 U ]). Now we select the weight matrix M as a constant matrix, E = diag{δ a^3,...,δ a^3}, F = diag{δ a^2,...,δ a^2}, G = diag{2δ a,...,2δ a}, where E,F,G are Nd × Nd constant diagonal matrices and a > 0, δ > 0 are to be confirmed, then the matrix S_t reduces to S = ( [ 2F 4γ F + 2E ∇^2 U; 2G 4γ G + 2F ∇^2 U ]). Observe that the top left corner matrix F implies the entropy dissipation in position direction, and right down corner concerns the entropy dissipation in velocity direction. If ∇^2 U has a positive lower bound, i.e.∇^2 U ≥κ Id_dN × dN for some κ > 0, then we have 4 γ G + 2F ∇^2 U ≥ (8 δ a γ + 2δ a^2 κ) Id_dN × dN. However, our assumptions also treat ∇^2 U doesn't have positive lower bound. Since we assume that ∇^2 V _L^∞ < ∞ and ∇^2 W _L^∞ < ∞, we have ∇^2 U _L^∞≤ C_K + C_V < ∞. Now for right down corner, we choose a such that a∇^2 U_L^∞ < 2 γ, then we have a ≤2γ/C_K + C_V, and ⟨∇_V u, (4γ G + 2F ∇^2 U) ∇_Vu ⟩≥ 4 δ a γ|∇_V u|^2. For top right corner and left down corner, we deal with cross terms. It is easy to obtain that ⟨∇_Xu, [4γ F + 2E ∇^2 U + 2G] ∇_Vu ⟩ ≤ δ a[4 + 4aγ + 2 a^2(C_K + C_V)]|∇_V u||∇_X u| ≤ δ a(4 + 8aγ) |∇_V u||∇_X u| ≤ 1/2δ a^2 |∇_X u|^2 + 1/2δ(4 + 8aγ)^2 |∇_V u|^2 by (<ref>) in the third line and Cauchy inequality in the fourth line. Combining (<ref>) and (<ref>), we have - ∫ f_N ∇ u, S∇ u ≤ - 3/2δ a^2 ∫ f_N |∇_X u|^2 - 4δ aγ∫ f_N |∇_V u|^2 + m_1 ∫ f_N |∇_V u|^2, provided m_1 = 1/2δ(4 + 8a γ)^2 > 0. Now we choose δ such that m_1 ≤σ/4, then we have δ≤σ/2(4 + 8aγ)^2, and d/dtℰ^M_N(t) ≤ - c_1/N∫ f_N |∇_X u|^2 - c_2/N∫ f_N |∇_V u|^2, provided c_1 = 3/2δ a^2 and c_2 = 4δ a γ + σ/2. Finally we obtain the entropy dissipation in X and V direction with constants which are independent of N. Step2. We directly use the results in <cit.> in this step, under Assumption <ref> and <ref> with C_K < 1, we have uniform-in-N Log Sobolev inequality of N particle invariant measure f_N, ∞, Ent_μ_N (f^2) ≤ρ_LS∫_^dN |∇ f|^2 dμ_N, f ∈ C^1_b(^dN), by our notation, let μ_N = f_N,∞, we have d/dtℰ^M_N(t) ≤ -c ℰ^M_N(t), where c = 1/2(1+ρ_LS)min{3/2δ a^2, σ/2} is a constant depending on ρ_LS, U, σ, γ, then we finish the proof. §.§ Proof of Theorem <ref> In this subsection, we prove the particle system (<ref>) approximates the unique equilibrium (<ref>) of Vlasov-Fokker-Planck equation (<ref>) when diffusion strength σ is large enough. We deal with two kinds of confining potentials by selecting different weight matrix M. 
When confining potentials V satisfy Assumption <ref>, we choose E = diag{δ a^3,...,δ a^3}, F = diag{δ a^2,...,δ a^2}, G = diag{2δ a,...,2δ a}, where E,F,G are Nd × Nd constant diagonal matrices and δ, a > 0 are to be confirmed. When confining potentials V satisfy Assumption <ref>, we choose { E = diag{e(z_1)Id_d × d,...,e(z_N)Id_d × d}, F = diag{f(z_1)Id_d × d,...,f(z_N)Id_d × d}, G = diag{g(z_1)Id_d × d,...,g(z_N)Id_d × d}, . where E,F,G are also Nd × Nd diagonal matrices, and e(z), f(z), g(z) are d × d diagonal matrices which should be understood as e(z)Id_d× d, f(z)Id_d× d and g(z)Id_d× d, we omit the symbol “Id_d × d" for convenience. We choose e(z), f(z) and g(z) as e(z) = δ a^3(H(z))^-3θ, b(z) = δ a^2(H(z))^-2θ, c(z) = 2 δ a(H(z))^-θ, where H(z) = v^2/2 + V(x) + H_0, H_0 > 0, and δ, a, θ, H_0 > 0 are to be confirmed. In the next, we establish Gronwall type inequality for the quantity ℰ_N^M(f^t_N|f^⊗ N_∞). Case 1. The confining potentials V satisfy Assumption <ref> and we select M as constant diagonal matrix defined by (<ref>). Step 1. The matrix M is exactly the same as we have used in the last subsection, we immediately have - ∫ f_N ∇ u, S ∇ u ≤ - 3/2δ a^2 ∫ f_N |∇_X u|^2 - 4δ aγ∫ f_N |∇_V u|^2 + m_1 ∫ f_N |∇_V u|^2, provided m_1 = 1/2δ(4 + 8a γ)^2 and a ≤2γ/C_K + C_V. By Lemma <ref>, we have ∫ f_N ∇R_N, M ∇ u≤ n_1 ∫ f_N |∇_X u|^2 + n_2 ∫ f_N |∇_V u|^2 + 8δ a^3/σ∫ f_N |R_N^1|^2 + δ a^2 γ^2(1+a)/2σ^2∫ f_N |R_N^3|^2 + ( 2 γ^2 δ a^3/σ^2 + γ^2 δ^2 a^2/σ^3) ∫ f_N |R_N^2|^2 + ∑_i=1^N σ/16∫ f_N | √(E^ii)∇_v_i∇_x_i u|_^2d^2 + σ/16∫ f_N | √(F^ii)∇_v_i∇_v_i u|_^2d^2, provided n_1 = 1/4δ a^2 + (C_K + 1/2)δ a^3 and n_2 = σ/4 + δ a^2(C_K a + C_K + 1/2). By Lemma <ref> and Young inequality, we have d/dt H_N(f^t_N|f^⊗ N_∞) ≤ - 3 σ/41/N∫ f_N|∇_V u|^2 + 1/σ1/N∫ f_N|R^0_N|^2. Combining (<ref>), (<ref>) and (<ref>), we obtain d/dtℰ_N^M(f_N|f_∞^⊗ N) ≤ - c_1/N∫ f_N |∇_X u|^2 - c_2/N∫ f_N |∇_V u|^2 + 8δ a^3/σ1/N∫ f_N |R_N^1|^2 + δ a^2 γ^2(1+a)/2σ^21/N∫ f_N |R_N^3|^2 + ( 2 γ^2 δ a^3/σ^2 + γ^2 δ^2 a^2/σ^3) 1/N∫ f_N |R_N^2|^2 + 1/σ1/N∫ f_N|R_N|^2, where c_1 = -1/2δ a^2 + (C_K + 1/2)δ a^3 and c_2 = -σ/2 - 4 δ a γ + δ a^2(C_K a + C_K + 1/2) + m_1. Recall we choose a such that a ≤2 γ/C_K + C_V, now we update a such that a ≤min{2 γ/C_K + C_V, 1/4C_K + 2}, then we have c_1 ≤ -1/4δ a^2 and δ a^2(C_K a + C_K + 1/2) ≤δ a. In the next we choose δ such that δ≤σ/4[8 + a + 28aγ + 32a^2 γ^2], we have c_2 ≤ - σ/4. Finally, we update (<ref>) as d/dtℰ_N^M(f_N|f_∞^⊗ N) ≤ - δ a^2/41/N∫ f_N |∇_X u|^2 - σ/41/N∫ f_N |∇_V u|^2 + 8δ a^3/σ1/N∫ f_N |R_N^1|^2 + δ a^2 γ^2(1+a)/2σ^21/N∫ f_N |R_N^3|^2 + ( 2 γ^2 δ a^3/σ^2 + γ^2 δ^2 a^2/σ^3) 1/N∫ f_N |R_N^2|^2 + 1/σ1/N∫ f_N|R_N|^2. Step2. We start from (<ref>). By Proposition <ref>, we have 1/N∫ f_N|R^1_N|^2 ≤1/νH_N(f^t_N|f^⊗ N_∞) + C/N, and by Proposition <ref>, we have 1/N∫ f_N|R^i_N|^2 ≤1/νH_N(f^t_N|f^⊗ N_∞) + C/N, where i = 0,2,3, the constant ν > 0 satisfies (16C^2_Ke^2 Λ) ν < 1 and the constant C > 0 depends on ν, σ, γ, K and V. Using (<ref>) and (<ref>), we have d/dtℰ_N^M(f_N|f_∞^⊗ N) ≤ - δ a^2/41/N∫ f_N |∇_X u|^2 - σ/41/N∫ f_N |∇_V u|^2 + n_3/ν H_N(f_N|f_∞^⊗ N) + C/N, where n_3 = [8δ a^3/σ + δ a^2 γ^2(1+a)/2σ^2 + 2 γ^2 δ a^3/σ^2 + γ^2 δ^2 a^2/σ^3 + 1/σ]. Now let us determine the value of a > 0, δ > 0 and ν > 0 such that n_3 ρ_ls/ν≤min{δ a^2/8, σ/8}, where ρ_ls is the Log-Sobolev inequality constant of f_∞ (See Corollary <ref>). 
Recall the constant a > 0 satisfies a ≤min{2 γ/C_K + C_V, 1/4C_K + 2} < 1, now we choose δ = σ/4[10 + 28 γ + 32 γ^2]≤σ/4[8 + a + 28aγ + 32a^2 γ^2] by (<ref>). We also choose 1/ν = 20C_K^2 e σ/γ with Λ≤1/e λ satisfies λ∈ (0, β) in Proposition <ref> and <ref>. Observing that δ a^2 < σ by a < 1 and δ < σ, we only need n_3 ρ_ls/ν≤δ a^2/8. Now we choose a and σ to make every term of n_3 less than δ a^2 ν/40 ρ_ls, { 8δ a^3/σ≤δ a^2 ν/40 ρ_ls⟹ a ≤γ/6400 eρ_ls C_K^2, δ a^2 γ^2/σ^2≤δ a^2 ν/40 ρ_ls⟹σ≥ 3200ρ_lse γ C_K^2, 2 γ^2 δ a^2/σ^2≤δ a^2 ν/40 ρ_ls⟹σ≥ 1600ρ_lse γ C_K^2, γ^2 δ a^2/σ^2≤δ a^2 ν/40 ρ_ls⟹σ≥ 3200ρ_lse γ C_K^2, 1/σ≤δ a^2 ν/40 ρ_ls⟹σ≥160[10 + 28 γ + 32 γ^2]ρ_lseC_K^2/a^2 γ. . We choose a such that a ≤min{2 γ/C_K + C_V, 1/4C_K + 2, γ/6400 eρ_ls C_K^2}, by the first line above and (<ref>), for example, saying a = min{γ,1}/6400 eρ_ls (C_K+1)^2 + 4C_V + 4C_K + 2, then we have the range of diffusion constant in our result, σ≥max{160[10 + 28 γ + 32 γ^2]ρ_lse/a^2 γ, 3200ρ_lseγ}C_K^2. Finally, we update (<ref>) as the following d/dtℰ_N^M(f^t_N|f_∞^⊗ N) ≤ - δ a^2/161/N∫ f_N |∇_X u|^2 - σ/161/N∫ f_N |∇_V u|^2 - δ a^2/16 ρ_lsH_N(f^t_N|f_∞^⊗ N) + C/N ≤ -δ a^2/16(ρ_ls + 1)ℰ_N^M(f^t_N|f_∞^⊗ N) + C/N, we finish the proof of this case. Case 2. The confining potentials V satisfy Assumption <ref> and we select M as diagonal mateix function defined by (<ref>). Step1. We take E,F,G as the following tensorized matrices, E = diag{e(z_1),...,e(z_N)}, F = diag{f(z_1),...,f(z_N)}, G = diag{g(z_1),...,g(z_N)}, we take e,f,g as scalar functions read as e(z) = δ a^3 (H(z))^-3θ, f(z) = δ a^2 (H(z))^-2θ, g(z) = 2 δ a (H(z))^-θ, where H(z) = v^2/2 + V(x) + H_0, H_0 > 0, and δ, a, θ, H_0 > 0 are to be confirmed. We recommend <cit.> to readers for similar argument of one particle version. Now let us estimate the lower bound of matrix S in Corollary <ref>. We recall S = ( [ 2F - ∂_t E - L^∗_NE D_1 - ∂_t F - L^∗_NF - 2E ∇^2 U + 2 γ F; 2G - ∂_t F - L^∗_NF D_2 - ∂_t G - L^∗_N G - 2F ∇^2 U + 2 γ G ]), after we take f̅_̅N̅ = f^⊗ N_∞, we have S = ( [ 2F - L^∗_NE 4 γ F - L^∗_NF + 2E ∇^2 U; 2G - L^∗_NF 4 γ G - L^∗_N G + 2F ∇^2 U ]). For the left top corner element, the diagonal matrix F still offers the dissipation in position direction. Observing that 2F - L^∗_N E is a Nd × Nd diagonal matrix, we could estimate them like scalar. Before that, we firstly estimate the upper bound of L^∗_N E, L^∗_N F and L^∗_N G. For some s > 0, we have L^∗_N (H^-sθ(z_i)) = ∑_j=1^N (v_j ·∇_x_j - ∇_x_jU ·∇_v_j - γ v_j ·∇_v_j + σΔ_x_j)H^-sθ(z_i) = -(sθ) {σ d/H + σ (-sθ -1)v_i^2/H^2 - γ v_i^2/H - (1/N∑_j:j ≠ i v_i ·∇ W(x_i - x_j))H^-1}H^-s θ(z_i). Recall we assume that ∇ W_L^∞ < ∞ in this case, we have |L^∗_N (H^-sθ(z_i))| ≤ sθ(σ d + σ(sθ + 1)/H_0 + γ + ∇ W_L^∞) H^-s θ(z_i), now we choose H_0 large enough, which satisfies σ(d + 3θ + 1)/H_0≤γ, we have |L^∗_N (H(z_i)^-sθ)| ≤ sθ(2 γ + ∇ W_L^∞)H^-s θ. For i-th component of 2F - L_N^∗E, we have 2f(z_i) - L_N^∗(e(z_i)) = 2 δ a^2 H^-2θ(z_i) - δ a^3 L^∗_N (H^-3θ(z_i)) ≥ 2 δ a^2 H^-2θ - 3θδ a^3(2γ + ∇ W_L^∞) H^-3θ, again we choose H_0 such that 3θ a (2γ + ∇ W_L^∞)/H_0^θ≤ 1, we have ⟨∇_Xu, (2F - L_N^∗E)∇_X u ⟩≥δ a^2 ∑_i=1^N H^-2θ(z_i )|∇_x_iu|^2, provided H_0 satisfies H_0 ≥max{σ(d + 3 θ + 1)/γ, [3θ a (2γ + ∇ W_L^∞)]^1/θ}. For the right down corner of S, we recall the formulation of dN × dN matrix ∇^2 U in (<ref>), and we have V^-2θ∇^2 V_L^∞≤ C_V^θ by Assumption <ref>, then we obtain H^-2θ∇^2 U_L^∞≤V^-2θ∇^2 V_L^∞ + C_K/H_0≤ C_V^θ + C_K < ∞, provided H_0 > 1 for convenience. 
Combining (<ref>) and (<ref>), we immediately have - ∇_V u, (4 γ G - L^∗_N G + 2F ∇^2 U) ∇_V u ≤ δ a[4γ + θ(2γ + ∇ W_L^∞) + 2a(C_V^θ + C_K)]|∇_V u|^2, we choose a such that a(C^θ_V + C_K) ≤γ, we have - ∇_V u, (4 γ G - L^∗_N G + 2F ∇^2 U)∇_V u ≤ δ a[6γ + θ(2γ + ∇ W_L^∞) ]|∇_V u|^2. For cross terms, by (<ref>) and similar argument above, we have ∇_X u, (4 γ F + 2G - 2 L^∗_NF + 2E ∇^2 U) ∇_V u ≤ [4 γ F + 2G + 2δ a^2|L^∗_N(H^-2θ(z_i))| + 2 δ a^3(C_V^θ + C_K)H^-θ]|∇_Vu||∇_Xu| ≤ δ a H^-θ[4 + 4γ a + 4a θ(2γ + ∇ W_L^∞)H^-θ + 2a^2(C_V^θ + C_K)]|∇_Vu||∇_Xu| ≤ δ a H^-θ[4 + 6γ a + 4aθ(2γ + ∇ W_L^∞)]|∇_Vu||∇_Xu| ≤ 1/4δ a^2 H^-2θ|∇_Xu|^2 + δ [4 + 6γ a + 4aθ(2γ + ∇ W_L^∞)]^2|∇_Vu|^2. Now we collect (<ref>), (<ref>) and (<ref>), we have - ∫ f_N∇ u, S ∇ u≤ - 3/4δ a^2 ∑_i=1^N ∫ f_N H^-2θ(z_i)|∇_x_iu|^2 + m_2 ∫ f_N |∇_Vu|^2, where m_2 = δ [4 + 6γ a + 4aθ(2γ + ∇ W_L^∞)]^2 + δ a[6γ + θ(2γ + ∇ W_L^∞)]. Recall we choose H_0 > 1 for convenience, and again by Lemma <ref>, ∫ f_N ∇R_N, M ∇ u≤ n'_1 ∑_i=1^N ∫ f_N H^-2θ(z_i)|∇_x_i u|^2 + n'_2 ∫ f_N |∇_V u|^2 + 8δ a^3/σ∫ f_N |R_N^1|^2 + δ a^2 γ^2(1+a)/2σ^2∫ f_N |R_N^3|^2 + ( 2 γ^2 δ a^3/σ^2 + γ^2 δ^2 a^2/σ^3) ∫ f_N |R_N^2|^2 + ∑_i=1^N σ/16∫ f_N | √(E^ii)∇_v_i∇_x_i u|_^2d^2 + σ/16∫ f_N | (F^ii)^1/4∇_v_i∇_v_i u|_^2d^2 + ∑_i=1^N ∫ f_N ⟨ R^1_N,i, (∇_v_i E^ii) ∇_x_i u ⟩_^2d + ∫ f_N ⟨ R^1_N,i, (∇_v_i F^ii) ∇_v_i u ⟩_^2d. provided n'_1 = 1/4δ a^2 + (C_K + 1/2)δ a^3 and n'_2 = σ/4 + δ a^2(C_K a + C_K + 1/2). We only need to estimate the last two term. By the computation (<ref>), we have ∫ f_N ⟨ R^1_N,i, (∇_v_i E^ii) ∇_x_i u ⟩_^2d ≤ 3 θδ a^3/2H^3θ_0∫ f_N |R_N,i^1|^2 + 3 θδ a^3/2∫ f_N H^-2θ(z_i)|∇_x_i u|^2, and ∫ f_N ⟨ R^1_N,i, (∇_v_i F^ii) ∇_v_i u ⟩_^2d ≤ 8θδ^2 a^4/H_0^4θσ∫ f_N |R_N,i^1|^2 + σ/16∫ f_N |∇_v_iu|^2. Now we insert (<ref>) and (<ref>) into (<ref>), and combine with (<ref>), we obtain ∫ f_N ∇ u, M ∇ u≤ n_1 ∑_i=1^N ∫ f_N H^-2θ(z_i)|∇_x_i u|^2 + n_2 ∫ f_N |∇_V u|^2 + (8δ a^3/σ + 8θδ^2 a^4/H_0^4θσ + 3 θδ a^3/2H^3θ_0) ∫ f_N |R_N^1|^2 + ( 2 γ^2 δ a^3/σ^2 + γ^2 δ^2 a^2/σ^3) ∫ f_N |R_N^2|^2 + δ a^2 γ^2(1+a)/2σ^2∫ f_N |R_N^3|^2 provided n_1 = - 1/2δ a^2 + (C_K + 1/2 + 3θ/2) δ a^3 and n_2 = 5σ/16 + δ a^2(C_K a + C_K + 1/2) + m_2. By Lemma <ref> and Young inequality, we have d/dt H_N(f^t_N|f^⊗ N_∞) ≤ - 3 σ/41/N∫ f_N|∇_V u|^2 + 1/σ1/N∫ f_N|R^0_N|^2. Finally, we combine (<ref>) and (<ref>), and update the constants a and δ as a ≤min{1/4C_K + 6θ + 2, γ/C_V^θ + C_K} and δ≤3 σ/8 + 32C_K + m_2', where m_2' = [4 + 6γ a + 4aθ(2γ + ∇ W_L^∞)]^2 + a[6γ + θ(2γ + ∇ W_L^∞)] comes from (<ref>), we obtain, d/dtℰ_N^M(f_N^t|f_∞^⊗ N) ≤ - δ a^2/41/N∑_i=1^N ∫ f_N H^-2θ(z_i)|∇_x_i u|^2 - σ/41/N∫ f_N |∇_V u|^2 + (8δ a^3/σ + 8θδ^2 a^4/H_0^4θσ + 3 θδ a^3/2H^3θ_0) 1/N∫ f_N |R_N^1|^2 + ( 2 γ^2 δ a^3/σ^2 + γ^2 δ^2 a^2/σ^3) 1/N∫ f_N |R_N^2|^2 + δ a^2 γ^2(1+a)/2σ^21/N∫ f_N |R_N^3|^2 + 1/σ1/N∫ f_N |R_N^0|^2. Step2. We start from (<ref>). By similar argument of the last case and using Proposition <ref> and <ref>, 1/N∫ f_N|R^i_N|^2 ≤1/νH_N(f^t_N|f^⊗ N_∞) + C/N, where i = 0,1,2,3, we have d/dtℰ_N^M(f_N|f_∞^⊗ N) ≤ - δ a^2/41/N∫ f_N |∇_X u|^2 - σ/41/N∫ f_N |∇_V u|^2 + n_4/ν H_N(f_N|f_∞^⊗ N) + C/N, the constant ν > 0 satisfies (16C^2_Ke^2 Λ) ν < 1, the constant C > 0 depends on ν, σ, γ, K and V, and the constant ν collects all coefficients of error terms which reads as n_4 = [(8δ a^3/σ + 8θδ^2 a^4/H_0^4θσ + 3 θδ a^3/2H^3θ_0) + δ a^2 γ^2(1+a)/2σ^2 + 2 γ^2 δ a^3/σ^2 + γ^2 δ^2 a^2/σ^3 + 1/σ]. 
Now let us determine the value of δ, a, H_0, ν > 0 such that n_4 ρ_wls/ν≤min{δ a^2/8, σ/8}, where ρ_wls is the constant of weighted Log-Sobolev inequality constant of f_∞ (See Corollary <ref>). Recall the constant a > 0 satisfies a ≤min{1/4C_K + 6θ + 2, γ/C_K + C^θ_V} < 1 by (<ref>). Now we choose δ = σ/8 + 32C_K + m_2”≤3 σ/8 + 32C_K + m_2' by (<ref>), where m_2” = [4 + 6γ + 4θ(2γ + ∇ W_L^∞)]^2 + [6γ + θ(2γ + ∇ W_L^∞)]. We also choose 1/ν = 20C_K^2 e σ/γ with Λ≤1/e λ satisfies λ∈ (0, β) in Proposition <ref> and <ref>. Observing that δ a^2 < σ by a < 1 and δ < σ, we only need n_4 ρ_ls/ν≤δ a^2/8. Recall the lower bound of H_0 in Step1, H_0 ≥max{σ(d + 3 θ + 1)/γ, [3θ a (2γ + ∇ W_L^∞)]^1/θ, 1}, by (<ref>) and (<ref>), now we choose H_0 such that H_0 ≥max{(8θσ)^1/4θ, (2 θσ)^1/3θ}, we have (8δ a^3/σ + 8θδ^2 a^4/H_0^4θσ + 3 θδ a^3/2H^3θ_0) ≤10 δ a^3/σ. In the next, we make every term of n_4 less than δ a^2 ν/40ρ_wls. For the first term of n_4, 10 δ a^3/σ≤δ a^2 ν/40 ρ_wls⟹ a ≤γ/8000 eρ_wls C_K^2, we obtain the range of constant a a ≤min{1/4C_K + 6θ + 2, γ/C_V^θ + C_K, γ/8000 eρ_wls C_K^2}, for example, we can take a as a = min{γ,1}/8000 eρ_wls C_K^2 + 5C_K + 6θ + C_V^θ + 2. For other terms, we obtain the range of diffusion constant σ by similar argument of (<ref>) in the last case, σ≥max{800(40+m_2”)ρ_wlse/a^2 γ, 3200ρ_wlseγ}·max{C_K^2, C_K^3}, Finally, we update (<ref>) as the following d/dtℰ_N^M(f^t_N|f_∞^⊗ N) ≤ - δ a^2/161/N∫ f_N |∇_X u|^2 - σ/161/N∫ f_N |∇_V u|^2 - δ a^2/16 ρ_wlsH_N(f^t_N|f_∞^⊗ N) + C/N ≤ -δ a^2/16(ρ_wls + 1)ℰ_N^M(f^t_N|f_∞^⊗ N) + C/N, we finish the proof of this case. §.§ Proof of Theorem <ref> In the last subsection, we have shown that f_N converges to f_∞^⊗ N in the sense of quantity ℰ_N^M(f_N^t|f^⊗ N_∞) as t →∞ and N →∞. In this subsection, we eliminate the effect of replacement of f by f_∞ by proving exponential decay from f_t to f_∞. By triangle inequality of Wasserstein distance, the error between 𝒲_2^2(f^t_N, f_t^⊗ N) and 𝒲_2^2(f^t_N, f^⊗ N_∞) is only a time exponential decay factor. Combining with short time propagation of chaos result, we obtain uniform-in-time propagation of chaos. Case1. In this case, we take weight matrix M_1 as (<ref>). We have shown that 𝒲_2^2(f^t_N, f^⊗ N_∞) ≤ H(f^t_N|f_∞^⊗ N) ≤ N e^-c_1tℰ_N^M_1(f^0_N|f^⊗ N_∞) + C in Theorem <ref>, where c_1 = δ a^2/16(1 + ρ_ls) and C > 0 depends on σ, M, C_K and C_V. Moreover, Theorem <ref> implies that 𝒲_2^2(f_t^⊗ N, f^⊗ N_∞) ≤ H(f_t^⊗ N| f^⊗ N_∞) ≤ρ_LS N e^-ctℰ^M(f_0|f̂_0) with some c > 0 under condition C_K < 1. Theorem <ref> also implies the same conclusion under condition that the interaction function F is functional convex (See Assumption <ref>). Hence, by triangle inequality of 2-Wasserstein distance, we have 𝒲^2_2(f^t_N,f_t^⊗ N) ≤ N (1+ρ_LS)[ℰ_N^M_1(f^0_N|f^⊗ N_∞) + ℰ^M(f_0|f̂_0)]e^-c_1't + C, where constant c_1' = min{c_1, c}. If we take chaotic initial data for Liouville equation (<ref>), i.e. f^0_N = f_0^⊗ N, we have 𝒲^2_2(f^t_N,f_t^⊗ N) ≤ 2N (1+ρ_LS)ℰ^M(f_0|f̂_0)e^-c_1't + C. For k-marginal distribution, we have 𝒲^2_2(f^t_N,k, f_t^⊗ k) ≤ 2k(1+ρ_LS)e^-c_1'tℰ^M(f_0|f̂_0) + Ck/N. Case2. In this case, we take weight matrix M_2 as (<ref>). The main argument of this case is the same with Case1. By Theorem <ref>, we have 𝒲_2^2(f^t_N, f^⊗ N_∞) ≤ H(f^t_N|f_∞^⊗ N) ≤ N e^-c_2tℰ_N^M_2(f^0_N|f^⊗ N_∞) + C. provided c_2 = δ a^2/16(1 + ρ_wls) and C = C(σ, M_2, C_K, C_V^θ). 
By Theorem <ref>, we have 𝒲_2^2(f_t^⊗ N, f^⊗ N_∞) ≤ H(f_t^⊗ N| f^⊗ N_∞) ≤ N e^-c_2”tℰ^M_2'(f_0|f̂_0) with constant c_2” > 0 under condition that the interaction function F is functional convex. We can also use the similar result under condition that the constant C_K is small, which have been proved by Guillin, Le Bris and Momarché in <cit.> (See Theorem 1.1). All in all, we have 𝒲_2^2(f_t^⊗ N, f^⊗ N_∞) ≤ N e^-c”_2tℰ^M(f_0|f̂_0), provided some constant c_2” > 0. Now we take c_2' = min{c_2,c_2”}, we have 𝒲^2_2(f^t_N,f_t^⊗ N) ≤ N [ℰ_N^M_2(f^0_N|f^⊗ N_∞) + ℰ^M'_2(f_0|f̂_0)]e^-c'_2t + C, If we take chaotic initial data for Liouville equation (<ref>), i.e. f^0_N = f_0^⊗ N, we have 𝒲^2_2(f^t_N,f_t^⊗ N) ≤ 2N ℰ^M'_2(f_0|f̂_0)e^-c'_2t + C. For k-marginal distribution, we have 𝒲^2_2(f^t_N,k, f_t^⊗ k) ≤ 2ke^-c_2'tℰ^M'_2(f_0|f̂_0) + C k/N. § APPENDIX In appendix, we give the sketch of proof of Theorem <ref> and Theorem <ref>. The main idea of proof has already appeared in other articles, like <cit.> and <cit.>. For the completeness of the article, we reprove them under the current setting in the following. §.§ Proof of Theorem <ref> The main idea originates from Theorem 10 in <cit.> and Theorem 3 in <cit.>. By the finite time propagation of chaos result for lipschitz force ∇ W and ∇ V, we could conclude that f_N,1(t,z) converges to f(t,z) weakly for each t > 0, then by lower semi-continuity of relative entropy, we have H(f|α) ≤lim inf_N →∞ H(f_N,1|α), where α∝ e^- v^2/2 - V(x), then we get lim inf_N →∞ H_N(f_N|f_N,∞) = lim inf_N →∞{ H_N(f_N|α^⊗ N) + 1/2N^2∫ W(x_1 - x_2) f_N,2 + 1/Nlog Z_N,β}. For the first term, we use Lemma 18 in <cit.>, we have H_N(f_N| α^⊗ N) ≥ H(f_N,1|α). For the second term, we use |W(x)| ≤ C(1 + |x|^2) and f_N,2 weakly converges to f^⊗ 2 for each t > 0, hence lim inf_N →∞1/2N^2∫ W(x_1 - x_2) f_N,2 = 1/2∫ W ∗ρρ, ρ = ∫_^d f(x,v)dv. For third term, we conclude by <cit.> that lim inf_N →∞1/Nlog Z_N, β = - inf_f ∈ℳ_1(Ω) E(f). Hence we have H_W(f) ≤lim inf_N →∞ H_N(f_N|f_N,∞). Now by Theorem <ref> with constant c > 0, we obtain that H_W(f) ≤ e^-ctℰ_N^M(f_0^⊗ N|f_N,∞), where the constant c = c(f_N,∞, M) is the same as Theorem <ref>. Furthermore, we use Lemma 17 and Lemma 23 in <cit.> with α∝ e^-v^2/2 - V(x), let N →∞, we have H_W(f) ≤ e^-ctℰ^M(f_0|f̂_0). Again, by Theorem 3 in <cit.>, we have 𝒲_2^2(f_t, f_∞) ≤ρ_LS H_W(f_t) ≤ρ_LSℰ^M(f_0) e^-ct. §.§ Proof of Theorem <ref> In this subsection, we show the convergence from f to f_∞ for limiting equation (<ref>) with more general confinement potential V. We use the quantity ℰ^M(f_t|f̂_t) to show the convergence, which combines free energy and relative Fisher Information of f_t and f̂_t, i.e. ℰ^M(f_t|f̂_t) = ℱ(f_t) - ℱ(f_∞) + I^M(f_t|f̂_t), where f̂_t ∝ e^-1/2v^2 - V(x) - W ∗ρ_t, the later term is I^M(f_t| f̂_t) = ∫ f_t ∇ u, M ∇ u dz, where ∇ = (∇_x, ∇_v), u = logf_t/f̂_t and M is the weighted matrix which is to be decided. We follow the idea of Theorem 2.1 in <cit.> to deal with the nonlinear term ∇ W ∗ρ_t of Eq.(<ref>). Now let us compute the time evolution of ℰ^M(f_t|f̂_t). We omit the footnote “t" for convenience. For free energy part ℱ(f), we directly use the result in Theorem 2.1 in <cit.>, d/dtℱ(f) = - σ∫ f|∇_v u|^2 dz. 
For distorted Fisher Information part, we formally write the equation of logf̂ as following ∂_t logf̂ + v ·∇_x logf̂ - [∇ V + (K ∗ρ)] ·∇_v logf̂ = σΔ_v f̂/f̂ + γ (d + v ·∇_v logf̂) - ∂_t (log Z_t + W ∗ρ), by similar computation of Lemma <ref>, we have the equation of u = logf/f̂, ∂_t u = - B_t u - σ A^∗ A + σ∇_v log f · Au + R, where B_t = v ·∇_x - (∇ V + ∇ W ∗ρ_t) ·∇_v - γ v ·∇_v, A = ∇_v, A^∗ = - ∇_v + v which is conjugate operator of A in the sense of L^2(f̂), R = ∂_t (log Z_t + W ∗ρ). Then by Lemma <ref> and Corollary <ref>, if we take M = ( [ e f; f g ]), where e,f,g: Ω×^d →^d × d are smooth matrix-valued functions, then we have d/dt{∫ f ∇ u, M ∇ u }≤ - ∫ f ∇ u, S ∇ u + 2 ∫ f ∇ R, M ∇ u, where S(Z) is a matrix reads as S = ( [ 2e - L^∗ e 4 γ f - L^∗ f - 2e ∇^2 U; 2c - L^∗ f 4 γ c - L^∗ c - 2f ∇^2 U ]), and ∇^2 U = ∇^2 V + ∇^2 W ∗ρ, and L^∗ = B + σΔ_v. Now we follow the similar argument of the Case 2 of Theorem <ref>, we select e(z) = δ a^3 (H(z))^-3θ, f(z) = δ a^2 (H(z))^-2θ, g(z) = 2δ a(H(z))^-θ where δ > 0, a > 0 are to be confirmed. We firstly estimate the upper bound of L^∗e, L^∗f and L^∗g. For some S>0, we have L^∗(H^-sθ(z)) = -sθ{σ d/H + σ (-3θ -1)v^2/H^2 - γ v^2 + v ·∇ W ∗ρ(x)/H}H^-sθ(z), then we have |L^∗(H^-sθ(z))| ≤ sθ(σ d + σ(sθ + 1)/H_0 + γ + 1 ) H^-s θ(z). Now we can replace the norm ∇ W_L^∞ by 1 in the Case 2 in Theorem <ref> and use completely the same argument of (<ref>)-(<ref>), we have - ∫ f∇ u, S ∇ u≤ - 3/4δ a^2 ∫ f H^-2θ(z)|∇_xu|^2 + m_2 ∫ f |∇_vu|^2, where m_2 = δ [4 + 6γ a + 4aθ(2γ + 1)]^2 + δ a[6γ + θ(2γ + 1)] and a, H_0 satisfy H_0 ≥max{σ(d + 3 θ + 1)/γ, [3θ a (2γ + 1)]^1/θ,1 }, a ≤γ/C^θ_V + C_K. For second term of (<ref>), we have ∇ R = ∇ W ∗ (∂_t ρ), but ∂_t ρ + ∇_x · (v_t^x ρ_t) = 0 where v_t^x(x) = ∫ vf(x,v)dv/∫ f(x,v)dv = ∫ (∇_v u) f(x,v)dv/∫ f(x,v)dv. By (4.14) of <cit.>, we then have |∇ W ∗ (∂_t ρ)| = |∇^2 W ∗ (v_t^x ρ_t)| = | ∫∇^2 W(x-y) · v_t^yρ(y)dy | ≤ C_K ∇_v u _L^2(f), therefore ∫ f ∇ R, M ∇ u≤ C_K ∇_v u _L^2(f) M ∇ u _L^2(f) ≤ σ/4∫ f |∇_v u|^2 + C^2_K/σ∫ f ∇ u, M^2 ∇ u ≤ σ/4∫ f |∇_v u|^2 + δ^2 a^2 C^2_K/σ[(a^2 + 4)+2(a^2 + 2)^2]∫ f |∇_v u|^2 + δ^2 a^4 C^2_K/σ(a^2 + 3)∫ f H^-2θ(z)|∇_x u|^2. Combining (<ref>), (<ref>) and (<ref>), we have d/dtℰ^M(f|f̂) ≤ (-3σ/4 + m_2)∫ f |∇_v u|^2 + δ^2 a^2 C^2_K/σ[(a^2 + 4)+2(a^2 + 2)^2]∫ f |∇_v u|^2 + [-3/4δ a^2 + δ^2 a^4 C^2_K/σ(a^2 + 3)]∫ f H^-2θ(z)|∇_x u|^2. Now we choose δ such that { δ^2 a^2 C^2_K/σ[(a^2 + 4)+2(a^2 + 2)^2] ≤σ/4, δ^2 a^4 C^2_K/σ(a^2 + 3) ≤1/4δ a^2, m_2 ≤σ/4, . i.e. δ≤min{σ/2aC_K √([(a^2 + 4)+2(a^2 + 2)^2]), σ/4a^2C^2_K(a^2 + 3), σ/4m_2'}, where m_2' = [4 + 6γ a + 4aθ(2γ + 1)]^2 + a[6γ + θ(2γ + 1)], then we have d/dtℰ^M(f|f̂) ≤ -σ/4∫ f |∇_v u|^2 - 1/4δ a^2 ∫ f H^-2θ(z)|∇_x u|^2. By weighted Log Sobolev inequality of f̂ and δ a^2 < σ, we have d/dtℰ^M(f_t|f̂_t) ≤ - δ a^2/8 I^M(f|f̂) - δ a^2/8 ρ_wlsH(f|f̂), furthermore, by convexity of functional F, we have entropy sandwich inequality ℱ(f_t) - ℱ(f_∞) ≤ H(f_t|f̂_t), we have d/dtℰ^M(f_t|f̂_t) ≤ - δ a^2/16 + 16 ρ_wlsℰ^M(f_t|f̂_t), then we finish the proof. §.§ Acknowledgements Zhenfu Wang was partially supported by the National Key R&D Program of China, Project Number 2021YFA1002800, NSFC grant No.12171009 and Young Elite Scientist Sponsorship Program by China Association for Science and Technology (CAST) No. YESS20200028. plain
http://arxiv.org/abs/2409.02969v1
20240904074443
LibMOON: A Gradient-based MultiObjective OptimizatioN Library in PyTorch
[ "Xiaoyuan Zhang", "Liang Zhao", "Yingying Yu", "Xi Lin", "Zhenkun Wang", "Han Zhao", "Qingfu Zhang" ]
cs.MS
[ "cs.MS", "cs.LG", "math.OC" ]
LibMOON: A Gradient-based MultiObjective OptimizatioN Library in PyTorch
======================================================================================================
§ ABSTRACT
Multiobjective optimization problems (MOPs) are prevalent in machine learning, with applications in multi-task learning, learning under fairness or robustness constraints, etc. Instead of reducing multiple objective functions to a scalar objective, MOPs aim for the so-called Pareto optimality or for Pareto set learning, which involves optimizing more than one objective function simultaneously over models with millions of parameters. Existing benchmark libraries for MOPs mainly focus on evolutionary algorithms, most of which are zeroth-order methods that do not effectively utilize higher-order information from the objectives and cannot scale to models with millions of parameters. In light of this gap, this paper introduces LibMOON, the first multiobjective optimization library that supports state-of-the-art gradient-based methods, provides a fair benchmark, and is open-sourced for the community.
§ INTRODUCTION
MultiObjective Optimization problems (MOPs) are ubiquitous in machine learning. For instance, trustworthy machine learning (TML) includes fairness problems that balance the fairness level and the accuracy level <cit.>; in robotics, it is necessary to balance several objectives (e.g., forward speed and energy consumption) <cit.>; similarly, recommendation systems face potentially conflicting objectives, such as novelty, accuracy, and diversity <cit.>. For all the applications above, the underlying optimization problem involves an MOP with m objectives and can be (informally) defined as: min_θ L(θ) = ( L_1(θ), …, L_m(θ) ), where L_1(θ), …, L_m(θ) denote m (potentially) conflicting objectives and we denote the size of the model parameter by N := |θ|. Note that, as informally defined above, <Ref> is a vector optimization problem that does not necessarily admit a total ordering. Nevertheless, we use this formulation to emphasize that the underlying problem involves m objectives, and we formally define the goal as follows.
For non-trivial multiobjective problems, no single solution can attain the minimum of all objectives simultaneously. To compare two solutions of an MOP, we introduce the concepts of dominance and Pareto optimality <cit.>. A solution θ^(a) dominates θ^(b) if L_i(θ^(a)) ≤ L_i(θ^(b)) for all 1 ≤ i ≤ m, with at least one strict inequality L_j(θ^(a)) < L_j(θ^(b)). A solution is Pareto optimal if no other solution in the feasible region dominates it. The set of all Pareto optimal solutions is the Pareto set (PS), and its image is the Pareto front (PF).
Over the last few decades, multiobjective evolutionary algorithms (MOEAs) have emerged as a widely used methodology for addressing MOPs. Due to their population-level designs, MOEAs can identify a diverse set of Pareto optimal solutions that approximate and represent the entire PF of a given MOP. Several libraries have emerged as standard platforms for the fair comparison of MOEAs, including PlatEMO <cit.> (in Matlab, supporting over 160 MOEAs), Pagmo <cit.> (in C++), and Pymoo <cit.> (in Python). Compared to MOEAs, gradient-based multiobjective optimization (MOO) methods are particularly well suited for large-scale machine learning tasks involving thousands to millions of neural network parameters.
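To make the dominance relation defined above concrete, the following minimal Python sketch checks dominance between objective vectors and filters the non-dominated members of a finite solution set. It is an illustration only, not code from LibMOON; the function names and the toy objective values are ours.

import numpy as np

def dominates(f_a, f_b):
    # f_a dominates f_b (minimization): no worse in every objective, strictly better in at least one.
    f_a, f_b = np.asarray(f_a), np.asarray(f_b)
    return bool(np.all(f_a <= f_b) and np.any(f_a < f_b))

def non_dominated_mask(F):
    # Boolean mask marking the non-dominated rows of a (K, m) objective matrix.
    K = F.shape[0]
    mask = np.ones(K, dtype=bool)
    for i in range(K):
        for j in range(K):
            if j != i and dominates(F[j], F[i]):
                mask[i] = False
                break
    return mask

F = np.array([[1.0, 3.0],    # incomparable with the second row
              [2.0, 2.0],    # incomparable with the first row
              [2.5, 3.5]])   # dominated by both rows above
print(non_dominated_mask(F))  # [ True  True False]

The rows kept by non_dominated_mask form exactly the kind of finite PF approximation that the gradient-based solvers discussed below aim to produce.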
While gradient-based MOO methods can find Pareto stationary solutions, i.e., solutions that cannot be locally improved in any objective, they are also effective approximators of Pareto solutions for nonconvex models, including deep neural networks. In this paper, we roughly divide recent gradient-based MOO methods into two main groups. The first group focuses on finding a finite set of Pareto solutions that approximately represents the entire Pareto front, exemplified by the Upper Bounded Multiple Gradient Descent Algorithm (MGDA-UB) <cit.> and Exact Pareto Optimization (EPO) <cit.>. The second, known as Pareto set learning (PSL) <cit.>, develops a single neural model parameterized by ϕ to represent the entire Pareto set. Since the parameter space of ϕ is large, PSL is usually optimized with gradient-based methods.
With the growing use of gradient-based MOO methods in neural networks <cit.>, there is a pressing need for a standard library for benchmarking comparisons and developments. To fulfill this demand, we introduce LibMOON, a gradient-based Multi-Objective Optimization Library supporting over twenty state-of-the-art methods. We summarize our contributions as follows:
* We propose the first modern gradient-based MOO library, called LibMOON. LibMOON is implemented under the PyTorch framework <cit.> and carefully designed to support GPU acceleration. LibMOON supports synthetic problems and a wide range of real-world multiobjective machine learning tasks (e.g., fair classification and multiobjective classification problems).
* LibMOON supports over twenty state-of-the-art (SOTA) gradient-based MOO methods for constructing Pareto optimal solution sets, including methods that use a finite set of solutions to approximate the whole PS/PF <cit.>, PSL methods <cit.> aimed at approximating the entire Pareto set with a single neural network, and multiobjective Bayesian optimization (MOBO) methods designed to avoid frequent function evaluations.
* LibMOON has already been open-sourced at <https://github.com/xzhang2523/libmoon>, with documentation at <https://libmoondocs.readthedocs.io/en/latest/>, offering a reliable platform for comparing SOTA gradient-based MOO approaches and providing standard multiobjective testing benchmarks. Beyond examining its source code, LibMOON can be installed via pip as an off-the-shelf gradient-based multiobjective tool.
Notation. In this paper, bold letters represent vectors (e.g., λ for preference vectors and L(·) for the vector-valued objective function), while non-bold letters represent scalars or network parameters (e.g., θ). The number of objectives is denoted by m, and K represents the size of the finite solution set. The preference vector λ lies in the (m-1)-dimensional simplex, satisfying ∑_i=1^m λ_i=1 and λ_i ≥ 0. The decision network parameter θ has size n. Refer to <Ref> for the full notation table.
§ RELATED WORKS
In this section, we review methods for gradient-based multiobjective optimization in <Ref> and existing multiobjective optimization libraries in <Ref>.
§.§ Gradient-based multiobjective optimization
Gradient-based MOO has a long research history. For instance, the famous convex optimization book <cit.> (Chapter 4) discusses using linear scalarization to convert an MOO problem into a single-objective optimization (SOO) problem. Historically, gradient-based methods were not mainstream for multiobjective optimization, with evolutionary algorithms (MOEAs) being more prevalent due to their ability to maintain a population, which aligns naturally with representing the Pareto set. However, gradient-based MOO has gained popularity recently with advancements in machine learning.
A seminal work is MGDA-UB <cit.>, which introduced MOO concepts into deep learning by framing MTL as MOO. Since then, numerous approaches like EPO <cit.>, Pareto MultiTask learning (PMTL) <cit.>, and MOO with Stein Variational Gradient Descent (SVGD) <cit.>, including methods for learning the entire Pareto set <cit.>, have emerged. This paper aims to integrate these developments into a unified framework. §.§ Multiobjective optimization libraries There a number of multiobjective library before our work. The difference between  and previous libraries is that is designed to support gradient-based MOO methods. Comparison can be found in <Ref>. LibMTL. <cit.> LibMTL is a Python-based library for multitask learning. The key difference between and LibMTL is that LibMTL focuses on finding a single network to benefit all tasks, such as optimizing direction or network architecture. In contrast,  addresses inherent trade-offs, where improving one objective inevitably worsens others, and explores the distribution of Pareto solutions. jMetal. <cit.> jMetal is a comprehensive Java framework designed for multi-objective optimization, leveraging the power of metaheuristics. It offers a highly flexible and extensible platform, making it accessible for users across various domains. With its user-friendly interface and robust architecture, jMetal has become a popular choice for researchers and practitioners in diverse application areas. Pymoo. <cit.> Pymoo is a Python-based framework for multiobjective optimization problems with many local optima. It supports mainstream MOEA methods such as NSGA-II <cit.>, NSGA-III <cit.>, MOEA/D <cit.>, and SMS-EMOA <cit.>. Pymoo allows flexible algorithm customization with user-defined operators and includes tools for exploratory analysis and data visualization. It also offers multi-criteria decision-making strategies to help users select optimal solutions from the Pareto set. PlatEMO <cit.>. PlatEMO is a MATLAB-based multiobjective optimization tool supporting over 160 evolutionary algorithms and a comprehensive set of test problems. It supports diverse problem types, including sparse, high-cost, large-scale, and multimodal. PlatEMO includes performance metrics for evaluating optimization efficacy and offers tools for visualization and interaction during the optimization process. Pagmo <cit.>. Pagmo, a C++ library, executes massively parallel multiobjective global optimization using efficient evolutionary algorithms and gradient-based techniques like simplex, SQP, and interior-point methods. It supports concurrent algorithm execution and inter-algorithm collaboration through asynchronous generalized island models, addressing a wide range of optimization challenges, including constrained, unconstrained, single-objective, multiobjective, continuous, integer, stochastic, and deterministic problems. EvoTorch <cit.> and EvoX <cit.>. EvoTorch accelerates evolutionary algorithms in PyTorch, while EvoX scales them to large, complex problems with GPU-accelerated parallel execution for single and multiobjective tasks, including synthetic and reinforcement learning. These libraries highlight a trend toward using GPU parallelization to manage larger problem scales and decision variables, which is also of the interest of this paper. This work focuses on creating a platform for efficient gradient-based multiobjective methods to solve large-scale machine learning problems in PyTorch. 
§ : A GRADIENT-BASED MOO LIBRARY IN PYTORCH To introduce  , we first introduce its framework in <Ref>, and briefly introducing its supporting problems and metrics. Then introduce supported solvers in <Ref>. §.§ Framework <Ref> demonstrates the framework of the proposed  library.  supports three categories of solvers: MOO solvers aiming to find a finite set of Pareto solutions satisfying certain requirements, PSL solvers aiming to learn whole PS with a single model, and MOBO solvers aiming to solve expensive problems. Each solver category is designed highly modulized and new solvers are easy to plugin  by rewriting only a small portion of code, e.g., the way of gradient manipulation [An example of adding a new method is given in <https://libmoondocs.readthedocs.io/en/latest/develop/add_method.html>.]. MOO and PSL solvers support all synthetic, MTL, and realworld (RE) problems, while MOBO solvers support synthetic and RE problems. Supported problems.  currently supports three categories of methods, synthetic problems, MTL problems, and RE problems. Supported metrics.  supports a wide range of metrics namely, (1) hypervolume (HV), (2) inverted general distance (IGD), (3) fill distance (FD), minimal distance (l_min), (4) smooth minimal distance (sl_min), (5) Spacing, (6) Span, (7) penalty-based intersection (PBI), (8) inner product (IP), (9) cross angle (ϑ). Full descriptions are provided in <Ref>. §.§ MOO solvers Aggregation function. To find Pareto solutions, a straightforward way is to use an aggregation function g_λ^agg : ℝ^m →ℝ to transform a multi-objective problem (MOP) into a single-objective optimization problem (SOP). This involves minimizing: min_θ g_λ^agg((θ)), where (θ) represents the objective vector and g_λ^agg is an aggregation function determined by preference λ. If g_λ^agg is decreasing with respect to (θ) (i.e., g_λ^agg((θ^(a))) < g_λ^agg((θ^(b))) when _i(θ^(a)) ≤_i(θ^(b)) for all i with at least one strict inequality), the optimal solution of <Ref> corresponds to a Pareto optimal solution of the original MOP (<Ref>) <cit.>. The linear scalarization (LS) function, g_λ^agg((θ)) = ∑_i=1^m λ_i L_i(θ), is the most commonly used aggregation function. Other notable functions include the (Smooth) Tchebycheff (Tche) <cit.>, modified Tchebycheff (mTche) <cit.>, Penalty-Based Intersection (PBI) <cit.>, and COSMOS <cit.>. For detailed formulations, see <Ref>. Aggregation-based methods can be efficiently optimized in  using single objective backpropagation. Aggregation-based method update the parameter θ iteratively as follows: θθ - η = θ - η∂ g_λ^agg((θ))/∂θ, where is the updating direction. Generalized aggregation function. In addition to optimizing the single-objective aggregation function, multiple gradient manipulation methods adjust gradients and compute update directions to ensure solutions satidfy specific properties. These methods can be summarized as follows: the update direction is a dynamically weighted sum of gradient vectors <cit.>, expressed as = ∑_i=1^m α̃_i ·∇ L_i (θ). Equivalently, updating the direction involves optimizing a generalized aggregation function. g̃_λ((θ)) = α̃_i · L_i(θ). Multiple gradient manipulation methods. 
As listed in <Ref>, notable methods include EPO <cit.> for finding `exact Pareto solutions' (intersection points of Pareto front and preference vectors), HVGrad <cit.> for maximizing hypervolume, MGDA-UB <cit.> for finding directions beneficial to all objectives, MOO-SVGD <cit.> for diverse solutions, PMGDA <cit.> for Pareto solutions satisfying specific user requirement, and PMTL <cit.> for sector-constrained Pareto solutions. In  , multiple gradient manipulation methods are implemented by first calculating dynamic weight factors α̃_i and then performing backpropagation on the generalized function g̃_λ((θ)) to update the neural network parameters. Those weight factors α̃_i are typically computed by solving a linear programming (LP) problem (e.g., <cit.>[Eq. 24], <cit.>[Eq. 16]), a quadratic programming (QP) problem (e.g., <cit.>[Eq. 14], <cit.>[Eq. 3]), or through other methods like hypervolume gradient estimation <cit.>. Some methods use user-specific preference vectors as input, making the aggregation function dependent on preference , while others do not (preference-free optimizers). A summary of whether an optimizer is preference-based or preference-free is provided in <Ref>. Zero-order optimization.  not only supports first-order optimization when the gradients of those objective functions are known, but also supports zero-order optimization methods with estimated gradients ∇̂ L_i(θ) using methods such evolutionary strategy (ES) <cit.>. §.§ Pareto set learning solvers In addition to the existing methods for finite Pareto solutions,  supports Pareto Set Learning (PSL), which trains a model with parameter ϕ to generate an infinite number of approximate Pareto solutions. We denote the model output as θ and represent the model as θ_ϕ(·) : Δ_m →ℝ^n, where Δ_m is the m-dim preference simplex and ℝ^n is the entire decision space of θ. PSL can be deemed as learning a model that find solutions under infinitely many preference well. The loss function of PSL is formulated as follows: min_ϕℓ_psl = _∼Unif(Δ_m)g̃_(( θ_ϕ() )), θ_ϕ(·) : Δ_m ^n. The function g̃_(·): ℝ^m →ℝ is a generalized aggregation function (as introduced in <Ref>), which converts an m-dimensional objective vector into a scalar based on specific preferences. Since <Ref> involves preference expectation, all preference-based finite Pareto solvers (<Ref>) can serve as basic Pareto solution solvers, while preference-free solvers are unsuitable for PSL. For PSL, we explain the function (θ_ϕ()) with more details as follows: In synthetic problems, the decision variable is directly the model output θ_ϕ, and θ_ϕ is evaluated by the loss function. In multi-task learning (MTL) problems, θ_ϕ() represents the parameters of a target network, and (θ_ϕ()) is evaluated using the dataset (x_i, y_i) with this target network. See <Ref> for a visualization of these architectures. The gradient of ℓ_psl can be estimated by the chain rule: ∂ℓ_psl/∂ϕ_1 × D = _∼Unif(Δ_m)g̃__A: (1 × m)·θ_B: (m × n)θϕ_C: (n × D). The expectation is empirically estimated using a batch of K preferences. The gradient vector A=(α̃_1, …, α̃_m) is computed by the finite Pareto solution solver. The gradient matrix C can be calculated either via backpropagation (when gradients are easily obtained) or using a zero-order method. In <Ref>, C is always estimated through backpropagation, since θ is a continuous function of ϕ. Gradient calculation by existing PSL methods is summarized in <Ref>. 
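As a concrete illustration of this PSL training loop, the following self-contained sketch trains a small preference-conditioned model with the Tchebycheff aggregation (reference point z = 0, which suits the non-negative toy objectives) on a bi-objective example; it is our own illustration of the scheme rather than library code, and the architecture, toy objectives and hyper-parameters are placeholders.
[style=pythonstyle, caption=A minimal Pareto set learning loop (illustrative sketch only).]
import torch
import torch.nn as nn

m, n = 2, 10                                       # number of objectives, decision dimension
psl_model = nn.Sequential(nn.Linear(m, 64), nn.ReLU(), nn.Linear(64, n), nn.Sigmoid())
optimizer = torch.optim.Adam(psl_model.parameters(), lr=1e-3)

def objectives(theta):                             # toy objectives; replace with the problem at hand
    f1 = ((theta - 0.0) ** 2).mean(dim=-1)
    f2 = ((theta - 1.0) ** 2).mean(dim=-1)
    return torch.stack([f1, f2], dim=-1)           # shape (batch, m)

for step in range(2000):
    prefs = torch.distributions.Dirichlet(torch.ones(m)).sample((128,))  # uniform preferences on the simplex
    L = objectives(psl_model(prefs))               # L(theta_phi(lambda)) for a batch of preferences
    loss = (prefs * L).max(dim=-1).values.mean()   # empirical E_lambda[g_lambda], Tchebycheff with z = 0
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()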
After calculating the gradient of ℓ_psl, the parameter ϕ is iteratively updated as ϕϕ - η∂ℓ_psl/∂ϕ. §.§ Multiobjective Bayesian Optimization solvers When the evaluation of objective functions is costly, multiobjective Bayesian optimization (MOBO) is often the preferred methodology for tackling such challenges. While there are several existing libraries, such as Botorch <cit.> and HEBO <cit.>, that facilite MOBO, they largely overlook algorithms that leverage decomposition techniques like PSL. To bridge this gap,  also includs three decomposition-based MOBO algorithms, including PSL-MOBO <cit.>, PSL-DirHVEI <cit.>, and DirHV-EGO <cit.>. In each iteration, these methods build Gaussian process (GP) models for each objectives and generate a batch of query points for true function evaluations. PSL-MOBO is an extension of PSL method for expensive MOPs. It optimizes the preference-conditional acquisition function α(θ|) over an infinite number of preference vectors to generate a set of promising candidates: min_ϕℓ_psl = _∼Unif(Δ_m) [α(θ_ϕ()|)], θ_ϕ() : Δ_m ^N. Currently, our library supports two representative preference-conditional acquisition functions: the Tchebycheff scalarization of lower confidence bound (Tch-LCB) <cit.> and the expected direction-based hypervolume improvement (DirHV-EI) <cit.>. We note that DirHV-EI can be regarded as an unbiased estimation of a weighted expected hypervolume improvement. Our library also supports DirHV-EGO, which employs a finite set of predetermined reference vectors as in <cit.>. § SNIPPETS OF  EXAMPLES For the reader's convenience, we demonstrate some snippets to show how  solves synthetic problems efficiently by just a few lines of codes. For example codes solving multitask learning problems, please refer to <Ref>.  can be installed by . [style=pythonstyle, caption=Finding a size-K (K=5) Pareto solutions with four lines of code in .] from libmoon.solver.gradient.methods import EPOSolver from libmoon.util import synthetic_init, get_problem, uniform_pref problem = get_problem(problem_name='ZDT1') prefs = uniform_pref(n_prob=5, n_obj = problem.n_obj, clip_eps=1e-2) solver = EPOSolver(problem, step_size=1e-2, n_iter=1000, tol=1e-2) res = solver.solve(x=synthetic_init(problem, prefs), prefs=prefs) [style=pythonstyle, caption=Pareto set learning in a problem with three lines of solving problem and two lines of evaluating the results in .] from libmoon.solver.psl.core_psl import AggPSLSolver from libmoon.util import synthetic_init, get_problem, uniform_pref problem = get_problem(problem_name='ZDT1') # agg list ['ls', 'tche', 'mtche', 'pbi', ... ] solver = AggPSLSolver(problem, agg='ls') model = solver.solve() prefs = uniform_pref(n_prob=100, n_obj=problem.n_obj, clip_eps=1e-2) eval_y = problem.evaluate(model(Tensor(prefs).cuda())) Code in Listing 1 demonstrates solving a synthetic problem efficiently with four lines of  code: 1. Define the problem. 2. Specify the preference vectors. 3. Specify the solver. 4. Obtain the results. The result `res' contains five elements: `x_opt' (optimal Pareto solution), `y_opt' (Pareto objective), `x_history' (decision history), `y_history' (objective history), and `hv_history' (hypervolume history). Codes in Listing 2 show that a Pareto set model can be trained by three lines of codes by assigning a user-specific aggregation function and produce predicted Pareto solutions by two lines of code. § EMPIRICAL STUDIES In this section, we study empirical results of . 
Codes are basically run on a person computer with an i7-10700 CPU, and a 3080 GPU with 10G GPU memory. And we also conduct some analysis regarding to GPU ranging from 3080, 4060, and 4090. The empirical results contain five parts, §.§ MOO solvers for synthetic problems We illustrate the performance of various finite Pareto solution solvers using the VLMOP2 problem, formulated as follows: min_θ f_1(θ) = 1 - exp(-∑_i=1^n(θ_i - 1/√(n))^2), min_θ f_2(θ) = 1 - exp(-∑_i=1^n(θ_i + 1/√(n))^2). This problem has been widely studied in the literature <cit.> due to its PF has a non-convex shape [In MOO, a PF is convex or non-convex based on whether the objective space is convex or non-convex. The PF represents the non-dominated boundary of the objective space.]. The visualization results are shown in <Ref> and the numerical results are shown in <Ref>. We present the following key findings: * Agg-COSMOS produces approximate `exact' Pareto solutions (the corresponding Pareto objectives aligned with preference vectors). This is due to a cosine similarity term in Agg-COSMOS encouraging the Pareto objectives be close to preference vectors. * Agg-LS can only find two endpoints on this problem since PF of VLMOP2 is of non-convex shape. Different preference vectors correspond to duplicated Pareto solutions. * Agg-PBI generates 'exact' Pareto solutions when the coefficient of d_2 exceeds a specific value, as guaranteed by PBI <cit.>. However, this parameter is challenging to tune and is influenced by the Pareto front's curvature. Additionally, PBI can transform a convex multi-objective optimization problem into a non-convex one, making Agg-PBI less recommended. * Agg-Tche generates diverse solutions and produces exact Pareto solutions corresponding to the inverse of the preference vector. Both Agg-Tche and Agg-SmoothTche retain the convexity of objectives, meaning their aggregation functions remain convex when all objectives are convex. However, the solution distribution of Agg-SmoothTche cannot be precisely determined due to the involvement of a temperature coefficient h. * HVGrad updates the decision variable using the hypervolume gradient, resulting in the largest hypervolume. PMTL, a two-stage method, can struggle with tuning warm-up iterations, leading to poor initialization and less diverse solutions. MOO-SVGD's update direction has two conflicting goals: promoting diversity and ensuring convergence. This conflict makes MOO-SVGD difficult to converge, often requiring 10 times more iterations than other methods, resulting in poor convergence behavior. * Among the methods, MGDA-UB, Random, Agg-PBI, and MOO-SVGD exhibit relatively large deviations. MGDA-UB and Random generate arbitrary Pareto solutions due to their computation nature. Agg-PBI results in non-convex aggregation functions, leading to variable solutions based on different random initializations. MOO-SVGD has high standard deviation because its optimization process is unstable, and its balance between convergence and diversity is unclear. §.§ Pareto set learning on synthetic problems In this section, we present PSL results using the synthetic VLMOP2 problem as an example. Visualizations of the predicted Pareto solutions are shown in <Ref>, and numerical results are reported in <Ref>. From the table and figure, we have the following key findings: * PSL with the COSMOS aggregation function fails to find all marginal Pareto solutions because COSMOS does not guarantee the discovery of the entire Pareto set/front. 
PSL with linear scalarization function could not fit the two endpoints of the PF. * PSL with the Smooth Tchebycheff function finds diverse Pareto solutions due to the unclear relationship between preference vectors and Pareto objectives. PSL using Tche., EPO, and PMGDA as base solvers can discover the entire PS/PF, as all three methods effectively find exact Pareto solutions. By traversing the entire preference space, the Pareto model fits the entire Pareto set accurately. §.§ MOO solvers for MTL problems In this section, we investigate the performance of different finite Pareto solution solvers on the Adult dataset, which is a multitask learning task called fairness classification. For this task, the decision variable θ is the parameter of a fully-connected neural network, with |θ|=28033. The first objective function is the cross entropy loss, and the second objective is the hyperbolic tangent relaxation of the difference of equality of opportunity (DEO) loss <cit.>[Eq. 6]. The visualization on the Adult problem is shown in <Ref> and the numerical result is shown in <Ref>. From the table and figure, we have the following key findings: * Agg-LS has two main drawbacks: it cannot identify the non-convex parts of the Pareto front (as previously mentioned), and the relationship between preference vectors and Pareto objectives is unknown, making it challenging to select appropriate preference vectors. Agg-PBI and Agg-COSMOS only find a small portion of the PF due to it is hard to set the coefficients. * Agg-SmoothmTche and Agg-mTche perform well on this task, as they can accurately find Pareto solutions. Once the Pareto front range is known, diverse solutions can be easily found using uniform preference vectors. * The Random method and MGDA-UB only find a single Pareto solution, since the position of this solution cannot be controlled by these methods. * Among the three methods that directly find a set of Pareto solutions (MOO-SVGD, PMTL, and HV-Grad), HV-Grad produces the most diverse solutions with the largest hypervolume. PMTL, being a two-stage method, can encounter issues when solutions fall outside the sector due to stochastic factors. MOO-SVGD optimizes both convergence and diversity but is generally unstable based on our tests. §.§ Pareto set learning on MTL This section presents the Pareto set learning results on the MO-MNIST problem using a hypernet architecture. The model was trained for 20 epochs, optimizing approximately 3.24M parameters, with the first and second objectives being the cross-entropy losses for the top-right and bottom-left images. EPO-based PSL and PMGDA-based PSL are not very suitable for this task since manipulating gradient of 3.2M dim is not efficient, the visualization results are reported in <Ref> and numerical results in <Ref>. From the table and figure, we have the following key findings: * The Pareto Front (PF) of this task is nearly convex, indicating that Local Search (LS) can efficiently find the entire Pareto front. Therefore, it is not necessary to employ complex Multi-Objective Optimization (MOO) methods. * Agg-LS significantly outperforms other methods on this task, as evidenced by the substantially higher Hypervolume (HV). Furthermore, the training losses across other methods show minimal reduction, indicating that for this challenging task, the convergence of other non-linear scalarization methods is extremely slow. * Agg-SmoothTche and Agg-Tche produce duplicate marginal Pareto solutions because selecting appropriate weight factors is challenging. 
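The recurring observation above, namely that linear scalarization only recovers the endpoints of a non-convex front while Tchebycheff-type aggregation spreads along it, can be reproduced with a short self-contained experiment on VLMOP2; the snippet below is our own toy sketch with illustrative step sizes and iteration counts, not one of the benchmark scripts used in this section.
[style=pythonstyle, caption=Linear scalarization versus Tchebycheff aggregation on VLMOP2 (illustrative sketch only).]
import torch

def vlmop2(x):                                     # x has shape (batch, n)
    n = x.shape[-1]
    f1 = 1 - torch.exp(-((x - 1 / n ** 0.5) ** 2).sum(-1))
    f2 = 1 - torch.exp(-((x + 1 / n ** 0.5) ** 2).sum(-1))
    return torch.stack([f1, f2], -1)

def minimize(agg, prefs, n=2, steps=3000, lr=5e-2):
    x = torch.zeros(len(prefs), n, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        loss = agg(vlmop2(x), prefs).sum()
        opt.zero_grad(); loss.backward(); opt.step()
    return vlmop2(x).detach()

prefs = torch.stack([torch.linspace(0.05, 0.95, 10), 1 - torch.linspace(0.05, 0.95, 10)], -1)
front_ls = minimize(lambda L, p: (p * L).sum(-1), prefs)           # tends to collapse onto the two endpoints
front_tche = minimize(lambda L, p: (p * L).max(-1).values, prefs)  # spreads along the non-convex front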
§.§ MOBO for synthetic and real-world problems In this section, we test three MOBO algorithms in  on three benchmark problems, including ZDT1, RE21, VLMOP1 and VLMOP2. To ensure a fair comparison, we generate 11n-1 initial samples using Latin Hypercube Sampling for each method. The maximum number of function evaluations is set as 200. Our experimental results, illustrated in Figure <ref>, clearly demonstrate the rapid convergence capabilities of all three methods. DirHV-EGO, PSL-DirHV-EI, and PSL-MOBO not only efficiently navigate the solution space but also quickly reach optimal solutions. This highlights the robustness and effectiveness of our implemented algorithms in handling different types of MOPs. §.§ GPU acceleration We evaluate  performance on Pareto set learning for the MO-MNIST problem across various platforms (CPU, RTX 3080, 4060, 4090). Running times are detailed in <Ref> and visualized in <Ref>. The table and figure show a significant reduction in time (about one-third) when using a personal GPU compared to a CPU. The RTX 4090 further reduces time by approximately 25% compared to the RTX 4060. § CONCLUSION, LIMITATIONS, AND FURTHER WORKS Conclusion. We introduced the first gradient-based multiobjective optimization framework called , which is implemented based on the PyTorch environment for the research community's convenience.  supports mainstream gradient-based MOO methods; the modular design of  further allows the library to address various MOPs, including discrete Pareto solution optimization, Pareto set learning, multiobjective Bayesian optimization, etc., in a plug-and-play manner.  can thus be leveraged to quickly yet robustly test new MOO ideas. Limitations. The limitations of  include: (1) the rapid development of gradient-based MOO methods makes it impossible to incorporate all relevant techniques, so some effective methods may be missing; (2) the codebase, primarily based on works from Prof. Qingfu Zhang's group, may not easily accommodate new methods. Future Work. Future work includes: (1) maintaining a user and development community to address issues promptly, and (2) adding newly published or well-known methods to the library as quickly as possible. § ACKNOWLEDGE  was made possible with the valuable feedback and comments from a lot of reseachers. We appreciate the help from Hongzong Li, Xuehai Pan, Zhe Zhao, Meitong Liu, Weiduo Liao, Weiyu Chen, Dake Bu, Yilu Liu, Song Lai, Cheng Gong, Prof. Jingda Deng, Prof. Ke Shang, Prof. Genghui Li, Prof. Zhichao Lu, and Prof. Tao Qin. We sincerely thank them for their early feedback and contributions. § APPENDIX §.§ Full name and notation tables This section lists the full names of optimization methods and terms for clarity (<Ref>) and provides notation in <Ref>. §.§ Aggregation functions Aggregation function convert an MOP into a single-objective optimization problem under a specific preference vector . Some popular aggregation functions are: * COSMOS: g^cosmos(θ) = ^⊤(θ) - μ^⊤(θ)/(θ), where μ is a positive weight factor to align the objective vector (θ) with the preference vector . * Linear scalarization (LS): g^LS(θ) = ∑_i=1^m λ_i L_i(θ). * Tchebycheff (Tche): g^Tche(θ) = max_1 ≤ i ≤ m{λ_i (L_i(θ) - z_i) }, where is a reference point, usually set as the nadir point the minimal value in each objectives. * Modified Tchebycheff (mTche): g^mTche(θ) = max_1 ≤ i ≤ m{L_i(θ) - z_i/λ_i}, where z is a reference point, the same as the point used in the Tchebycheff scalarization function. 
* Smooth Tchebycheff (STche): g^STche(θ) = 1/hlog∑_1 ≤ i ≤ mexp(h ·λ_i (L_i(θ) - z_i)). The Smooth Tchebycheff function uses a relaxed Smooth max operator. The advantage of this approach is that g^STche(θ) becomes a Smooth function if each objective function L_i(θ) is Smooth, unlike the non-Smooth g^Tche(θ). Smooth functions generally have faster convergence compared to non-Smooth ones. Similarly, we can define the Smooth modified Tchebycheff function. * Penalty-Based Intersection (PBI): g^PBI(θ) = 1/·∑_i=1^m λ_i L_i(θ)_d_1 + μ(θ) - d_1/·_d_2, where μ is a positive weight factor that encourage a objective to align with a preference vector . * p-norm: g^pnorm(θ) = (θ) - _p. * Augmented Achievement Scalarization Function (AASF): g^aasf(θ) = g^mtche(θ) + ρ g^LS(θ), where ρ is small positive coefficient, usually set as 0.1. The contour curve for this functions for a bi-objective case can be found in <https://libmoondocs.readthedocs.io/en/latest/gallery/aggfuns.html>. §.§ Metrics Metrics can be categorized into two groups. The first group evaluates the quality of a set of solutions = y^(1), …, y^(N), with specific metrics such as IGD and FD relying on the known Pareto front for accuracy. The second group of metrics assesses the quality of individual solutions y when a preference vector λ is provided. Group 1: Metrics for Sets of Solutions * Hypervolume (HV) (↑) <cit.>: This metric evaluates both the convergence to the Pareto Front (PF) and the diversity of solutions. A low HV value indicates poor convergence, while high HV values imply better performance. The hypervolume is calculated as the volume dominated by at least one solution in the set with respect to a reference point : HV_ = Vol( | ∃' ∈, ' ≼≼ ). * Inverted Generational Distance (IGD) <cit.>: The IGD measures the average distance between points in a reference set and the nearest solutions in the set : IGD() = 1/||( ∑_i=1^||min_y' ∈ρ(y^(i), y')^2 )^1/2. * Fill Distance (FD) <cit.>: This metric calculates the covering radius of a set of solutions , defined as the maximum minimum distance from any point in the reference set to the nearest solution in : FD() = max_y' ∈min_y ∈ρ(y, y'). * Minimal Distance (l_min): This metric captures the smallest pairwise distance among all objectives: l_min = min_1 < i < j < kρ(y^(i), y^(j)) where ρ denotes the Euclidean distance. * Smooth Minimal Distance (sl_min): This metric is a “smooth-min" version of the minimal distance function, defined as: slmin = -1/h · k(k-1)log( ∑_1 < i < j < Nexp(-h ρ(y^(i), y^(j))) ). * Spacing: This metric measures the standard deviation of the minimal distances from one solution to others, with lower values indicating a more uniform distribution of objective vectors: spacing = 1/N∑_i=1^N (d_i - d̅)^2, d̅ = 1/N∑_i=1^N d_i, d_i = min_i ≠ jρ(y^(i), y^(j)). * Span: This metric evaluates the range (span) of solutions in their minimal dimension, defined as: Span = min_1 ≤ i ≤ mmax_1 ≤ k < l ≤ K | y_i^(k) - y_i^(l)|. Group 2: Metrics for Individual Solutions * Penalty-based Intersection (PBI): This metric is a weighted sum of two distance functions d_1 and d_2, given by PBI = d_1 + μ d_2, where d_1 = ⟨ y - z, λ⟩/λ, d_2 = y - (d_1λ + z). * Inner Product (IP): This metric measures the alignment of the objective vector y with the preference vector λ: IP = ⟨ y, λ⟩. * Cross Angle (ϑ): For bi-objective problems, this metric measures the angular difference between the objective vector and the preference vector: ϑ = arctan(y_2 / y_1) - arctan(λ_2 / λ_1). 
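In the bi-objective minimisation case, the hypervolume defined above can be computed exactly by a single sweep over the solutions sorted by their first objective; the short sketch below is our own illustration of this computation, not the library routine.
[style=pythonstyle, caption=Exact hypervolume for two objectives (illustrative sketch only).]
import numpy as np

def hypervolume_2d(Y, ref):
    # hypervolume (minimisation, m = 2) of the solution set Y with respect to the reference point ref
    Y = np.asarray(Y, dtype=float)
    Y = Y[np.all(Y <= ref, axis=1)]          # discard points that do not dominate the reference point
    Y = Y[np.argsort(Y[:, 0])]               # sweep in increasing order of the first objective
    hv, best_f2 = 0.0, ref[1]
    for f1, f2 in Y:
        if f2 < best_f2:                     # non-dominated among the points seen so far
            hv += (ref[0] - f1) * (best_f2 - f2)
            best_f2 = f2
    return hv

print(hypervolume_2d([[1.0, 3.0], [2.0, 2.0], [3.0, 1.0]], ref=[4.0, 4.0]))  # 6.0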
Those metrics are summarized in <Ref>: §.§ Example codes solving multitask learning problems The following code example demonstrates solving a multiobjective fairness classification problem in four steps: defining the hyper-parameter dictionary, setting preference vectors, initializing the multitask optimizer, and solving the problem. [style=pythonstyle, caption=An example of searching for a set of Pareto solutions.] import torch from libmoon.problem.mtl.core.pref_set_mtl import MTL_Pref_Solver, MTL_Set_Solver from libmoon.util_mtl.util import get_mtl_prefs kwargs = 'batch_size' : 128, 'lr' : 1e-3, 'epoch' : 10, 'solver' : 'pmgda', 'dataset_name' : adult, 'obj_normalization' : False, 'n_prob' : 10, 'sigma' : 0.6, 'h_tol' : 1e-4, 'device' : torch.device('cuda' if torch.cuda.is_available() else 'cpu'), mtl_solver = MTL_Pref_Solver(**kwargs) pref_mat = get_mtl_prefs(args.dataset_name, kwargs['n_prob'], obj_normalization=kwargs['obj_normalization']) res, res_history, pref_mat = mtl_solver.solve(pref_mat) §.§ License and usage of The license used for Adult/Compas/Credit follows Creative Commons Attribution 4.0 International (CC BY 4.0), Database Contents License (DbCL) v1.0, and CC0: Public Domain, respectively. For academic use of , please cite our paper or GitHub. Commercial use requires author permission. unsrt
http://arxiv.org/abs/2409.02861v1
20240904164046
Central limit theorems for the monkey walk with steep memory kernel
[ "Erion-Stelios Boci", "Cécile Mailler" ]
math.PR
[ "math.PR" ]
Central limit theorems for the monkey walk with steep memory kernel Erion-Stelios BociDepartment of Mathematical Sciences, University of Bath, Claverton Down, BA2 7AY Bath, UK.Email: e.boci/[email protected] Cécile Mailler^* September 9, 2024 =================================================================================================================================================================== § ABSTRACT The monkey walk is a stochastic process defined as the trajectory of a walker that moves on ℝ^d according to a Markovian generator, except at some random “relocation” times at which it jumps back to its position at a time sampled randomly in its past, according to some “memory kernel”. The relocations make the process non-Markovian and introduce a reinforcement effect (the walker is more likely to relocate in a Borel set in which it has spent a lot of time in the past). In this paper, we focus on “steep” memory kernels: in these cases, the time sampled in the past at each relocation time is likely to be quite recent. One can see this as a way to model the case when the walker quickly “forgets” its past. We prove limit theorems for the position of the walker at large times, which confirm and generalise the estimates available in the physics literature. § INTRODUCTION The “monkey walk” is a non-Markovian stochastic process that was first defined in the physics literature as a model for animal (monkeys, in particular) foraging behaviour. The original model of Boyer and Solis-Salas <cit.> is a random walk on ℤ^d that evolves like the simple symmetric random walk except at some random “relocation” times at which it jumps to a site it has already visited in the past, chosen with probability proportional to the number of past visits at that site. Equivalently, at relocation times, the walker chooses a time uniformly at random in its past and jumps to the site it visited at that random time. We call the intervals between relocation times the “run-lengths”. In the original model of <cit.>, the run-lengths are i.i.d. random variables, geometrically distributed with some parameter q>0. Boyer and Solis-Salas <cit.> proved a central limit theorem for the position of the walker at large time; they showed in particular that the variance of this position is or order (log t)^d at large time t. The monkey walk thus diffuses much slower than the simple symmetric random walk, which diffuses at speed t^d. This is because the random relocations, which make sites that have been visited often in the past more likely to be visited again in the future, introduce a reinforcement (rich-gets-richer) effect. Boyer, Evans and Majumdar <cit.> later generalised this model by adding memory, i.e. making the walker more likely to relocate to sites it visited more recently. Indeed, in their model, at relocation times, the walker chooses a time in its past according to some possibly non-uniform probability distribution, and then jumps to the site where it was at that random time. More precisely, the idea is that, if a relocation happens at time t>0, then the random time chosen by the walker in its past has density μ(x)dx/(∫_0^t μ), where μ : [0,∞) → [0,∞) is a non-negative function called the memory kernel. (We ask that μ satisfies ∫_0^t μ>0 for all t>0.) Boyer, Evans and Majumdar <cit.> showed a central limit theorem for the position of the walker at large times, for a large class of possible memory “kernels”. 
Mailler and Uribe-Bravo <cit.> later generalised the model even further by allowing the underlying motion (the walk between relocation times) to be any Markov process (instead of the simple symmetric random walk), possibly in continuous time, and the run-lengths to be a sequence of i.i.d. random variables of any distribution (instead of the geometric distribution). They also allowed the memory to be non-uniform as in <cit.> and proved that, for a large class of memory kernels, “if the underlying Markov process satisfies some central limit theorem, and if the run-lengths distribution have moments of high-enough order, then the associated monkey walk also satisfies a central limit theorem”. For technical reasons, the results of <cit.> only hold for relatively “flat” memory kernels, i.e. only if the walker does not forget its past too fast. More precisely, they consider two classes of memory kernels: μ_1(x) = α/x (log x)^α-1e^β (log x)^α, (α>0, β≥ 0), and μ_2(x) = γδ x^δ-1e^γ x^δ, (γ>0, δ∈ [0,1/2]). The aim of this paper is to prove limit theorems for steeper memory kernels, i.e. for μ = μ_2 with δ>1/2. We are able to prove precise asymptotic results for the position of the walker at large times, which, in particular, confirm the predictions of <cit.> (made in the case when the underlying Markov process is the standard Brownian motion). In <cit.>, the proofs rely on the analysis of the “genealogical tree of the runs”. When the memory kernel is flat enough, this tree is close enough to a random recursive tree. When the memory kernel becomes steeper, the tree becomes less and less “fat”; in particular, as mentioned in <cit.>, the fact that the last common ancestor of two nodes taken uniformly, independently at random in the tree has constant-order height is no longer true, we believe, when δ≥1/2. Because of this, we are not able to prove convergence of the occupation measure for steep memory kernels, while this could be done in <cit.> for flatter memory kernels. The literature on the “monkey walk” extends much beyond the works of Boyer and Solis-Salas <cit.>, Boyer, Evans and Majumdar <cit.>, and Mailler and Uribe Bravo <cit.>, which we have discussed so far. Indeed, Boyer and Pineda <cit.> proved limiting theorems for the position of the walker at large times in the case when the underlying Markov process is a random walk with heavy-tailed increments (a case which falls under the more general, more recent framework of <cit.>). Large deviations for the position of the walker at large times were established in the work of Boci and Mailler <cit.>. In <cit.>, Boyer, Falcón-Cortés, Giuggioli and Majumdar exhibit an interesting localisation phenomenon when the probability to relocate is larger at the origin than at the other sites of ℤ^d. Recently, Boyer and Majumdar <cit.> have introduced a continuous-time variant of the model in which, between relocations, the particle moves at constant speed on a one-dimensional line with a telegraphic noise (i.e. the sign of the particle's speed changes at constant rate). They get explicit formulas for the distribution of the position of the particle at all times and show large deviation results. The case when the walker resets to a fixed position (eg. the origin) at relocation times has also been studied in the literature (see, e.g. <cit.> for a literature review on the subject), but, unsurprisingly, it leads to a drastically different behaviour. 
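Although none of our proofs rely on simulation, the dynamics described above are straightforward to sample, which can help fix ideas before the formal definition in the next subsection; the sketch below (in Python) uses a standard Brownian motion discretised on a grid as underlying process, standard exponential run-lengths and the memory kernel μ_2, and every numerical choice in it is purely illustrative.

import numpy as np

rng = np.random.default_rng(1)
gamma, delta = 1.0, 0.75          # memory kernel mu(x) = gamma*delta*x**(delta-1)*exp(gamma*x**delta)
dt, horizon = 0.01, 50.0          # time step of the discretised Brownian motion, final time

def past_time(t):
    # a time in [0, t) with density mu / int_0^t mu, via the explicit inverse of its CDF
    u = rng.random()
    return (np.log1p(u * np.expm1(gamma * t ** delta)) / gamma) ** (1.0 / delta)

def monkey_walk():
    times = np.arange(0.0, horizon, dt)
    X = np.zeros_like(times)
    next_reloc = rng.exponential(1.0)            # first relocation time T_1
    for k in range(1, len(times)):
        if times[k] >= next_reloc:               # relocation: jump to the position held at a past time
            X[k] = X[int(past_time(next_reloc) / dt)]
            next_reloc += rng.exponential(1.0)   # length of the new run
        else:                                    # run: one Brownian increment
            X[k] = X[k - 1] + np.sqrt(dt) * rng.standard_normal()
    return times, X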
§.§ Mathematical definition of the model and notation The monkey walk X = (X_t)_t≥ 0 is a stochastic process on ℝ^d whose distribution depends on three parameters: * a semi-group P = (P_t)_t≥ 0 on ℝ^d (the distribution of the underlying Markov process); * a probability distribution ϕ on [0,∞) (the run-length distribution); * a function μ : [0,∞) → [0,∞) such that ∫_0^t μ >0 for all t>0. The process X is defined as follows (see Figure <ref>): first sample (L_n)_n≥ 1 a sequence of i.i.d. random variables of distribution ϕ and let T_n = ∑_i=1^n L_i for all n≥ 0. Then, let (X(s))_0≤ s<T_1 be the Markov process of semi-group P started at the origin. Then, for all n≥ 1, given (X(s))_0≤ s<T_n, * let R_n be a random variable on [0, T_n) whose distribution has density μ/(∫_0^T_nμ) on [0,T_n), * let (X(s))_T_n≤ s<T_n+1 be the Markov process of semi-group P started at X(R_n). For all n≥ 1, we call L_n the “length of the n-th run” and T_n the “n-th relocation time”. In this paper, we assume that μ(x) = γδ x^δ-1e^γ x^δ for some γ>0 and δ>1/2. Note that this framework also includes the case of a discrete-time underlying process. Indeed, in that case, we make the underlying process defined on [0,∞) by making it constant on each interval [n, n+1), n≥ 0, and we round the relocation times to the integer above. §.§ Statement of the results Our main result holds under the following assumptions on the underlying Markov process of semi-group P: (A1) There exist two functions a : [0,∞) →ℝ^d and b : [0,∞) → (0,∞), and a probability distribution ν on ℝ^d such that, for all z∈ℝ^d, if Z = (Z(t))_t≥ 0 is the Markov process of semi-group P started at z, then Z(t)-a(t)/b(t)⇒ν, in distribution as t↑∞. (We say that Z is (a, b)-ergodic, following the terminology used in <cit.>, and before that in <cit.>.) (A2) For all x∈ and for all functions ε : [0,∞)→ℝ satisfying ε(t) = o(√(t)) as t↑∞, the following limits exist and are finite: f(x)=lim_t→∞a(t+x√(t)+ε(t))-a(t)/b(t) and g(x)=lim_t→∞b(t+x√(t)+ε(t))/b(t). (A2_δ) For all functions ε : [0,∞)→ℝ satisfying, as t↑∞, ε(t) = o(t^3/2-δ) if δ∈ (1, 3/2) 𝒪(1) if δ>3/2, we have 0=lim_t→∞a(t+ε(t))-a(t)/b(t) and 1=lim_t→∞b(t+ε(t))/b(t). The results of <cit.> hold under Assumptions (A1-2); recall that they cover the case of δ∈(0,1/2]. For δ∈ (1/2, 1], we use the same assumptions. However, for δ>1, i.e. for the steepest memory kernels, we need to replace Assumption (A2) by (A2_δ). To state our main result, we introduce the following notation: For all t≥ 0, we set s(t) = γδ t^δ/𝔼 [L]∑_k=0^⌊δ/2 - 2δ⌋(-γδ)^k𝔼[L^k+2]/(k+2)!(δ-(1-δ)k)· t^k(δ -1) if δ∈(0, 1), (𝔼[L] - 1-𝔼[e^-γ L]/γ) t if δ = 1, t-t^2-δ/γδ (2-δ) if δ∈ (1, 2], t if δ>2. Roughly speaking, our main result says that, in distribution, X(t)≈ Z(s(t)), where Z is the Markov process of semi-group P started at the origin. One can see that, as δ increases, and thus as the memory kernel becomes steeper and steeper, s(t) becomes larger and larger. We comment on the formula for s(t) when δ∈(1/2, 1). For δ∈ (1/2, 2/3), the sum has only one summand. Indeed, 0≤δ/(2δ -2)< 1 if and only if δ∈ (1/2, 2/3). In that case, s(t) = γ𝔼[L^2] /2𝔼 [L]· t^δ. Similarly, if δ∈ [2/3, 3/4), the sum has two summands, and s(t) = γ𝔼[L^2] /2𝔼 [L]· t^δ - (γδ)^2 𝔼[L^3]/6(2δ -1)𝔼 [L]· t^2δ -1. More generally, for all integers i≥ 1, for all δ∈ [i/i+1, i+1/i+2), the sum has i summands, each of them of the form “constant times a power of t”, where the power of t deacreases with the index k of the sum. 
Note in particular that, for all δ∈ (1/2, 1), as t↑∞, s(t)∼γ𝔼[L^2] /2𝔼 [L]· t^δ. The following three theorems are our main results: Let X=(X(t))_t≥ 0 be the monkey process of semigroup P, run-length distribution ϕ and memory kernel μ(x)=γδ x^δ -1e^γ x^δ, where γ>0 and δ∈ (0,1). Let L be a random variable of distribution ϕ. We assume that (A1-2) hold and assume that 𝔼[L^p]<∞, where p=max{8, ⌊1/1-δ⌋ +1}. Then, in distribution as t↑∞, X(t)-a(s(t))/b(s(t))⇒ f(Ω) + Λ g(Ω), where (s(t))_t≥ 0 is defined as in (<ref>), and where Ω∼𝒩(0, 2𝔼[L^3]/(3𝔼[L^2])) and Λ∼ν are two independent random variables. We have included the case δ∈ (0, 1/2] in Theorem <ref>. This case is proved in <cit.>. In this paper, we only prove Theorem <ref> in the case δ∈(1/2, 1). Although the general idea of the proof is the same as in <cit.>, the proof is more technical because the expansion of s(t) up to order t^δ/2 (which is the accuracy we need) has more and more terms as δ↑1. Let X=(X(t))_t≥ 0 be the monkey process of semigroup P, run-length distribution ϕ and memory kernel μ(x)=γe^γ x, where γ>0. Let L be a random variable of distribution ϕ. We assume that (A1-2) hold and assume that 𝔼[L^4]<∞. Then, in distribution when t↑∞, X(t)-a(s(t))/b(s(t))⇒ f(Ω) + Λ g(Ω), where s(t) is defined as in (<ref>), Λ∼ν and Ω are independent, and Ω = Ω_1 + Ω_2 + Ω_3 is the sum of three dependent Gaussian random variables whose variances and covariances are as follow: Var(Ω_1) = 1/γ^2+𝔼[L^2-(L+e^-γ L/γ)^2], Var(Ω_2) = Var(L- 1-e^-γ L/γ) = Var(L + e^-γ L/γ), Var(Ω_3) = Var(L)𝔼[L-(1-e^-γ L)/γ]^2/𝔼[L^2], Cov(Ω_1, Ω_2) = Var(L-1-e^-γ L/γ), Cov(Ω_1, Ω_3) = Cov(Ω_2, Ω_3) = Cov(L, L+e^-γ L)𝔼[L-(1-e^-γ L)/γ]/γ𝔼[L]. Let X=(X(t))_t≥ 0 be the monkey process of semigroup P, run-length distribution ϕ and memory kernel μ(x)=γδ x^δ-1e^γ x^δ, where γ>0 and δ>1. Let L be a random variable of distribution ϕ. We assume that (A1) and (A2_δ) hold and assume that 𝔼[L^4]<∞. Then, in distribution as t↑∞, X(t)-a(s(t))/b(s(t))⇒ν. We summarise here our moment conditions on the run-length distribution, for increasing values of δ: * for δ<7/8, we ask that 𝔼[L^8]<∞; * for δ∈ [7/8, 8/9) we ask that 𝔼[L^9]<∞; etc * for δ≥ 1, we ask that 𝔼[L^4]<∞. Interestingly, one can note that we ask for more and more control on the upper tail of the run-length distribution as δ↑1. We believe that this is a caveat of the expansion in the expression for s(t); if the run-lengths had heavier tails, a limiting result would hold, but there would be no “nice” formula for s(t). §.§ One example Before discussing our main result further, we apply it to a simple example: let ϕ be the standard exponential distribution and P be the semi-group of the one dimensional Brownian motion with constant drift equal to 1. We first check Assumptions (A1-2) and (A2_δ): If Z is the one dimensional Brownian motion with constant drift equal to 1, then, in distribution as t↑∞, Z(t)-t/√(t)⇒𝒩(0,1). In other words, Assumption (A1) holds with a : t ↦ t, b : t↦√(t), and ν = 𝒩(0,1). Also, for all x∈ℝ and t≥ 0, a(t+ x√(t) + ε(t))- a(t)/b(t) = x√(t) + ε(t)/√(t)→ x, as t↑∞, as long as ε(t) = o(√(t)). Similarly, b(t+ x√(t) + ε(t))/b(t) = √(t+x√(t) + ε(t)/t)→ 1. Thus, Assumption (A2) holds with f(x) = x and g(x) = 1, for all x∈ℝ. Finally, if ε(t) = o(√(t)), then a(t+ε(t))-a(t)/b(t) = ε(t)/√(t)→ 0, and b(t+ε(t))/b(t) = √(t+ε(t)/t)→ 1, as t↑∞, which implies that Assumption (A2_δ) also holds in this case. Also note that, if L is a standard exponential random variable, then 𝔼 L^k = k! <∞ for all k≥ 1. 
Thus the assumptions of Theorems <ref>, <ref> and <ref> hold for all δ∈ (0, ∞). We now look at the conclusions of these theorems for different values of δ. For all of these, one can check that the diffusive speed of the monkey walk coincides with the variance estimates of <cit.> (made for the same model: runs of Brownian motion and exponential run-lengths – the only difference is that they consider the case with no drift, but this does not affect the variance estimates). ∙ Assume that δ∈ (0, 2/3). In that case (see Remark <ref>), s(t) = γ𝔼[L^2] /2𝔼 [L]· t^δ =γ t^δ. Thus, by Theorem <ref>, X(t) - γ t^δ/√(γ) t^δ/2⇒Ω + Λ, where Ω∼𝒩(0, 2) (because 2𝔼[L^3]/3𝔼[L^2] = 2) and Λ∼𝒩(0, 1). Because Ω and Λ are independent, we get X(t) - γ t^δ/√(γ) t^δ/2⇒𝒩(0, 3), or equivalently, X(t) - γ t^δ/t^δ/2⇒𝒩(0, 3γ). ∙ Assume that δ∈ (2/3, 3/4). The only difference with the previous case is in the definition of s(t): as discussed in Remark <ref>, in this case, s(t) = γ𝔼[L^2] /2𝔼 [L]· t^δ - (γδ)^2 𝔼[L^3]/6(2δ -1)𝔼 [L]· t^2δ -1 = γ t^δ + (γδ)^2/2δ -1 · t^2δ -1. Theorem <ref> thus gives X(t) - γ t^δ - (γδ)^2/2δ -1· t^2δ -1/t^δ/2⇒𝒩(0,3γ). It is important to note that one cannot neglect the second order term of s(t) in the numerator above because t^2δ -1≫ t^δ/2. Indeed, 2δ -1>δ/2 as soon as δ>2/3. This is the reason why, in the definition of s(t) for δ<1, we need to keep more and more terms in the sum as δ increases. ∙ Assume that δ = 1. In that case, Theorem <ref> applies. We have s(t) = (𝔼[L]-1-𝔼[e^-γ L]/γ)t = γ/γ+1· t. Thus, X(t) - γ t/(γ+1)/√(γ t/(γ+1))⇒Ω + Γ, where Γ∼𝒩(0,1) and Ω = Ω_1+Ω_2+Ω_3 are independent, with Ω_1, Ω_2, and Ω_3 three Gaussian whose variances and covariances are given by Var(Ω_1) = (γ-1)^2/2γ^2(γ+1)^2, Var(Ω_2) = γ^2(2γ+5)/(γ+1)^2(2γ+1), Var(Ω_3) = γ/γ+1, Cov(Ω_1, Ω_2) = γ^2(2γ+5)/(γ+1)^2(2γ+1), and Cov(Ω_1, Ω_3)=Cov(Ω_2, Ω_3) = γ^2+2γ+2/(γ+1)^3. ∙ Assume that δ>1. In that case, Theorem <ref> applies and gives that, if δ∈ (1, 3/2], then X(t)-t+t^2-δ/(γδ(2-δ))/√(t)⇒𝒩(0,1), and if δ>3/2, X(t)-t/√(t)⇒𝒩(0,1). In the case when δ>3/2, the memory kernel is so steep that the monkey walk satisfies the same central limit theorem as the standard Brownian motion (with no relocations). §.§ Plan of the paper After some preliminary results in Section <ref>, we prove Theorems <ref>, <ref> and <ref> in Sections <ref>, <ref> and <ref>, respectively. § PRELIMINARIES TO THE PROOFS As in <cit.>, our proofs rely on the fact that, for all t≥ 0, in distribution, X(t) = Z(S(t)), where Z is the Markov process of semi-group P started at the origin, and S(t) is a random variable, independent of Z. The idea is that, as t↑∞, S(t)∼ s(t) in probability (where s(t) is as defined in (<ref>)). To define S(t), one need to introduce the following notation: for all 1≤ i< j, we say that the i-th run is the parent of the j-th run if R(j)∈ [T_i-1, T_i). This implies a genealogy on runs, and we let i≺ j denote the event that the i-th run is an ancestor of the j-th run in this genealogical tree. For any t, we let i(t) be the integer such that X(t)∈ [T_i(t)-1, T_i(t)), meaning that X(t) belongs to the i(t)-th run of the process. We also let A(t)= t-T_i(t)-1. Finally, we set S(t) = A(t) + ∑_i=1^i(t)-1 F_i 1_i≺ i(t), where, given the run-lengths (L_i)_i≥ 1, (F_i)_i≥ 1 is a sequence of independent random variables such that, for all i≥ 1, ℙ_ L(F_i≤ x) = ∫_0^x μ(T_i-1+u) du/∫_0^L_iμ(T_i-1+u) du, where we let ℙ_ L denote ℙ( · |(L_i)_i≥ 1). Note that, for all i≥ 1, T_i-1 + F_i is distributed as R(n) conditioned to belong to the i-th run. 
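Although we only use this representation of S(t) analytically, it is easy to sample from, which provides a convenient sanity check; the sketch below (in Python, with standard exponential run-lengths, γ=1 and δ=3/4, all of which are illustrative choices) builds the genealogy of the runs by drawing, for each run, the relocation time that started it, and then adds A(t) to the offsets F_i collected along the ancestral line of the run containing t.

import numpy as np

rng = np.random.default_rng(2)
gamma, delta = 1.0, 0.75                           # mu(x) = gamma*delta*x**(delta-1)*exp(gamma*x**delta)

def past_time(t):
    # a time in [0, t) with density mu / int_0^t mu, via the explicit inverse of its CDF
    u = rng.random()
    return (np.log1p(u * np.expm1(gamma * t ** delta)) / gamma) ** (1.0 / delta)

def sample_S(t, mean_run=1.0):
    T = [0.0]                                      # relocation times T_0 = 0 < T_1 < ... covering [0, t]
    while T[-1] <= t:
        T.append(T[-1] + rng.exponential(mean_run))
    parent, offset = {1: None}, {1: 0.0}
    for j in range(2, len(T)):                     # run j starts at the position held at a time drawn in [0, T_{j-1})
        r = past_time(T[j - 1])
        i = int(np.searchsorted(T, r, side="right"))   # run containing r, i.e. r in [T_{i-1}, T_i)
        parent[j], offset[j] = i, r - T[i - 1]         # this offset plays the role of F_i on the edge (i, j)
    j = int(np.searchsorted(T, t, side="right"))   # i(t)
    s = t - T[j - 1]                               # A(t)
    while parent[j] is not None:                   # add the offsets along the ancestral line of run i(t)
        s += offset[j]
        j = parent[j]
    return s

print(np.mean([sample_S(50.0) for _ in range(200)]))   # concentrates around s(t) for large t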
Interestingly, and this is crucial in our proofs, the distribution of R(n) conditionally on R(n)∈ [T_i-1, T_i) is the same for all n≥ 1. With these definitions, we have For all t≥ 0, in distribution, X(t) = Z(S(t)), where Z is the Markov process of semi-group P started at the origin, and S(t) is defined as in (<ref>). The intuition behind this lemma is given in Figure <ref>: the idea is that, by the Markov property of the underlying process, X(t) can be seen at the end-point of the bold purple trajectory, which is distributed as Z run for the amount of time S(t). The plan of the proof of Theorems <ref>, <ref>, and <ref> all follow the same plan: * We first prove a limiting theorem for Φ(n):=∑_i=1^n F_i 1_i≺ n, as n↑∞. * We use renewal theory to get that i(t) ≈ t/𝔼[L] and A(t) = o(u(t)) in probability for any u(t)↑∞ as t↑∞. This gives a limiting theorem for S(t). * We then compose the limiting theorem for S(t) with the limiting theorem for Z(t) given by Assumption (A1). In Sections <ref>, <ref>, and <ref>, we prove Theorems <ref>, <ref>, and <ref>, respectively. We now state a few preliminary results that will be useful in all three proofs: Lemma <ref> will be useful for Step 1., while Lemmas <ref> and <ref> will be useful in Step 2. Conditionally on L = (L_i)_i≥ 1, ( 1_i≺ n)_1≤ i≤ n-1 is a sequence of independent Bernoulli-distributed random variables of respective parameters W_i/W̅_i, where, for all i≥ 1, W_i = ∫_T_i-1^T_iμ and W̅_i = ∑_j=1^i W_j = ∫_0^T_iμ. The proof of Lemma <ref>, and the intuition behind all the results in <cit.>, is based on the fact that the genealogical tree of the runs as defined by the relationship ≺ is a “weighted (random) recursive tree” with random weights (W_i)_i≥ 1. For more literature on weighted recursive trees, we refer the reader to, e.g., <cit.>. In probability as t↑∞, A(t) = 𝒪(1) (i.e., for all ε>0, there exists M = M(ε) and t(ε) such that, for all t≥ t(ε), ℙ(|A(t)|≥ M)≤ε). In particular, for any u : [0,∞)→ℝ such that u(t)↑∞ as t↑∞, A(t)/u(t) → 0 in probability as t↑∞. This is a straightforward consequence of the fact that, by renewal theory, T_i(t)+1-T_i(t) converges in distribution to an almost surely finite random variable (see, e.g. <cit.>). The result follows, because 0≤ A(t)≤ T_i(t)+1-T_i(t) almost surely for all t≥ 0. Almost surely as t↑∞, i(t) = t/𝔼L + 𝒪(√(tlog t)). This is a straightforward consequence of the law of the iterated logarithm. First note that i(t)↑∞ almost surely as t↑∞. Indeed, it is almost surely non-decreasing, and thus either i(t)↑∞, or i(t)↑ i(∞)<∞. In the latter case, we would get that, for all t≥ 0, t≤∑_i=1^i(∞) L_i, which is impossible. Thus, i(t)↑∞ almost surely as claimed. Because, by assumption, 𝔼L<∞, the strong law of large numbers gives i(t)∼ t/𝔼L, almost surely as t↑∞. Now, by assumption, Var(L)<∞, thus, -∞<lim inf_n↑∞∑_i=1^n L_i - n𝔼L/√(nlog n)≤lim sup_n↑∞∑_i=1^n L_i - n𝔼L/√(nlog n)<∞, which implies -∞<lim inf_t↑∞∑_i=1^i(t) L_i - i(t)𝔼L/√(i(t)log i(t))≤lim sup_t↑∞∑_i=1^i(t) L_i - i(t)𝔼L/√(i(t)log i(t))<∞. Because i(t)∼ t/𝔼L almost surely, we get -∞<lim inf_t↑∞∑_i=1^i(t) L_i - i(t)𝔼L/√(tlog t)≤lim sup_t↑∞∑_i=1^i(t) L_i - i(t)𝔼L/√(tlog t)<∞. By definition, ∑_i=1^i(t)-1 L_i - i(t)𝔼L/√(tlog t)≤t - i(t)𝔼L/√(tlog t) < ∑_i=1^i(t) L_i - i(t)𝔼L/√(tlog t). Taking limits, we thus get that, almost surely, -∞< lim inf_t↑∞t - i(t)𝔼L/√(tlog t)≤lim sup_t↑∞t - i(t)𝔼L/√(tlog t)<∞, which concludes the proof. § PROOF OF THEOREM <REF> In this section, we assume that δ<1. 
As discussed in Section <ref>, we start by proving a limit theorem for Φ(n) = ∑_i=1^n F_i 1_i≺ n: Assume that δ∈ (1/2, 1) and that the assumptions of Theorem <ref> hold. We let L = (L_i)_i≥ 1 and F = (F_i)_i≥ 1 (as defined in (<ref>)). Conditionally on (L,F), (L,F)-almost surely, Φ(n)-σ(n)/√(γ n^δ)⇒𝒩(0,𝔼[L^3]/3𝔼[L]^1-δ), in distribution as n→∞, where σ(n) = γδ n^δ∑_k=0^⌊δ/2 - 2δ⌋(-γδ)^k 𝔼[L^k+2]/(k+2)!(δ - k(1-δ)) 𝔼[L]^(1-δ)(k+1)· n^-k(1-δ). Before proving Proposition <ref> we show how to use it to prove Theorem <ref>: We first aim at proving a limit theorem for S(t). Recall that, by definition (see (<ref>)), S(t) = Φ(i(t)-1) + A(t). By Lemma <ref>, i(t)↑∞ almost surely as t→∞. Thus, Proposition <ref> implies that, in distribution as t↑∞, Φ(i(t)-1)-σ(i(t)-1)/√(γ i(t)^δ)⇒𝒩(0,𝔼[L^3]/3𝔼[L]^1-δ). Now, by Lemma <ref>, i(t)= t/𝔼L + 𝒪(√(tlog t)), almost surely as t↑∞. Thus, σ(i(t)-1) = γδ∑_k=0^⌊δ/2 - 2δ⌋(-γδ)^k 𝔼[L^k+2]/(k+2)!(δ - k(1-δ)) 𝔼[L]^(1-δ)(k+1)· i(t)^-k(1-δ)+δ, For all 0≤ k≤⌊δ/2 - 2δ⌋, i(t)^-k(1-δ)+δ = (t/𝔼L + 𝒪(√(tlogt)))^-k(1-δ)+δ = (t/𝔼L)^-k(1-δ)+δ ( 1+ 𝒪(√(logt/t))) = (t/𝔼L)^-k(1-δ)+δ + 𝒪(t^-k(1-δ)+δ-1/2√(logt)) = (t/𝔼L)^-k(1-δ)+δ + o(t^δ/2), where, in the last equality, we have used the fact that, for all k≥ 0, -k(1-δ)+δ-1/2≤δ-1/2<δ/2. Using this in (<ref>) gives σ(i(t)-1) = γδ∑_k=0^⌊δ/2 - 2δ⌋(-γδ)^k 𝔼[L^k+2]/(k+2)!(δ - k(1-δ)) 𝔼[L]· t^-k(1-δ)+δ + o(t^δ/2) = s(t) + o(t^δ/2), by definition of s(t) is this case (see (<ref>) for δ <1). Using this and the fact that i(t)∼ t/𝔼L in (<ref>), we get Φ(i(t))-s(t) + o(t^δ/2)/√(γ (t/𝔼L)^δ)⇒𝒩(0,𝔼[L^3]/3𝔼[L]^1-δ), which implies Φ(i(t)-1)-s(t)/t^δ/2⇒𝒩(0,γ𝔼[L^3]/3𝔼[L]). Recall that S(t) = Φ(i(t)-1) + A(t) (see (<ref>)); using the fact that, by Lemma <ref>, A(t)/t^δ/2→ 0 in probability, we get that, in distribution as t↑∞. S(t)-s(t)/t^δ/2⇒𝒩(0,γ𝔼[L^3]/3𝔼[L]). By Skorokhod's representation theorem, there exists a probability space on which there exist S̃(t) distributed as S(t) for all t≥ 0, and Ω̂∼𝒩(0,𝔼[L^3]/(3𝔼[L])), such that almost surely when t→∞, S̃(t) = s(t) + Ω̂√(γt^ δ) + o(t^δ/2) = s(t)+ Ω̂√( 2 𝔼[L] / 𝔼[L^2] s(t) ) + Ω̂(√(γt^δ) - √( 2 𝔼[L] / 𝔼[L^2] s(t) )) + o(t^δ/2) = s(t) + Ω̂√( 2 𝔼[L] / 𝔼[L^2] s(t)) + o(√(s(t))). Indeed, we have used the fact that, as t↑∞, s(t)∼γ𝔼[L^2]t^δ/(2𝔼 [L]) (see Remark <ref>). By Assumption (A1), on the same probability space, there exists (Z̃(t))_t≥ 0 independent of (S̃(t))_t≥ 0 and a random variable Λ, independent of Ω̂ such that Z̃(t) has the same distribution as Z(t) for all t≥ 0, Λ has distribution ν, and Z̃(t)-a(t)/b(t)→Λ almost surely as t↑∞. Therefore, by assumption (A2), almost surely as t→∞, Z̃(S̃(t))-a(s(t))/b(s(t)) = b(S(t))/b(s(t)) ·Z̃(S̃(t))-a(S̃(t))/b(S̃(t)) +a(S̃(t))-a(s(t))/b(s(t)) →f(Ω)+ Λg(Ω), where Ω = Ω̂√(2𝔼[L]/𝔼[L^2])∼𝒩(0, 2𝔼[L^3]/(3𝔼[L^2])). Because Z̃(S̃(t)) = Z(S(t)) = X(t) in distribution for all t≥ 0 (by Lemma <ref>), this concludes the proof. The rest of the section is devoted to proving Proposition <ref>. The idea of the proof is as follows: we first reason conditionally on L and F and apply Linderberg's central limit theorem to prove that Φ(n)-𝔼_ L, F[Φ(n)]/√(Var_ L, F(Φ(n)))⇒(0,1) in distribution as n↑∞. This is done in Section <ref>. To prove Proposition <ref>, we thus need to find asymptotic equivalents for 𝔼_ L, F[Φ(n)] and Var_ L, F(Φ(n)). To do so, we apply Lemma <ref> to get that 𝔼_ L, F[Φ(n)] =∑_i=1^nF_iW_i/W̅_i and Var_ L, F(Φ(n)) =∑_i=1^nF_iW_i/W̅_i(1-W_i/W̅_i). 
In Lemma <ref>, we prove that, conditionally on L, we can almost surely approximate 𝔼_ L, F[Φ(n)] and Var_ L, F(Φ(n)) by their expectations conditionally on L, i.e., 𝔼_ L, F[Φ(n)]≈∑_i=1^n𝔼_ L[F_i]W_i/W̅_i and Var_ L, F(Φ(n))≈∑_i=1^n𝔼_ L[F_i]W_i/W̅_i(1-W_i/W̅_i). To estimate these expectations, we note that, by Definition of (F_i)_i≥ 1 (see (<ref>), see also (<ref>) for the definition of (W_i)_i≥ 1), 𝔼_ L[F_i] = 1/W_i∫_0^L_i xμ(T_i-1 + x)dx = 1/W_i∫_0^L_iγδ x(T_i-1+x)^δ-1e^γ (T_i-1+x)^δdx. In Section <ref>, we prove preliminary technical results that will be useful in the rest. In Section <ref> we state and prove a rigorous version of (<ref>). Finally, in Section <ref>, we use Linderberg's theorem to prove (<ref>) and conclude the proof of Proposition <ref>. §.§ A preliminary lemma In this section, we prove the following lemma: recall that, for all i≥ 1, T_i = ∑_j=1^i L_i is the time of the i-th relocation. Let g:→ be a function such that 𝔼[g(L)],Var(g(L))<∞. (i) For all ℓ∈(0,1) almost surely as n→∞, ∑_i=1^n g(L_i)/T_i^1-ℓ = 𝔼[g(L)]/ℓ·𝔼[L]^1-ℓ n^ℓ + o(n^ℓ/2). (ii) For all ℓ>1, almost surely as n→∞, ∑_i=1^n g(L_i)/T_i^ℓ = 𝒪(1). (iii) Almost surely as n↑∞, ∑_i=1^n g(L_i)/T_i = 𝔼[g(L)]log n + 𝒪(1). To prove this lemma, we use the following strong law of large numbers, for sums of independent but not identically-distributed random variables: Let (Δ_i)_i≥ 1 be a sequence of independent random variables, and let (a_i)_i≥ 1 be a sequence of real numbers such that a_n→+∞ as n↑∞. Assume that there exists α∈[1,2] such that ∑_i≥ 1𝔼[|Δ_i|^α]/a_i^α<∞. Then, almost surely as n↑∞, 1/a_n∑_i=1^n Δ_i → 0. (i) By the law of the iterated logarithm T_i=𝔼[L] i + 𝒪(√(i log i)) almost surely as i→∞. Hence, ∑_i=1^n g(L_i)T_i^ℓ-1 = ∑_i=1^n g(L_i)( 𝔼[L] i + 𝒪(√(i logi)))^ℓ-1 = 𝔼[L]^ℓ-1 ∑_i=1^n g(L_i) i^ℓ-1 + 𝒪( ∑_i=1^n g(L_i) i^ℓ-3/2 √(logi) ), almost surely as n↑∞. For the first sum in (<ref>), note that ∑_i=1^n g(L_i) i^ℓ -1 = ∑_i=1^n 𝔼[g(L_i)] i^ℓ -1+ ∑_i=1^n (g(L_i)-𝔼[g(L_i)]) i^ℓ -1. The latter sum is a sum of independent random variables to which we can apply Theorem <ref> with α=2 and a_i = i^ℓ/2. Indeed, ∑_i=1^∞𝔼[(g(L_i)-𝔼[g(L_i)])^2]i^2(ℓ-1)-ℓ = Var(g(L)) ∑_i=1^∞1/i^ℓ-2 <∞, because ℓ-2<-1, by assumption on ℓ. Thus, Theorem <ref> implies that, almost surely as n↑∞ ∑_i=1^n (g(L_i)-𝔼[g(L_i)]) i^ℓ -1 = o(n^ℓ/2), and thus, by (<ref>), ∑_i=1^n g(L_i) i^ℓ -1 = ∑_i=1^n 𝔼[g(L_i)] i^ℓ -1+o(n^ℓ/2) = 𝔼[g(L)] ∑_i=1^n i^ℓ -1 + o(n^ℓ/2). Now note that (n+1)^ℓ-1/ℓ= ∫_1^n+1 x^ℓ-1dx≤∑_i=1^n i^ℓ -1≤∫_0^n x^ℓ-1dx = n^ℓ/ℓ, which implies that, as n↑∞, ∑_i=1^n i^ℓ -1 = n^ℓ/ℓ +o(n^ℓ/2). We thus get that, almost surely as n↑∞, ∑_i=1^n g(L_i)i^ℓ -1 = 𝔼[g(L)]/ℓ· n^ℓ + o(n^ℓ/2). For the second sum in (<ref>), we proceed similarly: first note that, for all ε>0, ∑_i=1^n g(L_i) i^ℓ -3/2√(log i) = 𝒪(∑_i=1^n g(L_i) i^ℓ -3/2+ε). We use Theorem <ref> again, with α = 2 and a_n = n^ε. Indeed, we have ∑_i≥ 1(g(L_i)-𝔼[g(L_i)]) i^2ℓ -3+2ε-2ε = Var(g(L)) ∑_i≥ 1 i^2ℓ -3<∞, because 2ℓ-3<-1 by assumption on ℓ. Theorem <ref> thus implies that ∑_i=1^n g(L_i) i^ℓ -3/2+ε = 𝔼[g(L)] ∑_i=1^n i^ℓ - 3/2+ε + o(n^2ε). We now choose ε such that 0<ε<min(ℓ-1/2, ℓ/4, (1-ℓ)/2). Using the fact that 0≤∑_i=1^n i^ℓ -3/2+ε≤∫_1^n+1 x^ℓ -3/2+εdx = 𝒪(n^ℓ-1/2+ε), we get that ∑_i=1^n g(L_i) i^ℓ -3/2+ε = 𝒪(n^ℓ-1/2+ε) + o(n^2ε) = o(n^ℓ/2), because, with our choice of ε, ℓ-1/2+ε<ℓ/2 and 2ε<ℓ/2. We have thus proved that, almost surely as n↑∞, ∑_i=1^n g(L_i) i^ℓ -3/2√(log i) = o(n^ℓ/2). 
Together with (<ref>) and (<ref>), this gives that, almost surely as n↑∞, ∑_i=1^n g(L_i)T_i^ℓ -1 = 𝔼[g(L)]/ℓ· n^ℓ + o(n^ℓ/2), which concludes the proof of the first claim. (iii) We use the strong law of large numbers, to get that T_i∼ i𝔼[L] almost surely as i↑∞. This implies ∑_i=1^n g(L_i)/T_i^ℓ = 𝒪(1/𝔼[L]^ℓ∑_i=1^n g(L_i)/i^ℓ). Now, we write ∑_i=1^n g(L_i)/i^ℓ = 𝔼[g(L)] ∑_i=1^n 1/i^ℓ + ∑_i=1^n g(L_i)-𝔼[g(L)]/i^ℓ, and note that the second sum is a martingale whose quadratic variation satisfies ∑_i=1^n 𝔼[(g(L_i)-𝔼[g(L)])^2]/i^2ℓ = Var(g(L)) ∑_i=1^n 1/i^2ℓ≤Var(g(L)) ∑_i=1^∞1/i^2ℓ<∞, because ℓ>1. Thus, this martingale converges almost surely as n↑∞, and we get ∑_i=1^n g(L_i)/ i^ℓ = 𝔼[g(L)] ∑_i=1^n 1/i^ℓ + 𝒪(1) = 𝒪(1), because ℓ>1. This concludes the proof of the second claim. (iii) We proceed similarly to the proof of (i) and write that, by the law of the iterated logarithm, ∑_i=1^n g(L_i)/T_i = ∑_i=1^n g(L_i)/(i𝔼L + 𝒪(√(ilogi))) = ∑_i=1^n g(L_i)/i𝔼L (1 + 𝒪(√((logi)/i))) = ∑_i=1^n g(L_i)/i𝔼L + 𝒪(∑_i=1^n g(L_i)√(logi)/i^3/2𝔼L). We now use the fact that log i is negligible in front of any power or i to get that ∑_i=1^n g(L_i)√(log i)/i^3/2𝔼L = 𝒪(∑_i=1^n g(L_i)/i^5/4𝔼L) = 𝒪(1), by (ii). This implies that, almost surely as n↑∞, ∑_i=1^n g(L_i)/T_i = ∑_i=1^n g(L_i)/i𝔼L + 𝒪(1) = 𝔼[g(L)]∑_i=1^n 1/i𝔼L + ∑_i=1^n g(L_i)-𝔼[g(L)]/i𝔼L +𝒪(1) = 𝔼[g(L)]/𝔼L·logn + ∑_i=1^n g(L_i)-𝔼[g(L)]/i𝔼L +𝒪(1). Now, the remaining sum is a martingale whose quadratic variation satisfies ∑_i=1^n 𝔼[(g(L_i)-𝔼[g(L)])^1]/i^2(𝔼L)^2 = Var(g(L))/(𝔼L)^2∑_i=1^n 1/i^2≤Var(g(L))/(𝔼L)^2∑_i≥ 11/i^2<∞. This implies that the remaining sum converges almost surely as n↑∞, which concludes the proof. §.§ Estimating the sums in (<ref>) The aim of this section is to understand the asymptotic behaviour of ∑_i=1^n W_i𝔼_ L[F_i]/W̅_i and ∑_i=1^n W_i𝔼_ L[F^2_i]/W̅_i. We do this in two separate lemmas. Almost surely as n→+∞, ∑_i=1^n W_i𝔼_ L[F_i]/W̅_i = σ(n) + o(n^δ/2). To prove Lemma <ref>, we need the following consequence of Borel-Cantelli lemma: For all δ∈ (0,1), L_i/T_i-1^1-δ→ 0 almost surely as i↑∞. First note that, by the strong law of large numbers, almost surely for all i large enough, T_i-1≥ i𝔼L/2. We take p as in (<ref>) (recall that, in this section, we assume that δ∈(0,1)). Fix ε>0; by Markov inequality, for all i≥ 1, ℙ(L_i>ε (2i𝔼L)^1-δ) ≤𝔼[L^p]/ε^p(i𝔼L/2)^(1-δ)p. By definition of p (see (<ref>)), (1-δ)p>1. and thus, ∑_i≥ 1ℙ(L_i>ε(2i𝔼L)^1-ℓ)<∞. By Borel-Cantelli lemma (because the (L_i)_i≥ 1 is a sequence of independent random variables), almost surely for all i large enough, L_i≤ε (2i𝔼L)^1-ℓ, which concludes the proof (recall that T_i-1≥ i𝔼L/2 almost surely for all i enough). By (<ref>), for all i≥ 1, W_i𝔼_L [F_i] = ∫_0^L_i γδx(T_i-1+x)^δ-1 e^γ(T_i-1+x)^δdx = L_i e^γT_i^δ - ∫_0^L_i e^γ(T_i-1+x)^δdx, by integration by parts. Setting u = (T_i-1+x)^δ, we get W_i𝔼_L [F_i] = L_i e^γT_i^δ - ∫_T_i-1^δ^T^δ_i 1/δu^1/δ-1 e^γudu = L_i e^γT_i^δ - T_i^1-δ/δ∫_T_i-1^δ^T^δ_i e^γudu + 1/δ∫_T_i-1^δ^T^δ_i [T_i^1-δ -u^1/δ-1] e^γudu = L_i e^γT_i^δ - T_i^1-δ/γδ(e^γT_i^δ-e^γT_i-1^δ) + 1/δ∫_T_i-1^δ^T^δ_i [T_i^1-δ -u^1/δ-1] e^γudu thus, ∑_i=1^nW_i𝔼_ L [F_i]/W̅_i = T_n - ∑_i=1^n T_i^1-δ/γδ(1-e^-γ (T_i^δ-T_i-1^δ)) + R_n, where we have set R_n = ∑_i=1^n 1/δ∫_T_i-1^δ^T^δ_i[T_i^1-δ -u^1/δ-1] e^γ udu. We first show that, in probability as n↑∞, R_n = o(n^δ/2). 
Indeed, we have R_n≥ 0, and R_n ≤∑_i=1^n (T_i^1-δ -T_i-1^1-δ) ∑_i=1^n 1/δ∫_T_i-1^δ^T^δ_i e^-γ(T_i^δ- u)du = ∑_i=1^n T_i^1-δ(1-(1-L_i/T_i)^1-δ) (1-e^-γ(T_i^δ-T_i-1^δ)) =∑_i=1^n T_i^1-δ(1-(1-L_i/T_i)^1-δ) (1-e^-γT_i^δ (1 -(1-L_i/T_i)^δ)) As i↑∞, L_i/T_i → 0 almost surely (see Lemma <ref>), and thus T_i^1-δ(1-(1-L_i/T_i)^1-δ) (1-e^-γ T_i^δ (1 -(1-L_i/T_i)^δ)) ∼ (1-δ)L_iT_i^-δ(1-e^-γδ L_i T_i^δ-1) ∼γδ(1-δ)L^2_i/T_i, because δ<1 and thus L_iT_i^δ -1→ 0 almost surely as i↑∞ (by Lemma <ref>). Because, by Lemma <ref>, ∑_i=1^n γδ(1-δ)L^2_i/T_i = 𝒪(log n), we get that, almost surely as n↑∞ R_n = 𝒪(log n) = o(n^δ/2), as needed. We now look at the second term in (<ref>): we write ∑_i=1^n T_i^1-δ/γδ(1-e^-γ (T_i^δ-T_i-1^δ)) = ∑_i=1^n T_i^1-δ/γδ(1-e^-γδ L_i T_i^δ-1) + R'_n, where R'_n = ∑_i=1^n T_i^1-δ/γδ(e^-γδ L_i T_i^δ-1-e^-γ (T_i^δ-T_i-1^δ)). First note that, for all x∈ [0,1], (1-x)^δ≤ 1-δ x. This implies T_i^δ-T_i-1^δ = T_i^δ(1-(1-L_i/T_i)^δ) ≥δ L_i T_i^δ-1, implying that R'_n≥ 0 for all n≥ 1. We now prove that R'_n = o(n^δ/2) almost surely as n↑∞. To do so, we use the fact that, for all x∈[0,1], (1-x)^δ≤ 1-δ x-δ(1-δ)x^2/2, and thus T_i^δ-T_i-1^δ≤δ L_i T_i^δ-1 + δ(1-δ)L_i^2 T_i^δ-2/2, which, in turn, implies R'_n ≤∑_i=1^n T_i^1-δ/γδe^-γδL_i T_i^δ-1(1-e^-γδ(1-δ)L_i^2 T_i^δ-2/2) ≤∑_i=1^n T_i^1-δ/γδ (1-e^-γδ(1-δ)L_i^2 T_i^δ-2/2) ≤∑_i=1^n (1-δ) L_i^2/2 T_i = 𝒪(logn), by Lemma <ref>. In total, using the fact that R_n = o(n^δ/2) and R'_n = o(n^δ/2) in (<ref>), we get that, almost surely as n↑∞, ∑_i=1^nW_i𝔼_L [F_i]/W̅_i = T_n - ∑_i=1^n T_i^1-δ/γδ(1-e^-γδL_i T_i^δ-1) + o(n^δ/2) = ∑_i=1^n T_i^1-δ/γδ(e^-γδL_i T_i^δ-1-1+γδL_i T_i^δ-1) + o(n^δ/2) = ∑_i=1^n T_i^1-δ/γδ∑_k≥2(-γδL_i T_i^δ-1)^k/k!+ o(n^δ/2) = ∑_k≥2(-γδ)^k/γδk! ∑_i=1^n L_i^k/T_i^(1-δ)(k-1)+ o(n^δ/2) Now note that, for all k≥ k_0 = ⌊3/2-δ/1-δ⌋+1, ∑_i=1^n L_i^k/T_i^(1-δ)(k-1)≤∑_i=1^n L_i^k_0/T_i^(1-δ)(k_0-1)=o(n^δ/2), almost surely as n↑∞. Furthermore, because of the alternating signs, |∑_k≥ k_0(-γδ)^k/γδ k!∑_i=1^n L_i^k/T_i^(1-δ)(k-1)| ≤|(-γδ)^k_0/γδ k_0!∑_i=1^n L_i^k_0/T_i^(1-δ)(k_0-1)| =o(n^δ/2), almost surely. This gives ∑_i=1^nW_i𝔼_L [F_i]/W̅_i = ∑_k= 2^k_0-1(-γδ)^k/γδk! ∑_i=1^n L_i^k/T_i^(1-δ)(k-1)+ o(n^δ/2) = ∑_k= 2^k_0-1(-γδ)^k𝔼[L^k]/γδ𝔼[L]^(1-δ)(k-1)k! ·n^(1-δ)(k-1)+1/(1-δ)(k-1)+1+ o(n^δ/2), which concludes the proof, by definition of σ(n) (see (<ref>)). Almost surely as n→∞, ∑_i=1^n 𝔼_L[F_i^2] W_i/W̅_i = γ𝔼[L^3]/3𝔼[L]^1-δ· n^δ +o(n^δ). By (<ref>), for all i≥ 1, W_i𝔼_L [F_i^2] = ∫_0^L_i γδx^2(T_i-1+x)^δ-1 e^γ(T_i-1+x)^δdx = L^2_i e^γT_i^δ - ∫_0^L_i 2xe^γ(T_i-1+x)^δdx, by integration by parts. This implies W_i𝔼_L [F_i^2]/W̅_i = L^2_i - ∫_0^L_i 2xe^-γ(T_i^δ- (T_i-1+x)^δ)dx = L^2_i - ∫_0^L_i 2xe^-γT_i^δ(1 - (1-(L_i-x)/T_i)^δ)dx = L^2_i - ∫_0^L_i 2(L_i-u)e^-γT_i^δ(1 - (1-u/T_i)^δ)du = L^2_i - ∫_0^L_i 2(L_i-u)(1-γδu T_i^δ-1)du + r_i, where we have set r_i = ∫_0^L_i 2(L_i-u)(1-γδ u T_i^δ-1-e^-γ T_i^δ (1 - (1-u/T_i)^δ))du. Thus, W_i𝔼_ L [F_i^2]/W̅_i = γδ T_i^δ-1∫_0^L_i 2(L_i-u) udu + r_i = γδ L_i^3/3T_i^1-δ + r_i, which, by Lemma <ref>, implies ∑_i=1^n W_i𝔼_ L [F_i^2]/W̅_i = γ𝔼[L^3]/2(𝔼L)^1-δ· n^δ + o(n^δ/2) + ∑_i=1^n r_i. Thus, it only remains to prove that ∑_i=1^n r_i = o(n^δ) almost surely as n↑∞. First note that, almost surely for all i≥ 1, ϖ_i : u↦e^-γ T_i^δ (1 - (1-u/T_i)^δ)+γδ u T_i^δ-1-1 is non-decreasing on [0,L_i]. 
Indeed, we have, uniformly in u∈ [0,L_i], because L_i/T_i→ 0 almost surely as i↑∞ (see Lemma <ref>), ϖ'_i(u) = γδT_i^δ-1(1-(1-u/T_i)^δ-1e^-γT_i^δ(1 - (1-u/T_i)^δ)) ∼γδT_i^δ-1(1-(1+(1-δ)u/T_i)(1-γδu/T_i^1-δ) ∼(γδ)^2 u, almost surely as i↑∞. Thus, almost surely for all i large enough, ϖ_i is non-decreasing on [0, L_i] as claimed. Because ϖ_i(0) = 0, this implies that 0≤ϖ_i(u)≤ϖ_i(L_i) for all u∈ [0,L_i]. This implies that |r_i| ≤ϖ_i(L_i) ∫_0^L_i 2(L_i-u)du = L_i^2 ϖ_i(L_i) = L_i^2 e^-γ T_i^δ (1 - (1-L_i/T_i)^δ)+γδ L_iT_i^δ-1-1. Because L_i/T_i^δ-1→ 0 almost surely as i↑∞ (see Lemma <ref>), we get that, almost surely as i↑∞, |r_i| ≤ o(L_i^3 T_i^δ -1), which implies ∑_i=1^n r_i = o(∑_i=1^n L_i^3 T_i^δ -1) = o(n^δ), by Lemma <ref>, as claimed. This concludes the proof. §.§ Making (<ref>) rigorous The aim of this section is to prove the following lemma: Almost surely as n→∞, ∑_i=1^n F_i W_i/W̅_i = ∑_i=1^n 𝔼_ L[F_i] W_i/W̅_i + o(n^δ/2), and ∑_i=1^n F_i^2 W_i/W̅_i( 1-W_i/W̅_i) = ∑_i=1^n 𝔼_ L[F_i^2] W_i/W̅_i +o(n^δ). Before proving this lemma, note that Lemmas <ref>, <ref>, and <ref> together give that, almost surely as n↑∞, ∑_i=1^n F_i W_i/W̅_i = σ(n) + o(n^δ/2) and ∑_i=1^n F^2_i W_i/W̅_i( 1-W_i/W̅_i) = γ𝔼[L^3]/3𝔼[L]^1-δ· n^δ +o(n^δ). By Lemma <ref>, L_i/T_i-1→ 0 almost surely as i↑∞. Thus, almost surely as i↑∞, T_i-1^δ-T_i^δ = T_i-1^δ- (T_i-1+L_i)^δ = T_i-1^δ(1-(1+L_i/T_i-1)^δ) =𝒪(T_i-1^δ -1 L_i) = 𝒪(T_i^δ-1L_i). This implies W_i/W̅_i =e^γ T_i^δ-e^γ T_i-1^δ/e^γ T_i^δ =1- exp(-γ(T_i^δ-T_i-1^δ)) =𝒪(T_i^δ -1 L_i). Thus, ∑_i=1^n (F_i-𝔼_L[F_i])W_i/W̅_i = 𝒪( ∑_i=1^n (F_i-𝔼_L[F_i])L_iT_i^δ -1). We apply Theorem <ref> to α = 2, Δ_i = (F_i-𝔼_ L[F_i])L_iT_i^δ -1 and a_i = T_i^δ/2, for all i≥ 1. Its assumption holds because, using the fact that F_i≤ L_i almost surely for all i≥ 1 (see (<ref>) for the definition of (F_i)_i≥ 1), ∑_i≥1Var_L(F_i)L^2_iT_i^2δ-2/T_i^δ≤∑_i=1^∞ L^4_iT_i^δ-2< ∞, by Lemma <ref> (because, by assumption, 𝔼L^4<∞). Thus, Theorem <ref> applies and gives ∑_i=1^n (F_i-𝔼_ L[F_i])L_iT_i^δ -1 = o(T_n^δ/2) = o(n^δ/2) almost surely when n↑∞ (the last equality holds by the strong law of large numbers). We thus get that, almost surely as n↑∞, ∑_i=1^n F_i W_i/W̅_i = ∑_i=1^n 𝔼_L[F_i] W_i/W̅_i + o(n^δ/2), which concludes the proof of the first statement. For the second statement of the lemma, we write ∑_i=1^n F_i^2 W_i/W̅_i( 1-W_i/W̅_i) = ∑_i=1^n F_i^2 W_i/W̅_i - ∑_i=1^n F_i^2 W_i^2/W̅_i^2. For the first sum, we proceed as in the proof of the first statement: we write ∑_i=1^n F_i^2 W_i/W̅_i = ∑_i=1^n 𝔼_ L[F_i^2] W_i/W̅_i + ∑_i=1^n (F_i^2-𝔼_ L[F_i^2]) W_i/W̅_i, and note that, by (<ref>), ∑_i=1^n (F_i^2-𝔼_ L[F_i^2]) W_i/W̅_i = 𝒪(∑_i=1^n (F_i^2-𝔼_ L[F_i^2])L_i T_i^δ-1). We now apply Theorem <ref> to α=2, Δ_i = (F_i^2-𝔼_ L[F_i^2])L_i T_i^δ-1, and a_i = T_i^δ. Its assumption holds because ∑_i≥ 1Var_ L(F_i^2) L_i^2T_i^2δ-2/T_i^2δ≤∑_i≥ 1 L_i^4T_i^-2<∞, by Lemma <ref>. Theorem <ref> thus implies that, almost surely as n↑∞, ∑_i=1^n (F_i^2-𝔼_ L[F_i^2])L_i T_i^δ-1 = o(n^δ), and thus, by (<ref>), ∑_i=1^n (F_i^2-𝔼_ L[F_i^2]) W_i/W̅_i = o(n^δ). For the second sum of (<ref>), we proceed along the same lines (we skip the technical details) and prove that, almost surely as n↑∞, ∑_i=1^n (F_i^2-𝔼_L[F_i^2])W_i^2/W̅_i^2 = 𝒪( ∑_i=1^n (F_i^2-𝔼_L[F_i^2]) L_i^2 T_i^2(δ -1)) = o(n^δ), which implies ∑_i=1^n F_i^2W_i^2/W̅_i^2 = ∑_i=1^n 𝔼_ L[F_i^2]W_i^2/W̅_i^2 +o(n^δ). Now, ∑_i=1^n 𝔼_ L[F_i^2]W_i^2/W̅_i^2 =𝒪(∑_i=1^n 𝔼_ L[F_i^2]L_i^2T_i^2(δ-1)) =𝒪(∑_i=1^n L_i^4 T_i^2(δ-1)). 
Applying Theorem <ref> to α = 2, Δ_i = L_i^4 T_i^2(δ-1), and a_i = T_i^δ, we get ∑_i=1^n 𝔼_ L[F_i^2]W_i^2/W̅_i^2 = o(T_n^δ) = o(n^δ). This, together with (<ref>), gives that, almost surely as n↑∞, ∑_i=1^n F_i^2W_i^2/W̅_i^2 = o(n^δ). By (<ref>) and (<ref>), this concludes the proof. §.§ Proof of Proposition <ref> As mentioned at the beginning of this section, the proof of Proposition <ref> relies on applying Lindeberg's theorem. We apply it, conditionally on L and F, to the sum Φ(n) - ∑_i=1^n F_iW_i/W̅_i = ∑_i=1^n F_i( 1_i≺ n - W_i/W̅_i). Indeed, this is a sum of centred random variables because 𝔼_ L, F[ 1_i≺ n] = W_i/W̅_i, by Lemma <ref>. To apply Lindeberg's theorem, we need to show that, for all ε>0, 1/a_n^2∑_i=1^n 𝔼_ L, F[F_i^2( 1_i≺ n-W_i/W̅_i)^2 1_|F_i( 1_i≺ n-W_i/W̅_i)|>ε a_n] → 0, as n↑∞, where a_n^2:= ∑_i=1^n Var_ L, F(F_i 1_i≺ n) = ∑_i=1^n F_i^2 Var_ L, F( 1_i≺ n) = ∑_i=1^n F_i^2W_i/W̅_i(1-W_i/W̅_i), by Lemma <ref>. First note that, by definition of (F_i)_i≥ 1 (see (<ref>)), F_i≤ L_i almost surely for all i≥ 1. Also, 1_i≺ n-W_i/W̅_i∈ [-1, 1] almost surely for all i≥ 1. Thus, almost surely, 1/a_n^2∑_i=1^n 𝔼_L,F[F_i^2(1_i≺n-W_i/W̅_i)^21_|F_i(1_i≺n-W_i/W̅_i)|>εa_n] ≤1/a_n^2∑_i=1^n L_i^2 ℙ_L,F(|F_i(1_i≺n-W_i/W̅_i)|>εa_n) ≤1/ε^2 a_n^4∑_i=1^n L_i^2 𝔼_L,F[|F_i(1_i≺n-W_i/W̅_i)|^2], by Markov inequality. Using again the fact that |F_i( 1_i≺ n-W_i/W̅_i)|≤ L_i almost surely for all i≥ 1, we get 1/a_n^2∑_i=1^n 𝔼_L,F[F_i^2(1_i≺n-W_i/W̅_i)^21_|F_i(1_i≺n-W_i/W̅_i)|>εa_n] ≤1/ε^2 a_n^4∑_i=1^n L_i^4 = n𝔼[L^4]/ε^2 a_n^4, almost surely as n↑∞, by the strong law of large numbers. Now, by (<ref>), almost surely as n↑∞, a_n^2 = γ𝔼[L^3]/3𝔼[L]^1-δ· n^δ +o(n^δ). Thus, 1/a_n^2∑_i=1^n 𝔼_ L, F[F_i^2( 1_i≺ n-W_i/W̅_i)^2 1_|F_i( 1_i≺ n-W_i/W̅_i)|>ε a_n] = 𝒪(n^1-2δ) → 0, almost surely as n↑∞ because δ>1/2. Thus, Lindeberg's theorem applies and gives that, conditionally on L and F, in distribution as n↑∞, Φ(n) - ∑_i=1^n F_iW_i/W̅_i/a_n⇒𝒩(0,1). By (<ref>) and (<ref>), this implies Φ(n) - σ(n)+o(n^δ/2)/√(γ𝔼[L^3]/3𝔼[L]^1-δ· n^δ)⇒𝒩(0,1), which concludes the proof of Proposition <ref>. § PROOF OF THEOREM <REF> In this section, we assume that δ = 1. The proof follows the same route as in the case δ <1, and the key ingredient is the following limiting theorem for Φ(n): Under the assumptions of Theorem <ref> and if δ = 1, then, in distribution as n↑∞, Φ(n) - n𝔼[L-(1-e^-γ L)/γ]/√(n)⇒Ω_1 + Ω_2, where Ω_1∼𝒩(0, 1/γ^2 + 𝔼[L^2-(L+e^-γ L/γ)^2]) and Ω_2∼𝒩(0,Var(L- 1-e^-γ L/γ)), and Cov(Ω_1, Ω_2) = Var(L - 1- e^-γ L/γ). Before proving Proposition <ref>, we show how it implies Theorem <ref>: By the strong law of large numbers, i(t)→∞ almost surely as t↑∞. Thus, by Proposition <ref>, in distribution as t↑∞, Φ(i(t)-1)-i(t)𝔼[1-(1-e^-γ L)/γ]/√(i(t))⇒Ω_1+Ω_2. Now, by the central limit theorem, in distribution as n↑∞ ∑_i=1^n L_i - n𝔼L/√(n)⇒Ψ∼𝒩(0, Var(L)). Thus, because i(t)↑∞ almost surely as t↑∞, we get ∑_i=1^i(t) L_i - i(t)𝔼L/√(i(t))⇒Ψ. By standard results in renewal theory, L_i(t) converges in distribution as t↑∞. Because T_i(t)-L_i(t)≤ t<T_i(t), this implies that T_i(t) = ∑_i=1^i(t) L_i = t+𝒪(1) in probability as t↑∞. Thus, in distribution as t↑∞, t - i(t)𝔼L/√(i(t))⇒Ψ. This implies Φ(i(t)-1)- t𝔼[L-(1-e^-γ L)/γ]/𝔼[L]/√(i(t))⇒Ω_1+Ω_2 + Ω_3, where we have set Ω_3 = Ψ𝔼[L-(1-e^-γ L)/γ]/𝔼[L]. 
Because A(t)/√(i(t))→ 0 in probability, by Lemma <ref>, and because, by definition, S(t) = Φ(i(t)-1)+A(t), this implies that S(t)- t𝔼[L-(1-e^-γ L)/γ]/𝔼[L]/√(i(t))⇒Ω_1+Ω_2 + Ω_3, which implies S(t)- t𝔼[L-(1-e^-γ L)/γ]/𝔼[L]/√(t)⇒1/√(𝔼L)(Ω_1+Ω_2 + Ω_3) =: Ω By Skorokhod's representation theorem, there exist a probability space (which we call Skorokhod's probability space) on which one can define a process (S̃(t))_t≥ 0 and a random variable Ω̃ such that S̃(t) = S(t) in distribution for all t≥ 0, Ω̃= Ω in distribution, and S̃(t)- t𝔼[L-(1-e^-γ L)/γ]/𝔼[L]/√(i(t))→Ω̃, almost surely as t↑∞. In Skorokhod's probability space, there also exist a process (Z̃(t))_t≥ 0 and a random variable Γ̃ of distribution γ such that (Z̃(t))_t≥ 0 is independent of (S̃(t))_t≥ 0, Z̃(t) = Z(t) in distribution for all t≥ 0, and Z̃(t)-a(t)/b(t)→Γ̃, almost surely as t↑∞. Thus, by Assumption (A'2), Z̃(S̃(t))-a(s(t))/b(s(t)) = Z̃(S̃(t))-a(S̃(t))/b(S̃(t))·b(S̃(t))/a(s(t)) + a(S̃(t))-a(s(t))/b(s(t)) → f(Ω̃)+Γ̃g(Ω̃). This implies convergence in distribution on the original probability space. Thus, it only remains to calculate Cov(Ω_1, Ω_3) and Cov(Ω_2, Ω_3); this is done in Section <ref>, where we prove that Cov(Ω_1, Ψ) =Cov(Ω_2, Ψ) =Cov(L, L+ e^-γ L/γ). This concludes the proof of Theorem <ref> because Ω_3 = Ψ𝔼[L-(1-e^-γ L)/γ]/𝔼[L]. As in Section <ref>, the idea to prove Proposition <ref> is to apply Lindeberg's theorem. However, this time, we only condition on L (and not on F). This will give Φ(n)-𝔼_ L[Φ(n)]/√(Var_ L(Φ(n)))⇒𝒩(0,1). By Lemma <ref>, and because, given L, the sequences (F_i)_1≤ i≥ n and ( 1_i≺ n)_1≤ i≤ n are independent, we get 𝔼_ L[Φ(n)] =∑_i=1^nW_i𝔼_ L[F_i]/W̅_i. For the variance, we use the fact that, given L, (F_i 1_i≺ n)_1≤ i≤ n is a sequence of independent random variables, and thus Var_ L(Φ(n)) = ∑_i=1^n Var_ L(F_i 1_i≺ n). Using the total variance law in the first equality and Lemma <ref> in the second one, we get that, for all 1≤ i≤ n, Var_L(F_i 1_i≺n) = Var_L(𝔼_L,F[F_i 1_i≺n]) + 𝔼_L[Var_L,F(F_i 1_i≺n)] = Var_L(F_iW_i/W̅_i) + 𝔼_L[F_i^2W_i/W̅_i(1-W_i/W̅_i)] = (W_i/W̅_i)^2 Var_L(F_i) + W_i/W̅_i(1-W_i/W̅_i)𝔼_L[F_i^2] = W_i/W̅_i 𝔼_L[F_i^2] - (W_i/W̅_i)^2𝔼_L[F_i]^2. We thus get Var_ L(Φ(n)) = ∑_i=1^n(W_i/W̅_i𝔼_ L[F_i^2] - (W_i/W̅_i𝔼_ L[F_i])^2). The rest of the section is dedicated to proving Proposition <ref>: In Section <ref>, we give asymptotic estimates for 𝔼_ L[Φ(n)] and Var_ L(Φ(n)). In Section <ref>, we apply Lindeberg theorem to prove (<ref>) and conclude the proof. §.§ Estimating the expectation and variance of Φ(n) In this section, we prove the following lemma: Almost surely for all n≥ 1, 𝔼_ L[Φ(n)] = ∑_i=1^n(L_i - 1- e^-γ L_i/γ), and Var_ L(Φ(n)) = n/γ^2+ ∑_i=1^n (L^2_i-(L_i+e^-γ L_i/γ)^2). Before proving this result, we note that, by the central limit theorem for 𝔼_ L[Φ(n)] and the law of large numbers for Var_ L(Φ(n)), 1/√(n)(𝔼_ L[Φ(n)] - n𝔼[L - 1- e^-γ L/γ]) ⇒𝒩(0,Var(L - 1- e^-γ L/γ) ), in distribution as n↑∞, and Var_ L(Φ(n)) =n/γ^2+n𝔼[L^2-(L+e^-γ L/γ)^2] + o(n), almost surely as n↑∞. First note that, in the case δ = 1, for all i≥ 1, W_i/W̅_i = W̅_i - W̅_i-1/W̅_i = e^γ T_i - e^γ T_i-1/e^γ T_i = 1- e^-γ L_i. Also, using the definition of (F_i)_i≥ 1 (see (<ref>)) in the first equality and integration by parts in the third one, we get that, for all i≥ 1, W_i 𝔼_L[F_i] = ∫_0^L_i γxe^γ(T_i-1+x)dx = ∫_T_i-1^T_i γ(x-T_i-1)e^γxdx = [(x-T_i-1)e^γx]_T_i-1^T_i - ∫_T_i-1^T_i e^γxdx = L_i e^γT_i - e^γT_i-e^γT_i-1/γ,because, μ(x) = γe^γ x in the case when δ =1. 
Using (<ref>), we thus get that, for all i≥ 1, W_i 𝔼_ L[F_i]/W̅_i= L_i - 1- e^-γ L_i/γ. Thus, by (<ref>), we get 𝔼_ L[Φ(n)] = ∑_i=1^n(L_i - 1- e^-γ L_i/γ). For the variance, recall that, by (<ref>), Var_ L(Φ(n)) = ∑_i=1^n(W_i/W̅_i𝔼_ L[F_i^2] - (W_i/W̅_i𝔼_ L[F_i])^2). By (<ref>), we have (W_i/W̅_i𝔼_ L[F_i])^2 =(L_i - 1- e^-γ L_i/γ)^2. Also, by definition of (F_i)_i≥ 1, for all i≥ 1, W_i𝔼_L[F_i^2] = ∫_0^L_i γx^2 e^γ(T_i-1+x)dx = ∫_T_i-1^T_i γ(x-T_i-1)^2 e^γxdx = [(x-T_i-1)^2 e^γx]_T_i-1^T_i - 2∫_T_i-1^T_i (x-T_i-1) e^γxdx = L_i^2 W̅_i - 2/γW_i𝔼_L[F_i], where we have used the fact that W_i𝔼_ L[F_i] = ∫_T_i-1^T_iγ(x-T_i-1) e^γ xdx (as seen in (<ref>)). Now using (<ref>), we get W_i𝔼_ L[F_i^2]/W̅_i = L_i^2 - 2/γ(L_i - 1- e^-γ L_i/γ). This, together with (<ref>), gives W_i/W̅_i𝔼_ L[F_i^2] - (W_i/W̅_i𝔼_ L[F_i])^2 =L_i^2 - 2/γ(L_i - 1- e^-γ L_i/γ)-(L_i - 1- e^-γ L_i/γ)^2 = L^2_i+ 1/γ^2-(L_i+e^-γ L_i/γ)^2, which concludes the proof. §.§ Applying Lindeberg's theorem In this section, we prove that, conditionally on L, in distribution as n↑∞, Φ(n)-𝔼_ L[Φ(n)]/√(Var_ L(Φ(n)))⇒𝒩(0,1). To do so, we apply Lindeberg's theorem to the triangular array (F_i 1_i≺ n)_1≤ i≤ n, conditionally on L. For all n≥ 1, we let a_n^2 = ∑_i=1^n Var_ L(F_i 1_i≺ n) = Var_ L(Φ(n)) ∼n/γ^2+n𝔼[L-(1+e^-γ L/γ)^2], almost surely as n↑∞. To apply Lindeberg's theorem, we need to show that 1/a_n^2∑_i=1^n 𝔼_ L[ (F_i 1_i≺ n-𝔼_ L[F_i 1_i≺ n])^2 1_|F_i 1_i≺ n-𝔼_ L[F_i 1_i≺ n|>ε a_n] → 0, almost surely as n↑∞, for all ε>0. Using the fact that, almost surely, 0≤ F_i 1_i≺ n≤ L_i for all 1≤ i≤ n, we get 1/a_n^2∑_i=1^n 𝔼_L[ (F_i 1_i≺n-𝔼_L[F_i 1_i≺n])^2 1_|F_i 1_i≺n-𝔼_L[F_i 1_i≺n|>εa_n ] ≤1/a_n^2∑_i=1^n L_i^2ℙ_L( |F_i 1_i≺n-𝔼_L[F_i 1_i≺n|>εa_n ) ≤1/a_n^2∑_i=1^n L_i^2·Var_L(F_i 1_i≺n)/ε^2 a_n^2, by Markov's inequality. Now, using again the fact that 0≤ F_i 1_i≺ n≤ L_i almost surely for all 1≤ i≤ n, we get 1/a_n^2∑_i=1^n 𝔼_ L[ (F_i 1_i≺ n-𝔼_ L[F_i 1_i≺ n])^2 1_|F_i 1_i≺ n-𝔼_ L[F_i 1_i≺ n|>ε a_n] ≤1/ε^2 a_n^4∑_i=1^n L_i^4 ∼𝔼[L^4]n/ε^2 a_n^4→ 0, almost surely as n↑∞, by (<ref>). This concludes the proof of (<ref>). §.§ Proof of Proposition <ref> The proof of Proposition <ref> relies on Equations (<ref>), (<ref>)  and (<ref>). Indeed, we write Φ(n) - n𝔼[1-(1-e^γL)/γ]/√(n) = Φ(n) - 𝔼_L[Φ(n)]/√(Var_L(Φ(n))) ·√(Var_L(Φ(n))/n) + 𝔼_L[Φ(n)] - n𝔼[1-(1-e^γL)/γ]/√(n) ⇒Ω_1 + Ω_2, where Ω_1 and Ω_2 are as in (<ref>). To prove Proposition <ref>, it only remains to calculate Cov(Ω_1, Ω_2). To do so, note that Cov(Φ(n), 𝔼_ L[Φ(n)]) =𝔼[Φ(n) 𝔼_ L[Φ(n)]] -𝔼[Φ(n)] 𝔼[𝔼_ L[Φ(n)]] = 𝔼[𝔼_ L[Φ(n)]^2] -𝔼[𝔼_ L[Φ(n)]]^2, by the tower rule. Thus, Cov(Φ(n), 𝔼_ L[Φ(n)]) =Var(𝔼_ L[Φ(n)]) = Var(∑_i=1^n(L_i - 1- e^-γ L_i/γ)), by Lemma <ref>. By independence, Cov(Φ(n), 𝔼_ L[Φ(n)]) = nVar(L - 1- e^-γ L/γ). Thus, Cov(Ω_1,Ω_2) =lim_n↑∞Cov(Φ(n) - 𝔼_L[Φ(n)]/√(n), 𝔼_L[Φ(n)] - n𝔼[1-(1-e^γL)/γ]/√(n)) = lim_n↑∞ 1/n Cov(Φ(n), 𝔼_L[Φ(n)]) = Var(L - 1- e^-γL/γ), which concludes the proof. §.§ Covariances The aim of this section is to prove Equation (<ref>). Note that, for all n≥ 1, Cov(∑_i=1^n L_i, 𝔼_L[Φ(n)]) = Cov(∑_i=1^n L_i, ∑_i=1^n(1 - 1- e^-γL_i/γ)) = ∑_i=1^n ∑_j=1^n Cov(L_i, (1 - 1- e^-γL_j/γ)) = ∑_i=1^n Cov(L_i, (L_i - 1- e^-γL_i/γ)) = nCov(L, (L - 1- e^-γL/γ)) = nVar(L) + nCov(L, e^-γL/γ). Thus, by definition of Ψ (see (<ref>)) and Ω_2 (see (<ref>)) Cov(Ψ, Ω_2) = lim_n→∞Cov(∑_i=1^n L_i - n𝔼L/√(n), 𝔼_ L[Φ(n)] - n𝔼[1-(1-e^γ L)/γ]/√(n)) = Var(L) + Cov(L, e^-γ L/γ). Finally, by the tower rule, Cov(∑_i=1^n L_i, Φ(n)) = Cov(∑_i=1^n L_i, 𝔼_ L[Φ(n)]), which concludes the proof of (<ref>). 
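The closed-form conditional moments obtained above for δ = 1 can be checked symbolically. The short sketch below verifies, for generic L>0 and γ>0, the identities W_i𝔼_L[F_i]/W̅_i = L-(1-e^-γL)/γ, W_i𝔼_L[F_i^2]/W̅_i = L^2-(2/γ)(L-(1-e^-γL)/γ) and Var_L(F_i 1_i≺n) = L^2+1/γ^2-(L+e^-γL/γ)^2; it is only a sanity check and assumes nothing beyond the densities written above.

```python
# Symbolic check of the delta = 1 identities (after dividing the integrals by Wbar_i).
import sympy as sp

x, L, g = sp.symbols('x L gamma', positive=True)

m1 = sp.integrate(g*x*sp.exp(g*(x - L)), (x, 0, L))        # W_i E_L[F_i] / Wbar_i
m2 = sp.integrate(g*x**2*sp.exp(g*(x - L)), (x, 0, L))     # W_i E_L[F_i^2] / Wbar_i

claim1 = L - (1 - sp.exp(-g*L))/g
claim2 = L**2 - (2/g)*(L - (1 - sp.exp(-g*L))/g)
var_claim = L**2 + 1/g**2 - (L + sp.exp(-g*L)/g)**2

print(sp.simplify(m1 - claim1))                            # 0
print(sp.simplify(m2 - claim2))                            # 0
print(sp.simplify(sp.expand(m2 - m1**2 - var_claim)))      # 0
```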
§ PROOF OF THEOREM <REF> In this section, we assume that δ >1. As in the two previous cases, we need to first understand the asymptotic behaviour of Φ(n): If δ>1, under the assumptions of Theorem <ref>, then, conditionally on L, almost surely as n↑∞, Φ(n) = T_n + 𝒪(1) if δ >2 T_n - ∑_i=1^n 1/γδ T_i^δ-1 + 𝒪(1) if δ∈ (3/2, 2] T_n - ∑_i=1^n 1/γδ T_i^δ-1 + o(n^3/2-δ(log n)^2) if δ∈ (1, 3/2], Before proving Proposition <ref>, we how how it implies Theorem <ref>: Since, by the strong law of large numbers, i(t)↑∞ almost surely as t↑∞, Proposition <ref> implies that, almost surely as t↑∞, Φ(i(t)-1) = T_i(t)-1 + 𝒪(1) if δ >2 T_i(t)-1 - ∑_i=1^i(t)-11/γδ T_i^δ-1 + 𝒪(1) if δ∈ (3/2, 2] T_i(t)-1 - ∑_i=1^i(t)-11/γδ T_i^δ-1 + o(√(i(t))) if δ∈ (1, 3/2] Recall that, by definition, i(t) is the integer such that T_i(t)-1≤ t < T_i(t)-1 + L_i(t) = T_i(t). By standard renewal theory, L_i(t) converges in distribution as t↑∞ (see the proof of Lemma <ref>), and thus, L_i(t) = 𝒪(1) in probability as t↑∞. For δ∈ (1,2], we have, on the one hand, ∑_i=1^i(t)-11/γδ T_i^δ-1≤∑_i=1^i(t)-1∫_T_i-1^T_i1/γδ x^δ-1dx = ∫_0^T_i(t)-11/γδ x^δ-1dx = T_i(t)-1^2-δ/γδ (2-δ) = t^2-δ/γδ (2-δ) + 𝒪(1), and, on the other hand, ∑_i=1^i(t)-11/γδ T_i^δ-1≥∑_i=1^i(t)-1∫_T_i^T_i+11/γδ x^δ-1dx = ∫_T_1^T_i(t)1/γδ x^δ-1dx = T_i(t)^2-δ - T_1^2-δ/γδ (2-δ). Because T_i(t) = t + 𝒪(1) in probability as t↑∞, and because T_1 = L_1 = 𝒪(1) as well, we get ∑_i=1^i(t)-11/γδ T_i^δ-1 = t^2-δ/γδ (2-δ) + 𝒪(1), in probability as t↑∞, if δ∈(1, 2]. Now recall that, in distribution for all t≥ 0, S(t) = Φ(i(t)-1) + A(t) (see (<ref>)). Thus, because T_i(t)-1 + A(t) = t, by definition, we get S(t) = Φ(i(t)) + A(t) = t + 𝒪(1) if δ >2 t - t^2-δ/γδ(2-δ)𝔼[L] + 𝒪(1) if δ∈ (3/2, 2] t - t^2-δ/γδ(2-δ)𝔼[L] + o(t^3/2-δ(log t)^2) if δ∈ (1, 3/2]. By definition of s(t) (see (<ref>) for δ>1), we get Z(S(t))-a(s(t))/b(s(t)) = Z(S(t))-a(S(t))/b(S(t))·b(S(t))/b(s(t)) + a(S(t))-a(s(t))/b(s(t)) ⇒Γ, by Assumption (A2_δ). This concludes the proof because X(t) = Z(S(t)) in distribution for all t≥ 0 (see Section <ref>). §.§ Preliminary lemmas We start with the following preliminary lemmas: For all δ>1, if 𝔼[L^2]<∞, then, for all ε>0, almost surely for all i large enough, L_i/T_i≤ε. We prove this using Borel-Cantelli's lemma. Indeed, first note that, by the strong law of large numbers, almost surely for all i large enough, T_i≥ i𝔼[L]/2. Also, for all i≥ 1, for all ε>0, ℙ(L_i ≥ε (i𝔼[L]/2)) ≤𝔼[L^2]/ε^2 (i𝔼[L]/2)^2, which is summable. Thus, Borel-Cantelli's lemma implies that, almost surely for all i large enough, L_iT_i^-1≤ L_i (i𝔼[L]/2)^-1<ε, as desired. For all δ>1, almost surely ∑_i≥ 1L^2_i/T_i-1^δ<∞. First note that, by the strong law of large numbers, T_i ∼ i𝔼[L] almost surely as i↑∞. Thus, ∑_i≥ 1 L^2_i T_i-1^-δ converges if and only if ∑_i≥ 1 L^2_i i^-δ does. Now, we write ∑_i≥ 1 L^2_i i^-δ = 𝔼[L] ∑_i≥ 1 i^-δ + ∑_i≥ 1 (L^2_i-𝔼[L^2]) i^-δ. The first sum above is convergent because δ >1. For the second sum, note that (M_n := ∑_i=1^n (L^2_i-𝔼[L^2]) i^-δ)_n≥ 1 is a martingale whose quadratic variation satisfies ⟨ M⟩_n =∑_i=1^n Var(L^2)/i^2δ≤𝔼[L^4] ∑_i≥ 11/i^2δ<∞, because δ>1. Almost surely as i↑∞, ∑_i=1^n 1/T_i^δ-1 = 𝒪(1) if δ>2 n^2-δ/(2-δ)𝔼[L]^δ-1 + 𝒪(1) if δ∈ (3/2, 2] n^2-δ/(2-δ)𝔼[L]^δ-1 + 𝒪(n^3/2 - δ√(log n)) if δ∈ (1, 3/2]. Almost surely as i↑∞, ∑_i=1^n e^-γδ L_i T_i^δ-1/T_i^δ-1 = 𝒪(1). First note that, for all δ>2, ∑_i=1^n e^-γδ L_i T_i^δ-1/T_i^δ-1≤∑_i=1^n 1/T_i^δ-1 <∞, by Lemma <ref>, which concludes the proof. We first note that, for all α≥ 1, for all x≥ 0, F_α(x):=x^αe^-x≤ (α/e)^α. 
Indeed, for all x≥ 0, F'_α(x) = (α - x)x^α-1e^-x and thus F reaches its maximum at x= α and this maximum is F(α) = (α/e)^α, as claimed. Thus, for all α≥ 1, ∑_i=1^n e^-γδ L_i T_i^δ-1/T_i^δ-1≤(α/e)^α∑_i=1^n 1/(γδ L_i)^α T_i^(δ-1)(α+1) For all δ>1, for all x∈[0,1], 1-δ x≤ (1-x)^δ≤ 1-δ x+δ(δ-1) x^2 §.§ Proof of Proposition <ref> To prove Proposition <ref>, we first write Φ(n) = ∑_i=1^n (F_i 1_i≺ n-𝔼_ L[F_i 1_i≺ n])+ ∑_i=1^n 𝔼_ L[F_i 1_i≺ n]. We treat the two sums in two separate lemmas: Almost surely as n↑∞, ∑_i=1^n (F_i 1_i≺ n-𝔼_ L[F_i 1_i≺ n]) = 𝒪(1). Given L, we let (B_i)_i≥ 1 be a sequence of independent Bernoulli-distributed random variables of respective parameters W_i/W̅_i, independent of (F_i)_i≥ 1. Recall that, by Lemma <ref>, for all n≥ 1, ( 1_i≺ n)_1≤ i≤ n = (B_i)_1≤ i≤ n. Also note that, by definition, conditionally on L, (M_n :=∑_i=1^n (F_i 1_i≺ n-𝔼_ L[F_i 1_i≺ n]))_n≥ 0 is a martingale. In the rest of the proof, we show that its quadratic variation satisfies ⟨ M ⟩_n= ∑_i≥ 1Var_ L(F_iB_i)<∞. For all i≥ 1, by (<ref>), Var_ L(F_i B_i) = W_i/W̅_i𝔼_ L[F_i^2]-(W_i/W̅_i)^2 𝔼_ L[F_i]^2 By definition of F_i (see (<ref>)) in the first equality, and integration by parts in the second, W_i𝔼_L[F_i^2] =∫_0^L_i x^2 γδ(T_i-1+x)^δ-1e^γ(T_i-1+x)^δdx = [x^2e^γ(T_i-1+x)^δ]_0^L_i - ∫_0^L_i 2x e^γ(T_i-1+x)^δdx = L_i^2 W̅_i - ∫_0^L_i 2x e^γ(T_i-(L_i-x))^δdx, because W̅_i = e^γ T_i^δ, by definition (see (<ref>)). We now use the fact that, for all x∈[0,1], (1-x)^δ≥ 1-δ x, and get W_i𝔼_ L[F_i^2] ≤ L_i^2 W̅_i - ∫_0^L_i 2x exp(γ T_i^δ(1-δ·L_i-x/T_i))dx = L_i^2 W̅_i - 2W̅_i ∫_0^L_i x exp(-γδ T_i^δ-1(L_i-x))dx. Using integration by parts again, we get ∫_0^L_i x exp(-γδT_i^δ-1(L_i-x))dx = [x exp(-γδT_i^δ-1(L_i-x))/γδT_i^δ-1]_0^L_i - 1/γδT_i^δ-1∫_0^L_iexp(-γδT_i^δ-1(L_i-x))dx = L_i/γδT_i^δ-1 - 1-e^-γδL_i T_i^δ-1/(γδT_i^δ-1)^2. This implies W_i𝔼_ L[F_i^2]/W̅_i≤ L_i^2 - 2L_i/γδ T_i^δ -1 + 1/(γδ T_i^δ-1)^2. Similarly, W_i𝔼_L[F_i] = ∫_0^L_i x γδ(T_i-1+x)^δ-1e^γ(T_i-1+x)^δdx = [x e^γ(T_i-1+x)^δ]_0^L_i - ∫_0^L_i e^γ(T_i-1+x)^δdx = L_i W̅_i - ∫_0^L_i e^γ(T_i-1+x)^δdx = L_i W̅_i - 1/δ∫_T_i-1^δ^T_i^δ u^1/δ-1 e^γudu, where we have changed variables and set u = (T_i-1+x)^δ. We thus get W_i𝔼_ L[F_i]/W̅_i≥ L_i - 1/δW̅_i T_i-1^δ-1∫_T_i-1^δ^T_i^δe^γ udu = L_i - 1/γδ T_i-1^δ-1. Using (<ref>) and (<ref>), we thus get Var_L(F_i B_i) ≤L_i^2 - 2L_i/γδT_i^δ-1 + 1/(γδT_i^δ-1)^2 - ( L_i - 1/γδT_i-1^δ-1)^2 = 2L_i/γδ (1/T_i-1^δ-1-1/T_i^δ-1) + 1/(γδT_i^δ-1)^2 - 1/(γδT_i-1^δ-1)^2 ≤2L_i/γδT_i-1^δ-1 (1-(1-L_i/T_i)^δ-1) ≤2(δ-1)L_i^2/γδT_i-1^δ-1T_i ≤2(δ-1)L_i^2/γδT_i-1^δ, where we have used the fact that, for all x∈[0,1], (1-x)^δ-1≥ 1-(δ-1) x (since δ>1). We thus get that ∑_i≥ 1Var_ L(F_i B_i) ≤∑_i≥ 12(δ-1)L_i^2/γδ T_i-1^δ <∞, by Lemma <ref>. Thus, (<ref>) holds and implies that (M_n)_n≥ 1 converges almost surely as n↑∞, which concludes the proof. Almost surely as n↑∞, ∑_i=1^n 𝔼_ L[F_i 1_i≺ n] = T_n + 𝒪(1) if δ >2 T_n - ∑_i=1^n 1/γδ T_i^δ-1 + 𝒪(1) if δ∈ (3/2, 2] T_n - ∑_i=1^n 1/γδ T_i^δ-1 + o(n^3/2-δ(log n)^2) if δ∈ (1,3/2]. First note that, for all i≥ 1, 𝔼_ L[F_i 1_i≺ n] = W_i𝔼_ L[F_i]/W̅_i, because, given L, (F_i)_1≤ i≤ n is independent of ( 1_i≺ n), and because, by Lemma <ref>, 1_i≺ n is Bernoulli-distributed with parameter W_i/W̅_i. 
By definition of (F_i)_i≥ 1 (see Definition (<ref>)) in the first equality, and integration by parts in the second, we get W_i𝔼_L[F_i]/W̅_i = ∫_0^L_i γδx(T_i-1+x)^δ-1 e^-γ[T_i^δ-(T_i-1+x)^δ]dx = L_i - ∫_0^L_i e^-γ[T_i^δ-(T_i-1+x)^δ]dx = L_i - ∫_0^L_i e^-γT_i^δ [1 -(1-u/T_i)^δ]du, where we have changed variables and set u = L_i-x. This means that ∑_i=1^n𝔼_ L[F_i 1_i≺ n] = T_n - ∑_i=1^n ∫_0^L_ie^-γ T_i^δ [1 -(1-u/T_i)^δ]du. Thus, it only remains to show that ∑_i=1^n ∫_0^L_ie^-γ T_i^δ [1 -(1-u/T_i)^δ]du = 𝒪(1) if δ >2 ∑_i=1^n 1/γδ T_i^δ-1 + 𝒪(1) if δ∈ (3/2, 2] ∑_i=1^n 1/γδ T_i^δ-1 + o(n^3/2-δ(log n)^2) if δ∈ (1, 3/2], almost surely as n↑∞. By Lemma <ref>, because u≤ L_i≤ T_i almost surely, ∫_0^L_i e^-γδu T_i^δ-1du ≤∫_0^L_i e^-γT_i^δ [1 -(1-u/T_i)^δ]du ≤∫_0^L_i e^-γδu T_i^δ-1(1-(δ-1)u/T_i)du ≤∫_0^L_i e^-γδu T_i^δ-1(1-(δ-1)L_i/T_i)du.In the last inequality, we have used the fact that u≤ L_i and δ>1. This implies 1-e^-γδ L_i T_i^δ-1/γδ T_i^δ-1≤∫_0^L_ie^-γ T_i^δ [1 -(1-u/T_i)^δ]du ≤1/γδ T_i^δ-1(1-(δ-1)L_i/T_i). Note that, by Lemma <ref>, almost surely, L_i/T_i<1/2(δ-1) for all i large enough. Using the fact that 1/1-u≤ 1+2u for all u∈[0,1/2], we thus get that, almost surely for all i large enough, 1-e^-γδ L_i T_i^δ-1/γδ T_i^δ-1≤∫_0^L_ie^-γ T_i^δ [1 -(1-u/T_i)^δ]du ≤1/γδ T_i^δ-1+2(δ-1)L_i/γδ T_i^δ. This implies that -∑_i=1^n 2(δ-1)L_i/γδ T_i^δ≤∑_i=1^n 1/γδ T_i^δ-1 - ∑_i=1^n ∫_0^L_ie^-γ T_i^δ [1 -(1-u/T_i)^δ]du ≤∑_i=1^n e^-γδ L_i T_i^δ-1/γδ T_i^δ-1. By Lemma <ref>, the left-hand side is 𝒪(1) as n↑∞. Furthermore, by Lemma <ref>, ∑_i=1^n 1/(γδ T_i^δ-1)<∞ if δ>2. Thus, to prove (<ref>), it is enough to prove that ∑_i=1^n e^-γδ L_i T_i^δ-1/γδ T_i^δ-1 = 𝒪(1) if δ>3/2 o(n^3/2-δ(log n)^2) if δ∈ (1,3/2], almost surely as n↑∞. If δ>2, this is implied by Lemma <ref>. For δ∈ (1, 2], we first use the strong law of large numbers to write that T_i≥ i𝔼L/2 almost surely for all i large enough, and thus ∑_i=1^n e^-γδ L_i T_i^δ-1/γδ T_i^δ-1 = 𝒪(∑_i=1^n e^-γδ L_i (i𝔼L/2)^δ-1/γδ (i𝔼L/2)^δ-1) = 𝒪(∑_i=1^n e^-γδ L_i (i𝔼L/2)^δ-1/i^δ-1), almost surely as n↑∞. First note that (M_n := ∑_i=1^n e^-γδ L_i (i𝔼L/2)^δ-1i^1-δ)_n≥ 0 is a martingale whose quadratic variation satisfies ⟨ M⟩_n = ∑_i=1^n 𝔼[e^-2γδ L_i (i𝔼L/2)^δ-1]/i^2(δ-1). If δ>3/2, then 2(δ -1)>1 implying that ⟨ M⟩_n≤∑_i≥ 11/i^2(δ-1)<∞, and thus that (M_n)_n≥ 0 converges almost surely as n↑∞. This, together with (<ref>), concludes the proof of (<ref>) in the case when δ∈ (3/2, 2]. For δ∈ (1, 3/2], we distinguish two cases: First assume that lim_n↑∞⟨ M⟩_n<∞. In his case, M_n converges almost surely as n↑∞, which, together with (<ref>), implies that ∑_i=1^n e^-γδ L_i T_i^δ-1/γδ T_i^δ-1 = 𝒪(1) = o(n^3/2-δ(log n)^2), as claimed in (<ref>). If lim_n↑∞⟨ M⟩_n = ∞, then by, e.g. <cit.>, almost surely as n↑∞, M_n = o(√(⟨ M⟩_n)(log n)^2). By (<ref>), ⟨ M⟩_n≤∑_i=1^n 1/i^2(δ-1) = 𝒪(n^3-2δ), which gives that, almost surely as n↑∞, M_n = o(n^3/2-δ(log n)^2). Together with (<ref>), this implies that ∑_i=1^n e^-γδ L_i T_i^δ-1/γδ T_i^δ-1 = o(n^3/2-δ(log n)^2), which concludes the proof of (<ref>). Note that (<ref>), together with (<ref>), implies (<ref>). To conclude the proof of (<ref>), it only remains to prove that, for all δ>2, almost surely as n↑∞, ∑_i=1^n 1/γδ T_i^δ-1 = 𝒪(1) This is true because T_i ∼ i𝔼L almost surely as i↑∞. 
By Lemmas <ref> and <ref>, almost surely as n↑∞, Φ(n) = ∑_i=1^n (F_i 1_i≺n-𝔼_L[F_i 1_i≺n]) + ∑_i=1^n 𝔼_L[F_i 1_i≺n], which equals T_n + 𝒪(1) if δ>2; T_n - ∑_i=1^n 1/(γδ T_i^δ-1) + 𝒪(1) if δ∈(3/2, 2]; and T_n - ∑_i=1^n 1/(γδ T_i^δ-1) + o(n^3/2-δ(log n)^2) if δ∈(1, 3/2], as claimed in Proposition <ref>.
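As with the δ<1 case, the statement can be illustrated numerically. The sketch below simulates one trajectory with L_i i.i.d. exponential with mean 1 and δ = 1.8, again using the Bernoulli(W_i/W̅_i) representation of the indicators and exact sampling of F_i; the difference between Φ(n) and T_n - ∑_i≤n 1/(γδT_i^δ-1) should then stay of order one instead of growing with n. All parameter values are illustrative assumptions.

```python
# Numerical illustration of the delta > 1 regime (assumes L ~ Exp(1), gamma = 1, delta = 1.8).
import numpy as np

rng = np.random.default_rng(1)
gamma, delta, n = 1.0, 1.8, 5000

L = rng.exponential(1.0, n)
T = np.cumsum(L)
Tprev = np.concatenate(([0.0], T[:-1]))
a = gamma*(T**delta - Tprev**delta)
p = -np.expm1(-a)
u = rng.random(n)
F = (Tprev**delta + (a + np.log(u + (1 - u)*np.exp(-a)))/gamma)**(1/delta) - Tprev

Phi = np.cumsum(F*(rng.random(n) < p))                 # Phi(1), ..., Phi(n)
approx = T - np.cumsum(1.0/(gamma*delta*T**(delta - 1)))
print(np.abs(Phi - approx)[::1000])                    # should remain of order one
```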
http://arxiv.org/abs/2409.03693v1
20240905164533
Phonon-mediated quantum gates in trapped ions coupled to an ultracold atomic gas
[ "Lorenzo Oghittu", "Arghavan Safavi-Naini", "Antonio Negretti", "Rene Gerritsma" ]
quant-ph
[ "quant-ph", "cond-mat.quant-gas" ]
,,, § ABSTRACT We study the dynamics of phonon-mediated qubit-qubit interactions between trapped ions in the presence of an ultracold atomic gas. By deriving and solving a master equation to describe the combined system, we show that the presence of the atoms causes the quantum gate quality to reduce because of motional decoherence. On the other hand, we calculate that the gas may be used to keep the ion crystal cold in the presence of external heating due to electric field noise. We show that tuning the atom-ion scattering length allows to tune the cooling rate of the ions and would make it possible to temporarily reduce the effects of the gas during a quantum gate while keeping the ions cold over long timescales. In this way, the trapped ion quantum computer may be buffer gas cooled. The system may also be used for quantum-enhanced measurements of the atom-ion interactions or properties of the atomic bath. Phonon-mediated quantum gates in trapped ions coupled to an ultracold atomic gas Lorenzo Oghittu^1,2, Arghavan Safavi-Naini^1,2, Antonio Negretti^3 and Rene Gerritsma^1,4 September 9, 2024 ============================================================================================= § INTRODUCTION Over the last decade, rapid progress has occurred in controlling laboratory systems that explore the quantum dynamics of mixtures of ultracold atoms and trapped ions <cit.>. These systems offer new opportunities in studying quantum impurity physics, ultracold collisions and quantum chemistry <cit.>. On the theoretical side, efforts have been made to study and characterize ionic polarons <cit.> and bath-mediated interactions between charged impurities <cit.>. Moreover, systems involving ionic impurities in ultracold gases have been proposed as quantum simulators and sensors <cit.>. Combining atom-ion systems with precise quantum control of individual trapped ions makes it possible to study the decoherence of non-classical states of ion qubits and motion <cit.> as well as phonon-mediated interactions between ion qubit states <cit.> inside a bath of ultracold atoms. These mixed atom-ion systems may find applications in quantum-enhanced sensing of atom-ion interactions <cit.> and the properties of the atomic bath <cit.>. Furthermore, the ultracold atomic gas may be used to buffer gas cool the trapped ions, with recently reported buffer gas cooling even outperforming laser (Doppler) cooling <cit.>. The discovery of atom-ion Feshbach resonances has greatly enhanced the possibilities of atom-ion combined quantum systems <cit.>. In particular, these resonances allow for tuning the atom-ion interactions by simple tuning of an external magnetic field, in analogy to the case of Feshabch resonances between neutral atoms <cit.>. In this article, we consider the case where ionic qubits are coupled via laser-driven phonon-mediated interactions inside a quantum bath of ultracold atoms. We study how the atoms change the dynamics of this system and affect the quality of quantum gates between these qubits. We find that the main source of error occurs due to decoherence in the ion motion in phase space during the gate. We consider the benefits of the atomic bath to keep the ions cool in the presence of an external heating source due to electric field noise. We study the role of the atom-ion s-wave scattering length - which may be tuned using Feshbach resonances. 
We find that it is possible to switch the interactions in such a way, that either the adverse effects of the bath on the quantum gate, or the negative effects of heating may be mitigated. These results show that it is possible to buffer gas cool the trapped ion quantum system <cit.>, while maintaining the quality of quantum operations in the system. The paper is organized as follows: in Sec. <ref> we describe the Hamiltonian of the system, which is used in Sec. <ref> to derive a master equation for the ion chain. In Sec. <ref> we provide some notions about the quantum gate and we define the process fidelity. In Sec. <ref> we comment the results obtained by numerically solving the master equation. Finally, conclusions and outlook are provided in Sec. <ref>. § SYSTEM DESCRIPTION We consider the scheme depicted in Fig. <ref>: it consists of a linear crystal of three ions, where the outer two ions experience a spin-dependent force that couples their (internal) spin states to the (external) vibrational phononic modes of the chain <cit.>. The entire chain is coupled to an external bath by immersing the central ion in a cloud of ultracold atoms. By keeping the two qubits out of the gas, we avoid possible spin changing collisions between atoms and ions such as spin relaxation. The latter, in which the total spin is not conserved, has been observed in atom-ion experiments <cit.>. Additionally, the center of mass mode is subjected to external heating due to electric field noise at room temperature. In this section we describe the Hamiltonian of the system. In Sec. <ref> we focus on the the ion chain and the spin-dependent force acting on the two outer ions, while in Sec. <ref> we describe the interaction of the central ion with the ultracold quantum gas. §.§ Ion chain and spin-dependent force We consider three identical ions in a linear Paul trap, and we assume the confinement along y and z to be much stronger than the confinement along the direction of the trap axes x. In such a configuration, the motion along y and z can be neglected, and the interplay between the harmonic confinement and the Coulomb repulsion along x allows to describe the three ions as a one-dimensional crystalline chain. The motion of the ions is therefore approximated as a displacement around their equilibrium position and decomposed into three normal modes of oscillation <cit.>. On top of the motional (phononic) degrees of freedom, the two outer ions are equipped with two internal spin states and will therefore be referred to as qubits. The phononic modes are coupled to the spin states by a spin-dependent force along the direction perpendicular to the chain. This is done with lasers in a Raman scheme with wave vector k_R <cit.>. When the displacement of the ions is much smaller than the wavelength of the lasers, the corresponding Hamiltonian can be expanded to lowest order in the Lamb-Dicke parameters η_μ=k_R√(ħ/(2Mω_μ)), with ω_μ the frequency of mode μ. In the frame rotating with the Raman beatnote ω_R, we obtain the following spin-phonon Hamiltonian: Ĥ_sp=-ħ/2∑_j=1,3∑_μΩ_μ b_j,μ(â_μ+â_μ^†)σ̂_j^z, where we performed the rotating wave approximation. Here, σ̂_j^z is the z Pauli matrix acting on spin j, while â_μ^(†) annihilates (creates) a phonon of mode μ, whose amplitude vector is defined by 𝐛_μ. The amplitude vectors for the case of three particles in 1D are reported in Tab. <ref>. Note that the sum over the ions in Eq. (<ref>) does not include the central ion j=2, as this is not affected by the spin-dependent force. 
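As an aside, the amplitude vectors and frequency ratios referred to above (the table itself is not reproduced in this text) follow from diagonalising the axial Hessian of three identical, harmonically confined ions. The sketch below recovers them numerically and also evaluates the mode length scales λ_μ=√(ħ/(2Mω_μ)) used further on, assuming ^174Yb^+ and ω_cm=2π·500 kHz as quoted later in the text; it is an illustration, not the authors' code.

```python
# Normal modes of a three-ion chain (assumed: equal masses, harmonic axial trap plus Coulomb).
import numpy as np

# dimensionless axial Hessian in units of M*omega_cm^2; equilibrium positions are in
# units of l = (e^2 / (4*pi*eps0*M*omega_cm^2))^(1/3)
u = np.array([-(5/4)**(1/3), 0.0, (5/4)**(1/3)])
A = np.diag([1 + 2*sum(1/abs(u[n] - u[p])**3 for p in range(3) if p != n) for n in range(3)])
for n in range(3):
    for m in range(3):
        if n != m:
            A[n, m] = -2/abs(u[n] - u[m])**3

w2, b = np.linalg.eigh(A)
print(np.sqrt(w2))   # mode frequencies / omega_cm: ~[1.00, 1.73, 2.41] (cm, stretch, wobble)
print(b.T)           # amplitude vectors b_mu up to sign: (1,1,1)/sqrt3, (1,0,-1)/sqrt2, (1,-2,1)/sqrt6

hbar, amu = 1.054571817e-34, 1.66053906660e-27
M = 174*amu                                   # 174Yb+
omega = np.sqrt(w2)*2*np.pi*500e3             # omega_cm = 2*pi*500 kHz (value quoted in the text)
print(np.sqrt(hbar/(2*M*omega)))              # lambda_mu in metres (a few nanometres)
```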
Finally, the driving strength of mode μ is quantified by Ω_μ=Fη_μ/k_R, where F is the magnitude of the spin-dependent force. In the same frame, the total Hamiltonian of the ion chain with spin-phonon coupling is given by Ĥ_chain=Ĥ_s+Ĥ_p+Ĥ_sp. The spin Hamiltonian is Ĥ_s=0, while the phonon Hamiltonian reads Ĥ_p=-ħ∑_μδ_μn̂_μ where δ_μ=ω_R-ω_μ is the detuning of the Raman beat note from the mode μ. In this work, we consider the parameters reported in Tab. <ref>. The time evolution corresponding to the interaction Hamiltonian in Eq. (<ref>) can be computed analytically. This is done by means of the Magnus expansion <cit.>, which truncates exactly at second order in this specific case. We have Û_I(t,0)= exp{-i/ħ∫_0^tdt' H_sp(t') -i/2ħ^2∫_0^tdt'∫_0^t'dt”H_sp(t')H_sp(t”)} = Û_sp(t)Û_ss(t) where H_sp(t) is the spin-phonon Hamiltonian in the interaction picture with respect to Ĥ_p [Note that we are combining the transformation to the frame rotating with ω_R with the one to the interaction picture with respect to Ĥ_p. This is equivalent to transforming to the interaction picture in the laboratory frame, provided that all Hamiltonian operators are expressed in the same frame.]. The last line shows that the total propagator factorizes as the product of the spin-phonon and spin-spin coupling contributions, which are given, respectively, by Û_sp(t)=exp{∑_j=1,3∑_μ(ϕ_j,μ(t)â_μ^†-ϕ_j,μ^*(t)â_μ)σ̂_j^z}, and Û_ss(t)=exp{-i∑_i,jJ_i,j(t)σ̂_i^zσ̂_j^z}. Here, we used the definitions ϕ_j,μ(t) =Ω_μ/2δ_μ b_j,μ(1-e^-iδ_μ t), J_i,j(t) =1/4∑_μΩ_μ^2b_j,μb_i,μ/δ_μ^2(δ_μ t-sin(δ_μ t)). The term in Eq. (<ref>) corresponds to a spin-dependent displacement and it is responsible for the entanglement between spin and motion. When |δ_μ|≫Ω_μ for all modes, however, we see from Eq. (<ref>) that ϕ_j,μ≈0 and the evolution reduces to the propagator in Eq. (<ref>). In this regime, the interaction Hamiltonian is effectively described by the spin-spin interaction in the exponent of Û_ss at all times. When only the mode ν is driven substantially, i.e. δ_ν≳Ω_ν, the other modes can be neglected and ϕ_j,ν=0 when t is an integer multiple of 2π/δ_ν. At these values of time, the motion is disentangled from the spin, meaning that the internal state does not depend on the external vibrational state <cit.>. Note however, that for larger vibrational state occupation the Lamb-Dicke approximation breaks down. This causes a reduction in the quantum gate quality. In practice, high fidelity quantum gate operation relies on sub-Doppler cooling <cit.> and minimal background heating <cit.>. §.§ Coupling to the quantum gas Atom-ion potential – The atom-ion interaction is represented asymptotically at large separation by the polarization potential V_ai(r)=-C_4/r^4, C_4=α e^2/21/4πϵ_0, where α is the static polarizability of the atom, e is the electronic charge and ϵ_0 the vacum permittivity. The characteristic length and energy scales of the polarization potential are R^⋆=(2μ C_4/ħ^2)^1/2 and E^⋆=ħ^2/[2μ(R^⋆)^2], respectively. For analytical purposes, we consider a regularization of the atom-ion polarization potential <cit.>: V_ai^r(𝐫)=-C_4r^2-c^2/r^2+c^21/(b^2+r^2)^2 where b and c are parameters that can be associated to the s-wave atom-ion scattering length a_ai and the number of two-body bound states supported by the potential. One useful property of Eq. 
(<ref>) is that its Fourier transform can be computed analytically, allowing the scattering amplitude in the first-order Born approximation to be written as follows: f(q)=c^2π(R^⋆)^2/(b^2-c^2)^2q{e^-bq[1+(b^4-c^4)q/4bc^2]-e^-cq}. This will be used in the following sections. Coupling Hamiltonian – The central ion (labeled with j=2) is immersed in an homogeneous ultracold gas of bosons or fermions confined in a box of length L, which is assumed to be much larger than any of the other lengths involved in the description. The bath Hamiltonian is given by Ĥ_bath=∫_ℝ^3d𝐫_bΨ̂^†_b(𝐫_b)[𝐩̂_b^2/2m+g/2Ψ̂^†_b(𝐫_b)Ψ̂_b(𝐫_b)]Ψ̂_b(𝐫_b) where the subscript b indicates that the quantities refer to the bath, Ψ̂^†_b and Ψ̂_b are the bath field operators and g is the atom-atom interaction strength. In the lab frame, the Hamiltonian accounting for the coupling between the ion chain and the atomic bath reads Ĥ_int=∫_ℝ^3d𝐫_bΨ̂^†_b(𝐫_b)V_ai(𝐫_b-𝐫̂_2)Ψ̂_b(𝐫_b), where 𝐫̂ is the position operator of the ion. Following the procedure and notation adopted in Ref. <cit.>, we express the fluctuations around the condensate in terms of Bogoliubov modes. Let us note that we will refer to phonons in the bath as atomic phonons, in order to avoid confusion with the phonons of the vibrational modes of the ion chain. In the bosonic case, we obtain the following Hamiltonian H_int^B=ħ∑_𝐪(Ŝ_𝐪b̂_𝐪+Ŝ_𝐪^†b̂_𝐪^†)+ħ∑_𝐪,𝐪'Ŝ_𝐪,𝐪'b̂_𝐪^†b̂_𝐪' where b̂_𝐪^† and b̂_𝐪 are the creation and annihilation operators of an atomic phonon excitation with momentum 𝐪 in the bosonic bath and obey the commutation relation [b̂_𝐪,b̂_𝐪'^†]=δ_𝐪,𝐪'. Moreover, we defined Ŝ_𝐪=√(n_0L^3)/ħe^i𝐪·𝐫̂c_𝐪, Ŝ_𝐪,𝐪'=1/ħe^i(𝐪'-𝐪)·𝐫̂c_𝐪'-𝐪, where n_0 is the density of the condensate, and the coefficient c_𝐪 is related to the scattering amplitude f(q) of the atom-ion potential by c_𝐪=-2πħ^2/μ L^3f(q). In the case of the regularized polarization potential that we consider in this work, f(q) is given in Eq. (<ref>). We note that the definitions in Eq. (<ref>) and the second sum in Eq. (<ref>) are obtained by considering that only atomic phonons with energy comparable to ħω_μ will couple to the ion motion. For instance, let us consider the center-of-mass frequency ω_cm=2π·500 kHz, corresponding to an atom velocity √(2ħω_cm/m)≃0.24 m/s. This is much faster than the speed of sound in the ^7Li condensate, which for a condensate density n_0=10^14 cm^-3 is on the order of 0.005 m/s. Therefore, we can safely assume a particle-like dispersion relation ħω_𝐪=ħ^2q^2/(2m) for the atomic phonons, and treat the bath as a non-interacting Bose gas. We refer to Ref. <cit.> for the general expressions before the particle-like approximation. In the case of a Fermi gas, the coupling Hamiltonian simply becomes H_int^F=ħ∑_𝐪,𝐪'Ŝ_𝐪,𝐪'f̂_𝐪^†f̂_𝐪' where the operators f̂_𝐪^† and f̂_𝐪 create and annihilate a fermion with momentum 𝐪 and obey the anticommutation relation {f̂_𝐪,f̂_𝐪'^†}=δ_𝐪,𝐪'. The derivation of the master equation (see Sec. <ref>), requires the system-bath Hamiltonian to be expressed in the same rotating frame of the system Hamiltonian. This is obtained by transforming Eq. (<ref>) and Eq. (<ref>) with the operator 𝒰̂_R(t)=exp[-iω_R∑_μn̂_μ t]. Let us consider the coupling to the condensed fraction of a Bose gas, which is given by the first sum in Eq. (<ref>). 
The transformation only acts on the phonon operators, and the Hamiltonian has the same form, but with the time-dependent operator Ŝ_𝐪(t)=√(n_0 L^3)/ħe^iq_xx̂_R(t), where we made use of the fact that the chain oscillates along the x direction and we defined the position operator of the central ion in the rotating frame as x̂_R(t)=∑_μb_2,μλ_μ(â_μ e^-iω_Rt+â_μ^† e^iω_Rt), with λ_μ=√(ħ/(2Mω_μ)) the length associated to mode μ. The coupling to the non condensed Bose gas, represented by the double sum in Eq. (<ref>), and to the Fermi gas, reported in Eq. (<ref>), are transformed to the rotating frame in an analogous manner. We finally remark that working in the frame rotating with ω_R allows the ion chain Hamiltonian to be time-independent, as long as the rotating-wave approximation holds. This simplifies the derivation of the master equation, although a time dependence is gained by the coupling to the bath. We refer to Sec. <ref> for more details. § MASTER EQUATION We describe the evolution of the system with a master equation <cit.>, in analogy to what has been done in Ref. <cit.>. Here, we only report the most important steps of the derivation, focusing on the differences with respect to the previous works. In Sec. <ref> we derive the contribution from the coupling to the quantum gas, while in Sec. <ref> the external heating term is considered. §.§ Master equation for the ion chain coupled to the ultracold gas Our open system is represented by the ion chain with spin-phonon coupling, whose Hamiltonian Ĥ_chain is defined in Sec. <ref> [see Eq. (<ref>) and (<ref>)]. As a starting point for the derivation, we consider the Redfield equation: d/dtρ_R(t)=-∫_0^tdt'/ħ^2 Tr_b{[H_int(t),[H_int(t'),ρ_R(t)⊗B̂_0]]} where ρ_R and H_int are, respectively, the reduced density matrix of the system and the system-bath interaction Hamiltonian in the rotating frame and in the interaction picture with respect to Ĥ_chain+Ĥ_bath. Moreover, B̂_0 is the density matrix of the bath and Tr_b{…} represents the trace over the bath degrees of freedom. We note that Eq. (<ref>) relies on the Markov and Born approximations. Let us now focus on the case where the bath is a Bose-Einstein condensate. By explicitly writing H_int and performing the trace, we get d/dtρ_R(t)=-∫_0^tdt' ∑_𝐪{(n_𝐪+1)(e^-iω_𝐪(t-t') S_𝐪(t)S_𝐪^†(t')ρ_R(t)+H. c.) + n_𝐪(e^-iω_𝐪(t-t')ρ_R(t)S_𝐪^†(t')S_𝐪(t)+H. c.)} where ħω_𝐪 is the energy of the atomic phonons in the condensate, H. c. indicates the Hermitian conjugate, S_𝐪 represents the operator in Eq. (<ref>) when transformed to the interaction picture and n_𝐪=[e^β(ħω_𝐪-μ_𝐁)-1]^-1 is the Bose-Einstein occupation number obtained from the thermal averages over the state of the bath B̂_0 Tr_b{b̂_𝐪^†b̂_𝐪'B̂_0 }=n_𝐪δ_𝐪,𝐪', with μ_B the chemical potential of the gas at temperature T and β=1/(k_BT) (k_B the Boltzmann constant). The next step consists of transforming the master equation back to the Schrödinger picture and perform the Lamb-Dicke approximation for small values of q_xx̂. As an example, we consider the first commutator in Eq. (<ref>), from which we get e^iq_xx̂_R(t)e^iq_xx̂_R(t',-τ)ρ̂_R(t) ≃x̂_R(t)x̂_R(t',-τ)ρ̂_R(t) -1/2(x̂_R(t))^2ρ̂_R(t) where we neglected the term proportional to q_x as this vanishes after the sum over 𝐪 due to spherical symmetry of the bath. Here, we defined τ=t-t' and x̂_R(t,-τ)=e^-iĤ_chain-τ x̂_R(t) e^iĤ_chainτ. Because of the time-independence of Ĥ_chain (see Sec. <ref>), it is straightforward to calculate the product in Eq. 
(<ref>) by means of the Baker-Campbell-Hausdorff identity [Baker-Campbell-Hausdorff identity: e^iĜλÂe^-iĜλ=Â+iλ[Ĝ,Â]+(iλ)^2[Ĝ,[Ĝ,Â]]/2!+…]. The explicit form of x̂_R(t,-τ) is reported in Eq. (<ref>). We refer the interested reader to Ref. <cit.> for more details and for the case of a Fermi gas. We now report the master equation for the ion chain coupled to a Bose-Einstein condensate in the frame rotating with ω_R: d/dtρ̂_R=- i/ħĤ_chainρ̂_R -Γ∑_μ{ (n_q_μx̂_R(α̂_μ+α̂^†_μ)ρ̂_R-ρ̂_R(α̂_μ+α̂^†_μ)+x̂_Rα̂_μρ̂_R-ρ̂_Rα̂_μ^†) +Ω_μ/2δ_μ ∑_j=1,3b_j,μ(cos(ω_R t)x̂_Rσ̂_j^zρ̂_R-ρ̂_Rσ̂_j^z2h_μ^(n_q)+e^-iω_Rtx̂_Rσ̂_j^zρ̂_Rh_μ-e^iω_Rtx̂_Rρ̂_Rσ̂_j^zh_μ)}, where we omitted time dependencies for clarity and we defined Γ=2π m ħ n_0/(3μ^2), α̂_μ(t)=b_2,μλ_μ|f(q_μ)|^2q_μ^3â_μ e^-iω_Rt h_μ^(n_q)=b_2,μλ_μ((n_q_μ)|f(q_μ)|^2q_μ^3-(n_q_R)|f(q_R)|^2q_R^3) and q_μ(R)=√(2mω_μ(R)/ħ). The first line of Eq. (<ref>) is the unitary evolution of the system in absence of the bath, while the second and third lines describe the dissipative contribution due to the coupling to the Bose-Einstein condensate. §.§ Contribution from external heating The external heating is described as a coupling of the center-of-mass mode to an effective bath with mean occupation number N̅ <cit.>. The corresponding contribution to the master equation reads γ/2(N̅+1)(2â_cmρ̂_Râ_cm^†-â_cm^†â_cmρ̂_R-ρ̂_Râ_cm^†â_cm) +γ/2N̅(2â_cm^†ρ̂_Râ_cm-â_cmâ_cm^†ρ̂_R-ρ̂_Râ_cmâ_cm^†), where we omitted the time dependencies. For a bath at room temperature, we have N̅≫1, and the prefactor γ(N̅+1)≈γN̅ can be obtained empirically from experiments. § QUANTUM GATE The spin-dependent Hamiltonian described in Sec. <ref> can be used to perform a geometric phase gate. In this section, we briefly summarize some of the basic features of this operation and we define the process fidelity. §.§ Gate behavior Given an initial 4×4 matrix ρ̂_0 describing the state of the two qubits, the final state after the gate operation in the case of an ideal process represented as Λ is given in the {|↑⟩, |↓⟩} basis by Λ(ρ̂_0)=𝒰̂^†(t_gate) ρ̂_0 𝒰̂(t_gate), with 𝒰̂(t_gate)=Diag{1,-i,-i,1}. For instance, let us consider the initial state to be |++⟩≡|+⟩_1⊗|+⟩_3. The corresponding density matrix ρ̂_0=|++⟩⟨++| in the basis {|↑⟩, |↓⟩} is the 4×4 matrix where all the elements are equal to 1/4. According to Eq. (<ref>), the final density matrix is Λ(ρ̂_0)=[ 1/4 i/4 i/4 1/4; -i/4 1/4 1/4 -i/4; -i/4 1/4 1/4 -i/4; 1/4 i/4 i/4 1/4 ], which corresponds to the state (|++⟩-i|–⟩)/√(2), i.e., the maximally entangled state. §.§ Process fidelity The scenario explained in Sec. <ref> refers to an ideal gate. However, the coupling to the external heating and to the quantum gas introduces an error. To quantify how close the non-ideal process, indicated as Γ, reproduces the ideal one, we define the process fidelity as <cit.> F(Λ,Γ)=1/d^2∑_i=1^d^2Tr{Λ(ρ̂_i^†)Γ(ρ̂_j)}, where {ρ̂_i} is a complete set of mutually orthonormal operators in the Hilbert space of the two qubits with dimension d=4. In the case of a two-qubit gate, a natural choice is to define each operator ρ̂_i as the one of the possible tensor products σ̂_1^ξ⊗σ̂_3^ζ, where σ̂_j^ξ (ξ=0,x,y,z) are the Pauli matrices in the subspace of ion j. Note that these operators do not represent, in general, a quantum state. Therefore, in the case of an experiment, a different definition of the fidelity has to be considered. We refer the interest reader to Ref. <cit.> for a complete treatment of the topic. 
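The statements above are easy to reproduce numerically. The sketch below applies the ideal propagator 𝒰̂(t_gate)=Diag{1,-i,-i,1} (in the ordering |↑↑⟩, |↑↓⟩, |↓↑⟩, |↓↓⟩, which is our assumption) to |++⟩ and checks that the result is the maximally entangled state (|++⟩-i|--⟩)/√2, and then evaluates the process fidelity for Γ = Λ, reading ρ̂_j as ρ̂_i and normalising the operator basis as (σ̂_1^ξ⊗σ̂_3^ζ)/2 so that it is orthonormal; both checks should return 1.

```python
# Ideal phase gate on |++> and process fidelity of the ideal channel with itself.
import numpy as np

U = np.diag([1, -1j, -1j, 1])                      # U(t_gate) in the {up, down} basis
Lam = lambda rho: U.conj().T @ rho @ U             # ideal process Lambda

plus = np.array([1, 1])/np.sqrt(2)
minus = np.array([1, -1])/np.sqrt(2)
pp = np.kron(plus, plus)                           # |++>
mm = np.kron(minus, minus)                         # |-->
rho0 = np.outer(pp, pp)                            # |++><++|

psi = (pp - 1j*mm)/np.sqrt(2)                      # maximally entangled target state
print(np.real(psi.conj() @ Lam(rho0) @ psi))       # -> 1.0

s = [np.eye(2), np.array([[0, 1], [1, 0]]), np.array([[0, -1j], [1j, 0]]), np.array([[1, 0], [0, -1]])]
basis = [np.kron(a, b)/2 for a in s for b in s]    # orthonormal two-qubit operator basis
fid = sum(np.trace(Lam(r.conj().T) @ Lam(r)) for r in basis).real/16
print(fid)                                         # -> 1.0 for Gamma = Lambda
```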
§ RESULTS In this section, we report the results obtained by solving the master equation described in Eq. (<ref>). Specifically, we study the dynamics of the system by deriving a set of equations for the expectation value of the position and momentum associated to the center of mass mode, as well as for the expectation value of the kinetic energy. Moreover we study how the process fidelity of the gate described in Sec. <ref> is affected by the heating and the ultracold gas. Unless stated differently, we consider the frequencies in Tab. <ref> and we fix the atom-ion s-wave scattering length a_ai≃ R^⋆. The ions are ^174Yb^+, while the bath is an ultracold gas of either ^6Li (Fermi) or ^7Li (Bose) with density n=10^13 cm^-3 and temperature T=200 nK. We remark that the choice of the frequencies would allow us to neglect the stretching and wobbling mode in the description of the gate operation, strongly reducing the computational effort. In fact, δ_μ≫Ω_μ for said modes (see Sec. <ref>). The gate, therefore, relies on the entanglement of the spin with the center-of-mass mode and the gate time is given by 2π/δ_cm=0.25 ms. However, the wobbling mode has to be taken into account in order to properly describe the motion of the central ion inside the bath. The sums over the modes μ in all of the derived equations will hence be restricted to μ=cm,wb. §.§ Cooling dynamics We consider the temperature T_chain of the ion chain and we study how this is affected by the coupling to the ultracold quantum gas and to the external heating. The temperature is defined as T_chain=1/k_B1/2M∑_μ⟨p̂_μ^2⟩, where p̂_μ=i√(ħ Mω_μ/2)(â_μ^†-â_μ) is the momentum operator corresponding to mode μ. Note that, in neglecting the stretching mode, we are not considering its contribution to the chain temperature. However, this is uncoupled from the other modes and is not affected by either heating or the quantum gas, meaning that it would add to the total temperature as a mere shift. The spin of the two outer ions is set to the state |↑↑⟩, while the central ion is initialized in the approximate thermal motional state c_0|0⟩⟨0|_μ+c_1|1⟩⟨1|_μ for both the center-of-mass and wobbling mode, with c_0=0.9 and c_1=0.1. Here, |n⟩_μ represents the n-th oscillator state of mode μ, which contributes to the initial temperature with c_nħω_μ(n+1/2)/(2k_B), where the factor 1/2 is due to the equipartition theorem. In Fig. <ref>(a), we compare the behavior of the temperature in different scenarios. We first note that, in absence of spin-dependent force acting on the outer ions, the time evolution of T_chain is monotonous (dashed lines). On the other hand, when the spin-dependent force explained in Sec. <ref> is considered, each mode is driven with frequency ω_R and the ion temperature oscillates accordingly. In particular, we observe that this oscillation has a slow and a fast component [see Fig. <ref>(b)], corresponding to the momentum of the center-of-mass mode oscillating with frequencies δ_cm/2 and ω_R+δ_cm/2, respectively <cit.>. We remark that the wobbling mode has an analogous behavior, but its contribution to the oscillation of T_chain is not perceptible due to the much weaker driving strength compared to the center-of-mass mode. In the case of unitary evolution (gray), the system occupies the same state after every gate cycles. Hence, the temperature retrieves its initial value at every multiple of t_gate. In the presence of heating (green) a similar behavior is observed. 
In this case, however, the temperature increases linearly with a rate proportional to γN. Interestingly, when the chain is coupled to the ultracold quantum gas, the latter guarantees the cooling of the chain. The dashed blue line shows that the ion temperature without spin-dependent force and without heating converges to a final value which is given by the ground state energy of the ion in the trap. In our case, it corresponds to ħ(ω_cm+ω_wb)/(4k_B)≃20.45 μK. Note that, if micromotion was considered, the ion would converge to a higher temperature <cit.>. When both the heating and ultracold gas are considered (red), a dynamical equilibrium is reached where the ion temperature stabilizes at a value which is higher than the ground state temperature. We observe that even in the case of high heating rate compared to typical values in experiments, the cooling action of the gas is remarkably fast. For the parameters considered in Fig. <ref> and without spin-dependent force, the final temperature is reached in around 0.5 ms, i.e. the time of two gate cycles. The blue and red shaded areas in Fig. <ref>(a) show that the coupling to the gas damps the oscillation due to the spin-dependent driving force. In other words, the system behaves effectively as a driven damped harmonic oscillator and converges to a steady state where the temperature oscillates with frequency ω_R and constant amplitude. §.§ Effect on process fidelity The error induced on the gate operation by the coupling to the ultracold gas can be understood by observing the phase space behavior of the system. As shown in Fig. <ref>, the unitary evolution (gray) in the frame rotating with ω_R corresponds to a circle in the phase space of the center-of-mass mode. On the other hand, the presence of the gas (blue) deforms the trajectory and does not allow the circle to close after the gate time t_gate=0.25 ms, indicating decoherence of motional states. Nevertheless, this behavior can be mitigated by properly tuning the atom-ion scattering length. In Fig. <ref>, we observe that for a_ai≃-1.3 R^⋆, the effect of the gas on the process fidelity is strongly suppressed. The same behavior is observed for both a ^7Li Bose-Einstein condensate (blue circles) as well as a spin-Polarized ultracold Fermi gas of ^6Li atoms (orange triangles), although a better fidelity can be achieved in the bosonic environment. This difference may be due to the stronger coupling of the ion to a fermionic bath, which is also responsible for the lower temperatures predicted by our model for a Paul-trapped ion immersed in ^6Li compared to ^7Li <cit.>. Moreover, we note that the interaction tune-out value of the scattering length does not correspond to a_ai=0 due to the interplay between molecular and confinement-induced resonances <cit.>. This result, combined with the cooling dynamics presented in Sec. <ref>, may have interesting applications in experiments. There, the atom-ion scattering rate can be controlled by tuning a magnetic field close to a Feshbach resonance <cit.>, making it possible to alternate from the regime were the coupling to the bath is suppressed and gate operations are performed, to the one where the effect of the ultracold gas is maximized and the ions are cooled. § CONCLUSIONS AND OUTLOOK We studied phonon-mediated interactions between ionic qubits coupled to an ultracold quantum gas and external heating. We derived a master equation for the reduced density matrix of the chain. 
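The closed phase-space loop of the unitary case can be made concrete with the spin-dependent displacement ϕ_j,cm(t) given earlier. In the sketch below, δ_cm follows from the quoted gate time 2π/δ_cm = 0.25 ms, the center-of-mass amplitudes are taken as b_1,cm = b_3,cm = 1/√3 for a normalised mode vector, and Ω_cm is an illustrative value rather than the one of Tab. <ref>. The loop returns to the origin at t_gate and the accumulated two-qubit phase J_13(t_gate) is printed; with the bath included, this loop no longer closes exactly, which is the phase-space picture of the motional decoherence discussed above.

```python
# Spin-dependent displacement of the centre-of-mass mode over one gate cycle.
import numpy as np

t_gate = 0.25e-3                           # 2*pi/delta_cm, as quoted in the text
delta = 2*np.pi/t_gate                     # detuning of the cm mode (rad/s)
Omega = 0.1*delta                          # assumed drive strength (illustrative)
b_cm = 1/np.sqrt(3)                        # cm amplitude on ions 1 and 3 (normalised mode)

t = np.linspace(0, t_gate, 400)
phi = Omega/(2*delta)*b_cm*(1 - np.exp(-1j*delta*t))
print(abs(phi[-1]))                        # ~0: the phase-space loop closes at t_gate
print(Omega/(2*delta)*b_cm)                # radius of the circle traced in the rotating frame

J13 = 0.25*Omega**2*b_cm*b_cm/delta**2*(delta*t_gate - np.sin(delta*t_gate))
print(J13)                                 # two-qubit phase accumulated over one loop
```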
We find that the presence of the atomic bath has a negative effect on the quality of quantum gates between the ion qubits. On the other hand, we calculate that the atomic bath can be used to keep the ion crystal cool in the presence of external heating due to electric field noise. Tuning of the atom-ion scattering length using a magnetic field allows to set the cooling rate of the atoms. In particular, the adverse effects during quantum gates may be limited while the buffer gas cooling may be maximized in between gates. This shows that buffer gas cooling is a viable technique for trapped ion quantum systems. The system offers further opportunities in studying the decoherence of non-classical states of ion motion and spin-motion entangled states in the presence of a fermionic or bosonic quantum bath. The system may be used to explore the crossover between classical and quantum dynamics <cit.> and the occurrence of classical and quantum chaos in mixtures of ultracold atoms and trapped ions <cit.>. Moreover, the coupling of the trapped ion atom mixture to qubits may allow for quantum-enhanced measurements of the properties of this system in analogy to e.g. Ref. <cit.>. There, a fragile spin-motion entangled state was used to detect the scattering of a single photon. The momentum transfer between an ultracold Li atom and an Yb^+ is comparable to that of scattering infrared photons such that the scheme could be extended to quantum enhanced detection of atom-ion scattering. § ACKNOWLEDGEMENTS This work was supported by the Dutch Research Council (Grant Nos. 680.91.120 and VI.C.202.051), (R.G.). A.S.N is supported by the Dutch Research Council (NWO/OCW), as part of the Quantum Software Consortium programme (project number 024.003.037), Quantum Delta NL (project number NGF.1582.22.030) and ENW-XL grant (project number OCENW.XL21.XL21.122). L.O. acknowledges the European Cooperation in Science & Technology (COST Action CA17113). § TIME DEPENDENCE OF ION POSITION Here we provide the explicit form of the operator defined in Eq. (<ref>), describing the motion of the central ion of the chain in the frame rotating with ω_R and in absence of the bath: x̂_R(t,-τ)=∑_μb_2,μλ_μ {â_μ e^-iδ_μτe^-iω_Rt+â_μ^† e^iδ_μτe^iω_Rt + Ω_μ/2δ_μ∑_j=1,3b_j,μσ̂_j[e^-iδ_μτe^-iω_R t+e^iδ_μτe^iω_R t-2cos(ω_R t)]}. We remark that this result relies on the Hamiltonian Ĥ_chain being independent of time, hence on the rotating wave approximation (Sec. <ref>). In case of a time-dependent chain Hamiltonian, the calculation would require a different procedure, as in the case of the Paul-trapped ion <cit.>.
http://arxiv.org/abs/2409.03276v1
20240905063827
Tensor network square root Kalman filter for online Gaussian process regression
[ "Clara Menzen", "Manon Kok", "Kim Batselier" ]
cs.LG
[ "cs.LG" ]
Delft]Clara [email protected], Delft]Manon [email protected], Delft]Kim [email protected] [Delft]Delft Center for Systems and Control, Delft University of Technology, Mekelweg 2, 2628 CD, Delft, the Netherlands Square root Kalman filtering, tensor network, Gaussian processes, recursive estimation. § ABSTRACT The state-of-the-art tensor network Kalman filter lifts the curse of dimensionality for high-dimensional recursive estimation problems. However, the required rounding operation can cause filter divergence due to the loss of positive definiteness of covariance matrices. We solve this issue by developing, for the first time, a tensor network square root Kalman filter, and apply it to high-dimensional online Gaussian process regression. In our experiments, we demonstrate that our method is equivalent to the conventional Kalman filter when choosing a full-rank tensor network. Furthermore, we apply our method to a real-life system identification problem where we estimate 4^14 parameters on a standard laptop. The estimated model outperforms the state-of-the-art tensor network Kalman filter in terms of prediction accuracy and uncertainty quantification. § INTRODUCTION In a time when data-driven AI models are trained on an exponentially growing amount of data, it is crucial that the models can be adapted to newly observed data without retraining from scratch. These online or recursive settings are present in many fields including system identification <cit.>, sensor fusion <cit.>, robotics <cit.>, and machine learning <cit.>. While Bayesian algorithms, like widely-used Gaussian processes (GPs) <cit.> are well-suited for an online setting, they are associated with potentially high computational costs. Standard GP regression using a batch of N observations has a cubic cost in N, i.e., 𝒪(N^3). The number of observations is growing in an online setting, so the cost increases each time step and can become a computational bottleneck. There are numerous parametric approximations to address scalability in batch settings, including sparse GPs <cit.> and reduced-rank GPs <cit.>, which both have a complexity of 𝒪(NM^2), M being the number of inducing inputs and basis function for the respective method. Structured kernel interpolation for sparse GPs <cit.> reduces the complexity further to 𝒪(N+DM^1+1/D), D being the number of input dimensions. Parametric approximations allow for a straightforward recursive update, where the posterior distribution from the previous time step is used as a prior for the current time step <cit.>. In this context, online GPs have been used, e.g., for GP state-space models <cit.>, rank-reduced Kalman filtering <cit.> and recursive sparse GPs <cit.>. In this paper, we consider the online parametric GP model given by y_t = ϕ (_t)^⊤_t + ϵ_t, ϵ_t∼𝒩(0,σ_y^2), _t-1∼𝒩(_t-1,𝐏_t-1), where y_t is a scalar observation at discrete time t, ϕ(·) are basis functions that map a D dimensional input vector _t to a feature space, _t∈ℝ^M are the parameters at time t, and σ_y^2 denotes the variance of the measurement noise ϵ_t which is assumed to be i.d.d. and zero-mean Gaussian. With (<ref>), the posterior distribution is computed each time step using the estimate _t-1 and covariance matrix _t-1 of the previous time step as a prior. We consider commonly used product kernels with a feature map given by ϕ(_t) = ϕ^(1)(_t) ⊗⋯⊗ϕ^(d)(_t) ⊗⋯⊗ϕ^(D)(_t), where ϕ^(d)(_t)∈ℝ^I with I being the number of basis functions in the dth dimension. 
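As a concrete illustration, the Kronecker-structured feature map in (<ref>) can be written out in a few lines. The NumPy sketch below is ours and is independent of the authors' Julia implementation; the polynomial one-dimensional basis is only an assumption for illustration, since the choice of ϕ^(d) is left generic at this point, and explicitly forming ϕ(_t) like this is of course only feasible for small D.

```python
import numpy as np
from functools import reduce

def kron_feature_map(x, basis_1d):
    """Product-kernel feature map phi(x) = phi^(1)(x) kron ... kron phi^(D)(x)."""
    factors = [basis_1d(x_d) for x_d in x]   # D vectors, each of length I
    return reduce(np.kron, factors)          # explicit vector of length I**D

# Illustrative 1-D basis (an assumption, not the kernel used in the paper):
poly_basis = lambda x_d: np.array([x_d**p for p in range(4)])   # I = 4
phi = kron_feature_map(np.array([0.3, -1.2, 0.5]), poly_basis)
print(phi.shape)   # (64,) = (I**D,) with I = 4, D = 3
```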
The resulting number of basis functions is M=I^D, growing exponentially with the input dimension D. Several tensor network (TN)- based methods have been proposed to break this curse of dimensionality and achieve a linear computational complexity in D. In the batch setting, <cit.> and <cit.> give solutions for the squared exponential and polynomial kernel, respectively. In the online setting, the state-of-the-art method is the tensor network Kalman filter (TNKF) <cit.>, where the Kalman filter time and measurement update are implemented in TN format. While the TNKF lifts the curse of dimensionality, it has a significant drawback. The TNKF requires a TN-specific rounding operation <cit.>, which can result in covariance update losing positive definiteness <cit.>, resulting in the divergence of the filter. This paper resolves this issue by computing the square root covariance factor in tensor train (TT) format instead. Our approximation represents the M× M square root covariance factor as a tensor train matrix (TTm). This is motivated by prior square root covariance factors of product kernels having a Kronecker product structure, which corresponds to a rank-1 TTm. In addition, work by <cit.> and <cit.> approximate the covariance matrix as a rank-1 TTm. This work generalizes the rank-1 approximation to higher ranks which results in better prediction accuracy and uncertainty quantification. We call our method the tensor network square root Kalman filter (TNSRKF). We show in experiments that the TNSRKF is equivalent to the standard Kalman filter when choosing full-rank TTs. In addition, we show how different choices of TT-ranks affect the performance of our method. Finally, we compare the TNSRKF to the TNKF in a real life system identification problem with 4^14 parameters, and show that contrary to the TNKF, our method does not diverge. § PROBLEM FORMULATION Similar to the TNKF, we build on standard equations for the measurement update of the Kalman filter given by _t = ϕ_t^⊤_t-1ϕ_t + σ_y^2 _t = _t-1ϕ_t _t^-1 _t = _t-1 + _t(y_t - ϕ_t^⊤_t-1) _t = (𝕀_M-_tϕ_t^⊤)_t-1(𝕀_M-_tϕ_t^⊤)^⊤ + σ_y^2_t^⊤_t, where _t denotes the innovation covariance and _t denotes the Kalman gain. Note that for a scalar measurement, _t is a scalar and _t a vector, whereas in the case of multiple measurements per time step, they are matrices. In this way, we recursively update the posterior distribution of the parametric weights from (<ref>), i.e., p(_t|_1:t). For product kernels with a feature map given in (<ref>), it is _t∈ℝ^I^D and _t∈ℝ^I^D× I^D. In this case, the Kalman filter suffers from the curse of dimensionality. The first tensor-based Kalman filter, the TNKF <cit.>, solved the curse of dimensionality and implements (<ref>)-(<ref>) in TT format, where the weights are represented as a TT and the covariance matrix as a TTm. During the updates, the algebraic operations in TT format increase the TT-ranks of the involved variables, according to <cit.>. To counteract the rank increase and keep the algorithm efficient, the TNKF requires an additional step called TT-rounding <cit.>. This SVD-based operation transforms the TT or TTm to ones with smaller TT-ranks. TT-rounding can result, however, in the loss of positive (semi-) definiteness. To avoid this issue, we implement the square root formulation of the Kalman filter (SRKF), as described e.g. in <cit.>, in TT format. 
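For reference, the measurement update above can be transcribed directly into dense linear algebra. The NumPy sketch below is a minimal version of those four equations for a scalar measurement (with the Joseph-form covariance update written as σ_y^2 k_t k_t^⊤); it is only feasible for small M and is exactly the computation that the TT-based filters reformulate.

```python
import numpy as np

def kf_measurement_update(w, P, phi, y, sigma_y2):
    """Scalar-measurement Kalman update (dense reference, not the TT algorithm)."""
    s = phi @ P @ phi + sigma_y2                 # innovation variance
    k = P @ phi / s                              # Kalman gain (vector)
    w = w + k * (y - phi @ w)                    # posterior mean
    A = np.eye(len(w)) - np.outer(k, phi)        # (I_M - k phi^T)
    P = A @ P @ A.T + sigma_y2 * np.outer(k, k)  # Joseph-form covariance update
    return w, P
```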
The SRKF expresses (<ref>)-(<ref>) in terms of the square root covariance factor, given by _t = [ (𝕀_M-_tϕ_t^⊤)_t-1 σ_y_t ], which consists of a concatenating matrices that increases the number of columns. For the next update, _t needs to be transformed back to its original size. In the SRKF, this is done by computing a thin QR-decomposition <cit.> of _t given by _t^⊤_(M+1)× M = 𝐐_t_(M+1)× M𝐑_t_M× M and replacing _t by 𝐑_t. The orthogonal 𝐐_t-factor can be discarded since _t = _t_t^⊤ = 𝐑_t^⊤𝐐_t^⊤𝐐_t_𝐈_M𝐑_t = 𝐑_t^⊤𝐑_t. In TT format, performing the QR-decomposition as in (<ref>) is not possible. We solve this issue by proposing an SVD-based algorithm in TT format that truncates _t back to its original size. § BACKGROUND ON TENSOR NETWORKS §.§ Tensor networks Tensor networks (TNs), also called tensor decompositions, are an extension of matrix decompositions to higher dimensions. In this paper, we use a specific architecture of TNs, called TTs <cit.> to approximate the weight vector's mean as discussed in Section <ref>, and a TT matrix (TTm) <cit.> to approximate the square root covariance factor, as discussed in Section <ref>. In this context, we denote TTs representing vectors as a lower-case bold letter, e.g. _t, and their components, called TT-cores, as capital calligraphic bold letters, e.g. W^(d). TT matrices are denoted by upper-case bold letters, e.g. _t and their corresponding TTm-cores as capital calligraphic bold letters, e.g. L^(d). §.§.§ Tensor train vectors As depicted in Fig. <ref>(a), a TT vector consists of interconnected three-way tensors, called TT-cores, visualized as nodes with three edges. Each edge corresponds to an index of a TT-core and connected edges are summations over the involved indices. Each TT-core is connected by two edges, called TT-ranks, to their neighbouring TT-cores, except for the first and last TT-core, which outer TT-ranks are by definition equal to one. For the purpose of this paper, consider a TT that represents the mean of the weight vector _t∈ℝ^M. The TT-cores, denoted by W_t^(1),⋯,W_t^(d),⋯,W_t^(D) are each of size R_d× I× R_d+1, where R_d and R_d+1 are the TT-ranks and I is the size of the non-connected edge such that . By definition R_1=R_D+1=1. Without the loss of generality, we use TT-cores with equal TT-ranks R_. The storage complexity of _t without TNs is 𝒪(I^D) and in TT format 𝒪(DIR_^2), where lower TT-ranks R_ will result in more efficient representations. An important characteristic of a TT for numerical stability is that it can be transformed into the site-d-mixed canonical format. Site-d-mixed canonical format <cit.> A TT _t in site-d-mixed canonical format is given by _t=𝐆_d,t_t^(d), where 𝐆_d,t∈ℝ^M× R_ IR_ is an orthogonal matrix computed from all TT-cores except the dth and is the vectorization of the dth TT-core. In this format, the TT representation is linear in the dth TT-core when all other TT-cores are fixed. §.§.§ TT matrices and tall TT matrices A TTm consists of interconnected four-way tensors, as depicted in Fig. <ref>(b). Analogous to the TT, the TTm components and connected edges are called TTm-cores and TTm-ranks, respectively, where each TTm-core has two free edges, the row and column indices. For the purpose of this paper, consider a TTm representation of the square root covariance factor _t∈ℝ^M× M. The TTm-cores L_t^(1),⋯,L_t^(d),⋯,L_t^(D) are of size R_d× I× J × R_d+1, where I and J are the row and column indices, indicated in Fig. <ref>(b) as red and and blue edges respectively, such that and M=J^D. 
By definition, R_1=R_D+1=1, and for this paper, we generally assume that all other TTm-ranks R_2=⋯=R_D=R_ are equal. The storage complexity of _t without TNs is 𝒪(I^D× I^D) and in TTm format 𝒪(DR_^2 IJ). A TTm can also be written in terms of the site-d-mixed canonical format as defined in Definition <ref>, but it requires to be transformed into a TT first. This can be done by combining the row and column indexes into one index, which represents a kind of vectorization of the matrix represented by the TTm. Note, however, that the indices are not ordered as in conventional vectorization. A site-d-mixed canonical format of a TTm is given by vec(_t) =𝐇_d,t𝐥_t^(d), where 𝐇_d,t∈ℝ^2M× R_ IJR_ is computed from all the TTm-cores but the dth, and 𝐥^(d)_t∈ℝ^R_ IJR_. To recompute _t back in its original like in the QR step of the SRKF (see (<ref>)), we need a special of a TTm, the tall TTm, as well as a thin SVD in TTm format. Tall TTm <cit.> A tall TTm, as depicted in Figure <ref>(c), has only one TTm-core with both a row and column index, while all other TTm-cores have only row indices. Then, the TTm represents a tall matrix with many more rows than columns. Thin SVD in TTm format <cit.> Consider a TTm in site-d-mixed canonical format, where the dth TTm-core is the one that has the column index, . The SVD of L^(d) reshaped and permuted in to a matrix of size R_ IR_× J, is given by 𝐔^(d)𝐒^(d)(𝐕^(d))^⊤. Now replace the dth TTm-core by 𝐔^(d) reshaped and permuted back to the original TTm-core dimensions. Then the thin SVD is given by the TTm with the replaced TT-core as the orthogonal 𝐔-factor, and 𝐒^(d)(𝐕^(d))^⊤ as the 𝐒𝐕^⊤-factors, as depicted in Fig. <ref>(d). § TENSOR-NETWORKED SRKF We propose our method combining efficient TN methods with the SRKF formulation for online GP regression. More specifically, we recursively compute the posterior distribution of the parametric weights in (<ref>) from the measurement update of the Kalman filter. To achieve this, we update the mean _t∈ℝ^M as a TT (Section <ref>), and the square root factor _t∈ℝ^M× M as a TTm (Section <ref>). All computations are summarized in Algorithm <ref>, which outputs the posterior weight distributions p(_t|_1:t)=𝒩(_t,_t), and the predictive distributions for a test input p(f_*,t)=𝒩(m_*,t,σ_*,t^2). §.§ Update of weight mean For updating the mean with a new measurement y_t∈ℝ, we compute (<ref>) in TT format. In the original tensor-based KF <cit.>, the two terms in equation (<ref>) are summed together in TT format, which increases the TT-ranks. To avoid this rank increase, we apply a commonly-used optimization algorithm from the tensor community, called the alternating linear scheme (ALS) <cit.>. The ALS computes a TT by updating one TT-core at a time while keeping all other TT-cores fixed. The optimization problem to be solved is given by _tmin _t-1 + _t(y_t-ϕ_t^⊤_t-1) - _t _F^2 s.t. _t being a low-rank TT, where the subscript F stands for the Frobenius norm, and _t-1 is the estimate from the last time step playing now the role of the prior for the current time step. Inserting (<ref>) in (<ref>), thus making use of the site-d-mixed canonical format from Definition <ref>, gives the optimization problem for the update of one TT-core _t^(d)min𝐆_d,t^⊤(_t-1 + 𝐊_t(y_t-ϕ_t^⊤_t-1)) - _t^(d)_F^2. In one so-called sweep of the ALS, (<ref>) is solved for each TT-core once. A stopping criterion for the convergence of the residual in (<ref>) determines the total number of sweeps. 
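Before moving to the TT implementation of the square-root update in the next subsection, it may help to see the square-root recursion in plain dense algebra. The sketch below forms Z_t by column concatenation and restores an M × M factor with a thin QR decomposition, as in the SRKF description above; it is a small-M reference only, written by us under the assumption of a scalar measurement, and not the TT algorithm developed next.

```python
import numpy as np

def srkf_measurement_update(w, L, phi, y, sigma_y):
    """Square-root measurement update propagating L with P = L @ L.T (dense reference)."""
    P_phi = L @ (L.T @ phi)
    s = phi @ P_phi + sigma_y**2                       # innovation variance
    k = P_phi / s                                      # Kalman gain
    w = w + k * (y - phi @ w)                          # posterior mean
    M = len(w)
    Z = np.hstack([(np.eye(M) - np.outer(k, phi)) @ L,   # M x M block
                   (sigma_y * k)[:, None]])              # extra column -> M x (M + 1)
    _, R = np.linalg.qr(Z.T, mode='reduced')           # thin QR of Z^T: Z Z^T = R^T R
    return w, R.T                                      # R^T is the new square-root factor
```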
§.§ Update of square root covariance factor To compute the covariance matrix with the standard covariance update in the measurement update, see (<ref>), we recursively compute the square root covariance factor _t as defined in (<ref>) such that _t=_t_t^⊤. To achieve this, we use the ALS to solve (<ref>) (ALS step) and then we transform _t as in (<ref>) back to its original size (QR step). *ALS step In this step, we use the ALS to compute a TTm representing _t. We solve the optimization problem given by min__t [ (𝕀_M-_tϕ_t^⊤)_t-1 σ_y_t ] - _t _F^2 s.t. _t being a low-rank TTm, where _t-1 is the estimated square root covariance factor from time step t-1 now serving as the prior. The original ALS algorithm is defined for TTs, so we must adapt it for TT matrices. For this, it is necessary to use the site-d-mixed canonical form for TT matrices, as described in Section <ref> above (<ref>). In addition, we need to horizontally concatenate two matrices in TTm format, which can be done by summing two matrices of size M× 2M such that (<ref>) becomes min_𝐥_t^(d)𝐇_d,t^⊤vec([ 1 0 ]⊗ (𝕀_M-_tϕ_t^⊤)_t-1) . .+𝐇_d,t^⊤vec([ 0 1 ]⊗[ 1 0_M-1 ]⊗σ_y_t) - 𝐥_t^(d)) _F^2, where vec denotes the vectorizations of the involved TT matrices. *QR step The optimization problem given by (<ref>) requires concatenating a matrix with a column vector. In TT format, this results in a TTm of size M× 2M. For the TTm-cores of _t this means that one TTm-core, we call it the augmented core, is of size R_× I× 2J× R_. Before serving as a prior for the next time step, a QR step as in (<ref>) is required to transform _t back to its original size. We use an SVD-based algorithm in TN format to transform _t of size M× 2M back to M× M, as described in Algorithm <ref>. §.§ Predictions To perform GP predictions we compute the predictive distribution for a test output f_*,t=ϕ(_*)^⊤_t with mean and variance given by m_*,t = ϕ(_*)^⊤_t σ^2_*,t = ϕ(_*)^⊤_t_t^⊤ϕ(_*). Given _t as a TT and _t as a TTm, we can compute (<ref>) directly in TN format without explicitly reconstructing the mean vector and square root factor. For a test input _*, Fig. <ref> illustrates the computation of (a) the predictive mean m_*,t, (b) the predictive covariance σ^2_*,t. § IMPLEMENTATION In this section, we give a detailed description of the non-straight-forward TN operations to update the mean estimate _t and square root covriance factor _t as described in Algorithm <ref>. The leading complexities of the mean and square root covriance factor update are given in Table <ref>. §.§ Updating _t in TN format In the following sections, we discuss the implementation of (<ref>) for the mean update (Algorithm <ref>, line <ref>), and we describe how the mean is initialized in TT format (Algorithm <ref>, line <ref>). §.§.§ Implementation of 𝐆_d,t^⊤(_t-1 + 𝐊_t(y_t-ϕ_t^⊤_t-1)) To compute the TT representing the mean estimate _t, we implement the ALS to solve (<ref>) ( The following example illustrates the update of one TT-core during the ALS. TT-core update with ALS Take a D=5 dimensional weight vector in TT format with M_1=M_2=M_3=M_4=M_5=10 basis functions in each dimension, resulting in 10^5 parameters and uniform TT-ranks of R_2=R_3=R_4=4. Say, we are currently updating the third TT-core W_t^(3)∈ℝ^4×10× 4 using _t^(3)_160×1 = 𝐆_3,t^⊤_160× 10^5(_t-1_10^5× 1 + 𝐊_t_10^5× 1(y_t-ϕ_t^⊤_t-1)_1×1). We first multiply over the large dimension of 10^5 in 𝐆_3,t^⊤_t-1 and 𝐆_3,t^⊤_t(y_t-ϕ_t^⊤_t-1). 
In TT format, this matrix-vector-multiplication is done core by core, thus avoiding the explicit multiplication. Finally, we sum two vectors of size 160. Figure <ref> illustrates the multiplication of 𝐆_d,t^⊤_t-1 in TT format, resulting in a tensor of size R_× I× R_. The multiplication of between 𝐆_d,t^⊤ and _t(y_t-ϕ_t^⊤_t-1) works in the same way after firstly computing ϕ_t^⊤_t-1 in TN format and secondly multiplying one arbitrary TT-core of _t by the scalar (y_t-ϕ_t^⊤_t-1). During the update of the dth TT-core, the TT is in site-d-mixed canonical format. Before updating the next TT-core, either the (d-1)th or the (d+1)th, the site-(d-1)-mixed or site-(d+1)-mixed canonical format is computed. Note that because of the recursive property, updating every TT-core once with a new measurement is usually sufficient for the residual of (<ref>) to converge. §.§.§ Initialization of _0 and _1 For the first time step t=1 of Algorithm <ref>, we choose a zero-mean assumption for the prior estimate _0. The following Lemma explains how this can be implemented in TT format. Zero-mean prior in TT format <cit.> Consider a vector with all entries equal to zero. In TT format, such a vector is given by a TT in site-d-mixed canonical format, where the dth TT-core contains only zeros. In addition, Algorithm <ref> requires an initial guess for _1 to compute 𝐆_d,1 from all TT-cores of _1, except the dth. For this, we set _1=_0. §.§ Updating _t in TT format To compute the TTm representing _t, we implement the ALS to solve (<ref>) (Algorithm <ref>, line <ref>). The following example illustrates the update of one TTm-core during the ALS. TTm-core update with ALS Take a D=5 dimensional TTm representing _t∈ℝ^M× M, where we are currently updating the third TTm-core. We have I=10 and J=10, where the third TTm-core is augmented, and R_=4. We update L_t^(3)∈ℝ^4×10×20× 4 using 𝐥_t^(3)_3200×1 = 𝐇_d,t^⊤_3200×2·10^10 vec([ 1 0 ]_1× 2⊗_t-1_10^5× 10^5) - 𝐇_d,t^⊤_3200×2·10^10 vec([ 1 0 ]_1× 2⊗_tϕ_t^⊤_t-1_10^5× 10^5) + 𝐇_d,t^⊤_3200×2·10^10 vec([ 0 1 ]_1× 2⊗[ 1 0_M-1 ]_1× 10^5⊗σ_y_t_10^5× 1). We first multiply over the large dimension of 2·10^10 in TT format, then sum the three terms of size 3200× 1. From Example <ref>, it follows that the three terms of (<ref>) need to be implemented. We discuss them separately in the following sections. We distinguish between the update of the augmented TTm-core from all other ones, which result in TTm-cores of size R_× I× 2J × R_ and R_× I× J× R_, respectively. In the tensor diagrams (Fig. <ref>-<ref>), we depict the update for the augmented TTm-core. Before diving in, recall from (<ref>) that 𝐇_d,t is computed from TTm-cores of _t, except the dth, where row and column indices are combined. In the tensor diagrams, the indices are depicted not as combined because, in practice, they are generally summed over separately. However, the vectorized format is necessary for writing down the equations in matrix form. §.§.§ Implementation of first term of (<ref>) Fig. <ref> illustrates the computation of the augmented TTm-core in the first term of (<ref>), given by 𝐇_d,t^⊤vec([ 1 0 ]⊗σ_y_t-1). The column indices of _t-1 are indicated by the round edges that are connected to the row indices of 𝐇_d,t^⊤. The edge containing 𝐞_1=[1 0] is connected to the dth TTm core of _t with a rank-1 connection, which corresponds to the Kronecker product in (<ref>). The summation over the vertical and curved indices has the leading computational complexity of 𝒪(R_^4IJ) per dimension. 
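Both examples rely on the same primitive: contracting a Kronecker-structured vector against the cores one at a time, carrying only a small rank index along. A minimal sketch of this pattern for the scalar ϕ_t^⊤_t-1 mentioned above is given below; the index ordering is an assumption on our part, and the same left-to-right scheme is what keeps the 𝐆_d,t^⊤ and 𝐇_d,t^⊤ contractions affordable.

```python
import numpy as np

def tt_kron_inner(phi_factors, tt_cores):
    """phi^T w for phi = phi^(1) kron ... kron phi^(D) and w stored as TT-cores
    of shape (R_d, I, R_{d+1}), without ever forming a length-I^D vector."""
    boundary = np.ones((1, 1))                          # running 1 x R_d boundary
    for phi_d, core in zip(phi_factors, tt_cores):
        reduced = np.einsum('i,rij->rj', phi_d, core)   # sum out the physical index
        boundary = boundary @ reduced                   # (1, R_d) @ (R_d, R_{d+1})
    return float(boundary[0, 0])
```

The cost is O(DIR^2) per evaluation instead of O(I^D), which is the point of keeping everything in TT format.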
When updating all TTm-cores except the augmented TTm-core, the additional index of size 2 is summed over resulting in a tensor of size R_× I× J× R_. §.§.§ Implementation of second term of (<ref>) Fig. <ref> illustrates the computation of 𝐇_d,t^⊤vec([ 1 0 ]⊗_t-1_t-1^⊤ϕ_t_t^-1ϕ_t^⊤_t-1), which directly follows from the second term of (<ref>). As shown, the row and column indices of 𝐇_d,t^⊤ are connected separately to the column and row indices of two TT matrices for _t-1, respectively. Like in the previous term, the edge containing 𝐞_1=[1 0] is connected to the augmented TTm-core of _t with a rank-1 connection, which corresponds to the Kronecker product in (<ref>). The leading computational complexity of 𝒪(R_^4IJ) per dimension comes from the summation over the vertical indices in the red or blue box indicated in the figure. The most efficient order of doing the computations in Fig. <ref> was found with the visual tensor network software by <cit.>. §.§.§ Implementation of third term of (<ref>) Fig. <ref> illustrates the computation of 𝐇_d,t^⊤vec([ 0 1 ]⊗[ 1 0_M-1 ]⊗σ_y_t_t^⊤ϕ_t_t^-1), which directly follows from the third term of (<ref>). The row of nodes each filled with 𝐞_1=[1 0_J-1] corresponds to [1 0_M-1] from (<ref>) and their rank-1 connections to the nodes above is the second Kronecker product in (<ref>), which is done dimension-wise in TT format. The node with 𝐞_2 corresponds to [0 1] from (<ref>) and its rank-1 connection is the first Kronecker product in (<ref>). The summation over the vertical indices is the leading computational complexity of 𝒪(R_^4IJ) per dimension. §.§.§ SVD-based QR step in TTm format When computing (<ref>), we double the number of columns of _t compared to _t-1. For the next time step, however, we need to transform _t back to its original size (Algorithm <ref>, line <ref>), otherwise its column size will grow with the iterations and slow down the algorithm. The QR step, as in (<ref>), computes a full QR decomposition of _t, which cannot be done in TT format. Instead, we compute a thin SVD in TTm format (Definition <ref>) of _t transformed into a tall TTm (here also denoted by _t) with all row indices of size IJ, except the dth which is of size I, and the dth column index of size 2J. The J-truncated SVD of _t is then given by _t_MJ^D-1×2J≈𝐔_t𝐒_t_MJ^D-1× J𝐕_t^⊤_J×2J, where 𝐔_t𝐒_t is the new _t and 𝐕_t^⊤ can be discarded because of (<ref>). In practice, we compute an SVD of the augmented TTm-core and truncate it back to the size of R_× I× J× R_. There is a way to make (<ref>) exact. This is possible if the augmented TTm-core is of size R_× I× 2J R_^2× R_. In this case, the SVD computed of the augmented TTm-core results in a square 𝐔-factor. Since the number of columns is doubled every measurement update, the QR step can be skipped p times until 2^p=2R_^2. Choosing smaller values for p reduces computational complexity at the cost of accuracy. The SVD-based QR step is described in Algorithm <ref>. The SVD of the reshaped and permuted augmented TTm-core is truncated for 2^p<2R_^2 and exact for 2^p≥ 2R_^2. §.§.§ Initialization of _0 and _1 At time t=1, Algorithm <ref> requires the prior square root covariance factor _0 in TTm format. We are considering product kernels that have priors in Kronecker format. The following Lemma describes how these types of priors can be transformed into a TTm for _0. 
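As we read it, the SVD-based QR step only ever touches the augmented TTm-core once the TTm is in site-d-mixed canonical format. The sketch below spells out that core-level reshape, SVD, and truncation back to J columns; the particular permutation/reshape order and the zero-padding guard are our own assumptions for illustration, not a verbatim transcription of Algorithm <ref>.

```python
import numpy as np

def truncate_augmented_core(core, J):
    """Truncate the augmented TTm-core of shape (R, I, 2J, R) back to (R, I, J, R).

    Assumes the surrounding TTm is in site-d-mixed canonical format, so the SVD of
    this single core determines the thin SVD of the whole tall TTm; L L^T is then
    preserved exactly when the rank is at most J, and approximately otherwise."""
    R1, I, twoJ, R2 = core.shape
    mat = core.transpose(0, 1, 3, 2).reshape(R1 * I * R2, twoJ)   # rows (R,I,R), cols 2J
    U, S, _ = np.linalg.svd(mat, full_matrices=False)             # V^T can be discarded
    r = min(J, len(S))
    new_mat = U[:, :r] * S[:r]                                    # keep U S as the factor
    if r < J:                                                     # pad if rank-deficient
        new_mat = np.hstack([new_mat, np.zeros((new_mat.shape[0], J - r))])
    return new_mat.reshape(R1, I, R2, J).transpose(0, 1, 3, 2)
```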
(Prior covariance with Kronecker structure into TTm, follows from <cit.>) Given a prior covariance _0=_0^(1)⊗_0^(2)⊗⋯⊗_0^(D), the prior square root covariance in TTm format is given by a TTm with all ranks equal to 1, where the cores are given by _0^(1),_0^(2),…,_0^(D), each reshaped into a 4-way tensor of size 1× I× J× 1. In addition, Algorithm <ref> requires an initial guess in TTm format for _1∈ℝ^M× 2M. We cannot set _1 = _0 since the prior has TTm-ranks equal to one, and we may want higher TTm-ranks for _t. This is because the choice of the TTm-ranks of _1 determines the rank manifold on which the TTm-cores will be optimized. We initialize the TTm-cores as random samples from a zero-mean Gaussian distribution and transform the TTm into site-d-mixed canonical format, where d is the augmented TTm-core. § EXPERIMENTS In this section, we show how our method works in practice by performing online GP regression on synthetic and real-life data sets. We evaluate our predictions based on the root mean square error (RMSE) for the accuracy of the mean and negative log-likelihood (NLL) for the uncertainty estimation. The metrics after t measurement updates are defined as (RMSE)_t = √(∑_i=1^N_*(m_*,t,i-y_*,i)^2/N_*) and (NLL)_t = 0.5∑_i=1^N_*log (2πσ_*,t,i^2) + (m_*,t,i-y_*,i)^2/σ_*,t,i^2, where y_*,i is the ith measurement from the test set, m_*,t,i and σ_*,t,i are the predictive mean and variance for the ith test point, and N_* is the number of test points. First, we show the equivalence of the full-rank TNSRKF and the conventional Kalman filter. Then we show in a synthetic experiment how the choice of R_ and R_ impact the accuracy of the approximation. Finally, we compare our method to the TNKF on a benchmark data set for nonlinear system identification. All experiments were performed on an 11th Gen Intel(R) Core(TM) i7 processor running at 3.00 GHz with 16 GB RAM. For reproducibility of the method and the experiments, the code written in Julia programming language is freely available at . §.§ Equivalence of full-rank TNSRKF and Kalman filter In the first experiment, we show in which case our method is equivalent to the measurement update of the conventional Kalman filter. We generate D=3 dimensional synthetic data sampled from a reduced-rank GP by <cit.> with a squared exponential kernel and use I=4 basis functions per dimension, such that _t∈ℝ^64×64. The input data lies in a cuboid given by [-1 1]×[-1 1]×[-1 1] and N,N_*=100 Table <ref> shows the RMSE and NLL for test data at time t=N for different choices of p. The TNSRKF is equivalent to the Kalman filter when both R_ and R_ are full-rank. In addition, p must be chosen, such that the QR step, discussed in Section <ref>, is exact. For settings with lower values for p, the method trade-in accuracy. In the following sections, we look at scenarios where the Kalman filter can no longer be computed on a conventional laptop because both storage and computational time become unfeasible. §.§ Influence of the ranks on the approximation The choice of the TT- and TTm-ranks is not obvious and can be intricate. However, the computational budget often determines how high the ranks can be chosen. In this experiment, we use our method to make online GP predictions on synthetic data while varying the TTm-ranks of _t, as well as the TT-ranks of _t. We consider the Volterra kernel, a popular choice for nonlinear system identification. 
It is known that the truncated Volterra series suffers from the curse of dimensionality, which was lifted in a TN setting by <cit.>. With the notation of this paper, the basis functions ϕ of parametric model (<ref>) are a combination of monomials computed from the input sequence of the given problem. We generate synthetic training and testing data as described in <cit.>, where D=7 and I=4 such that the number of parameters is 4^7=16384. We set the SNR to 60, so there is relatively little noise. Fig. <ref> shows the RMSE and NLL on the testing data for R_=2,4 and R_=2,4 over time iterations of the TNSRKF. At t=N, the RMSE is lower for R_=2 and R_=4 than for R_=4 and R_=4. Thus it seems that a lower value for the mean estimate represents the data better. The NLL is the lowest for R_=4 and R_=4, which is close to the NLL for R_=4 and R_=2. Note that the NLL for the same R_ is different for the two settings of R_, because the NLL also depends on the difference between predicted and actual measurements, thus on the accuracy of _t. This experiment showed that the choice of R_ and R_ influence the performance of the TNSRKF. Since higher values for the ranks also increase the computational complexity, the computational budget will determine the higher limit for the ranks. In addition, an assumption with lower ranks may be fitting the data better in some cases. §.§ Comparison to TNKF for cascaded tanks benchmark data set In this experiment, we compare our method to the TNKF on a nonlinear benchmark for system identification, the cascaded tanks data set. A detailed description can be found in <cit.>. The training and testing data set consists of 1024 data points. To train our GP model, we choose lagged inputs and outputs as input to our GP, as described in <cit.>, resulting in an input of dimensionality D=14. We use a squared exponential kernel, which hyperparameters we optimize with the Gaussian process toolbox by <cit.>, and we choose I=4, such that the model has M=4^14=268435456 parameters. For the comparison to the TNKF, we choose the TT-ranks for the mean to be R_2=R_14=4, R_3=⋯ R_13=10, and we vary R_ and the TTm-ranks of the covariance matrix for the TNKF denoted by R_. Fig. <ref> and <ref> show the RMSE and NLL over the time iterations of the respective filter. When R_=1 and R_=1, both methods perform almost the same, as visualized by the overlapping green and blue lines. When R_^2=R_=4, our method improves both prediction accuracy and uncertainty estimation compared to the R_=1. On the contrary, the TNKF diverges and leaves the plotted figure area because the covariance matrix loses positive definiteness. When R_^2=R_=16, the TNKF shows a similar behavior, while the TNSRKF results in lower RMSEs but mostly higher NLL values. This setting shows that higher values for R_ are not always beneficial for the uncertainty estimation. Finally, Fig. <ref> shows the predictions with the TNSRKF on testing data after seeing 100, 200, and 922 data points. Aligned with the plot showing the RMSE and NLL, after 100 data points, the prediction is quite bad and uncertain. After 200 data points, the predictions are better and more certain and further improve after seeing the entire data set. § CONCLUSION In this paper, we presented a TT-based solution for online GP regression in terms of an SRKF. In our experiments, we show that our method is scalable to a high number of input dimensions at a reasonable computational cost such that all experiments could be run on a conventional laptop. 
In addition, we improve on the state-of-the-art method for TN-based Kalman filtering: in settings where the TNKF loses positive (semi-)definiteness and becomes numerically unstable, our method avoids this issue because we compute the square root covariance factor instead of the covariance matrix. In this way, we can choose settings for our method that achieve better accuracy than the TNKF. A future work direction is online hyperparameter optimization. We consider a truly online scenario, so future data is not available; thus we cannot sweep over mini-batches of data multiple times like other methods, e.g. <cit.>, to optimize hyperparameters. Finally, there is still ongoing research on how to choose TT-ranks and TTm-ranks. In the synthetic experiments, we showed the impact of R_ and R_. Generally, the TT- and TTm-ranks need to be treated as hyperparameters.
http://arxiv.org/abs/2409.02085v1
20240903173629
EcoLife: Carbon-Aware Serverless Function Scheduling for Sustainable Computing
[ "Yankai Jiang", "Rohan Basu Roy", "Baolin Li", "Devesh Tiwari" ]
cs.DC
[ "cs.DC" ]
EcoLife: Carbon-Aware Serverless Function Scheduling for Sustainable Computing
This work has been accepted at SC '24. § ABSTRACT This work introduces EcoLife, the first carbon-aware serverless function scheduler to co-optimize carbon footprint and performance. EcoLife builds on the key insight of intelligently exploiting multi-generation hardware to achieve high performance and a lower carbon footprint. EcoLife designs multiple novel extensions to Particle Swarm Optimization (PSO) in the context of the serverless execution environment to achieve high performance while effectively reducing the carbon footprint. Serverless Computing, Sustainable Computing, Cloud Computing § INTRODUCTION Motivation and Goals of EcoLife: Carbon footprint is increasingly becoming one of the most important measures of the sustainability of large-scale computing systems. Due to the growing demand for computing in datacenters, the carbon footprint of these large-scale systems is rising <cit.>. Carbon dioxide (CO_2) and other greenhouse gases are emitted during the manufacturing of datacenter hardware (termed the embodied carbon footprint), and also during the execution of applications on the hardware (termed the operational carbon footprint). The embodied carbon footprint is amortized over the lifetime of the hardware, and the operational carbon footprint depends on the energy consumption of the hardware and the carbon intensity of the power grid that provides energy to the datacenter. As detailed in Sec. <ref>, we observe that datacenter hardware from different generations (old and new hardware) has different proportions of embodied and operational carbon footprints, and combining hardware from different generations has the potential of jointly minimizing both application runtime and carbon footprint. In fact, this indirectly opens up the opportunity of extending the lifetime of older hardware for higher environmental sustainability of large-scale computing systems. EcoLife leverages this observation and opportunity in designing a scheduling solution for serverless computing. EcoLife aims to make serverless computing sustainable and high-performant by performing carbon-footprint-aware scheduling of serverless functions on hardware from different generations. Serverless computing and challenges in carbon-aware serverless scheduling: Serverless computing is gaining wider adoption as a paradigm of cloud computing due to several attractive features such as a higher level of resource abstraction from end-users, auto-scaling of resources, and a pay-as-you-go billing model <cit.>. Due to these advantages, there is increasing interest in introducing the serverless computing model to the HPC community and workflows <cit.>, along with several related efforts in the parallel systems community <cit.>. To make serverless computing high-performant, service providers keep functions alive in the memory of servers so that they are not affected by a start-up overhead, also referred to as the cold start of functions. Keeping functions alive consumes resources and energy, which translates to a keep-alive carbon footprint. The summation of the keep-alive carbon footprint and the carbon footprint during execution constitutes the total carbon footprint of a function. Since older hardware often has a lower embodied carbon footprint <cit.>, it can be beneficial to keep functions alive on older hardware, even though execution there suffers performance degradation (Sec. <ref>).
Newer hardware is usually more energy efficient, and hence, results in lower operational carbon — representing a trade-off between different types of carbon footprint and performance (Sec. <ref>). Furthermore, different serverless functions need to be kept alive for different amounts of time depending on a function's arrival probability. Moreover, the carbon intensity of a power grid varies with time, which has an impact on the operational carbon footprint. Since the characteristics and invocation patterns of production serverless functions and the carbon intensity vary with time, this makes carbon-aware function scheduling challenging. 's Key Contributions: makes the following key contributions. I. is a novel high-performance and carbon-aware serverless scheduler that exploits hardware of different generations to improve the sustainability of computing systems (exploiting the lifetime extension of older-generation hardware) while achieving high performance. To the best of our knowledge, this is the first work that focuses on reducing the carbon footprint of serverless computing. II. introduces novel extensions to the Particle Swarm Optimization (PSO) technique in the context of serverless scheduling. The novel design and implementation of PSO extensions include (a) perception-response mechanism to adapt in the dynamic serverless environment, and (b) function warm pool adjustment mechanism to intelligently prioritize function keep-alive time and location among multi-generation hardware, in response to varying memory requirements. III. is evaluated using widely-used serverless function invocation trace from Microsoft Azure cloud <cit.>, and is shown to consistently perform close to the theoretically-optimal () solution using multiple different generations of hardware and is robust under different scenarios. Our evaluation indicates that is consistently within 7.7% and 5.5% points from in terms of service time and carbon footprint, respectively. is available at <https://zenodo.org/records/10976139>. § BACKGROUND Serverless function keep-alive in cloud computing. The serverless computing model abstracts the cloud computing infrastructure for users, allowing them to upload their code to be executed as stateless functions. In the serverless computing model, cloud providers manage user functions as container images and orchestrate the underlying hardware resources without a need for user intervention. Upon invocation, a function's image is loaded into the server for execution. To enhance efficiency, after execution, the function remains in the server memory for a certain period. The duration during which the function is kept alive in memory is termed the keep-alive time, and is controlled by the cloud provider. Keeping functions alive in the memory decreases the chances of a cold start of a function, potentially eliminating the need to reload the function into memory. If a function is re-invoked post the keep-alive period, it incurs a cold start overhead that requires loading the function in the memory. If a function is re-invoked before the keep-alive period, it receives a warm start. The service time of a function is comprised of its cold start overhead (zero in the case of a warm start) plus the execution time of the function. Given that the execution times for typical production serverless functions can be comparable to the cold start overhead <cit.>, optimizing keep-alive time for serverless functions is an important design consideration. Carbon footprint of computing systems. 
The carbon footprint encompasses both embodied carbon and operational carbon. Embodied carbon refers to the emissions associated with manufacturing and packaging computer hardware <cit.>, such as that from foundries like TSMC. Since this occurs only once, the share of embodied carbon for a traditional (non-serverless) application is proportional to the execution time of the application relative to the lifespan of the device <cit.>. Indeed, as the rapid development in the lithography process continues in hardware manufacturing, advanced hardware has improved capabilities. However, it often comes with larger die sizes, increased core counts, and expanded memory sizes <cit.>. Hence, the manufacturing process for such newer-generation hardware often has a higher embodied carbon footprint compared to the older-generation hardware. These carbon emissions generated during manufacturing contribute to the embodied carbon footprint of the hardware and are taken into account throughout its lifespan. Unlike embodied carbon footprint, operational carbon footprint refers to the emissions originating from the electricity supplied by grid operators to power the computing infrastructure. It is quantified as the product of the grid’s carbon intensity (gCO_2/kWh) and energy usage (kWh). Here, carbon intensity denotes the amount of carbon dioxide emitted per unit of generated energy, and it varies over time. For a traditional (non-serverless) application, the operational carbon footprint includes energy consumed during the execution. Carbon footprint estimation for a serverless function. In contrast to traditional non-serverless functions, the overall carbon footprint of a serverless function is calculated for all three periods: keep-alive period, duration of potential cold-start, and execution time (the first two periods are serverless-specific). The carbon footprint estimation in serverless is composed of embodied carbon footprint estimation and operational carbon footprint estimation. The embodied and operational carbon footprint of a serverless function accounts for the carbon footprint generated by both the CPUs and DRAMs <cit.>. Below, we briefly describe how carbon footprint is estimated – with the acknowledgment that the carbon footprint estimation of serverless functions is non-trivial and has many complex interactions, but the model described below captures the first-order principles and effects. First, for the embodied carbon footprint, the attribution of embodied carbon is different during different phases (e.g., keep-alive period and execution time) due to differences in the amount of resources being used. The embodied carbon footprint estimation of DRAM and CPU is accounted for the usage proportion attributed to function f. The embodied carbon footprint per unit of time of a DRAM is calculated by dividing the total embodied carbon of the DRAM (EC_DRAM) by its lifetime (LT_DRAM), and multiplied with the memory usage ratio – M_f/M_DRAM. here, M_f is the memory size of the function f and M_DRAM is the size of DRAM. Then, the embodied carbon of DRAM with keep-alive time k and service time S_f can be modeled as: DRAM Embodied CO_2 = S_f+k/LT_DRAM·M_f/M_DRAM· EC_DRAM During the service period, the entire CPU is assigned to serverless execution. However, during the keep-alive period, one CPU core is preserved to keep the serverless function alive. The embodied carbon per core of a CPU is determined by dividing the total embodied carbon footprint of the CPU (EC_CPU) by the number of cores (Core_num). 
Therefore, the formal expression for the embodied carbon of the CPU can be written as: CPU Embodied CO_2 = S_f/LT_CPU· EC_CPU + k/LT_CPU·EC_CPU/Core_num The embodied carbon footprint of hardware is already incurred during manufacturing, but it must be considered and accounted for during the operational period too. Similar to how energy consumption is tracked, the distribution and attribution of the embodied carbon footprint among different applications must be carefully accounted for to inform future planning and potential resource usage. Second, the operational carbon footprint includes executing serverless functions during invocation, in addition to the energy required to maintain it in memory during keep-alive. The operational carbon footprint of DRAM can be estimated by multiplying the real-time carbon intensity (CI) with the energy consumption of DRAM (E_DRAM^Service+E_DRAM^Keep-alive) during both the service period and keep-alive period. Note that the operational carbon footprint of DRAM incurs a carbon footprint based on the function's share of the overall operational carbon footprint of DRAM. Therefore, the memory usage ratio – M_f/M_DRAM is multiplied to estimate the operational carbon footprint generated by DRAM of a function. The estimation of the CPU's operational carbon footprint is similar to the estimation of embodied carbon. The entire CPU is assigned to serverless function execution, but during the keep-alive period, only one CPU core is used to keep the function alive. These estimations can be formally expressed as: DRAM Operational CO_2 = M_f/M_DRAM·( E_DRAM^Service+ E_DRAM^Keep-alive)·CI CPU Operational CO_2 = (E_CPU^Service + E_CPU^Keep-alive/Core_num )·CI We acknowledge that a universally accepted methodology is not widely established for the carbon footprint estimation of serverless functions. The approach outlined here is one intuitive method for modeling carbon footprints. Because each period of serverless computing is unique, separately considering each period provides an easy interpretation of the carbon footprint for serverless functions. Incorporating second-level effects (such as storage) can be modeled as an extension by adding the proportional carbon footprint of storage. Our described model does not necessarily favor and is only used to demonstrate that carbon savings are possible while achieving high performance. primarily focuses on CPU-based systems, which are commonly used in serverless environments <cit.>. While trends in our motivational observations in Sec. <ref> apply to GPUs, too, because of multi-generational trade-offs, we do not directly focus on GPUs. can be adapted for multi-generation GPUs using the GPU-specific carbon footprint model and measurement. Table <ref> shows three old-generation / new-generation pairs to demonstrate that the motivation and key ideas behind are not restricted to a single pair, and benefits can be observed over different pairs (Sec. <ref> confirms this quantitatively). While one cannot practically evaluate all possible multiple generation pairs, entries in Table <ref> were selected to capture three different types of generations (with one, two, and four years of gap representing different lifetimes of the hardware) and where accurate embodied carbon footprint data is available. We anticipate that hardware upgrades can happen in a one to five year timeline, and demonstrates how it can be used to leverage the prior generation of hardware to achieve both high performance and a low carbon footprint. 
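The carbon model in Eqs. (<ref>)-(<ref>) is simple enough to transcribe directly, which also makes the trade-offs in the next section easy to reproduce. The sketch below sums the four terms for a single invocation; the argument names and units are our own illustrative choices, and in practice the energy terms would come from measurement while CI would come from a real-time carbon-intensity feed.

```python
def function_carbon_footprint(
    S_f, k,                    # service time and keep-alive time [s]
    M_f, M_DRAM,               # function memory footprint and DRAM capacity [GB]
    EC_DRAM, EC_CPU,           # embodied carbon of DRAM and CPU [gCO2]
    LT_DRAM, LT_CPU,           # DRAM and CPU lifetimes [s]
    core_num,                  # number of CPU cores
    E_DRAM_svc, E_DRAM_keep,   # DRAM energy during service / keep-alive [kWh]
    E_CPU_svc, E_CPU_keep,     # CPU energy during service / keep-alive [kWh]
    CI,                        # grid carbon intensity [gCO2/kWh]
):
    """Embodied + operational carbon of one invocation, per the four equations above."""
    mem_share = M_f / M_DRAM
    dram_embodied = (S_f + k) / LT_DRAM * mem_share * EC_DRAM
    cpu_embodied = S_f / LT_CPU * EC_CPU + k / LT_CPU * EC_CPU / core_num
    dram_operational = mem_share * (E_DRAM_svc + E_DRAM_keep) * CI
    cpu_operational = (E_CPU_svc + E_CPU_keep / core_num) * CI
    return dram_embodied + cpu_embodied + dram_operational + cpu_operational
```

Comparing this quantity for (older hardware, longer keep-alive) against (newer hardware, shorter keep-alive, possible cold start) is the trade-off explored in the next section.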
§ MOTIVATION Observation. Serverless functions generate a significant carbon footprint during their keep-alive period — which is unique compared to the traditional non-serverless computing model, where functions are not kept alive in memory in anticipation of an actual invocation. First, we measure the carbon footprint generated during the keep-alive period and the service time of different serverless functions from SeBS benchmark <cit.> on (Table <ref>). Fig. <ref> shows the trends for three representative functions: video processing, graph search, and DNA visualization. Other functions demonstrate a similar trend, but these functions were selected for motivation as they represent diverse characteristics in terms of computational and memory requirements, and also represent the core of many algorithms. From Fig. <ref>, we observe that as the keep-alive period increases, the keep-alive carbon footprint also rises due to the increased embodied and operational carbon emissions. Consequently, its proportion in the total carbon footprint becomes higher. For example, when the keep-alive time is increased from 2 minutes to 10 minutes, the keep-alive carbon footprint of function Graph-BFS has increased from previously constituting 18% of the total carbon footprint to now 52% of the total carbon footprint. Fig. <ref> also shows that the carbon footprint during the keep-alive period can often be higher than the carbon footprint during the actual execution – this is because the keep-alive period (typically multiple minutes) is often orders of magnitude longer than the execution time (often millisecond to a few seconds). Opportunity. The use of relatively older-generation hardware, which inherently has a lower embodied carbon footprint, opens the opportunity to lower the carbon footprint during the keep-alive period. We observe this opportunity via comparing the overall service time and carbon footprint across various serverless workloads, using two pairs of older-newer hardware pair as illustrated in Fig. <ref> (Both pair A and C are selected for demonstration). We found that while leveraging relatively older-generation hardware lowers the carbon footprint during the keep-alive period, unfortunately, it results in significant performance (execution time) degradation. For example, as shown in Fig. <ref>, considering executing video processing on and , respectively, and keeping the function alive for 10 minutes, using to keep the function alive compared to using can save 23.8% of carbon footprint. However, the execution time of the function increases by 15.9%. This is because, as expected, older hardware often yields slower performance for many workloads. However, this leads to interesting trade-offs where leveraging older hardware's extended lifetime for carbon footprint benefits competes with the execution time metric. However, recall that for serverless functions, the metric for performance is service time (not execution time alone). Service time is the sum of the execution time and cold-start overhead (if the function was not warm or kept alive in memory at the time of invocation). Interestingly, a lower carbon footprint on older hardware during the keep-alive period enables us to afford a longer keep-alive period on older hardware compared to newer hardware under the same or lower carbon footprint budget. This indirectly results in higher chances of warm starts and hence, potentially lower service time even when using older hardware. 
However, navigating this trade-off is challenging because the magnitude of the trade-off varies across different functions and hardware generations. Challenge. To effectively exploit the opportunity identified earlier (trade-off between carbon footprint and service time), one needs to intelligently determine the keep-alive period for different functions on different generations of hardware – the optimal periods can be different for different functions and vary over time. In Fig. <ref>, we perform a comparative experiment to measure the corresponding service time and overall carbon footprint with two testing scenarios (Case A and Case B, described in Fig. <ref>) on two generations of hardware ( vs C_New) under two carbon intensity (Carbon Intensity = 50, Carbon Intensity = 300). The results provide strong experimental evidence for the previously discussed opportunity. For example, in Fig. <ref> (top row with Carbon Intensity = 300), when the video-processing function is kept alive in memory for a longer time (15 mins) and executed on older hardware () with warm start, it leads to a 52.3% saving in service time and a 14.9% saving in carbon footprint compared to utilizing shorter keep-alive periods (10 mins) on newer hardware (C_New) with cold start. This is true for Graph-BFS and DNA visualization functions, too. We show that utilizing older hardware for extended keep-alive time could potentially reduce carbon emissions while maintaining high performance, because of the increasing possibility of warm starts (hence, eliminating the cold start overhead). Intuitively, when carbon intensity is high, it is worth keeping the function alive on the old-generation hardware (by incurring relatively lower embodied carbon) rather than experiencing the cold start on the new-generation hardware and incurring a high operational carbon footprint during the cold-start period. Essentially, we attempt to eliminate high operational carbon footprint during the cold start on newer hardware by being able to keep the function alive on older hardware – which incurs lesser embodied carbon and reduces chances of cold start (and hence, better service too). However, the magnitude of this benefit can be reduced or absent in some cases when the carbon intensity is very low, and hence, the high operational carbon footprint during cold start on newer generation hardware is less significant compared to embodied carbon on older hardware during longer keep-alive. Our results demonstrate one such case where leveraging older-generation hardware does not always necessarily lead to a lower carbon footprint. The inverted case is shown in Fig. <ref> (bottom) where keeping the DNA-visualization function alive on the older hardware for a longer period improves the service time as before, but may not result in carbon footprint saving – as alluded earlier, this is because of the impact of temporal variations in the carbon intensity and its impact on the operational carbon footprint. The inversion depends on many factors including carbon intensity, keep-alive period, execution length, cold-start, energy consumption of function, etc., and hence, the inversion point can vary among functions. Energy consumption is different in both scenarios (case A& B) because case B has longer service time due to cold-start and the DRAM energy may contribute toward the carbon footprint in different amounts for different functions. 
In such situations, naïvely choosing older hardware with a longer keep-alive period does not automatically lead to savings in the carbon footprint and service time, and this is why the optimization is challenging. Choosing keep-alive periods effectively requires carefully considering the carbon intensity of the energy source and adapting to its temporal variations – since carbon intensity affects the operational component of the overall carbon footprint (embodied plus operational carbon footprint). Joint Optimization for Carbon Footprint and Service Time. Figure <ref> demonstrates the potential for reducing carbon footprint while decreasing service time within the stateless serverless computing environment. The first solution represents the optimum focused solely on minimizing the carbon footprint, while the second represents the optimum for minimizing service time alone. The third solution, theoretically optimal, aims to co-optimize both carbon footprint and service time. Note that these three solutions are impractical in real-world systems as they rely on brute-force methods to explore all possible choices, providing only an upper bound as a design reference for EcoLife. A fourth solution stands as the traditional and naive optimum focused only on minimizing energy consumption to indirectly minimize the carbon footprint. Notably, although energy consumption primarily contributes to the operational carbon footprint, the energy-optimal solution is far from the carbon-optimal solution. This is because energy-aware solutions often overlook the significance of the embodied carbon footprint and variations in carbon intensity. Unfortunately, co-optimization of service time and carbon footprint is challenging, as shown in Figure <ref>, where even the co-optimizing solution is more than 7% away from the respective carbon-only and service-time-only optimal solutions. An effective approach for co-optimizing service time and carbon footprint should incorporate the changes in function invocations and carbon intensity. Therefore, it is essential to develop a scheduler capable of adapting to a dynamically changing serverless environment. EcoLife is inspired by these necessities. § DESIGN OF ECOLIFE In this section, we first formulate the objective function that EcoLife minimizes. Then, we present the key ideas behind the design of EcoLife. §.§ Problem Formulation The goal of EcoLife is to determine the most suitable location (older-generation hardware or newer-generation hardware) and keep-alive periods for serverless functions – to co-optimize both service time and the carbon footprint. As mentioned in Sec. <ref>, the keep-alive period can influence function cold starts, which in turn impacts both service time and carbon footprint. Our optimization can be subdivided into three main components: service time, carbon footprint during function execution, and carbon footprint during the keep-alive period of functions. The following expression shows the general objective function for achieving the optimization goal: argmin_l∈ L, k∈ K  λ_s· E[S_f_l,k]/S_f_max + λ_c· E[SC_f_l,k]/SC_f_max + λ_c· KC_f_l,k/KC_f_k_max, where the first term captures the (normalized) service time, the second the carbon footprint during execution, and the third the carbon footprint during the keep-alive period. λ_s and λ_c are the adjustable parameters to determine the optimization weights on reducing service time and carbon footprint, respectively. E[S_f_l,k] is the expected value of the service time of function f, kept alive on hardware l with keep-alive period k.
S_f_max is the maximum service time (the function has a cold start and is executed on the older-generation hardware). Similarly, E[SC_f_l,k] denotes the expected carbon footprint during the service time of function f when kept alive on hardware l for keep-alive period k. SC_f_max represents the maximum carbon footprint during service time. The service time and carbon footprint of function f account for the time and carbon generated by any additional latency and delay. The term KC_f_l,k is the carbon footprint of function f during the keep-alive period k on hardware l, and KC_f_k_max is the maximum carbon footprint while function f is kept alive (the function is kept alive on newer-generation hardware). aims to simultaneously determine the keep-alive locations l (older-generation hardware or newer-generation hardware) and the keep-alive periods k (selected from a set of keep-alive period values) for all invoked functions, in order to co-optimize all functions and minimize the overall carbon footprint and service time. To achieve this optimization, the scheduler should have the following design properties: (a) Adaptability to variations in function invocation patterns and carbon intensity: must be capable of responding to rapidly changing patterns of function invocations and fluctuations in carbon intensity within short time periods. (b) Co-optimization of all invoked functions: Scheduling one serverless function should consider the keep-alive choices of other functions due to limited memory resources. (c) Low decision-making overhead: should have low overhead to handle large serverless function invocation loads efficiently. §.§ Overview of is the first design using multi-generation hardware and intelligently selected keep-alive periods to minimize the carbon footprint while maintaining high performance. consists of three key components: function warm pools, the Keeping-alive Decision Maker (KDM), and the Execution Placement Decision Maker (EPDM). These components operate in coordination with one another. manages two warm pools that monitor the functions kept alive in the memory of Docker containers running on hardware spanning both the old and new generations. Each pool of kept-alive functions has a memory constraint, necessitating to ensure that the combined memory usage of all functions kept alive in the warm pool does not exceed the maximum memory capacity available. When the user sends requests for serverless function invocations, uses the Keeping-alive Decision Maker to decide the keep-alive time and keep-alive location for every invoked function. If the memory space of hardware is not enough to hold a bursty load of function invocations, performs adjustments in the pool of kept-alive functions to make better use of the available memory for incoming new functions that need to be kept alive. Regarding function execution, determines where to execute functions based on the Execution Placement Decision Maker to minimize carbon footprint and service time. Next, we will discuss the detailed design of each of the components of , and how they contribute toward meeting the desired design properties discussed previously. §.§ 's Keeping-alive Decision Maker (KDM) 's KDM uses Particle Swarm Optimization (PSO) to determine the keep-alive time of functions. Before going into the basics of PSO and our novel extensions on vanilla PSO to solve 's optimization, we discuss the reasons behind using PSO in . Why does use PSO?
(a) Even the vanilla PSO algorithm is efficient in terms of determining the keep-alive time of serverless functions and has low decision-making overhead <cit.>, fulfilling one of the design properties of . PSO can rapidly converge to global optima due to its exploration-exploitation balance. Other exploration-exploitation optimization methods, such as reinforcement learning, have a larger overhead and require offline training. This is because PSO relies on simple, pre-defined rules for updating particle positions rather than learning complex policies through trial and error. (b) Due to its strong exploration capabilities, PSO is well suited to perform online optimization. It can continuously adapt to changing conditions and provide near-optimal solutions in dynamic environments, which is needed in a serverless context (one of the desired design properties). In PSO, multiple particles jointly explore the search space, which helps it to converge quickly when variations in system conditions change the optimal solution. Other traditional searching algorithms, such as gradient descent, are slower to adapt to the variations and are usually stuck in the local optima. Deep learning approaches are also not suitable for real-time online optimizations due to high training overhead and training data requirements. (c) In comparison to other heuristic optimization algorithms, such as Artificial Bee Optimization or Grey Wolf Optimization, PSO needs minimal parameter tuning with just three parameters. In our evaluation, we measured that PSO can reduce the carbon footprint by 17.4% and service time by 7.2%, compared to the Genetic Algorithm (another closely related nature-inspired optimization technique) with crossover probability of 0.6, mutation probability of 0.01, and population size of 15. Additionally, PSO showed a 6.2% reduction in carbon footprint and a 13.46% decrease in service time compared to the Simulated Annealing algorithm, which was set with an initial temperature of 100, a stop temperature of 1, and a temperature reduction factor of 0.9. Next, we briefly discuss the basics of a vanilla PSO optimizer. Thereafter, our extensions on vanilla PSO make more suitable in the context of keeping serverless functions alive. Basics of Particle Swarm Optimization. Particle swarm optimization is a meta-heuristic optimization algorithm inspired by how bird flocks forage. The bird flock effectively locates the best position of food source by sharing information collectively to let other birds know their respective positions. Birds determine whether the position they found is the optimal one and also share information about the best positions of the entire flock. Eventually, the entire bird flock gathers around the best position of food source. PSO utilizes massless particles to simulate birds in a flock, each particle has two attributes: velocity vector and position vector. Velocity represents the speed of movement, and position represents the direction of movement. The quality of each particle's position is determined by the fitness score. At the start of PSO search, N particles will be distributed at random positions in the search space. Particles change their positions in accordance with the following rules after each iteration: V_t+1 = ω*V_t+c_1r_1(X_pbest-X_t)+c_2r_2(X_gbest-X_t) X_t+1 = X_t + V_t+1 V_t+1 and X_t+1 represent the updated velocity and position of a particle, respectively, while V_t and X_t denote the previous velocity and position, respectively. 
X_pbest is the optimal position found by the individual particle, and X_gbest is the optimal position found by the entire swarm of particles. ω serves as the inertia weight, determining how much a particle should adhere to its previous velocity. c_1 and c_2 are cognitive and social coefficients, respectively, controlling the balance between refining the particle's search results and acknowledging the swarm's search results. r_1 and r_2 are random numbers uniformly distributed between 0 and 1. These adjustable coefficients regulate the trade-off between exploration and exploitation conducted by the swarm of particles. Next, we discuss the first extension that performs on a vanilla PSO, which helps it quickly adjust to the changing function invocation characteristics of serverless platforms. 's Dynamic-PSO. constructs a two-dimensional search space for each serverless function to determine the optimal position. One dimension represents the two generations of hardware for keeping alive (l), while the other dimension covers a set of keep-alive times (k). For each new invocation of a serverless function, assigns a PSO optimizer, whose search space consists of the parameters for keep-alive location and keep-alive time, and preserves it for use at the next invocation of that function. However, a vanilla PSO algorithm is not ideally suited for achieving optimal solutions in the serverless environment due to the temporal fluctuations in carbon intensity and function invocations. Note that the configurable weights (ω, c_1, and c_2) jointly regulate exploration and exploitation. One intuitive way is to dynamically adjust these weights based on the changes in carbon intensity and function invocations. The weights can be formally expressed as: ω = ω_max(Δ F/Δ F_max + ΔCI/ΔCI_max), c_1 = c_2 = c_max(1 - Δ F/Δ F_max - ΔCI/ΔCI_max). Here, ω_max denotes the maximum value of the inertia weight, c_1 and c_2 share the same value, and c_max is the maximum value of the empirical coefficient. Δ F and ΔCI denote the absolute changes of function invocations and carbon intensity, respectively, since the last invocation. Δ F_max and ΔCI_max are the maximum absolute changes in function invocations and carbon intensity across all observation windows so far. To further enhance PSO's responsiveness to serverless environment variations, introduces a perception-response mechanism to make PSO adapt to the dynamic environment (visually depicted in Fig. <ref>). This mechanism allows the particle swarm to be dynamically updated in response to environmental changes. If the perception indicates a change in the environment, the particle swarm updates to enlarge the exploration area. Conversely, if there is no perceived change in the environment, updates of the swarm are unnecessary. In , perception is represented by changes in Δ F and ΔCI. detects variations and divides the particle swarm into two halves. One half is randomly redistributed within the search space, intensifying PSO's exploration and its ability to move past local optima. Meanwhile, the other half of the swarm retains its positions, providing the PSO optimizer with a level of memory and making it easier to find the optimal solution in dynamically changing environments. 's Warm Pool Adjustment. After collecting all the keep-alive decisions generated by 's PSO, functions are designated for keep-alive on either the old hardware or the new hardware, or for no keep-alive at all. Following these decisions, serverless functions will be kept alive in memory for the entire keep-alive period.
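For reference, here is a minimal NumPy sketch of one PSO step over the normalized two-dimensional search space described above. The names and the placeholder fitness function are ours, not the system's actual implementation; the memory-constrained warm-pool case is discussed next.

import numpy as np

rng = np.random.default_rng(0)
n_particles, dim = 15, 2
pos = rng.uniform(0.0, 1.0, size=(n_particles, dim))   # normalized (location, period)
vel = np.zeros((n_particles, dim))
pbest_pos = pos.copy()
pbest_score = np.full(n_particles, np.inf)

def fitness(p):
    # Placeholder: in the scheduler this would be the weighted, normalized
    # service-time plus carbon-footprint objective for the decoded (l, k) choice.
    return float(np.sum((p - 0.3) ** 2))

def pso_step(pos, vel, pbest_pos, pbest_score, w=0.8, c1=0.6, c2=0.6):
    scores = np.array([fitness(p) for p in pos])
    improved = scores < pbest_score
    pbest_score[improved] = scores[improved]
    pbest_pos[improved] = pos[improved]
    gbest = pbest_pos[np.argmin(pbest_score)]          # swarm-wide best position
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest_pos - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 1.0)                 # stay inside the search space
    return pos, vel, pbest_pos, pbest_score

for _ in range(20):
    pos, vel, pbest_pos, pbest_score = pso_step(pos, vel, pbest_pos, pbest_score)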
However, incoming serverless functions may not be allocated due to memory limitations, despite their greater necessity for being kept alive. This can result in sub-optimal solutions. To address this issue, adopts a priority eviction mechanism to sort the functions already kept alive in the warm pool, as well as those about to be kept alive, to find the best arrangement (visually depicted in Fig. <ref>). This is performed by calculating the difference in service time and carbon footprint between cold start and warm start for functions on both old and new hardware to perform the priority ranking. After performing warm pool adjustment on one type of hardware, functions that are unable to be kept alive due to hardware memory limitations are transferred to the other type of hardware for keep-alive, maximizing the utilization of hardware for keeping functions alive and thus increasing the probability of warm starts. Through warm pool adjustment, co-optimizes all serverless functions to reduce service time and carbon footprint, which is the design objective of . §.§ 's Execution Placement Decision Maker uses the Execution Placement Decision Maker (EPDM) to determine where to execute an invoked function. If the Docker containers on hardware retain this function, it implies that regardless of where the function is executed, it will receive a warm start. EPDM will execute this function on this hardware to avoid the cold start overhead. Else, if this function is not kept alive on only hardware, EPDM's decision will be based on the following score to determine the optimal execution location: f_score = λ_s S_r/S_f_max + λ_c SC_r/SC_max. Here, r denotes the execution location for the function f. S_f_max and SC_max denote the maximum service time and carbon footprint. §.§ : Combining All Design Elements When a new serverless function is invoked for the first time, makes the decision to allocate the function for execution (a cold start) and assigns a PSO optimizer to this serverless function. The PSO optimizer forms the search space for the function and initializes a number of particles randomly distributed in the space. As a function gets invoked multiple times, utilizes the EPDM to determine where to execute this function to avoid cold start overhead based on the warm pools. After execution, 's PSO detects the changes in the serverless environment (carbon intensity and function invocations), and the optimizer belonging to this function updates the PSO weights (ω, c_1, c_2) according to the changes and randomizes half of the swarm to explore the search space. After performing the particle movement, uses the global best position generated by the PSO to decide the keep-alive location and keep-alive period. If there is limited memory for keep-alive, performs warm pool adjustment for the function keep-alive arrangement. Algorithm <ref> summarizes the scheduling framework of . meets all the desired design properties discussed in Sec. <ref>. § METHODOLOGY Experimental Setup. evaluation uses two types of testing nodes, and , which are selected from the AWS servers. comprises a 36-core Xeon E5-2686 CPU (released in 2016) and 512 GiB of Micron DRAM (released in 2018). is equipped with a 24-core Xeon Platinum 8252C CPU (released in 2020) and 192 GiB of Micron DRAM (released in 2019). This hardware configuration corresponds to Pair A in Table <ref>, used as the default configuration in Sec. <ref>. Serverless functions are executed in Docker containers, as Docker is widely used for serverless execution <cit.>.
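As a brief aside before continuing with the experimental details, the placement decision of the EPDM described above can be sketched as follows. This is our own simplified reading of the text; the function names, data structures, and the single-warm-node shortcut are assumptions, not the system's code.

def placement_score(service_time, carbon, s_max, sc_max, lambda_s=0.5, lambda_c=0.5):
    # f_score = lambda_s * S_r / S_f_max + lambda_c * SC_r / SC_max
    return lambda_s * service_time / s_max + lambda_c * carbon / sc_max

def choose_execution_node(kept_alive_on, estimates, s_max, sc_max):
    # kept_alive_on: set of node names whose warm pool currently holds the function
    # estimates: dict mapping node -> (expected_service_time, expected_carbon)
    if len(kept_alive_on) == 1:
        return next(iter(kept_alive_on))   # warm start is guaranteed there
    candidates = kept_alive_on or list(estimates)
    return min(candidates,
               key=lambda n: placement_score(estimates[n][0], estimates[n][1],
                                             s_max, sc_max))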
Additionally, uses an Intel Skylake-SP server with 16 cores, 64 GB memory, and a 4 Gbps network bandwidth as the controller node. We store each serverless function as a Docker image in an S3 bucket. The Docker image will be downloaded from the S3 bucket to the testing node assigned by in the control node when execution starts during the simulation campaign. PSO Setup and Configuration. We deploy the PSO-based on the Intel Skylake-SP server as previously discussed. We assign equal weights to both λ_s and λ_c (λ_s=λ_c=0.5) to ensure equal optimization of service time and carbon footprint. As for the PSO in , ω ranges from 0.5 to 1, c_1 ranges from 0.3 to 1 and c_2 ranges from 0.3 to 1. These weights jointly control the exploration and exploitation of PSO, as discussed in Section <ref>. We use 15 particles in the PSO, the number of particles influences the decision-making overhead, and changing the number of particles has negligible influence on the optimization results. Evaluated Workloads. Serverless functions are collected from SeBS benchmark suites <cit.>, including various scientific serverless workloads. These functions are invoked following the Microsoft Azure trace <cit.>. During our trace-driven simulation evaluation, the functions in the Microsoft Azure trace are selected for invocation randomly, but uniformly to ensure representativeness. maps all serverless functions to the closest match, considering the memory and execution time. Carbon Footprint Estimation and Carbon Intensity. follows the carbon estimation mentioned in Sec. <ref>. uses a publicly available dataset <cit.> and well-established calculation methodologies <cit.> to determine the total embodied carbon of CPU and DRAM. uses a typical four-year lifetime <cit.>, <cit.> for DRAM and CPU. Carbon intensity within is gathered from a widely-used Electricity Maps <cit.>, and expanded to minute intervals to capture the temporal environmental variations. primarily utilizes carbon intensity from California Independent System Operator (CISO), where carbon intensity fluctuates by an average of 6.75% hourly, with a standard deviation of 59.24. Additionally, collects carbon intensity from Tennessee (TEN), Texas (TEX), Florida (FLA), and New York (NY) for robustness analysis. The utilizes Likwid <cit.> - a simple Linux-based tool suite to read out RAPL <cit.> energy information and get info about turbo mode steps on bare-metal machines for energy consumption measurement. Relevant and Complementary Techniques. is evaluated to compare with the following schemes: , . , follow a ten (10) minutes keep-alive policy of OpenWhisk <cit.>. The scheme prioritizes the utilization of faster, newer hardware for executing functions under high-performance demands. The scheme operates in the opposite manner, it always utilizes older-generation hardware for executing functions. It is important to note that utilizing multi-generation hardware to keep functions alive is not a feature introduced in either the or scheme. , , and . compares against infeasible solutions, including (Carbon Footprint Optimal Solution), (Performance Optimal Solution) and (Best Optimal Solution). These solutions utilize heterogeneous hardware and present the theoretical upper bounds, which are computed via brute-forcing every possible scheduling option for each function invocation. , . These schemes are static versions of , and we use single-generation hardware to schedule functions. 
and primarily emphasize the determination of keep-alive periods while overlooking the trade-off between older hardware and newer hardware, which is the highlight brought by multi-generation hardware that concentrates. Figures of Merit. Carbon footprint and service time are two metrics used to evaluate . They are represented as percentages under the and ( in robustness analysis) to show the increase. § EVALUATION In this section, we evaluate the effectiveness of , explain its effectiveness, and demonstrate its robustness. §.§ Effectiveness of In Fig. <ref>, stands out as the closest scheme to the in terms of carbon footprint and service time among all the schemes. Recall from Sec. <ref>, , and only minimize carbon footprint, service time and energy consumption respectively, and all of them are significantly far away from the . However, implementing the directly in real-world systems is impractical. co-optimize both metrics, while bridging the gap between sustainability and execution performance. Compared to , experiences a 7.7% increase in average service time and a 5.5% increase in average carbon footprint, respectively. Furthermore, is close to from the perspective of individual function invocations. As shown in Fig. <ref>, we present the cumulative distribution function (CDF) of service time and carbon footprint respectively, and both service time and carbon footprint remain less than 1% for each percentile of invoked functions. The service time is the average service time which includes queuing delay, setup delay, cold start (if applicable), and execution time. The P95 latency of is within 15% increase of the service time in . decision-making overhead is also low and practical, less than 0.4% of service time, and 1.2% of carbon footprint for the invocation loads in the Azure trace. achieves scalability by addressing the memory limitation problem of the co-located function with the warm pool adjustment. As discussed in Sec. <ref>, the utilization of multi-generation hardware can bring a high-performance and environmentally friendly serverless execution. In Fig. <ref>, We compare with and in Sec. <ref> with single-generation hardware under the 10-minute fixed keep-alive policy. While adopting older-generation hardware may reduce carbon emission, 's utilization of multi-generation hardware results in a service time saving of 12.7%. Similarly, although using newer-generation hardware will slightly accelerate the function execution, can reduce carbon by 8.6% with multi-generation hardware. is closer to because of its heterogeneity and intelligently selected keep-alive periods. This confirms that combines the advantages of both hardware generations to achieve the co-optimization of service time and carbon footprint. §.§ Reasons Behind 's Effectiveness utilizes a perception-response mechanism to dynamically adapt to the PSO search space based on the changes in function invocations (Δ F) and carbon intensity (ΔCI), as described in Sec. <ref>. This enables to effectively and precisely locate the near-optimal solution. As shown in Fig. <ref>, without dynamic PSO experiences a 5.6% increase in service time and a 16.9% increase in carbon footprint. The decisions generated by dynamic PSO impact both the keep-alive period and the keep-alive location, which in turn directly affect the cold start overhead and the carbon footprint. Consequently, without dynamic PSO, the sub-optimal decisions would result in increased service time and carbon footprint. 
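For concreteness, a small sketch of how the dynamic weight adjustment and the perception-response re-randomization described earlier could look. This is our own simplification: the clipping bounds and threshold are assumed values, not parameters taken from the paper.

import numpy as np
rng = np.random.default_rng(1)

def adapt_weights(d_f, d_ci, d_f_max, d_ci_max,
                  w_max=1.0, c_max=1.0, w_min=0.5, c_min=0.3):
    # Inertia grows (more exploration) and c1 = c2 shrink (less exploitation)
    # as the observed changes in invocations and carbon intensity grow.
    change = d_f / max(d_f_max, 1e-9) + d_ci / max(d_ci_max, 1e-9)
    w = np.clip(w_max * change, w_min, w_max)
    c = np.clip(c_max * (1.0 - change), c_min, c_max)
    return w, c, c

def perceive_and_respond(pos, d_f, d_ci, threshold=0.0):
    # If the environment changed, re-randomize half of the swarm to enlarge the
    # explored region; the other half keeps its positions as a form of memory.
    if d_f > threshold or d_ci > threshold:
        half = len(pos) // 2
        idx = rng.choice(len(pos), size=half, replace=False)
        pos[idx] = rng.uniform(0.0, 1.0, size=(half, pos.shape[1]))
    return pos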
Warm pool adjustment in makes effective when memory resources are insufficient to handle numerous serverless functions (Sec. <ref>). In Fig. <ref>, we present a comparison of the service time, carbon footprint, and number of evicted functions with and without warm pool adjustment. The old and new hardware keep-alive memory size varies across three combinations, denoted as “old/new”. The evicted functions are a result of limited memory space in their designated keep-alive hardware. A higher number of evicted functions indicates that the hardware is not utilizing its full potential to keep functions alive, resulting in longer service time because of the more frequent cold starts. As shown in Fig. <ref>, service time, carbon footprint, and evicted functions with warm pool adjustment are consistently lower than without. For example, with 15 GiB of memory available on the old and new hardware (15/15), warm pool adjustment can save 7.9% of service time and 3.7% of carbon footprint, and keep 17% more functions alive that would otherwise be evicted. §.§ Robustness of is effective even with single-generation hardware ( or ), if multi-generation hardware is not available, as demonstrated in Fig. <ref>. Service time in and carbon footprint in are notably higher compared to the . This is because calculation is based on the multi-generation hardware, which achieves the best balance between minimizing service time and carbon footprint. However, implementing the directly is impractical. leverages the advantages of multi-generation utilization and enhances the co-optimization of service time and carbon footprint, offering a viable solution even in the absence of multi-generation hardware. is generally applicable to different hardware generation pairs, as we demonstrate its effectiveness against various hardware combinations from Table <ref> in Fig. <ref>. Across all hardware generation pairs, consistently achieves benefits close to the , as both the service time and carbon footprint remain within a 7.5% margin of . This demonstrates 's ability to flexibly leverage prior-generation hardware to balance service time and carbon footprint, without exploiting specific hardware types. 's evaluation is focused on demonstrating its effectiveness for a single pair (two generations) to convey the benefit of its key insights. Assuming a five-year lifespan of one generation and alternate-year major hardware upgrades, one would likely expect to predominantly find two or three generations of hardware to be present at a given time in data centers. Three or more generations of hardware also present operational maintainability challenges. Nevertheless, can work in the presence of multiple multi-generation pairs, by maintaining multiple warm pools. We acknowledge that the embodied carbon footprint can have small inaccuracies because the estimation relies on the accuracy of external data sources (e.g., vendor data) and the field is still rapidly evolving with multiple methodological practices. Nevertheless, the benefits of remain within 7% (carbon) and 10% (service time) of even if we allow a 10% estimation flexibility range for the embodied carbon footprint. While primarily considers the embodied carbon footprint of CPU and DRAM, is still effective when considering the embodied carbon footprint of other computer system components, including storage, motherboard, power unit, etc. performs within 5.63% of in carbon footprint and 8.2% in service time. Finally, we evaluate 's effectiveness w.r.t.
carbon intensity, as the carbon intensity profile may vary across geographical regions. In Fig. <ref>, we evaluate using carbon intensity data from various regions. The results show that remains effective across diverse geographical regions, as it remains within 7% of the Oracle in terms of service time and 6% in terms of carbon footprint. This showcases 's ability to adapt to various geographical environments and respond to trends in carbon intensity. § DISCUSSION We acknowledge that 's unorthodox approach of mixing multi-generational hardware raises several important and interesting considerations. For example, a critical consideration is operational maintainability in a data center environment. We argue that the presence of multi-generational hardware is already natural in today's data center because of portability, compatibility, smooth transition reasons, frequent hardware upgrades, and the need to support diverse customer needs (e.g., Amazon AWS). simply exploits that opportunity for saving carbon footprint. We also highlight that heterogeneity and multi-generation hardware have already been demonstrated to be beneficial from cost and performance perspectives for serverless workloads (e.g., IceBreaker <cit.>). adds one more beneficial dimension – environmental sustainability to exploit these implicit investments/practices around multi-generation hardware. advocates for longer lifetimes for older hardware. Thus, opens the avenue for novel research to investigate the trade-off among performance, carbon footprint, lifetime, cost, and maintenance cost. 's idea of lifetime extension for older hardware also has implications for the post-life/disposal carbon footprint of computing hardware. We also recognize that carbon footprint modeling and estimates are currently prone to errors, esp. for embodied carbon. As discussed earlier, continues to provide benefits even for a range of estimations, but more efforts are needed to strengthen carbon footprint estimations. § RELATED WORK Carbon footprint optimizations. The expansion of cloud and HPC infrastructure has spotlighted the importance of minimizing its carbon footprint, a concern echoed across numerous studies <cit.>. Efforts to reduce carbon emissions span diverse computation sources, from autonomous vehicles <cit.> and chip design <cit.> to smartphones <cit.> and the training of large language models <cit.>. Within this context, extends this effort into serverless computing, differentiating itself by optimizing the keep-alive strategy of serverless functions for reduced service time and carbon emissions. While cMemento <cit.> introduces carbon-aware memory placement in heterogeneous systems, it does not address the unique challenges of serverless function keep-alive that tackles. Although previous research has proposed workload scheduling based on the carbon intensity's temporal and spatial variations <cit.>, innovates by considering serverless functions' execution and keep-alive on multi-generation hardware to promote sustainability. Serverless function orchestration. Serverless computing has emerged as a scalable and efficient service model for cloud users <cit.>. Research in this domain has extensively explored optimizations, focusing on cold start mitigation <cit.>, hardware resource provisioning <cit.>, stateful execution <cit.>, and cost-effectiveness <cit.>. Amid these developments, a gap remains in addressing the carbon footprint of serverless functions. 
While Icebreaker <cit.> and Molecule <cit.> have explored the use of heterogeneous hardware to enhance serverless function provisioning, they stop short of integrating carbon modeling to harness potential carbon savings. Similarly, energy-efficient solutions for serverless edge computing <cit.> underscore energy savings, which is only one aspect of the system carbon footprint. In contrast with all prior works, leverages an intelligent keep-alive mechanism on a multi-generation hardware platform, taking the first step towards sustainable serverless cloud computing. § CONCLUSION This paper presented , a novel placement strategy designed to use multi-generation hardware for optimizing both the carbon footprint and service time of serverless functions. We hope our work will encourage the adoption of multi-generation hardware in serverless execution environments, promoting more attention to computing sustainability and environmental considerations of large-scale computing systems. Acknowledgment. We thank the reviewers for their constructive feedback. This work was supported by NSF Awards (2124897 and 1910601) and Northeastern University. This research partially used resources from the Massachusetts Green High Performance Computing Center (MGHPCC). We utilized ChatGPT, an AI language model developed by OpenAI, for partial assistance in drafting the text, and all generated content was thoroughly reviewed and edited for accuracy.
http://arxiv.org/abs/2409.03404v1
20240905104117
KAN See In the Dark
[ "Aoxiang Ning", "Minglong Xue", "Jinhong He", "Chengyun Song" ]
cs.CV
[ "cs.CV", "cs.AI" ]
KAN See In the Dark Aoxiang Ning, Minglong Xue1, Jinhong He and Chengyun Song This work is supported by Chongqing Postgraduate Research and Innovation Project Funding (No. CYS23677 and No. CYS240680), Youth Project of Science and Technology Research Program of Chongqing Education Commission of China (No. KJQN202401106).(Corresponding author: Minglong Xue) Aoxiang Ning, Minglong Xue, Jinhong He and Chengyun Song are with Chongqing University of Technology, Chongqing, 400054, China. (e-mail: [email protected], [email protected], [email protected], [email protected]) September 9, 2024 =================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== § ABSTRACT Existing low-light image enhancement methods are difficult to fit the complex nonlinear relationship between normal and low-light images due to uneven illumination and noise effects. The recently proposed Kolmogorov-Arnold networks (KANs) feature spline-based convolutional layers and learnable activation functions, which can effectively capture nonlinear dependencies. In this paper, we design a KAN-Block based on KANs and innovatively apply it to low-light image enhancement. This method effectively alleviates the limitations of current methods constrained by linear network structures and lack of interpretability, further demonstrating the potential of KANs in low-level vision tasks. Given the poor perception of current low-light image enhancement methods and the stochastic nature of the inverse diffusion process, we further introduce frequency-domain perception for visually oriented enhancement. Extensive experiments demonstrate the competitive performance of our method on benchmark datasets. The code will be available at: https://github.com/AXNing/KSIDhttps://github.com/AXNing/KSID. Low-light image enhancement; Kolmogorov-Arnold networks; Frequency-domain perception; Diffusion model § INTRODUCTION Low-light image enhancement (LLIE) is a critical task in computer vision and is essential for various applications ranging from surveillance to autonomous driving<cit.>. Images captured in low-light environments often suffer from low contrast and loss of detail, making downstream tasks such as object or text detection, semantic segmentation, and others highly challenging<cit.>. Therefore, to further enhance various visual applications in poor environments, low-light image enhancement tasks have received extensive attention from researchers<cit.>.Traditional methods utilize retinex theory<cit.> and gamma correction<cit.> to correct image illumination. With the development of deep learning, some methods<cit.> significantly improve the performance of low-light image enhancement by learning the mapping between low-light and normal images in a data-driven way. Recently, the diffusion model <cit.> has received much attention for its remarkable performance in generative tasks. <cit.> introduced the diffusion model into the low-light image enhancement task to improve the recovery of image details and textures in low-light conditions. 
However, there are nonlinear degradation factors in low-light enhancement tasks, such as uneven illumination and varying degrees of noise in low-light images, and it is difficult for existing methods to model complex nonlinear relationships on limited data. On the other hand, although current state-of-the-art technologies have achieved remarkable breakthroughs in the field of LLIE, its inner workings are often viewed as a black box that is difficult to decipher, limiting the development of the model in specific domains. The recent introduction of Kolmogorov-Arnold Networks<cit.> has raised hopes of opening the black box of traditional networks<cit.>. It enables networks to efficiently represent complex multivariate functions by employing the Kolmogorov-Arnold representation theorem<cit.>. Unlike MLPs, which have fixed activation functions at nodes, KANs use fixed activation functions at edges. KANs make each part of the model easier to understand by decomposing a complex function into a series of simpler combinations of bivariate functions. This decomposition helps to reveal the decision-making process and output of the model, which enhances the interpretability of the model. <cit.> was the first to introduce KANs into visual tasks, reformulating U-Net as U-KAN to improve medical image segmentation and generation. <cit.> utilized KANs' local plasticity for human activity recognition. Despite these initial explorations, the potential of KANs for low-level visual tasks such as low-light image enhancement has not yet been demonstrated. In this paper, we propose a novel LLIE method (KSID) that introduces KANs to low-level visual tasks for the first time to learn better the nonlinear dependencies between the normal and low-light domains and enhance the interpretability of the model. Specifically, we design the KAN-Block and embed it into the U-Net used for denoising by the diffusion model. KAN-Block consists of the KAN-Layer and DwConv. Since the KAN-Layer has spline-based convolutional layers and the learnable activation function, it can capture nonlinear dependencies more efficiently, thus significantly improving the image generated by the diffusion model. In addition, to improve the stability and visualization of the generation process, we reconstruct the image at each step in the denoising process and introduce frequency domain perception using the Fast Fourier Transform (FFT) to further refine the image details by learning the spectrum of the normal image. There is evidence that our method not only has good interpretability but also significantly improves the performance of the model, as shown in Fig. <ref>. We performed extensive experiments on benchmark datasets to demonstrate the effectiveness of our method. Our contribution can be summarized as follows: * To the best of our knowledge, we are the first to successfully introduce KANs into the LLIE task. While enhancing the performance of the model, it also further improves the interpretability of the model. * We introduce frequency-domain perception for visual orientation enhancement by learning the spectrum of a normal image through the Fast Fourier Transform. * We performed extensive experiments on the low-light image enhancement benchmark datasets and achieved impressive performance. § METHODS §.§ Kolmogorov-Arnold theorem Preliminaries The Kolmogorov-Arnold theorem states that any continuous function can be represented as a composition of a finite number of continuous univariate functions. 
Specifically, for any continuous function f(x) defined in n-dimensional real space, where x=(x_1,x_2,...,x_n), it can be expressed as a composition of a continuous univariate function h and a series of continuous univariate functions g_q,i applied to the individual variables x_i. Specifically, the theorem shows that there exists such a representation: f(x_1,x_2,...,x_n)=∑_q=1^2n+1 h(∑_i=1^n g_q,i(x_i)) This representation indicates that even complex functions in high-dimensional spaces can be reconstructed through a series of lower-dimensional function operations. §.§ Overall Network Architecture The structure of our proposed method (KSID) is shown in Fig. <ref> (a). Our training is divided into two phases: In the first phase, inspired by <cit.>, we introduced uncertainty-guided regularization into the diffusion process to enhance the recovery performance in challenging areas; in the second phase, we froze the weights of the uncertainty network to guide the network's learning. In both phases, we utilized the KAN-Block to strengthen the learning of nonlinear dependencies, and in the second phase, we incorporated a frequency-domain perception module to achieve visually-guided enhancement. §.§ KAN-Block We aim to embed KANs into low-light image enhancement networks to enhance the interpretability of the model and its ability to learn nonlinear dependencies. As shown in Fig. <ref> (c), <cit.> proposed KANs based on the Kolmogorov-Arnold representation theorem. Similar to a multilayer perceptron (MLP), a KAN with N KAN-Layers can be represented as: KAN(Z)=(Φ_N-1∘Φ_N-2∘···∘Φ_1∘Φ_0)(Z) where Z is the input feature and Φ_i signifies the i-th layer of the entire KAN network. Each KAN-Layer Φ, with n_in-dimensional input and n_out-dimensional output, comprises n_in× n_out learnable activation functions ϕ: Φ={ϕ_q,p}, p=1,2,...,n_in, q=1,2,...,n_out, where each ϕ_q,p is learnable. Unlike in a traditional MLP, the weight of each connection in a KAN is not a simple numerical value but is parameterized as a learnable spline function, which parameterizes the activation functions ϕ in the formula above. The computation of a KAN from the l-th layer to the (l+1)-th layer can be expressed in matrix form as Z_l+1 = Φ_l Z_l, where Φ_l is the function matrix corresponding to the l-th KAN-Layer, whose entry in row j and column i is the activation function ϕ_l,j,i(·), with j=1,...,n_l+1 and i=1,...,n_l. As shown in Fig. <ref>(b), we designed the KAN-Block to introduce KANs into the U-Net structure for the low-light enhancement task. The U-Net extracts high-level features through a stepwise downsampling operation and recovers low-level details using skip connections. To avoid interference from low-level information, we replace the middle layer in the U-Net with the KAN-Block, with no change in the sampling stages. Specifically, our network first takes the image X_t∈ R^H× W with added random noise ϵ∈ R^H× W and extracts high-frequency features through downsampling operations and residual learning, where the height and width become 1/8 of the original. We then reshape the features and feed them into the first KAN-Block. As shown in Fig. <ref>(b), the KAN-Block consists of a KAN-Layer and DwConv. We process the input features with a combination of multiple KAN-Layers and DwConv to further learn nonlinear dependencies and fine-grained information. The operation is defined as follows: X_l =DwConv(KAN(X_l-1)) where X_l is the feature map output from the l-th KAN-Layer.
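To illustrate this structure, here is a heavily simplified PyTorch-style sketch of a KAN-Layer followed by DwConv. The learnable per-edge activations are approximated by a fixed Gaussian basis with trainable coefficients rather than the full spline parameterization, and all class and variable names are ours, not the authors' code.

import torch
import torch.nn as nn

class SimpleKANLayer(nn.Module):
    # Each input-output edge applies a learnable combination of fixed basis
    # functions; the per-output results are summed over the inputs.
    def __init__(self, dim_in, dim_out, n_basis=4):
        super().__init__()
        self.coeff = nn.Parameter(torch.randn(dim_out, dim_in, n_basis) * 0.1)
        self.centers = nn.Parameter(torch.linspace(-1, 1, n_basis), requires_grad=False)

    def forward(self, x):                              # x: (N, dim_in)
        # Gaussian bumps stand in for the B-spline basis of a real KAN layer.
        basis = torch.exp(-(x.unsqueeze(-1) - self.centers) ** 2)   # (N, dim_in, n_basis)
        # phi_{q,p}(x_p) = sum_b coeff[q,p,b] * basis_b(x_p), then sum over p.
        return torch.einsum('nib,oib->no', basis, self.coeff)

class KANBlock(nn.Module):
    # KAN-Layer followed by a depthwise convolution, as in X_l = DwConv(KAN(X_{l-1})).
    def __init__(self, channels, h, w):
        super().__init__()
        self.h, self.w = h, w
        self.kan = SimpleKANLayer(h * w, h * w)
        self.dwconv = nn.Conv2d(channels, channels, 3, padding=1, groups=channels)

    def forward(self, x):                              # x: (N, C, H, W)
        n, c, h, w = x.shape
        tokens = x.reshape(n * c, h * w)               # flatten spatial dims per channel
        tokens = self.kan(tokens).reshape(n, c, h, w)
        return self.dwconv(tokens)

block = KANBlock(channels=8, h=12, w=12)
out = block(torch.randn(2, 8, 12, 12))                 # -> shape (2, 8, 12, 12)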
To learn global nonlinear dependencies, we designed a loop structure. §.§ Frequency Domain Perception Module Although current low-light enhancement methods based on the diffusion model have made good progress, the stochastic nature of their inverse diffusion process and their unsatisfactory visual effects motivate us to introduce frequency-domain perception, which makes the whole process more stable and achieves visually oriented enhancement. The training of the denoising diffusion probabilistic model<cit.> (DDPM) starts with obtaining a closed form X_t at any time step t; in the subsequent process, the model uses a learnable function ϵ_θ(Y, X_t, α̅_t) to learn the underlying noise distribution<cit.>. Since the network can successfully learn this noise, we construct a learnable X_t-1 for frequency-domain perception, thereby constraining the entire training process to be more stable. It is defined as follows: X_t-1 = 1/√(α_t) ( X_t - (1-α_t)/√(1-α̅_t) ϵ_θ(Y, X_t, α̅_t) ) where ϵ_θ∈ R^H× W is the predicted noise distribution. We introduce a frequency domain perceptual loss to learn the spectrum of the normal image. Specifically, we perform the FFT of X_0 and X_t-1, respectively, which can be expressed as follows: amp_high, pha_high = F(X_0) amp_low, pha_low = F(X_t-1) where amp denotes amplitude, pha denotes phase, and F stands for the Fast Fourier Transform. To align X_t-1 with X_0 in high-frequency details, we construct the frequency domain loss L_f, which is defined as follows: L_f = γ_1 ‖ amp_low - amp_high ‖ + γ_2 ‖ pha_low - pha_high ‖ where γ_1 and γ_2 are weighting parameters for the amplitude loss and the phase loss. § EXPERIMENTS §.§ Experimental Settings §.§.§ Datasets and Metrics We used three common low-light image enhancement benchmark datasets for evaluation: LOLv1, LOLv2, and LSRW. For evaluation metrics, we use the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM) as two full-reference distortion metrics to evaluate the performance of the proposed method. In addition, we use Learned Perceptual Image Patch Similarity (LPIPS) and Fréchet Inception Distance (FID) as two perceptual metrics to measure the visual quality of the enhanced results. §.§.§ Implementation Details We implemented KSID on an NVIDIA RTX 3090 GPU with PyTorch, setting the batch size to 8 and the patch size to 96x96. The learning rate was set to 1e-4, with Adam as the optimizer. The training process is divided into two phases: the first phase focuses on optimising the uncertainty network, with the number of epochs set to 1e6; the second phase extends the training by setting the number of epochs to 2e6. §.§ Comparisons With State-of-the-Art Methods We qualitatively and quantitatively compare the proposed KSID with state-of-the-art low-light image enhancement methods. We train on the LOLv1 dataset and test on all datasets. §.§.§ Quantitative Comparisons Table <ref> presents the quantitative results of various LLIE methods, indicating that our approach is competitive across the PSNR, SSIM, LPIPS, and FID metrics. Notably, our method attains the best SSIM scores on the LOLv1 dataset and remains competitive in other metrics. Furthermore, on the LOLv2-real dataset, our method excels in both SSIM and PSNR, which are full-reference metrics, as well as in FID and LPIPS, which are perceptual metrics. These results underscore the ability of our model to effectively learn and emulate complex nonlinear degradation processes in image restoration, while also adeptly recovering image details and markedly enhancing their perceptual quality.
Additionally, our method achieves the best results on the large-scale LSRW dataset in the SSIM, LPIPS, and FID metrics, further confirming its strong generalization and robustness. §.§.§ Qualitative Comparisons We performed a qualitative comparison with different LLIE methods. As shown in Fig. <ref>, on the LOLv2-real test dataset, we observed that the images restored by other methods suffered from colour distortion and could not effectively handle uneven illumination. In contrast, our method effectively addresses these issues, producing restored images that are closer to the reference images' colour distribution and better at recovering detailed information. §.§ Ablation Study To evaluate the effectiveness of our model for low-light image enhancement tasks, we performed ablation studies on different modules on the LOLv2 test set. Table <ref> demonstrates that the KAN-Block significantly enhances the model's ability to learn the nonlinear degradation relationship between low-light and normal images, resulting in restored images that closely align with the true distribution. The frequency-domain perceptual module effectively enhances the detail information in the restored images, thereby improving perceptual quality. § CONCLUSION In this paper, we propose a novel low-light image enhancement method, KSID, which introduces KANs into the LLIE task for the first time, improves the model's ability to learn nonlinear dependencies, and achieves high-quality mapping of the degradation parameters. In addition, we introduce the Frequency Domain Perception Module to further refine the image details and make the inverse diffusion process more stable. Extensive experiments validate the effectiveness and robustness of our method. Overall, we provide an initial exploration of the potential of KANs in the field of LLIE and argue that this non-traditional network structure is important for processing low-level visual tasks.
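As a supplementary illustration of the frequency-domain perception loss L_f defined in the method section, a minimal PyTorch-style sketch follows. This is our own code, not the authors'; in particular, the plain L1 phase difference ignores 2π wrap-around, which a full implementation might treat differently.

import torch

def frequency_domain_loss(x_pred, x_ref, gamma_amp=1.0, gamma_pha=1.0):
    # x_pred: reconstructed X_{t-1}; x_ref: clean image X_0; both (N, C, H, W).
    fft_pred = torch.fft.fft2(x_pred)
    fft_ref = torch.fft.fft2(x_ref)
    # Compare amplitude and phase spectra, as in L_f above.
    amp_loss = torch.mean(torch.abs(torch.abs(fft_pred) - torch.abs(fft_ref)))
    pha_loss = torch.mean(torch.abs(torch.angle(fft_pred) - torch.angle(fft_ref)))
    return gamma_amp * amp_loss + gamma_pha * pha_loss

loss = frequency_domain_loss(torch.rand(2, 3, 96, 96), torch.rand(2, 3, 96, 96))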
http://arxiv.org/abs/2409.02187v1
20240903180029
Late-time ensembles of quantum states in quantum chaotic systems
[ "Souradeep Ghosh", "Christopher M. Langlett", "Nicholas Hunter-Jones", "Joaquin F. Rodriguez-Nieva" ]
cond-mat.stat-mech
[ "cond-mat.stat-mech", "quant-ph" ]
http://arxiv.org/abs/2409.03723v1
20240905172451
Quantum Energy Density of Cosmic Strings with Nonzero Radius
[ "M. Koike", "X. Laquidain", "N. Graham" ]
hep-th
[ "hep-th", "gr-qc" ]
Department of Physics, Middlebury College, Middlebury, Vermont 05753, USA Department of Physics, Middlebury College, Middlebury, Vermont 05753, USA [email protected] Department of Physics, Middlebury College, Middlebury, Vermont 05753, USA § ABSTRACT Zero-point fluctuations in the background of a cosmic string provide an opportunity to study the effects of topology in quantum field theory. We use a scattering theory approach to compute quantum corrections to the energy density of a cosmic string, using the “ballpoint pen” and “flowerpot” models to allow for a nonzero string radius. For computational efficiency, we consider a massless field in 2+1 dimensions. We show how to implement precise and unambiguous renormalization conditions in the presence of a deficit angle, and make use of Kontorovich-Lebedev techniques to rewrite the sum over angular momentum channels as an integral on the imaginary axis. Quantum Energy Density of Cosmic Strings with Nonzero Radius Noah Graham ============================================================ § INTRODUCTION The effects of quantum fluctuations can be of particular importance in systems with nontrivial topology. In one space dimension, one can carry out calculations analytically in scalar models <cit.>, and their supersymmetric extensions. In the latter case, such corrections appear to violate the BPS bound <cit.>, but a corresponding correction to the central charge ensures that the bound remains saturated <cit.>. In higher dimensions, such corrections must vanish identically in supersymmetric models to preserve multiplet shortening <cit.>, while detailed calculations are difficult in nonsupersymmetric models. String backgrounds in Higgs-gauge theory offer the opportunity to study quantum effects of topology in higher dimensions while remaining computationally tractable, making it possible to study quantum effects on string stability <cit.>. In this paper we consider quantum fluctuations of a massless scalar field ϕ in the gravitational background of a cosmic string, which introduces topological effects through a deficit angle in the otherwise flat spacetime outside the string core. When the string is taken to have zero radius, the problem becomes scale invariant and the quantum corrections can be computed exactly <cit.>. For a string of nonzero radius r_0, we must specify a profile function for the background curvature, which was previously concentrated at r=0. We will consider two such models for the string core <cit.>, the “ballpoint pen,” in which the curvature is constant for r<r_0, and the “flowerpot,” in which the curvature is localized to a δ-function ring at r=r_0. In the former case, the curvature for a specified deficit angle is chosen so that the metric is continuous at r_0. For both models, we can construct the scattering wavefunctions <cit.>, matching the solutions inside and outside using boundary conditions at r=r_0. From these scattering data, we can then construct the Green's function, from which we determine the quantum energy density. This calculation is formally divergent and requires renormalization. In all cases, we must subtract the contribution of the free Green's function, which corresponds to renormalization of the cosmological constant. 
Because of the nontrivial topology, this subtraction is most efficiently implemented by adding and subtracting the result for the zero radius “point string,” and then using analytic continuation to imaginary angular momentum to compute the difference between the point string and the free background. In addition, for the interior of the ballpoint pen, we have a nontrivial background potential and must subtract the tadpole contribution, which corresponds to a renormalization of the gravitational constant. In this calculation, we find it advantageous to break the calculation of the quantum energy density into two parts: a “bulk” term ⟨ (∂_t ϕ)^2 ⟩ and a “derivative” term (1/4- ξ) ⟨1/r^2 D_r^2 (ϕ^2) ⟩, where ξ is the curvature coupling, for which we can identify the counterterm contributions individually. Putting these results together, we obtain the full quantum energy density as a function of r for a given deficit angle, string radius, and curvature coupling, which we can efficiently compute as a numerical sum and integral over the fluctuation spectrum. § MODEL AND GREEN'S FUNCTION We begin from the general case of scalar field in d spacetime dimensions, for which the action functional is S = -1/2∫ d^d x √(-g)( ∇_αϕ∇^αϕ + U ϕ^2 + ξ Rϕ^2) . with coupling ξ to the Ricci curvature scalar R. Of particular interest is the case of conformal coupling, ξ = 1/8 in two space dimensions and ξ = 1/6 in three space dimensions. This expression includes an external background potential U; we will set U=0, but it is straightforward to introduce a nontrivial U(ϕ), such as a mass term U=μ^2. The equation of motion is -∇_α∇^αϕ + Uϕ + ξ Rϕ = 0 with metric signature (-+++). The stress-energy tensor is given by <cit.> T_αβ = ∇_αϕ∇_βϕ -g_αβ1/2(∇_γϕ∇^γϕ + U ϕ^2) + ξϕ^2 ( R_αβ -1/2 g_αβ R) + ξ(g_αβ∇_γ∇^γ - ∇_α∇_β)(ϕ^2) , as obtained by varying the action with respect to the metric. Note that the curvature coupling contributes to the stress-energy tensor even in regions where R=0, although it does so by a total derivative. We consider the spacetime metric ds^2 = -dt^2 + p(r)^2 dr^2 + r^2 dθ^2 with a deficit angle 2θ_0, meaning that the range of angular coordinate is 0… 2(π-θ_0)α, and we define σ = π/π-θ_0. To implement the deficit angle without a singularity at the origin, we introduce a profile function p(r) that ranges from 1/σ at the origin to 1 at the string radius r_0. The nonzero Christoffel symbols in this geometry are Γ_r r^r = p'(r)/p(r) Γ_θθ^r = -r/p(r)^2 Γ_θ r^θ = Γ_r θ^θ = 1/r , and because the geometry only has curvature in two dimensions, all the nonzero components of the Riemann and Ricci tensors R_θ rθ^r = -R_θθ r^r = R_θθ = g_θθ R/2 R_rθ r^θ = -R_rrθ^θ = R_rr = g_rr R/2 can be expressed in terms of the curvature scalar R=2/rp'(r)/p(r)^3. Acting on any scalar χ, the covariant derivatives simply become ordinary derivatives, while for second derivatives we have nontrivial contributions from the Christoffel symbols given above, ∇_θ∇_θχ = ∂_θ^2 χ - Γ_θθ^r ∂_r χ ∇_r ∇_r χ = ∂_r^2 χ - Γ_rr^r ∂_r χ ∇_r ∇_θχ = ∇_θ∇_r χ = ∂_θ∂_r χ - Γ_rθ^θ∂_θχ , and, as a result, covariant derivatives with respect to θ can be nonzero even if χ is rotationally invariant. In particular, we have (g^θθ∇_θ∇_θ + g^rr∇_r ∇_r) χ = 1/r^2(∂^2 χ/∂θ^2 + D_r^2 )χ, where D_r = r/p(r)∂/∂ r is the radial derivative. 
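As a quick consistency check, added here for the reader and not part of the original derivation, the nonzero Christoffel symbols quoted above follow directly from \Gamma_{\mu\nu}^{\lambda} = \tfrac{1}{2} g^{\lambda\rho}\left(\partial_\mu g_{\rho\nu} + \partial_\nu g_{\rho\mu} - \partial_\rho g_{\mu\nu}\right) with g_{tt} = -1, g_{rr} = p(r)^2, and g_{\theta\theta} = r^2:
\Gamma_{rr}^{\,r} = \tfrac{1}{2} g^{rr}\,\partial_r g_{rr} = \frac{2\,p\,p'}{2\,p^{2}} = \frac{p'(r)}{p(r)}\,,\qquad
\Gamma_{\theta\theta}^{\,r} = -\tfrac{1}{2} g^{rr}\,\partial_r g_{\theta\theta} = -\frac{r}{p(r)^{2}}\,,\qquad
\Gamma_{r\theta}^{\,\theta} = \Gamma_{\theta r}^{\,\theta} = \tfrac{1}{2} g^{\theta\theta}\,\partial_r g_{\theta\theta} = \frac{1}{r}\,,
while all components involving t vanish because g_{tt} is constant.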
By the symmetry of the string configuration and vacuum state, the vacuum expectation value of ϕ^2 will depend only on r, and we can write the energy density as ⟨ T_tt⟩ =⟨1/2 (∂_t ϕ)^2 +1/2r^2( D_r ϕ)^2 + 1/2r^2(∂_θϕ)^2 + ξ/2 Rϕ^2 - ξ/r^2 D_r^2 (ϕ^2) ⟩ , where ϕ obeys the equation of motion (∂^2/∂ t^2 - 1/r^2 D_r^2 - 1/r^2∂^2/∂θ^2 + ξ R) ϕ = 0 . Again by symmetry, we have 1/2∂_t^2 ⟨ϕ^2 ⟩ = ∂_t ⟨ϕ (∂_t ϕ) ⟩=0, and so ⟨ϕ (∂_t^2 ϕ) ⟩ = - ⟨ (∂_t ϕ)^2 ⟩, and similarly for θ. Using these results and D_r^2 (ϕ^2) = 2 ( D_r ϕ)^2 + 2 ϕ D_r^2 ϕ, together with the equation of motion, we obtain ⟨1/4 r^2 D_r^2 (ϕ^2) ⟩ = ⟨1/2r^2( D_r ϕ)^2 -1/2 (∂_t ϕ)^2 + 1/2r^2 (∂_θϕ)^2 + ξ/2 Rϕ^2 ⟩ , yielding a simplified expression in terms of a consolidated derivative term, ⟨ T_tt⟩ = ⟨ (∂_t ϕ)^2 + (1/4- ξ) 1/r^2 D_r^2 (ϕ^2) ⟩ . This form will be more convenient for organizing the calculation, particularly with regard to renormalization using the techniques of Ref. <cit.>, but for numerical calculation we will find it preferable to re-expand the second derivative in terms of squared first derivatives. Our primary tool will be the Green's function G_σ(r,r',κ) for imaginary wave number k=iκ, which obeys -(1/r^2 D_r^2 + 1/r^2∂^2/∂θ^2 - ξ R - ) G_σ(r,r',κ) = 1/r p(r) . We consider two profile functions <cit.>: the “flowerpot,” p^ flower(r) = 1/σ r< r_0 1 r>r_0 and the “ballpoint pen,” p^ pen(r) = [σ^2-r^2/r_0^2 (σ^2-1)]^-1/2 r<r_0 1 r>r_0 where r_0 is the string radius, as shown in Fig. <ref>. The flowerpot has zero curvature everywhere except for a δ-function contribution at the string radius, while the ballpoint pen has constant curvature R=2 (σ^2-1)/r_0^2 inside and zero curvature outside. We write the Green's function in the scattering form G_σ(r,r',κ) = σ/π∑_ℓ=0^∞' ψ_,ℓ^ reg (r_<) ψ_,ℓ^ out (r_>) cos[σℓ (θ-θ')] where the prime on the sum indicates that the ℓ=0 term is counted with a weight of one-half, arising because we have written the sum over nonnegative ℓ only. The radial wavefunctions obey the equation [-1/r^2 D_r^2 + ℓ^2σ^2/r^2 + ξ R + ]ψ_,ℓ(r) = 0 , where the regular solution is defined to be well-behaved at r=0, while the outgoing solution obeys outgoing wave boundary conditions for r→∞, normalized to unit amplitude, and r_< (r_>) is the smaller (larger) axial radius of r and r'. The functions are normalized so that they obey the Wronskian relation d/dr(ψ_,ℓ^ reg(r)) ψ_,ℓ^ out(r) - ψ_,ℓ^ reg(r) d/dr(ψ_,ℓ^ out(r)) = p(r)/r , which provides the appropriate jump condition for the Green's function. As shown in Ref. <cit.>, the renormalized energy density of a scalar field in flat spacetime with a background potential that is spherically symmetric in m dimensions and independent of n dimensions after a single “tadpole” subtraction can be written as ⟨ H⟩_ ren = -1/2(4π)^n+1/2Γ(n+3/2)∑_ℓ=0^∞ D^m_ℓ∫_0^∞ dκ 2κ^n+2[ G_ℓ,m(r,r, κ) - G_ℓ,m^ free(r,r, κ) 1/1. . ×(1 - (2-m)V_ℓ(r)/2κ^2) -n+1/κ^2(1/4 - ξ)1/r^2m-2 D^2_r,m G_ℓ,m(r,r, κ) ] , where we have included the contribution from the curvature coupling ξ, which contributes to Eq. (<ref>) even when R=0. Here 1/r^2m-2 D^2_r,m with D_r,m = r^m-1/p(r)∂/∂ r is the radial Laplacian in m dimensions, V_ℓ(r) is the scattering potential in channel ℓ with degeneracy factor D^m_ℓ, and we have decomposed the m-dimensional Green's function for equal angles into its component G_ℓ,m(r,r',κ) in each channel, which obeys the equation (- D_r,m^2 + V_ℓ(r) + ℓ(ℓ+m-2)/r^2 + κ^2) G_ℓ,m(r,r',κ) = δ^(m)(r-r') in terms of the m-dimensional δ-function. 
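A small arithmetic aside, added here for later reference: reading the overall prefactor of the renormalized energy density above as -1/\left[2\,(4\pi)^{(n+1)/2}\,\Gamma\!\left(\tfrac{n+3}{2}\right)\right], the case n = 0 gives
(4\pi)^{1/2}\,\Gamma\!\left(\tfrac{3}{2}\right) = 2\sqrt{\pi}\cdot\tfrac{\sqrt{\pi}}{2} = \pi \quad\Longrightarrow\quad -\frac{1}{2\,(4\pi)^{1/2}\,\Gamma(3/2)} = -\frac{1}{2\pi}\,,
with \kappa^{n+2} \to \kappa^{2} and the factor (n+1) multiplying the derivative term reducing to 1.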
We will focus on the case of m=2 and n=0, so a single subtraction will be sufficient. The case of a three-dimensional string with m=2 and n=1 works similarly, but requires additional renormalization counterterms due to the higher degree of divergence. § POINT STRING AND KONTOROVICH-LEBEDEV APPROACH We begin by reviewing the case of the “point string,” <cit.> where the radius r_0 of the string core is taken to zero. The scattering solutions can be obtained using the same techniques as for a conducting wedge <cit.>, but with periodic rather than perfectly reflecting boundary conditions. The normalized scattering functions are ψ_,ℓ^ reg,point(r) = I_σℓ( r) and ψ_,ℓ^ out,point(r) = K_σℓ( r), and the Green's function becomes G_σ^ point(r,r',κ) = σ/π∑_ℓ=0^∞' I_σℓ( r_<) K_σℓ( r_>) cos[ℓσ (θ-θ')] , Setting σ=1, we obtain the free Green's function G^ free(r,r',κ) = 1/π∑_ℓ=0^∞' I_ℓ(( r_<) K_ℓ( r_>) cos[ℓ (θ-θ')] = 1/2π K_0(|r e^iθ-r'e^iθ'|) . A useful computational tool is to replace the sum over the angular quantum number ℓ by a contour integral, based on the Kontorovich-Lebedev transformation. In this approach <cit.>, one multiplies the summand by π (-1)^ℓ/sinπℓ, which has poles of unit residue at all integers ℓ. Because the summand has no other poles in the right half of the complex plane, the original sum over nonnegative ℓ then equals the integral of this product over a contour that goes down the imaginary axis and returns by a large semicircle at infinity, taking into account the factor of 2π i from Cauchy's theorem. The infinitesimal semicircle needed to go around the pole at ℓ=0 accounts for the factor of one-half associated with that term in the sum. For the functions we consider, the integral over the large semicircle vanishes, while the contributions from the negative and positive imaginary axis can be folded into a single integral, which often can be simplified through identities such as K_ν(x)=π/2I_-ν(x)-I_ν(x)/sinνπ , which is valid for any ν that is not a real integer. We illustrate this approach using the point string. To compute finite quantum corrections, we will want to take the difference between the full Green's function in Eq. (<ref>) and the free Green's function in Eq. (<ref>), in the limit where the points become coincident, meaning that the individual Green's functions diverge. However, the necessary cancellation does not emerge term-by-term in the sum, and as a result the standard calculations for this case <cit.> first carry out the integral over κ, taking advantage of the availability of analytic results in three space dimensions, which do not exist in our case. In contrast, using the Kontorovich-Lebedev approach as described above, we obtain G^ free(r,r',κ) = 1/π^2∫_0^∞ dλ K_iλ( r) K_iλ( r') cosh[λ(π - |θ-θ'|)] , where we have used ℓ = i λ and the π term in the hyperbolic cosine reflects the (-1)^ℓ factor above. Note that the jump condition now emerges from the angular rather than the radial component. Similarly, by letting σℓ = i λ, we obtain for the point string G_σ^ point(r,r',κ) = 1/π^2∫_0^∞ dλ K_iλ( r) K_iλ( r') cosh[λ(π/σ - |θ-θ'|)] sinhλπ/sinhλ/σπ , and the difference between Green's functions can be computed by subtraction under the integral sign, yielding after simplification Δ G_σ^ point(r,r,κ) =G_σ^ point(r,r,κ) - G^ free(r,r,κ) = 1/π^2∫_0^∞ dλ K_iλ( r) K_iλ( r) sinh[λ/σπ (σ-1)]/sinhλ/σπ , where we have now taken the limit of coincident points since the difference of Green's functions is nonsingular. § SCATTERING WAVEFUNCTIONS Following Ref. 
<cit.>, we next compute the regular and outgoing scattering wavefunctions for both the flowerpot and ballpoint pen, each of which will be computed piecewise, with separate expressions inside and outside of the string. In regions where p(r) is constant, namely for r>r_0 in both models and r<r_0 for the flowerpot, we have R=0 and the solutions to Eq. (<ref>) are modified Bessel functions I_σ̃ℓ( r_*) and K_σ̃ℓ( r_*), where ℓ is an integer, σ̃= σ p(r), and r_* = p(r) r is the physical radial distance. For r<r_0 in the ballpoint pen model, the solutions are Legendre functions P_ν()^ℓ(1σ p(r)) and Q_ν()^ℓ(1σ p(r)) with ν() (ν() + 1) = -(r_0^2κ^2/σ^2 -1+ 2 ξ), so that ν() = -1/2 + 1/2√((1-8ξ) - 4 r_0^2/σ^2-1) . We can thus write the full solutions as ψ_,ℓ^ pen(r) = [ r<r_0 r>r_0regular A_,ℓ^ pen P_ν()^ℓ(1σ p(r)) I_σℓ( r) + B_,ℓ^ pen K_σℓ( r) outgoing C_,ℓ^ pen P_ν()^ℓ(1σ p(r)) + D_,ℓ^ pen Q_ν()^ℓ(1σ p(r)) K_σℓ( r) ] for the ballpoint pen and ψ_,ℓ^ flower(r) = [ r<r_0 r>r_0regular A_,ℓ^ flower I_ℓ(r/σ) I_σℓ( r) + B_,ℓ^ flower K_σℓ( r) outgoing C_,ℓ^ flower I_ℓ(r/σ) + D_,ℓ^ flower K_ℓ(r/σ) K_σℓ( r) ] for the flowerpot. In these expressions, for r>r_0 the coefficient of the outgoing wave is normalized to one, and then we can also set the coefficient of the first-kind solution in the regular wave to one by the Wronskian relation, Eq. (<ref>). For r<r_0, the regular solution must be proportional to the the first-kind function, since it is the only solution regular at the origin. In the ballpoint pen model, both the wavefunction and its first derivative are continuous at r=r_0, while in the flowerpot model the wavefunction and the quantity r/p(r)d/drψ_,ℓ^ flower(r) + 2 ξψ_,ℓ^ flower(r)/p(r) are continuous at r=r_0 (note that p(r) is discontinuous). The boundary conditions for the function and its first derivative at r=r_0 thus yield four equations for the four unknown coefficients. In addition, from the Wronskian relation for r<r_0 we know that A_,ℓ^ pen D_,ℓ^ pen = 1/σΓ(ν()-ℓ +1)/Γ(ν()+ℓ +1) and A_,ℓ^ flower D_,ℓ^ flower = 1/σ . Given this result, for brevity we quote only the remaining combinations we will need to form the Green's function, C_,ℓ^ pen/D_,ℓ^ pen = - (σ^2-1)Q_ν()^ℓ'(1σ) K_σℓ( r_0) + σκ r_0 Q_ν()^ℓ(1σ) K_σℓ'( r_0) / (σ^2-1)P_ν()^ℓ'(1σ) K_σℓ( r_0) + σκ r_0 P_ν()^ℓ(1σ) K_σℓ'( r_0) B_,ℓ^ pen = - (σ^2-1)P_ν()^ℓ'(1σ) I_σℓ( r_0) + σκ r_0 P_ν()^ℓ(1σ) I_σℓ'( r_0) / (σ^2-1)P_ν()^ℓ'(1σ) K_σℓ( r_0) + σκ r_0 P_ν()^ℓ(1σ) K_σℓ'( r_0) and C_,ℓ^ flower/D_,ℓ^ flower = - K_ℓ'(r_0/σ) K_σℓ( r_0) - K_ℓ(r_0/σ) K_σℓ'( r_0) + 2 ξ(σ-1)/ r_0 K_ℓ(r_0/σ) K_σℓ( r_0) / I_ℓ'(r_0/σ) K_σℓ( r_0) - I_ℓ(r_0/σ) K_σℓ'( r_0) + 2 ξ(σ-1)/ r_0 I_ℓ(r_0/σ) K_σℓ( r_0) B_,ℓ^ flower = - I_ℓ'(r_0/σ) I_σℓ( r_0) - I_ℓ(r_0/σ) I_σℓ'( r_0) + 2 ξ(σ-1)/ r_0 I_ℓ(r_0/σ) I_σℓ( r_0) / I_ℓ'(r_0/σ) K_σℓ( r_0) - I_ℓ(r_0/σ) K_σℓ'( r_0) + 2 ξ(σ-1)/ r_0 I_ℓ(r_0/σ) K_σℓ( r_0) , where prime denotes a derivative with respect to the function's argument. Finally, we note that when ℓ is not a real integer, as will arise in situations we consider below, for r<r_0 it is computationally preferable to take as independent solutions P_ν()^ℓ and P_ν()^-ℓ rather than P_ν()^ℓ and Q_ν()^ℓ for the ballpoint pen, and I_ℓ and I_-ℓ rather than I_ℓ and K_ℓ for the flowerpot. With these replacements made throughout, the same formulae hold as above, except that the right-hand side of Eq. (<ref>) becomes π/2 σsinπℓ in both cases. 
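To illustrate how these matching conditions are used in practice, the sketch below evaluates the exterior flowerpot coefficient B_κ,ℓ^flower from the expression above using scipy's modified Bessel functions. The helper name is ours, and the factors of κ in the Bessel-function arguments, which appear to have been lost in the extracted formulas, are restored by hand; treat this as an illustrative sketch under those assumptions rather than the authors' code.

```python
from scipy.special import iv, kv, ivp, kvp

def B_flowerpot(l, kappa, sigma, r0, xi):
    """Exterior scattering coefficient B_{kappa,l} for the flowerpot profile.

    Interior solutions are I_l(kappa r / sigma); exterior solutions are
    I_{sigma l}(kappa r) and K_{sigma l}(kappa r). Primes are derivatives
    with respect to the full argument, as in the text.
    """
    x_in, x_out = kappa * r0 / sigma, kappa * r0      # interior / exterior arguments at r = r0
    c = 2.0 * xi * (sigma - 1.0) / (kappa * r0)       # delta-function curvature term
    num = (ivp(l, x_in) * iv(sigma * l, x_out)
           - iv(l, x_in) * ivp(sigma * l, x_out)
           + c * iv(l, x_in) * iv(sigma * l, x_out))
    den = (ivp(l, x_in) * kv(sigma * l, x_out)
           - iv(l, x_in) * kvp(sigma * l, x_out)
           + c * iv(l, x_in) * kv(sigma * l, x_out))
    return -num / den

# Example call with minimal coupling (xi = 0):
print(B_flowerpot(l=1, kappa=0.5, sigma=1.2, r0=1.0, xi=0.0))
```

The analogous ballpoint-pen coefficients follow the same pattern with the interior Bessel functions replaced by Legendre functions of degree ν(κ).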
§ RENORMALIZATION: FREE GREEN'S FUNCTION SUBTRACTION In regions where p(r) is constant, we have a flat spacetime (although possibly with a deficit angle), and so to obtain renormalized quantities we must only subtract the contribution of the free Green's function. As in the case of the point string, however, the necessary cancellation may not appear term by term in the sum over ℓ, making numerical calculations difficult. For r>r_0, we again use the approach of Ref.<cit.> and consider the difference between the full string and a point string with the same σ. We can then add the difference between the point string and empty space using Eq.(<ref>). We obtain, in both models, G_σ(r,r',κ) -G_σ^ point(r,r',κ) = σ/π∑_ℓ=0^∞' B_,ℓ K_σℓ( r) K_σℓ( r') cos[ℓ (θ-θ')] for r,r'>r_0, written in terms of the scattering coefficient described above for each model. For the flowerpot model with r<r_0, we also have flat space, although now corresponding to zero interior deficit angle σ̃= p(r) σ = 1, and with the physical distance to the origin given by r_*=r/σ. We can therefore subtract the free Green's function directly, evaluating it the same physical distances, G_σ^ flower(r,r',κ) -G^ free(r_*,r'_*,κ) = 1/π∑_ℓ=0^∞' C_,ℓ^ flower/D_,ℓ^ flower I_σℓ( r) I_σℓ( r') cos[ℓ (θ-θ')] . We can evaluate these expressions at coincident points, since the singularity cancels through the subtraction. For r<r_0 in the ballpoint pen model, we will use a hybrid of these subtractions. First, we define r_* = r_0/√(σ^2 - 1)arccos1/σ p(r) ⇔ r = r_0 σ/√(σ^2 - 1)sinr_* √(σ^2 - 1)/r_0 so that dr_*/dr = p(r) and r_* represents the physical distance to the origin. We then subtract the contribution from a point string with deficit angle σ̃= σ p(r), corresponding to the angle deficit at that point, evaluated at r_*. As above, we add back in the contribution of this point string using the results of the previous section. There is one further subtlety in this calculation. The free Green's function, what we ultimately subtract, depends only on the separation between points, and thus is unchanged by translation or rescaling. However, to carry out the subtraction, we must separate the points by a distance ϵ in both Green's functions, and then take the limit of the difference as ϵ goes to zero. The limit should correspond to splitting the points by the same physical distance. Since lim_ϵ→ 0[K_0(aϵ) - K_0(ϵ)] = - log a , we therefore must subtract 1/2πlogr_*/r p(r) to correct for this discrepancy. Thus we obtain, for r<r_0, G_σ^ pen(r,r,κ) -G_σ̃^ point(r_*,r_*,κ) = 1/π∑_ℓ=0^∞' [ Γ(ν()-ℓ +1)/Γ(ν()+ℓ +1) P_ν()^ℓ(1σ p(r)) (C_,ℓ^ pen/D_,ℓ^ pen P_ν()^ℓ(1σ p(r)) + Q_ν()^ℓ(1σ p(r))) - σ̃I_σ̃ℓ( r_*) K_σ̃ℓ( r_*)] -1/2πlogr_*/r p(r) in the limit of coincident points. § RENORMALIZATION: TADPOLE SUBTRACTION In the case of the ballpoint pen for r<r_0, the string background effectively creates a background potential, leading to additional counterterms. Following Ref. <cit.>, we use dimensional regularization and consider configurations that are trivial in n dimensions and spherically symmetric in m dimensions, meaning that a string in three space dimensions corresponds to the case of n=1, m=2. After integrating over the n trivial directions, the contribution G_ℓ(r,r',κ) to the Green's function from angular momentum channel ℓ in m dimensions is replaced by the subtracted quantity G_ℓ,m(r,r,κ) - G^ free_ℓ,m(r,r,κ) + (2-m)V_ℓ(r)/2κ^2 G^ free_ℓ,m(r,r,κ) where V_ℓ(r) is the background potential for that channel. 
Here the first subtraction represents the free background and the second represents the tadpole graph. Since we are interested in m=2, the latter contribution appears to vanish. However, it multiplies the free Green's function at coincident points, which diverges, so we must take the limit carefully. To do so, we consider the free radial Green's function in m dimensions for channel ℓ, G^ free_ℓ,m(r,r',κ) = Γ(m/2)/2 π^m/21/(r r')^m/2-1 I_m/2 -1+ℓ(κ r_<) K_m/2 -1+ℓ(κ r_>) . Its contribution is weighted by the degeneracy factor D^m_ℓ = Γ(m+ℓ-2)/Γ(m-1)Γ(ℓ+1)(m+2ℓ-2) , which has the following limits as special cases D^m=2_ℓ = 2 - δ_ℓ 0 D^m=1_ℓ = δ_ℓ 0 + δ_ℓ 1 D^m=0_ℓ = δ_ℓ 0 , expressed in terms of the Kronecker δ symbol. For equal angles, the free Green's function is then given by the sum G^ free_m(r,r',κ) = 1/(2π)^m/2(κ/|r-r'|)^m/2 - 1 K_m/2 -1(κ |r-r'|) = ∑_ℓ=0^∞ D^m_ℓ G^ free_ℓ,m(r,r',κ) . To bring the points together, we expand around r=r'=0 (since the free Green's function only depends on their difference, we may choose to have them both approach any point we choose), in which case we have (2-m) D^m_ℓ G^ free_ℓ,m(r,r,κ) = -(2-m)[ (κ r/2)^2ℓκ ^m-2(m+2 ℓ) Γ(m/2) Γ(2-m/2-ℓ) Γ(m + ℓ - 2)/(4π)^m/2Γ(m-1) Γ(ℓ+1) Γ(1+m/2+ℓ) + O(r^2-m)] , where, crucially, we have dropped terms of order r^2-m because we approach m=2 from below, where the integrals converge, and so these terms vanish for r→ 0. When ℓ≠ 0, the term we have kept also vanishes for r→ 0. However, for ℓ=0, it goes to 1/2π in the limit m→ 2, and so we have found lim_m→ 2[(2-m) D^m_ℓ G^ free_ℓ,m(r=0,r=0,κ) ] = 1/2πδ_ℓ 0 ⟹ lim_m→ 2[(2-m) ∑_ℓ=0^∞ D^m_ℓ G^ free_ℓ,m(r,r,κ) V_ℓ(r)] = 1/2πV_ℓ=0(r) , with the result for the summed Green's function depending only on the contribution of the potential in the ℓ=0 channel. To find the potential V_ℓ(r), we rewrite the wave equation for r<r_0 using the physical radius r_* given in Eq.(<ref>). The rescaled wavefunction ϕ_,ℓ(r_*) = √(r)ψ_,ℓ(r) then obeys <cit.> [-d^2/dr_*^2 + σ^2 (ℓ^2-1/4/r^2) -1/4(σ^2-1/r_0^2) +ξ R + ] ϕ_,ℓ(r_*) = 0 , where the denominator of the second term represents r as a function of r_*. A free particle would instead obey the equation [-d^2/dr_*^2 + (ℓ^2-1/4/r_*^2) + ] ϕ^ free_,ℓ(r_*) = 0 , and so we can consider the difference between the two expressions in brackets as a scattering potential, V_ℓ^ full(r) = (ℓ^2-1/4) σ ^2/r^2 -(ℓ^2-1/4) (σ^2-1)/r_0^2 (arccos1/σ p(r))^2 + (8 ξ - 1) (σ ^2 - 1)/4 r_0^2 . For the counterterm, we need only the ℓ=0 case, and the tadpole subtraction is given by the leading order in perturbation theory. We take σ^2-1 as the coupling constant. Expanding to leading order in this quantity, we obtain the tadpole contribution for r<r_0, V_ℓ=0(r) = 2σ^2 - 1/r_0^2(ξ -1/6) = R(ξ - 1/6) , which is independent of r and vanishes for conformal coupling in three dimensions. Note that this term exactly coincides with the first-order heat kernel coefficient <cit.>. Putting these results together, we obtain the subtracted Green's function summed over angular momentum channels G_σ(r,r,κ) - G^ free(r,r,κ) + V_ℓ=0(r)/4πκ^2 = G_σ(r,r,κ) - G^ free(r,r,κ) + R/4πκ^2(ξ - 1/6) in the limit where m→ 2 and the points are coincident. Furthermore, we can pull the last term of Eq. (<ref>) inside the sum used to define the Green's function by using a special case of the addition theorem for Legendre functions, 1 = 2 ∑_ℓ=0^∞' Γ(ν()-ℓ +1)/Γ(ν()+ℓ +1) P_ν()^ℓ(1σ p(r))^2 . § DERIVATIVE TERM Next we compute the derivative term 1/r^2 D_r^2 G_σ(r,r, κ) by differentiating the expressions above. 
We note that for any two functions of ψ_A(r) and ψ_B(r), D_r^2 (ψ_A(r) ψ_B(r)) = 2( D_r ψ_A(r))( D_r ϕ_B(r)) + ψ_A(r) ( D_r^2 ψ_B(r)) + ( D_r^2 ψ_A(r) ) ψ_B(r) , and so by using the equations of motion, we can express the terms involving squares of first derivatives in terms of the second derivative of the product, and vice versa. Accordingly, for any pair of solutions ψ_A(r) and ψ_B(r) obeying Eq. (<ref>), we have 1/2 r^2 D_r^2 (ψ_,ℓ^A (r) ψ_,ℓ^B(r)) = ((σℓ)^2/r^2 + ξ R + ) ψ_,ℓ^A(r) ψ_,ℓ^B(r) + 1/r^2( D_r ψ_,ℓ^A(r)) ( D_rψ_,ℓ^B(r)) , and we note that 1/p(r)d/dr P_ν()^ℓ(1σ p(r)) = -r(σ^2-1)/r_0^2 σ P_ν()^ℓ'(1σ p(r)) , and similarly for Q_ν()^ℓ, and so by using recurrence relations we can simplify 1/2 r^2 D_r^2 [ P_ν()^ℓ(1σ p(r)) Z_ν()^ℓ(1σ p(r))] = ((σℓ)^2/r^2 + ξ R + ) P_ν()^ℓ(1σ p(r)) Z_ν()^ℓ(1σ p(r)) +1/r^2[r√(σ^2-1)/r_0 P_ν()^ℓ+1(1σ p(r)) + ℓ/p(r) P_ν()^ℓ(1σ p(r))] ×[r√(σ^2-1)/r_0 Z_ν()^ℓ+1(1σ p(r)) + ℓ/p(r) Z_ν()^ℓ(1σ p(r)) ] where Z is either P or Q; similar simplifications based on recurrence relations apply for Bessel functions. Renormalization of the derivative term in the curved space background requires an additional counterterm compared to the flat space expression in Eq. (<ref>). This subtraction is also proportional to the curvature scalar R. The renormalized derivative term becomes 1/r^2 D_r^2 G_σ(r,r,κ) - 1/4π R , where again the counterterm is proportional to the Ricci scalar and we have taken the points to be coincident. With this choice, the full tadpole counterterm contribution to the modified integrand of Eq.(<ref>) with m=2 and n=0 is κ^2 R/4πκ^2(ξ - 1/6) + (1/4-ξ) R/4π = R/48π , consistent with the general result originating from the two-dimensional conformal anomaly <cit.>, since our geometry is only curved in two dimensions. As above, we can pull this term inside the sum using Eq. (<ref>). Finally, as before we find it more computationally tractable to compute the derivative term for the difference of the full Green's function and the corresponding point string. We must then add back the derivative of the point string contribution as well. In both what we subtract and add back in, we define the radial derivative for the point string contribution taking p(r) constant, corresponding to the derivative we would use in the point string case. § KONTOROVICH-LEBEDEV APPROACH FOR NONZERO WIDTH STRING As with the point string above, we can express the Green's function as an integral over imaginary angular momentum λ using the Kontorovich-Lebedev approach. As described above, we take the pairs P_ν()^ℓ and P_ν()^-ℓ and I_ℓ and I_-ℓ as the independent solutions for r<r_0 in the ballpoint pen and flowerpot models respectively. The Green's function becomes G_σ(r,r',κ) = 1/2π∫_0^∞idλ/sinhλπ/σ[ψ_,iλ/σ^ reg (r_<) - ψ_,-iλ/σ^ reg (r_<) ] ψ_,iλ/σ^ out (r_>) cosh[λ(π/σ - |θ-θ'|)] , where we have used that the outgoing wave is always even in λ. For r>r_0, we can use Wronskian relationships to simplify i/sinhλπ/σ[ψ_,iλ/σ^ reg,pen (r) - ψ_,-iλ/σ^ reg,pen (r) ] = 2/πσ^3 K_iλ( r)/|(σ^2-1)P_ν()^iλ/σ'(1σ) K_iλ( r_0) + σκ r_0 P_ν()^iλ/σ(1σ) K_iλ'( r_0)|^2 for the ballpoint pen and i/sinhλπ/σ[ψ_,iλ/σ^ reg,flower (r) - ψ_,-iλ/σ^ reg,flower (r) ] = 2/πσ/κ^2 r_0^2 K_iλ( r)/|I_iλ/σ'(r_0/σ) K_iλ( r_0) - I_iλ/σ(r_0/σ) K_iλ'( r_0) + 2 ξ(σ-1)/ r_0 I_iλ/σ(r_0/σ) K_iλ( r_0)|^2 for the flowerpot. For r<r_0, we have i/sinhλπ/σ[ψ_,iλ/σ^ reg (r) - ψ_,-iλ/σ^ reg (r) ] = i/sinhλπ/σA_,iλ/σ/C_,iλ/σψ_,iλ/σ^ out (r) , and so in both cases the Green's function is written entirely in terms of outgoing waves. 
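The cancellation of the ξ dependence in the combined counterterm R/48π quoted above is simple enough to check symbolically; the following sympy snippet is our own verification of that algebra and not part of the original derivation.

```python
import sympy as sp

xi, R, kappa = sp.symbols('xi R kappa', positive=True)

# kappa^2 * [R/(4 pi kappa^2)](xi - 1/6) from the tadpole subtraction, plus
# (1/4 - xi) * R/(4 pi) from the derivative-term counterterm:
total = kappa**2 * R / (4 * sp.pi * kappa**2) * (xi - sp.Rational(1, 6)) \
        + (sp.Rational(1, 4) - xi) * R / (4 * sp.pi)

print(sp.simplify(total))  # -> R/(48*pi); the xi dependence cancels
```

We now return to the imaginary-angular-momentum representation introduced above.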
We can then subtract the free Green's function in the form of Eq. (<ref>). For numerical calculation, however, we find that this approach is only effective for r>r_0. § RESULTS Collecting all of these terms, we have the full expression for the renormalized energy density, written with the point string subtracted and then added back in, ⟨ H⟩_ ren = -1/π∫_0^∞ dκ[κ^2 ( G_σ(r,r,κ) -G_σ̃^ point(r_*,r_*,κ) +Δ G_σ̃^ point(r_*,r_*,κ) -1/2πlogr_*/r p(r)) -(1/4-ξ) 1/r^2( D_r^2 G_σ(r,r,κ) -D̅_r^2 G_σ̃^ point(r_*,r_*,κ) +D̅_r^2 Δ G_σ̃^ point(r_*,r_*,κ) ) + R/48π] where r_* is the physical distance in each model as defined above (with r_*=r for r>r_0) and σ̃= p(r) σ in each region (with σ̃=σ for r>r_0). Here we have defined D̅_r = r d/dr so that we add and subtract derivatives of the point string in the background of a flat spacetime with a deficit angle, as described above. The combined counterterm R/48π (with R=0 for r>r_0, and for r<r_0 in the flowerpot model) is obtained by combining the two individual terms obtained in Sec. <ref> using Eq. (<ref>). In both the first and second lines of Eq. (<ref>), the contribution from the difference between the full and point string Green's functions can be taken inside the sum over ℓ, using the results in Sec. <ref> and Eq. (<ref>), while the contribution from the difference between the point string and empty space Green's functions can be computed as an integral over imaginary angular momentum using Eq. (<ref>). For the case of r>r_0, we can check our calculation using the results of Sec. <ref>, in which case Eq. (<ref>) can be expressed entirely in terms of an integral on the imaginary angular momentum axis. For that calculation, there is no need to add and subtract the point cone contribution, so we can simply subtract the free Green's function directly, using Eq. (<ref>). Sample results are shown in Figs. <ref> through <ref>, for both minimal and conformal coupling. We note that for the interior of the flowerpot, the sign of the energy density for r<r_0 is opposite at large and small deficit angles for minimal coupling, with the sign change occurring at θ_0 ≈ 1. For the ballpoint pen, we see a discontinuity at r=r_0, corresponding to the discontinuity in the curvature. For the flowerpot, the energy density diverges at r=r_0, corresponding to the δ-function curvature profile. § CONCLUSIONS We have shown how to use scattering data to compute the quantum energy density of a massless scalar field in the background on a nonzero width cosmic string background, using both the “flowerpot” and “ballpoint pen” string profiles in two space dimensions. Of particular interest is the interior of the ballpoint pen, where the background space time has nontrivial (but constant) curvature. We precisely specify counterterms corresponding to renormalization of both the cosmological constant and the gravitational coupling to the scalar curvature R. In addition, to make the calculation tractable numerically, we subtract and then add back in the contribution of a “point string” with the same deficit angle and physical radius. We can then subtract the free space contribution, corresponding to the cosmological constant renormalization, by combining it with the point string result and using analytic continuation of the angular momentum sum to an integral over the imaginary axis. These results extend straightforwardly to three dimensions, but that case requires an additional subtraction of order R^2. § ACKNOWLEDGMENTS It is a pleasure to thank K. 
Olum for sharing preliminary work on this topic and H. Weigel for helpful conversations and feedback. M. K., X. L., and N. G. were supported in part by the National Science Foundation (NSF) through grant PHY-2205708.
http://arxiv.org/abs/2409.02615v1
20240904111230
USEF-TSE: Universal Speaker Embedding Free Target Speaker Extraction
[ "Bang Zeng", "Ming Li" ]
eess.AS
[ "eess.AS", "cs.SD" ]
USEF-TSE: Universal Speaker Embedding Free Target Speaker Extraction Bang Zeng, Ming Li July 2024 ==================================================================== § ABSTRACT Target speaker extraction aims to isolate the voice of a specific speaker from mixed speech. Traditionally, this process has relied on extracting a speaker embedding from a reference speech, necessitating a speaker recognition model. However, identifying an appropriate speaker recognition model can be challenging, and using the target speaker embedding as reference information may not be optimal for target speaker extraction tasks. This paper introduces a Universal Speaker Embedding-Free Target Speaker Extraction (USEF-TSE) framework that operates without relying on speaker embeddings. USEF-TSE utilizes a multi-head cross-attention mechanism as a frame-level target speaker feature extractor. This innovative approach allows mainstream speaker extraction solutions to bypass the dependency on speaker recognition models and to fully leverage the information available in the enrollment speech, including speaker characteristics and contextual details. Additionally, USEF-TSE can seamlessly integrate with any time-domain or time-frequency domain speech separation model to achieve effective speaker extraction. Experimental results show that our proposed method achieves state-of-the-art (SOTA) performance in terms of Scale-Invariant Signal-to-Distortion Ratio (SI-SDR) on the WSJ0-2mix, WHAM!, and WHAMR! datasets, which are standard benchmarks for monaural anechoic, noisy and noisy-reverberant two-speaker speech separation and speaker extraction. Target speaker extraction, speaker recognition model, speaker embedding free. § INTRODUCTION Humans have the remarkable ability to selectively focus on a specific speech signal in a noisy environment, a skill often referred to as the cocktail party problem <cit.>. Much research has been devoted to enabling machines to replicate this ability. Speech separation has emerged as a practical solution to address this challenge. It plays a vital role as a preliminary step for various speech signal processing technologies, including speech recognition, speaker recognition, and speaker diarization <cit.>. Traditional speech separation algorithms, such as non-negative matrix factorization (NMF) and computational auditory scene analysis (CASA) <cit.>, typically use spectro-temporal masking to isolate each speaker’s voice from mixed speech. While these methods have significantly advanced speech separation technology, they often struggle with handling nonlinear or non-stationary signals and exhibit limited generalization. With the rapid advancement of deep learning, new methods that employ deep neural networks to estimate mask matrices have emerged, offering improved performance and adaptability. Deep neural network-based speech separation algorithms, such as Deep Clustering (DC) <cit.>, Deep Attractor Network (DANet) <cit.>, and Permutation Invariant Training (PIT) <cit.>, have markedly enhanced separation performance. Moreover, DC and PIT have effectively addressed the label permutation problem. However, the time-domain audio separation networks (TasNet) <cit.> highlight a significant limitation of time-frequency domain speech separation algorithms: their inability to accurately reconstruct the phase of clean speech. 
TasNet addresses this by using convolutions and deconvolutions in place of the short-time Fourier transform (STFT) and inverse STFT (iSTFT) for modeling and reconstructing target speech. Consequently, time-domain speech separation approaches <cit.> have become widely adopted. Various time-domain methods, such as Dual-Path RNN (DPRNN) <cit.>, SepFormer <cit.> and Mossformer2 <cit.>, have significantly improved separation performance, with Mossformer2 <cit.> achieving state-of-the-art (SOTA) results. Recently, the introduction of TF-GridNet <cit.> has led to renewed excellence in time-frequency domain speech separation <cit.>, rekindling interest in this area. Despite these advances, many speech separation methods still require prior knowledge of the number of speakers in the mixed speech. This prior knowledge is not always available in real-world applications, which poses challenges for applying these separation solutions in practical scenarios. Recently, target speaker extraction <cit.> models have emerged as essential tools for addressing the limitations of traditional speech separation methods, especially in scenarios where the number of speakers in the audio mixture is unknown. By leveraging reference speech, these models can effectively isolate the target speaker’s voice from complex audio mixtures, making them highly applicable in real-world situations. This innovative approach not only improves the practicality of speech separation technology but also expands its applicability in diverse and dynamic acoustic environments. Figure <ref> depicts a typical target speaker extraction framework. In the time-frequency (T-F) domain, the encoder and decoder represent the STFT and iSTFT operations, respectively. In the time domain, these components correspond to convolution and deconvolution operations. Traditional target speaker extraction models rely on a speaker embedding extractor to obtain the target speaker’s embedding from the reference speech. This extractor, such as a ResNet model <cit.>, can be pre-trained or jointly trained with the separator through multi-task learning. Building upon the framework illustrated in Figure <ref>, various techniques have been developed to enhance speaker extraction performance. However, these models aim to maximize speaker recognition performance by training the embedding extractor. Consequently, the speaker embedding captures only the target speaker’s information, which means that traditional speaker extraction models may not fully utilize the information available in the reference speech. As a result, relying on speaker embeddings for the speaker extraction task may not be optimal. In our previous work, we are the first to propose a time-domain speaker embedding-free target speaker extraction model (SEF-Net) <cit.>. It offers a viable new solution for isolating a target speaker’s voice without relying on speaker embeddings. In SEF-Net <cit.>, both the mixed speech and reference speech are processed using twin weights-sharing conformer <cit.> encoders. SEF-Net <cit.> then employs cross multi-head attention within the transformer <cit.> decoder to implicitly leverage the speaker information in the reference speech’s conformer encoder outputs. SEF-Net <cit.> has achieved comparable performance to typical target speaker extraction models, highlighting the effectiveness of the speaker embedding-free framework for target speaker extraction tasks. 
While SEF-Net <cit.> demonstrates significant potential in target speaker extraction, it does have some limitations. Firstly, SEF-Net <cit.> requires either repetition or zero-padding to align the length of the reference speech with the mixed speech, which poses practical challenges. Secondly, SEF-Net <cit.> utilizes a multi-head cross-attention mechanism within both intra-block and inter-block modules to facilitate interaction between the features of mixed and reference speech. However, this approach does not fully exploit global information, potentially constraining the model’s performance improvements. Thirdly, SEF-Net <cit.> has yet to be tested on data with noise and reverberation, leaving its effectiveness in natural, noisy environments unverified. Lastly, unlike speaker embedding-based target speaker extraction methods, SEF-Net <cit.> is not a universal framework. It cannot be easily integrated with arbitrary time-domain or time-frequency domain speech separation networks for target speaker extraction tasks. To address the aforementioned issues, this paper proposes a Universal Speaker Embedding-Free Target Speaker Extraction (USEF-TSE) framework for monaural target speaker extraction based on SEF-Net <cit.>. USEF-TSE employs a unified encoder to process both the reference and mixed speech. The encoding of the mixed speech is utilized to query the encoding of the reference speech through a cross multi-head attention (CMHA) module. The output of the CMHA module is a frame-level feature that maintains the same length as the encoded features of the mixed speech. This frame-level feature represents the target speaker’s attributes and is fused with the encoding of the mixed speech. The resulting fused feature is then fed into a separator to model the target speaker’s voice components. This paper builds upon and extends our previous work on speaker embedding-free approaches for monaural target speaker extraction. The key contributions of this article are summarized as follows: * Improved: We address the limitation of SEF-Net, which requires the lengths of the reference and mixed speech to be the same. Additionally, we have altered the interaction position of the reference and mixed speech, enhancing the model’s ability to learn global information more effectively. Experimental results demonstrate that these modifications significantly improve the model’s performance. * Universal: We introduce USEF-TSE, a universal speaker embedding-free target speaker extraction framework. USEF-TSE can seamlessly integrate with any deep neural network-based time-domain or time-frequency domain speech separation model for target speaker extraction. To validate the effectiveness of USEF-TSE, we propose time and T-F domain target speaker extraction models, namely USEF-SepFormer and USEF-TFGridNet. Experimental results show that our proposed methods achieve SOTA performance on the WSJ0-2mix <cit.> dataset. * Robust: To assess the real-world performance of the USEF-TSE framework, we evaluated it on the WHAM! <cit.> and WHAMR! <cit.> datasets, achieving SOTA results. § RELATED WORK §.§ Target Speaker Extraction using speaker embedding Target speaker extraction technology has advanced significantly, with methods generally categorized into time and T-F domain approaches. T-F domain methods typically involve applying the STFT to mixed speech signals and then separating the speech by estimating each speaker’s T-F masks.
For instance, early approaches such as VoiceFilter <cit.> and SpeakerBeam <cit.> employed deep learning techniques for T-F mask estimation. However, a key challenge with these methods is the phase estimation problem, as accurately recovering phase information from the magnitude spectrum of the STFT is difficult. With the recognition of deep learning's advantages in time-domain signal processing, time-domain schemes have gained significant attention. These methods operate directly on time series, avoiding the phase estimation challenges associated with the STFT. For example, models such as TasNet <cit.> and Conv-TasNet <cit.> have demonstrated effectiveness in speech separation tasks by using learnable 1-D convolutional neural networks to extract features directly from speech signals. SpEx <cit.> extended this approach to target speaker extraction, improving accuracy and speech quality through multi-scale embedding coefficients and a multi-task learning framework. SpEx+ <cit.> advanced this further by proposing a comprehensive time-domain speaker extraction solution. By sharing weights between two identical speech encoder networks, SpEx+ addressed the feature space mismatch in SpEx <cit.>. SpEx_pc <cit.> enhanced the utilization of speaker information by integrating speaker embeddings with speech features to predict the target speaker’s mask, exploring how to better leverage speaker information within deep network structures for improved extraction performance. X-SepFormer <cit.> developed an end-to-end target speaker extraction model that builds upon X-vectors <cit.> and SepFormer <cit.>. X-SepFormer introduced two novel loss schemes that redefine the training objective, focusing on reconstruction performance improvement metrics at the trim segment level and using distribution information related to these metrics. This approach helps the network concentrate on segments with speaker confusion (SC) <cit.> issues, enhancing system performance while reducing computational costs. X-TF-GridNet <cit.> employs U^2-Net <cit.> and TF-GridNet <cit.> as the speaker embedding extractor and separator backbone, respectively. Additionally, X-TF-GridNet utilizes adaptive speaker embedding fusion to effectively separate and enhance the target speaker's speech in complex acoustic environments. Despite its successes, this approach may not fully leverage the contextual information available during registration. §.§ Speaker Embedding Free Target Speaker Extraction Traditional target speaker extraction methods depend on speaker embeddings, which are fixed-dimensional representations derived from reference speech. Although speaker embeddings capture speaker characteristics effectively, these methods may not fully utilize all the information in the reference speech. Crucial content details, such as local dynamics and temporal structures, are essential for guiding more effective speaker extraction. Recent advancements have explored embedding-free approaches  <cit.> that leverage frame-level acoustic features from the reference speech instead of relying on speaker embeddings. This shift addresses the above limitations of traditional methods. These methods do not require a pre-trained speaker embedding extractor or a speaker recognition loss function, focusing instead on directly processing the audio signals to extract the target speaker. 
SEF-Net <cit.> employs a dual-path structure with conformer encoders and a transformer decoder, using cross-multi-head attention to implicitly utilize the speaker information embedded in the reference speech’s conformer encoding outputs. This model achieves performance comparable to other target speaker extraction methods without relying on speaker embeddings, offering a novel solution to the embedding mismatch problem. The VE-VE Framework <cit.> introduces a Voice Extractor-Voice Extractor framework designed to handle target speaker extraction with ultra-short reference speech. This framework utilizes an RNN-based voice extractor, which relies on RNN states rather than speaker embeddings to capture speaker characteristics. This method addresses the feature fusion problem and effectively supports reference speech, though it is limited to RNN-based extraction networks. With the success of TF-GridNet <cit.>, there has been a gradual transition from time-domain to T-F domain approaches in target speaker extraction. Inspired by SEF-Net <cit.>, models such as SMMA-Net <cit.> and CIENet <cit.> employ attention mechanisms to interact with the T-F representations of both the reference and mixed signals. These models aim to achieve consistent T-F representations that guide more effective extraction by leveraging contextual information directly in the TF domain. Although these approaches have yielded relatively good results, they have not established a general framework for speaker embedding-free target speaker extraction. This paper extends our previous work on speaker embedding-free target speaker extraction <cit.> to develop a high-performance, universal, and robust framework for target speaker extraction. § UNIVERSAL SPEAKER EMBEDDING FREE TARGET SPEAKER EXTRACTION In this section, we will first review our previous work, SEF-Net <cit.>. Next, we will introduce the improvements and details of the USEF-TSE framework compared to SEF-Net <cit.>. To validate the universality and effectiveness of USEF-TSE, we will use SepFormer <cit.> and TF-GridNet <cit.> as backbones for the separator. This will involve constructing time-domain and T-F domain target speaker extraction models, named USEF-SepFormer and USEF-TFGridNet, respectively. §.§ Recap: SEF-Net SEF-Net <cit.> is a masking-based time-domain target speaker extraction network that does not use speaker embeddings. The architecture of SEF-Net <cit.> is illustrated in Figure <ref>. SEF-Net employs two weight-sharing convolutional encoders to process the mixed speech m and the reference speech r, obtaining STFT-like representations E_m∈ℝ^N × L_1 and E_r∈ℝ^N × L_2. Here, m∈ℝ^1 × T_1 and r∈ℝ^1 × T_2 represent the mixed speech and reference speech, respectively. N denotes the feature dimension, while L_1, L_2 represent the number of frames. In the segmentation stage, E_m and Er are divided into 3-D features S_m∈ℝ^N × K × S_1 and S_r∈ℝ^N × K × S_2. Here, K is the length of chunks, and S_1, S_2 denote the number of chunks. The entire process of the mixed speech can be formulated as follows: E_m = Enc(m) S_m = Seg(E_m) where Enc(·) denotes the convolutional encoder and Seg(·) represents the segmentation operation. The diagram of the masking network is shown in the center of Figure <ref>. The masking network adopts a dual-path structure similar to SepFormer <cit.>, which consists of an intra-module and an inter-module. The details of the intra-module are depicted on the right side of Figure <ref>. 
This module includes two parallel conformer encoders and a transformer decoder. The shared-weight intra-conformer encoders handle intra-chunk processing for S_m and S_r. The conformer encoding of S_r is then used to query the conformer encoding of S_m in the transformer decoder. These conformer encodings are denoted as C_m and C_r. In this setup, the cross multi-head attention layer within the transformer decoder functions as a feature fusion module: C_m = CE(S_m),C_r = CE(S_r) D_intra = TD(q=C_r; k,v=C_m) where CE(·) and TD(·) represent the conformer encoder and transformer decoder, respectively. The output of the intra-module is denoted as D_intra. The inter-module shares the same components and structure as the intra-module, performing inter-chunk processing on the permuted D_intra. The operations in the inter-conformer are the same as the intra-module. D_inter denotes the output of the inter-module. After processing through the masking network, SEF-Net <cit.> applies an overlap-add operation to D_inter, transforming it back into a 2-D feature. Finally, the decoder derives the estimation of the target speaker. §.§ Architecture Although SEF-Net <cit.> achieves performance on par with other target speaker extraction methods, it has certain limitations. In SEF-Net <cit.>, the transformer decoder is tasked with fusing mixed and reference speech features and extracting the target speaker's speech. This dual responsibility may constrain the model’s separation performance. Furthermore, in Equation <ref>, the lengths of C_m and C_r must be identical, requiring that the mixed speech length T_1 match the reference speech length T_2. This requirement significantly hampers the model’s practical applicability. To address the aforementioned issues, USEF-TSE improves and extends SEF-Net <cit.>. Figure <ref> illustrates the overall flowchart of USEF-TSE, a universal framework applicable to target speaker extraction in both the time and T-F domains. The critical components of USEF-TSE include an Encoder, CMHA module, fusion module, separator, and decoder. §.§.§ Encoder USEF-TSE uses the same encoder for both reference and mixed speech. This encoder can be STFT or a one-dimensional convolution, depending on whether the model works in the time or T-F domain. §.§.§ CMHA module Unlike SEF-Net <cit.>, which combines feature fusion and target speaker extraction within the transformer decoder, USEF-TSE separates these tasks to improve performance. The encoder outputs of m and r are fed into the CMHA module, where a cross multi-head attention mechanism is applied to extract frame-level features of the target speaker: E_spk = CMHA(q=E_m; k,v=E_r) where E_m∈ℝ^N × L_1 and E_r∈ℝ^N × L_2 represent the encoder outputs of the mixed speech and reference speech, respectively. The CMHA operation is denoted as CMHA·, and E_spk∈ℝ^N × L_1 is the output of the CMHA module. Unlike SEF-Net <cit.>, the CMHA module in USEF-TSE uses mixed speech encoding as the query. This approach produces a frame-level feature with the same length as E_m, allowing the mixed and reference speech lengths to differ in the USEF-TSE framework. §.§.§ Fusion module USEF-TSE uses E_spk as the reference feature for the target speaker and applies a feature fusion module to combine E_spk with E_m: E_f = F(E_m, E_spk) where E_f denotes the output of the fusion module. The feature fusion, represented by F(·), can be implemented using methods like feature concatenation or feature-wise linear modulation (FiLM) <cit.>. 
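To make the framework components concrete, the following PyTorch sketch shows one possible realization of the CMHA module and a FiLM-style fusion. It is a simplified illustration under our own naming and default sizes (256-dimensional features, 8 heads), not the authors' implementation; concatenation could be substituted for FiLM at the same point in the pipeline.

```python
import torch
import torch.nn as nn

class CMHAFusion(nn.Module):
    """Cross multi-head attention (query = mixture encoding, key/value = reference
    encoding) followed by FiLM fusion. Shapes follow the text: E_m is (B, N, L1),
    E_r is (B, N, L2); the output E_f is (B, N, L1)."""

    def __init__(self, n_feats=256, n_heads=8):
        super().__init__()
        self.cmha = nn.MultiheadAttention(n_feats, n_heads, batch_first=True)
        self.gamma = nn.Linear(n_feats, n_feats)   # FiLM scaling
        self.beta = nn.Linear(n_feats, n_feats)    # FiLM shifting

    def forward(self, e_m, e_r):
        q = e_m.transpose(1, 2)                    # (B, L1, N)
        kv = e_r.transpose(1, 2)                   # (B, L2, N)
        e_spk, _ = self.cmha(q, kv, kv)            # frame-level speaker feature, (B, L1, N)
        e_f = self.gamma(e_spk) * q + self.beta(e_spk)
        return e_f.transpose(1, 2)                 # back to (B, N, L1)

# Mixture and reference of different lengths are allowed:
e_m, e_r = torch.randn(2, 256, 100), torch.randn(2, 256, 60)
print(CMHAFusion()(e_m, e_r).shape)                # torch.Size([2, 256, 100])
```

Because the query comes from the mixture, the speaker feature E_spk automatically has the mixture's frame count, which is what removes the length-matching constraint of SEF-Net.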
§.§.§ Separator E_f is fed into the separator to extract the target speaker’s components. The separator can be any deep neural network (DNN) based speech separation backbone, such as DPRNN <cit.>, SepFormer <cit.>, or TF-GridNet <cit.>. In masking-based approaches, the separator’s output is element-wise multiplied with E_m before being passed to the decoder. In mapping-based approaches, the separator’s output is directly sent to the decoder. §.§.§ Decoder Corresponding to the encoder, the decoder can be either iSTFT or a one-dimensional deconvolution: E_o = S(E_f) est = Dec(M(E_o)) where E_o ∈ ℝ^N × L_1 denotes the output of the separator. M(·) refers to the masking or mapping operation. S(·) and Dec(·) represent the separator and decoder, respectively. est ∈ ℝ^1 × T_1 denotes the final estimation of the target speaker. §.§ Time and TF Domain USEF-TSE Building on the USEF-TSE framework, we propose two target speaker extraction models: USEF-SepFormer and USEF-TFGridNet. These models employ the SepFormer <cit.> and TF-GridNet <cit.> backbone networks as separators within the USEF-TSE architecture. Accordingly, USEF-SepFormer operates in the time domain, while USEF-TFGridNet operates in the T-F domain for target speaker extraction. §.§.§ USEF-SepFormer The diagram of the USEF-SepFormer is shown in Figure <ref>. In this model, two weight-sharing encoders are used to process the mixed and reference speech: E_m = ReLU(Conv1d(m)) E_r = ReLU(Conv1d(r)) where E_m ∈ ℝ^B × N × L_1 and E_r ∈ ℝ^B × N × L_2 represent the encoded outputs of the mixed and reference speech, respectively. B is the batch size. N is the feature dimension. L_1 and L_2 are the number of time steps. Using the CMHA module as described in Equation <ref>, E_m and E_r are processed to produce E_spk ∈ ℝ^B × N × L_1. To align with the SepFormer <cit.> backbone network, we employ a transformer encoder as the CMHA module in USEF-SepFormer. Subsequently, E_spk is combined with E_m through the fusion module. In USEF-SepFormer, FiLM is utilized for feature fusion: E_f = FiLM(E_m, E_spk) FiLM(E_m, E_spk) = γ(E_spk) ·E_m + β(E_spk) where E_f ∈ ℝ^B × N × L_1 represents the fused features. The scaling and shifting vectors of FiLM are represented by γ(·) and β(·), respectively. E_f is then input into the separator. USEF-SepFormer utilizes the SepFormer <cit.> backbone network as its separator. SepFormer <cit.> is a masking-based, dual-path time-domain speech separation model. It divides long sequence features into equal-length chunks and performs intra-chunk and inter-chunk operations on these segmented features: E_intra = S_f + IntraTE(S^'_f) E_inter = E_intra + InterTE(E^'_intra) where S_f ∈ ℝ^B × N × K × S denotes the segmentation result of E_f. S^'_f ∈ ℝ^(B × S) × K × N is a transformation of S_f. E_intra, E_inter ∈ ℝ^B × N × K × S are the outputs of the intra-chunk and inter-chunk modules, respectively. E^'_intra ∈ ℝ^(B × K) × S × N is a transformation of E_intra. IntraTE(·) and InterTE(·) are the intra- and inter-transformer encoders, respectively. After the dual-path network, USEF-SepFormer performs an overlap-add operation on E_inter to obtain E_o ∈ ℝ^B × N × L_1. E_o is the estimated mask for the target speaker. Finally, the decoder processes the product of E_m and E_o, yielding the estimated target speaker speech: est = TConv1d(E_m * E_o) where TConv1d(·) denotes the one-dimensional transposed convolution operation. est ∈ ℝ^B × 1 × T_1 denotes the estimation of the target speaker.
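For completeness, the sketch below illustrates the chunking and overlap-add steps of the dual-path separator described above, implemented with torch unfold/fold. The chunk size K = 250 with 50% overlap matches the configuration reported later; the helper functions and the plain summation of overlapping samples are our own simplifications, not the SpeechBrain implementation.

```python
import torch
import torch.nn.functional as F

def segment(x, K):
    """Split (B, N, L) features into 50%-overlapping chunks of length K -> (B, N, K, S)."""
    B, N, L = x.shape
    hop = K // 2
    n_chunks = 1 + max(0, -(-(L - K) // hop))           # ceil((L - K)/hop) + 1, at least 1
    pad = K + (n_chunks - 1) * hop - L                   # pad so the chunks tile exactly
    x = F.pad(x, (0, pad))
    chunks = x.unfold(2, K, hop)                         # (B, N, S, K)
    return chunks.permute(0, 1, 3, 2), L                 # (B, N, K, S), original length

def overlap_add(chunks, L):
    """Inverse of segment: (B, N, K, S) -> (B, N, L); overlapping samples are summed."""
    B, N, K, S = chunks.shape
    hop = K // 2
    out_len = K + (S - 1) * hop
    x = F.fold(chunks.reshape(B, N * K, S), output_size=(1, out_len),
               kernel_size=(1, K), stride=(1, hop))      # (B, N, 1, out_len)
    return x.reshape(B, N, out_len)[..., :L]

x = torch.randn(2, 256, 1000)
chunks, L = segment(x, K=250)
print(chunks.shape, overlap_add(chunks, L).shape)        # (2, 256, 250, 7) and (2, 256, 1000)
```

The intra-chunk transformer operates on the K dimension of the segmented tensor and the inter-chunk transformer on the S dimension, after which the overlap-add restores a sequence of the original length.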
§.§.§ USEF-TFGridNet The diagram of USEF-TFGridNet is shown in Figure <ref>. USEF-TFGridNet uses two weight-sharing 2D convolutions to process the real and imaginary (RI) components of the STFT features for both the mixed speech and the reference speech: E_m = Conv2d(RI_m) E_r = Conv2d(RI_r) where Conv2d(·) refers to 2D convolutions. RI_m ∈ ℝ^B × 2 × F × L_1 and RI_r ∈ ℝ^B × 2 × F × L_2 represent the stacked real and imaginary components of the STFT features for the mixed and reference speech, respectively. F is the number of frequency bins, and L_1 and L_2 are the numbers of time frames in the mixed and reference speech. The encoder results for the mixed and reference speech are E_m ∈ ℝ^B × C × F × L_1 and E_r ∈ ℝ^B × C × F × L_2. Through Equation <ref>, the CMHA module processes E_m and E_r, outputting E_spk ∈ ℝ^B × C × F × L_1. To align with the separator, USEF-TFGridNet uses the attention module from TF-GridNet <cit.> as its CMHA module. Next, we use concatenation as the feature fusion method to combine E_m and E_spk, resulting in the fused feature E_f ∈ ℝ^B × 2C × F × L_1. USEF-TFGridNet employs the TF-GridNet <cit.> backbone network as its separator. TF-GridNet <cit.> is a mapping-based T-F domain speech separation model. The backbone network of TF-GridNet <cit.> is composed of three key modules: the intra-frame full-band module, the sub-band temporal module, and the cross-frame self-attention module. The intra-frame full-band module processes data along the F dimension to capture full-band spectral and spatial information: U_f = Unfold(V(E^'_f)) E_intra = E^'_f + TConv1d(BLSTM(U_f)) where E^'_f ∈ ℝ^B × 2C × F^'× L_1^' represents the zero-padded version of E_f. The reshaping operation, denoted as V(·), reshapes E^'_f into the shape ℝ^(B × L_1^') × F^'× 2C. Here, F^' = ⌈F-ks/hs⌉× hs + ks. The function Unfold(·) corresponds to the torch.unfold function, with U_f ∈ ℝ^(B × L_1^') × (F^'-ks/hs+1) × (2C × ks) as its output. The parameters ks, hs are the kernel size and stride of the torch.unfold function, respectively. TConv1d(·) denotes a one-dimensional transposed convolution with kernel size ks and stride hs. BLSTM(·) denotes the bidirectional long short-term memory (BLSTM) <cit.> layer. The output of the intra-frame full-band module is E_intra ∈ ℝ^B × 2C × F^'× L_1^'. The sub-band temporal module then processes E_intra along the L_1^' dimension in the same manner as the intra-frame full-band module: E_sub = E_intra + V(Sub(E^'_intra)) where E^'_intra ∈ ℝ^(B × F^') × L_1^'× 2C represents a transformation of E_intra. The function Sub(·) denotes the operations within the sub-band temporal module, following the same procedures as in Equations (<ref>) - (<ref>). The output of the sub-band temporal module is E_sub ∈ ℝ^B × 2C × F^'× L_1^'. After removing zero padding, E_sub is passed into the cross-frame self-attention module. This module is the same as the one in TF-GridNet <cit.>. It uses multi-head attention, allowing each T-F unit to attend directly to other relevant units: E_cross = Cross(E_sub[:,:,:F,:L_1]) E_o = E_cross + E_sub[:,:,:F,:L_1] where Cross(·) refers to the operations in the cross-frame self-attention module, which are the same as in TF-GridNet <cit.>. E_cross, E_o ∈ ℝ^B × 2C × F × L_1 are the outputs of the cross-frame self-attention module and the separator, respectively. A 2D transposed convolution layer is then applied to adjust the number of channels in E_o back to 2. Finally, the result of the 2D transposed convolution is converted back to a waveform using iSTFT.
est = iSTFT(TConv2d(E_o)) where TConv2d(·) denotes the 2D transposed convolution operation. est ∈ ℝ^B × 1 × T_1 denotes the estimation of the target speaker. § EXPERIMENTAL SETUP §.§ Datasets §.§.§ Anechoic Speech Mixtures We validate our proposed methods on the WSJ0-2mix dataset <cit.>, a benchmark for speech separation and target speaker extraction in anechoic conditions. WSJ0-2mix is a two-speaker mixed dataset derived from the Wall Street Journal (WSJ0) corpus <cit.>. It consists of three subsets: the training set with 20,000 utterances from 101 speakers, the development set with 5,000 utterances from 101 speakers, and the test set with 3,000 utterances from 18 speakers. The training and development sets feature speakers from WSJ0 “si_tr_s”, while the test set includes speakers from WSJ0 “si_dt_05” and “si_et_05”. The speakers in the training and development sets are different from those in the test set. Utterances from two speakers are randomly selected to create mixed audio with a relative signal-to-noise ratio (SNR) between 0 dB and 5 dB. During training, the reference speech for the target speaker is randomly chosen and updated in each epoch. In the inference stage, we use the same reference speech as SpEx+ [1][1]<https://github.com/gemengtju/SpEx_Plus>. §.§.§ Noisy Speech Mixtures We use the WSJ0 Hipster Ambient Mixtures (WHAM!) dataset <cit.> to evaluate our methods for noisy target speaker extraction. WHAM! is a noisy variant of the WSJ0-2mix dataset, featuring background noise recorded in urban environments such as coffee shops and parks. It preserves the same relative levels between the two speakers as WSJ0-2mix. Noise is added by sampling a random SNR value from a uniform distribution ranging from -6 to +3 dB. During training, the reference speech for the target speaker is randomly selected and updated each epoch. In the inference stage, we use the same reference speech as in the anechoic scenario. §.§.§ Noisy-Reverberant Speech Mixtures We utilize the WHAMR! dataset <cit.> to test our algorithms for noisy reverberant target speaker extraction. WHAMR! extends the WSJ0-2mix dataset by reverberating each clean source and incorporating background noise. The reverberation time (T60) in these mixtures ranges from 0.2 to 1.0 seconds. The signal-to-noise ratio (SNR) between the louder speaker and the noise varies from -6 to 3 dB, the relative energy level between the two speakers ranges from -5 to 5 dB, and the speaker-to-array distance spans from 0.66 to 2.0 meters. The dataset includes 20,000 mixtures for training, 5,000 mixtures for development, and 3,000 mixtures for testing. During training, the direct-path signal from each speaker, recorded by the first microphone, serves as the target. This signal is also used as the reference for metric evaluation. The target speaker’s reference speech is randomly selected and updated each epoch during training. We use the same reference speech as in the anechoic scenario for inference. §.§ Network Configuration §.§.§ USEF-SepFormer We configure USEF-SepFormer based on the parameter settings of SepFormer from SpeechBrain <cit.>. The encoder employs a 1D convolutional layer with a kernel size of 16 and a stride of 8. It has an input dimension of 1 and an output dimension of 256. The transformer encoder in the CMHA module consists of 4 layers, 8 parallel attention heads, and a 1024-dimensional feed-forward network. FiLM is implemented with two single-layer linear layers for scaling and shifting, with an input and output dimension of 256. 
The segmentation stage divides the input into chunks of size K = 250. For the SepFormer backbone network, the intra-transformer encoder and inter-transformer encoder each consist of 8 layers, 8 parallel attention heads, and a 1024-dimensional feed-forward network. The dual-path processing pipeline is repeated twice. The decoder uses a 1D transposed convolutional layer with the same kernel size and stride as the encoder. The hyperparameters for USEF-SepFormer are summarized in Table <ref>. §.§.§ USEF-TFGridNet We configure USEF-TFGridNet based on the parameter settings of TF-GridNet from ESPNet <cit.>. For STFT, the window length is 16 ms, and the hop length is 8 ms. A 128-point Fourier transform is used to extract 65-dimensional complex STFT features at each frame. The 2D convolution layer has a kernel size of (3,3) and a stride of 1, with input and output dimensions of 2 and 128, respectively. The attention block in the CMHA module consists of 1 layer, 4 parallel attention heads, and a 512-dimensional feed-forward network. The kernel size and stride of the torch.unfold function are both 1. In the intra-frame full-band and sub-band temporal modules, the BLSTM layer has 256 units. The cross-frame self-attention module uses 1 layer, 4 parallel attention heads, and a 512-dimensional feed-forward network. The number of TF-GridNet blocks is 6. The 2D transposed convolution layer has the same kernel size and stride as the 2D convolution layer, with input and output dimensions of 256 and 2, respectively. The hyperparameters for USEF-TFGridNet are summarized in Table <ref>. §.§ Training Details We trained our models using the Adam optimizer <cit.>, with an initial learning rate set to 1e-4. The learning rate was halved if the validation loss did not improve within 3 epochs. No dynamic mixing or data augmentation was applied during training. The speech clips were truncated to 4 seconds during training, while the full speech clips were evaluated during inference. In the evaluation phase, each speaker in the mixed speech is considered as the target speaker in turn. The models were trained to maximize the scale-invariant signal-to-distortion ratio (SI-SDR) <cit.>, which is defined as follows: s_T = <ŝ,s>s/||s||^2 s_E = ŝ - s_T SI-SDR = 10 log_10(||s_T||^2/||s_E||^2) where ŝ ∈ ℝ^1 × T represents the estimated target speaker speech, while s ∈ ℝ^1 × T represents the clean source speech. <ŝ,s> denotes the inner product of ŝ and s, and ||s||^2 denotes the power of the signal s. §.§ Evaluation Metrics We use SI-SDR or SI-SDR improvement (SI-SDRi) <cit.> and signal-to-distortion ratio (SDR) or SDR improvement (SDRi) <cit.> as objective metrics to assess the accuracy of target speaker extraction. In addition to distortion metrics, we evaluate the estimated target speaker’s speech quality using the perceptual evaluation of speech quality (PESQ) <cit.>. The PESQ values are calculated using the pypesq toolkit (<https://github.com/youngjamespark/python-pypesq>), and the thop toolkit (<https://github.com/Lyken17/pytorch-OpCounter>) is employed to measure the number of model parameters and computational cost. § RESULTS AND DISCUSSIONS We present results from two sets of experiments. The first set is conducted on the WSJ0-2mix dataset, while the second set focuses on the WHAM! and WHAMR! datasets. §.§ Results on WSJ0-2mix §.§.§ Ablation Results of USEF-SepFormer With Different Hyper-Parameters Table <ref> presents the ablation results of the USEF-SepFormer on WSJ0-2mix, using different model hyperparameters.
Critical parameters of the USEF-SepFormer model are detailed in Table <ref>. Comparing row 1 with row 5 (or row 2 with row 6, row 3 with row 7, and row 4 with row 8), it is evident that the USEF-SepFormer model with 4 attention layers in the CMHA module outperforms the model with only 1 attention layer (C_l = 4 vs. C_l = 1). Further, comparing row 5 with row 6 (or rows 7 and 8) shows that the best performance is achieved when the SepFormer backbone module has 8 attention layers and a feed-forward network dimension of 1024 (S_l = 4, S_d = 1024). Finally, comparing rows 6 and 8 (or rows 2 and 4) reveals that with S_l = 4 and S_d = 1024, having 8 attention heads in the CMHA module of USEF-SepFormer yields better performance than having 4 attention heads (SI-SDRi = 19.9 dB vs. SI-SDRi = 19.2 dB). This may be because the CMHA module is consistent with the attention modules used in the separator, which ensures that the frame-level target speaker features and the fused features reside in a highly similar feature space. §.§.§ Ablation Results of USEF-TFGridNet With Different Hyper-Parameters Table <ref> presents the ablation results of the USEF-TFGridNet on WSJ0-2mix using different model hyper-parameters. The key parameters for the USEF-TFGridNet model are detailed in Table <ref>. In USEF-TFGridNet, the self-attention parameters in the separator are consistent with those in TF-GridNet (S_l = 1, S_h = 4, S_d = 512). Comparing rows 1, 3, and 5, we observe that when the Unfold kernel size is set to 1 (K = 1), increasing the number of attention heads (C_h) or the dimension of the feed-forward network (FFN) (C_d) in the CMHA module does not enhance the model’s performance. Additionally, comparing rows 1 and 2, as well as rows 5 and 6, shows that when the FFN dimension is 512 (C_d = 512), using Unfold with a kernel size of 4 (K = 4) leads to decreased performance in the USEF-TFGridNet. Like USEF-SepFormer, USEF-TFGridNet achieves the best results (SI-SDRi = 23.3 dB) when the parameter settings of the CMHA module are consistent with those of the attention modules in the separator. §.§.§ Comparison With Previous Models Table <ref> compares the performance of our proposed methods with previous speech separation (SS) and target speaker extraction (TSE) models on WSJ0-2mix. Our time-domain model, USEF-SepFormer, outperforms the best previous time-domain TSE models using speaker embedding, such as SpEx+, SpEx_pc, and X-SepFormer, with an SI-SDRi of 19.9 dB compared to 17.4 dB, 19.0 dB, and 19.1 dB, respectively. Additionally, compared to the best time-domain speaker-embedding-free TSE models like SEF-Net and VEVE, USEF-SepFormer achieves state-of-the-art results in both SDRi and SI-SDRi (20.2 dB and 19.9 dB). These results demonstrate that our enhancements to the previous SEF-Net model are practical, significantly improving the performance of time-domain TSE models. Compared to previous SOTA T-F domain TSE models using speaker embedding, such as SMMA-Net and CIENet-mDPTNet, our proposed T-F domain model USEF-TFGridNet demonstrates superior performance, with an SI-SDRi of 23.3 dB versus 20.4 dB and 21.4 dB, respectively. While the separators in SMMA-Net, CIENet-mDPTNet, and USEF-TFGridNet all process features along the channel dimension, SMMA-Net and CIENet-mDPTNet perform cross-attention on the real and imaginary parts of the reference speech’s STFT features and the mixed speech’s STFT features along the frequency dimension.
The resulting features are then concatenated with the mixed speech’s STFT features along the channel dimension before increasing the channel dimension. In contrast, our USEF-TFGridNet model first increases the channel dimension and then applies cross-attention mechanisms directly along the channel dimension. This consistency in the modeling dimensions between the CMHA and separator modules leads to better performance. Overall, our proposed USEF-TFGridNet outperforms other time or T-F domain TSE models. Notably, the backbone networks of X-SepFormer and X-TF-GridNet are identical to those used in our proposed USEF-SepFormer and USEF-TFGridNet. The critical difference lies in the auxiliary features used: X-SepFormer and X-TF-GridNet employ a pre-trained speaker verification model to extract X-vectors as auxiliary features for input into the separator, while USEF-SepFormer and USEF-TFGridNet directly input the acoustic features of the reference speech into the separator. This suggests that, in addition to speaker identity, other aspects of the reference speech, such as contextual information, can also be valuable for accurately extracting the target speaker. We also compared our proposed models with several mainstream speech separation models on the WSJ0-2mix dataset. The current SOTA performance in the SS task is held by the MossFormer2 model, with an SI-SDRi of 24.1 dB. Previously, the best performance in the target speaker TSE task is achieved by the CIENet-mDPTNet model, with an SI-SDRi of 21.4 dB. Our proposed USEF-TFGridNet has set a new SOTA in the TSE task, achieving an SI-SDRi of 23.2 dB. This represents a 1.8 dB improvement over the CIENet-mDPTNet model, significantly narrowing the performance gap between the TSE and SS tasks from the previous 11.2% to the current 3.3% (from 2.7 dB to 0.8 dB). §.§.§ Comparison of Different Fusion Methods Table <ref> presents the experimental results of USEF-SepFormer and USEF-TFGridNet with different feature fusion methods. Both FiLM and concatenation are effective for feature fusion in the USEF-TSE framework. Notably, FiLM generally outperforms concatenation in USEF-TSE, yielding better results (19.9 dB vs. 19.4 dB and 23.3 dB vs. 23.3 dB). However, in USEF-TFGridNet, while FiLM provides a performance boost, it also introduces additional network structures, slightly increasing the model’s parameters and computational complexity compared to concatenation. §.§.§ Comparison of Distributions of the number of test utterances at various dB ranges Figure <ref> illustrates the distribution of SI-SDRi for 6,000 target speaker predictions (covering all speakers from 3,000 mixed speech samples). Predictions with an SI-SDRi value below 0 are considered poor cases, and fewer poor cases indicate better model performance. Notably, USEF-SepFormer and USEF-TFGridNet show significantly fewer poor cases compared to SEF-Net. Additionally, USEF-SepFormer and USEF-TFGridNet have a higher concentration of predictions in the 20 to 25 dB range than SEF-Net. The majority of USEF-TFGridNet’s predictions exceed 20 dB, with significantly more predictions above 25 dB compared to USEF-SepFormer and SEF-Net, indicating that USEF-TFGridNet achieves higher-quality target speaker estimation. §.§.§ Comparison of Different and Same Gender Table <ref> reports the speaker extraction performance of our proposed methods on both different-gender and same-gender mixed speech. To facilitate comparison with other TSE models, we report the experimental results for only the first speaker. 
It is well known that separating same-gender mixed speech is more challenging than separating different-gender speech, resulting in lower SDR and PESQ scores in same-gender scenarios. As shown in the table, both USEF-SepFormer and USEF-TFGridNet outperform SEF-Net in both scenarios. USEF-SepFormer demonstrates a relative improvement over SEF-Net of 22.9% in SDR and 11.2% in PESQ for different-gender scenarios, with an even larger relative improvement of 30.3% in SDR and 11.5% in PESQ for same-gender scenarios. Additionally, USEF-TFGridNet notably reduces the SDR gap between same-gender and different-gender scenarios from 1.47 dB to 0.42 dB. This suggests that our proposed model is more effective and more consistent in handling same-gender mixed speech. §.§ Results on WHAM! and WHAMR! Table <ref> compares the SDRi and SI-SDRi of our proposed methods with previous SS and TSE models on WHAM! and WHAMR!. USEF-SepFormer outperforms the previous best time-domain TSE model, SpEx++, on WHAM! (15.1 dB vs. 14.3 dB). However, it performs slightly worse on WHAMR! (11.3 dB vs. 11.7 dB). In contrast, T-F domain TSE models such as USEF-TFGridNet and CIENet-mDPTNet significantly outperform time-domain models such as SpEx++ and USEF-SepFormer on both datasets. Moreover, the performance of time-domain TSE models degrades more in reverberant environments than that of T-F domain models. This suggests that T-F domain models may be better suited than time-domain models for target speaker extraction in complex acoustic scenarios. USEF-TFGridNet achieves SOTA performance on both the WHAM! and WHAMR! datasets. It shows a 5.3% relative improvement in SDRi and a 6.0% relative improvement in SI-SDRi over CIENet-mDPTNet on the WHAM! dataset. On the WHAMR! dataset, it demonstrates a 4.2% relative improvement in SDRi and a 2.5% relative improvement in SI-SDRi. We also compared our proposed models with several leading SS models on the WHAM! and WHAMR! datasets. MossFormer2, which has achieved SOTA performance on SS tasks, reports an SI-SDRi of 18.1 dB and 17.0 dB on the WHAM! and WHAMR! datasets, respectively. Previously, the SOTA performance for TSE on these datasets was held by the CIENet-mDPTNet model, with SI-SDRi scores of 16.6 dB and 15.7 dB, respectively. Our proposed USEF-TFGridNet sets new state-of-the-art records for the TSE task, achieving SI-SDRi scores of 17.6 dB and 16.1 dB on the WHAM! and WHAMR! datasets, respectively. Furthermore, USEF-TFGridNet reduces the performance gap between the TSE and SS tasks from 8.3% and 7.6% to 2.8% and 5.3% on the WHAM! and WHAMR! datasets, respectively. § CONCLUSION In this paper, we propose a Universal Speaker Embedding-Free Target Speaker Extraction (USEF-TSE) framework for monaural target speaker extraction. To validate its effectiveness, we integrate USEF-TSE with SepFormer and TF-GridNet to create two models: USEF-SepFormer and USEF-TFGridNet. Experimental results demonstrate that USEF-SepFormer and USEF-TFGridNet outperform other time-domain and time-frequency domain target speaker extraction (TSE) models. Specifically, USEF-TFGridNet achieves state-of-the-art SI-SDRi scores of 23.3 dB on the WSJ0-2mix dataset, 17.6 dB on the WHAM! dataset, and 16.1 dB on the WHAMR! dataset for the TSE task. Furthermore, USEF-TFGridNet significantly narrows the performance gap between TSE and speech separation (SS) tasks, highlighting the robustness and effectiveness of our USEF-TSE framework under noisy and reverberant conditions.
Future work will focus on addressing the limitations of the USEF-TSE framework, such as examining its performance on extremely large-scale speaker datasets, which represents a promising research direction.
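To make the headline metric used throughout the comparisons above concrete, the following is a minimal sketch of how SI-SDR and its improvement over the unprocessed mixture (SI-SDRi) can be computed, together with the plain-ratio arithmetic behind the relative improvements quoted in the text (for example, (17.6 - 16.6)/16.6 is roughly 6.0% on WHAM!). It assumes the standard zero-mean, scale-invariant definition of SI-SDR; the signal shapes and helper names are illustrative and are not taken from the paper's implementation.

import numpy as np

def si_sdr(estimate, target, eps=1e-8):
    """Scale-invariant SDR in dB (zero-mean convention)."""
    estimate = estimate - estimate.mean()
    target = target - target.mean()
    # Project the estimate onto the target to find the optimal scaling factor.
    alpha = np.dot(estimate, target) / (np.dot(target, target) + eps)
    projection = alpha * target
    noise = estimate - projection
    return 10.0 * np.log10((projection @ projection + eps) / (noise @ noise + eps))

def si_sdr_improvement(estimate, target, mixture):
    """SI-SDRi: gain of the estimate over simply taking the mixture as the estimate."""
    return si_sdr(estimate, target) - si_sdr(mixture, target)

rng = np.random.default_rng(0)
target = rng.standard_normal(16000)       # stand-ins for 1 s of audio at 16 kHz
interferer = rng.standard_normal(16000)
mixture = target + interferer
estimate = target + 0.1 * interferer      # a hypothetical TSE output
print(f"SI-SDR  of the estimate: {si_sdr(estimate, target):5.1f} dB")
print(f"SI-SDRi of the estimate: {si_sdr_improvement(estimate, target, mixture):5.1f} dB")
# The relative improvements quoted above are plain ratios of metric values, e.g.
# (17.6 - 16.6) / 16.6 for SI-SDRi on WHAM!:
print(f"relative improvement: {(17.6 - 16.6) / 16.6:.1%}")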
http://arxiv.org/abs/2409.03698v1
20240905165533
Quantum optimal transport with convex regularization
[ "Emanuele Caputo", "Augusto Gerolin", "Nataliia Monina", "Lorenzo Portinale" ]
math-ph
[ "math-ph", "math.MP", "math.OC" ]
Constituent Automorphism Decoding of Reed–Muller Codes Yicheng Qu, Amir Tasbihi, and Frank R. KschischangThe authors are with the Edward S. Rogers Sr. Dept. of Electrical and Computer Engineering, University of Toronto, 10 King's College Road, Toronto, Ontario M5S 3G4, Canada. Email: , , . Submitted for publication on July 19th, 2024. September 9, 2024 =============================================================================================================================================================================================================================================================================================== § ABSTRACT The goal of this paper is to settle the study of non-commutative optimal transport problems with convex regularization, in their static and finite-dimensional formulations. We consider both the balanced and unbalanced problem and show in both cases a duality result, characterizations of minimizers (for the primal) and maximizers (for the dual). An important tool we define is a non-commutative version of the classical (c,ψ)-transforms associated with a general convex regularization, which we employ to prove the convergence of Sinkhorn iterations in the balanced case. Finally, we show the convergence of the unbalanced transport problems towards the balanced one, as well as the convergence of transforms, as the marginal penalization parameters go to +∞. § INTRODUCTION This work provides the first complete study of convex regularized quantum optimal transport problems and their extension to the unbalanced case. More precisely, given two finite-dimensional Hilbert spaces _1, _2, our goal is to prove the existence and characterize the solutions of the convex regularized quantum optimal transport problem = min{[C Γ] + [ φ(Γ)] Γ↦ (ρ,σ) }, where the cost C ∈(_1⊗_2) is a Hermitian matrix over _1⊗_2, and φ:[0,+∞) →ℝ is a proper, convex, superlinear at infinity and bounded from below function. The notation Γ↦(ρ,σ) means that Γ is a density matrix over _1⊗_2 having partial traces equal to density matrices ρ over _1 and σ over _2. The unbalanced (convex-regularized) quantum optimal transport problem is given by 𝔉^,τ_1,τ_2 = min{[C Γ] + [φ(Γ) ] + τ_1 ( P_1(Γ)| ρ)+ τ_2 ( P_2(Γ)| σ) : Γ≥ 0 }, where (Γ|ρ)= [ Γ( logΓ - logρ - Id_) + ρ] denotes the non-commutative relative entropy functional, P_1,P_2 are, respectively, the partial traces in _2 and _1. The notation Γ≥ 0 means that Γ is a semi-definite positive operator. For the precise definitions, see equation (<ref>) and Section <ref>. The main difference between the problems (<ref>) and (<ref>) is that instead of the marginal constraint Γ↦(ρ,σ) in (<ref>), the unbalanced quantum optimal transport introduces a soft-penalization for the partial traces of Γ to be close (in the relative entropy functional) to the density matrices ρ and σ, weighted by some trade-off parameters τ_1,τ_2 ∈ (0,+∞). The main contributions of this paper include * A general duality result, both for the balanced (<ref>) and unbalanced case (<ref>), including the case when φ nor its Legendre's transform are nonsmooth (Theorem <ref>). * The study and characterization of the optimizers, both in (<ref>) and (<ref>), as well as their corresponding dual formulations (Theorem <ref> and <ref>). * The definition and the proof of the convergence of Sinkhorn-type iterations in the balanced case with general convex regularization φ (Theorem <ref>). 
* The convergence, as the soft-penalization parameters τ_i → +∞, of the unbalanced transport problem (<ref>) to the corresponding balanced formulation (<ref>). More precisely, we show that when τ_i→+∞ of (i) the energies converge, i.e. 𝔉^,τ_1,τ_2→; (ii) Any sequence of minimizers (Γ^ε,τ_1,τ_2)_τ_1,τ_2>0 of the unbalanced quantum optimal transport (<ref>) converges to a minimizer Γ^ε of (<ref>); and finally (iii) any sequence of maximizers of the dual formulation of (<ref>) converges to a maximizer in the dual formulation of <ref> (Theorem <ref>). Moreover, if we assume that φ is strictly convex and C^1, we can show that for both primal problems (<ref>) and (<ref>) admit a unique minimizer, which can be written in the explicit form Γ = ψ' [ U ⊕ V - C/ε] , where ψ is the Legendre transform of φ, and (U,V) are maximizers of their associated dual problems (Theorem <ref> and Theorem <ref>). §.§.§ Quantum Optimal Transport Quantum optimal transport extends the classical theory of optimal transport for probability measures to quantum states (e.g., density matrix or operators). Mathematically, challenges arise from the intrinsic non-commutative nature of quantum mechanics, which contrasts with the commutative structures underlying classical probability theory. The pursuit of a non-commutative version of the Monge–Kantorovich optimal transport theory dates back roughly thirty years, with pioneering works by Connes <cit.>, Biane and Voiculescu <cit.>, see also <cit.>. Their approach is primarily centered on adapting the static dual formulation of optimal transport. More recently, several efforts have been made to extend the optimal transport theory for quantum states, starting from either the static (e.g., Kantorovich relaxation, dual) or dynamical formulation (known after Benamou–Brenier). While these formulations are equivalent in the optimal transport theory for probability measures, they can lead to distinct and not necessarily equivalent theories in the quantum setting. In the static framework, quantum optimal transport between quantum states has been first introduced by Caglioti, Golse and Paul <cit.> and De Palma and Trevisan <cit.>. In <cit.>, the quantum optimal transport arose in the context of the study of the semiclassical limit of quantum mechanics. In this setting, the problems correspond to (<ref>) with ε=0, the joint states Γ are defined in _1⊗_2 and the cost operator is given in terms of quadratures C=∑_i=1^2m(R_i⊗𝕀_ℋ_2 - 𝕀_ℋ_1⊗ R_i)^2. In the alternative proposal in <cit.>, the plans instead are states on ℋ⊗ℋ^*, i.e., one acts with a partial transpose operation: the variational problem reads similarly to the previous one, but its properties are rather different and it enjoys operational interpretations thanks to a correspondence between plans and quantum channels. Also, in <cit.>, the authors introduced a quantum optimal transport problem by generalizing the dual formulation of the so-called 𝕎_1 distance specifically for qubits. A dynamical formulation of quantum optimal transport has been introduced by Carlen and Maas <cit.>, which the authors used to study long-behavior and fine properties of the Fermionic Fokker–Planck equations. In <cit.>, Wirth introduced a dynamical formulation for von Neumann entropy-regularized quantum optimal transport within the dynamical framework. In the same year, Monsaingeon and Vorotnikov <cit.> introduced the Fisher–Rao space of matrix-valued non-commutative probability measures, and the related Hellinger space. 
The study of quantum composite systems at positive temperature – equation (<ref>) with von Neumann entropy φ(Γ)=[Γ(logΓ - Id__1⊗_2)] – has been initiated in <cit.>, including the cases when the density matrices are fermionic, important for Electronic Structure Theory <cit.>. The von Neumann entropy regularization is fundamental for implementing quantum optimal transport functionals into efficient computational algorithms. Unbalanced quantum optimal transport was studied in <cit.>, motivated by optimal transport theory for non-negative measures introduced, independently, by Liero, Mielke and Savaré <cit.>, Chizat, Peyré Schmitzer and Vialard <cit.>, and Kondratyev, Monsaingeon and Vorotnikov <cit.>. In <cit.>, the authors considered a particular problem (<ref>) when φ(Γ) is the von Neumann entropy regularization, proved weak duality, Γ-convergence results when either the soft-penalization parameter in (<ref>) τ_1=τ_2→+∞ goes to infinity and the von Neumann entropy-regularization parameter ε→ 0^+ goes to zero. This work addresses general convex entropy regularization problems within both the quantum optimal transport (<ref>) and unbalanced quantum optimal transport (<ref>) frameworks. In the context of optimal transport for probability measures, convex regularization has been studied extensively (e.g., <cit.>). This list of examples include quadratic regularization as well as regularization with the Tsallis entropy. The study of these cases is primarily driven by the goal of designing computational algorithms leveraging the sparse structure of optimal transport problems. A recent survey on the topic has been recently published <cit.>. Methodology: From a technical perspective, the technique and results of <cit.> often rely on the special form of the dual problem associated with the von Neumann regularization (in particular, properties of the exponential map x ↦ e^x), which are not at our disposal at this level of generality. On a second note, even the classical results provided in <cit.> regard C^1 regularization functions, which makes our general result slightly more general even with respect to the classical setting. Our results are based on suitable a priori estimates on the dual functional, which in many cases turn out to be dimension-free. Although our techniques are still finite-dimensional, the final, long-term goal is to develop all the tools and investigate ideas to obtain results in the infinite-dimensional setting. Organization of the paper: Section <ref> presents the setting and the statement of the main result of this manuscript. Section <ref> focus on the proofs for the (balanced) quantum optimal transport problem, the definition of (C,ψ,)-transform and its properties. In the final part, we define the Sinkhorn iterations and study their convergence. In Section <ref>, we prove strong duality in the unbalanced case, we define the (C,ψ,,τ)-transform and study its properties. Section <ref> extends the results of previous sections by removing the assumption that ψ∈ C^1. Section <ref> contains the Γ-convergence results of the unbalanced problem to the balanced one and the convergence of transforms as the size parameters τ converge to +∞. § SETTING AND MAIN RESULTS Given a Hilbert space , we denote by () the vector space of Hermitian operators over . Let us denote by _≥() the set of positive semi-definite Hermitian and by () the set of density matrices, i.e. elements of _≥() with trace equal to 1. We also write _>() to denote the elements A of _≥() such that ker(A)={ 0}. 
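Since everything in this paper is finite-dimensional, the classes just introduced (Hermitian operators, positive semi-definite operators, density matrices, and density matrices with trivial kernel) can be manipulated directly as matrices. The following is a minimal numerical sketch of the corresponding membership checks and of a sampler for faithful, i.e. trivial-kernel, states; the helper names are ours and purely illustrative.

import numpy as np

def is_hermitian(A, tol=1e-10):
    return np.allclose(A, A.conj().T, atol=tol)

def is_positive_semidefinite(A, tol=1e-10):
    # Hermitian with spectrum bounded below by (numerically) zero.
    return is_hermitian(A, tol) and np.linalg.eigvalsh(A).min() >= -tol

def is_density_matrix(A, tol=1e-10):
    return is_positive_semidefinite(A, tol) and abs(np.trace(A).real - 1.0) < tol

def has_trivial_kernel(A, tol=1e-10):
    return np.linalg.eigvalsh(A).min() > tol

def random_faithful_state(d, rng):
    """A density matrix with trivial kernel: G G^* plus a small ridge, normalized."""
    G = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    R = G @ G.conj().T + 1e-3 * np.eye(d)
    return R / np.trace(R).real

rng = np.random.default_rng(1)
rho = random_faithful_state(3, rng)
print(is_hermitian(rho), is_density_matrix(rho), has_trivial_kernel(rho))   # True True True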
Given ρ∈(_1) and Γ∈(_1 ⊗_2), we say that ρ is the first partial trace if [ Γ (U ⊗ Id__2) ]= [ ρ U ] for every U ∈(_1) . In such a case, we write P_1 Γ = __2Γ =ρ. Similarly, given σ∈(_2) and Γ as before, we denote by P_2 Γ = __1Γ =σ the second partial trace defined in an analogous way. If P_1 Γ=ρ and P_2 Γ=σ, we write Γ↦ (ρ, σ). For the sake of convenience, we introduce the following notation: for ∈{1, …, d }, we use λ_i(A) to denote the i-th smallest element of the spectrum of A. In particular, λ_1(A) is the smallest eigenvalue of A. For every continuous function φℝ→ℝ, we define the lifting of φ to the space of Hermitian matrices the operator[With a slight abuse of notation, we denote the lifting of φ also with the same letter φ.] φ() →() given by φ(A):=∑_i=1^d φ(λ_i(A)) |ξ_i⟩⟨ξ_i|, where we used the spectral decomposition A = ∑_i=1^d λ_i(A) |ξ_i⟩⟨ξ_i|. When clear from the context, we may omit the dependence on A and write λ_i instead of λ_i(A). For U ∈(_1) and V ∈(_2), we define U ⊕ V := Id__1⊗ V + U ⊗ Id__2. The unbalanced optimal transport is obtained by replacing the constraint with two penalization terms tuned by parameters τ_1, τ_2>0. The choice of penalization we use in this paper (as typically adopted in the commutative setting as well) is given by the relative entropy with respect to the marginals. Given a finite-dimensional Hilbert space and an operator η∈_≥(), the relative entropy functional (· | η): _≥() → [0, + ∞] is the functional defined as α↦(α | η) := [ α( logα - logη - Id_) + η] if η⊂α , + ∞ otherwise . Here we are using the convention that α( logα - logη) = 0 on η⊂α. The nonnegativity of the relative entropy is a direct consequence of Klein's inequality <cit.> applied to the convex function t ↦ t log t. Finally, it is a (strictly) convex function on its domain, and its Legendre transform is given by ^*(A|η) := [ e^A+logη - η] , A ∈() . This follows from the equality [ e^A+logη - η] = g(A+logη) - [η] , where g(B) := [exp(B)] , together with the fact that, for α∈(), g^*(α) = sup_A ∈(){⟨α, A ⟩ - g(A) } = [ α( logα - Id_) ], α∈_≥(), +∞, otherwise . By convexity we know that g^** = g, hence we get (<ref>). Note that in general ^*(A|η) ≠[ ( e^A - Id_) η], in particular when A and η do not commute. We shall fix φ [0,+∞) → a convex, superlinear at infinity, and bounded from below function, namely φ [0,+∞) → , convex , lim_t → +∞φ(t)/t =+∞ , infφ≥ l > - ∞ , for some l ∈. In particular we assume that φ(0)∈. The following operators are given and fixed throughout the whole paper: * A cost operator C ∈(_1 ⊗_2). * Two `marginal' operators ρ∈_>(_1) and σ∈_>(_2) (in particular, with trivial kernel). Sometimes we may assume them to be density matrices as well. The assumption on the kernel of the marginal is classical and not too restrictive. Similar results can be obtained by decomposing the space with respect to the kernels and their orthogonal spaces, in the very same spirit as in <cit.>. For the sake of simplicity, we won't discuss further details in this work. We also fix >0 a regularization parameter. The main objects of study of our work are primal and dual functionals. [Primal problems] We define a functional _≥(_1 ⊗_2) →ℝ as (Γ): = [C Γ] + [ φ( Γ) ] , ∀Γ∈_≥(_1 ⊗_2) . For every τ = (τ_1, τ_2) ∈ (0,+∞)^2, we define : _≥(_1 ⊗_2) → as the functional (Γ) := (Γ)+ τ_1 ( P_1(Γ)| ρ)+ τ_2 ( P_2(Γ)| σ) , We also define the unbalanced primal problem with parameters τ_1, τ_2>0 as := min{ (Γ) Γ∈_≥(_1 ⊗_2) }. 
Moreover, if ρ∈(_1) and σ∈(_2) we define the balanced primal problem as := min{(Γ) Γ↦ (ρ, σ) } , which (formally) corresponds to the case τ_1=τ_2 = +∞. To each primal problem, we associate the respective dual problem. Throughout the whole paper, we work with a function ψ∈ C() which is convex, superlinear at infinity, and bounded from below, i.e. ψ∈ C() , convex , lim_t → +∞ψ(t)/t = + ∞ , m:= infψ > - ∞ . When dealing with duality results for the non-commutative optimal transportation problems, ψ will typically be the Legendre transform of a φ satisfying (<ref>), namely of the form ψ(t) = sup_x ∈ [0,+∞){ tx - φ(x) } = φ^*(t) , ∀ t ∈ , where when talking about Legendre transform we may implicitly extend the definition of φ on the full real line by setting φ(x) ≡ +∞, for every x < 0. The validity of (<ref>) readily follows in this case from the properties (<ref>) of φ. A direct computation for the relation between the primal and the dual can be seen in Appendix <ref>. [Dual problems] Fix ψ∈ C() satisfying (<ref>). We define the dual functional (_1) ×(_2) → as (U,V) := [ U ρ] + [ V σ] - [ ψ(U⊕ V - C/) ] . For every τ_1, τ_2>0, we also define (_1) ×(_2) → as the map (U,V) ↦ -τ_1 [ e^-U/τ_1+logρ- ρ] - τ_2 [ e^-V/τ_2+logσ- σ] -[ ψ(U⊕ V-C/) ] . We define (respectively) the unbalanced (respectively balanced) dual problem as := sup{ (U,V) (U,V) ∈(_1) ×(_2) } , := sup{ (U,V) (U,V) ∈(_1) ×(_2) } . The main results of our work can be divided in four parts: duality results and analysis of primal/dual problem for the balanced optimal transport problem under suitable regularity assumptions on ψ, the corresponding results for the unbalanced problem, a generalization of the duality for possibly nonsmooth ψ, and finally convergence results for the unbalanced problem towards the balance one was τ→ +∞. §.§.§ Balanced optimal transport The first contribution is a duality result for the balanced quantum optimal transport problem, including existence, (suitable) uniqueness, and characterization of the optimizers, both in the primal and dual problems. [Duality and optimizers in the balanced case] Let φ:[0,+∞) → [0,+∞) satisfy (<ref>), and assume that ψ=φ^* is strictly convex and C^1. Assume additionally that ρ∈(_1) and σ∈(_2). Then the following statements hold: 1) (Duality) The dual and primal problem coincide, namely =. 2) (Existence of maximizers) There exists (U^*,V^*) which maximizes D^, i.e. = D^ (U^*,V^*). Moreover, any other maximizer (Ũ, Ṽ) satisfies (Ũ - U^*, Ṽ - V^*) = (λ Id__1 , - λ Id__2 ), for some λ∈. 3) (Existence of minimizers) There exists a unique maximizer Γ^* ∈(_1 ⊗_2) for the primal problem, i.e. = (Γ^*), and it is given by Γ^* = ψ' [ U^* ⊕ V^* - C/ε] , where (U^*,V^*) are (any) maximizers for the dual problem. In the balanced setting, our second contribution concerns the definition of a Sinkhorn-type iterations and their convergence guarantee. To this end, a crucial tool to consider is the so-called (C,ψ,)-transform (associated with ψ). We assume once again that ψ=φ^* is strictly convex and C^1. We define ℱ_2^(C,ψ,)(_1)→(_2) as ℱ_2^(C,ψ,)(U) = {[ V σ] - [ ψ(U⊕ V - C/) ] V ∈(_2) }. Analogously, we define the corresponding ℱ_1^(C,ψ,)(_2)→(_1) as ℱ_1^(C,ψ,)(V) = {[ U ρ] - [ ψ(U⊕ V - C/) ] U ∈(_1) }. For the well-posedness of the (C,ψ,)-transforms, see Section <ref>. We use this notion to define the following algorithm. Recall that λ_1(A) denotes the first eigenvalue of an operator A. Step 0: fix any initial (U^0,V^0)∈(_1)×(_2). 
Step 2n-1: for n ∈, for given U^n-1∈(_1), we define V^n as V^n := ℱ_2^(C,ψ,)(U^n-1) - λ_1(ℱ_2^(C,ψ,)(U^n-1)) Id__2 . Step 2n: for given V^n ∈(_2), we define U^n as U^n := ℱ_1^(C,ψ,)(ℱ_2^(C,ψ,)(U^n-1)) + λ_1(ℱ_2^(C,ψ,)(U^n-1)) Id__1. Then we have the following convergence result. [Convergence of the Sinkhorn iterations] Under the same assumptions of Theorem <ref>, there exists a limit point (U^*,V^*) of {(U^n,V^n)}_n≥ 0, which is a maximizer of D^. Moreover, we have that Γ^n = ψ' ( U^n⊕ V^n - C/) ∈(_1 ⊗_2) and we have that Γ^n →Γ^* ∈(_1 ⊗_2) as n →∞, where Γ^* = ψ' ( U^* ⊕ V^* - C/) ↦ (ρ,σ) is the unique minimizer for the primal problem . §.§.§ Unbalanced optimal transport Taking advantage of the techniques and results provided in the previous section, we next tackle the same questions in the unbalanced setting. In particular, we have the validity of the following duality result. [Duality and optimizers in the unbalanced case] Let φ:[0,+∞) → [0,+∞] satisfy (<ref>), and assume that ψ=φ^* is strictly convex and C^1. Let τ = (τ_1,τ_2) ∈^2 be trade-off parameters. Then the following holds: 1) (Duality) The dual and primal problem coincide, namely =. 2) (Existence of maximizers) There exists a unique (U^*,τ,V^*,τ) which maximizes D^,τ, i.e. = D^,τ (U^*,τ,V^*,τ). 3) (Existence of minimizers) there exists a unique minimizer Γ^*,τ∈(_1 ⊗_2) for the unbalanced primal problem, i.e. = (Γ^*,τ), and it is given by Γ^*,τ = ψ' [ U^*,τ⊕ V^*,τ - C/ε] , where (U^*,τ,V^*,τ) are the unique maximizers for the unbalanced dual problem. Note that the unbalanced case is characterized by the uniqueness of maximizers in the dual problem, in contrast with the balanced case. §.§.§ General duality theorem The assumptions on ψ are classical: in particular, the strict convexity of ψ ensures the uniqueness (up to the trivial translation) of the maximizers of the dual problem. The smoothness assumption ψ∈ C^1 ensures that φ is strictly convex in its domain, as well as it allows us to write the explicit formula for Γ^* as given in (<ref>). On the other hand, the validity of the duality result does not require such regularity but holds true for general convex, superlinear, and bounded from below ψ, which automatically follows from the general assumptions on φ. [Duality for nonsmooth ψ] Let φ:[0,+∞) → [0,+∞) be a convex, superlinear, and bounded from below. Then for every τ_1, τ_2 ∈ [0,+∞], the primal and dual problem coincide, i.e. =. If ρ∈(_1) and σ∈(_2), the same holds true for the balanced case, i.e. =. It would be interesting to investigate the validity of a formula in the same style as (<ref>) for nonsmooth ψ, possibly involving the subdifferential of ψ. We leave this to future investigations. §.§.§ Convergence results Having at disposal a complete understanding of both the balanced and unbalanced setting, our final contribution is the study of the limit behavior of both primal and dual problems as the trade-off parameters τ_1, τ_2 goes to +∞. In particular, we show that not only do the unbalanced primal and dual functionals Γ-converge to the corresponding ones in the balanced case, but we also provide the convergence of the associated optimizers (up to a suitable renormalization for the dual problem). [From unbalanced to balanced optimal transport: convergence result] Under the same assumptions of Theorem <ref>, we have the following convergence results: * The functional Γ-converge as τ_1,τ_2 →∞ to the functional Γ↦(Γ) if Γ↦ (ρ,σ) +∞ otherwise . Similarly for the dual problem, we have that -D^,τ Γ-converge to -D^. 
* The unique minimizer Γ^*,τ∈_≥(_1 ⊗_2) of converges (up to subsequence) as τ_1,τ_2 → +∞ to a minimizer Γ^* ∈_≥(_1 ⊗_2) of with constraint Γ^* ↦ (ρ,σ). * The unbalanced optimal transport problems converge as τ_1,τ_2 → +∞ to the balanced optimal transport problem, i.e. lim_τ_1,τ_2 → +∞ = = = lim_τ_1,τ_2 → +∞ . * Let (U^*,τ, V^*,τ) ∈(_1) ×(_2) be the unique maximizer of D^,τ. Define the recentered potentials ( Û^*,τ, V̂^*,τ) := ( U^*,τ + λ_1(V^*,τ) , V^*,τ - λ_1(V^*,τ) ) ∈(_1) ×(_2) . Then (Û^*,τ, V̂^*,τ) converges (up to subsequence) as τ_1,τ_2 → +∞ to a maximizer (U^*,V^*) ∈(_1) ×(_2) of D^ which satisfies λ_1(V^*)=0. Assume additionally that ψ∈ C^1 and strictly convex. Then the convergence of the minimizers and maximizers in 2. and 4. are true without taking subsequences, and Γ^*,τ (resp. (Û^*,τ,V̂^*,τ)) converge to the unique minimizer Γ^* (resp. unique maximizer (U^*,V^*) with λ_1(V^*)=0). Finally, the final convergence result we obtain is about the convergence of the (C,ψ,, τ)-transforms associated with the dual functional of the unbalanced optimal transport problem. The corresponding transform in the unbalanced case is given by the map ℱ_2^(C,ψ,,τ)(_1)→(_2) such that ℱ_2^(C,ψ,,τ)(U) = _V ∈(_2){ - τ_2 [e^-V/τ_2+logσ] - [ψ(U⊕ V -C/)] }. Analogously, we define ℱ_1^(C,ψ,,τ)(_2)→(_1) as ℱ_1^(C,ψ,,τ)(V) = _U ∈(_1){ - τ_1 [e^-U/τ_1+logρ] - [ψ(U⊕ V -C/)] }. [Convergence of C-transforms] Assume that ψ is strictly convex. Then we have that the associated (C,ψ,,τ)-transforms converge, as the parameter τ→ +∞. More precisely, one has that, for i=1,2, ℱ_i^(C,ψ,,τ)(W_τ) →ℱ_i^(C,ψ,)(W_∞) ∀ W_τ→ W_∞ . A similar result could be proved as well without assuming that ψ is strictly convex. Of course, in this case, the definition of (C,ψ,)-transform is not well-defined, due to the fact that we don't have a unique maximizer in (<ref>), (<ref>). Nonetheless, it not hard to prove that unbalanced C-transforms will converge (up to subsequence) to a maximizer in (<ref>) (resp. (<ref>)). § BALANCED NON-COMMUTATIVE OPTIMAL TRANSPORT In this section, we study the non-commutative optimal transport problems, both the primal and the dual, in the balanced case. We begin by analyzing the dual problem, and discussing the existence of maximizers and their properties. We then use them to prove a duality result, namely the equality between the primal and the dual problem. In the last part of this section, we discuss the notion of (C,ψ,)-transform associated with a general regularization ψ. Throughout the whole section, _1, _2 denotes Hilbert spaces of finite dimensions d_1 and d_2, respectively. We fix C ∈(_1 ⊗_2) and consider ρ∈(_1), σ∈(_2) with trivial kernel. Recall the definition of the dual functional (U,V)=[ U ρ] + [ V σ]-[ ψ(U⊕ V - C/) ], where ψ∈ C() is a convex, superlinear function at infinity, and bounded from below, i.e. lim_t → +∞ψ(t)/t = + ∞ m:= infψ > - ∞ . A crucial property that we use later in this section is that if ψ is superlinear, then the function ∋ x ↦ x - ψ(x) has superlevels which are bounded from above, namely for every α∈, { x ∈ x - ψ(x) ≥α}⊂{ x ∈ x ≤ R_α} , for some R_α∈ (0,+∞). This directly follows from the fact that for every R>0, { x ∈ x - ψ(x) ≥α}∩{ x ∈ x > R }⊂{ x ∈ψ(x)/x≤max{ 1 - α/R , 1 }} , and the fact that the set on the right-hand side is bounded by the definition of superlinearity. Note that the dual functional in the balanced case enjoys special symmetry properties. In particular, for every (U,V) ∈(_1) ×(_2) we have that (U + λ Id__1 , V - λ Id__2 )= (U, V) for every λ∈ℝ. 
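This translation invariance (adding a multiple of the identity to U and subtracting the same multiple from V leaves the value of the balanced dual functional unchanged) is easy to confirm numerically. The sketch below is a minimal illustration, restricted to real symmetric matrices for simplicity and using the entropic-type choice ψ(t) = e^t, which satisfies the standing assumptions (convex, superlinear, bounded from below); these concrete choices and all function names are ours. It lifts ψ to matrices through the spectral decomposition, evaluates the dual functional, and checks that the shifted pair gives the same value.

import numpy as np

def lift(f, A):
    """Lift a scalar function to a real symmetric matrix via its spectral decomposition."""
    w, Q = np.linalg.eigh(A)
    return (Q * f(w)) @ Q.T

def oplus(U, V):
    """U (+) V = U tensor Id + Id tensor V on the tensor product space."""
    d1, d2 = U.shape[0], V.shape[0]
    return np.kron(U, np.eye(d2)) + np.kron(np.eye(d1), V)

def dual_balanced(U, V, C, rho, sigma, eps, psi=np.exp):
    return (np.trace(U @ rho) + np.trace(V @ sigma)
            - eps * np.trace(lift(psi, (oplus(U, V) - C) / eps)))

def random_sym(d, rng):
    A = rng.standard_normal((d, d))
    return (A + A.T) / 2

def random_state(d, rng):
    A = random_sym(d, rng)
    R = A @ A.T + 1e-2 * np.eye(d)
    return R / np.trace(R)          # a density matrix with trivial kernel

rng = np.random.default_rng(0)
d1, d2, eps = 2, 3, 0.5
rho, sigma = random_state(d1, rng), random_state(d2, rng)
C = random_sym(d1 * d2, rng)
U, V = random_sym(d1, rng), random_sym(d2, rng)
lam = 1.7
value = dual_balanced(U, V, C, rho, sigma, eps)
shifted = dual_balanced(U + lam * np.eye(d1), V - lam * np.eye(d2), C, rho, sigma, eps)
print(np.isclose(value, shifted))   # the shift (U + lam*Id, V - lam*Id) does not change the value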
This in particular shows that is not coercive. Nonetheless, as shown at the end of this section, this is the unique invariance of the problem, see Remark <ref>. §.§ Existence of maximizers in the balanced case First of all, we observe that the maximization problem sup{ (U,V) U∈(_1), V∈(_2) }, is equivalent, due to the symmetry argument in Remark <ref>, to the restricted maximization sup{ (U,V) U∈(_1), V∈(_2), λ_1(V)=0 }, where recall that λ_1(U) denotes the smallest eigenvalue of U. The next result shows that once restricted to the class of operators (U,V) with λ_1(V) = 0, the dual functional is coercive. For every U∈(_1), V∈(_2) such that λ_1(V)=0 D^ (U,V)≥ M >-∞ for some M∈, there exists a constant 0≤𝒜=𝒜(M) <∞ such that ||U||_∞,||V||_∞≤. For any given U∈(_1), V∈(_2), we denote by S:= U ⊕ V ∈(_1 ⊗_2) and write its spectral decomposition as S= ∑_k=1^d λ_k(S) |ξ_k ⟩⟨ξ_k | where d= d_1 d_2. Note that every eigenvalue of S is given by the sum of an eigenvalue of U and one of V, hence to show that the ||U||_∞, ||V||_∞ norms are bounded it suffices to provide an upper and a lower bound the spectrum of S, since λ_1(S) = λ_1(U) ≤λ_d_1(U) and λ_d(S) = λ_d_1(U)+λ_d_2(V) ≥max{λ_d_1(U) , λ_d_2(V) + λ_1(S) } . Writing the dual functional in terms of S, one can apply the Peierl's inequality (e.g., Theorem 2.9 in <cit.>) for the base {|ξ_k⟩}_k=1^d and obtain M ≤ (U,V) = [S (ρ⊗σ)] - [ψ(S-C)] ≤∑_k=1^d( ⟨ξ_k| S(ρ⊗σ) |ξ_k⟩ - ψ( ⟨ξ_k|S - C|ξ_k⟩) ) = ∑_k=1^d( λ_k(S) ⟨ξ_k|ρ⊗σ|ξ_k⟩ - ψ( λ_k(S) - ⟨ξ_k| C |ξ_k⟩) ) = ∑_k=1^d( Λ_k ω_k - ψ( Λ_k - C_k) ), where for simplicity we set Λ_k:= λ_k(S), ω_k = ⟨ξ_k|ρ⊗σ|ξ_k⟩, and C_k=⟨ξ_k| C |ξ_k⟩. On the one hand, using that ∑_k=1^dω_k=1, Λ_1 ≤Λ_d, as well as ω_1 ≥λ_1(ρ⊗σ), we have that M ≤Λ_1 ω_1 + ∑_k=2^d Λ_k ω_k - ∑_k=1^d ψ( Λ_k - C_k) ≤Λ_1 ω_1 + Λ_d ( 1-ω_1) - d m ≤Λ_1 λ_1(ρ⊗σ ) + Λ_d (1-λ_1(ρ⊗σ )) - d m , which yields the lower bound Λ_1 ≥λ_1(ρ⊗σ )^-1( M+ d m - Λ_d (1-λ_1(ρ⊗σ )) ), where we used that λ_1(ρ⊗σ ) > 0 due to the fact that ρ and σ have trivial kernel. Note that λ_1(ρ⊗σ) ∈ (0,1], therefore we are left to show that Λ_d is bounded from above, which follows by superlinearity of ψ. Indeed, from (<ref>) we see that M ≤Λ_d - ψ( Λ_d - C_d) - ∑_k=1^d-1ψ( Λ_k - C_k) ≤( Λ_d - C_d - ψ( Λ_d - C_d) ) + C _∞ - m (d-1) , which yields Λ_d - C_d - ψ( Λ_d - C_d ) ≥M+ m (d_1 d_2 -1) - ||C||_∞ . The conclusion the directly follows from the superlinearity of ψ (see in particular Remark <ref>) and the fact that |C_d| ≤ C _∞. Notice that without loss of generality, due to the symmetry of the functional , the condition λ_1(V)=0 can be replaced with λ_1(U)=0. In fact, the claimed compactness works with any other constraint of the form λ_1(U)∈ B (or V instead of U), for a given bounded set B ⊂. Following the proof of the latter Proposition, one can see that depends on C _∞, and the dimensions d_1, d_2 of the underlying Hilbert spaces. Nonetheless, it is clear from the proof that the second dependence disappears whenever ψ≥ 0, as well as in the limit as → 0, which suggests the possibility of (partially) extending these results to infinite dimensional setups. We may now proceed to show the existence of the maximizer for the dual functional (<ref>). [Existence of the maximizer] Let ε>0, _1, _2 be finite-dimensional Hilbert spaces. Let ρ∈(_1) and σ∈(_2) with trivial kernel and let ψ:→ be a superlinear convex function bounded from below. Then the dual functional D^ defined in (<ref>) admits a maximizer (U^*,V^*)∈(_1)×(_2). The proof follows from the direct method. 
Indeed, take a maximizing sequence {(U^n,V^n)}_n≥ 1⊂(_1)×(_2) such that sup = lim_n →∞ (U^n,V^n) . Thanks to the observation in (<ref>), we can assume λ_1(V^n) = 0. In particular, we then have {(U^n,V^n)}_n≥ 1⊂{(U,V)∈(_1)×(_2) λ_1(V)=0, (U,V) ≥ M }, for some M ∈. Due to Proposition <ref>, we may conclude that the sequence {(U^n,V^n)}_n≥ 1 is bounded, and we extract a converging subsequence (U^n_k,V^n_k) (U^*,V^*)∈(_1)×(_2). By the continuity of (<ref>), taking the limit as k →∞ we obtain that (U^*,V^*) = sup, hence a maximizer. §.§ properties of maximizers, and duality Recall the setting introduced at the beginning of Section <ref>. We now introduce the notion of the (C,ψ,)-transform, in analogy with the regularization given by the Shannon entropy in the quantum case in <cit.>. [(C,ψ,)-transform] Assume additionally that ψ is strictly convex. We define ℱ_2^(C,ψ,)(_1)→(_2) as ℱ_2^(C,ψ,)(U) = {[ V σ] - [ ψ(U⊕ V - C/) ] V ∈(_2) }. Analogously, we define the corresponding ℱ_1^(C,ψ,)(_2)→(_1) as ℱ_1^(C,ψ,)(V) = {[ U ρ] - [ ψ(U⊕ V - C/) ] U ∈(_1) }. The definition of the transform is well-posed, as the contains exactly one element. Indeed, due to Remark <ref>, we may provide the proof only for ℱ_1^(C,ψ,). For every given V∈(_2) we have that _U∈(_1){[ U ρ] - [ ψ(U⊕ V - C/) ] } = _U∈(_1){ (U,V) } = _U∈(_1){ (U + λ_1(V) Id__1,V - λ_1(V) Id__2) }. In particular, as we are looking for a maximizer, we can restrict the maximization set to { U∈(_1) (U + λ_1(V) Id__̋1,V - λ_1(V) Id__2)≥ M } , for some M ∈. Under this condition, we have that (U + λ_1(V) Id__̋1,V - λ_1(V) Id__2) satisfies the assumption of the Proposition <ref>, and therefore the set (<ref>) is bounded. Arguing as in Theorem <ref>, one can show that a maximizer in (<ref>) exists. Uniqueness follows from the strict concavity of , as a consequence of (<ref>). Under the standing assumptions of this section, let additional suppose that ψ is strictly convex and C^1. Then, for every U ∈_1, its transform ℱ_2^(C,ψ,)(U) is the unique solution V of the equation σ =_2 [ ψ'( U ⊕V - C/)]. Analogously, the characterization holds for ℱ_1^(C,ψ,)(V), replacing σ with ρ and _2 with _1. If ψ is only C^1 and convex (not necessarily strictly), then transform ℱ_2^(C,ψ,)(U) is not uniquely determined, but any (possibly not unique) maximizer of (<ref>) satisfies the same optimality condition (<ref>). Set V:= ℱ_2^(C,ψ,)(U) (or in general any maximizer for (<ref>)). For every δ∈ℝ and A ∈(_2), we define the map f(δ):=[ ψ(U ⊕ (V+δ A)-C/) ] - [ (V+δ A) σ] . By optimality, we know that f'(0)=0, and by Lemma <ref> this translated into [ ψ' (U ⊕ V-C/)( Id⊗ A) ] - [ A σ] = 0 , ∀ A ∈(_2) , which provides the sought characterization, by the very definition of partial trace _2. The second statement follows by arguing similarly. Uniqueness comes directly from the strict concavity of the function V ↦ (U,V) whenever ψ is strictly convex. [Quadratic regularization] A typical choice of regularization which differs from the usual von Neumann entropy is given by the quadratic case φ(t)=1/2 t^2, t ≥ 0. See for example <cit.> for the study of the regularized optimal transport problem with such ψ, including numerical methods to seek for maximizers. They are based on the fact that in this case, the associated Legendre transform is given by ψ(t) = 1/2 (t_+)^2. The optimality conditions therefore read as σ = _2 [ ( U ⊕ℱ_2^(C,ψ,)(U) - C/)_+ ] . Note this is an example of ψ∈ C^1 which is not strictly convex. 
In particular, ψ' is not globally invertible and an explicit solution cannot be found, and the maximizers of the dual problem are not necessarily determined by λ_1(U) (cfr. <cit.> for the commutative setting). [Equivalent characterizations for maximizers] Under the standing assumptions of this section, let (U^*, V^*) ∈(_1) ×(_2). Assume that ψ is strictly convex and C^1. Then the following are equivalent: 1) (Maximizers) (U^*,V^*) maximizes D^, i.e. = D^ (U^*,V^*). 2) (Maximality condition) U^* = ℱ_1^(C,ψ,)(V^*) and V^* = ℱ_2^(C,ψ,)(U^*). 3) (Complementary slackness) Let Γ^*:=ψ' [ U^* ⊕ V^* - C/ε], then Γ^* ↦ (ρ,σ). 4) (Duality) There exists Γ^* ∈(_1 ⊗_2) so that Γ^* ↦ (ρ, σ) and (Γ^*)= D^ (U^*,V^*). If one (and thus all) condition holds, Γ^*, as defined in 3) is the unique minimizer to . As an immediate corollary of Theorem <ref> and Theorem <ref>, we obtain the sought duality. [Duality for balanced QOT] Under the standing assumptions of this section, assume that ψ = φ^* is strictly convex and C^1. Then =. We shall show 1) ⇒ 2) ⇒ 3) ⇒ 4) ⇒ 1). 1) ⇒ 2). By definition of ℱ_1^(C,ψ,)(V^*), we have that (ℱ_1^(C,ψ,)(V^*),V^*) ≥ (U^*,V^*) =. By uniqueness of maximizer of (·,V^*), we have that U^*=ℱ_1^(C,ψ,)(V^*). A similar argument proves the second part of the statement. 2) ⇒ 3). This holds by Proposition <ref>. 3) ⇒ 4). Denote by simplicity W^*:=(U^* ⊕ V^*- C)/ε, and compute (U^*,V^*) = [ U^* ρ]+[V^* σ] -ε[ψ(U^*⊕ V^*-C/ε) ] 3)=[ (U^*⊕ V^*) Γ^*] -ε[ψ(U^*⊕ V^*-C/ε) ] = ε[ W^* ψ'(W^*)-ψ(W^*)]+[C Γ^* ] , As a consequence of (<ref>), we have that [ W^* ψ'(W^*)-ψ(W^*) ] = [ φ( ψ'(W^*) ) ]= [ φ(Γ^*)]. Therefore, we continue the computation in (<ref>) using (<ref>) and we have (U^*,V^*) =[C Γ^* ] +ε[ φ(Γ^*)] = (Γ^*). This proves 4). 4) ⇒ 1). Since (Γ^*) = (U^*,V^*)≤ by Remark <ref>, we have that Γ^* is a minimizer for the primal problem, hence (U^*,V^*)=. Moreover, once again by Remark <ref> it holds ≥ (U,V) for every (U,V) ∈(_1) ×(_2), hence we also deduce that (U^*,V^*) ≥ (U,V) for every (U,V) ∈(_1) ×(_2), which precisely shows that (U^*,V^*) is a maximizer. As we have seen in Remark <ref>, the dual functional is invariant by translations that sum up to zero. Whenever ψ is strictly convex and C^1, we claim that every pair of maximizers can be obtained by translation of one another. More precisely, given (U_1,V_1) , (U_2,V_2) ∈(_1) ×(_2) two pairs of maximizers, then there must exist λ∈ so that (U_1,V_1) = (U_2+ λ Id__1, V_2 - λ Id__1). To see this, we first observe that ψ being smooth implies that φ is strictly convex on the interior of its domain, see e.g. <cit.>. In particular, the primal functional admits a unique minimizer, hence thanks to Theorem <ref>, we deduce that ψ' [ U_1 ⊕ V_1 - C/ε] = ψ' [ U_2 ⊕ V_2 - C/ε] . By strict convexity, we know that ψ' is injective, hence we infer that U_1 ⊕ V_1 - C/ε = U_2 ⊕ V_2 - C/ε ⇒ U_1 ⊕ V_1 = U_2 ⊕ V_2 . For simplicity, for λ∈ we here adopt the shorthand notation λ := λ Id__i. By computing the marginals from (<ref>), we see that U_1 d_2 + d_1(V_1) = U_2 d_2 + d_1 (V_2) , (U_1) d_2 + d_1 V_1 = (U_2) d_2 + d_1 V_2 , ⇒ U_1 - U_2 = d_1/d_2( (V_2) - (V_1) ) =: λ_1 , V_1 - V_2 = d_2/d_1( (U_1) - (U_2) ) =: λ_2 . The fact that λ_1 = -λ_2 clearly follows by using these relations together with (<ref>), due to the fact that (λ_1 Id__1) ⊕ (λ_2 Id__2 ) = ( λ_1 + λ_2 ) Id__1 ⊗_2. 
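To make the (C,ψ,ε)-transform and its first-order characterization concrete, here is a small numerical sketch for the quadratic regularization example, ψ(t) = 1/2 (t_+)^2. Restricting to real symmetric matrices for simplicity, it computes the transform of a fixed U by maximizing the concave map V -> Tr[V sigma] - eps Tr[psi((U (+) V - C)/eps)] with a generic off-the-shelf optimizer, and then checks that the sigma-marginal of psi'((U (+) V - C)/eps) reproduces sigma, as the optimality condition above states; with both potentials optimized, both marginal constraints would hold, which is the complementary slackness statement of the theorem. Alternating this transform with the analogous one in U is precisely the Sinkhorn-type scheme studied in the next subsection. This is a sketch under our own illustrative choices, not an implementation proposed in the paper.

import numpy as np
from scipy.optimize import minimize

d1, d2, eps = 2, 2, 0.5               # small dimensions and an illustrative regularization strength
rng = np.random.default_rng(0)

def sym(A):
    return (A + A.T) / 2

def lift(f, A):
    """Spectral lifting of a scalar function to a real symmetric matrix."""
    w, Q = np.linalg.eigh(A)
    return (Q * f(w)) @ Q.T

def oplus(U, V):
    return np.kron(U, np.eye(d2)) + np.kron(np.eye(d1), V)

def marginal_on_H2(M):
    """Partial trace over the first factor, giving the marginal on the second factor."""
    return np.einsum('ijil->jl', M.reshape(d1, d2, d1, d2))

def random_state(d):
    A = sym(rng.standard_normal((d, d)))
    R = A @ A.T + 1e-2 * np.eye(d)
    return R / np.trace(R)

psi = lambda t: 0.5 * np.maximum(t, 0.0) ** 2     # Legendre transform of phi(t) = t^2 / 2
dpsi = lambda t: np.maximum(t, 0.0)               # psi'

sigma = random_state(d2)
C = sym(rng.standard_normal((d1 * d2, d1 * d2)))
U = sym(rng.standard_normal((d1, d1)))            # an arbitrary fixed potential on the first factor

iu = np.triu_indices(d2)
def vec_to_sym(x):
    V = np.zeros((d2, d2))
    V[iu] = x
    return V + V.T - np.diag(np.diag(V))

def neg_objective(x):
    # -(Tr[V sigma] - eps Tr[psi((U (+) V - C)/eps)]): concave in V, so minimize the negative.
    V = vec_to_sym(x)
    W = (oplus(U, V) - C) / eps
    return -(np.trace(V @ sigma) - eps * np.trace(lift(psi, W)))

res = minimize(neg_objective, np.zeros(len(iu[0])))
V_star = vec_to_sym(res.x)                        # numerical (C, psi, eps)-transform of U
Gamma = lift(dpsi, (oplus(U, V_star) - C) / eps)  # candidate plan psi'((U (+) V - C)/eps)
print("residual of the sigma-marginal condition:", np.abs(marginal_on_H2(Gamma) - sigma).max())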
§.§ Sinkhorn iterations and their convergence In this section, we define the Sinkhorn iterations associated with a general regularization ψ and show it provides a maximizing sequence for the dual functional defined in (<ref>). Thus, we prove the qualitative convergence of the iteration. Under the standing assumptions of the section, assume additionally that ψ is strictly convex and C^1. Let ℱ_1^(C,ψ,):(_2)→(_1) and ℱ_2^(C,ψ,):(_1)→(_2) be defined as in (<ref>) and (<ref>), respectively. We define the following algorithm: for n ∈, Step 0: fix any initial (U^0,V^0)∈(_1)×(_2). Step 2n-1: for given U^n-1∈(_1), we define V^n as V^n := ℱ_2^(C,ψ,)(U^n-1) - λ_1(ℱ_2^(C,ψ,)(U^n-1)) Id__2 . Step 2n: we define U^n as U^n := ℱ_1^(C,ψ,)(ℱ_2^(C,ψ,)(U^n-1)) + λ_1(ℱ_2^(C,ψ,)(U^n-1)) Id__1. Then we have the convergence of the iteration. More precisely: (a) There exists a limit point (U^*,V^*) of {(U^n,V^n)}_n≥ 0, which is a maximizer of D^. (b) The operator Γ^n := ψ' ( U^n⊕ V^n - C/) belongs to (_1 ⊗_2), and moreover we have that Γ^n →Γ^* ∈(_1 ⊗_2) as n →∞, where Γ^* = ψ' ( U^* ⊕ V^* - C/) ↦ (ρ,σ) is a minimizer of the primal problem . The iteration is defined in the following way. We first start with an arbitrary initialization (U^0,V^0). Then, we compute iteratively (U^n,V^n) as follows. Denote by Ṽ^n the (C,ψ,ε)-transform of U^n-1. On one hand, V^n is obtained recentering Ṽ^n so that λ_1(V^n)=0. On the other hand, U^n is obtained by computing the (C,ψ,ε)-transform of Ṽ^n and translating it so that (U^n,V^n) = (ℱ_1^(C,ψ,ε)(Ṽ^n),Ṽ^n). Intuitively, this procedure enjoys good compactness properties by Proposition <ref>, by ensuring that λ_1(V^n)=0, for every n ∈. Finally, the fact that limit points are maximizers for the dual functional is suggested by the properties of the (C,ψ,)-transform. Note indeed that by the optimality conditions (<ref>) _1 [ ψ' ( U^n⊕ V^n - C/) ] = ρ whereas _2 [ ψ' ( U^n⊕Ṽ^n+1 - C/) ] = σ , and compare with 3) in Theorem <ref>. We set M:= (U_0,V_0) ∈. Proof of (a). See that, by construction, for any n≥ 0 we have λ_1(V^n) = 0 and additionally, due to the invariance by translation (Remark (<ref>)), from the definition of ℱ_1^(C,ψ,) and the one of ℱ_2^(C,ψ,), we obtain that (U^n,V^n) = ( ℱ_1^(C,ψ,)(ℱ_2^(C,ψ,)(U^n-1)) , ℱ_2^(C,ψ,)(U^n-1) ) ≥ ( U^n-1 , ℱ_2^(C,ψ,)(U^n-1) ) ≥ ( U^n-1 , V^n-1 ) ≥…≥ ( U^0 , V^0 ) = M. We conclude that the sequence {(U^n,V^n)}_n≥ 0 satisfies the assumption of Proposition <ref>, and therefore is bounded. Thus, there exist an accumulation point (U^*, V^*)∈(_1)×(_2) and a subsequence (U^n_k, V^n_k) (U^*, V^*). Now, using the continuity of : (U^*,V^*) = lim sup_k→∞ (U^n_k, V^n_k) ≥lim sup_k→∞ (U^n_k-1+1, V^n_k-1+1) = lim sup_k→∞ ( ℱ_1^(C,ψ,)(ℱ_2^(C,ψ,)(U^n_k-1)) , ℱ_2^(C,ψ,)(U^n_k-1) ) ≥ lim sup_k→∞ ( U^n_k-1 , ℱ_2^(C,ψ,)(U^n_k-1) ) ≥ lim sup_k→∞ ( U^n_k-1 , ℱ_2^(C,ψ,)(U^*) ) = ( U^* , ℱ_2^(C,ψ,)(U^*) ), where in the last inequality, we used once again that ℱ_2^(C,ψ,)(U^n_k-1) is a maximizer of V ↦ (U^n_k-1,V). This, and a similar computation for U^*, implies that V^* = ℱ_2^(C,ψ,)(U^*), and U^* = ℱ_1^(C,ψ,)(V^*), hence (U^*, V^*) is a maximizer due to Theorem  <ref>. In addition, one observes that λ_1(V^*)= 0, as if A_k → A then λ_1(A_k) →λ_1(A) as k →∞. Let (U^n_j, V^n_j) be any converging subsequence of (U^n, V^n) and denote by (Ũ, Ṽ) their limit point. For the first part of the proof, we obtain that (<ref>) also holds for (Ũ, Ṽ), thus it is also a maximizer. On the other hand, due to Remark <ref>, we must then have (Ũ,Ṽ) = (U^*+λId__1, V^*-λId__2) for some λ∈. 
However, λ_1(Ṽ)=0 as well, and thus λ=0 or (Ũ,Ṽ) = (U^*, V^*). Proof of (b). It follows by the optimality conditions that _1 [ ψ' ( U^n⊕ V^n - C/) ] = ρ, therefore, by taking the trace at both sides we get that [Γ^n]=1, thus proving Γ^n ∈(_1 ⊗_2). The convergence Γ^n →Γ follows from (a) and the fact that ψ' is continuous. § UNBALANCED NON-COMMUTATIVE OPTIMAL TRANSPORT In this section, we turn our attention to the non-commutative optimal transport problems, both the primal and the dual, in the unbalanced case. We begin by analyzing the dual problem, discussing the existence of maximizers. We work under the same standing assumptions of Section <ref>, namely throughout the whole section, _1, _2 denotes Hilbert spaces of finite dimensions d_1 and d_2, respectively. We fix C ∈(_1 ⊗_2) and consider ρ∈_>(_1), σ∈_>(_2) (in particular with trivial kernel). §.§ Existence of the maximizers to the unbalanced dual problem In this section, we prove the existence of maximizers in the dual problem for the unbalanced case for a general regularization. We recall the definition of the dual functional (U,V) = -τ_1 [ e^-U/τ_1+logρ- ρ] - τ_2 [ e^-V/τ_2+logσ- σ] -[ ψ(U⊕ V-C/) ] . In order to show the existence of maximizers, we show directly that the functional is coercive, differently from what happens in the balance case with (due to the presence of the symmetries, cfr. Remark <ref>). Nonetheless, we significantly simplify the proof by observing that the unbalanced dual functional is in fact always bounded from above by the balanced dual functional, as we show in the proof below. [Existence of the maximizer, unbalanced] Let _1, _2 be finite-dimensional Hilbert spaces. Let ρ∈_>(_1) and σ∈_>(_2) with trivial kernel, ε, τ_1, τ_2 >0 and let ψ:→ be a superlinear convex function bounded from below. Then the unbalanced dual functional D^,τ defined in (<ref>) admits a (unique) maximizer (U^*,V^*)∈(_1)×(_2). First of all, we observe that the functional is continuous over (_1) ×(_2), arguing in the same way as in (<ref>). Therefore, the existence of a maximizer follows by showing that is coercive, i.e. it has bounded superlevel sets. Moreover, without loss of generality, we can assume that ψ≥ 0. We introduce the following notation: for (U,V) ∈(_1) ×(_2), we set F_1(U) = τ_1 [e^-U/τ_1+logρ] ≥ 0 , F_2(V) = τ_2 [e^-V/τ_2+logσ]≥ 0 , G(U,V) = [ψ(U⊕ V -C/)] ≥ 0 , so that by construction, for every (U,V) ∈(_1) ×(_2) one has (U,V) = -F_1(U) - F_2(V) - G(U,V) + τ_1 [ρ] + τ_2 [σ] . We start by claiming that (U,V) ≥ (U,V) , for all (U,V) ∈(_1) ×(_2) , τ_1, τ_2 ∈_+ . This is consequence of Klein's inequality <cit.> applied to the exponential function, which implies (for F_1, and similarly for F_2) F_1(U) - τ_1 [ρ] = τ_1 [ e^-U/τ_1+logρ- e^logρ] ≥τ_1 [ e^logρ( - U/τ_1) ] = - [ U ρ ] , thus providing the claimed lower bound (<ref>). Now, we want to show that, given M_0 ∈ℝ, the set 𝒮_M_0:= { (U,V) ∈(_1) ×(_2): (U,V) ≥ - M_0 } is bounded. In particular, up to a translation that depends only on the marginals, there exists M∈ so that 𝒮_M_0 is a subset of { (U,V) ∈(_1) ×(_2): F_1(U)+F_2(V)+G(U,V) ≤ M }. For (U,V) ∈𝒮_M, we have that 0≤F_1(U)+F_2(V) + G(U,V) ≤ M, which, by the positivity of all the terms, gives that F_1(U) ≤ M, F_2(V) ≤ M, G(U,V) ≤ M. We define Û:=-U/τ_1+logρ. We write the spectral decomposition Û =∑_i λ_i|ξ_i⟩⟨ξ_i|, where λ_i := λ_i(Û). Therefore M ≥F_1(U) = τ_1 [e^Û] = τ_1 ∑_i e^λ_i≥τ_1 e^λ_d ⇒ λ_d_1≤log(M/τ_1). Since U=-τ_1 Û+τ_1logρ, we have λ_1(U)≥λ_1(-τ_1 Û)+λ_1(τ_1logρ)=-τ_1λ_d_1(Û)+τ_1log(λ_1(ρ)). 
Combining the previous inequalities, we conclude that λ_1(U) ≥ -τ_1 log( M/λ_1(ρ) τ_1) =: _1 > -∞ . Arguing analogously for V and F_2(V), we also obtain that λ_1(V) ≥ -τ_2 log( M/λ_1(σ) τ_2) =: _2 > -∞ . In order to obtain the sought upper bound on the spectrum of U and V, we take advantage of (<ref>), which implies (U,V) ≥ M_0. In particular, we apply Proposition <ref> and we get ||U+λ_1(V)Id_H_1|| ≤ and ||V-λ_1(V)Id_H_2|| ≤, hence in particular for all i=1,…,d_1 and j=1,…,d_2, the two-sided bounds -≤λ_i(U) +λ_1(V) ≤ , 0 ≤λ_j(V) -λ_1(V) ≤ . Along with (<ref>) and (<ref>) one easily obtains _1 ≤λ_i(U) ≤ - _2 , _2 ≤λ_j(V) ≤ 2 - _1 , which yields 𝒮_M_0 is bounded, thus the sought coercivity property for . §.§ The -transform and its properties Recall the setting introduced at the beginning of Section <ref>. Similarly as done in the balanced case, we now introduce the corresponding notion of (C,ψ,)-transform associated to . Note that in this case, we do not require ψ to be strictly convex, due to the presence of the exponential (which is strictly convex) in the marginal penalization terms. In particular, the existence of the maximizer in the definition below follows from the concavity and continuity of , together with the coercivity proved within the proof of Theorem <ref>, whereas uniqueness comes exactly from the strict concavity of . For simplicity of notation, we set τ := (τ_1,τ_2). Recall the decomposition as given in (<ref>), in particular the definitions of F_1, F_2, and G in (<ref>). [(C,ψ,,τ)-transform] We define ℱ_2^(C,ψ,,τ)(_1)→(_2) as ℱ_2^(C,ψ,,τ)(U) = { - F_2(V) - G(U,V) V ∈(_2) }. Analogously, we define the corresponding ℱ_1^(C,ψ,,τ)(_2)→(_1) as ℱ_1^(C,ψ,,τ)(V) = { - F_1(U) - G(U,V) U ∈(_2) }. The definition of the transform is well-posed, as the contains exactly one element, as mentioned above. The next step is to characterize the optimality condition in the same spirit as done in the balanced case in Proposition <ref>. Under the standing assumptions of this section, assume additionally that ψ∈ C^1. Then, for every U ∈(_1), its transform ℱ_2^(C,ψ,,τ)(U) is the unique solution V of the equation e^-V/τ_2+logσ =_2 [ ψ'( U ⊕V - C/)]. Analogously, the characterization holds for ℱ_1^(C,ψ,,τ)(V), replacing σ with ρ, _2 with _1. Set V:= ℱ_2^(C,ψ,,τ)(U). For every δ∈ℝ and A ∈(_2), arguing as in the proof of Proposition <ref>, from the fact that the map δ↦ - F_2(V+δ A) - G(U,V+δ A) has vanishing derivative in δ =0, we obtain that [ e^-V/τ_2 + logσ A ] - [ ψ' (U ⊕ V-C/)( Id⊗ A) ] = 0 , ∀ A ∈(_2) , which provides the sought (<ref>), by the very definition of partial trace _2. The second statement follows by arguing similarly. Uniqueness comes directly from the strict concavity of the function V ↦ (U,V). Recall the definition of the primal functional (<ref>) and problem (<ref>) in the unbalanced case. [Equivalent characterizations for maximizers, unbalanced] Under the standing assumptions of this section, assume ψ∈ C^1. Let (U^*, V^*) ∈(_1) ×(_2). Then the following are equivalent: 1) (Maximizers) (U^*,V^*) maximizes D^,τ, i.e. = D^,τ (U^*,V^*). 2) (Maximality condition) U^* = ℱ_1^(C,ψ,,τ)(V^*) and V^* = ℱ_2^(C,ψ,,τ)(U^*). 3) (Duality) There exists Γ^* ∈_≥(_1 ⊗_2) so that (Γ^*)= D^,τ (U^*,V^*). If one (and thus all) condition holds, the unique Γ^* which satisfies 3) is given by Γ^*:=ψ' [ U^* ⊕ V^* - C/ε] , and it is the unique minimizer to . As in the balanced case, from Theorem <ref> and Theorem <ref>, we obtain the sought duality. 
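Before stating the duality corollary, here is a small numerical sketch of the unbalanced transform and of the optimality condition just derived, where the left-hand side is a matrix exponential rather than the fixed marginal sigma. It restricts to real symmetric matrices, fixes U, uses the entropic-type choice psi(t) = e^t (so that psi' = psi), and computes the transform of U for a few values of tau_2; the printed output also illustrates how the soft sigma-marginal tightens as tau_2 grows, in the spirit of the convergence results of the later sections. All names and parameter values are our own illustrative choices.

import numpy as np
from scipy.optimize import minimize

d1, d2, eps = 2, 2, 1.0
rng = np.random.default_rng(3)

def sym(A):
    return (A + A.T) / 2

def lift(f, A):
    w, Q = np.linalg.eigh(A)
    return (Q * f(w)) @ Q.T

def oplus(U, V):
    return np.kron(U, np.eye(d2)) + np.kron(np.eye(d1), V)

def marginal_on_H2(M):
    return np.einsum('ijil->jl', M.reshape(d1, d2, d1, d2))

def random_state(d):
    A = sym(rng.standard_normal((d, d)))
    R = A @ A.T + 1e-2 * np.eye(d)
    return R / np.trace(R)

psi = np.exp                                  # entropic-type choice: psi' = psi
sigma = random_state(d2)
log_sigma = lift(np.log, sigma)               # well defined: sigma has trivial kernel
C = sym(rng.standard_normal((d1 * d2, d1 * d2)))
U = sym(rng.standard_normal((d1, d1)))        # fixed; its penalty term is constant in V and omitted

iu = np.triu_indices(d2)
def vec_to_sym(x):
    V = np.zeros((d2, d2))
    V[iu] = x
    return V + V.T - np.diag(np.diag(V))

def neg_objective(x, tau2):
    V = vec_to_sym(x)
    penalty = -tau2 * np.trace(lift(np.exp, -V / tau2 + log_sigma) - sigma)
    regular = -eps * np.trace(lift(psi, (oplus(U, V) - C) / eps))
    return -(penalty + regular)

for tau2 in (1.0, 10.0, 100.0):
    res = minimize(neg_objective, np.zeros(len(iu[0])), args=(tau2,))
    V_star = vec_to_sym(res.x)                          # the transform of U for this tau2
    Gamma = lift(psi, (oplus(U, V_star) - C) / eps)     # psi'((U (+) V - C)/eps)
    lhs = lift(np.exp, -V_star / tau2 + log_sigma)      # soft marginal from the optimality condition
    print(f"tau2 = {tau2:6.1f}   optimality residual = {np.abs(lhs - marginal_on_H2(Gamma)).max():.1e}"
          f"   distance of the sigma-marginal to sigma = {np.abs(marginal_on_H2(Gamma) - sigma).max():.3f}")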
[Duality for unbalanced QOT] Under the standing assumptions of this section, assume that ψ = φ^* is C^1. Then =. We shall show 1) ⇒ 2) ⇒ 3) ⇒ 1). 1) ⇒ 2) . This follows as in the proof of Theorem <ref>. 2) ⇒ 3). Consider Γ^* as given in (<ref>). From the properties of the Legendre transform and (<ref>) (by computing the optimality condition in the definition of ^*(·|η)), we infer that (α|η) = [ η - e^A+logη] + ⟨ A, α⟩ , if α = e^A+logη . By applying the above equation to η= ρ (resp. η = σ), A = -U^*/τ_1 (resp. A=-V^*/τ_2), noting that α = _1(Γ^*) (resp. α=_2(Γ^*)) from the optimality condition Proposition <ref>, we obtain τ_1 (_1(Γ^*)|ρ) = τ_1 [ ρ - e^ -U^*/τ_1+logρ] - [ U^* _1(Γ^*) ] . τ_2 (_2(Γ^*)|σ) = τ_2 [ σ - e^-V^*/τ_2+logσ] - [ V^* _2(Γ^*) ] . On the other hand, the same computation as in (<ref>) shows that [ C Γ^*] + [ φ(Γ^*) ] = [ U^* _1(Γ^*) ] + [ V^* _2(Γ^*) ] - ε[ ψ(U^*⊕ V^*-C/ε) ] . Putting the last three equalities together we end up with (Γ^*) = τ_1 [ ρ - e^ -U^*/τ_1+logρ] + τ_2 [ σ - e^-V^*/τ_2+logσ] - ε[ ψ(U^*⊕ V^*-C/ε) ] = (U^*,V^*) , which is the sought equality. 3) ⇒ 1). The proof is completely analogous to the one provided in the proof of Theorem <ref>, including the fact that Γ^* as given in (<ref>) is the unique minimizer for the primal problem. § EXTENSIONS OF THE DUALITY THEOREM We finally show that the duality result holds with the weaker assumption of ψ being (not necessarily strictly) convex, superlinear, and bounded from below. In particular, no additional smoothness is required. [Duality for non-smooth ψ] Let ψ be a convex function satisfying (<ref>). Under the standing assumptions of Section <ref>, duality holds, i.e. = , for every τ∈ (0,+∞] (where τ = +∞ denotes the balanced problem). Before proving the theorem, we discuss some preliminary constructions. Let ψ be a superlinear function, bounded from below, and convex (but not necessarily strictly convex nor C^1). Fix a parameter n ∈, a smooth, nonnegative, monotone increasing and strictly convex function ψ̅, and a function ρ∈ C_c^∞(), strictly positive, symmetric, and supported on [-1,1], and such that ∫ρ x = 1. Construct the sequence of mollifiers ρ_n in the usual way, i.e. ρ_n(x) := n ρ(nx), so that ρ_n satisfies the same properties of ρ (on the interval [-1/n, 1/n]), for every n ∈. Finally, define ψ_n: → , ψ_n := ( ψ + 1/nψ̅) * ρ_n . Note that in the unbalanced case τ_1,τ_2 <∞, one could choose ψ̅=0 (due to the fact that no strict convexity is required in Theorem <ref>). First of all, we note that ψ_n ∈ C^∞, and if ψ≥ m for some m ∈_+, then ψ_n ≥ m for every n ∈. Furthermore, ψ_n is also superlinear at +∞, since by Fatou's lemma lim inf_x → +∞ψ_n(x)/x≥lim inf_x → +∞∫ψ(x-z)/xρ_n(z) ≥∫lim inf_x → +∞( ψ(x-z)/x) ρ_n(z) = +∞ . Finally, observe ψ_n = ψ̅_n * ρ_n, where ψ̅_n := ψ + n^-1ψ̅ is a strictly convex function (being a sum of a convex and a strictly convex function). As a consequence, using that ρ_n ≥ 0, we see that for every x,y ∈, λ∈ [0,1] ψ_n(λ x + (1-λ) y ) = ∫ψ̅_n ( λ (x-z) + (1-λ) (y-z) ) ρ_n(z) ≤λ∫ψ̅_n(x-z) ρ_n(z) + (1-λ) ∫ψ̅_n(y-z) ρ_n(z) = λψ_n(x) + (1-λ) ψ_n(y) , with equality if and only if, for every z ∈, ψ̅_n ( λ (x-z) + (1-λ) (y-z) ) ρ_n(z) = ( λψ̅_n(x-z) + (1-λ) ψ̅_n(y-z) ) ρ_n(z) . Using that ρ_n >0 on [-1/n,1/n], and that ψ̅_n is strictly convex, we infer that x=y, which precisely shows that ψ_n is strictly convex as well. We thus constructed a smooth, bounded from below, strictly convex function ψ_n, which we claim to be a good approximation of ψ, in a suitable uniform sense. 
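Before proving that ψ_n is indeed a good uniform approximation, the construction can be sanity-checked numerically. The sketch below uses the quadratic-regularization choice psi(t) = 1/2 (t_+)^2 (convex and C^1 but neither strictly convex nor C^2), the auxiliary function psi_bar(t) = e^t, and the standard bump mollifier; these concrete choices are ours and purely illustrative. It approximates the convolution by quadrature and reports both the uniform distance to psi on a bounded interval and the sign of psi_n - psi (the lower bound psi_n >= psi is established in the next paragraph).

import numpy as np

def psi(t):                 # the regularizer to be smoothed: convex, C^1, not strictly convex
    return 0.5 * np.maximum(t, 0.0) ** 2

def psi_bar(t):             # smooth, nonnegative, increasing, strictly convex auxiliary term
    return np.exp(t)

def bump(x):                # smooth symmetric mollifier supported on [-1, 1] (unnormalized)
    out = np.zeros_like(x)
    inside = np.abs(x) < 1
    out[inside] = np.exp(-1.0 / (1.0 - x[inside] ** 2))
    return out

zz = np.linspace(-1, 1, 4001)
Z = np.sum(bump(zz)) * (zz[1] - zz[0])      # normalization so that the mollifier integrates to 1

def psi_n(x, n):
    """psi_n = (psi + psi_bar / n) * rho_n, computed by simple quadrature on [-1/n, 1/n]."""
    z = np.linspace(-1.0 / n, 1.0 / n, 2001)
    dz = z[1] - z[0]
    rho_n = n * bump(n * z) / Z
    shifted = x[:, None] - z[None, :]
    vals = psi(shifted) + psi_bar(shifted) / n
    return np.sum(vals * rho_n[None, :], axis=1) * dz

x = np.linspace(-3.0, 3.0, 1201)
for n in (2, 8, 32, 128):
    gap = psi_n(x, n) - psi(x)
    print(f"n = {n:4d}   sup(psi_n - psi) on [-3, 3] = {gap.max():8.4f}"
          f"   min(psi_n - psi) = {gap.min():+.2e}")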
First of all, note that ψ being the Legendre transform of a convex map φ satisfying (<ref>) implies that ψ is monotone non-decreasing as well as lim_x → -∞ψ(x) = inf_x ∈ψ(x) = -φ(0) ∈ , see Remark <ref> in the appendix for a proof. In particular, this implies that ψ∈ L^∞(I) ∩Lip(I) , for every interval I = (-∞, b] , b ∈ . The same holds by construction for ψ̅. As a consequence, we deduce that, for every b ∈, sup_x ∈ (-∞,b] | ψ_n(x) - ψ(x)| ≤sup_x ∈ (-∞,b] | ψ̅_n * ρ_n(x) - ψ * ρ_n(x) | + | ψ * ρ_n(x) - ψ(x) | = sup_x ∈ (-∞,b]1/n | ψ̅* ρ_n(x) | + | ψ * ρ_n(x) - ψ(x) | ≤1/nsup_x ∈ (-∞,b+1] |ψ̅(x)| + Lip(ψ)( (-∞,b+1]) ∫ |y| ρ_n(y) ≤1/n( sup_x ∈ (-∞,b+1] |ψ̅(x)| + Lip(ψ)( (-∞,b+1]) ) . which clearly goes to 0 as n →∞. This in particular shows that ψ_n →ψ in L^∞_loc(). Note due to the fact that ρ (hence ρ_n) is symmetric (and thus in particular ∫ z ρ_n(z) = 0) and ψ̅≥ 0, we have that ψ_n (x) ≥ψ * ρ_n (x) Jensen≥ψ( ∫ (x-z) ρ_n(z) ) = ψ(x) , ∀ x ∈ . We consider the non-commutative optimal transport problems associated with ψ_n and φ_n := ψ_n^*, and denote by , respectively the dual and the primal functional associated to ψ_n, and with , the dual and the primal problem, respectively. By definition of dual functional, together with (<ref>), this shows that ≤≤, for every n ∈. In particular, it is straightforward to check that the validity of Proposition <ref> for implies the following equi-coercivity property (up to translation) for . For every U∈(_1), V∈(_2) such that D^,τ,n (U,V)≥ M >-∞ ( and λ_1(V)=0 if τ = +∞) for some M∈, there exists a constant 0≤𝒜=𝒜(M) <∞ independent of n such that ||U||_∞,||V||_∞≤. In particular, whenever a sequence (U_n,V_n) ∈(_1) ×(_2) satisfies, for every n∈, inf_n ∈D^,τ,n (U_n,V_n)≥ M >-∞ ( and λ_1(V_n)=0 if τ = +∞) , then { (U_n,V_n)}_n is bounded, and therefore pre-compact. We are ready to prove Theorem <ref>. We apply Theorem <ref>, Theorem <ref> to ψ_n and deduce that = , ∀ n ∈ . We claim we can pass to the limit the previous equality as n →∞ to conclude the duality result for ψ. We start by observing that for every (U_n,V_n) → (U, V) ∈(_1) ×(_2), we have that lim_n →∞ (U_n,V_n) = (U,V) , which is a stronger statement than Γ-convergence (every approximating sequence is a recovery sequence). This follows from the fact that converging sequences in (_1) ×(_2) are bounded and that ψ_n →ψ in L^∞_loc(). This in particular shows that -Γ→ -. Together with (<ref>) and Proposition <ref>, by the fundamental theorem of Γ-convergence, it provides the claimed convergence at the level of the dual problems lim_n →∞ = . We are left to show the convergence of the primal problems, which follows from very similar arguments, namely by showing that Γ→ plus suitable compactness. In order to show the first statement, we first claim that ψ_n →ψ in L^∞_loc(I) for I bounded from above (cfr. (<ref>)) implies φ_n →φ in L^∞_loc(_+). To prove this, we start observing that for every compact K ⊂_+, φ is continuous, and thus bounded, on K. Therefore, for every p ∈ K, we have that - φ_L^∞(K)≤φ(p) = sup_x ∈{ p x - ψ(x) } . This in particular shows that the sup in the previous equation can be taken over the set C(p) := { x ∈ p x - ψ(x) ≥ - φ_L^∞(K) -1 } . As a consequence of the Fenchel–Young inequality, we infer that, for p ∈ K, φ(p) = sup_x ∈ C(p){ p x - ψ(x) } ≤sup_x ∈ C(p){φ_n(p) + ψ_n(x) - ψ(x) } ≤φ_n(p) + ψ - ψ_n _L^∞(C(p)) . On the other hand, we know that φ_n ≤φ everywhere (as ψ≤ψ_n), hence we conclude that φ - φ_n _L^∞(K)≤sup_p ∈ Kψ - ψ_n _L^∞(C(p)) , ∀ n ∈ . Let p_K be the max of K. 
Then, by definition of the sets themselves, we have that for every p ∈ K, C(p) ∩_+ ⊂ C(p_K) C(p_K) ⊂ (-∞ , α_K] , for some α_K < +∞, by superlinearity of ψ, see Remark <ref>. This, together with (<ref>), provides the estimate φ - φ_n _L^∞(K)≤ψ - ψ_n _L^∞ ((-∞,α_K] ) , ∀ n ∈ . Recalling that from (<ref>) we know that ψ_n →ψ uniformly on every set which is bounded from above, we conclude that φ_n →φ in L^∞_loc(_+). As in (<ref>), we now take advantage of this fact to ensure that lim_n →∞(Γ_n) = (Γ) , ∀ Γ_n →Γ∈(_1 ⊗_2) . Secondly, let (U_n^*,V_n^*) be the maximizer of (with λ_1(V_n^*)=0 if τ = +∞). By Theorem <ref>, Theorem <ref> we know that Γ_n^* := ψ_n' [ U_n^* ⊕ V_n^* - C/ε] is the unique minimizer of . Thanks to Proposition <ref>, we know that, up to a non-relabeled subsequence, (U_n^*,V_n^*) → (U^*,V^*) as n →∞. Moreover, with ψ being convex, we have that ψ is locally Lipschitz. In particular, by the properties of the convolution this implies that, for every bounded set I ⊂, one has that sup_n ∈ψ_n' _L^∞(I)≤Lip (ψ;B_1(I)) + Lip (ψ̅;B_1(I)) < ∞ . This, together with the boundedness of (U_n^*,V_n^*), implies that {Γ_n^* }_n is also bounded, and therefore pre-compact. Thanks to the Γ-convergence showed above in (<ref>), this in particular shows that every limit point (which exists) must necessarily be a minimizer (the marginal constraints are closed by uniform convergence), thus concluding the proof of Theorem <ref> passing to the limit in (<ref>). § CONVERGENCE RESULTS: FROM UNBALANCED TO BALANCED NON-COMMUTATIVE OPTIMAL TRANSPORT In this section, we discuss limit behaviors for the unbalanced optimal transport as the parameters τ_1, τ_2 → +∞. Recall that we set τ=(τ_1,τ_2), and use the slight abuse of notation τ→ +∞. We discuss convergence results both at the level of primal and dual problems, as well as the convergence of minimizers/maximizers. [Convergence as τ→ +∞] Under the standing assumption of Section <ref>, we have the following convergence results: * The functionals Γ-converge as τ→∞ to the functional Γ↦(Γ) if Γ↦ (ρ,σ) +∞ otherwise . * The unique minimizer Γ^*,τ∈(_1 ⊗_2) of converges (up to subsequence) as τ→ +∞ to a minimizer Γ^* ∈(_1 ⊗_2) of , under the constraint Γ^* ↦ (ρ,σ). * We have the following convergence result for the dual: for every (U_τ,V_τ) → (U,V) ∈(_1) ×(_2) as τ→ +∞ one has that lim_τ→ +∞D^,τ (U_τ,V_τ) = D^ (U,V) . In particular, -D^,τ Γ-converges to -D^. * The unbalanced optimal transport problems converge as τ→ +∞ to the balanced optimal transport problem, i.e. lim_τ_1,τ_2 → +∞ = = = lim_τ_1,τ_2 → +∞ , and the minimizers Γ^*,τ converge (up to subsequence) to a minimizer of with marginal constraints. * Let (U^*,τ, V^*,τ) ∈(_1) ×(_2) be the unique maximizer of D^,τ. Define the recentered potentials ( Û^*,τ, V̂^*,τ) := ( U^*,τ + λ_1(V^*,τ) Id__1 , V^*,τ - λ_1(V^*,τ) Id__2) ∈(_1) ×(_2) . Then (Û^*,τ, V̂^*,τ) converges (up to subsequence) as τ→ +∞ to a maximizer (U^*,V^*) ∈(_1) ×(_2) of D^ which satisfies λ_1(V^*)=0. If we assume additionally that ψ∈ C^1 and strictly convex, then the Γ^*,τ converge to the unique minimizer in , while (Û^*,τ, V̂^*,τ) converges to the unique maximizer (U^*,V^*) ∈(_1) ×(_2) of D^ which satisfies λ_1(V^*)=0. Note that is not coercive, since the sequence {(a Id,-a Id)}_a ∈ℝ is such that (a Id,-a Id) =C_0 ∈, but it does not admit a convergent subsequence. 
By Theorem <ref>, and by the fundamental theorem of Γ-convergence, this shows that the family of functionals is not equi-coercive (note indeed that the bounds obtained in the proof of Proposition <ref> are not uniform in τ_1,τ_2). This explains why it is not directly possible to infer the main results for the balanced dual problem directly from the unbalanced setting by sending the parameter to infinity. For a similar reason, the Γ-convergence result proved in 1. in the above theorem does not directly provide the validity of 3., which instead requires a different argument. 1. First, we show the claimed Γ-convergence for the primal functionals. We prove the lim inf and the lim sup inequality, starting from the first one. Let Γ_τ→Γ∈(_1 ⊗_2) as τ→ +∞, and assume without loss of generality that sup_τF^,τ (Γ_τ) <∞. By the continuity of , we know that lim_τ→ +∞(Γ_τ) = (Γ) < ∞ , and therefore we also infer that sup_τ_1, τ_2 > 0τ_1 ( P_1(Γ_τ)| ρ)+ τ_2 ( P_2(Γ_τ)| σ) < ∞ . Note that the partial trace is a continuous operation with respect to the Hilbert-Schmidt convergence of operators. Therefore we have that P_i(Γ_τ) → P_i(Γ) as τ→∞. In particular, we deduce that sup_τ_1, τ_2 > 0( P_1(Γ_τ)| ρ) + ( P_2(Γ_τ)| σ) < ∞ . Combining the estimates in (<ref>) and (<ref>), from the convergence of the marginals and the continuity of the relative entropy we conclude that ( P_1(Γ)| ρ) + ( P_2(Γ)| σ) = lim_τ→ + ∞( P_1(Γ_τ)| ρ) + ( P_2(Γ_τ )| σ) = 0 . By using that the relative entropy is a nonnegative functional, this shows that both relative entropies must vanish, which in particular yields Γ↦ (ρ,σ). Combining this with (<ref>), we end up with lim inf_τ→ +∞F^,τ (Γ_τ) ≥(Γ) Γ↦ (ρ,σ) , which concludes the proof of the lim inf inequality. For the lim sup inequality, it is enough to choose the constant recovery sequence. 2. Note that {F^,τ}_τ is equi-coercive, due to the fact that F^,τ≥, for every τ>0, together with the fact that has bounded sublevels. As a consequence, 2. follows directly from 1. by the fundamental theorem of Γ-convergence. 3. Let us now show the convergence claimed in (<ref>). To this purpose, we observe that lim_τ_1 → +∞ -τ_1 [ e^-U_τ_1/τ_1 + logρ - ρ] = lim_t → 0^-[ e^logρ + t Ũ_t - e^logρ] /t = / t |_t=0 g(logρ + t U) , where we set Ũ_t := U_-1/t and g(A):= [e^A]. The last equality is a consequence of the following observation: let B⊂(_1) be the ball of radius 1 centered in logρ∈(_1). For t sufficiently small, both logρ + t Ũ_t and logρ + t U belong to B, hence by (<ref>) we can estimate | 1/t[ e^logρ + t Ũ_t - e^logρ + t U] | ≤Lip (g ; B) (Ũ_t -U) _HS→ 0 , as t → 0^-. The claimed equality readily follows. An application of Lemma <ref> yields ∇ g(A) = e^A as well as that the lim inf is in fact a limit, and it holds lim_τ_1 → +∞ -τ_1 [ e^-U/τ_1 + logρ - ρ] = ⟨ (∇ g)(logρ) , U ⟩ = [ ρ U ] . The very same argument applied on _2 ensures that lim_τ_2 → +∞ -τ_2 [ e^-V/τ_2 + logσ - σ] = [ σ V ] . The latter two equalities prove exactly that lim_τ→ +∞D^,τ (U_τ,V_τ) = (U,V), hence providing the claimed convergence. 4. The convergence of the primal problems follows from the same argument above in the proof of 2. The convergence (and equality) of the dual problems follows by duality, in particular Theorem <ref>. 5. We take advantage of Remark <ref> and once again of (<ref>), to deduce that ( Û^*,τ, V̂^*,τ) = ( U^*,τ, V^*,τ) ≥D^,τ( U^*,τ, V^*,τ) = 𝔇^,τ = 𝔉^,τ , where in the last step we used the duality result proved in Theorem <ref>. We use the convergence proved in 3. 
to infer lim inf_τ→ +∞( Û^*,τ, V̂^*,τ) ≥lim inf_τ→ +∞𝔉^,τ = > - ∞ . This implies that the sequence {( Û^*,τ, V̂^*,τ) }_τ belongs to a superlevel set of , and thus by construction satisfies the hypothesis of Proposition <ref>. It follows that it is a bounded sequence of operators, and therefore it is pre-compact. We conclude the proof of 4. by showing that any limit point is necessarily a maximizer (U^*,V^*) of with λ_1(V^*)=0. Assume indeed that ( Û^*,τ, V̂^*,τ) → (U^*,V^*) as τ→ +∞. The fact that λ_1(V^*,τ)=0 for every τ>0 ensures that λ_1(V^*)=0. Finally, by (<ref>) together with the continuity of we deduce that (U^*,V^*) = lim_τ→ +∞( Û^*,τ, V̂^*,τ) ≥ = , where at last we used the duality result for the balanced case, i.e. Theorem <ref>. We thus conclude that (U^*,V^*) is indeed a maximizer with λ_1(V^*)=0. §.§ Convergence of Assume that ψ is strictly convex and C^1. In this short section, we take advantage of the convergence results proved in Theorem <ref> to show that also the associated (C,ψ,,τ)-transforms converge, as the parameters τ→ +∞. More precisely, we claim that ℱ_1^(C,ψ,,τ)(V_τ) →ℱ_1^(C,ψ,)(V) ∀ V_τ→ V . Recall from the previous section that the following property (stronger than Γ-convergence) holds: one has that (<ref>) lim_τ→ +∞D^,τ (U_τ,V_τ) = (U,V) , for every (U_τ, V_τ) → (U,V) . In particular, we have the Γ-convergence of the one-parameter functionals, given by - D^,τ (·, V_τ) -D^ (·,V) , for every V_τ→ V. Using once again the inequality (<ref>) together with the coercivity of the dual functional in the balanced case (cfr. Proposition <ref>, see also Remark <ref>), it readily follows that the sequence of functionals {- D^,τ (·, V_τ) }_τ is equi-coercive. This, together with the aforementioned Γ-convergence, provides the convergence of the associated minimizers, which precisely means lim_τ→ +∞ℱ_1^(C,ψ,,τ)(V_τ) = ℱ_1^(C,ψ,)(V) ∀ V_τ→ V , as claimed. § LEGENDRE'S TRANSFORM IN A NON-COMMUTATIVE SETTING In this appendix, we study the properties and formula for the Legendre transform of functions obtained by composing the trace operator with (the lifting of) a convex map on the real line. More specifically and throughout all the section, let φ [0,+∞) → [m,+∞) be convex, superlinear at infinity, namely lim_t → +∞φ(t)/t =+∞, and m > -∞. Recall that, given such φ, its Legendre transform is given by ψ: → , ψ(x) := sup_t ∈{ x t - φ(t) } , ∀ x ∈ , where we extend φ to +∞ over the negative half-line. Under the aforementioned assumptions on φ, we claim that ψ is monotone non-decreasing and satisfies lim_x → -∞ψ(x) = inf_x∈ψ(x) = - φ(0) ∈ . Indeed, any non-constant real-valued convex function ψ on is either (i) monotone non-decreasing (ii) monotone non-increasing (iii) admits a global minimizer x^*∈. For (i) the conclusion trivially follows, and we claim that both (ii) and (iii) fail under the assumptions on φ, since (ii)-(iii) must imply that lim_x→-∞ψ(x) = +∞ or/and there exist x_1<x_2 such that ψ(x_1)>ψ(x_2). We can exploit this fact to obtain that φ(p)<+∞ for some p <0, contradicting the original assumption on φ. Indeed, let p_1∈∂ψ(x_1), then ψ(x)≥ψ(x_1) + p_1(x-x_1). Since x_1 is not a minimizer by construction, then p_1≠ 0, and in fact p_1 must be strictly negative, as ψ(x_2)≥ψ(x_1) +p_1(x_2-x_1) > ψ(x_2) + p_1(x_2-x_1). However, then we have φ(p_1) = ψ^*(p_1) = p_1 x_1 - ψ(x_1) < ∞, contradicting that φ(p)=+∞ for all p<0. 
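As a concrete illustration of the previous remark (the example is ours and is not needed in the sequel), consider the entropy-type integrand φ(t) = t log t - t + 1 on [0,+∞), extended to +∞ on the negative half-line: it is convex, superlinear at infinity and satisfies φ(0) = 1. For every x ∈ ℝ the supremum defining ψ(x) = sup_t { x t - φ(t) } is attained at t = e^x, so that ψ(x) = e^x - 1. This ψ is indeed monotone non-decreasing with lim_x → -∞ ψ(x) = inf_x ψ(x) = -1 = -φ(0), and its derivative ψ'(x) = e^x is the exponential map, consistent with the fact recalled below that the Legendre transform of the relative entropy is the exponential function.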
§.§ Properties of operator liftings Given a d-dimensional Hilbert space ℋ, the map that sends f ∈ C(ℝ) into Lin((ℋ),(ℋ)) via spectral calculus is a vector space homomorphism and preserves the natural composition and product operations in C(ℝ) and Lin((ℋ),(ℋ)), namely given f_1,f_2 ∈ C() and λ∈ℝ we have (f_1+f_2)(A) = f_1(A) + f_2(A), (λ f_1)(A) = λ f_1(A), (f_1· f_2)(A) = f_1(A) f_2(A), (f_1 ∘ f_2)(A) =f_1(f_2(A)), for every A ∈(). It can be proved (see for instance <cit.>) that, if φ [0,+∞) →ℝ is convex (respectively strictly convex), then _≥()∋Γ↦[φ(Γ)] ∈ℝ is convex (respectively strictly convex). In particular, since real-valued convex functions on ℝ^n are locally Lipschitz we have that _≥()∋Γ↦[φ(Γ)] ∈ℝ is locally Lipschitz. Additionally, using that () ∋ W ↦[Wξ] is continuous for every ξ∈(), we infer (_1) ×(_2) ∋(U,V) ↦ (U,V) ∈ℝ is (strictly) concave and continuous whenever ψ is (strictly) convex. Note that Γ↦φ(Γ) is not necessarily convex (e.g. φ(x) = x^3, see <cit.>). By (<ref>) with W:= (U ⊕ V - C)/, we obtain ⟨ U ⊕ V - C, Γ⟩≤[ φ(Γ)] + [ ψ( U ⊕ V - C/) ] . Therefore, for every Γ↦ (ρ,σ), we have that (Γ) = [C Γ] + [φ(Γ)] ≥⟨ U ⊕ V , Γ⟩ - [ ψ( U ⊕ V - C/) ] = [U ρ] + [V σ] - [ ψ( U ⊕ V - C/) ] = (U,V) , for every (U,V) ∈(_1) ×(_2). The same holds true for the unbalanced case, using the fact that the Legendre transform of the relative entropy is the exponential function, cfr. (<ref>). Let φ satisfy the standing assumption of the section and let us denote the lifting of φ to the space of Hermitian matrices with a slight abuse of notation by φ. For every W ∈() sup_Γ∈_≥()( ⟨ W, Γ⟩ - [ φ(Γ)] ) = max_Γ∈_≥()( ⟨ W, Γ⟩ - [ φ(Γ)] ) Given M ∈ℝ, we claim that the superlevel set 𝒮_M:={Γ : ⟨ W,Γ⟩ -[ φ(Γ)] ≥ M} is bounded. By contradiction, there exists a sequence {Γ^n }_n ⊂𝒮_M such that, denoting {λ_i^n}_i the eigenvalues of Γ^n with λ_1^n ≤λ_2^n ≤…≤λ_d^n, we have that λ_d^n → +∞. We estimate ⟨ W, Γ^n ⟩ -[ φ(Γ^n) ] ≤ d W λ_d^n -∑_i=1^d φ(λ_i^n)= d W λ_d^n - φ(λ_d^n)-∑_i=1^d-1φ(λ_i^n) ≤ d W λ_d^n - φ(λ_d^n)-min{φ}(d-1), which converges to -∞, using the property that φ is superlinear at infinity. Thus, the claim is proved. Moreover, notice that the map Γ↦⟨ W, Γ⟩ - [ φ(Γ)], is continuous by (<ref>), hence the maximum exists. Let φ satisfy the standing assumption of the section. We have that Ψ(W):=sup_Γ∈_≥()( ⟨ W, Γ⟩ - [ φ(Γ)] ) satisfies Ψ(W)=[ψ'(W)], where ψ' is the lifting of ψ=φ^* to the space of Hermitian matrices. We claim that ⟨ W, Γ⟩≤[ φ(Γ)] + [ ψ'(W)]. To do this, let us write Γ = ∑_i=1^d Γ_i |γ_i⟩⟨γ_i| and W = ∑_i=1^d W_i |ξ_j⟩⟨ξ_j| and compute ⟨ W, Γ⟩ = [ W Γ] = ∑_iΓ_i ⟨γ_i|W|γ_i⟩ = ∑_i∑_j Γ_i W_j |⟨ξ_j |γ_i⟩|^2 ≤∑_i ∑_j (φ(Γ_i) + ψ(W_j)) |⟨ξ_j |γ_i⟩|^2 = ∑_i φ(Γ_i) (∑_j|⟨ξ_j |γ_i⟩|^2) + ∑_j ψ(W_j) (∑_i |⟨ξ_j |γ_i⟩|^2) = ∑_i φ(Γ_i) + ∑_j ψ(W_j) = [φ(Γ)]+ [ ψ'(W) ]. This gives that Ψ(W)≤[ ψ'(W) ]. We prove that Ψ(W)≥[ ψ'(W) ]. To do so, we choose a specific Γ̅:=∑_j Γ_j |ξ_j⟩⟨ξ_j| with Γ_j ∈∂ψ(W_j). This in particular shows that the inequality in (<ref>) is an equality, proving the claimed inequality. Incidentally, during the proof above, we explicitly construct a maximizer. The next lemma considers maps induced via functional calculus by convex functions on the real line, and discusses their properties. Let ψ∈ C^1() a convex function and define the map Ψ: () → as Ψ(A) := [ ψ (A)]. Then we have that / t|_t=0Ψ(A+tB) = ⟨∇Ψ(A) , B ⟩ = [ ∇Ψ(A) B ] , where ∇Ψ(A) = ψ'(A) , for every A ∈() and B ∈(). In particular, for every A ∈() it holds [A ψ'(A)] = [ ψ(A)] + [ ψ^* ( ψ'(A))] . 
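Before giving the short proof, both the trace Fenchel–Young inequality established above and the identity stated in the lemma can be spot-checked in a finite-dimensional example. The sketch below is purely illustrative: it assumes the pair φ(t) = t log t - t + 1, ψ(x) = e^x - 1 from the example above together with randomly generated Hermitian matrices, and all helper names are ours.

```python
# A finite-dimensional spot check (illustrative only) of
#   Tr[W Gamma] <= Tr[phi(Gamma)] + Tr[psi(W)]          (trace Fenchel-Young)
#   Tr[A psi'(A)] = Tr[psi(A)] + Tr[psi^*(psi'(A))]     (identity in the lemma)
# for the assumed pair phi(t) = t*log(t) - t + 1, psi(x) = exp(x) - 1, psi^* = phi.
import numpy as np
from scipy.special import xlogy

rng = np.random.default_rng(1)
d = 5

def random_hermitian():
    B = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    return (B + B.conj().T) / 2

phi = lambda t: xlogy(t, t) - t + 1.0      # xlogy handles t = 0 correctly
psi = lambda x: np.exp(x) - 1.0

W = random_hermitian()
H = random_hermitian()
Gamma = H @ H                               # positive semi-definite

w = np.linalg.eigvalsh(W)
g = np.clip(np.linalg.eigvalsh(Gamma), 0.0, None)   # clip tiny negative round-off

lhs = np.trace(W @ Gamma).real
rhs = np.sum(phi(g)) + np.sum(psi(w))
print("Fenchel-Young:", lhs, "<=", rhs, lhs <= rhs + 1e-9)

# Lemma identity, with A = W and psi'(A) = exp(A) evaluated spectrally:
lhs_id = np.sum(w * np.exp(w))                        # Tr[A psi'(A)]
rhs_id = np.sum(psi(w)) + np.sum(phi(np.exp(w)))      # Tr[psi(A)] + Tr[phi(exp(A))]
print("Lemma identity:", lhs_id, "=", rhs_id)
```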
The proof of (<ref>) can be found e.g. in <cit.>. The equality in (<ref>) follows directly from Proposition <ref> and the fact that ⟨ A, ∇Ψ(A) ⟩ = Ψ(A) + Ψ^* (∇Ψ(A)) , which is a property of real, convex functions and their Legendre transform <cit.>. §.§ Acknowledgements E.C. acknowledges the support of the New Frontiers in Research Fund (NFRFE-2021-00798) and the European Union's Horizon 2020 research and innovation programme (Grant agreement No. 948021). Both A.G. and N.M. acknowledge the support of the Canada Research Chairs Program, the Natural Sciences and Engineering Research Council of Canada and the New Frontiers in Research Fund (NFRFE-2021-00798). L.P. gratefully acknowledges funding from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - GZ 2047/1, Projekt-ID 390685813. We thank Luca Tamanini for the critical reading of a preliminary version of the manuscript.
http://arxiv.org/abs/2409.02857v1
20240904163047
Physically constrained quantum clock-driven dynamics
[ "Dario Cilluffo", "Lea Lautenbacher", "Giovanni Spaventa", "Susana F. Huelga", "Martin B. Plenio" ]
quant-ph
[ "quant-ph" ]
§ ABSTRACT Thermal machines are physical systems specifically designed to make thermal energy available for practical use through state transformations in a cyclic process. This concept relies on the presence of an additional element equipped with a clock, controlling which interaction Hamiltonian between the system and the reservoirs must act at a certain time and that remains unaffected during this process. In the domain of quantum dynamics, there is substantial evidence to suggest that fulfilling this final condition is, in fact, impossible, except in ideal and far-from-reality cases. In this study we start from one such idealized condition and proceed to relax the primary approximations to make the model more realistic and less ideal. The main result is a fully quantum description of the engine-clock dynamics within a realistic quantum framework. Furthermore, this approach offers the possibility to address the deeper and more fundamental challenge of defining meaningful time operators in the realm of quantum mechanics from a different standpoint. Corresponding author:[email protected] Institute of Theoretical Physics & IQST, Ulm University, Albert-Einstein-Allee 11 89081, Ulm, Germany Institute of Theoretical Physics & IQST, Ulm University, Albert-Einstein-Allee 11 89081, Ulm, Germany Institute of Theoretical Physics & IQST, Ulm University, Albert-Einstein-Allee 11 89081, Ulm, Germany Institute of Theoretical Physics & IQST, Ulm University, Albert-Einstein-Allee 11 89081, Ulm, Germany Institute of Theoretical Physics & IQST, Ulm University, Albert-Einstein-Allee 11 89081, Ulm, Germany Physically constrained quantum clock-driven dynamics Martin B. Plenio September 9, 2024 ==================================================== § INTRODUCTION An engine can generally be defined as an open system coupled to many portions of the surrounding environment, which act as reservoirs. The primary aim of an engine is generating power in the form of mechanical work <cit.>. The realization of thermodynamic cycles, whether classical or quantum, relies on time-dependent Hamiltonians, which often incur significant energy costs that are difficult to fully capture in theoretical models. This problem can be addressed by embedding the system within an expanded framework that includes an additional "clock" system to keep track of time. The engine and clock together evolve under a time-independent Hamiltonian in a larger Hilbert space <cit.>. While this approach increases the dimensionality and complexity of the system due to the expanded Hilbert space, it compensates by making the overall dynamics autonomous, removing the need for external time-dependent driving. This framework has offered valuable insights into how the intrinsic physics of the time-keeping system <cit.> and the process of time sampling <cit.> affect the dynamics of generic quantum machines, with notable implications for quantum computing protocols <cit.>. Additionally, this perspective has become foundational in the resource theory of thermodynamics <cit.>. In the most general framework, a quantum system that can either access or generate a time signal and use it to control the evolution of another system, while remaining robust to perturbations (such as back reactions from the engine), can serve as a clock <cit.>. 
A minimal model including effectively all these features has been recently proposed in <cit.>: the time-dependent Hamiltonian coupling system and environment is modelled as a set of time-independent operators acting sequentially during the evolution time. Each operator is correlated to the position of an external particle (the clock) freely moving under a Hamiltonian that is assumed to be linear in the momentum. The control mechanism is provided by an effective coupling between the particle and the engine that is assumed to commute with the free Hamiltonian of the engine (covariant operation). As pointed out in <cit.>, this model is inherently nonphysical because of the particular free evolution assumed for the clock (pure translation), that requires an unbounded clock Hamiltonian. This argument can be seen as another facet of the celebrated Pauli objection to the existence of a time operator in standard quantum mechanics <cit.>. Namely, the equation of motion of a self-adjoint time operator T̂ reads Ṫ̂̇ = i [Ĥ,T̂] = 1 ⇒ [T̂,Ĥ] = i 1 , thus if such operators were to exist, they would need to be unitarily equivalent to x̂ and p̂, implying that their spectra are continuous and unbounded from below. This situation would result in unstable and nonphysical descriptions of interacting quantum theories. In essence, the commutation relation involving T̂ and the Hamiltonian operator, as outlined in (<ref>), lacks exact physical solutions within the framework of standard non-relativistic quantum mechanics. This issue has deep historical roots within the genesis of quantum theory <cit.>. Over the years, various strategies have emerged to confront this challenge, each presenting distinctive viewpoints. These strategies include the proposition of non-self-adjoint time operators <cit.>, as well as explorations within the framework of relativistic quantum field theory <cit.>. Remarkably, an alternative perspective has advocated for the retention of unbounded operators <cit.>. Other attempts in the literature, as documented in <cit.>, often rely on reasonable finite-dimensional approximations of T̂. In this study, we extend the ideal model presented in <cit.> to encompass a realistic quantum framework, addressing complexities previously unaccounted for. Specifically, our approach involves utilizing a quantum field as our clock-system and tracking time by monitoring the motion of a coherent pulse as it propagates along a linear trajectory. Under the same fundamental assumptions as in <cit.> regarding the coupling Hamiltonian between the system and the engine, we demonstrate that this coupling directly influences the dispersion law of the wave-packet, resulting in a significant deterioration of the clock's performance. This result paves the way to address the challenge of approximating the solution of (<ref>) by means of infinite-dimensional operators that form an integral part of the model. The paper is structured as follows: in Sec. <ref> we define the global model of engine and clock and describe the dynamics of the system using the framework of quantum collision models (QCM) <cit.>. Subsequently, in Sec. <ref> we introduce a model for the interaction between the control system and the engine. This model, designed for situations characterized by minor degradation, serves to restrict deviations from the desired dynamics we aim to implement. Sec. <ref> explicitly addresses the issue of degradation and in Sec. 
<ref> we return to the problem of time operators to quantitatively assess how the unavoidable degradation of physical clocks affects the precise definition of time operators in QM. § DEFINITION OF ENGINE AND CLOCK Following the reasoning in <cit.> we consider a bipartite Hilbert space ℋ = ℋ_E ⊗ℋ_C, where E identifies a generic open system and its surrounding environment, and C an auxiliary system denoted as clock. The role of the clock is controlling the evolution of the system without any external control. The total Hamiltonian reads Ĥ_ tot = Ĥ_E + Ĥ_C + V̂_EC , where H_E and H_C are the free Hamiltonians of engine and clock, respectively. The identities over the complementary Hilbert spaces are omitted. In <cit.> the interaction between engine and clock takes the form V̂_EC = ∫_ℝ d x V̂_E (x)⊗|x⟩_C⟨x| , where |x⟩_C⟨x| projects over the different position states of the clock and V̂_E∈ℋ_E. We choose as a clock a one-dimensional bosonic field with free Hamiltonian Ĥ_C = ∫ d k ω_k â_k^†â_k . The initial state of the clock is a pulse described by |ψ⟩^(0)_C = ∫ dx ξ_x_0(x) 𝒳(x) |vac⟩≡ψ̂^†_C (x_0) |vac⟩ , where ξ_x_0(x) is the pulse envelope function centered on x_0, with velocity c=1 and 𝒳 is an operator acting on the ladder operators of the field x (e.g. displacement or squeezing). Within the quasimonochromatic approximation <cit.>, i.e. the spectral width is much smaller than the average frequency of the wave-packet. In the following, we will assume a linear dispersion relation ω_k = v_g k in the free Hamiltonian (<ref>). These assumptions correspond to taking as our clock a single-photon pulse traveling in a vacuum or a particle with a very narrow distribution in momentum. For simplicity, we assume a gaussian envelope with frequency bandwidth Ω ξ(x) = (Ω^2/2π)^1/4 e^-Ω^2 (x-x_0)^2/4 , with 𝒳 = ℐ, where ℐ is the identity map, i.e. a single-particle pulse. We can distinguish between two different times only when the pulse has moved by an amount that is larger than its width (in <ref> there's an example of this process). This leads naturally to a first discretisation of the interaction defined by a time window of size W and discretising V̂_E(s) on them. In light of this intuition we make the following replacements in (<ref>): V̂_E (s) →∑_i V̂_E(s_i) Θ_W(s-s_i) , |s _C s | →1/W∫_𝒟_s dx  xx , where Θ(s-s_i) is the rectangle function of size W centered on the time s_i, which can be chosen among the times within the ith window. We denote with 𝒟_q the integration range corresponding to a time window centered on the value q. Let's assume that s_i corresponds to the centre of the window. In the second equation the measure of position is defined, i.e. we integrate the number of excitations per length over the region corresponding to s. More detailed information about W and the discrete potential V̂(s_i) will come in subsequent sections. Plugging Eqs. (<ref>) and (<ref>) into Eq.(<ref>), the interaction term can be expressed as V̂_EC→ ∫_ℝ ds ∑_i V̂_e(s_i) Θ_W(s-s_i) ⊗1/W∫_𝒟_s dx  xx = 1/W∑_i V̂_e(s_i) ⊗∫_ℝ ds Θ_W(s-s_i) ∫_𝒟_s dx  xx = 1/W∑_i V̂_e(s_i) ⊗∫_𝒟_i dx  xx , where 𝒟_i is now the region including s_i (following the previous assumption it is [s_i-W/2,s_i+W/2]). 
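To make the role of the window width W more explicit, the following minimal sketch evaluates the fraction of the single-particle excitation contained in a window of size W as the pulse centre moves past it; for the Gaussian envelope above, the excitation density |ξ(x)|^2 is a normal density of standard deviation 1/Ω. The numerical values of Ω and W and the helper name are illustrative choices of ours, not parameters fixed by the model.

```python
# Fraction of the single-particle excitation <N_window> inside a window of size W,
# as a function of the pulse-centre position. With the Gaussian envelope above,
# |xi|^2 is a normal density of standard deviation 1/Omega. Values of W and Omega
# are arbitrary illustrative choices.
import numpy as np
from scipy.special import erf

Omega = 1.0                                    # frequency bandwidth -> spatial width 1/Omega

def window_fraction(t, W):
    a = (W / 2.0 - t) * Omega / np.sqrt(2.0)
    b = (W / 2.0 + t) * Omega / np.sqrt(2.0)
    return 0.5 * (erf(a) + erf(b))

ts = np.linspace(-15.0, 15.0, 7)               # positions of the pulse centre
for W in (1.0, 5.0, 20.0):                     # W ~ 1/Omega vs. W >> 1/Omega
    print(W, np.round(window_fraction(ts, W), 3))
# For W >> 1/Omega the fraction jumps from ~1 to ~0 as the pulse leaves the window,
# so the window index (i.e. "what time it is") can be read off unambiguously.
```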
In the interaction picture with respect to the free Hamiltonian of the clock and the engine, we have V̂_EC^I(t) = 1/W∑_i V̂^I_E(s_i) ⊗∫_𝒟_i dx  x-tx-t , with V̂^I_E(s_i)=e^i Ĥ_E tV̂_E(s_i) e^-i Ĥ_E t (we omit the time dependence) and the total propagator is given by 𝒰̂_t,0 = e^-i (Ĥ_E + Ĥ_C) t 𝒯exp{-i ∫_0^t dτV̂^I_EC(τ) } , where 𝒯 is the time ordering operator. We now discretise the time axis into shorter intervals t = n Δ t with n∈ℕ and the time step Δ t≪ W. Thus the propagator in (<ref>) is decomposed as 𝒰̂_t,0 = ∏_n 𝒰̂_n = ∏_n e^-i (Ĥ_E + Ĥ_C) Δ t 𝒯exp{-i ∫_t_n-1^t_n dτV̂^I_EC(τ) } . In the limit of Δ t → 0 we can approximate the propagator (<ref>) through the first order of the Magnus expansion <cit.>: ℋ_n^(0) = 1/Δ t1/W∑_i V̂_e^I(s_i) ⊗∫_𝒟_i dx  ∫_t_n-1^t_n dτx-τx-τ = 1/W∑_i V̂^I_e(s_i) ⊗∫_t_i-W/2-t_n^t_i+W/2-t_n dx  δn̂_x = 1/W∑_i V̂^I_e(s_i) ⊗N̂_W(i+1/2) -n , where δn̂_α:=1/Δ t∫_t_n-1^t_n dτα-τα-τ and N̂_W(i+1/2) -n := ∫_t_i-W/2-t_n^t_i+W/2-t_n dx  δn̂_x. Thus we obtain the discrete-time propagator 𝒰̂_n ≃ e^-i (Ĥ_e + Ĥ_c) Δ t exp{-i Δ t /W∑_i V̂^I_e(s_i) ⊗N̂_W(i+1/2) -n} . Note that in our approximation we used the operator in Eq. (<ref>) instead of the field operator ψ̂^†_c (s) ψ̂_c (s) to avoid biasing by the shape of the pulse. None of the previous assumptions prevent pulse broadening, as it depends on the clock-engine interaction Hamiltonian. Note that unlike the ideal scenario proposed in <cit.>, we cannot decompose the propagator in Eq.(<ref>) into two separate operators for the clock and the engine. Consequently, the engine and the clock exhibit correlations during their evolution. This back-reaction affects the clock's state and provides the physical basis for degradation. § CLOCK DEGRADATION AND DISPERSION LAW The free evolution of the clock ladder operators reads e^i Ĥ_C tâ_x e^-i Ĥ_C t = â_x- t , i.e. the free Hamiltonian of the clock only translates the state of the field. It becomes evident that when the clock evolves exclusively under this Hamiltonian with the assumption of a linear dispersion, it might behave as an ideal clock in the sense of <cit.>. Consequently, we can explore the potential for degradation by examining the emergence of nonlinearity in the dispersion law. Starting from (<ref>) we have, in Fourier space Ĥ_C + V̂_EC = ∫ dk v_g k 1_E â^†_kâ_k + ∫ dk dk' ℱ[V̂_E](k-k') ⊗â^†_kâ_k' , where ℱ[V̂_E](k) = ∫ dx V̂_E(x) e^- i k x denotes the Fourier transform of the engine Hamiltonian. Even in the case in which ℱ[V̂_E]_E ∝ f(k,t) δ(k-k'), the interaction term still introduces a complex and non-trivial deviation from linearity into the dispersion relation. One of the consequences of the non-linearity of the dispersion relation is wave-packet broadening <cit.> [We are not considering Airy wavepackets <cit.>, which are well-known for being the only nonspreading-wavepacket solutions to Schrödinger's equation. Nonetheless, it's important to note that Airy wavepackets exhibit acceleration, making them unsuitable as candidates for position-based timekeeping. ]. When the wave-packet width becomes comparable to W, from the point of view of the engine, we are not able to select which of the transformations is the one we must implement to reproduce the target dynamics: in other words, we do not know what time it is. We interpret this phenomenon as a manifestation of clock degradation arising from the interaction with the engine. This interaction leads to the emergence of an effective mass ∝ (∂^2_k ω_k)^-1, even when the free clock field is originally massless. 
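The broadening mechanism can be visualised with a minimal spectral-propagation sketch: a Gaussian single-particle amplitude is evolved as ψ(x,t) = ℱ^{-1}[ e^{-iω(k)t} ℱψ(x,0) ], once with the bare linear dispersion ω(k) = c k and once with an additional quadratic term k^2/2M mimicking the effective mass generated by the coupling. All numerical values (box size, packet width, M) are illustrative choices rather than model parameters.

```python
# Spectral propagation of a Gaussian packet under a linear vs. a quadratic dispersion law.
import numpy as np

N, L, c, M = 4096, 400.0, 1.0, 2.0
x  = (np.arange(N) - N // 2) * (L / N)
dx = L / N
k  = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)

psi0 = np.exp(-x**2 / 4.0)                        # initial width ~ 1
psi0 = psi0 / np.sqrt(np.sum(np.abs(psi0)**2) * dx)
ft0  = np.fft.fft(psi0)

def width(psi):
    p = np.abs(psi)**2 * dx
    m = np.sum(x * p)
    return np.sqrt(np.sum((x - m)**2 * p))

for t in (0.0, 20.0, 40.0, 80.0):
    lin  = np.fft.ifft(np.exp(-1j * c * k * t) * ft0)
    quad = np.fft.ifft(np.exp(-1j * (c * k + k**2 / (2.0 * M)) * t) * ft0)
    print(t, round(width(lin), 3), round(width(quad), 3))
# The massless clock keeps its width (it is only translated), while the effectively
# massive one spreads, eventually becoming broader than any fixed time window W.
```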
On the other hand, the coupling (<ref>) is a translation of (<ref>) for quantum fields, which in turn finds its origins in the broader concept of the ancilla-clock system initially introduced in <cit.>. Therefore, for clocks falling within this broad class, our findings provide direct evidence of the intrinsic connection between degradation and the effective mass of the system operating as a clock. Directly solving the clock's dynamics under (<ref>) in order to quantify the effects of degradation is unpractical. In the next section we take advantage of our model to introduce a time operator formalism that will enable us to achieve a comprehensive characterization of clock degradation. § TIME OPERATORS Inspired by the functional form of the Hamiltonian in Eq.(<ref>), we define a continuous clock time operator T̂ = 1/c∫ dx x â_x^†â_x , where c is the speed of light in the vacuum. It is interesting to note that given any continuous dispersion relation ω(k) for the clock system, we can find an expression for the corresponding (<ref>), as shown by the following Given the time operator defined above, and given a continuous dispersion relation ω(k), the commutator with Ĥ_C reads [T̂,Ĥ_C] = i Λ̂ where Λ̂=1/c∫ dk ω'(k) â_k^†â_k , with ω'(k) = dω(k)/d k. See Appendix <ref>. The theorem above implies the Heisenberg equation of motion Ṫ̂̇_H(t) = i [T̂_H(t),Ĥ_C] = ∫ dk ω'(k)â_k^†â_k = Λ̂ , which is generally far off from the ideal case of (<ref>), and the clock states will in general spread, a phenomenon that causes superpositions of different time states and reduces the resourcefulness of the clock system. Having introduced a time operator, we are in the position of defining an associated quantifier of degradation by means of the variance of the operator T̂: Given a time operator T̂ and an initial clock state |ψ_C⟩, the degradation D(t) of the clock is defined as D(t) = √(Var(T̂)) = (⟨T̂^2⟩ - ⟨T̂⟩ ^2)^1/2 . Given a dispersion relation and its corresponding modified Heisenberg algebra [T̂,Ĥ_C]=iΛ̂, we find that the time dependency of D(t) can be extracted: it is linear for any (continuous) choice of dispersion relation ω(k), and proportional to the variance of the operator Λ̂: Given a dispersion relation ω(k), and its corresponding modified commutator [T̂,Ĥ_C]=iΛ̂, the clock degradation obeys D(t) = √(Var (Λ̂)) t See Appendix <ref>. The theorem above suggests that it might always be possible to completely eliminate the problem of degradation at all future times by preparing initial clock states such that Var(Λ̂)=0, i.e. eigenstates of Λ̂. Interestingly enough, under fairly general assumptions this possibility is ruled out, unless the dispersion relation is linear, as shown by the following Given a single-particle clock state |ψ_C⟩, whose support in k-space is defined by a compact interval Ω∈ℝ with non-zero length, and given a continuous and injective dispersion relation ω(k):Ω→ℝ, one has D(t)=0 ∀ t if and only if ω(k) is linear in the whole support Ω. See Appendix <ref>. As a corollary to the theorem above, we can consider the limit case in which our initial clock state is extremely well localised in a spatial window of width W. Since good clock states must be sufficiently localised in position, they must necessarily be sufficiently delocalised in momentum and therefore the support Ω in the theorem above becomes the whole real line as W→ 0, forcing the dispersion relation to be linear everywhere. We can now investigate what happens to (<ref>) in the case of a linear dispersion relation ω(k)=ck. 
Following the results above one has Λ̂=∫ dk â_k^†â_k=N̂_C , where N̂_C counts the excitations of the field. Remarkably, this is the unique case in which we can construct resourceful clock pulses that have Var(Λ̂)=0 but are not eigenstates of the clock Hamiltonian (rendering their resourcefulness as clock states void, due to their stationarity). The initial state of the clock (<ref>) has a fixed number of excitations N, and furthermore such number of excitations is exactly conserved during the dynamics due to the fact that [N̂_C,Ĥ]=0. Thus, within our assumptions, when starting from a clock state with well-defined particle number N̂_C|ψ_C⟩=N|ψ_C⟩, we can always define a rescaled time operator by introducing the projector ℙ_N onto the N-particles sector τ̂ = 1/Nℙ̂_NT̂ℙ̂_N such that τ̇̂̇ = 1 , that corresponds to the original Pauli relation[The rescaling procedure outlined here has a classical counterpart. The angular velocity of the arm is the same in all the classical watches and is achieved by rescaling the tangential speed of the arm's tip with the length of the arm.]. This definition allows us to conclude that, as long as the joint clock-engine dynamics does not couple sectors with different number of particles, the Pauli relation can be obeyed exactly if the effective dispersion relation of the clock after being coupled to the engine can be maintained linear. In other words, as expected, we found that any wave-packet with a fixed total number of excitations traveling through a medium with linear dispersion law (i.e. any massless wavepacket) behaves as an ideal clock, in the sense of <cit.>. However, as pointed out in the previous section, this is fundamentally impossible, due to the fact that the interaction with the engine will inevitably correct the bare dispersion relation of the clock, generating a finite effective mass and thus introducing unavoidable degradation of any clock state that is sufficiently localised in position. This trade-off between degradation and resourcefulness of the clock states is a consequence of the Heisenberg uncertainty relation for the clock's position and momentum, and can then be translated into a lower-bound D(t) ≥| ⟨Λ̂⟩ |/2 ΔĤ_C, where ΔĤ_C is the variance of Ĥ_C, which follows directly from the general Robertson-Schrödinger uncertainty relation associated to the pair of noncommuting observables T̂ and Ĥ_C <cit.>. Since Λ̂ is a constant of motion either in the presence of inherent nonlinearity of the clock's dispersion relation or of an engine, we can explicitly compute |Λ|. Indeed only the initial state of the clock matters and we have control on that. Let the initial state of the clock, in the frequancy domain, be |ψ⟩_0 = ∫ dk ξ̃(k) â^†_k |0⟩ , where the ξ̃(k) is the spectral density of the pulse and we are considering a single-particle pulse. Then we can calculate the expectation value of Λ at any time as Λ̂ = 1/c∫ dk dk' dk”ξ̃(k) ξ̃(k”) ω'(k') â_kâ^†_k'â_k'â^†_k”_0 = 1/c∫ dk dk' dk”ξ̃(k) ξ̃(k”) ω'(k') δ_k k'δ_k' k” = 1/c∫ dk|ξ̃(k)|^2 ω'(k) Using the chain rule we find Λ̂ = 1/c∫ dk|ξ̃(k)|^2 ω'(k) = - 1/c∫ dk d/dk(|ξ̃(k)|^2) ω(k) where we exploited the fact that the initial state of the clock is defined with a finite frequency bandwidth Ω (ξ̃ decays exponentially elsewhere). This also means that we can rescale k → k - k_0, where k_0 is the central momentum of the pulse, and plug into the previous expression the expansion of ω up to the second order in k. 
Λ̂ = - 1/c∫ dk d/dk(|ξ̃(k)|^2) (c k + k^2/2 M) = - N - ∫ dk |ξ̃(k)|^2 k/Mc = - N - k/Mc where N is the number of excitations of the pulse. We put N=1 and plug this result into the uncertainty relation, thus ΔT̂ΔĤ_C ≥1/2|Λ̂| = 1/2+ k /M c , which introduces a positive finite correction to the ideal minimum uncertainty. This aligns with a scenario where the clock's performance deviates from ideal behavior. § CONCLUSIONS Our study digs into the realm of driven systems, and has revealed the fundamental challenge of achieving the precise time keeping condition in realistic scenarios, which has been shown possible except in idealized and far-from-reality situations. Building upon one such ideal scenario, our study relaxes primary approximations, striving to create a more realistic and pragmatic model of control systems. The main achievement of our research lies in the development of a comprehensive quantum description of engine-clock dynamics within a truly realistic quantum framework. While a connection between the presence of mass and the possibility of keeping time has been explored also experimentally <cit.>, we focused on the characterization of the clock's degradation phenomenon. Our formalism has proven effective in capturing its complexity. Notably, it enables us to establish a clear connection between the degradation of a quantum clock and its mass, even at low energies. It's worth noting that we make an implicit but reasonable assumption that the markers on the timeline, allowing us to track the clock's motion, are uniformly spaced and considerably wider than the initial pulse width. Consequently, this degradation will inevitably lead to the clock's diminished performance over an extended period. As for the possibility of implementing a protocol involving adaptive adjustments to the time window widths, this falls outside the scope of our current work. It's important to notice that such a procedure would likely necessitate the involvement of an external agent, which contradicts our goal of achieving autonomous evolution for the machine. For the same reasons, we also exclude packets that exhibit acceleration, such as Airy's <cit.>. Finally we point out that we operated under the assumption of linear susceptibility within the propagation medium, a premise in models similar to the one outlined in <cit.>. However, it is important to note that nonlinear effects are well-documented for their capacity to induce phenomena that can counteract dispersion effects, and in some cases, even lead to their complete suppression. For instance, solitons are a prime example of such nonlinear effects. This perspective opens up intriguing avenues for the exploration of moving-particle clocks within the domain of nonlinear quantum mechanics. § ACKNOWLEDGEMENTS § PROOFS OF THE STATEMENTS IN SECTION VI For the sake of clarity, in this appendix we will focus exclusively on the dynamics of the clock setting aside the explicit goal of implementing a particular transformation. Thus we consider V(t) a continuous variable as well as in <cit.>, i.e. we work in the limit of W→ 0. Therefore the clock Hamiltonian keeps the form in (<ref>) while the total Hamiltonian is turned into Ĥ = Ĥ_E⊗1_C + 1_E⊗Ĥ_C + ∫ dx V̂_E(x) ⊗â_x^†â_x . When considering the time operator T̂ = 1/c∫ dx x â_x^†â_x , where c is a reference speed, we are interested in the commutator [T̂_H(t),Ĥ_C], where T̂_H(t) is the Heisenberg picture representation of T̂, which will give us the equation of motion we are looking for. 
In order to compute the commutator with Ĥ_C, we make use of the following T̂ = i/clim_ϵ→ 01/ϵ∫ dk( â_k^†â_k+ϵ - â^†_k â_k ) We start by expressing â_x,â_x^† as anti-transforms of â_x,â_x^†, and we make use of the fact that, in the sense of distributions ∫ dx x e^-ikx = 2π i δ'(k) where δ'(x) is such that ∫ dx δ'(x)f(x) = - f'(0). Putting this together we can write T̂ = 1/2π∫ dx ∫ dk ∫ dk' x e^i (k'-k) xk^†k' = 1/2π∫ dk ∫ dk' (∫ dx x e^i (k'-k) x) k^†k' =1/2π∫ dk ∫ dk' (-2π i δ'(k'-k) ) k^†k' = -i ∫ dk k^†∫ dk' δ'(k'-k) k' = i ∫ dk k^†[ ∂/∂ k'k']_k'=k . Finally, by using the definition of derivative ∂/∂ kâ_k = lim_ϵ→ 01/ϵ( â_kϵ - â_k ) , we obtain the result. Given the time operator defined above, and given a continuous dispersion relation ω(k), the commutator with Ĥ_C reads [T̂,Ĥ_C] = i Λ̂ where Λ̂=1/c∫ dk ω'(k) â_k^†â_k . The quantity we need to compute is [T̂,Ĥ_C] = i/clim_ϵ→ 01/ϵ∫ dk ∫ dk' ω_k'[ â_k^†â_k+ϵ - â^†_k â_k , â_k'^†â_k'] . By making use of the relations [k^† a_k+ϵ, a^†_k'k'] = k^†k'δ(k+ϵ-k') - k'^† a_k+ϵδ(k-k') [k^†k, a^†_k'k'] = ( k^†k' - k'^†k) δ(k-k') , we get [T̂,Ĥ_C] = i/clim_ϵ→ 01/ϵ(∫ dk ω_k+ϵk^†k+ϵ - ∫ dk ω_kkk+ϵ) = i/clim_ϵ→ 01/ϵ∫ dk (ω_k+ϵ - ω_k )kk+ϵ . Now, if ω(k) ≡ω_k is a continuous function of k we can write lim_ϵ→ 0(ω_k+ϵ - ω_k /ϵkk+ϵ) =lim_ϵ→ 0ω_k+ϵ - ω_k /ϵ×lim_ϵ→ 0kk+ϵ = ω'(k)kk , and therefore [T̂,Ĥ_C] = i/c∫ dk ω'(k)k^†k From the theorem above, we can easily drawn some initial conclusions, as exemplified in the following two corollaries: Given the time operator defined above, and given any continuous dispersion relation ω(k), the resulting modified commutator [T̂,Ĥ_C] = i Λ̂ is such that [Λ̂,Ĥ_C]=0, i.e. Λ̂ is a constant of motion. In particular, the Heisenberg picture representation of Λ̂ reads Λ̂_H(t) = e^iĤ_CtΛ̂ e^-iĤ_Ct = Λ . Given the time operator defined above, and given a linear dispersion relation ω(k)=ck, the commutator with Ĥ_C reads [T̂,Ĥ_C] = i N̂ where N̂=∫ dk â_k^†â_k is the number operator. As a consequence, note that the Heisenberg equations of motion for this time operator read Ṫ̂̇_H = i [Ĥ_C,T̂_H] = i[Ĥ_C,T̂] = Λ̂ . We are now in the position to define the concept of a clock's degradation, i.e. the spreading of a clock state under its own Hamiltonian dynamics. Given a time operator T̂ and an initial clock state |ψ_C⟩, the degradation D(t) of the clock is defined as D(t) = √(Var(T̂)) = (⟨T̂^2⟩ - ⟨T̂⟩ ^2)^1/2 . Given the modified Heisenberg algebra [T̂,Ĥ_C]=iΛ̂, we can characterize the resulting degradation of a clock state. We find that the time dependency of D_0(t) can be extracted: it is linear for any (continous) choice of dispersion relation ω(k), and proportional to the variance of the operator Λ̂. Given a dispersion relation ω(k), and its corresponding modified commutator [T̂,Ĥ_C]=i Λ̂, the clock degradation obeys D(t) = √(Var (Λ̂)) t First, we exploit the freedom of writing expectation values in the Schrödinger or Heisenberg picture as follows ⟨T̂^2⟩ = ⟨ψ_C(t)|T̂^2 |ψ_C(t)⟩ = ⟨ψ_C|T̂^2_H(t) |ψ_C⟩ . Then, we use the Heisenberg equation of motion Ṫ̂̇_H = Λ̂_H(t) = Λ̂ implying the solution T̂_H(t) = Λ̂ t, which we can plug in the expression for D(t) and get D_0(t) = √(⟨ψ_c|Λ̂^2 t^2|ψ_c⟩ - ⟨ψ_c|Λ̂ t |ψ_c⟩^2) . By extracting the time dependency we get the result. 
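The statement of the theorem can be checked numerically in the single-particle sector, where T̂ acts as multiplication by x/c and Ĥ_C as multiplication by ω(k) in momentum space, so that ⟨ψ|[T̂,Ĥ_C]|ψ⟩ should equal (i/c)⟨ω'(k)⟩. The sketch below performs this check on a discretised wave function with the illustrative choice ω(k) = c k + k^2/2M; the grid parameters and the packet are arbitrary.

```python
# Single-particle check of <psi| [T, H_C] |psi> = (i/c) * <omega'(k)>.
import numpy as np

N, L, c, M = 2048, 200.0, 1.0, 5.0
x  = (np.arange(N) - N // 2) * (L / N)
dx = L / N
k  = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)
omega  = c * k + k**2 / (2.0 * M)
domega = c + k / M

psi = np.exp(-(x - 3.0)**2 / 4.0) * np.exp(1j * 2.0 * x)   # localised packet, carrier k ~ 2
psi = psi / np.sqrt(np.sum(np.abs(psi)**2) * dx)

def H(f):                        # omega(k) applied spectrally
    return np.fft.ifft(omega * np.fft.fft(f))

def T(f):                        # multiplication by x / c
    return (x / c) * f

comm = np.sum(np.conj(psi) * (T(H(psi)) - H(T(psi)))) * dx
ft   = np.fft.fft(psi)
rhs  = 1j / c * np.sum(np.abs(ft)**2 * domega) / np.sum(np.abs(ft)**2)
print(comm, rhs)                 # the two complex numbers agree closely
```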
It is interesting to note that, under fairly general assumptions, linear dispersion relations are the only ones that guarantee the existence of non-degrading clock states, as shown by the following Given a single-particle clock state |ψ_C⟩, whose support in k-space is defined by a compact interval Ω∈ℝ with non-zero length, and given a continuous and injective dispersion relation ω(k):Ω→ℝ, one has D(t)=0 ∀ t if and only if ω(k) is linear in the whole support Ω Let us suppose that D(t)=0 identically. Then Var(Λ̂)=0 on |ψ_C⟩, which means that |ψ_C⟩ is an eigenstate of Λ̂. Furthermore, since ω(k) is an injective function, |ψ_C⟩ cannot be an eigenstate of Ĥ_C if Ω has non-vanishing length. However, since [Λ̂,Ĥ_C]=0, the only possibility is that Λ̂ is degenerate on Ω, i.e. the function ω'(k) is constant on Ω. Therefore ω(k) is linear on Ω. Conversely, let us suppose that ω(k) is linear. Then Λ̂ = N̂ and, since N̂|ψ_C⟩=|ψ_C⟩ we have Var(N̂)=0 and therefore D(t)=0. § ON THE FORM OF THE INTERACTION The choice of the window W and the interaction is crucial for the scenario we want to describe. We operate under the assumption that the interaction in (<ref>) does not allow energy exchange between the engine and the clock. Thus apart from translation of the clock states, the pulse may suffer degradation, i.e. pulse broadening in time. The most critical parameter for this timekeeping approach based on the position is then the width of the wave-packet. One possibility can be choosing W in order to include all the pulse envelope (W ≫Ω^-1 for the Gaussian pulse (<ref>), as depicted in <ref>) and V̂^I_e(s_i)= v̂_i , for i  even ; 1 , for i  odd . This choice guarantees that a specific Hamiltonian v_i acts on the engine at a specified time window, at the price of a phase shift due to the identity. The presence of intermittent dead windows naturally introduces the concept of a “period" in our system, similar to mechanical clocks: the interval between two successive ticks corresponds to a phase of the evolution during which the oscillating component disengages from the surrounding mechanism (<ref>). More precisely, for each Δ t, the potential v_i undergoes modulation proportional to the radiation flux over the ith time window. In this scenario the width of the pulse in terms of Δ t is crucial for the speed at which one can induce a certain V̂_e(t) on the engine. Let us consider W(i+1/2) - n' ∈ [2m W, (2m+1)W], with m ∈ℕ. In this case the pulse encompasses two time windows. According to (<ref>) and considering the action of N̂_α on the state of the clock, the generator reads 1/W∑_i V̂^I_e(s_i) ⊗N̂_W(i+1/2) -n' =α_n'V̂^I_e(s_2m) + β_n'V̂^I_e(s_2m+1) =α_n'v̂_2m + β_n'1 , where we define α_n' = 1/WN̂_W(2m+1/2) -n'_c^1/2 and β_n' = 1/WN̂_W(2m+1+1/2) -n'^1/2_c. Thus in this case 𝒰̂_n' = exp{-i (α_n'v̂_2m + β_n'1)Δ t} = exp{-i α_n'v̂_2mΔ t}exp{-iβ_n'1Δ t} = exp{-i α_n'v̂_2mΔ t} e^-iβ_n'Δ t , and the global phase factor can be neglected. At each n' the potential v̂_2m is weighted by the factor α_n' that ranges between 0 (the pulse is crossing another window) and a maximum that depends on the normalisation of the pulse (when the pulse is exactly centered on the 2mth time window). This scheme allows us to approximate the action of the target time-dependent generator V_e(t) on a mesh of time windows of size W. Nevertheless, a notable limitation is that each generator V_e(s_i) demands an effective time frame of 2W for execution. 
This, in conjunction with the modulation, imposes significant constraints on the accuracy of the achieved transformation. Notably, as W increases, our discrete series offers a closer approximation to the continuous generator. Such trade-offs, that arise from the need to bridge the gap between continuous and discrete representations, are a common challenge in the realm of approximating continuous-time systems. Up to this point, we have not directly accounted for any clock degradation process. In the upcoming section, we explore the impact of clock degradation on the wave-packet dynamics, examining how the Hamiltonian in (<ref>) affects also its shape. Understanding the underlying reasons for this degradation is crucial, given our reliance on the position of the particle in our time-keeping protocol.
http://arxiv.org/abs/2409.02739v1
20240904141654
Properties of Central Regions of the Dark Matter Halos in the Model with a Bump in the Power Spectrum of Density Perturbations
[ "Yu. N. Eroshenko", "V. N. Lukash", "E. V. Mikheeva", "S. V. Pilipenko", "M. V. Tkachev" ]
astro-ph.GA
[ "astro-ph.GA", "astro-ph.CO" ]
e-mail: [email protected] Institute for Nuclear Research, Russian Academy of Sciences, Moscow, 117312 Russia e-mail: [email protected] Astro Space Center, P. N. Lebedev Physical Institute, Russian Academy of Sciences, Moscow, 117997 Russia e-mail: [email protected] Astro Space Center, P. N. Lebedev Physical Institute, Russian Academy of Sciences, Moscow, 117997 Russia e-mail: [email protected] Astro Space Center, P. N. Lebedev Physical Institute, Russian Academy of Sciences, Moscow, 117997 Russia e-mail: [email protected] Astro Space Center, P. N. Lebedev Physical Institute, Russian Academy of Sciences, Moscow, 117997 Russia § ABSTRACT A surprisingly large number of galaxies with masses of ∼10^9-10^10M_⊙ at redshifts of z≥9 are discovered with the James Webb Space Telescope. A possible explanation for the increase in the mass function can be the presence of a local maximum (bump) in the power spectrum of density perturbations on the corresponding scale. In this paper, it is shown that simultaneously with the growth of the mass function, galaxies from the bump region must have a higher density (compactness) compared to cosmological models without a bump. These more compact galaxies have been partially included in larger galaxies and have been subjected to tidal gravitational disruption. They have been less destructed than “ordinary” galaxies of the same mass, and some of them could survive to z = 0 and persist on the periphery of some galaxies. The formation and evolution of compact halos in a cube with a volume of (47 Mpc)^3 with (1024)^3 dark matter particles in the redshift range from 120 to 0 have been numerically simulated and observational implications of the presence of such galaxies in the current Universe have been discussed. Properties of Central Regions of the Dark Matter Halos in the Model with a Bump in the Power Spectrum of Density Perturbations M. V. Tkachev September 9, 2024 ============================================================================================================================== § INTRODUCTION The James Webb Space Telescope (JWST) opened new opportunities to study the evolution of the Universe, allowing the observation of the first galaxies and quasars at the end of the so-called “dark ages” era lasting from recombination to reionization of hydrogen. At redshifts of z>9, a surprisingly large number of galaxies with masses of ∼10^9-10^10M_⊙ (see <cit.>), which is noticeably larger than those predicted by the standard cosmological ΛCDM model, were discovered with JWST <cit.>. The Standard Cosmological Model implies the power-law spectrum of primordial density perturbations, the slope and amplitude of which are determined from the anisotropy of the cosmic microwave background and from the abundance of galaxies (normalized to σ_8). In this case, the spectrum of density perturbations with a comoving wavenumber of k>1 Mpc^-1 is less defined on small scales <cit.>. The observed excess of galaxies can be in particular explained by a non-power-law form of the initial perturbation spectrum, for example, with an additional maximum or bump <cit.>. This bump can be due to the presence of a flattened section in the inflaton potential <cit.> or to other physical processes in the early Universe (see review <cit.>). The presence of the bump means that galaxies with masses corresponding to the position of the bump formed earlier than would have happened in the model without the bump. 
In turn, due to the earlier formation of galaxies, they are denser[In the spherical collapse model, the average density of forming objects exceeds the average density of the Universe at the time of their formation by a factor of κ=18π^2 (see, for example, <cit.>).] and more compact. Thus, in the presence of the bump in the spectrum of density perturbations, a separate class of galaxies, which we call compact galaxies (CGs) to distinguish them from ordinary galaxies with the same masses, should exist in the Universe. This research is aimed at studying the observational consequences of the presence of CGs in the Universe. Unlike previous studies <cit.> focused on large z values, attention here is focused on the properties of CGs at z=0. To solve the problem, we performed a series of analytical calculations and carried out numerical modeling of the formation and evolution of CGs in a cube with a volume of (47 Mpc)^3 (which is equivalent to (32 h^-1 Mpc)^3) with a particle number of (1024)^3 in the redshift range from 120 to 0. The theory does not predict specific bump parameters, and they can vary widely. In <cit.>, a number of such models were considered and it was shown that models, in which the bump has a characteristic wavenumber of k_0=4-20 Mpc^-1, demonstrate noticeable differences from ΛCDM at high redshifts in the mass function and spatial distribution of galaxies. In <cit.>, preference was given to a model called gauss_1 with a bump position at k_0=4.69 Mpc^-1 since it is better than others in explaining JWST observations of massive galaxies at z>10. In the presence of the bump, the hierarchical clustering of dark matter (DM) differs from that in the model without the bump. At each redshift, the number of massive galaxies in the model with the bump is larger. Therefore, there is a good chance of identifying a class of CGs genetically related to the bump in the observational data. For this reason and because the bump with such relatively small k_0 requires moderate numerical resolution, we chose this model as the main one to perform numerical calculations. An analytical estimate of the Press–Schechter mass function <cit.>, which is confirmed with good accuracy by direct numerical simulation, shows that the average distance between CGs for the gauss_1 model from <cit.> is ∼1 Mpc. The distance from the Earth to the nearest CG has the same order of magnitude. In this paper, we also examine the fate of CGs included in other galaxies. One of the promising directions in the search for CGs is the detection of signals from the annihilation of DM particles in CGs. Annihilation radiation from the Galactic Center and other galaxies is actively sought. Gamma-ray emission from the galaxy M31 in Fermi-LAT observations in the range of 0.3-100 GeV was identified in <cit.> with a confidence level of 4.7σ (see also <cit.>). This radiation can be produced by cosmic rays while the annihilation nature of the signal is also possible. It was shown in <cit.> that to detect gamma-ray emission from the galaxy M87 and nearby dwarf spheroids, approximately an order of magnitude is insufficient in the signal-to-noise ratio. Compact galaxies have densities on average 3.4 times higher than those of ordinary galaxies, so the annihilation signal from them that is proportional to the square of the density is ∼ 11 times greater. In this regard, here, we consider the prospect of observing CGs in the gamma range at favorable properties of DM particles. 
All analytical and numerical calculations were performed for a cosmological model with the parameters Ω_m=0.31, Ω_Λ=0.69, Ω_b=0.048, h=0.67, and n_s=0.96 (see <cit.>). § FORMATION OF A HALO IN A MODEL WITH A BUMP Following <cit.>, we consider the spectrum of density perturbations, which is the product of the standard spectrum of the ΛCDM model and an additional factor in the form of a Gaussian bump 1+A·exp(-(log(k)-log(k_0))^2/σ_k^2), where A=20, k_0=4.69 Mpc^-1, and σ_k=0.1 (gauss_1 model). The rms amplitude σ_0(R) of the relative perturbation of the density field δ≡δρ/ρ at z=0 (t=t_0) smoothed on scale R is expressed in terms of the power spectrum P(k) as σ_0^2(R) = 1/2π^2∫_0^∞ k^2 P(k) W^2(kR) dk , where W(x) is the smoothing “window”. If the relative density perturbation extrapolated to t_0 by the linear theory of perturbation growth is equal to δ_0, then the height of the peak is determined as ν=δ_0/σ_0. Figure <ref> illustrates the ratio of variances for spectra with the bump and for the standard ΛCDM model. For the gauss_1 model (black solid line), this ratio reaches 1.5. In the spherical collapse model, the condition for the formation of the halo (galaxy) from a peak with the height ν has the form νσ_0D(z)=δ_c, where δ_c=3(12π)^2/3/20≃1.686 while the growth factor of density perturbations is normalized so that D(0)=1. For specified M and ν, it is possible to find the redshift z at which the halo is formed. These z values are shown in Fig. <ref>. The most numerous objects are with ν∼1, but galaxies are apparently associated with perturbations with ν∼ 2 <cit.>. It can be seen that the average value of 1+z, at which galaxies are formed in the bump region, increases by ∼1.5 times compared to the cosmological model without the bump while its average density ρ̅_s=κρ_ critΩ_m(1+z)^3 will increase by a factor of 1.5^3=3.4, where κ=18π^2. Thus, a new class of compact galaxies will be formed in the bump area. The virial radius of CGs given by the formula R=(3M/4πρ̅_s)^1/3 is 1.5 times less than that of galaxies with the same mass from the region outside the bump; i.e., CGs are on average more compact. § COMPACT GALAXIES THAT ARE NOT INCLUDED IN OTHER GALAXIES To study the evolution of DM, two numerical simulations within the gauss_1 and standard ΛCDM models were performed using the N-body method in a cube with a volume of (47 Mpc)^3 with 1024^3 particles in each. The size of the cube and the number of particles were chosen as a compromise between high resolution (the Nyquist frequency must be significantly higher than bump scale k_0, and CGs must contain at least several hundred particles) and a large cube size, so that the fundamental mode of perturbations (with a wavelength equal to the side of the cube) did not reach the nonlinear mode at z=0. The initial conditions for the simulations were created at z=120 using the free code [https://github.com/ginnungagapgroup/ginnungagap], and the matter power spectrum was determined for each simulation separately. It was generated using the free code CLASS <cit.> for the standard ΛCDM model and using function (<ref>) for the model with the bump. To simulate the evolution of the density field, we used the code [http://wwwmpa.mpa-garching.mpg.de/ volker/gadget/] <cit.>, which is widely used to simulate the evolution of the Universe structure. Sixty two “snapshots” were stored for each simulation at redshift intervals from z = 25 to z = 0. The halo analysis was performed using the code [https://bitbucket.org/gfcstanford/rockstar] <cit.>. 
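The numbers quoted above can be reproduced with a few lines of arithmetic. The sketch below is our own script: the base-10 convention assumed for the bump width and the example formation redshift z_f = 10 are illustrative assumptions. It evaluates the bump factor at k_0, the mass resolution and Nyquist wavenumber of the (47 Mpc)^3, 1024^3-particle run, and the compactness scaling ρ̄_s ∝ (1+z)^3, R ∝ (1+z)^-1 that follows from the spherical collapse relations given above.

```python
# Sanity check of the quoted numbers (illustrative script; the base-10 form of the bump
# width and the example formation redshift z_f = 10 are assumptions, not statements of
# the paper).
import numpy as np

# gauss_1 bump factor
A, k0, sigk = 20.0, 4.69, 0.1
bump = lambda k: 1.0 + A * np.exp(-(np.log10(k) - np.log10(k0))**2 / sigk**2)
print("bump factor at k0:", bump(k0))                        # = 21

# mass resolution and Nyquist wavenumber of the (47 Mpc)^3, 1024^3 run
h, Om = 0.67, 0.31
rho_crit = 2.775e11 * h**2                                   # M_sun / Mpc^3 today
L, N1d = 47.0, 1024
m_p = Om * rho_crit * L**3 / N1d**3                          # ~ 3.7e6 M_sun per particle
print("particle mass [M_sun]:", m_p, "-> particles in a 1e9 M_sun halo:", 1e9 / m_p)
print("Nyquist wavenumber [1/Mpc]:", np.pi * N1d / L, ">> k0 =", k0)

# compactness: forming when (1+z) is 1.5x larger raises the mean density by 1.5^3
# and shrinks the virial radius R = (3M / (4 pi rho_s))^(1/3) by 1.5 at fixed mass
kappa = 18.0 * np.pi**2
rho_s = lambda z: kappa * Om * rho_crit * (1.0 + z)**3       # M_sun / Mpc^3
R = lambda M, z: (3.0 * M / (4.0 * np.pi * rho_s(z)))**(1.0 / 3.0)
zf = 10.0                                                    # example formation redshift
print("R(1e9 M_sun) [kpc]:", 1e3 * R(1e9, zf), "vs", 1e3 * R(1e9, 1.5 * (1 + zf) - 1))
print("density ratio:", rho_s(1.5 * (1 + zf) - 1) / rho_s(zf))   # = 3.375 ~ 3.4
```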
The resulting map of the DM density projection is shown in Fig. <ref>. The mass function of halos that were not included in the more massive halos at redshifts of z=0 and z=10 that are found in the simulation in models with and without the bump is shown in Fig. <ref>. As already noted in <cit.>, the presence of the bump in the perturbation spectrum leads to an increase in the number density of galaxies by z=10. The number density of independent halos (not included in more massive objects) at the time t is theoretically determined by the Press–Schechter formula <cit.> dn/dM=√(2/π) ρ̅(z)/Mδ_c/D(z)σ_0^2|dσ_0/dM| exp[-δ_c^2/2D(z)^2σ_0^2], where ρ̅(z) is the average density of DM. Comparison of Eq. (<ref>) with the results of numerical simulation shown in Fig. <ref> demonstrates excellent agreement at z=0 and 10. Assuming that Eq. (<ref>) adequately describes modern CGs in the model with the bump, we found that the number density of CGs in one logarithmic mass interval of Δln M∼1 for a halo with masses of M∼10^9M_⊙ is ∼0.46 Mpc^-3. Then, the average distance between neighboring CGs can be estimated as follows: l̅= (n̅)^-1/3≃ 1.3. This distance is the order of the distance from the Earth to the nearest CG. Thus, on scale of the Local Group of galaxies, one can expect ∼1 CG not included in the more massive virialized halos, although the Local Group itself is gravitationally bound. Massive galaxies similar to the Milky Way are usually formed in elements of a large-scale structure with a relatively high density, i.e., in “walls” and “filaments.” At these places, the number density of low-mass halos can significantly differ from the average over the Universe (see, for example, <cit.>). To test the “substrate” effect, 19 halos in the mass range of (0.9-1.1)×10^12M_⊙ and all the less massive halos in spheres with a radius of 1 Mpc around the centers of these massive halos were separated in the simulation. Their mass functions are also shown in Fig. <ref>. The number of CGs within 1 Mpc from the center of our Galaxy obtained in numerical simulation is an order of magnitude larger. Some of these CGs probably were included in the halos of larger galaxies and were destroyed by tidal gravitational forces. Note that the problem of halo overproduction in numerical modeling was known for a long time in the standard ΛCDM model as well. § ANNIHILATION OF DARK MATTER IN COMPACT DARK MATTER HALOS The physical nature of DM in the Universe remains unknown. In one of the existing models, dark matter particles are weakly interacting massive particles, for example, the lightest supersymmetric particles called neutralinos. Assuming that DM particles can annihilate and that their mass m corresponds to annihilation gamma-ray photons, we consider whether the presence of the bump in the spectrum of density perturbations promotes the observation of annihilation gamma rays from the nearest CGs. Atmospheric Cherenkov detectors are promising tools for detecting annihilation radiation from individual objects <cit.>. They have a high spatial resolution while the observation of relatively high energies makes it possible to exclude confidently the gamma background, which decreases rapidly with increasing energy. Next, we follow the calculation method proposed in <cit.> (see Appendix). We calculated the number of photons N_γ above the detection threshold in three cases E^ th=50, 100, and 250 GeV for each of four masses m=0.1, 1, 10, and 100 TeV. 
In all cases, we used the thermal cross section ⟨σ v⟩=3×10^-26 cm^3 s^-1 as the annihilation cross section. The three largest S/N ratios among the 10 specified parameter sets are shown in Fig. <ref> for A_effT=0.01 km^2 yr^-1. Note that in <cit.>, data on the parameters of the gas in hydrostatic equilibrium are used for the density profile of the DM halo in the galaxy M87, while the King profile with a core is used for the density profile of dwarf spheroidals. As a result, the dependence of S/N on θ has a maximum at θ>0. In our calculations, the Navarro–Frenk–White profile, for which the maximum annihilation flux comes from the center of the halo, was used for the CG density profile. As a result, the ratio S/N decreases monotonically with increasing θ. For sufficiently reliable detection, S/N≥3 is required. However, as seen in Fig. <ref>, in the typical cases considered this ratio is an order of magnitude smaller even for the central part of CGs, which could be resolved using modern gamma-ray telescopes. Thus, we conclude that despite the gamma-ray flux being increased by a factor of 11.4, CGs will not stand out in the sky as bright gamma-ray sources because of their large distance from the Earth. It should be noted that the presence of the bump in the power spectrum at k_0=4.69 Mpc^-1 leads to an increase in σ_0(M) in a wide range of small masses, see Fig. <ref>. Although the increase in σ_0(M) at M≤10^9M_⊙ is smaller, it still contributes to the formation of more compact dwarf galaxies than those in the model without the bump. This implies the possible existence of a population of dark dwarf galaxies with an increased density, which can also be sources of an annihilation signal in the gamma-ray range. § FATE OF COMPACT DARK MATTER HALOS INCLUDED IN OTHER GALAXIES The fate of CGs included in other structures, for example, our Galaxy, is interesting in comparison with the fate of ordinary galaxies (substructures) with the same masses. Inside larger objects, CGs experience dynamical friction and gradually approach the center of the larger object. Since CGs are on average denser than regular galaxies, they are less susceptible to tidal disruption. The destruction of objects in a hierarchical structure at the stage of its formation was considered in <cit.>. Using the estimates obtained in <cit.>, it can be shown that ≃98% of CGs with ν∼1 are destroyed in hierarchical structures, while 60% of CGs with ν∼2 survive. Consequently, such CGs from the bump region will become a part of larger objects and will remain there as individual concentrations of DM until they approach the center under the effect of dynamical friction. We consider the tidal radius and dynamical friction for CGs inside the Galaxy. We assume that the density profile ρ_H(r) of the Galaxy halo corresponds to the Navarro–Frenk–White profile <cit.> ρ_NFW(r)=ρ_c/[(r/r_c)(1+r/r_c)^2] , where the parameters ρ_c=1.44·10^-23 g cm^-3 and r_c=5.95 kpc are obtained from the conditions that the mass of the halo within a radius of 100 kpc is 10^12M_⊙ and the velocity dispersion at a distance of 8.5 kpc from the Galactic center is 200 km s^-1. We denote the mass of the Galaxy halo inside a sphere of radius r as M_G(r). The virial radius R of the CG is determined by Eq. (<ref>), and its average density ρ̅_s is given by Eq. (<ref>). 
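For orientation, the first of these two conditions can be checked directly from the enclosed NFW mass with a few lines (a sketch; the velocity-dispersion condition additionally requires a Jeans-equation analysis, which is not reproduced here).

import numpy as np

Msun_g, kpc_cm = 1.989e33, 3.086e21
rho_c = 1.44e-23*kpc_cm**3/Msun_g          # quoted central density -> Msun/kpc^3
r_c = 5.95                                 # kpc

def M_G(r):
    # mass of the NFW Galactic halo enclosed within radius r [kpc]
    x = r/r_c
    return 4.0*np.pi*rho_c*r_c**3*(np.log(1.0 + x) - x/(1.0 + x))

print("M_G(100 kpc) = %.2e Msun" % M_G(100.0))   # close to the required 10^12 Msun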
The tidal radius (the distance from the center of the CG down to which its halo is destroyed) at the distance r from the center of the Galaxy, with allowance for the action of centrifugal forces, is given by the expression (see, for example, <cit.>) r_t=r[M(r_t)/(M_G(r)(3-dln M_G(r)/dln r))]^1/3. To determine the radius r_d at which the CG destruction begins, we set r_t=R and determine the average density from Eq. (<ref>): ρ̅_s=3ρ_c f(r_d/r_c), where f(x)=3ln(1+x)/x^3-(4x+3)/(x^2(1+x)^2). The distance r_d determined from Eq. (<ref>), at which the destruction of the CG outer layers begins, is shown in Fig. <ref> versus the CG density. This parameter will be called the “tidal distance,” in contrast to the tidal radius, which determines the radius of an object located at the given distance r. Comparison with the case of ordinary galaxies (when A=0) shows that CGs are significantly more stable with respect to tidal disruption inside a large halo. Calculation of the shrinking of the circular orbit of a CG in the galactic halo under the effect of dynamical friction shows that the limiting mass, above which a CG from the periphery of the Galaxy manages to descend to the center of the halo within the Hubble time, is between 10^9M_⊙ and 10^10M_⊙. A similar result for objects with a mass of >0.01M_G was obtained in <cit.>. Thus, CGs that were a part of large galaxies and have wide orbits can currently remain on their periphery without significant destruction. A more accurate approach could include the loss of the CG mass as it approaches the center of the Galaxy due to tidal stripping of its outer DM layers. The DM lost by the CG will contribute to the final density profile of the Galaxy, changing it slightly. § CONCLUSIONS Several models were proposed to explain the excess of massive galaxies at high redshifts that is observed with the James Webb Space Telescope. Among them are the astrophysical explanation through non-standard star formation <cit.>, the cosmological effect of changes in the expansion rate of the Universe <cit.>, and primordial black holes <cit.>. The excess of galaxies was also explained by the presence of an additional maximum (bump) on the scale of galaxies in the perturbation spectrum <cit.>. In this study, we have considered a number of observational consequences of the presence of the bump in the spectrum of cosmological density perturbations. First of all, the presence of the bump leads to an increased number density of galaxies at a redshift of z∼10, as already shown in <cit.>. In this study, a more detailed calculation has confirmed this conclusion. It has also been shown that the dark halo mass function arising in the model with the bump is well described by the Press–Schechter formula. This made it possible to estimate the average distance between neighboring CGs at present as ∼1 Mpc. Although observations do not show the presence of CGs within the virial radius of our Galaxy at the present time, such CGs could have existed in the Galaxy earlier and been destroyed by tidal forces. Perhaps CGs can be identified through a detailed study of substructures in other large galaxies. Among the Local Group galaxies, a candidate for the role of a CG is the compact elliptical galaxy M32, a satellite of the Andromeda galaxy (M31), which has a mass of ∼10^9M_⊙ and a radius of R=2.5 kpc, although M32 may also be the central part of a larger and less dense (on average) galaxy stripped by the tidal gravitational forces of the host galaxy M31. 
The work was supported by the Russian Science Foundation (project no. 23-22-00259). § APPENDIX CALCULATION OF THE ANNIHILATION SIGNAL We describe the method for calculating the annihilation signal from a CG. The annihilation radiation flux is expressed in terms of the line-of-sight (l.o.s.) integral <cit.>: Φ_γ=(1/4π)(⟨σ v⟩ N_γ/m^2)∫_l.o.s.ρ^2 ds, where N_γ is the number of photons above the energy detection threshold E^th of the detector produced in one annihilation event. A detector with an effective area A_eff will record, over time T and from a solid angle with opening angle θ (centered on the observed CG), the number of photons given by the formula N_s=A_effT∫_0^θΦ_γ 2πα dα, where α is the angular distance from the CG center. By comparing the number of photons N_s with the number of photons N_bg from the background gamma radiation, we find the signal-to-noise ratio S/N=N_s/√(N_bg). As the background, we use the sum of signals produced in the upper layers of the atmosphere by the electronic and hadronic components of cosmic rays <cit.>. We assume that the density profile of the CG has the form of Eq. (<ref>) and calculate the signal from DM annihilation in the CG halo. As an example, we assume that the CG is located at a distance of 1 Mpc, the parameter r_c=0.1R, the CG mass within the virial radius is M=10^10M_⊙ with its average density given by Eq. (<ref>), and the CG originated from a density peak with ν=2. In <cit.>, scanning was carried out across the parameter space of supersymmetric models. We consider one simple but quite plausible option (although other models cannot be excluded) in which annihilation occurs in the hadronic channel, i.e., through b quarks. Then, gamma-ray photons are produced in the decays of π^0 mesons, and the spectra of gamma radiation at various m values have the form shown, for example, in Fig. 3 in <cit.>. The result of our calculations is shown in Fig. <ref>. b1 M. Castellano, A. Fontana, T. Treu, et al., Astrophys. J. 938, L15 (2022). b2 R. P. Naidu, P. A. Oesch, P. van Dokkum, et al., Astrophys. J. 940, L14 (2022). b3 S. L. Finkelstein, M. B. Bagley, P. Arrabal Haro, et al., Astrophys. J. 940, L55 (2022). b4 C. T. Donnan, D. J. McLeod, J. S. Dunlop, R. J. McLure, A. C. Carnall, R. Begley, F. Cullen, M. L. Hamadouche, R. A. A. Bowler, D. Magee, H. J. McCracken, B. Milvang-Jensen, A. Moneti, and T. Targett, Mon. Not. R. Astron. Soc. 518, 6011 (2023). b5 I. Labbé, P. van Dokkum, E. Nelson, R. Bezanson, K. A. Suess, J. Leja, G. Brammer, K. Whitaker, E. Mathews, M. Stefanon, and B. Wang, Nature (London, U.K.) 616, 266 (2023). b6 M. Boylan-Kolchin, Nat. Astron. 7, 731 (2023). b7 S. Chabanier, M. Millea, and N. Palanque-Delabrouille, Mon. Not. R. Astron. Soc. 489, 2247 (2019). b8 H. Padmanabhan and A. Loeb, Astrophys. J. Lett. 953, L4 (2023). b9 M. V. Tkachev, S. V. Pilipenko, E. V. Mikheeva, and V. N. Lukash, Mon. Not. R. Astron. Soc. 527, 1381 (2024). b10 A. A. Starobinskii, JETP Lett. 55, 489 (1992). b11 P. Ivanov, P. Naselsky, and I. Novikov, Phys. Rev. D 50, 7173 (1994). b12 K. Inomata, M. Braglia, and X. Chen, J. Cosmol. Astropart. Phys. 4, 007 (2023). b13 C. Lacey and S. Cole, Mon. Not. R. Astron. Soc. 262, 627 (1993). b14 S. V. Pilipenko, S. A. Drozdov, M. V. Tkachev, and A. G. Doroshkevich, arXiv: 2404.17803 [astro-ph.CO]. b15 W. H. Press and P. Schechter, Astrophys. J. 187, 425 (1974). b16 M. S. Pshirkov, V. V. Vasiliev, and K. A. Postnov, arXiv: 1501.03460 [astro-ph.GA]. b17 C. M. Karwin, S. Murgia, I. V. Moskalenko, S. P. Fillingham, A.-K. 
Burns, and M. Fieg, Phys. Rev. D 103, 023027 (2021). b18 A. E. Egorov, Phys. Rev. D 108, 043028 (2023). b19 E. A. Baltz, C. Briot, P. Salati, and J. Silk, Phys. Rev. D 61, 023514 (1999). b20 P. A. R. Ade, N. Aghanim, C. Armitage-Caplan, et al. (Planck Collab.), Astron. Astrophys. 571, A16 (2014). b21 J. M. Bardeen, J. R. Bond, N. Kaiser, and A. S. Szalay, Astrophys. J. 304, 15 (1986). b22 D. Blas, J. Lesgourgues, and T. Tram, J. Cosmol. Astropart. Phys., No. 7, 034 (2011). b23 V. Springel, Mon. Not. R. Astron. Soc. 364, 1105 (2005). b24 P. S. Behroozi, R. H. Wechsler, and H.-Y. Wu, Astrophys. J. 762, 109 (2013). b25 R. K. Sheth and G. Tormen, Mon. Not. R. Astron. Soc. 308, 119 (1999). b26 N. A. Arkhipova, B. V. Komberg, V. N. Lukash, and E. V. Mikheeva, Astron. Rep. 51, 787 (2007). b27 M. Hütten and D. Kerszberg, Galaxies 10, 92 (2022). b28 V. Berezinsky, V. Dokuchaev, and Y. Eroshenko, Phys. Rev. D 77, 083519 (2008). b29 J. F. Navarro, C. S. Frenk, and S. D. M. White, Astrophys. J. 462, 563 (1996). b30 F. C. van den Bosch, G. Ogiya, O. Hahn, and A. Burkert, Mon. Not. R. Astron. Soc. 474, 3043 (2018). b31 G. Taffoni, L. Mayer, M. Colpi, and F. Governato, ASP Conf. Proc. 253, 273 (2002). b32 J. Mirocha and S. R. Furlanetto, Mon. Not. R. Astron. Soc. 519, 843 (2023). b33 N. Menci, M. Castellano, P. Santini, E. Merlin, A. Fontana, and F. Shankar, Astrophys. J. Lett. 938, L5 (2022). b34 B. Liu and V. Bromm, Astrophys. J. Lett. 937, L30 (2022). b35 S.-Y. Guo, M. Khlopov, X. Liu, L. Wu, Y. Wu, and B. Zhu, arXiv: 2306.17022 [hep-ph].
http://arxiv.org/abs/2409.02607v2
20240904104510
Fully-Polarized Topological Isostatic Metamaterials in Three Dimensions
[ "Zheng Tang", "Fangyuan Ma", "Feng Li", "Yugui Yao", "Di Zhou" ]
cond-mat.soft
[ "cond-mat.soft", "cond-mat.mtrl-sci" ]
These authors contribute equally to this work. Key Lab of Advanced Optoelectronic Quantum Architecture and Measurement (MOE), School of Physics, Beijing Institute of Technology, Beijing, 100081, China These authors contribute equally to this work. Key Lab of Advanced Optoelectronic Quantum Architecture and Measurement (MOE), School of Physics, Beijing Institute of Technology, Beijing, 100081, China [email protected] Key Lab of Advanced Optoelectronic Quantum Architecture and Measurement (MOE), School of Physics, Beijing Institute of Technology, Beijing, 100081, China Key Lab of Advanced Optoelectronic Quantum Architecture and Measurement (MOE), School of Physics, Beijing Institute of Technology, Beijing, 100081, China [email protected] Key Lab of Advanced Optoelectronic Quantum Architecture and Measurement (MOE), School of Physics, Beijing Institute of Technology, Beijing, 100081, China § ABSTRACT Topological surface states are unique to topological materials and are immune to disturbances. In isostatic lattices, mechanical topological floppy modes exhibit softness depending on the polarization relative to the terminating surface. However, in three dimensions, the polarization of topological floppy modes is disrupted by the ubiquitous mechanical Weyl lines. Here, we demonstrate, both theoretically and experimentally, the fully-polarized topological mechanical phases free of Weyl lines. Floppy modes emerge exclusively on a particular surface of the three-dimensional isostatic structure, leading to the strongly asymmetric stiffness between opposing boundaries. Additionally, uniform soft strains can reversibly shift the lattice configuration to Weyl phases, reducing the stiffness contrast to a trivially comparable level. Our work demonstrates the fully-polarized topological mechanical phases in three dimensions, and paves the way towards engineering soft and adaptive metamaterials. Fully-Polarized Topological Isostatic Metamaterials in Three Dimensions Di Zhou July 2024 ======================================================================= Introduction—Isostatic structures <cit.>, also known as Maxwell structures <cit.>, are mechanical frames that perfectly balance the degrees of freedom and constraints. These structures, ranging from molecular to architectural-scales <cit.>, are viewed as networks of nodes and links. Their significance lies in providing insights into stability and adaptability for innovative material and structure design, particularly in soft-matter systems <cit.>. Isostatic structures host zero-frequency edge modes that exhibit topological protection <cit.>, because the boundary mechanical softness and rigidity remain unchanged even when disturbances and damage occur <cit.>. In one- and two-dimensional isostatic lattices, mechanical floppy modes can be “topologically fully-polarized", as they emerge exclusively on a single boundary, while the opposing surface is completely devoid of floppy modes. This behavior results in a highly asymmetric contrast of boundary stiffness in a uniform structure. Fully-polarized isostatic lattices establish the connection between elasticity and topological electronic band theory <cit.>, laying the foundation for topological mechanics <cit.>. However, for two-dimensional isostatic lattices, this conceptual correspondence is not applicable to out-of-plane motions, which lack topological protection and mechanical polarization. 
In three-dimensional (3D) isostatic lattices, the fully-polarized topological mechanics is disrupted by the ubiquitous Weyl lines <cit.> that close the mechanical bandgap and reduce the contrast of boundary stiffness to a trivially comparable level. Recent studies have theoretically proposed 3D isostatic lattices that eliminate Weyl lines <cit.>, but these designs allow floppy modes to emerge on both opposing surfaces, resulting in a stiffness contrast that is still trivially comparable. Furthermore, in previous experiments <cit.>, the continuous mechanical junctions introduce finite bending stiffness that pushes the 3D-printed specimens beyond the isostatic point <cit.>, making topological numbers undefined. Thus, eliminating Weyl lines and fully polarizing the mechanical topology in 3D isostatic lattices remain challenging. In this work, we demonstrate, both theoretically and experimentally, the fully-polarized topological mechanical phase in 3D. Using the three-dimensional example known as the generalized pyrochlore lattice, we illustrate this topological mechanical phase and the resulting distinctive boundary elasticity. Our analytic design principle is based on the mechanical transfer matrix <cit.> that polarizes all floppy modes to concentrate on a single open boundary of the lattice, whereas the opposite surface is clear of floppy modes. Consequently, the lattice is topologically fully-polarized and exhibits highly contrasting boundary mechanics. Moreover, isostatic lattices admit uniform soft strains of the entire structure, known as Guest-Hutchinson modes <cit.>, that reversibly shear the lattice configuration and induce transitions among topologically polarized and mechanical Weyl phases. This uniform shearing in a mechanical lattice is a nonlinear mechanism that alters the geometric configuration of all unit cells, but without inducing elastic energy. By shearing the lattice into the mechanical Weyl phase, the contrast in local stiffness is reduced to a comparable level, because floppy modes arise on both parallel open surfaces of the lattice. This fully-polarized mechanical phase in 3D, together with the freely switchable topological transition, results in advancements not possible in 2D systems <cit.>, such as topological softness bound to 3D dislocations, static mechanical non-reciprocity in all spatial dimensions, and topologically protected all-terrain tires. 3D fully-polarized topological isostatic lattices—Our prototype is a polymer 3D mechanical structure that uses the pyrochlore lattice for its network connectivity. Fig. <ref> displays two geometries of pyrochlore lattices, namely the regular and generalized ones in a and b, respectively. Fig. <ref>a shows the unit cell of the regular pyrochlore lattice, with four corners at A^(0) = ℓ(1,1,0)/2, B^(0) = ℓ(0,1,1)/2, C^(0) = ℓ(1,0,1)/2, and D^(0) = ℓ(0,0,0)/2; primitive vectors a^(0)_1 = ℓ(1,1,0), a^(0)_2 = ℓ(0,1,1), and a^(0)_3 = ℓ(1,0,1); and a length scale of ℓ=24 mm. In Fig. <ref>b, the geometry of the generalized pyrochlore lattice deviates from the regular one, with vertex positions X = X^(0)+ΔX for X = A, B, C, D and primitive vectors a_i = a^(0)_i+Δa_i for i=1,2,3. These geometric parameters reside in a vast 14-dimensional space (demonstrated in SI <cit.>), which poses challenges for the search for fully-polarized topological mechanical phases. To address this, we employ the technique known as the 3D mechanical transfer matrix <cit.>. 
By decomposing the isostatic lattice into layers of lower dimensions, we ensure that the lattice geometry associated with the transfer matrix allows for consistent growth of mechanical floppy modes from the top to the bottom layer. Remarkably, this approach substantially reduces the parameter space from 14 to just 3 dimensions. Figure 1 illustrates a geometric example that facilitates the consistent growth of floppy modes from top to bottom. In this configuration, we have ΔA= 0.053(1, 0.3, 0)ℓ, ΔB =0.053(0.55, 0, 1)ℓ, ΔC = 0.053(-1,1,-1)ℓ, ΔD = 0.053(-1.8,-1,1.2)ℓ, and Δa_i=1,2,3=0. The unit cell of the generalized pyrochlore metamaterial consists of two polymer tetrahedra, each featuring a spherical hinge at every vertex, as depicted in Fig. <ref>c. In Fig. <ref>d, the A, B, C, D vertices of the white tetrahedra are connected to the A', B', C', D' tips of the green tetrahedra, enabling free rotations between neighboring bodies and eliminating bending stiffness. Within the unit cell, the site positions of the green tetrahedron <cit.> are given by A'=A-a_1+a_3, B'=B-a_2+a_3, C'=C, and D'=D+a_3. As a result, each spherical hinge provides three constraints, and each tetrahedron is connected to four hinges. A total of (3× 4)/2=6 constraints are imposed on a tetrahedron, balancing the six degrees of freedom of the rigid body. Consequently, the pyrochlore metamaterial is classified as an isostatic lattice due to the perfectly balanced degrees of freedom and constraints <cit.>. This isostatic point ensures the rigorous definition of topological mechanical indices, distinguishing them from the previously undefined topological numbers in super-isostatic structures <cit.>. Floppy modes refer to tetrahedron movements that do not deform their rigid bodies or mutual hinges, and thus do not involve elastic potential energy. Furthermore, floppy modes occur slowly, and their zero-frequency nature makes kinetic energy negligible. These properties allow us to exactly map the Newtonian statics of the metamaterial to an idealized spring-mass network, which enables the analytic study of the topological phases in static mechanical properties. The idealized spring-mass model is established by assigning a mass particle to each site and representing each edge with a central-force Hookean spring. In both the spring-mass system and the pyrochlore metamaterial, the static mechanics are equivalent, as each set of floppy modes corresponds to undistorted spherical hinges, edges, and tetrahedral bodies. The static mechanics of the spring-mass pyrochlore model can be described by the compatibility matrix C, which maps site displacements to spring elongations <cit.>. In spatially repetitive systems, the compatibility matrix can be Fourier-transformed into reciprocal space <cit.>, C(k), where k represents the wavevector. This compatibility matrix defines three integer-valued winding numbers, N_i(k) = -(1/2πi)∮_k→k+b_idk·∇_kln det C(k), i=1,2,3 that govern the topological phase of static mechanical properties, where the integration trajectory k→k+b_i follows a straight and closed loop that is parallel to the reciprocal vector b_i. For different unit cell geometries, winding numbers can manifest qualitatively distinct behaviors. In most geometries of pyrochlore lattices, winding numbers exhibit jumps between integers as the wavevector moves across the Brillouin zone, and the critical boundaries between different integers are called mechanical Weyl lines. 
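In practice, these winding numbers are evaluated by tracking the phase of det C(k) around the loop. A schematic implementation is sketched below; it assumes a routine C_of_k that returns the 12×12 compatibility matrix (its explicit form is given in the Supplementary Information) and is equivalent, up to discretization, to the formula used there.

import numpy as np

def winding_number(C_of_k, k_start, b_i, N=2000):
    # winding number N_i(k_start) of det C(k) along the closed loop k_start -> k_start + b_i
    ts = np.linspace(0.0, 1.0, N + 1)                 # C(k) is periodic under k -> k + b_i
    dets = np.array([np.linalg.det(C_of_k(k_start + t*b_i)) for t in ts])
    phase = np.unwrap(np.angle(dets))
    return -int(np.rint((phase[-1] - phase[0])/(2.0*np.pi)))   # minus sign as in the equation above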
However, in models with gapped mechanical spectra, such as the structure exhibited in Fig. <ref>, winding numbers stay invariant for arbitrary wavevector, and the mechanical bands are clear of Weyl lines. These globally defined winding numbers constitute the vector <cit.> R_ T = ∑_i=1^3 N_ia_i known as the topological polarization. This globally defined vector characterizes the topological phases of static mechanics and reflects the topological robustness of floppy modes in both the spring-mass model and the pyrochlore metamaterial. These zero-frequency mechanisms prefer to localize on the open boundary toward which the topological polarization points, while the opposite parallel boundary has fewer floppy modes. For the lattice configuration in Fig. <ref>, the topological polarization is R_ T=a_1-a_2-2a_3. As the topological polarization depends on the choice of the unit cell, we introduce the local polarization vector, denoted as R_ L. This vector characterizes how nodes and bonds are locally connected on the open surface <cit.>, effectively canceling the gauge dependence of the total polarization vector, R_ T. Together, these two polarizations govern the number density, ν, of topological floppy modes on the open surfaces of isostatic lattices. Specifically, we have ν=(1/2π)(R_ T+R_ L)·G, where G represents the reciprocal lattice vector that is perpendicular to the considered open surface and points outward. For the pyrochlore structure shown in Fig. <ref>, we find the number densities on the top and bottom open surfaces to be (ν_↑, ν_↓) = (0,3). The top boundary, with the floppy mode density ν_↑ = 0, lacks topological floppy modes entirely. In contrast, the bottom boundary hosts three floppy modes per supercell, as indicated by ν_↓ = 3. Due to the contribution of these floppy modes to local softness, the top boundary remains as rigid as the lattice's interior, while the bottom surface becomes significantly softer than the interior. As the lattice thickness increases, the contrast ratio in edge stiffness grows exponentially due to the localization of topological floppy modes near boundaries. This exotic behavior arises uniquely from topological polarization in isostatic lattices, which we term “fully-polarized topological mechanical metamaterials in 3D." In the subsequent section, we validate this highly polarized boundary elasticity both numerically and experimentally. It is worth emphasizing that our 3D topological lattice is fundamentally distinct from its 2D counterparts, because 2D mechanical structures are prone to deform, distort, and lose mechanical stability due to external pressure, thermal expansion, and bending <cit.>. Our work discovers the fully-polarized topological mechanical phase in 3D. This new phase opens up avenues towards novel applications of topological mechanical metamaterials, including asymmetric wave propagation, directional polar elasticity in isostatic media, and topological fracturing protection <cit.>. Uniform Shearing and Transformable Mechanical Weyl Phase— Isostatic lattices are known to host nonlinear and uniform soft strains of the whole structure, namely Guest-Hutchinson modes <cit.>, that reversibly shear the geometry without causing any elastic energy. In the pyrochlore metamaterial, the uniform shearing represents the rotation of the top tetrahedron around the spherical hinge connecting it to the bottom tetrahedron within the unit cell. This rotational mode is uniform across all unit cells in the lattice. The insets of Figs. 
<ref>c and <ref>e display two configurations of the pyrochlore unit cell that demonstrate how uniform shearing can reversibly evolve from one state to another, as visualized by the Supplementary Video <cit.>. In the unit cell shown in Fig. <ref>a, we rotate the top tetrahedron clockwise around the spherical hinge on the axis a_1×b_3, by angles of 10^∘ and 45^∘, to achieve the configurations depicted in the insets of Figs. <ref>c and <ref>e, respectively. These structures display mechanical responses that are topologically distinct from those in Fig. <ref>a. We first discuss the distinct topological mechanical phases exhibited by the three unit cell configurations shown in Figs. <ref>. In contrast to the globally-defined winding numbers in Fig. <ref>b that originate from the fully-polarized topological phase, the winding numbers in Figs. <ref>d and <ref>f can change as the wavevector moves <cit.>. At the critical wavevectors where the winding numbers jump, the mechanical bandgap closes, and these gap-closing points form gapless lines in the 3D Brillouin zone. These are called mechanical Weyl lines <cit.>, and are difficult to remove due to their topological robustness. The topological charge of Weyl lines <cit.> is determined by a nontrivial Berry phase N_w = -(1/2πi)∮_C dk·∇_kln det C(k), where the integration path C encloses the gapless lines. We highlight that the Brillouin zones depicted in Figs. <ref>c and <ref>e feature two and four Weyl lines, which correspond to a two-Weyl-line phase and a four-Weyl-line phase, respectively. As we discuss below, these two mechanical Weyl phases exhibit qualitatively different boundary elasticity compared to the topologically fully polarized phase. In Fig. <ref>a, numerical simulations are conducted to analyze the stiffness of the top and bottom open surfaces as the uniform shearing angle is increased from θ = 0^∘ to 45^∘. At θ=0^∘, corresponding to the configuration in Fig. <ref>a, the pyrochlore metamaterial is in the topologically fully-polarized phase, exhibiting significantly higher stiffness on the top surface compared to the bottom. This strong asymmetry in surface elasticity is further corroborated by the numerical results against external poking forces shown in Fig. <ref>b. Upon increasing the uniform shearing angle to 10^∘, the lattice undergoes a transition into the two-Weyl-line phase, as illustrated in Fig. <ref>c. This transition is characterized by a significant decrease in stiffness of the top boundary, which is quantitatively supported by the numerical analysis presented in Fig. <ref>a. Further rotation of the uniform shearing angle to 45^∘ induces a shift to the four-Weyl-line phase, depicted in Fig. <ref>e, leading to an additional reduction in stiffness of the top open surface, as evidenced by the data in Fig. <ref>a. The corresponding topological mechanical phase diagram is derived in the Supplementary Information <cit.>, exhibiting sharp and well-defined phase boundaries. These numerical results have a direct correspondence to the boundary experiments on the 3D-printed metamaterial. In the topologically fully-polarized case (Fig. <ref>a), floppy modes emerge exclusively on the bottom boundary, whereas the absence of floppy modes on the top boundary indicates a topologically protected rigidity. This unprecedented and strongly contrasting boundary stiffness in 3D is experimentally demonstrated using force-displacement measurements in Fig. <ref>c. 
We rotate the lattice’s unit cell configurations by a uniform shearing angle of 45^∘, and the lattice structure reaches the mechanical four-Weyl-line phase in Fig. <ref>b. Floppy modes arise on both the top and bottom surfaces, which is reflected by the comparable stiffness in Fig. <ref>d. Here, hysteresis in the force-displacement measurements stems from the combined effects of the hinge clearance and friction. We note that the fully-polarized topological and Weyl phases can be reversibly transformed by the uniform soft shearing strain. The highly polarized and flexible boundary elasticity in 3D has significant implications, revolutionizing various applications. One such application involves switchable landing gear for drones (Fig. <ref>e). During landing, the material transitions into a soft-bottomed Weyl phase, effectively absorbing shocks. In flight, the material transforms into a topological phase with a rigid bottom surface, ensuring stability. Remarkably, these extraordinary functionalities persist even if the outer layers of the topological metamaterial are peeled off, showcasing its resilience and durability. Our pyrochlore structure also applies to continuum materials, where finite bending stiffness raises the frequencies of the topological floppy modes. These “soft modes" propagate asymmetrically between the lattice’s top and bottom surfaces, leading to intriguing consequences. For example, our pyrochlore structure can be incorporated into a cylindrical domain, creating porous wheels that efficiently absorb energy on rugged terrains (see Figs. <ref>f and g). Discussions—We have demonstrated, both theoretically and experimentally, the topologically fully-polarized mechanical phase in 3D. The mechanical metamaterial remains at the isostatic point, ensuring the rigorousness of the topological mechanical index. Topological floppy modes, confined exclusively to a single boundary, result in a fully-polarized topological phase without Weyl lines. This mechanical achievement is analogous to the discovery of 3D topological insulators in electronic systems. Using soft uniform shearing modes, the isostatic metamaterial can be transformed from topologically polarized to Weyl phases, reducing the stiffness contrast between opposite boundaries to a trivially comparable level. The topological polarization in 3D paves the way towards physics not possible for 2D lattices, such as higher-order topological floppy modes in 3D, topological mechanical cloaking, and static mechanical non-reciprocity in all spatial dimensions. Acknowledgement—D. Z., F. L., and Y. Y. acknowledge the support from the National Science Foundation of China (Grant Nos. 12374157, 12102039, 12272040, 11734003). D. Z. and F. L. acknowledge insightful discussions with Chiara Daraio and Ying Wu. § EXPERIMENTAL AND NUMERICAL DETAILS OF FORCE-DISPLACEMENT MEASUREMENTS We construct a generalized pyrochlore lattice that consists of 8× 8× 6 unit cells. The top and bottom boundaries are left open, whereas fixed boundary conditions are applied on all four lateral faces (front, back, left, right). These four fixed side boundaries (i.e., 4 FBCs) are consistent with those used in the experiment and the data presented in Figs. 4c and d of the main text. To measure the stiffness of the top and bottom open surfaces, force sensors are placed on a two-dimensional displacement platform that allows free movement in the xy plane. 
Our focus is primarily on studying the two open boundaries indicated by the a_3 direction (i.e., open surfaces perpendicular to the reciprocal vector b_3). Therefore, except for the one being measured, we ensure that the remaining five boundaries (the opposite open surface and the four side surfaces) are under fixed boundary conditions during measurements. The fixed-boundary constraints on the front and back boundaries are removed here for visual convenience of the internal structure of the lattice. By slowly and continuously pushing and pulling the measuring tip of the force sensor that is connected to the open boundary in the vertical direction, we obtain the force-displacement curves of the top and bottom open boundaries of the generalized pyrochlore lattice, as illustrated in Fig. 4c and d of the main text. Next, we ask how much effect the boundary conditions have on the stiffness contrast between the top and bottom open surfaces. To this end, we numerically construct the topologically polarized pyrochlore lattice that is composed of 5×5×8 unit cells. We consider two kinds of boundary conditions: one lattice uses fixed conditions for all four lateral sides (4 FBCs), consistent with the force-displacement measurement performed in the experiment (see SI. Fig. <ref>a); the other lattice uses open conditions for the front and back, and fixed conditions for the left and right (2 OBCs + 2 FBCs), see SI. Fig. <ref>b. In the topologically polarized phase, the stiffness of the top and bottom boundaries under “4 FBCs" in SI. Fig. <ref>a is significantly higher than that under “2 OBCs+2 FBCs" in SI. Fig. <ref>b. This reduction in SI. Fig. <ref>b is mainly attributed to the additional floppy modes generated on the front and back open boundaries. Nevertheless, topological robustness in rigidity is retained on the upper surface, as evidenced by the significantly higher stiffness of the top boundary compared to the bottom boundary (see SI. Fig. <ref>b). While the front and back open boundaries can reduce the stiffness of the top and bottom boundaries, which are physical properties in real space, these boundary conditions do not alter the topological polarization defined in reciprocal space. This is further confirmed by Kane–Lubensky theory <cit.>, where the polarization reflects the intrinsic topological structure of the mechanical bulk bands defined in momentum space, and is therefore unaffected by real-space boundary conditions. § MODELING THE MECHANICS OF THE SPRING-MASS MODEL IN THE GENERALIZED PYROCHLORE LATTICE This section aims to construct the mechanical properties of an idealized spring-mass model that is composed of mass particles connected by Hookean springs. The mass points are positioned at the vertices of a generalized pyrochlore lattice, and the elastic linear springs are arranged along the edges of this lattice. We analyze the mechanical properties of this spring-mass model by establishing the compatibility matrix. Finally, we derive the winding numbers that characterize the topological phases of the static mechanical properties of the generalized pyrochlore lattice. §.§ The geometry of the regular pyrochlore lattice First of all, we define the geometry of the regular pyrochlore lattice, whose unit cell is illustrated in SI. Fig. <ref>. The unit cell is composed of four sites, denoted as A, B, C, and D, and 12 edges, labeled from 1 to 12. 
The site positions are defined at A^(0) = ℓ(1,1,0)/2, B^(0)= ℓ(0,1,1)/2, C^(0) = ℓ(1,0,1)/2, and D^(0) = ℓ(0,0,0), where ℓ is the length scale of the idealized model, and the superscript “(0)" implies that we are considering the geometry of the regular pyrochlore lattice. The primitive vectors, as indicated by the black arrows in the inset of SI. Fig. <ref>, are given by a^(0)_1 = ℓ(1,1,0), a^(0)_2 = ℓ(0,1,1), and a^(0)_3 = ℓ(1,0,1). Thus, the unit cell can be labeled by three integer numbers n_1, n_2, and n_3, which indicates the position of the unit cell at n_1a_1^(0)+n_2a_2^(0)+n_3a_3^(0). In the rest of this Supplementary Information, we always use a vectorial index n = (n_1, n_2, n_3) to mark the unit cell. Following this convention, the sites are labeled by A^(0)(n), B^(0)(n), C^(0)(n), and D^(0)(n), whose positions are given by X^(0)(n)=X^(0)+n_1a_1^(0)+n_2a_2^(0)+n_3a_3^(0) for X=A,B,C,D. Then, we elaborate on how nodes are connected in the network of regular pyrochlore lattice. To this end, we study the 12 edges within the unit cell, as shown in SI. Fig. <ref>, whose edge vectors are defined below, where again the superscript “(0)" indicates that we are considering the regular pyrochlore lattice: l_1^(0) = B^(0)-A^(0), l_2^(0) = C^(0)-B^(0), l_3^(0) = A^(0)-C^(0), l_4^(0) = A^(0)-D^(0), l_5^(0) = B^(0)-D^(0), l_6^(0) = C^(0)-D^(0), l_7^(0) = A^(0)-a_1^(0)+a_2^(0)-B^(0), l_8^(0) = B^(0)-a_2^(0)+a_3^(0)-C^(0), l_9^(0) = C^(0)+a_1^(0)-a_3^(0)-A^(0), l_10^(0) = D^(0)+a_1^(0)-A^(0), l_11^(0) = D^(0)+a_2^(0)-B^(0), l_12^(0) = D^(0)+a_3^(0)-C^(0). In SI. Eqs. (<ref>), the edge vectors l_7^(0) through l_12^(0) include the primitive vectors a_1^(0), a_2^(0), and a_3^(0) in their expression because these edges connect sites that belong to different unit cells. For example, as shown in SI. Fig. <ref>, the edge vector l_10^(0) connects the sites A^(0)(n_1,n_2,n_3) and D^(0)(n_1+1,n_2,n_3), which belong to different unit cells labeled by (n_1,n_2,n_3) and (n_1+1,n_2,n_3). Thus, the corresponding edge vector is calculated as l_10^(0) = D^(0)(n_1+1,n_2,n_3)-A^(0)(n_1,n_2,n_3). The primitive vector a_1^(0) stems from the different unit cells that the sites A^(0)(n_1,n_2,n_3) and D^(0)(n_1+1,n_2,n_3) belong to. Likewise, the edge l_7^(0) connects the sites B^(0)(n_1,n_2,n_3) and A^(0)(n_1-1,n_2+1,n_3). Its edge vector is therefore given by l_7^(0) = A^(0)(n_1-1,n_2+1,n_3)-B^(0)(n_1,n_2,n_3). In the regular pyrochlore lattice, all edges have the same length of ℓ/√(2). The corresponding unit vectors of these 12 edges are defined as t̂_i^(0) = l_i^(0)/|l_i^(0)|, i=1,2,…, 12. For the regular pyrochlore lattice, we have t̂_1^(0)=(-1,0,1)/√(2), t̂_2^(0)=(1,-1,0)/√(2), t̂_3^(0)=(0,1,-1)/√(2), t̂_4^(0)=(1,1,0)/√(2), t̂_5^(0)=(0,1,1)/√(2), t̂_6^(0) = (1,0,1)/√(2). The other edges labeled from 7 to 12 are parallel to the edges labeled from 1 to 6, yielding t̂_i+6^(0)∥t̂_i^(0) for i=1,2,…, 6. This relationship does not hold for the generalized pyrochlore lattice, as we indicate below. §.§ The geometry of the generalized pyrochlore lattice The network connectivity of the generalized pyrochlore lattice, i.e., how nodes are connected to each other, remains the same as that of the regular pyrochlore lattice. However, the geometry of the generalized pyrochlore lattice, namely the primitive vectors, the site positions, and the edge vectors, are different from the regular pyrochlore lattice. 
Within the unit cell, the site positions are displaced from those of the regular lattice, as defined by X=X^(0)+Δ X for X=A,B,C,D, where Δ X denotes the change in node positions. The primitive vectors are defined by a_i=a_i^(0)+Δa_i for i=1,2,3, where Δa_i represents the difference of the primitive vectors between the generalized and regular pyrochlore lattices. The site positions in the generalized pyrochlore lattice are given by X(n) = X+n_1a_1+n_2a_2+n_3a_3, with n=(n_1, n_2, n_3) the vectorial index that labels the unit cell of the generalized pyrochlore lattice. Since the focus of this paper is to discuss the boundary mechanics of the pyrochlore structure under distinct geometries, we refer to such a class of pyrochlore lattice geometries that differ from the regular pyrochlore lattice as the “generalized pyrochlore lattice". In SI. Sec. 2, we will discuss the topologically polarized and Weyl phases of the static mechanical properties by specifying the geometric parameters of the generalized pyrochlore lattice. The edge vectors in the generalized pyrochlore lattice follow the same definitions as in the regular pyrochlore lattice. To be specific, the edge vectors are defined as follows, l_1 = B-A, l_2 = C-B, l_3 = A-C, l_4 = A-D, l_5 = B-D, l_6 = C-D, l_7 = A-a_1+a_2-B, l_8 = B-a_2+a_3-C, l_9 = C+a_1-a_3-A, l_10 = D+a_1-A, l_11 = D+a_2-B, l_12 = D+a_3-C. The edge orientations are defined as t̂_i = l_i/|l_i|, i=1,2,…, 12. Finally, the site positions of the top tetrahedron within the unit cell are given by A'=A-a_1+a_3, B'=B-a_2+a_3, C'=C, and D'=D+a_3. Consequently, the geometry of the generalized pyrochlore lattice, such as the site positions and edge vectors, is different from that of the regular pyrochlore lattice. Furthermore, as we indicated before, the edges labeled by i=7,8,…,12 are not parallel to the edges labeled by i=1,2,…,6: t̂_i+6∦t̂_i for i=1,2,…,6. Finally, we define the reciprocal vectors of the generalized pyrochlore lattice as b_i = 2π (a_j×a_k) /[a_1 ·(a_2×a_3)], where (i,j,k) is a cyclic permutation of (1,2,3). It is worth noting that, as long as the site positions A, B, C, and D are given, the edge vectors from l_1 to l_6 of the bottom tetrahedron can be determined, as shown by SI. Eqs. (<ref>). Likewise, given the site positions of the bottom tetrahedron and the primitive vectors a_i=1,2,3, all edge vectors of the top tetrahedron, from l_7 to l_12, are determined. §.§ Compatibility matrix of the static mechanics of the spring-mass pyrochlore lattice We consider a spring-mass model that is embedded in the geometry of the generalized pyrochlore lattice. As shown in SI. Fig. <ref>, each site is occupied by a particle with identical mass m. The edges are represented by central-force Hookean springs whose elastic constants are identically k, and their rest lengths match the corresponding edges of the generalized pyrochlore lattice. This is a three-dimensional mechanical model (d=3), whose unit cell possesses four mass points (n_site=4). Therefore, a total of 12 degrees of freedom (n_site d = 12) are contained in each unit cell. On the other hand, the unit cell has 12 Hookean springs offering 12 constraints (n_bond=12). As a result, the unit cell of the pyrochlore lattice has equal numbers of degrees of freedom and constraints (n_site d = n_bond = 12), creating a lattice that is on the verge of mechanical instability. In other words, the inner body of the generalized pyrochlore lattice is at the isostatic point. 
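The geometry defined above can be encoded compactly. The snippet below is a sketch that uses the fully-polarized configuration quoted later in SI. Sec. 2 (the Δ values with Δa_i=0); only the resulting bond orientations and lattice vectors enter the mechanics.

import numpy as np

ell = 1.0    # overall length scale; only bond orientations matter for the topology

A0, B0, C0, D0 = (0.5*ell*np.array(v) for v in [(1,1,0), (0,1,1), (1,0,1), (0,0,0)])
a1, a2, a3 = (ell*np.array(v) for v in [(1,1,0), (0,1,1), (1,0,1)])   # Delta a_i = 0

# site displacements of the fully-polarized configuration (SI. Sec. 2)
dA, dB, dC, dD = (0.053*ell*np.array(v) for v in
                  [(1,0.3,0), (0.55,0,1), (-1,1,-1), (-1.8,-1,1.2)])
A, B, C, D = A0 + dA, B0 + dB, C0 + dC, D0 + dD

# the 12 edge vectors l_1 ... l_12 in the order of SI. Eqs.
edges = [B-A, C-B, A-C, A-D, B-D, C-D,
         A-a1+a2-B, B-a2+a3-C, C+a1-a3-A,
         D+a1-A, D+a2-B, D+a3-C]
t_hat = [l/np.linalg.norm(l) for l in edges]

# reciprocal vectors b_i, with (i, j, k) cyclic
vol = np.dot(a1, np.cross(a2, a3))
b1, b2, b3 = (2.0*np.pi*np.cross(x, y)/vol for (x, y) in [(a2, a3), (a3, a1), (a1, a2)])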
Having defined the ground state of the spring-mass pyrochlore model in which all bonds are un-stretched, we now consider particle displacements away from their equilibrium positions. Particle displacements induce elongations in the Hookean springs and cause elastic potential energy. We denote u_X(n) as the three-dimensional displacement of the site that is away from its equilibrium position, where X=A,B,C,D marks the four sites within the unit cell, and n=(n_1,n_2,n_3) labels the associated unit cell. We consider particle displacements that are small comparing to the edge lengths, which allows for linear approximations in bond elongations: δ l_1(n) = [u_B(n)-u_A(n)]·t̂_1, δ l_2(n) = [u_C(n)-u_B(n)]·t̂_2, δ l_3(n) = [u_A(n)-u_C(n)]·t̂_3, δ l_4(n) = [u_A(n)-u_D(n)]·t̂_4, δ l_5(n) = [u_B(n)-u_D(n)]·t̂_5, δ l_6(n) = [u_C(n)-u_D(n)]·t̂_6, δ l_7(n) = [u_A(n+(-1,1,0))-u_B(n)]·t̂_7, δ l_8(n) = [u_B(n+(0,-1,1))-u_C(n)]·t̂_8, δ l_9(n) = [u_C(n+(1,0,-1))-u_A(n)]·t̂_9, δ l_10(n) = [u_D(n+(1,0,0))-u_A(n)]·t̂_10, δ l_11(n) = [u_D(n+(0,1,0))-u_B(n)]·t̂_11, δ l_12(n) = [u_D(n+(0,0,1))-u_C(n)]·t̂_12, where δ l_i(n) for i=1,2,…, 12 stands for the change in the bond length of the i-th edge in the unit cell marked by n=(n_1, n_2, n_3). Under mixed boundary conditions, such as periodic boundaries in the lattice directions a_1 and a_2, along with open boundaries in the lattice vector a_3, the elastic bonds on the open boundaries need to be severed to create these open boundaries. As a result, the constraints and degrees of freedom on these open boundaries become unbalanced, leading to an excess of degrees of freedom in these conditions. Specifically, in a generalized pyrochlore lattice with mixed boundary conditions, denoted by the numbers of sites and bonds as N_ site and N_ bond respectively, the condition becomes N_ sited > N_ bond. Furthermore, we denote the displacements of all particles as a N_ sited-dimensional vector field u = (…, u_A(n),u_B(n),u_C(n),u_D(n),…)^⊤ (⊤ denotes matrix transpose), and denote the elongations of all bonds as a N_ bond-dimensional vector field δl = (…, δ l_1(n), δ l_2(n), …, δ l_12(n), …)^⊤. Given the particle displacements u, one can always find the bond elongations δl via SI. Eqs. (<ref>). Thus, SI. Eqs. (<ref>) allow us to establish a N_ bond× N_ sited matrix, namely the compatibility matrix 𝐂 <cit.>, that maps particle displacements to bond elongations via 𝐂u = δl. Compatibility matrix is useful in exploring both the dynamical and static properties of the idealized spring-mass lattices, since the mechanical energy is governed by the compatibility matrix via E = T+V = (mu̇^⊤u̇+ku^⊤𝐂^⊤𝐂u)/2, where T and V are the kinetic and potential energy, respectively. Moreover, compatibility matrix can describe the static mechanical properties of the 3D-printed pyrochlore metamaterial, as we have demonstrated in the main text. In what follows, we focus on the properties of this compatibility matrix. In general, particle movements in the idealized spring-mass model induce both elastic deformation and kinetic energy. However, mechanical zero modes <cit.> refer to zero-frequency site displacements that do not deform elastic bonds. Therefore, all δ l_i(n) must be zero (i.e., δl=0), resulting in zero potential energy V=0. Additionally, since mechanical zero modes occur very slowly, their zero-frequency static nature means that the contribution of kinetic energy of mass particles is also negligible, i.e., T=0. As a result, mechanical zero modes yield 𝐂u=δl=0. 
According to linear algebra, the number of linearly independent mechanical zero modes corresponds to the dimensionality of the null space of the compatibility matrix, i.e., N_ zm = null(C). States of self-stress, however, describe non-zero tensions in elastic bonds that allow for vanishing net forces on every mass point. Thus, states of self-stress constitute the null space of the equilibrium matrix Q, where the equilibrium matrix Q is equal to the transpose of the compatibility matrix C^⊤. Consequently, the number of linearly independent states of self-stress equals to the dimensionality of the null space of the equilibrium matrix, N_ sss = null(Q)= null(C^⊤). According to the rank-nullity theorem, which is also known as the Calladine-isostatic <cit.> theorem in mechanical lattices, the number difference between the mechanical zero modes and states of self-stress equals to the number difference between the degrees of freedom and constraints, N_ zm-N_ sss = N_ sited-N_ bond. Floppy modes, on the other hand, are the (non-trivial) mechanical zero modes excluding the trivial translational and rotational zero modes, which amount to d(d+1)/2=6. Therefore, the number of linearly independent floppy modes is given by N_ fm=N_ zm-d(d+1)/2. Under open boundary conditions, the number of mechanical zero modes grows extensively as the length scale increases. Thus, for sufficiently large lattices, the number of floppy modes and mechanical zero modes, N_ fm ≈ N_ zm, are roughly the same, as the number contribution of trivial zero modes becomes negligible. Throughout the rest of this Supplementary Information, we treat the terminologies of floppy modes and mechanical zero modes interchangeably, without distinguishing between them. Below, we define winding numbers of isostatic lattices that characterize topological phases of these mechanical floppy modes in the generalized pyrochlore lattice. §.§ Topological winding numbers of the static mechanics in the spring-mass pyrochlore lattice Spatially repetitive frames allow for the representation of phonon modes as plane waves. These modes can be mathematically formulated using the Fourier transformation u_X(n) = ∑_ku_X(k)e^ik·r(n). In this equation, u_X(n) represents the mechanical wave for the X=A,B,C,D site in the unit cell, labeled by the vector index n=(n_1,n_2,n_3). The corresponding position is given by r(n)=n_1a_1+n_2a_2+n_3a_3, with a_1, a_2, and a_3 the primitive vectors. k represents the phonon wavevector, and the summation ∑_k runs over the three-dimensional Brillouin zone. Furthermore, u_X(k) denotes the phonon mode at the wavevector k, with X=A,B,C,D. We denote u(k) = (u_A(k), u_B(k), u_C(k), u_D(k))^⊤ as the 12× 1 displacement vector for the momentum k. Likewise, the bond elongations can be decomposed as the plane-wave format via δ l_i(n) = ∑_kδ l_i(k)e^ik·r(n) for i=1,2,…, 12, where δ l_i(k) is the Fourier-transformed bond elongations at the momentum k. Thus, we denote δl(k)=(δ l_1(k), δ l_2(k), …, δ l_12(k))^⊤ as the 12× 1 column vector for the bond elongations of the wavevector k. These Fourier-transformed quantities are related by the Fourier-transformed 12× 12 compatibility matrix 𝐂(k) via 𝐂(k)u(k) = δl(k). 
For the generalized pyrochlore lattice, the Fourier-transformed compatibility matrix is elaborated as follows, 𝐂(k)= ( [ -t̂_1 t̂_1 0 0; 0 -t̂_2 t̂_2 0; t̂_3 0 -t̂_3 0; t̂_4 0 0 -t̂_4; 0 t̂_5 0 -t̂_5; 0 0 t̂_6 -t̂_6; e^ik·(a_2-a_1)t̂_7 -t̂_7 0 0; 0 e^ik· (a_3-a_2)t̂_8 -t̂_8 0; -t̂_9 0 e^ik·(a_1-a_3)t̂_9 0; -t̂_10 0 0 e^ik·a_1t̂_10; 0 -t̂_11 0 e^ik·a_2t̂_11; 0 0 -t̂_12 e^ik·a_3t̂_12; ]). The compatibility matrix of the generalized pyrochlore lattice has a vast 14-dimensional parameter space. As shown by Fig. 1b of the main text, the unit cell of the generalized pyrochlore lattice is composed of two tetrahedra that connect on site C. A total of 7d=21 parameters in the compatibility matrix arise from the site positions A, B, C, and D, as well as the primitive vectors a_1, a_2, and a_3. However, only 14 out of 21 parameters are freely adjustable. This can be understood by noticing that site D can be translated to D=(0,0,0) without affecting the mechanical properties of the pyrochlore lattice. Next, an overall rotation and an overall scaling of the entire structure will not modify the mechanical properties, as the compatibility matrix only depends on the bond orientations. Thus, we fix the edge vector l_1=B-A along (1,0,0) and require the triangle ABC to lie in the horizontal plane. These constraints reduce the independent parameters to 14 in the compatibility matrix. The topological properties of the floppy modes can be captured by the integer-valued winding numbers defined from the compatibility matrix in reciprocal space. Given the wavevector k in the three-dimensional Brillouin zone, topological winding numbers are defined as follows, 𝒩_i(k) = -(1/2πi)∮_k→k+b_i dk·∇_kln det 𝐂(k), where i=1,2,3 labels the three reciprocal vector directions, b_i is the i-th reciprocal vector, and the integration trajectory, k→k+b_i, follows a straight and closed loop that is parallel to b_i. For each k, there are a total of three topological winding numbers in the generalized pyrochlore lattice. In our numerical computation, the evaluation of winding numbers is performed in the following format, 𝒩_i(k) = 1/2π∑_n=1^N |f_n|^-2[ Im f_n Re (f_n+1-f_n) - Re f_n Im (f_n+1-f_n) ], where f_n=det 𝐂(k+(n/N)b_i) is the determinant of the compatibility matrix at wavevector k+(n/N)b_i, k = (n_j/N')b_j+(n_k/N')b_k is the starting wavevector of the integration with i,j,k=1,2,3 distinct, N=2000 steps are used in the integration, N'=1000 is used, and we run over 1≤ n_j,n_k≤ N' to scan the winding numbers in the Brillouin zone. These three numbers together govern the topological mechanical phase of isostatic lattices, as we will address in SI. Sec. 2. §.§ Guest-Hutchinson modes Isostatic lattices can host nonlinear and uniform soft strains of the whole structure, known as Guest-Hutchinson modes <cit.>, that reversibly evolve the lattice geometry without causing any elongations of the edges. The 3D spring-mass model integrated into the geometry of the generalized pyrochlore lattice facilitates the Guest-Hutchinson modes, which are characterized by the relative rotations on the spherical hinge that connects the top and bottom tetrahedra within the unit cell, as shown by Fig. 1b of the main text. Therefore, Guest-Hutchinson modes can be quantified by the three Euler angles in the generalized pyrochlore lattice. This can be visualized in Fig. 2a-c and Fig. 4a and b of the main text, where we choose a Guest-Hutchinson mode shearing that rotates about the dashed-line axis on the spherical hinge. 
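Since the shearing enters only through the bond orientations t̂_i and the primitive vectors a_i, the matrix in SI. Eq. (<ref>) and the winding numbers can be evaluated for any configuration with a short routine. The sketch below reuses the geometry variables and the winding_number helper sketched earlier; the row ordering follows SI. Eqs. (<ref>).

import numpy as np

def compatibility_matrix(k, t_hat, a1, a2, a3):
    # Fourier-transformed 12x12 compatibility matrix; columns grouped as (u_A, u_B, u_C, u_D)
    # each bond: (index of the displaced "head" site, index of the "tail" site, Bravais offset of the head)
    bonds = [(1,0,(0,0,0)), (2,1,(0,0,0)), (0,2,(0,0,0)),
             (0,3,(0,0,0)), (1,3,(0,0,0)), (2,3,(0,0,0)),
             (0,1,(-1,1,0)), (1,2,(0,-1,1)), (2,0,(1,0,-1)),
             (3,0,(1,0,0)), (3,1,(0,1,0)), (3,2,(0,0,1))]
    avec = [a1, a2, a3]
    C = np.zeros((12, 12), dtype=complex)
    for row, ((p, q, n), t) in enumerate(zip(bonds, t_hat)):
        shift = n[0]*avec[0] + n[1]*avec[1] + n[2]*avec[2]
        C[row, 3*p:3*p+3] += np.exp(1j*np.dot(k, shift))*t   # head site carries the Bloch phase
        C[row, 3*q:3*q+3] -= t                               # tail site
    return C

# example: the three winding numbers at a generic base wavevector
C_of_k = lambda k: compatibility_matrix(k, t_hat, a1, a2, a3)
k_base = 0.37*b1 + 0.59*b2
print([winding_number(C_of_k, k_base, b) for b in (b1, b2, b3)])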
By manipulating the bond orientations in the generalized pyrochlore lattice using this shearing Guest-Hutchinson mode, the compatibility matrix, as described in SI. Eq. (<ref>), changes accordingly. The topological winding numbers of the static mechanical properties, as shown in SI. Eq. (<ref>), can change as well. This shearing Guest-Hutchinson mode allows the pyrochlore lattice to achieve multiple topological phases of the mechanical properties, which we address below. To facilitate a rich topological phase transition within the metamaterial, encompassing topologically polarized, two-Weyl-line, and four-Weyl-line phases, we follow two key considerations for the spherical hinges: (1) Opening angle of spherical hinges: We carefully selected the opening angle of the spherical hinges to allow a large tunable range among topologically distinct phases. Simultaneously, we ensured that the concave hinge fits into the convex side, preventing separation. Consequently, the opening angle of the concave spherical hinge is set at 110 degrees. (2) Central orientation of spherical hinges: By appropriately orienting the central axes of the concave spherical hinges, we enable a rich transformation from the initial topologically polarized phase to the four-Weyl-line phase after a 45-degree Guest-Hutchinson mode rotation. § TOPOLOGICAL PHASES OF THE STATIC MECHANICS IN THE GENERALIZED PYROCHLORE LATTICE In this section, we will introduce the notions of fully-polarized topological phases, partially-polarized topological phases, and Weyl phases by focusing on the winding numbers. Furthermore, we will discuss the non-trivial mechanical properties of the corresponding isostatic lattices. §.§ Fully-polarized topological mechanical phases For certain geometric configurations of the generalized pyrochlore lattice, such as the unit cell configuration displayed in Fig. 1b of the main text, the phonon band structure can be gapped throughout the three-dimensional Brillouin zone, except for the trivial k=0 point. This k=0 point represents the wavevector of mechanical zero modes that translate or rotate the entire lattice as a whole, and these modes are commonly referred to as “trivial mechanical zero modes". SI. Fig. <ref> provides a visual representation of how the lattice displaces under the translational or rotational mechanical zero modes. This gapped mechanical phase can be realized in the generalized pyrochlore lattice with the following set of geometric parameters Δ A=0.053(1,0.3,0)ℓ, Δ B=0.053(0.55,0,1)ℓ, Δ C=0.053(-1,1,-1)ℓ, Δ D=0.053(-1.8,-1,1.2)ℓ, Δa_1=Δa_2=Δa_3=0. Such a set of parameters has been used in the main text Fig. 1b and Fig. 2a. As a result, all winding numbers, i.e., (𝒩_1(k),𝒩_2(k),𝒩_3(k))=(1,-1,-2), stay invariant for arbitrary k in 3D reciprocal space, as shown in Figs. 2a and 2d using the numerical computation presented in SI. Eq. (<ref>). Furthermore, these invariant integer numbers can be combined into a topologically invariant vector, called the topological polarization, R_ T=∑_i=1^3 𝒩_i a_i. For the geometric configuration presented in SI. Eq. (<ref>) (Fig. 2a of the main text), the topological polarization reads R_ T=a_1-a_2-2a_3. The physical significance of this topological polarization is that the topological floppy modes localize preferably on the surface that the topological polarization points to, whereas the parallel opposite surface has fewer topological mechanical floppy modes. 
In other words, R_ T indicates the topological polarization of the boundary floppy modes in the gapped phase of the considered isostatic lattice. The principle of bulk-boundary correspondence states that the topological polarization obtained from the bulk phonon bands plays a crucial role in determining the number density of mechanical floppy modes on open surfaces of isostatic lattices. However, the local details of how nodes and bonds are connected on the open surfaces also play a role in determining the distribution of floppy modes. This local connectivity is quantitatively captured by the local polarization vector R_ L. Together, topological and local polarizations govern the number distribution of boundary floppy modes via the relationship ν=1/2πG·(R_ T+R_ L), where the reciprocal lattice vector G is perpendicular to the considered open surface and points outward. In Fig. 1d and Fig. 2a, the reciprocal vectors for the top and bottom boundaries that are perpendicular to b_3, are G_↑ = +b_3 and G_↓ = -b_3, respectively. The local polarizations are R_ L↑=-a_2+2a_3 and R_ L↓=a_1-a_3 for the top and bottom open surfaces, respectively. As a result, the number densities of surface floppy modes per supercell are ν_↑=b_3·(R_ T+R_ L↑)/2π=0 and ν_↓=-b_3·(R_ T+R_ L↓)/2π=3 for the top and bottom surfaces. The floppy mode distribution, ν_↑=0, indicates that the top surface of the generalized pyrochlore lattice is completely devoid of mechanical floppy modes that account for the softness, rendering the open surface as rigid as the inner body of the lattice. In contrast, the floppy mode distribution, ν_↓=3, exhibits a much softer local response due to the exclusive emergence of topological floppy modes on the bottom boundary. This conclusion, as indicated from the perspective of momentum space, can be alternatively demonstrated from the perspective of real-space analysis. As shown in Fig. 3 of the main text, we perform a real-space analysis on calculating the numbers of floppy modes on a pair of parallel open surfaces normal to b_3. The highly polarized distribution of floppy modes on the bottom boundary is in agreement with the result indicated by the topological polarization. Since the top surface is completely devoid of mechanical floppy modes, as indicated by ν_↑ = 0, we dub the topological mechanical phase of this generalized pyrochlore lattice as the “fully-polarized topological phase". This highly asymmetric boundary mechanics is topologically protected, in the sense that even if a few lattice layers are peeled off, the polarized mechanics remains unchanged. For instance, new top (bottom) surfaces may become rigid (soft) when the outer layers peel off due to severe conditions. The only way to modify the edge properties is to globally alter the band topology of the phonon spectrum through a uniform twisting, referred to as Guest-Hutchinson modes, which we address in the following section. §.§ Partially-polarized topological mechanical phases Mechanically isostatic lattices with gapped phonon spectrum can manifest globally defined winding numbers and topological polarization vector. The number densities of boundary floppy modes follow SI. Eq. (<ref>). In the aforementioned section, we design the geometry of the pyrochlore lattice such that the top boundary is completely devoid of floppy modes, and all modes are localized on the bottom boundary. This highly polarized floppy mode localization leads to the strongly contrasting surface stiffness. 
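As a numerical cross-check of the counting rule in SI. Eq. (<ref>) for the fully-polarized configuration discussed above, the surface mode densities can be evaluated directly from the polarization vectors. The short Python sketch below is meant only as an illustration; it uses the primitive vectors of the fully-polarized lattice, a_1=ℓ(1,1,0), a_2=ℓ(0,1,1), and a_3=ℓ(1,0,1) (as defined in the main text), and the polarization vectors quoted above.

import numpy as np

ell = 1.0
a1 = ell * np.array([1.0, 1.0, 0.0])
a2 = ell * np.array([0.0, 1.0, 1.0])
a3 = ell * np.array([1.0, 0.0, 1.0])

# reciprocal vector b3, satisfying b_i . a_j = 2*pi*delta_ij
vol = np.dot(a1, np.cross(a2, a3))
b3 = 2.0 * np.pi * np.cross(a1, a2) / vol

# topological and local polarizations of the fully-polarized configuration
R_T = a1 - a2 - 2.0 * a3
R_L_top = -a2 + 2.0 * a3
R_L_bot = a1 - a3

nu_top = np.dot(+b3, R_T + R_L_top) / (2.0 * np.pi)   # evaluates to 0
nu_bot = np.dot(-b3, R_T + R_L_bot) / (2.0 * np.pi)   # evaluates to 3

Because b_i·a_j=2πδ_ij, the same counts follow for any embedding of the primitive vectors; the result ν_↑=0 and ν_↓=3 matches the mode densities quoted above.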
However, in certain geometries of the generalized pyrochlore lattices, topological edge floppy modes can arise on both of the parallel, opposite open boundaries despite the gapped phonon band structure and the well-defined topological polarization vector. Such a geometric configuration has been illustrated in Ref. <cit.>, in which the bottom boundary hosts ν_↓=2 floppy modes per supercell, and the top surface hosts ν_↑=1 floppy mode. In this case, the stiffness contrast between the top and bottom open boundaries is only modest, and we refer to this situation as the “partially-polarized topological mechanical phase". §.§ Weyl phase with mechanical Weyl lines In the previous section, the phonon band structure is gapped everywhere except for the trivial k=0 point. This “gapped phase" is obtained from the geometric configuration elaborated in SI. Eq. (<ref>). In this configuration, all winding numbers, 𝒩_i=1,2,3(k), remain invariant for arbitrary k in the momentum space. However, in most geometries of generalized pyrochlore lattices, the phonon band structure can be gapless at some wavevectors k≠ 0 in the three-dimensional Brillouin zone. For example, using the Guest-Hutchinson mode in SI. Sec. 1(E) that uniformly shears the top tetrahedra relative to the mutual spherical hinges on the bottom tetrahedra throughout the entire lattice, the band structure of the isostatic pyrochlore lattice can be transformed into such a gapless state. To be specific, starting from the lattice configuration in Figs. 1b and 2a of the main text, we shear the top tetrahedron relative to the bottom one for every unit cell, with respect to the axis a_1×b_3 with a counterclockwise rotation angle of 10^∘. In this 10^∘-sheared configuration, Δ A, Δ B, Δ C, and Δ D remain the same as those in SI. Eqs. (<ref>), whereas the changes of the primitive vectors, Δa_i for i=1,2,3, are given by Δa_1=(0.075,-0.0966,0.0862)ℓ, Δa_2=(0.0383,-0.0572,0.0477)ℓ, Δa_3=(-0.0212,-0.1049,0.0418)ℓ. This sheared configuration has a phonon band structure that is gapless at finite wavevectors. These band-touching points at finite wavevectors correspond to topologically protected bulk floppy modes that extend throughout the frame, and are called “mechanical Weyl modes". The wavevectors of Weyl modes further constitute gapless lines in the three-dimensional Brillouin zone, which are referred to as “mechanical Weyl lines". Isostatic lattices that host Weyl lines in their Brillouin zones are said to lie in the mechanical Weyl phases. To quantitatively discuss the aforementioned “gapped" and “gapless" mechanical band structures in the topologically polarized and Weyl phases, respectively, we compute the eigenvalues of the mechanical Hamiltonian for isostatic lattices. This Hamiltonian is obtained by taking the “square root" of the Newtonian dynamical matrix <cit.>. By doing so, we arrive at a Schrödinger-like mechanical Hamiltonian for the pyrochlore lattice, H(k) = ([ 0 C(k); C^†(k) 0; ]) where C(k) is referred to as the compatibility matrix in reciprocal space. The mechanical band structures are numerically computed by finding the eigenvalues of H(k). SI. Figs. <ref>a to f present the mechanical band structures for the pyrochlore lattices in the topologically polarized and Weyl phases. As displayed in SI. Fig. <ref>a, we define the high-symmetry lines in the Brillouin zone. Following these lines, SI. Figs. <ref>b and c show the mechanical band structures for the topologically polarized and Weyl phases, respectively.
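For concreteness, this band-structure computation can be sketched in a few lines of Python; the routine C_of_k below is a placeholder for an implementation of the Fourier-transformed compatibility matrix in SI. Eq. (<ref>), and the wavevector path is assumed to be supplied by the user.

import numpy as np

def phonon_bands(C_of_k, k_path):
    # Eigenfrequencies of the square-root mechanical Hamiltonian
    # H(k) = [[0, C(k)], [C(k)^dagger, 0]] along a list of wavevectors.
    bands = []
    for k in k_path:
        C = C_of_k(k)                       # 12 x 12 complex matrix
        nb, ns = C.shape
        H = np.zeros((nb + ns, nb + ns), dtype=complex)
        H[:nb, nb:] = C
        H[nb:, :nb] = C.conj().T
        # eigenvalues come in +/- pairs; the non-negative ones are the phonon bands
        bands.append(np.sort(np.linalg.eigvalsh(H)))
    return np.array(bands)

Zero eigenvalues encountered along the path signal floppy (Weyl) modes, which is how the gapless trajectory in SI. Fig. <ref>f is detected.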
These bands are gapped for the wavevectors along the high-symmetry lines except for the Γ-point, where the k=0 wavevector corresponds to the trivial mechanical zero modes that translationally or rotationally displace the lattice, as previously illustrated in SI. Fig. <ref>. We further find that, interestingly, for a certain wavevector trajectory, the behaviors of the corresponding mechanical bands differ between the topologically polarized and Weyl phases. This wavevector trajectory is defined in SI. Fig. <ref>d, where the wavevectors follow a Weyl line of the mechanical Weyl phase. By following this wavevector trajectory, we compute the mechanical band structure in the topologically polarized phase, which still reflects the “gapped nature everywhere except for the trivial point" (SI. Fig. <ref>e). However, in the Weyl phase, the mechanical band structure displays gaplessness across all wavevectors along this trajectory, due to the emergence of zero-frequency Weyl bulk states (SI. Fig. <ref>f). Mechanically isostatic lattices, such as the generalized pyrochlore lattice, can have multiple pairs of gapless Weyl lines depending on their geometric configurations. For instance, in the “10^∘-sheared" generalized pyrochlore lattice (see the corresponding unit cell configuration in the inset of Fig. 2c and the specific parameters in SI. Eqs. (<ref>)), the lattice has one pair of Weyl lines in reciprocal space, as shown in SI. Fig. <ref>. Weyl lines have a significant impact on the topological winding numbers as well as on the phases of the static mechanical properties in isostatic lattices. First, the winding numbers 𝒩_i(k) for i=1,2,3 are no longer globally defined in the Brillouin zone. They change from one integer to another as the integration trajectory k→k+b_i moves from the exterior to the interior of the Weyl lines, as shown by the red dashed lines in SI. Fig. <ref>. Second, winding numbers become ill-defined when their integration trajectory intersects with Weyl lines, because the compatibility matrix becomes singular at these gapless wavevectors. Thus, we seek an index that quantitatively characterizes the topological charge of gapless Weyl lines. This topological number is defined by the following Berry phase, 𝒩_w = -1/2πi∮_C dk·∇_k ln det𝐂(k), whose integration trajectory C is a closed loop that encloses the considered Weyl line, as illustrated by the red circle in SI. Fig. <ref>. This integer value, as shown in SI. Eq. (<ref>), describes the topological nature of the mechanical Weyl line in reciprocal space, and it corresponds to the jump in the winding number in SI. Eq. (<ref>) when the integration trajectory moves from the outside to the inside of the Weyl line. Since the topological charge of Weyl lines cannot change continuously, they are topologically protected against variations in the geometric parameters of the pyrochlore lattice. As the total topological charge of the Weyl lines is neutral, Weyl lines always emerge in pairs. This can be alternatively understood by noticing the time-reversal symmetry of Newtonian mechanics, where every gapless wavevector k implies a partner gapless wavevector at -k. Thus, mechanical Weyl lines are topologically robust and cannot be easily removed until pairs of Weyl lines meet and annihilate. This robustness posed the major challenge in realizing mechanical metamaterials free of Weyl lines in previous works <cit.>, a challenge we have overcome in SI. Sec.
2(A). Mechanically isostatic lattices can have more than one pair of gapless Weyl lines in their reciprocal spaces, depending on their geometric configurations. For example, We further shear the lattice configuration from the configuration in SI. Eq. (<ref>) with the Guest-Hutchinson mode angle of 35^∘. The new geometry of the generalized pyrochlore lattice has Δ A, Δ B, Δ C, and Δ D, remain the same as those in SI. Eq. (<ref>), whereas the change in the primitive vectors, Δa_1, Δa_2, and Δa_3, are given by Δa_1 = (1.1670,0.4673,0.3499)ℓ, Δa_2 = (0.0728,0.6935,1.1896)ℓ, Δa_3 = (0.7755,-0.4388,1.1071)ℓ. This configuration is pictorially depicted in the inset of Fig. 2e, and can be equivalently understood by shearing the unit cell configuration in the inset of Fig. 2a with a Guest-Hutchinson mode angle of 45^∘ (i.e., 10^∘+35^∘). This configuration is also in the mechanical Weyl phase, but a total of four Weyl lines emerge in the Brillouin zone. While both the lattice structures in Figs. 2c and 2e are in the Weyl phases, these two Weyl phases are topologically distinct as they have different numbers of mechanical Weyl lines. The generalized pyrochlore lattice experiences a topological phase transition, from the two-Weyl-line phase in Figs. 2c to the four-Weyl-line phase in Figs. 2e. §.§ Bulk-boundary correspondence of topological floppy modes in isostatic lattices In this section, we demonstrate the bulk-boundary correspondence by relating winding numbers to the number of localized floppy modes on open boundaries of isostatic lattices. To this end, we first impose periodic boundary conditions to the generalized pyrochlore lattice in all three lattice vector directions. This allows the parameterization of the compatibility matrix in terms of the wavevector in the three-dimensional Brillouin zone, through C = C(k), where the matrix elements are itemized in SI. Eq. (<ref>). Likewise, floppy modes u(k) can be parameterized by the wavevectors k as well, and they yield the defining equality C(k)u(k)=0. The non-vanishing solution of u(k) demands the compatibility matrix to satisfy C(k)=0. Under periodic boundary conditions, the numbers of floppy modes and states of self-stress are both zero, implying that SI. Eq. (<ref>) does not have real-valued k solution. However, cutting a periodic boundary can create a pair of open boundaries, as shown in SI. Fig. <ref>. This is equivalent to cut three bonds that are connected to the C-site for every supercell. The resulting generalized pyrochlore lattice is subjected to open boundary conditions in the primitive direction a_3 (i.e., boundaries perpendicular to b_3 are cut open) and periodic boundary conditions in a_1 and a_2. The resulting structure can be viewed as a lattice that is periodic in a_1 and a_2, with a large unit cell, as shown in Fig. 3c of the main text. Elastic waves in this “a_1 and a_2"-periodic lattice can be classified by surface wavevectors k_ surf = (k·a_1, k·a_2) defined in the two-dimensional surface Brillouin zone. Thus, for every large unit cell, the number difference between floppy modes and states of self-stress follow the Calladine-isostatic counting rule ν_ fm-ν_ sss = 3, where ν_ fm and ν_ sss denote the numbers of floppy modes and states of self-stress per large unit cell (supercell), respectively. 
This counting rule can be further extended to the generalized Calladine-isostatic theorem, in which, for each surface wavevector k_ surf in the two-dimensional surface Brillouin zone, the number difference between floppy modes and states of self-stress yields ν_ fm(k_ surf)-ν_ sss(k_ surf)=3. Finally, as there are no states of self-stress in the generalized pyrochlore lattice <cit.> (ν_ sss(k_ surf)=0), a total of three floppy modes per surface wavevector should emerge from the lattice, with ν_ fm(k_ surf)=3. Given the surface wavevector k_ surf, each of the three mechanical floppy modes can localize on either the top or the bottom boundary of the generalized pyrochlore lattice. Thus, the spatial decay rates along a_3, namely Im (k·a_3) for the three floppy modes, can take different values for a given surface wavevector k_ surf. Moreover, their signs indicate the localization behaviors of the boundary floppy modes. When Im (k·a_3)<0, the floppy mode is exponentially localized on the “top" boundary that a_3 points to (the normal outward direction of the top boundary is +b_3), whereas floppy modes with Im (k·a_3)>0 are localized on the bottom surface (the normal outward direction of the bottom surface is -b_3). The numbers of top- and bottom-boundary floppy modes, ν_↑(k_ surf) and ν_↓(k_ surf), directly correspond to the negative and positive Im (k·a_3) solutions of det C(k)=0, and they yield ν_↑(k_ surf)+ν_↓(k_ surf)=ν_ fm(k_ surf)=3, in accordance with SI. Eq. (<ref>). Thus, the determinant equation, det C(k)=0, must have three complex-valued k·a_3 solutions for each k_ surf∈[-π,π]× [-π,π]. This can be doubly verified by simply computing the determinant of the 12× 12 compatibility matrix C(k) in SI. Eq. (<ref>), which can be symbolically expressed as a third-order polynomial of e^ik·a_3: det C(k)=α e^-ik·a_3∏_l=1^3 (e^ik·a_3-λ_l). Here, α is the overall coefficient of the polynomial, and λ_l for l=1,2,3 are its three roots. Both α and λ_l vary as the surface wavevector k_ surf moves around in the two-dimensional surface Brillouin zone. As a result, along the primitive vector direction a_3, the spatial decay rate of the l-th floppy mode is Im (k·a_3) = -ln |λ_l|. The number of negative (positive) decay-rate solutions, i.e., the number of -ln|λ_l|<0 (-ln|λ_l|>0) solutions, corresponds to the number of floppy modes that are exponentially localized at the top (bottom) boundary of the lattice. Next, we build the relationship between the signs of -ln|λ_l| and the integer values of the topological winding numbers. To this end, we express the winding numbers as 𝒩_3(k) = 1 - 1/2πi∑_l=1^3 ∫_0^2π dq_3 ∂_q_3ln(e^i(k·a_3+q_3)-λ_l). When |λ_l|>1, the contribution of the corresponding term to the winding number is zero, whereas for |λ_l|<1, the contribution is -1. Consequently, 𝒩_3(k) takes the values 1, 0, -1, or -2 if zero, one, two, or three floppy modes are localized on the bottom boundary of the lattice (three, two, one, or zero floppy modes localizing on the top). At this point, we have established a rigorous mapping between the integer values of the topological winding number and the number of exponentially localized floppy modes on the top and bottom boundaries, which is referred to as the topological bulk-boundary correspondence. Mechanically isostatic lattices exhibit different topological phases. For instance, the generalized pyrochlore lattice is in the topologically polarized phase, as described in SI. Sec. 2(A), and it does not have any gapless Weyl lines.
In this phase, the winding numbers remain invariant as k traverses the three-dimensional Brillouin zone. Consequently, the localization of boundary floppy modes does not change when the surface wavevector k_ surf is varied in the two-dimensional surface Brillouin zone. In the Weyl phase (SI. Sec. 2(B)), isostatic lattices contain gapless lines in the three-dimensional Brillouin zone, where C(k_w)=0, and the floppy modes have a spatial decay rate that approaches zero, Im (k_w·a_3)=0, at Weyl wavevectors. This means that the wavevector of Weyl floppy modes extends deeply into the lattice. When moving away from the Weyl lines, C(k)≠ 0 for real-valued wavevectors in the three-dimensional reciprocal space. In other words, for real-valued surface wavevectors k_ surf = (k·a_1, k·a_2) that are not on the surface projection of the Weyl lines, the solution for k·a_3 must be complex. The imaginary part of Im (k·a_3) indicates the spatial decay rate of the boundary floppy mode. Since Im (k·a_3) becomes zero when k_ surf lies on top of the Weyl line projections, it changes sign as k_ surf moves from the outside to the inside of the Weyl line projections. This indicates a shift in the localization boundary of the corresponding floppy modes, and the jump in the integer value of the winding number 𝒩_3(k), as k_ surf varies. This indicates that the topological charge of mechanical Weyl lines can be computed by the difference between the winding numbers when k_ surf moves from the outside to the inside of the Weyl line projections. In SI. Fig. <ref>, we plot the spatial decay rates of the floppy modes in terms of the surface wavevectors. In the partially-polarized and fully-polarized topological phases, the signs of Im (k·a_3) remain invariant in the two-dimensional surface Brillouin zone. In particular, in the fully-polarized topological phase, all decay rates are consistently positive (negative) throughout the entire surface Brillouin zone, indicating the bottom (top) boundary of the pyrochlore lattice that is significantly softer than the opposite parallel surface. This is pictorially manifested in SI. Fig. <ref>a, which corresponds to the geometric parameters shown in SI. Eq. (<ref>), and the unit cell configuration in Fig. 2a of the main text. All decay rates are positive for the entire 2D surface Brillouin zone, and all floppy modes are localized on the bottom surface. In the four-Weyl-line phase, which corresponds to the geometric parameters in SI. Eqs. (<ref>) and the unit cell configuration in Fig. 2e of the main text, the decay rate changes from positive to negative as the surface wavevector passes through the surface projection of Weyl lines, and the critical borderline, Im (k·a_3)=0, corresponds to the surface Brillouin-zone-projections of Weyl lines. This result can be seen in SI. Fig. <ref>b. Thus, Weyl lines not only affect the topological winding numbers and phases, but also significantly impact the boundary mechanics of isostatic lattices. Weyl line projections mark the critical surface wavevectors when the decay rates become zero. The decay rates change signs as the surface wavevector moves from the exterior to the interior of the surface projections of Weyl lines. This indicates a flip of localized floppy mode from one open boundary to the opposite. Consequently, for mechanically isostatic lattices in the Weyl phases, topologically localized floppy modes arise on a pair of parallel open surfaces, both of which manifest floppy mode-induced softness. 
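The decay-rate analysis of SI. Figs. <ref>a and b can be sketched numerically as follows. The Python fragment below is an illustrative sketch rather than the code used for the figures: det_C is again a user-supplied routine for the determinant of SI. Eq. (<ref>), and the three roots λ_l of the cubic polynomial in e^ik·a_3 are extracted by sampling that determinant on the unit circle at fixed surface wavevector.

import numpy as np

def boundary_mode_count(det_C, a1, a2, a3, k_surf):
    # Count the floppy modes localized on the bottom and top boundaries
    # for a fixed surface wavevector k_surf = (k.a1, k.a2).
    vol = np.dot(a1, np.cross(a2, a3))
    b1 = 2 * np.pi * np.cross(a2, a3) / vol
    b2 = 2 * np.pi * np.cross(a3, a1) / vol
    b3 = 2 * np.pi * np.cross(a1, a2) / vol

    # sample z = exp(i k.a3) at four points and reconstruct the cubic
    # alpha * prod_l (z - lambda_l) = exp(i k.a3) * det C(k)
    thetas = 2 * np.pi * np.arange(4) / 4
    zs = np.exp(1j * thetas)
    vals = []
    for th, z in zip(thetas, zs):
        k = (k_surf[0] * b1 + k_surf[1] * b2 + th * b3) / (2 * np.pi)
        vals.append(det_C(k) * z)
    coeffs = np.linalg.solve(np.vander(zs, 4), np.array(vals))
    lams = np.roots(coeffs)

    decay = -np.log(np.abs(lams))      # Im(k.a3) of the three floppy modes
    n_bottom = int(np.sum(decay > 0))  # localized on the bottom surface
    n_top = int(np.sum(decay < 0))     # localized on the top surface
    return n_bottom, n_top

In the fully-polarized phase this routine returns (3, 0) for every surface wavevector, whereas in the Weyl phases the counts change when k_surf crosses the surface projections of the Weyl lines, where one decay rate passes through zero.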
§.§ Topological phase diagram of the generalized pyrochlore lattice Finally, we discuss the topological phase diagram of the generalized pyrochlore lattice as the geometric parameters vary. As shown by SI. Fig. <ref>, we construct the topological mechanical phase diagram by varying the geometric parameter, which is the site position of the C-vertex. The positions of other sites, including A, B, and D, as well as the primitive vectors a_1, a_2, and a_3, remain the same as those depicted in the unit cell of the topologically fully-polarized configuration. Specifically, we define the modified C-vertex site position, denoted as C_ phase, using the following expression: C_ phase = C+x(a_1+a_2)+ya_3. Here, C represents the site position of the C-vertex in the fully-polarized topological mechanical configuration used in our experiment. a_1=ℓ(1,1,0), a_2=ℓ(0,1,1), and a_3=ℓ(1,0,1) are the primitive vectors of the fully-polarized topological mechanical lattice (as defined in the main text). x and y correspond to the horizontal and vertical axes of the phase diagram, respectively. The color-coded regions in SI. Fig. <ref> correspond to distinct topological phases, where yellow, green and blue areas indicate the geometric configurations of pyrochlore lattices containing four, two, and zero mechanical Weyl lines in their reciprocal spaces. We obtain this topological mechanical phase diagram by computing winding numbers (as defined in Eq. (1) of the main text). These winding numbers describe the topological nature of the mechanical band structure and are guaranteed to be integer values. Notably, they exhibit abrupt jumps from one integer to another during topological phase transitions, resulting in the sharp phase boundaries observed. § MECHANICAL TRANSFER MATRIX: FULLY POLARIZE THE TOPOLOGICAL PHASE OF ISOSTATIC LATTICES Among various topological phases in mechanically isostatic lattices, including the fully-polarized, partially-polarized, and Weyl phases, the fully-polarized topological phase distinguishes itself from other two, as the boundary mechanical responses are strongly asymmetric in this phase. Thus, it is crucial to design the lattice geometry such that the generalized pyrochlore lattice lies in the fully-polarized topological phase. However, the parameter space of the compatibility matrix, as shown in SI. Eq. (<ref>), is 14-dimensional. It is almost impossible to perform an ergodic scan of all these parameters and pick out the fully-polarized topological phase among all possible phases. To overcome this challenge, we establish the three-dimensional mechanical transfer matrix that carries out the general design rule of spring-mass isostatic lattices. This analytic transfer matrix polarizes all floppy mode localization to the desired lattice direction by propagating floppy modes through nodes (mass particles) in the mechanical network. Using this methodology, the geometry of the entire structure is polarized to the fully polarized topological phase, and the corresponding floppy modes are consistently driven towards the top or the bottom boundary of the lattice. This analytic transfer matrix significantly reduces the number of adjustable parameters from 14 to 3, making it possible to numerically search the fully-polarized topological mechanical phase in the generalized pyrochlore lattice. To apply this transfer matrix, we consider the unit cell of the pyrochlore lattice, shown in SI. Fig. 
<ref>, which contains four nodes (mass particles marked by A(n), B(n), C(n), D(n)) and 12 (intracell) + 6 (intercell) Hookean springs. Among these 18 Hookean bonds presented in SI. Fig. <ref>, 12 of them belong to the unit cell and are denoted as l_i(n) for i=1,2,…, 12, whereas other 6 bonds come from the elastic springs in the nearest-neighbor unit cells. These bonds are represented by l_7(n+(1,-1,0)), l_8(n+(0,1,-1)), l_9(n+(-1,0,1)), l_10(n+(-1,0,0)), l_11(n+(0,-1,0)), and l_12(n+(0,0,-1)). These 18 elastic springs are connected to the four nodes within the unit cell. We further analyze the unit cell by decomposing it into four nodes, each of which is linked to six Hookean springs. To be specific, node A(n) is connected to l_1(n), l_3(n), l_4(n), l_7(n+(1,-1,0)), l_9(n), and l_10(n). Node B(n) is connected to l_1(n), l_2(n), l_5(n), l_7(n), l_8(n+(0,1,-1)), and l_11(n). Node C(n) connects to l_2(n), l_3(n), l_6(n), l_8(n), l_9(n+(-1,0,1)), and l_12(n). Node D(n) is linked to l_4(n), l_5(n), l_6(n), l_10(n+(-1,0,0)), l_11(n+(0,-1,)), and l_12(n+(0,0,-1)). Therefore, each site in the unit cell can be viewed as one node connected to six nearest-neighbor bonds. Below, we establish the analytic mechanical transfer matrix for such “one node and six bond" system. The mechanical property of the nodes and bonds can be modeled using the three-dimensional mechanical transfer matrix, which is constructed as follows. As shown in SI. Fig. <ref>, a node is connected to six central-force springs. One of the ends of every spring is connected to the node (mass particle), and the bonds are labeled by i or i' for i=1,2,3. The unit vectors of the bond orientations are denoted as m̂_i = (m_ix, m_iy, m_iz) and m̂_i' = (m_ix', m_iy', m_iz') for i=1,2,3. Consider a displacement u = (u_x, u_y, u_z) of this mass particle. The site displacement induces elongations on the six bonds, given by U_i = u·m̂_i and U_i' = u·m̂_i' for i=1,2,3. Since we are considering floppy modes, each bond can neither be elongated or abbreviated. Thus, the other end of bond i or i' has to induce a displacement whose longitudinal projection must compensate the elongation caused by U_i or U_i'. As a result, every mass particle serves to “propagate" the longitudinal projection of node displacements from bond to bond. To compute how floppy mode projections propagate in a network, we establish the relationship between U_i and U_i' by a 3× 3 transfer matrix 𝐓(m̂_1', m̂_2', m̂_3'|m̂_1, m̂_2, m̂_3), U_i' = ∑_j=1^3 𝐓_ij(m̂_1', m̂_2', m̂_3'|m̂_1, m̂_2, m̂_3)U_j where every matrix element is given by T_ij(m̂_1', m̂_2', m̂_3'|m̂_1, m̂_2, m̂_3) = δ_ij+ ∑_a,b=1^3ϵ_jab/2(m̂_i'-m̂_i)·(m̂_a×m̂_b)/m̂_1·(m̂_2×m̂_3) for i,j=1,2,3. In the context of floppy mode considerations, the longitudinal projections of site displacements from bond to bond are related by this transfer matrix. In other words, the nodes that connect to elastic bonds can be viewed as a “pipeline" that propagates “the flow of the longitudinal component of floppy mode displacements". Given an arbitrary crystalline or amorphous isostatic network, transfer matrix always solves the floppy mode by computing the longitudinal projections of site displacements throughout the entire network. Using the analytic technique of transfer matrix, we demonstrate that all floppy modes in the regular pyrochlore lattice are bulk modes. Subsequently, we show that these bulk floppy modes transition to edge modes when the lattice geometry is deformed into the generalized pyrochlore lattice. 
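The transfer matrix of SI. Eq. (<ref>) is simple to implement; the Python sketch below is a minimal illustration, with the incoming and outgoing bond unit vectors supplied as three-component arrays.

import numpy as np

def transfer_matrix(m_out, m_in):
    # 3x3 transfer matrix T(m'_1, m'_2, m'_3 | m_1, m_2, m_3) relating the
    # longitudinal floppy-mode projections on outgoing and incoming bonds.
    eps = np.zeros((3, 3, 3))
    eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
    eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

    denom = np.dot(m_in[0], np.cross(m_in[1], m_in[2]))
    T = np.eye(3)
    for i in range(3):
        for j in range(3):
            for a in range(3):
                for b in range(3):
                    T[i, j] += 0.5 * eps[j, a, b] * np.dot(
                        m_out[i] - m_in[i], np.cross(m_in[a], m_in[b])) / denom
    return T

For straight filaments, m̂_i' = m̂_i, the routine returns the identity matrix, which reproduces the bulk floppy modes of the regular pyrochlore lattice discussed next.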
In the regular pyrochlore lattice, all filaments (Hookean bonds that connect nearest neighbor sites) form straight lines. In this idealized spring-and-mass model, the regular pyrochlore lattice displays an interesting property: all floppy modes are bulk modes. This can be seen in the following analysis. Because all bonds form straight lines, the bond orientations yield the relationship m̂_i=m̂_i' for all nodes. As a result, the transfer matrices shown in SI. Eq. (<ref>), are 3× 3 identity matrices for all nodes throughout the entire network of the regular pyrochlore lattice. Thus, the longitudinal projections of floppy modes yield U_i'=U_i for i=1,2,3, which are conserved along each straight line. As the site displacements are the linear superposition of the longitudinal projections of floppy modes, the site movements neither decrease nor increase throughout the entire lattice. In summary, all floppy modes are bulk states in the regular pyrochlore lattice. This is similar to the regular kagome lattice, which has straight filaments that also carry bulk floppy modes <cit.>. The regular pyrochlore lattice can be seen as a three-dimensional analog of the regular kagome lattice. In the two-dimensional generalized kagome lattice, the mechanical frame exhibits static mechanical phases with different topologies where the floppy modes localize at different edges. Can the pyrochlore lattice also exhibit such topological phases? The answer is yes. To this end, we slightly deform the geometry of the regular pyrochlore lattice by bending the straight lines composed of Hookean springs into zigzagged ones. This can be realized by simply changing the orientations of nearest-neighbor bonds to m̂_i≠m̂_i'. Moreover, we ask that the orientation difference, δm̂_i = m̂_i'-m̂_i, to satisfy the “slight bending condition" |δm̂_i|≪ 1, which allows for perturbative expansions on the matrix elements of the mechanical transfer matrix. To the leading order, the transfer matrix can be approximated as T_ij(m̂_1', m̂_2', m̂_3'|m̂_1, m̂_2, m̂_3) ≈ δ_ij[1+ ∑_a,b=1^3ϵ_jab/2δm̂_i·(m̂_a×m̂_b)/m̂_1·(m̂_2×m̂_3)]. Below, we use this first-order approximation to discuss how floppy mode projections evolve from bond to bond in the generalized pyrochlore lattice. We consider the four sites in the unit cell of the generalized pyrochlore lattice, as shown in SI. Fig. <ref>, in which the unit vectors of the edges are marked as t̂_i for i=1,2,…,12. First, consider a site displacement u_D(n), whose longitudinal projections on edges l_10(n-(1,0,0)), l_11(n-(0,1,0)), l_12(n-(0,0,1)), l_4(n), l_5(n), and l_6(n), are U_10(n-(1,0,0)), U_11(n-(0,1,0)), U_12(n-(0,0,1)), U_4(n), U_5(n), and U_6(n). The transfer matrix at site D(n) relates these displacement projections via ([ U_4(n); U_5(n); U_6(n); ]) = 𝐓(t̂_4, t̂_5, t̂_6|t̂_10, t̂_11, t̂_12) ([ U_10(n-(1,0,0)); U_11(n-(0,1,0)); U_12(n-(0,0,1)); ]). Given the “input flow" of the floppy mode projections U_10(n-(1,0,0)), U_11(n-(0,1,0)), and U_12(n-(0,0,1)), transfer matrix solves the “output flow" of floppy mode projections U_4(n), U_5(n), and U_6(n). Second, the A(n) site connects six bonds labeled l_4(n), l_7(n+(1,-1,0)), l_9(n), l_10(n), l_1(n), and l_3(n). The floppy mode displacement of this A(n) site can be projected onto these bonds, and these longitudinal projections are related by a transfer matrix ([ U_10(n); U_1(n); U_3(n); ]) = 𝐓(t̂_10, t̂_1, -t̂_3|t̂_4, t̂_7, -t̂_9) ([ U_4(n); U_7(n+(1,-1,0)); U_9(n); ]). 
Given the input flow of floppy mode projections, U_4(n), U_7(n+(1,-1,0)), and U_9(n), transfer matrix solves the output flow of U_10(n), U_1(n), and U_3(n). Third, the C(n) site is connected to six bonds, marked as l_3(n), l_6(n), l_8(n), l_9(n+(-1,0,1)), l_12(n), and l_2(n). The floppy mode projections are related by ([ U_9(n+(-1,0,1)); U_12(n); U_2(n); ])= 𝐓(-t̂_9, t̂_12, -t̂_2|-t̂_3, t̂_6, -t̂_8) ([ U_3(n); U_6(n); U_8(n); ]). This transfer matrix solves the output flow U_9(n+(-1,0,1)), U_12(n), and U_2(n) provided that the input flow U_3(n), U_6(n), and U_8(n) are given. Finally, the B(n) site is linked to l_1(n), l_2(n), l_5(n), l_7(n), l_8(n+(0,1,-1)), and l_11(n). The floppy mode projections are related by ([ U_7(n); U_8(n+(0,1,-1)); U_11(n); ])= 𝐓(t̂_7, -t̂_8, t̂_11|t̂_1, -t̂_2, t̂_5) ([ U_1(n); U_2(n); U_5(n); ]). Summarizing all these transfer matrices, namely SI. Eqs. (<ref>–<ref>), we use the input flow of floppy mode projections U_7(n+(1,-1,0)), U_8(n), U_9(n), U_10(n-(1,0,0)), U_11(n-(0,1,0)), U_12(n-(0,0,1)) to obtain the output flow of floppy mode projections U_7(n), U_8(n+(0,1,-1)), U_9(n+(-1,0,1)), U_10(n), U_11(n), U_12(n). We now incorporate the geometric deformations, namely Δ A = (Δ A_x, Δ A_y, Δ A_z), Δ B = (Δ B_x, Δ B_y, Δ B_z), Δ C = (Δ C_x, Δ C_y, Δ C_z), and Δ D = (Δ D_x, Δ D_y, Δ D_z), to the initial geometry of the regular pyrochlore lattice. Therefore, the bond vectors in the pyrochlore lattice are changed from l_i to l_i+Δl_i. For example, the bond vector l_1 is now changed into l_1+Δ B-Δ A, as indicated by SI. Eqs. (<ref>). This allows us to reach a new ground state called the generalized pyrochlore lattice. As the geometric changes are small comparing to the bond lengths, the “bending angles", i.e., orientation difference between nearest neighbor bonds, are very small. Using the first-order approximation in Δl_i, we have δt̂_i = t̂_i+6-t̂_i ≈ -2(Δl_i/|l_i|-(l_i·Δl_i)l_i/|l_i|^3) for i = 1,2,…, 6, where l_i is the bond vector of the ith edge, and Δl_i is the change in the bond vector between the generalized and regular pyrochlore lattices. Substituting SI. Eq. (<ref>) into SI. Eqs. (<ref>–<ref>) significantly simplifies the recursion relations among the longitudinal projections of site displacements: (1-√(2)δ t_1y)U_7(n+(1,-1,0)) = U_7(n), (1-√(2)δ t_2z)U_8(n+(0,1,-1)) = U_8(n), (1-√(2)δ t_3x)U_9(n+(-1,0,1)) = U_9(n), (1+√(2)δ t_4z)U_10(n-(1,0,0)) = U_10(n), (1+√(2)δ t_5x)U_11(n-(0,1,0)) = U_11(n), (1+√(2)δ t_6y)U_12(n-(0,0,1)) = U_12(n). Now, we ask that the floppy mode projections consistently and exponentially decrease from the bottom boundary (i.e., the lattice boundary whose normal is -b_3) to the top boundary (i.e., the surface normal is +b_3). To be specific, we demand U_8(n)<U_8(n+(0,1,-1)), U_9(n+(-1,0,1))<U_9(n), and U_12(n+(-1,0,1))<U_12(n), leading to the constraints Δ A_x>Δ C_x, Δ C_y>Δ D_y, and Δ B_z>Δ C_z. Under these constraints, all floppy modes tend to exponentially localize on the bottom boundary of the pyrochlore lattice. SI. Eq. (<ref>) significantly reduces the 14 adjustable parameters in the compatibility matrix to only 3. The geometric parameters used in the fully-polarized topological phase in SI. Sec. 2(C), directly follow the constraints provided by SI. Eqs. (<ref>), with the other parameters, including Δ A_y, Δ A_z, Δ B_x, Δ B_y, Δ D_x, and Δ D_z, are randomly generated by our numerical simulations. Our work realizes, for the first time, the topologically fully polarized isostatic lattices in three dimensions (3D). 
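As a quick consistency check, the fully-polarized geometry quoted in SI. Sec. 2(A), Δ A=0.053(1,0.3,0)ℓ, Δ B=0.053(0.55,0,1)ℓ, Δ C=0.053(-1,1,-1)ℓ, and Δ D=0.053(-1.8,-1,1.2)ℓ, gives Δ A_x=0.053ℓ>Δ C_x=-0.053ℓ, Δ C_y=0.053ℓ>Δ D_y=-0.053ℓ, and Δ B_z=0.053ℓ>Δ C_z=-0.053ℓ, so all three inequalities of SI. Eq. (<ref>) are satisfied by the configuration used in our experiment, consistent with its fully-polarized boundary mechanics.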
This achievement enables novel scientific and application advancements not possible for one- and two-dimensional isostatic metamaterials, as detailed below in SI. Sec. 5 and 6, respectively. Here, we give a brief argument to intuitively understand how Δ A, Δ B, Δ C, and Δ D, as well as Δa_i=1,2,3 are chosen to realize the full polarization of mechanical topology. In SI. Fig. <ref>, we exemplify one such node that is connected to six springs. We can think of this “one-node, six-spring" junction as a “one-node, three-filament" system, because we design the red bonds such that the bending angle is small comparing to the crossing angles between the filaments. We consider the spatial evolution of the floppy mode from bond to bond in the central red filament, which is mathematically derived by the mechanical transfer matrix. SI. Fig. <ref> itemizes the spatial growing and decaying behaviors of the floppy mode in the central red filament when it passes through the junction. The spatial growing or decaying behavior is governed by the sign of Φ_1 = ∑_a,b=1^3 ϵ_iabδm_i·(m_a×m_b)/2, namely the bending angle of the central fiber, and the sign of Ω = m_1·(m_2×m_3), namely the crossing angles between neighboring filaments. Based on the design principle presented in SI. Fig. <ref> for every single junction, we ask that in the pyrochlore lattice, floppy modes consistently grow from the top layer to the bottom layer when they pass through all junctions. This is realized by designing the sign of bending angles of the filaments for every junction, which in turn is controlled by site positions and primitive vectors such that every node in the pyrochlore isostatic lattice consistently increases the floppy mode when it propagates from bond to bond in the downward vertical direction. Fig. 1 of the main text illustrates one such geometric configuration. Interestingly, we find that, by changing the site positions only, while the primitive vectors stay unchanged, the isostatic lattice is already capable of increasing the floppy mode from the top boundary to the bottom boundary. In conclusion, SI. Fig. <ref> presents the design principle of the full polarization of mechanical topology in isostatic lattices, such as the pyrochlore lattice. By following this rule, we modify site positions, namely Δ X with X=A,B,C,D, and primitive vectors, namely Δa_i with i=1,2,3, to ensure that the floppy mode consistently grow from the top surface to the bottom boundary. While Fig. 1 of the main text illustrates a design with the Δa_i=0 condition, this is not a necessary condition for the full polarization of mechanical topology. § SCIENTIFIC ADVANCEMENTS FROM THE TOPOLOGICAL PYROCHLORE LATTICE §.§ Topologically insulated mechanical phase in 3D While previous studies <cit.> have managed to create 3D isostatic structures, their systems always exhibit mechanical bulk and surface conduction. Our pyrochlore lattice is the first isostatic structure that demonstrates the fully-gapped and fully-polarized topological mechanical phase in 3D. This results in the mechanical insulation for both the bulk and top-surface of the pyrochlore lattice. To elucidate the fully-gapped nature of the generalized pyrochlore lattice, we establish the Schrödinger-like mechanical Hamiltonian by taking the “square root" of the Newtonian dynamical matrix <cit.>. This mechanical Hamiltonian reads H = ([ 0 C; C^⊤ 0; ]), where C is referred to as the compatibility matrix. We numerically compute the eigenvalues in Eq. (<ref>) using Bloch boundary conditions. 
We then plot the mechanical band structures for the topologically polarized and Weyl phases in Figs. <ref>c and d, respectively. In both cases, we compute the band structures for wavevectors that follow the orange mechanical Weyl line defined in SI. Fig. <ref>b. The zero-frequency Weyl modes shown in SI. Fig. <ref>d differentiate the band structures between SI. Figs. <ref>c and d, with SI. Fig. <ref>c representing bulk mechanical insulation for the topologically polarized phase and SI. Fig. <ref>d displaying bulk mechanical conduction for the Weyl phase. Furthermore, the results in Figs. 3 and 4 in the main text demonstrate that in the fully-polarized topological phase, the top boundary of the pyrochlore lattice is entirely free of floppy modes, leading to top surface mechanical insulation. As a result, mechanical conduction is solely due to the bottom boundary of the topological pyrochlore lattice. These unique features of the fully-polarized topological phase hold promise for a range of future applications. For instance, the mechanical conduction that occurs solely at one boundary of the fully-polarized topological lattice may transform into chiral floppy modes when the intrinsic spins of phonon waves are considered. This fundamental mechanism, namely the mechanical spin-momentum locking <cit.>, has recently been experimentally realized in 2D isostatic lattices <cit.> that exhibits backscattering-free acoustic propagation. Chiral floppy modes, if realized in 3D systems, could revolutionize the field of isostatic metamaterials by enabling low-frequency sound filtering, novel transducer designs, and mechanical signal processing modulators. Moreover, these modes could facilitate the classification of strong and weak mechanical topological insulators, which are novel and unique concepts in 3D systems. These ideas draw inspiration from similar classifications in 3D electronic topological insulators <cit.>. §.§ Surface topological floppy modes robust against disorders Topological materials should exhibit robustness against disorder, allowing their unique edge modes to persist even when periodicity is disrupted. Here we show robust surface (2D) topological floppy modes against disorders in 3D systems. To incorporate spatial disorder in the generalized pyrochlore lattice, we shift their site positions to new coordinates using the formula X(n_1,n_2,n_3) = X+[1+R rand(n_1,n_2,n_3)]δ X. Here, X represents the four sites A, B, C, D in the unit cell, (n_1,n_2,n_3) denote the three lattice indices that mark the position of the unit cell. δ X corresponds to the geometric change of the regular pyrochlore lattice that leads to the topologically polarized phase (see their definitions in the main text). rand(n_1,n_2,n_3) is the random number ranging from -1 to 1, and R represents the disorder strength. As the disorder strength rises, the compatibility matrix and the mechanical Hamiltonian (see SI. Eq. (<ref>)) change accordingly. We calculate the eigenvalues of the mechanical Hamiltonian with increased disorder strength to investigate its impact on the band structure. A surface wavevector of k=b_1/10+b_2/4 is chosen to illustrate the band structures in SI. Figs. <ref>a for the topologically polarized pyrochlore lattice. SI. Figs. <ref>b depicts the summation of floppy mode weight in the supercell pyrochlore lattice, with the number of floppy modes localized on the top and bottom open surfaces marked beside these plots. Floppy modes exhibit topological robustness. 
This is evidenced by both the opening of the band gap in SI. Fig. <ref>a, and the spatial profile of floppy mode weight in SI. Fig. <ref>b that remains insensitive to an increase in disorder strength (ν_↓:ν_↑=3:0 remains unchanged as disorder increases, where ν_↓ and ν_↑ denote the number of floppy modes localized on the bottom and top boundaries). The 3D fully-polarized topological lattice is fundamentally distinct from its 2D counterparts, such as the kagome lattice. As we highlight in the following discussions, the out-of-plane displacements in 2D lattices lack topological robustness <cit.>, but they manifest topological polarization and protection in our 3D lattice. In summary, our design exhibits unprecedented mechanical properties in the 2D surface of 3D pyrochlore lattice, which is stable against disturbances to the internal structure of the lattice. §.§ Asymmetric propagation of elastic waves In the topologically fully-polarized phase, all mechanical floppy modes exclusively localize on the bottom boundary of the lattice. This directly leads to the mechanism called mechanical amplification, as floppy modes exponentially grow from the rigid top boundary to the soft bottom boundary. More importantly, when finite bending stiffness is considered, this amplification mechanism further extends the asymmetric mechanical propagation to non-zero frequencies. This asymmetric acoustic propagation for in-plane motions has been studied in 2D topologically polarized lattices <cit.>, but the out-of-plane displacements lack such asymmetry <cit.>. In our 3D lattice, asymmetric acoustic wave propagation is enabled in all three spatial dimensions. This is numerically demonstrated in SI. Fig. <ref>a, where the small but finite bending stiffness of COMSOL-constructed topological pyrochlore lattice raises the frequencies of “floppy modes" to finite values (and therefore termed as “soft modes" now), introducing a non-zero group velocity for these topological soft modes. Due to the mechanism of mechanical amplification, soft modes propagate in an asymmetric way between the top and bottom surfaces, as shown by SI. Fig. <ref>b. §.§ Static mechanical non-reciprocity in fully-polarized isostatic lattices Reciprocity is a fundamental principle for linear physical systems that obey time-reversal symmetry. It guarantees that the transmissibility function between any two points in space remains identical, irrespective of material or geometric asymmetries. This principle also remains valid for our fully-polarized topological pyrochlore lattice in the linear elastic regime. By breaking the symmetry in the transmissibility, an enhanced control over signal transport, isolation, and vibration protection can be attained. To achieve non-reciprocal transmission of physical quantities, one needs to either break time-reversal symmetry or include nonlinear effects <cit.>. So far, most devices break reciprocity using dynamical systems that break time-reversal symmetry. Here, we show that our fully-polarized metamaterial can break static reciprocity when nonlinear elasticity (large geometric deformation) is considered. To this end, we numerically construct a lattice consisting of 7× 7 × 8 unit cells, with open boundaries on the top and bottom, and fixed boundaries on the four side surfaces. The lattice constant, Hookean spring strength, and particle mass are all set to unity. Every particle is subjected to a weak on-site potential with the pinning spring constant k_0=10^-3. 
The fully-polarized topological mechanical lattice (SI. Fig. <ref>a) can be transformed by the Guest-Hutchinson mode with an angle of 45^∘ to reach the Weyl phase (SI. Fig. <ref>c). Below, we numerically address (linear mechanical) reciprocity and (nonlinear mechanical) non-reciprocity in both the topologically polarized and Weyl phases. To study mechanical non-reciprocity, we introduce “mechanical transmissibility" from point 1 to 2 in space, denoted by χ_12 = u_2/F_1. The choice of points 1 and 2 is arbitrary, but we exemplify them as the C-tips of the central unit cells on the top and bottom open surfaces. As shown in SI. Fig. <ref>a, F_1 is the input static force at point 1, and is in the normal direction of the top surface. u_2 is the normal projection of the resulting displacement at point 2. Similarly, we define χ_21 = u_1/F_2 as the mechanical transmissibility from 2 to 1. Finally, we define the “ratio of mechanical transmissibility", T_12 = χ_12/χ_21, that quantifies the the level of mechanical non-reciprocity. Linear elastic systems obey reciprocity by showing T_12=1. This stems from the time-reversal symmetric Green’s function of linear response theory. We numerically confirm this mechanical reciprocity (T_12=1) in the small-force regime (10^-7≤ F≤ 10^-5) using the algorithm of molecular dynamics. As the resulting site displacements are much less than the lattice constant (see SI. Fig. <ref>a), this force range defines the “linear elastic regime", where both the topologically polarized and Weyl phases exhibit mechanical reciprocity. However, nonlinear mechanics showcases strong non-reciprocity. Here, nonlinear effects only stem from geometric nonlinearity, because Hookean springs do not have material nonlinearity. This “nonlinear elastic regime" is reflected by SI. Figs. <ref>b and d within the regime of large input force 10^-4≤ F≤ 1, in which the resulting lattice deformations are comparable to the lattice constant. In this regime, T_12 (the ratio of mechanical transmissibility) significantly deviates from 1, reflecting the strongly asymmetric mechanical transmissibility function. In the topologically fully-polarized phase, T_12≫1 (blue curve of SI. Fig. <ref>), indicating that the mechanical transmissibility from the topologically rigid top boundary to the soft bottom boundary is drastically higher than in the reversed direction. This non-reciprocity stems from the nonlinear stiffening effect of the soft boundary, because this stiffening effect is much less significant for the rigid boundary. In the Weyl phase, T_12≪1, reflecting that the mechanical transmissibility from the top to bottom is much less than that in the reversed direction (orange curve in SI. Fig. <ref>). This is because, in the Weyl phase, the stiffening effect has a larger impact on the soft top surface than the bottom one. In summary, when fully-polarized topological stiffness meets nonlinear effects, our 3D isostatic lattice serves as the ideal device for static non-reciprocal mechanics. Moreover, the mechanical transmissibility significantly drops from T_12≫ 1 to T_12≪ 1 when the lattice is reversibly transformed from the topological phase to the Weyl phase by the Guest-Hutchinson mode. When bending stiffness is considered, floppy modes evolve into topological soft modes with non-zero frequencies. 
As a result, our topological pyrochlore lattice can exhibit dynamical non-reciprocity for all three spatial dimensions, in contrast to the 2D topological isostatic lattices that manifest dynamical non-reciprocity only for in-plane motions. §.§ Other future scientific directions Higher-order topological mechanical floppy modes— Recent developments in electronic topological insulators have unveiled a new class of topological states, namely higher-order topological modes, that localize not on the edges of the system but on its corners and hinges <cit.>. However, in 2D isostatic lattices, the design of higher-order topological mechanics requires a gapped mechanical band structure at the Γ-point, which necessitates breaking translational symmetry by placing mass points on fixed and frictionless rails <cit.>, substantially complicating the lattice structure. Our 3D design of isostatic metamaterials may resolve this gapless problem in the mechanical band structure, because unlike 2D isostatic lattices, a finite wavenumber k_z in the vertical direction can break translational symmetry in the horizontal directions, and ultimately gaps the mechanical bands at the Γ-point. Topological mechanical zero modes bounded by dislocations— Topologically protected mechanical floppy modes can be localized around lattice dislocations. This effect stems from the interplay between two Berry phases: the Burgers vector of the dislocation, and the topological polarization of the isostatic lattice. This dislocation-bounded softness has been studied in both 2D and 3D isostatic lattices <cit.>. However, to date, the 3D study is only theoretical, as the lattice required to achieve dislocation-bounded floppy modes is a stacked kagome lattice with a highly complex structure that cannot be realized experimentally. Our pyrochlore lattice is the first 3D topological metamaterial that can be experimentally realized. Based on the dislocation design in Ref. <cit.>, the pyrochlore metamaterial may experimentally realize dislocation-bounded mechanical floppy modes. This approach has broad applications, such as the mechanical information storage and mechanical logic gates. Machine Learning meets topological static mechanics— Building upon our pyrochlore structure, there is a growing need to explore new lattice structures which also exhibit fully-polarized topological mechanics. This searching process can be practically implemented through Machine Learning, where we can train the algorithm to learn and classify isostatic lattices based on their topological mechanical phases, which include fully-polarized, partially-polarized, and Weyl phases. By doing so, we anticipate that novel designs of isostatic lattices with improved mechanical polarization can be achieved. This, in turn, may lead to the creation of topological soft metamaterials that can maintain their unique properties even when larger bending stiffness is present. The development of such metamaterials can significantly enhance their potential applications, as illustrated in SI. Sec. 6 below. § POTENTIAL APPLICATIONS OF THE TOPOLOGICAL PYROCHLORE LATTICE The applications of topological floppy modes have been extensively investigated in 2D isostatic lattices, including their use in preventing fracturing caused by defects and damage, mechanical information storage, and the creation of transformable topological mechanical metamaterials. However, these benefits fail to apply to out-of-plane motions <cit.>. 
Here, we show that the polarized surface mechanics in 3D pyrochlore lattice enables new applications not possible in 2D lattices. §.§ Topologically protected all-terrain tire Our 3D-printed topological isostatic metamaterial boasts high versatility and can be applied to various areas. One promising application is in the development of airless and lightweight tires. Fig. 4e shows a topological pyrochlore lattice arranged in a cylindrical domain, creating a highly porous wheel. The system is designed so that the surface with topological softness faces outward, while the boundary with topological rigidity folds inward to securely attach to the axle without any shaking or instability. Our topological isostatic metamaterial tire can roll smoothly over rough terrain, thanks to its outer surface that contains topological softness. This is demonstrated in Fig. 4f, which displays the preliminary numerical results of the tire's displacement response while rolling on a rugged terrain profile. The distinct advantage of this design is that the inner layers of the metamaterial retain their topological softness, even under severe conditions such as extreme temperatures or friction that can wear out the outer layers. As a result, the tire remains functional even if the outer layers are damaged, setting it apart from traditional (inflatable) tires that become non-functional when punctured by a sharp object. §.§ Topologically switchable landing gear for aircrafts Thanks to the freely adjustable Guest-Hutchinson modes in isostatic lattices, our 3D pyrochlore metamaterial can be reversibly switched between topologically polarized and Weyl phases. The distinct topological surface elasticity of this mechanical metamaterial makes it an excellent candidate for designing reconfigurable landing gears for drones and other aircrafts. As depicted in Fig. 4g of the main text, the pyrochlore landing gear can be switched to the topologically polarized configuration when the drone is in the air. In this configuration, the topologically protected states of self-stress make the bottom surface rigid, providing topological rigidity that minimizes swaying. As the drone makes a landing, the pyrochlore landing gear transitions to the Weyl phase. In this phase, the surface softness is localized on the bottom boundary of the lattice, which efficiently absorbs the shock with ground. Our novel pyrochlore metamaterial design allows for versatile and controllable manipulations of surface elasticity, which offers significant implications for the design and deployment of aircrafts in various environments. §.§ Topological elasticity in 3D Topological mechanics has been focused on discrete lattice systems that require the knowledge of their microscopic details. Recent studies <cit.> extend topological mechanics to “topological elasticity", where one- and two-dimensional topological isostatic media can be designed based on only a few macroscopic elastic parameters without knowing their microscopic designs. However, this theory is not applicable to 3D, because in 3D isostatic media, mechanical Weyl lines can pass through the Γ-point of the Brillouin zone (as shown in SI. Fig. <ref>). This results in the presence of long-wavelength mechanical Weyl modes in 3D continuum isostatic media, which have been precluded in 1D and 2D topological elastic theory. 
Moreover, in 3D isostatic media, mechanical Weyl modes may induce an interesting anisotropic elastic response, because 3D isostatic lattices are known to exhibit anisotropic mechanics that depends on the choice of the surface wavevector. This may inspire novel designs of isostatic media in 3D with directional and polarized surface elasticity. §.§ Other potential applications of 3D isostatic metamaterials Fracturing protected by topological states of self-stress in isostatic lattices— It has been demonstrated in Refs. <cit.> that in 2D isostatic lattices, states of self-stress in domain walls between topologically distinct isostatic lattices can prevent catastrophic fracturing, even in the presence of significant damage or small defects. In contrast to brittle materials <cit.>, where stress usually accumulates at crack tips and causes avalanches, fracturing in topological isostatic lattices with self-stress domain walls occurs gradually, avoiding large avalanches. Thus, isostatic lattices show great potential in shock absorption and structural reinforcement, thanks to their controllable fracturing. However, these benefits were only studied in 2D. Our 3D prototype addresses this limitation and extends damage and fracture protection to three dimensions. Topological mechanical cloak— In fully-polarized topological isostatic lattices, site displacements are exponentially reduced from the soft surface to the rigid one, resulting in minimal deformation on the topologically rigid surface. This mechanism enables a novel application known as the topological mechanical cloak, which protects materials from elastic deformation. By utilizing micro-machining techniques <cit.>, we can fabricate the topological pyrochlore lattice at the nanoscale. This not only enables the creation of a precise mechanical cloaking system but also increases its effectiveness in blocking particle displacements on the soft side. Self-adaptive metamaterials for soft robotics— Soft robotics has led to a range of diverse applications, including soft wearable devices and medical implants <cit.>. Despite significant progress in this field, several challenges remain, including the connection problem in modular soft robotic systems, structural limitations of the robots themselves, and instability-induced loss of control during high-speed motion. Topological isostatic metamaterials with switchable, highly polarized surface mechanics can address these issues. Our metamaterial, for instance, offers high strength and controllable fracturing properties, enabling more stable module connections. Additionally, due to its high adaptability and tunability, this material is excellent for supporting all-terrain robotic motions, opening new opportunities for enhancing the performance of soft robotic systems.
§ REFERENCES
[1] M. Fruchart, Y. Zhou, and V. Vitelli, Nature 577, 636 (2020).
[2] H. Danawe, H. Li, K. Sun, and S. Tol, Phys. Rev. Lett. 129, 204302 (2022).
volume 129, pages 204302 (year 2022)NoStop [Papanikolaou et al.(2013)Papanikolaou, O'Hern, and Shattuck]PhysRevLett.110.198002 author author S. Papanikolaou, author C. S. O'Hern, and author M. D. Shattuck, 10.1103/PhysRevLett.110.198002 journal journal Phys. Rev. Lett. volume 110, pages 198002 (year 2013)NoStop [Kane and Lubensky(2014)]kane2014topological author author C. Kane and author T. Lubensky, @noop journal journal Nature Physics volume 10, pages 39 (year 2014)NoStop [Phillips(1979)]phillips1979topology author author J. C. Phillips, @noop journal journal Journal of non-crystalline solids volume 34, pages 153 (year 1979)NoStop [Lubensky et al.(2015)Lubensky, Kane, Mao, Souslov, and Sun]lubensky2015RRP author author T. Lubensky, author C. Kane, author X. Mao, author A. Souslov, and author K. Sun, @noop journal journal Reports on Progress in Physics volume 78, pages 073901 (year 2015)NoStop [Tong and Tanaka(2020)]PhysRevLett.124.225501 author author H. Tong and author H. Tanaka, 10.1103/PhysRevLett.124.225501 journal journal Phys. Rev. Lett. volume 124, pages 225501 (year 2020)NoStop [Lei et al.(2022)Lei, Tang, Hu, Ma, and Ni]PhysRevLett.129.125501 author author Q.-L. Lei, author F. Tang, author J.-D. Hu, author Y.-q. Ma, and author R. Ni, 10.1103/PhysRevLett.129.125501 journal journal Phys. Rev. Lett. volume 129, pages 125501 (year 2022)NoStop [Huang et al.(2023)Huang, Zhang, Xu, Zhang, Tong, and Xu]huang2023jammed author author J. Huang, author J. Zhang, author D. Xu, author S. Zhang, author H. Tong, and author N. Xu, @noop journal journal Current Opinion in Solid State and Materials Science volume 27, pages 101053 (year 2023)NoStop [Bertoldi et al.(2017)Bertoldi, Vitelli, Christensen, and Van Hecke]bertoldi2017NRM author author K. Bertoldi, author V. Vitelli, author J. Christensen, and author M. Van Hecke, @noop journal journal Nature Reviews Materials volume 2, pages 1 (year 2017)NoStop [Rocklin et al.(2017)Rocklin, Zhou, Sun, and Mao]rocklin2017NC author author D. Z. Rocklin, author S. Zhou, author K. Sun, and author X. Mao, @noop journal journal Nature communications volume 8, pages 14201 (year 2017)NoStop [Wang et al.(2023)Wang, Zhou, and Chen]wang2023JMPS author author A. Wang, author Y. Zhou, and author C. Q. Chen, @noop journal journal Journal of the Mechanics and Physics of Solids volume 173, pages 105197 (year 2023)NoStop [Coulais et al.(2017)Coulais, Sounas, and Alu]coulais2017N author author C. Coulais, author D. Sounas, and author A. Alu, @noop journal journal Nature volume 542, pages 461 (year 2017)NoStop [Ma et al.(2023)Ma, Tang, Shi, Wu, Yang, Zhou, Yao, and Li]ma2023PRL author author F. Ma, author Z. Tang, author X. Shi, author Y. Wu, author J. Yang, author D. Zhou, author Y. Yao, and author F. Li, 10.1103/PhysRevLett.131.046101 journal journal Phys. Rev. Lett. volume 131, pages 046101 (year 2023)NoStop [Xiu et al.(2023)Xiu, Frankel, Liu, Qian, Sarkar, MacNider, Chen, Boechler, and Mao]xiu2023PNAS author author H. Xiu, author I. Frankel, author H. Liu, author K. Qian, author S. Sarkar, author B. MacNider, author Z. Chen, author N. Boechler, and author X. Mao, @noop journal journal Proceedings of the National Academy of Sciences volume 120, pages e2217928120 (year 2023)NoStop [Paulose et al.(2015a)Paulose, Chen, and Vitelli]paulose2015NP author author J. Paulose, author B. G.-g. Chen, and author V. Vitelli, @noop journal journal Nature Physics volume 11, pages 153 (year 2015a)NoStop [Chen et al.(2014)Chen, Upadhyaya, and Vitelli]chen2014nonlinear author author B. 
G.-g. Chen, author N. Upadhyaya, and author V. Vitelli, @noop journal journal Proceedings of the National Academy of Sciences volume 111, pages 13004 (year 2014)NoStop [Xiao et al.(2010)Xiao, Chang, and Niu]RevModPhys.82.1959 author author D. Xiao, author M.-C. Chang, and author Q. Niu, 10.1103/RevModPhys.82.1959 journal journal Rev. Mod. Phys. volume 82, pages 1959 (year 2010)NoStop [Kane and Mele(2005)]PhysRevLett.95.146802 author author C. L. Kane and author E. J. Mele, 10.1103/PhysRevLett.95.146802 journal journal Phys. Rev. Lett. volume 95, pages 146802 (year 2005)NoStop [Su et al.(1979)Su, Schrieffer, and Heeger]PhysRevLett.42.1698 author author W. P. Su, author J. R. Schrieffer, and author A. J. Heeger, 10.1103/PhysRevLett.42.1698 journal journal Phys. Rev. Lett. volume 42, pages 1698 (year 1979)NoStop [Haldane(1988)]PhysRevLett.61.2015 author author F. D. M. Haldane, 10.1103/PhysRevLett.61.2015 journal journal Phys. Rev. Lett. volume 61, pages 2015 (year 1988)NoStop [Qi and Zhang(2011)]RevModPhys.83.1057 author author X.-L. Qi and author S.-C. Zhang, 10.1103/RevModPhys.83.1057 journal journal Rev. Mod. Phys. volume 83, pages 1057 (year 2011)NoStop [Fu et al.(2007)Fu, Kane, and Mele]PhysRevLett.98.106803 author author L. Fu, author C. L. Kane, and author E. J. Mele, 10.1103/PhysRevLett.98.106803 journal journal Phys. Rev. Lett. volume 98, pages 106803 (year 2007)NoStop [Süsstrunk and Huber(2015)]susstrunk2015observation author author R. Süsstrunk and author S. D. Huber, @noop journal journal Science volume 349, pages 47 (year 2015)NoStop [Vakakis et al.(2001)Vakakis, Manevitch, Mikhlin, Pilipchuk, and Zevin]vakakis2001normal author author A. F. Vakakis, author L. I. Manevitch, author Y. V. Mikhlin, author V. N. Pilipchuk, and author A. A. Zevin, @noop title Normal modes and localization in nonlinear systems (publisher Springer, year 2001)NoStop [Rosa et al.(2019)Rosa, Pal, Arruda, and Ruzzene]ruzzene2019PRL author author M. I. N. Rosa, author R. K. Pal, author J. R. F. Arruda, and author M. Ruzzene, 10.1103/PhysRevLett.123.034301 journal journal Phys. Rev. Lett. volume 123, pages 034301 (year 2019)NoStop [Nash et al.(2015)Nash, Kleckner, Read, Vitelli, Turner, and Irvine]nash2015topological author author L. M. Nash, author D. Kleckner, author A. Read, author V. Vitelli, author A. M. Turner, and author W. T. Irvine, @noop journal journal Proceedings of the National Academy of Sciences volume 112, pages 14495 (year 2015)NoStop [Miniaci et al.(2019)Miniaci, Pal, Manna, and Ruzzene]ruzzene2019PRB author author M. Miniaci, author R. K. Pal, author R. Manna, and author M. Ruzzene, 10.1103/PhysRevB.100.024304 journal journal Phys. Rev. B volume 100, pages 024304 (year 2019)NoStop [Rosa et al.(2023)Rosa, Leamy, and Ruzzene]rosa2023NJP author author M. I. Rosa, author M. J. Leamy, and author M. Ruzzene, @noop journal journal New Journal of Physics volume 25, pages 103053 (year 2023)NoStop [Tempelman et al.(2021)Tempelman, Matlack, and Vakakis]tempelman2021PRB author author J. R. Tempelman, author K. H. Matlack, and author A. F. Vakakis, 10.1103/PhysRevB.104.174306 journal journal Phys. Rev. B volume 104, pages 174306 (year 2021)NoStop [Wang et al.(2015)Wang, Lu, and Bertoldi]bertoldi2015PRL author author P. Wang, author L. Lu, and author K. Bertoldi, 10.1103/PhysRevLett.115.104302 journal journal Phys. Rev. Lett. volume 115, pages 104302 (year 2015)NoStop [Paulose et al.(2015b)Paulose, Meeussen, and Vitelli]paulose2015PNAS author author J. Paulose, author A. S. Meeussen, and author V. 
Vitelli, @noop journal journal Proceedings of the National Academy of Sciences volume 112, pages 7639 (year 2015b)NoStop [Zhang et al.(2023)Zhang, Li, Sun, Liu, Zhao, Feng, Fan, and Qiu]Qiu2023PRL author author Q. Zhang, author Y. Li, author H. Sun, author X. Liu, author L. Zhao, author X. Feng, author X. Fan, and author C. Qiu, 10.1103/PhysRevLett.130.017201 journal journal Phys. Rev. Lett. volume 130, pages 017201 (year 2023)NoStop [Tempelman et al.(2024)Tempelman, Vakakis, and Matlack]tempelman2024modal author author J. R. Tempelman, author A. F. Vakakis, and author K. H. Matlack, @noop journal journal Journal of Sound and Vibration volume 568, pages 118033 (year 2024)NoStop [Souslov et al.(2017)Souslov, Van Zuiden, Bartolo, and Vitelli]souslov2017NP author author A. Souslov, author B. C. Van Zuiden, author D. Bartolo, and author V. Vitelli, @noop journal journal Nature Physics volume 13, pages 1091 (year 2017)NoStop [Stenull et al.(2016)Stenull, Kane, and Lubensky]Olaf2016PRL author author O. Stenull, author C. L. Kane, and author T. C. Lubensky, 10.1103/PhysRevLett.117.068001 journal journal Phys. Rev. Lett. volume 117, pages 068001 (year 2016)NoStop [Bilal et al.(2017)Bilal, Süsstrunk, Daraio, and Huber]Huber2017AM author author O. R. Bilal, author R. Süsstrunk, author C. Daraio, and author S. D. Huber, @noop journal journal Advanced Materials volume 29 (year 2017)NoStop [Baardink et al.(2017)Baardink, Souslov, Paulose, and Vitelli]Vitelli2017PNAS author author G. Baardink, author A. Souslov, author J. Paulose, and author V. Vitelli, @noop journal journal Proceedings of the National Academy of Sciences volume 115, pages 489 (year 2017)NoStop [Bergne et al.(2022)Bergne, Baardink, Loukaides, and Souslov]bergne2022EML author author A. Bergne, author G. Baardink, author E. G. Loukaides, and author A. Souslov, @noop journal journal Extreme Mechanics Letters volume 57, pages 101911 (year 2022)NoStop [Sun and Mao(2020)]mao2020PRL author author K. Sun and author X. Mao, 10.1103/PhysRevLett.124.207601 journal journal Phys. Rev. Lett. volume 124, pages 207601 (year 2020)NoStop [Stenull and Lubensky(2019)]PhysRevLett.122.248002 author author O. Stenull and author T. C. Lubensky, 10.1103/PhysRevLett.122.248002 journal journal Phys. Rev. Lett. volume 122, pages 248002 (year 2019)NoStop [Zhou et al.(2018)Zhou, Zhang, and Mao]Zhou2018PRL author author D. Zhou, author L. Zhang, and author X. Mao, 10.1103/PhysRevLett.120.068003 journal journal Phys. Rev. Lett. volume 120, pages 068003 (year 2018)NoStop [Zhou et al.(2019)Zhou, Zhang, and Mao]zhou2019PRX author author D. Zhou, author L. Zhang, and author X. Mao, 10.1103/PhysRevX.9.021054 journal journal Phys. Rev. X volume 9, pages 021054 (year 2019)NoStop [Guest and Hutchinson(2003)]Guest2003JMPS author author S. Guest and author J. Hutchinson, @noop journal journal Journal of the Mechanics and Physics of Solids volume 51, pages 383 (year 2003)NoStop [pyr()]pyrochloreSM @noop journal See Supplementary Information for the analytic mechanical transfer matrix, the experimental setup and measurement, the topological winding numbers, and topological phase transitions. Supplementary videos show the Guest mode in the pyrochlore lattice, and the force-displacement measurement. This Supplemental Information includes Refs. [46-56] NoStop [Charara et al.(2022)Charara, McInerney, Sun, Mao, and Gonella]charara2022omnimodal journal author author M. Charara, author J. McInerney, author K. Sun, author X. Mao, and author S. 
Gonella, @noop journal journal Proceedings of the National Academy of Sciences volume 119, pages e2208051119 (year 2022)NoStop [Zunker and Gonella(2021a)]zunker2021EML author author W. Zunker and author S. Gonella, @noop journal journal Extreme Mechanics Letters volume 46, pages 101344 (year 2021a)NoStop [Meng et al.(2020)Meng, Liu, Zhang, and Chen]meng2020JMPS author author Z. Meng, author M. Liu, author Y. Zhang, and author C. Q. Chen, @noop journal journal Journal of the Mechanics and Physics of Solids volume 144, pages 104095 (year 2020)NoStop [Long et al.(2018)Long, Ren, and Chen]long2018intrinsic author author Y. Long, author J. Ren, and author H. Chen, @noop journal journal Proceedings of the National Academy of Sciences volume 115, pages 9951 (year 2018)NoStop [Cheng et al.(2023)Cheng, Qian, Cheng, Boechler, Mao, and Sun]cheng2023backscattering author author W. Cheng, author K. Qian, author N. Cheng, author N. Boechler, author X. Mao, and author K. Sun, @noop journal journal arXiv preprint arXiv:2306.07493 (year 2023)NoStop [Zunker and Gonella(2021b)]zunker2021soft author author W. Zunker and author S. Gonella, @noop journal journal Extreme Mechanics Letters volume 46, pages 101344 (year 2021b)NoStop [Ma et al.(2018)Ma, Zhou, Sun, Mao, and Gonella]PhysRevLett.121.094301 author author J. Ma, author D. Zhou, author K. Sun, author X. Mao, and author S. Gonella, 10.1103/PhysRevLett.121.094301 journal journal Phys. Rev. Lett. volume 121, pages 094301 (year 2018)NoStop [Zhou et al.(2022)Zhou, Zeb Rocklin, Leamy, and Yao]zhou2022NC author author D. Zhou, author D. Zeb Rocklin, author M. Leamy, and author Y. Yao, 10.1038/s41467-022-31084-y journal journal Nature Communications volume 13, pages 3379 (year 2022)NoStop [Zhou et al.(2021)Zhou, Zhang, and Chen]zhou2021JMPS author author Y. Zhou, author Y. Zhang, and author C. Chen, @noop journal journal Journal of the Mechanics and Physics of Solids volume 153, pages 104482 (year 2021)NoStop [Benalcazar et al.(2017)Benalcazar, Bernevig, and Hughes]benalcazar2017quantized author author W. A. Benalcazar, author B. A. Bernevig, and author T. L. Hughes, @noop journal journal Science volume 357, pages 61 (year 2017)NoStop [Sarkar et al.(2023)Sarkar, Mao, and Sun]sarkar2023mirror author author S. Sarkar, author X. Mao, and author K. Sun, @noop journal journal Physical Review B volume 108, pages L060103 (year 2023)NoStop [Zhang and Mao(2018)]zhang2018NJP author author L. Zhang and author X. Mao, @noop journal journal New Journal of Physics volume 20, pages 063034 (year 2018)NoStop [Griffith(1921)]griffith1921vi author author A. A. Griffith, @noop journal journal Philosophical transactions of the royal society of london. Series A, containing papers of a mathematical or physical character volume 221, pages 163 (year 1921)NoStop [Bückmann et al.(2014)Bückmann, Thiel, Kadic, Schittny, and Wegener]buckmann2014elasto author author T. Bückmann, author M. Thiel, author M. Kadic, author R. Schittny, and author M. Wegener, @noop journal journal Nature communications volume 5, pages 4130 (year 2014)NoStop [Lin et al.(2022)Lin, Hu, Zhou, and Xu]lin2022soft author author M. Lin, author H. Hu, author S. Zhou, and author S. Xu, @noop journal journal Nature Reviews Materials volume 7, pages 850 (year 2022)NoStop
http://arxiv.org/abs/2409.02323v1
20240903223455
Review and Novel Formulae for Transmittance and Reflectance of Wedged Thin Films on absorbing Substrates
[ "Manuel Ballester", "Emilio Marquez", "John Bass", "Christoph Wuersch", "Florian Willomitzer", "Aggelos K. Katsaggelos" ]
cond-mat.mtrl-sci
[ "cond-mat.mtrl-sci", "physics.comp-ph", "physics.optics" ]
Review and Novel Formulae for Transmittance and Reflectance of Wedged Thin Films on absorbing Substrates

Manuel Ballester (1,*), Emilio Marquez (2), John Bass (3), Christoph Würsch (4), Florian Willomitzer (3), and Aggelos K. Katsaggelos (1,4)

(1) Department of Computer Sciences, Northwestern University, Evanston, IL 60208, USA
(2) Department of Condensed-Matter Physics, Faculty of Science, University of Cadiz, 11510 Puerto Real, Spain
(3) Wyant College of Optical Sciences, University of Arizona, Tucson, AZ 85721, USA
(4) ETH Hoenggerberg Zürich, Stefano-Franscini-Platz 5, 8093, Switzerland
(4) Department of Electrical and Computer Engineering, Northwestern University, Evanston, IL 60208, USA
(*) Correspondence: [email protected]

§ ABSTRACT Historically, spectroscopic techniques have been essential for studying the optical properties of thin solid films. However, existing formulae for both normal transmission and reflection spectroscopy often rely on simplified theoretical assumptions, which may not accurately align with real-world conditions. For instance, it is common to assume (1) that the thin solid layers are deposited on completely transparent thick substrates and (2) that the film surface forms a specular plane with a relatively small wedge angle. While recent studies have addressed these assumptions separately, this work presents an integrated framework that eliminates both assumptions simultaneously. In addition, the current work presents a deep review of various formulae from the literature, each with their corresponding levels of complexity. Our review analysis highlights a critical trade-off between computational complexity and expression accuracy, where the newly developed formulae offer enhanced accuracy at the expense of increased computational time. Our user-friendly code, which includes several classical transmittance and reflectance formulae from the literature and our newly proposed expressions, is publicly available in both Python and Matlab at the following link: https://drive.google.com/drive/folders/1Mv0p9or5ePowgt37yitNnw2Xe449IFTG?usp=sharing. § INTRODUCTION Thin solid films are essential in a wide range of modern industries, especially in the development of efficient transistors and photodiodes, as well as in the fabrication of protective metal and dielectric coatings. Thin film technologies directly impact the performance and efficiency of several everyday devices, including active matrix LCDs, photovoltaic systems, flash memory chips, photonic integrated circuits, and semiconductor batteries <cit.>. It is important to remark that the specific conditions under which thin films are prepared have a significant impact on their final properties. Factors such as deposition techniques, preparation time, growth temperature, and working pressure all play critical roles in determining the characteristics of the resulting films <cit.>. Consequently, a precise analysis of their optoelectronic properties prior to mass production is crucial. Thin films, with thicknesses often ranging from nanometers to several micrometers, are commonly deposited onto thick glass substrates (often on the millimeter scale), as seen in Fig. <ref>a.
Our main objective is to determine the optical properties of the film, specifically the refractive index n_1(λ) and the extinction coefficient κ_1(λ), as functions of the wavelength λ, within a broad spectral range of interest, typically in the UV-Vis-NIR region. While there are multiple methods for optically characterizing these thin solid films, spectroscopic ellipsometry often emerges as the preferred choice due to its high accuracy. This technique measures how the polarization state of an incident beam changes upon reflection on the layer surface <cit.>. Spectroscopic ellipsometry techniques provide comprehensive data collection through different reflection angles and polarization states, which allows for highly precise thin-film material characterization. It should be noted that this technique is particularly effective for analyzing complicated sample structures, including multilayered <cit.> or anisotropic materials <cit.>. Despite the significant benefits of ellipsometry techniques, it is still very common to perform the optical characterization of simple thin films using measurements of normal transmittance and/or reflectance <cit.>. Several compelling factors contribute to the continued reliance on these latter measures. First, transmission and reflection spectrophotometers are indispensable instruments with applications in a wide range of fields, including biology and chemistry <cit.>. Second, the widespread availability of these instruments in research institutes and industries makes them a practical choice. Third, these instruments are generally more economical than spectroscopic ellipsometers, offering a cost-effective alternative for many laboratories <cit.>. In addition, many commercial instruments are designed to integrate both optical transmittance and reflectance measurements, facilitating efficient data acquisition <cit.>. Fourth, collecting a single set of light intensity data at normal incidence is generally easier, simpler, and faster than collecting a comprehensive ellipsometric dataset. Fifth, ellipsometric data strongly depend on surface roughness <cit.>, as optically rough surface layers lead to significant depolarization upon reflection. In contrast, the total transmitted and reflected intensity measurements are less influenced by this factor <cit.>. It should also be noted that, in spectral ranges where the films exhibit medium-to-low absorption, most of the incident light is transmitted, enabling highly sensitive and accurate optical characterizations based on these transmission measurements. In contrast, for characterizations within the strong absorption regions of a film material, reflection measurements (such as those performed in reflectometry or ellipsometry) offer more advantages. A representative schematic of a double-beam spectrophotometer can be found in Fig. <ref>b. A lamp emits broad-spectrum light. Typically, a deuterium arc lamp is used for the UV region, and a tungsten-halogen lamp for the Vis-NIR spectrum. This light is then dispersed by a diffraction optical element (DOE), as seen in the figure. Consequently, it passes through a slit, allowing for a controllable spectral bandwidth that usually ranges from 0.1 to 10 nm, which determines the coherent length of the beam. Various apertures are available to adjust the beam diameter, generally in the millimeter range. The normal-incidence reflected and transmitted light intensities, denoted as I_T and I_R, are then measured by a photodetector. 
Double-beam instrumentation splits the input beam into two parts: one to analyze the sample and the other to serve as a reference, measuring the baseline intensity I_0 with a separate sensor. Variations in the light source intensity are then recorded simultaneously in both the reference and sample beams, allowing an accurate intensity ratio for transmission T=I_T/I_0 and reflection R=I_R/I_0. Then, N sequential transmittance and reflectance experimental measurements, denoted {T^exp(λ_i), R^exp(λ_i)}, are obtained at discrete specific wavelengths λ_i, i ∈{1,2,...,N}, with a typical step size of a few nanometers. Our goal is then to develop a computational method that determines the two optical constants, n_1(λ_i) and κ_1(λ_i), from that experimental dataset. Since the proliferation of advanced spectroscopic techniques, numerous analytical expressions have been formulated for the spectral transmission and reflection of thin-film samples, denoted T^theory(λ; n_1, κ_1) and R^theory(λ; n_1, κ_1). The formulae were then used to computationally fit the real-world optical measurements, finding the most reasonable values of (n_1, κ_1) that match the experiment. However, these theoretical expressions generally relied on rough assumptions or approximations. The approximations introduced were necessary in order to simplify the analysis and reduce computational complexity, particularly given the limited computing resources available at the time. Among the simplifications that still persist nowadays, two are particularly critical: (1) the glass substrate is presumed to be completely transparent throughout the spectral range of interest, and (2) the film surface is regarded as a specular plane with a minor wedge angle, usually only a few nanometers in height. Current instruments are capable of detecting very subtle variations in light intensity measurements, revealing the existing weak substrate absorption, because of the presence of inherent glass absorption at specific spectral ranges or impurities in the substrate. Additionally, although it is true that most films are conveniently prepared to have quasi-uniform thicknesses, more `exotic' thin-film samples may display complex surface geometries <cit.>. Such variations in thickness can lead to a pronounced wedge effect on the illuminated spot under analysis (see Fig. <ref>a). It should be highlighted that the modern literature has already addressed simplifications (1) and (2) separately, as explained in the next section in further detail. In contrast, the current work presents a unified and comprehensive theoretical model that addresses both problems simultaneously, bridging the gap in the current literature. Our code, written both in Python 3.0 <cit.> and Matlab 2021a <cit.>, is available to the public at this https://drive.google.com/drive/folders/1Mv0p9or5ePowgt37yitNnw2Xe449IFTG?usp=sharinglink. Please note that this code also includes a collection of previously established formulae in this field, accounting for the results of T^theory and R^theory under different approximations. § HISTORICAL CONTEXT AND PREVIOUS WORKS The study of the physics of thin films has profoundly influenced the fields of optics and photonics, particularly in the analysis of optical interference effects during the nineteenth century <cit.>. Relevant advancements in this field include the development of the Fabry-Pérot interferometer, the analysis of the etalon effect, and the formulation of Airy's equations. 
These contributions have facilitated advances in instruments for precise wavelength determination and enabled the development of accurate optical components. In addition, the examination of absorption bands in these thin layer materials significantly advanced atomic physics in the early twentieth century, improving our understanding of material properties at the quantum level. During the latter half of the twentieth century, there was a proliferation of accurate optoelectronic analyses of film semiconductor materials. The key advances that propelled these characterizations included (i) the introduction of automated spectrophotometers <cit.>, (ii) the integration of modern computer technologies and numerical methods, and (iii) the continuous development and refinement of film preparation techniques <cit.>. §.§ Computational methods to find optical properties It is essential to review the historical context of well-established computational methods that perform optical characterization. We mentioned above a straightforward approach that consists of directly fitting the experimental transmittance (or reflectance) measurements with the derived theoretical formulae. By minimizing the least-squares error between the experimental data and our theoretical models, we find the optimal values for the optical functions, n_1 and κ_1. This approach is commonly known as “inverse synthesis” or “reverse engineering”, and involves solving a complicated global optimization problem <cit.>. In contrast, Hall and Ferguson (1955) <cit.>, Lyashenko and Miloslavskii (1964) <cit.>, and Manifacier et al. (1975) <cit.> developed an alternative approach to find optical properties, initially known as the method of “successive iterations”. This method does not operate for all the discrete measure wavelengths, but rather works for those particular wavelengths at which the minimum and maximum thin-film interferences occur. This algorithm requires the calculation, during the intermediate steps, of the lower and upper envelopes of the spectral transmission and/or reflection curves. Ryno Swanepoel significantly refined this technique, introducing more precise formulae in two seminal works in 1983 <cit.> and 1984 <cit.>, respectively for uniform and wedged films. Subsequently, the algorithm was universally renamed as the Swanepoel method to his honor. Initially limited to transmission measures, this methodology was then expanded to include reflection spectra by Minkov et al. in a series of studies <cit.>. Since then, numerous significant works have further advanced this method for both transmission and reflection spectroscopy <cit.>. A comparison between the inverse synthesis and envelope method <cit.> indicates that, although inverse synthesis typically generally achieves greater precision, it comes with significantly higher computational costs. In addition to these two traditional methods, new research directions are emerging. For example, a novel deep learning (DL) technique has shown promise in accurately predicting optical properties of thin solid films from reflection <cit.> and transmission <cit.> spectra. Furthermore, a hybrid method has been proposed that combines the traditional Swanepoel method with DL enhancements <cit.>. In that work, a neural network automatically determines the envelopes of a transmission spectra, and then an automatic algorithm determines the optical constants using the information contained in the envelopes. 
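To make the inverse-synthesis strategy described above more tangible, the short Python sketch below fits a parametric dispersion model to a transmission spectrum by non-linear least squares. It is only an illustration of the fitting machinery, not the authors' released code: the interference-free forward model, the Cauchy-type dispersion, the assumed nm-based units, and the synthetic noisy data are all simplifying assumptions introduced here.

```python
# Minimal inverse-synthesis sketch (illustrative only; not the released code of this work).
# A toy, interference-free forward model is fitted to synthetic "experimental" data.
import numpy as np
from scipy.optimize import least_squares

lam = np.arange(500.0, 751.0, 1.0)                  # wavelengths lambda_i in nm

def toy_transmittance(p, lam):
    """Crude forward model: one Fresnel loss per surface plus Beer-Lambert absorption.
    p = (A, B, C, D) parametrizes n1(lam) = A + B/lam^2 and log10(alpha1) = C + D/lam^2."""
    A, B, C, D = p
    n1 = A + B / lam**2                             # Cauchy-like refractive index
    alpha1 = 10.0 ** (C + D / lam**2)               # absorption coefficient (nm^-1, assumed)
    d1 = 1000.0                                     # film thickness in nm (assumed known)
    R01 = ((n1 - 1.0) / (n1 + 1.0)) ** 2            # normal-incidence Fresnel reflectance
    return (1.0 - R01) ** 2 * np.exp(-alpha1 * d1)  # no multiple-beam interference here

# Synthetic "measurements": the model evaluated at plausible parameters plus photometric noise
p_true = (2.6, 3.0e5, -8.0, 1.5e6)
rng = np.random.default_rng(0)
T_exp = toy_transmittance(p_true, lam) + rng.normal(0.0, 2e-3, lam.size)

def residuals(p):
    return toy_transmittance(p, lam) - T_exp        # least-squares error to be minimized

fit = least_squares(residuals, x0=(2.0, 1.0e5, -7.0, 1.0e6),
                    x_scale=(1.0, 1.0e5, 1.0, 1.0e6))  # scale the very different parameter magnitudes
print("recovered dispersion parameters:", fit.x)
```

In a realistic inverse-synthesis pipeline, the toy forward model above would simply be replaced by one of the full transmittance and reflectance expressions discussed in the following sections.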
Another approach recently proposed by our group is a numerical method that compares the experimental transmittance with the transmittance generated by a numerical simulator <cit.>. The simulator closely replicates wave propagation under the realistic conditions of a spectrophotometer, effectively acting as a digital twin of the actual setup. §.§ Existing transmission and reflection formula Revisiting now the early theoretical analysis of transmission and reflection formulae for thin solid films, it is essential to highlight the contributions of A.W. Crook in 1948 <cit.> and F. Abelès in the 1950s <cit.>. They independently laid the foundation for the transmission and reflection analysis in stratified media. The compilation work in the original Heavens book “Optical Properties of Thin Solid Films” from 1955 <cit.> further consolidated the theoretical formulations employed in this field. Despite these incipient accurate analyzes, it was common practice during that period to simplify the theoretical expressions and assume that the film was deposited on an infinitely thick substrate <cit.>. Observe that the approximation overlooks the impact of back-reflection at the later substrate-air interface. The approximation significantly simplified the formulae to favor a straightforward optical characterization employing the envelope method <cit.>. In 1983 and 1984, the two aforementioned seminal works by Swanepoel <cit.> formally challenged the notion of neglecting the actual thickness of the substrate. These studies demonstrated that such simplifications could result in errors in transmittance values of up to 3-4%. Subsequent studies highlighted the importance of accounting for substrate back-reflection in reflectance analyses <cit.>. In 1971, Potapov and Rakov proposed a pioneer algorithm to account for the effect of slightly absorbing thick substrates <cit.> on the transmittance and reflectance of samples with uniform films. Another relevant subsequent work on the topic was later reported by Vriens and Rippens in 1983 <cit.>. A later work by Swanepoel in 1989 <cit.> meticulously derived the comprehensive formulae describing the effect of a quasi-coherent light source. It should be noted that these close-form friendly expressions and methodology proposed by Swanepoel have been adopted in the present work. In 1997, Kotlikov and Terechenko <cit.> independently reached findings identical to those of Swanepoel, although in the context of antireflection coatings. Continuing the discussion on substrate absorption, it is important to highlight a study <cit.> by Nichelatti (2002), which introduces a very efficient numerical technique to directly characterize substrate optical properties through spectroscopic measurements of that isolated substrate (explained in Section 9). In 2010, Barybin and Shapovalov <cit.> offered an alternative derivation of the transmission and reflection formulae for uniform films using the matrix formalism, specifically accounting for the effects of highly absorbing substrates. The analysis of transmission and reflection formulae has continuously evolved up to the present day in different directions, accommodating increasingly complex sample characteristics. For example, some studies consider advanced thin film geometries <cit.> or deal with multiple thin-layer configurations <cit.>. In particular, a new significant contribution came from Ruiz-Perez (RP) et al. 
<cit.> in 2020, adapting the transmission formula to account for films with a pronounced wedge deposited on transparent substrates. The RP formula accounts for `exotic' films with a high-wedge condition (explained below in detail), which leads to significant changes in the shape of the spectrum, shrinking the characteristic oscillations of the transmission curves. This particular transmission expression has been successfully used in the recent literature and was named as the “universal transmission formula” <cit.>. Built upon the RP approach, our current work presents both transmission and reflection formulae, which are now able to account not only for complex films with a high wedge angle but also for thick absorbing glass substrates. Table <ref> summarizes the contributions of several relevant formulae from the literature, together with their associated approximations and their corresponding degree of precision. § THEORETICAL BACKGROUND The present study assumes that the thin film is made of a homogeneous material with isotropic properties. We describe the complex dielectric function (the electric permittivity) across the stratified structure as ϵ(z) = {[ ϵ_0, z < 0 (Air); ϵ_1, 0 < z < d_1 (Film); ϵ_2, d_1 < z < d_1 + d_s (Substrate); ϵ_0, d_1 + d_s < z (Air) ]. The bold notation represents the complex nature of the functions. In this formula, d is the thickness of the film and d_s the thickness of the substrate. The initial sections 2-4 assume a film of uniform thickness Δ d = 0. Later on, we explore films with a certain tilt, including those with the high wedge condition Δ d > λ/(4 n), which requires a careful and distinct mathematical treatment, as discussed in detail in <cit.>. Although multiple surface geometries could be considered for the film, a plane surface with a potential wedge parameter Δ d is reasonably assumed for the small illuminated area. This illuminated spot is often represented as a rectangular region with only a few millimeters of width and height . This planar assumption can be thought of as a Taylor first-order approximation of the actual much more intricate film surface (see Fig. <ref>a). Maxwell equations express a relationship between the electric and magnetic fields <cit.> , denoted E⃗ and H⃗. Note that an external electric field E⃗ can influence an internal electric polarization P⃗ within a dielectric or semiconductor material, described as P⃗ = ϵ_0 χ^(1)E⃗ + ϵ_0 χ^(2)E⃗^2 + ϵ_0 χ^(3)E⃗^3 + ... The electric flux density within the material is then given by D⃗ = ϵ_0 E⃗ + P⃗. For non-magnetic materials, the magnetic flux density is simply B⃗ = μ_0 H⃗, where μ_0 is the magnetic permeability in vacuum. In Eq. <ref>, the Taylor approximation around E⃗=0 was used to approximate an arbitrary polarization function, considering complex susceptibility coefficients (χ^(1), χ^(2), χ^(3),...). In practice, most materials exhibit a linear response, although second- and third-order effects have also been studied in depth <cit.> since the invention of laser in 1960, especially in the context of high intensity light beams. Our research focuses exclusively on linear materials, where D⃗ = ϵ_0 (1 + χ^(1)) E⃗ = ϵ_0 ϵ_rE⃗ = ϵE⃗. In these particular cases, the complex relative permittivity ϵ_r fully describes the optoelectric properties of the material. In linear non-magnetic isotropic film materials, the complex refractive index is defined as 𝐧(z) = √(ϵ_r) = n(z) + i κ(z). 
Here, n denotes the real part of the refractive index, which determines the phase velocity of light through the medium (v = c/n). The extinction (or attenuation) coefficient κ is closely related to the absorption of the medium. The complex dielectric function reveals key information about the electronic transitions of the material. In turn, this allows us to deduce several critical aspects relevant to the film industry. For instance, it provides estimations of the band gap energy <cit.>, the material conductivity <cit.>, the dissipation factor <cit.>, the compound stoichiometry <cit.>, and the level of structural disorder <cit.>. Understanding structural disorder is particularly important for the analysis of the physical properties of amorphous materials and the identification of defects in crystalline structures. § LIGHT PROPAGATION THROUGH A SAMPLE Without loss of generality, let us consider a simple linearly-polarized light beam oriented in the x-direction, expressed as E⃗ = (E_x, 0, 0), perpendicular to the film's incidence plane (see Fig. <ref>). Considering that this planar light wave propagates in the z-direction, the propagation vector becomes k⃗=(0,0,2π/λ) = n (0,0,2π/λ_0) = n k_0 ẑ, where λ = λ_0/n is the wavelength in the material, λ_0 the wavelength in vacuum, and k_0 the wavenumber. Isotropic linear materials ensure the orthogonality between the propagation vector and the electric and magnetic fields <cit.>, thus H⃗ = (0,H_y,0). If no external current is applied to the material (J=0) and no free charges are induced in the bulk of the material (ρ = 0), the Maxwell equations simplify as follows <cit.>: ∂_x E_x = 0 ∂_y H_y = 0 ∂_z E_x = -μ_0 ∂_t H_y -∂_z H_y = ϵ∂_t E_x §.§ Propagation matrix We will now focus only on propagation through layer 1 (the thin film). When the incident beam is a monochromatic continuous wave, the temporal dependency of the light wave becomes e^-i ω t <cit.>, leading to an electric field of the form E_x(x,y,z,t) = A(x,y,z) e^-i ω t. Here, ω=2π c/λ_0 represents the angular frequency and c the speed of light in vacuum. Incorporating this field expression into Eqs. <ref> and assuming constant material properties ϵ_1 and μ_0 within the bulk of the layer, one can derive the so-called Helmholtz equation: (∂_zz + 𝐧_1^2 k_0^2) A(x,y,z) = 0 A well-known solution <cit.> for the amplitude A(x,y,z) from Eq. <ref> is a combination of two planar waves propagating in the z-direction, A(x,y,z) = A_T(z) + A_R(z) = A_0𝐭_01 e^i k_0𝐧_1 z + A_0𝐫_12 e^- i k_0𝐧_1 z Eq. <ref> shows two distinct components, corresponding to a transmission wave (moving forward) and a reflection wave (moving backward) <cit.>. Note that 𝐭_01 and 𝐫_12 represent the complex amplitude Fresnel coefficient for transmission and reflection, respectively. The transmission occurs between layer 0 and layer 1, while the reflection takes place from layer 2 toward layer 1. At normal incidence, these coefficients are defined as 𝐭_01 = 2 𝐧_0/𝐧_0 + 𝐧_1 = 𝐫_01 + 1 𝐫_12 = 𝐧_1 - 𝐧_2/𝐧_1 + 𝐧_2 = 𝐭_12 - 1 Considering the amplitudes A_T(z) and A_R(z) immediately after the air-film interface (see Fig. <ref>), our goal is to determine the transmitted and reflected wave amplitudes just before they encounter the next interface: the film-substrate boundary at z + d_1. By applying Eq. <ref> at z + d_1, we derive the following expressions using matrix formalism <cit.>: [ A_T(z+d_1); A_R(z+d_1) ] = P_1 ×[ A_T(z); A_R(z) ] P_1 = [ e^i k_0 𝐧_1 d_1 0; 0 e^-i k_0 𝐧_1 d_1 ] The transfer matrix P_1 represents the propagation matrix for layer 1.
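As a small illustration (a sketch under the definitions above, not the authors' released code), the normal-incidence Fresnel coefficients and the propagation matrix P_1 can be written directly in Python; the numerical values at the end are arbitrary example inputs.

```python
# Building blocks of the matrix formalism above: complex Fresnel coefficients at normal
# incidence and the propagation matrix of a homogeneous layer (illustrative sketch).
import numpy as np

def fresnel_r(Ni, Nj):
    """Amplitude reflection coefficient r_ij = (N_i - N_j)/(N_i + N_j)."""
    return (Ni - Nj) / (Ni + Nj)

def fresnel_t(Ni, Nj):
    """Amplitude transmission coefficient t_ij = 2 N_i/(N_i + N_j) = 1 + r_ij."""
    return 2.0 * Ni / (Ni + Nj)

def propagation_matrix(N, d, lam0):
    """P = diag(exp(+i k0 N d), exp(-i k0 N d)); a complex N = n + i*kappa also attenuates the forward wave."""
    k0 = 2.0 * np.pi / lam0
    ph = 1j * k0 * N * d
    return np.array([[np.exp(ph), 0.0], [0.0, np.exp(-ph)]], dtype=complex)

# Example: an absorbing film (N1 = 3.4 + 0.02i) between air (N0 = 1) and glass (N2 = 1.5)
N0, N1, N2 = 1.0, 3.4 + 0.02j, 1.5
print("r01 =", fresnel_r(N0, N1), " t01 =", fresnel_t(N0, N1))
print("P1(d1 = 1000 nm, lam0 = 600 nm) =\n", propagation_matrix(N1, 1000.0, 600.0))
```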
Note that the complex refractive index (located in the exponents of Eq. <ref>) accounts for both phase delay and absorption through that homogeneous layer. Indeed, we can derive from Eq. <ref> that a collimated light beam passing through the film during a single forward trip becomes A_T(z+d_1) = A_T(z) exp(-α_1/2 d_1)exp(-i δ_1/2) Where α_1 = 4 π/λ_0κ_1 ϕ_1 = 2 π/λ_0 n_1 d δ_1 = 2 ϕ_1 Here, ϕ_1 refers to the phase delay introduced during the propagation of the beam through the film. It is more convenient to work instead with the variable δ_1, which represents the phase delay during a whole round trip. Another relevant wavelength-dependent optical parameter is α_1, known as the absorption coefficient. Considering the well-known relation <cit.> between light intensity and wave amplitude, I ∝ |A|^2, one can see from Eq. <ref> that the transmitted intensity decreases exponentially with depth as I_T(z+d_1) = I_T(z) e^-α_1 d_1, following the Beer-Lambert law <cit.>. Note that x_1 = I_T(z+d_1)/I_T(z) = e^-α_1 d represents the transmittance corresponding to a single light trip. However, it should be noted that the multiple beam interferences within the two stratified media will lead to a different overall sample transmittance. §.§ Dynamic matrix Let us now consider the case where the beam reaches the interface between two layers, such as the thin film (layer 1) and the thick substrate (layer 2). Because the incident beam is normal to the sample surface, the electric and magnetic fields have only tangential components, represented as (E_1x, H_1y) for the film layer and (E_2x, H_2y) for the substrate layer. The tangent fields must remain continuous across the interface, meaning that E_1x = E_2x and H_1y = H_2y at z = d. Imposing these boundary conditions in Eqs. <ref> gives us the following relations <cit.>: A_T^(1) + A_R^(1) = A_T^(2) + A_R^(2) 𝐧_1 (A_T^(1) - A_R^(1)) = 𝐧_2 (A_T^(2) - A_R^(2)) Here, A_T^(i) and A_R^(i) represent the amplitudes of the transmitted and reflected light waves within the layer i∈{1,2}, right next to the film-substrate interface. Eqs. <ref>, when transformed to matrix form, yield [ 1 1; n_1 -n_1 ]×[ A_T^(1); A_R^(1) ] = [ 1 1; n_2 -n_2 ]×[ A_T^(2); A_R^(2) ] After a few calculations, one can finally find [ A_T^(2); A_R^(2) ] = D_12×[ A_T^(1); A_R^(1) ] D_12 = 1/1+𝐫_12[ 1 -𝐫_12; -𝐫_12 1 ] The transfer matrix D_12 is called the dynamic matrix and accounts for amplitude changes between the film-to-substrate (interface 12). §.§ Resulting transmittance and reflectance By sequentially computing the wave propagation through the layers and the interactions at the interfaces, the transmitted and reflected fields through the whole sample can be determined as: [ A_T^(3); A_R^(3) ] = M ×[ A_T^(0); A_R^(0) ] Where the transfer matrix is now given by M = D_23× P_2 × D_12× P_1 × D_01 Note that there is no light absorption throughout layers 0 and 3 (as it corresponds to air). Therefore, the propagation in these layers only leads to a constant phase delay, which will not affect the overall transmitted or reflected light intensity. The total transmittance and reflection can be calculated from the elements of the transfer matrix M, as explained in <cit.>. 
We then obtain the following formulae: T = | 1/M^(1,1)|^2 = 𝐡/ττ̅ = h/a + 2 b cosδ_2 + 2 c sinδ_2 R = | M^(1,2)/M^(1,1)|^2 = ηη̅/ττ̅ = e + 2 f cosδ_2 + 2 g sinδ_2/a + 2 b cosδ_2 + 2 c sinδ_2 Where the complex coefficients are defined as follows: 𝐡 = x_1 x_2 (𝐫_01 + 1)(𝐫̅_01 + 1) · (𝐫_12 + 1)(𝐫̅_12 + 1)(𝐫_23 + 1)(𝐫̅_23 + 1) τ = 𝐫_01(𝐫_12 + 𝐫_23 x_2 e^i δ_2) x_1 e^i δ_1 + (𝐫_12𝐫_23 x_2 e^i δ_2 + 1) η = -𝐫_01(𝐫_12𝐫_23 x_2 e^i δ_2 + 1) - (𝐫_12 + 𝐫_23 x_2 e^i δ_2) x_1 e^i δ_1 Note that the overall sample absorption can then be computed as 𝒜 = 1 - (T + R) <cit.>. The overbar on bold symbols, such as τ̅, indicates the conjugate of complex numbers. Note that the sinusoidal functions from Eqs. <ref> depend on δ_2 and account for the beam interferences within the substrate. Since the sample's transmittance and reflectance are scalar wavelength-dependent functions, Eqs. <ref> can be equivalently written either in terms of complex coefficients (𝐡, τ, η) or either in terms of real coefficients (h,a,b,c,e,f,g), which are all defined in the Appendix. Although Eqs. <ref> becomes more extensive when working with the real coefficients, it is much more convenient for computational purposes. It should also be clarified that the coefficients (a, b, c), which appear in the denominator of Eqs. <ref>, contain sinusoidal functions that depend on the phase δ_1. These sinusoidal functions account for the Fabry-Perot light beam interferences within the thin film. Figure <ref> (see curves in blue) shows the resulting transmission and reflection within the spectral range of 500 to 750 nm. In this work, we consider a simulated thin-layer sample that has a slightly absorbing glass substrate[In real samples, the absorption of the glass substrate varies with wavelength. For instance, the common Borofloat33 (a type of borosilicate substrate) exhibits significant absorption in the UV range (below 350 nm) and the NIR range (above 2100 nm) but is nearly transparent in the middle of the visible spectrum. Despite this quasi-transparency, subtle variations in transmission and reflection still occur. Researchers often limit their analysis to regions of evident transparency, though completely eliminating absorption effects is not feasible in practice (e.g., see the slight variations in substrate absorption lines in Fig. 3 of <cit.>). In our study, we use a simulated substrate with a constant weak absorption across the spectrum to provide a generalized analysis of errors in transmission and reflection formulas. This approach ensures that our findings are applicable to any type of substrate, as real substrates have diverse absorption curves.], with fixed values n_2=1.5, κ_2=10^-6, and d_2=0.5 mm. The optical properties of an amorphous silicon thin film, with thickness d_1 = 1 μm, were simulated employing the empirical dispersion expressions n_1 = 2.6 + 3 · 10^5/λ^2, log_10(α_1) = -8 + 1.5 · 10^6 / λ^2 <cit.>. Note that the extinction coefficient κ_1 can be directly determined from the absorption coefficient α_1, by using Eq. <ref>. § EXACT FORMULAE FOR UNIFORM FILMS So far, all the calculations have assumed a purely monochromatic light source, which introduces some unwanted high-frequency noise in the transmission and reflection spectra (see Figs. <ref>). In the actual experiment, the light sources have a finite spectral bandwidth, which inherently limits the coherence length of the beam. By carefully selecting the bandwidth, we can avoid coherent interference within the substrate, effectively eliminating the associated noise.
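Before moving to the finite-bandwidth case, the fully coherent result above can be reproduced numerically. The sketch below is an illustrative reimplementation using the standard transfer-matrix ordering (the matrix bookkeeping differs slightly from the D and P matrices written above, but the resulting T and R are the same quantities); it is not the authors' released code, and the nm-based unit convention for α_1 is an assumption.

```python
# Illustrative coherent transfer-matrix computation of T and R for the simulated sample
# (air / a-Si-like film / weakly absorbing glass / air). Conventional matrix ordering is used.
import numpy as np

def interface(Ni, Nj):
    """Interface matrix built from the normal-incidence Fresnel coefficients r_ij and t_ij."""
    r = (Ni - Nj) / (Ni + Nj)
    t = 2.0 * Ni / (Ni + Nj)
    return np.array([[1.0, r], [r, 1.0]], dtype=complex) / t

def layer(N, d, lam):
    """Propagation through a homogeneous layer; a complex N encodes the absorption."""
    delta = 2.0 * np.pi * N * d / lam            # one-way complex phase
    return np.array([[np.exp(-1j * delta), 0.0], [0.0, np.exp(1j * delta)]], dtype=complex)

def coherent_TR(lam, N1, d1, N2, d2):
    """Fully coherent transmittance and reflectance of air/film(N1,d1)/substrate(N2,d2)/air."""
    M = interface(1.0, N1) @ layer(N1, d1, lam) @ interface(N1, N2) \
        @ layer(N2, d2, lam) @ interface(N2, 1.0)
    t = 1.0 / M[0, 0]                            # overall amplitude transmission
    r = M[1, 0] / M[0, 0]                        # overall amplitude reflection
    return abs(t) ** 2, abs(r) ** 2

# Simulated sample of the text: d1 = 1 um film on a 0.5 mm weakly absorbing glass substrate
lam = np.arange(500.0, 750.5, 0.5)                       # nm
n1 = 2.6 + 3.0e5 / lam**2
alpha1 = 10.0 ** (-8.0 + 1.5e6 / lam**2)                 # nm^-1 (assumed units)
N1 = n1 + 1j * alpha1 * lam / (4.0 * np.pi)              # kappa_1 = alpha_1 * lam / (4 pi)
N2 = 1.5 + 1j * 1.0e-6
T_coh, R_coh = np.array([coherent_TR(l, a, 1000.0, N2, 5.0e5)
                         for l, a in zip(lam, N1)]).T
```

The rapid oscillations in T_coh and R_coh are the substrate etalon fringes discussed next; they disappear once the finite coherence length of the source is taken into account.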
For a standard light beam with Gaussian spectral shape, the coherence length ℓ can be determined from the specific central wavelength λ_c and the bandwidth Δλ (measured at full width half maximum). The established formula for this relationship is ℓ = 4 ln(2)/πλ_c^2/Δλ <cit.>. The pure monochromatic light source (Δλ≈ 0) considered in Eqs. <ref> inherently led to an infinite coherence length. In this case, the interreflections both within the thin film and within the thick substrate resulted in the superposition of the electromagnetic waves, causing two distinct interference effects. According to the etalon interference condition <cit.>, constructive interference occurs at wavelengths where m = 2 n d / λ is an integer, leading to high values in the transmission spectra. Note that when m is a half-integer, there is destructive interference, and we observe valleys in the spectra. According to this interference equation, the relatively small thickness of the film d ≈ 1 μm leads to low-frequency oscillations. These long oscillations appear as a characteristic sinusoidal pattern in the transmission spectrum, as illustrated in Figs. <ref>a and <ref>b (see dashed red lines). In contrast, the considerable thickness of the substrate, d_s≈ 1 mm, gives rise to high-frequency oscillations that can be interpreted as noise. The curves presented in Figs. <ref>a and <ref>b were plotted at each discrete wavelength, 500, 501, 502, ..., and 750 nm. However, this spectral resolution is insufficient to clearly resolve the high-frequency oscillations originating from light wave interference at the substrate, as illustrated in the insets of the figures (top left). Indeed, the insets show that there are eight oscillation peaks in a range of two nanometers. This limitation in sampling leads to the emergence of aliasing effects. In spectral regions where the oscillation frequency is roughly a multiple of the sampling rate, the curve appears to have fewer oscillations (e.g. 700 to 725 nm). Conversely, regions with greater misalignment exhibit rapid noisy oscillations (e.g. 725 to 750 nm). Consequently, it is standard practice to deliberately adjust the bandwidth such that the coherence length ℓ is less than 2 d_s, twice the thickness of the substrate. This adjustment mitigates the unwanted etalon effect from the glass, and it effectively removes the noise in the spectra, as the beam will become incoherent after a single round trip. Recall that the coherence length defines the distance over which the electromagnetic waves maintain a well-defined (sinusoidal) phase relationship <cit.>. When two coherent light beams with electric fields E_1 and E_2 overlap, they interfere linearly, resulting in a combined electric field, E = E_1 + E_2 <cit.>. The resulting light intensity is given by I = |E_1 + E_2|^2 = I_1 + I_2 + 2 E_1 E_2 cos(Δθ), where Δθ is the phase difference between the two light beams. The interference term 2 E_1 E_2 cos(Δθ) oscillates rapidly with changes in wavelength. In contrast, for incoherent beams, the electric fields do not interfere, and the total intensity is simply I = I_1 + I_2, with no oscillatory interference term. In many real experiments, the coherence length is set within 2 d < ℓ < 2 d_s. The multiple thin-film interreflections then result in the linear superposition of the electric fields (see Figs. <ref>). However, interreflections within the substrate lead to an incoherent superposition, producing just a simple sum of their intensities.
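As a quick numerical aside (illustrative values only, chosen to match the examples in the text), the bandwidth criterion discussed above can be checked directly:

```python
# Coherence-length check for the bandwidth criterion 2*d1 < l < 2*ds (illustrative numbers).
import numpy as np

def coherence_length(lam_c, dlam):
    """Gaussian-spectrum coherence length l = 4 ln(2)/pi * lam_c^2 / dlam (units of lam_c)."""
    return 4.0 * np.log(2.0) / np.pi * lam_c**2 / dlam

lam_c, dlam = 600.0, 1.0            # central wavelength and FWHM bandwidth, in nm
d1, ds = 1.0e3, 5.0e5               # film and substrate thicknesses, in nm
ell = coherence_length(lam_c, dlam)
print(f"coherence length = {ell / 1e3:.0f} um")
print("film fringes kept (2*d1 < l):       ", 2.0 * d1 < ell)
print("substrate etalon removed (l < 2*ds):", ell < 2.0 * ds)
```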
To incorporate this complicated finite coherence model into our expressions from Eqs. <ref>, the spectral averaging method <cit.> proposes to average out the transmittance and reflectance over the corresponding phase delay introduced by the substrate, δ_2 = 4 π n_2 d_2 /λ. For instance, consider an incident beam with a reasonable spectral bandwidth of 1 nm, centered around the wavelength λ_0 = 600 nm. The spectrum of the light beam spans from λ_0^- = 599.5 nm to λ_0^+ = 600.5 nm, which leads to a phase delay change of Δδ_2 = |δ_2^- - δ_2^+| = 8 π. The superposition of infinitely many waves with a phase delay range exceeding 2 π will effectively average out[Note that the phase delay change Δδ_2 depends not only on the bandwidth but also on the central wavelength. As the central wavelength increases, the change in the phase delay decreases, making interference effects from the substrate more likely to appear in the sample's transmittance and reflectance. This can result in subtle noise at longer wavelengths. Most modern spectrophotometers <cit.> can mitigate this high-frequency noise, as well as photometric noise, using smoothing techniques, such as the Savitzky-Golay filter.]. Therefore, we can simply integrate the transmittance and reflectance from Eqs. <ref> within the substrate phase delay range [0,2π], yielding T_ℓ = 1/2 π∫_0^2 πh dδ_2/a + 2 b cosδ_2 + 2 c sinδ_2 R_ℓ = 1/2 π∫_0^2 π(e + 2 f cosδ_2 + 2 g sinδ_2) dδ_2/a + 2 b cosδ_2 + 2 c sinδ_2 Taking into account a change of variable, θ = e^i δ_2, we can conveniently calculate the phase-average reflectance and transmittance as T_ℓ = 1/2 π∮_|θ |=1h dθ/F(θ ) R_ℓ = 1/2 π∮_|θ|=1G(θ) dθ/F(θ) F(θ) = θ^2 (c + i b) + θ (i a) + (-c + ib) G(θ) = θ^2 (g + i f) + θ (i e) + (-g + if) The denominator can now be factored as F(θ) = (c + ib) (θ - θ_1) (θ - θ_2), where θ_1,2 = -a i ±√(-a^2 + 4 (b^2 + c^2))/2 (c + ib), |θ_1,2| = |- γ±√(1 - γ^2)|, γ= a/2 √(b^2 + c^2)∈ [0,1] Note that the discriminant in Eq. <ref> is negative, as a^2 ≫ b^2, c^2, making the numerator a pure imaginary number. This allows for a straightforward calculation of |θ_1,2| in Eq. <ref>. It then becomes evident that θ_1 (with a positive square root) lies within the unit disk of the complex plane, while θ_2 lies beyond the disk. Applying the Cauchy Residue Theorem <cit.> to Eq. <ref>, we finally obtain the expression for the transmittance, T_ℓ = h i ℛ(1/F(θ), θ_1 ) = h i/(c + i b) lim_θ→θ_1(θ - θ_1)/(θ - θ_1)(θ-θ_2) = h/√(a^2 - 4 (b^2 + c^2)) = h/u = h/a' + 2 b' cosδ_1 + 2 c' sinδ_1 The variables (u, a', b', c') are defined in the Appendix. The coefficients (a', b' , c') do not have any sinusoidal components. Therefore, the oscillations in transmittance T_ℓ come solely from the phase delay δ_1 (see the sine and cosine in the denominator of Eq. <ref>). These oscillations are now only introduced by the thin film and not the substrate. Employing an entirely analogous methodology, the analysis of reflectance results in the following formula: R_ℓ = e/u - 2 (bf + cg)/u w The expressions (w, e, b, c, f, g) are also defined in the Appendix. Fig. 2 shows the transmission and reflection for a quasi-coherent beam. It can now be seen in that figure that the high-frequency noisy oscillations caused by light beam interference within the substrate have been correctly removed both in the transmission and reflection spectra. § NOVEL FORMULAE FOR STRONGLY-WEDGED FILMS We now examine a film characterized by a wedge-shaped planar surface, denoted as Δ d > 0.
This wedge implies that the film thickness at the illuminated spot varies within the range [d_1 - Δ d, d_1 + Δ d]. In that case, the phase delay introduced by the film exhibits a variation from δ_1^- = 4 π n_1 (d_1 - Δ d) / λ to δ_1^+ = 4 π n_1 (d_1 + Δ d) / λ, depending on the particular region within the illuminated spot. Considering a reasonable wedge Δ d ≪ d_1, the change in phase delay is typically Δδ_1 = |δ_1^+ - δ_1^-| ≪ 2 π. As a result, while the interference effects from the thin film are not completely eliminated, they can be significantly reduced depending on the value of Δ d. A very large wedge (such as Δ d ≈ d) will eventually cancel out all interference effects. Additionally, the film’s absorption will vary depending on the light path, whether it passes through d_1 - Δ d, or d_1 + Δ d, or somewhere in between (see Fig. <ref>). Therefore, the single-trip transmittance x_1 should now be integrated from x_1^- = e^-α_1 (d_1 - Δ d) to x_1^+ = e^-α_1 (d_1 + Δ d). With the assistance of the symbolic software Wolfram Mathematica <cit.>, we were able to calculate the total transmittance of the sample, T^new_Δ d = 1/(δ_1^+ - δ_1^-) (x_1^+ - x_1^-)∫^x_1^+_x_1^-∫^δ_1^+_δ_1^-h dδ_1 d x_1/a' + 2 b' cosδ_1 + 2 c' sinδ_1 ≈1/δ_1^+ - δ_1^-∫^δ_1^+_δ_1^-h dδ_1/a' + 2 b' cosδ_1 + 2 c' sinδ_1 = 2 · h/K · (δ_1^+ - δ_1^-)[ arctan(I^+/K) + N^+ π - arctan(I^-/K) - N^- π] K = √(a'^2 - 4 · (b'^2 + c'^2)) I^- = (a' - 2b') tan(δ_1^-/2) + 2c' I^+ = (a' - 2b') tan(δ_1^+/2) + 2c' N^+ = round(δ_1^+/2π), N^- = round(δ_1^-/2π) Note that in the intermediate steps, we applied the identity tanh^-1(i q) = i tan^-1(q), valid for any value q ∈ℝ. Box: Average light absorption in a wedge-shaped film. The average light absorption x_1 in a tilted film is approximately equivalent to that observed in a uniformly thick film, which is the reason for the approximation in Eq. <ref>. Although this assumption has been commonly used in various contexts <cit.>, it has not been previously supported by a rigorous mathematical proof. We will now explore the conditions under which this statement holds true. We introduce a local coordinate system (x', y', z) such that the linear wedge is expressed solely in terms of the coordinate x'. The film profile can then be described as S(x') = mx', where m = 2 Δ d/L represents the thin film slope. According to the Lambert-Beer law, the light intensity after the collimated beam passes through the film depth is given by I(x') = I_0 e^- α_1 (d_1 + m x'). By integrating over the x'-range [-L/2, L/2], we obtain ⟨ I_out(x') ⟩ = 1/L∫_-L/2^L/2 I_0 e^-α_1 [d_1 + mx'] dx' = I_0/L e^-α_1 d_1( -1/α_1 m e^-α_1 mx'|_-L/2^L/2) = I_0 e^-α_1 d_1sinh( α_1 m L/2 )/α_1 m L/2 We must take into account that q = α_1 m L/2 = α_1 Δ d. One should notice that sinh(q)/q ≈ 1 for q ≈ 0, which justifies the aforementioned assumption for relatively small wedges, where q remains close to zero. To quantify this, even in the case of a relatively large wedge Δ d = 60 nm and high absorption α_1 = 10^4 cm^-1, the wavelength-dependent term sinh(q)/q has an average value of 1.00060. Therefore, we can say that I ≈ I_0 e^-α_1 d_1 with an average error of around 0.06%, which is negligible in practice, comparable to the typical photometric noise of the spectrophotometer. In these common scenarios, the single-pass light transmittance x_1 in a tilted film can be considered approximately equivalent to that observed in a uniform film.
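The two averaging steps introduced in this and the previous section (the incoherent average over the substrate phase δ_2 and the average over the film thicknesses spanned by the wedge) can be cross-checked numerically against the closed-form expressions. The sketch below is illustrative only; it assumes that the coherent_TR helper and the lam, N1 and N2 arrays defined in the earlier transfer-matrix sketch are still in scope, and the sampling settings are kept deliberately coarse so that the run time stays modest.

```python
# Brute-force numerical counterparts (illustrative) of the phase- and thickness-averaged
# formulae above. Reuses coherent_TR, lam, N1 and N2 from the earlier transfer-matrix sketch.
import numpy as np

def incoherent_substrate_TR(lam, N1, d1, N2, d2, n_phase=32):
    """Average the coherent response over one full substrate phase period delta_2 in [0, 2*pi)."""
    # Adding up to half a wavelength (inside the glass) of extra substrate thickness sweeps
    # delta_2 through 2*pi while leaving the weak substrate absorption essentially unchanged.
    extra = np.linspace(0.0, lam / (2.0 * N2.real), n_phase, endpoint=False)
    samples = np.array([coherent_TR(lam, N1, d1, N2, d2 + e) for e in extra])
    return samples.mean(axis=0)                      # (T_l, R_l)

def wedged_TR(lam, N1, d1, wedge, N2, d2, n_thick=21):
    """Trapezoidal average over the local film thickness d in [d1 - wedge, d1 + wedge]."""
    d_samples = np.linspace(d1 - wedge, d1 + wedge, n_thick)
    samples = np.array([incoherent_substrate_TR(lam, N1, d, N2, d2) for d in d_samples])
    w = np.ones(n_thick)
    w[0] = w[-1] = 0.5                               # composite trapezoidal-rule weights
    return (samples * w[:, None]).sum(axis=0) / w.sum()

# Example: spectra for a Delta d = 30 nm wedge (compare with the closed-form expression above)
T30, R30 = np.array([wedged_TR(l, a, 1000.0, 30.0, N2, 5.0e5)
                     for l, a in zip(lam, N1)]).T
```

Such a brute-force average should closely track the closed-form expressions, but at a far higher computational cost, which is precisely the motivation for the analytical formulae derived in this work.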
In the worst case for our simulated sample, the film presents high absorption at 500 nm, as seen in Fig. <ref>, where the transmittance eventually approaches zero. At that wavelength, the term sinh(q)/q reaches its highest value, 1.015, which corresponds to a maximum error of about 1.5% in the region of high absorption. This approximation was used in the derivation of Eq. <ref>. Figs. <ref>a and <ref>b present the simulated transmittance and reflectance curves, respectively, for films with different wedges, Δ d = 0, 30, 60 nm. We can see how larger wedges lead to lower interference contrast, reducing the amplitude of the oscillations in the spectra. Eqs. <ref> introduce the correction factors N^+ and N^-, which account for the different branches of the tangent function and facilitate the application of the formula to high wedges, where Δ d > λ/(4 n). These factors were previously motivated in <cit.> in the context of sample transmittance for non-absorbing substrates. To illustrate the importance of the correction factors in the transmittance formula (see Eq. <ref>) when dealing with relatively high Δ d parameters, Figs. <ref>a and <ref>b also show the spectra with and without the correction factors for the two representative cases Δ d = 30, 60 nm. As the wedge increases, the naive approach (without correction factors) deviates further from the expected curves. To the best of our knowledge, there is no equivalent closed-form expression for the reflectance. Consequently, we propose using the trapezoidal rule <cit.> for efficient and accurate numerical estimation: R^new_Δ d = 1/(δ_1^+ - δ_1^-)∫_δ_1^-^δ_1^+( e/u - 2 (bf + cg)/u w) dδ_1 ≈1/2N[ R_ℓ(δ_1^-) + 2 R_ℓ(δ_1^- + Δδ) + ... + 2 R_ℓ(δ_1^- + (N-1) Δδ) + R_ℓ(δ_1^+) ], Δδ = (δ_1^+ - δ_1^-)/N Where R_ℓ(δ_1) is the reflectance for a uniform film as a function of the phase δ_1, as given by Eq. <ref>. Our findings indicate that setting N = 40 points for the phase already yields reasonable results, since we obtain a total absorptance value 𝒜 effectively zero when no film or substrate absorption is added (κ_1 = κ_2 = 0), 𝒜 = |1 - T_Δ d - R_Δ d| < 10^-2. In any case, note that we can define a sufficiently fine sampling that meets any desired level of accuracy. § PARTICULAR CASE: NON-ABSORBING SUBSTRATE §.§ Transmittance The newly derived transmittance formula, Eq. <ref>, closely matches that found by Ruiz-Perez (RP) et al. in <cit.>. However, our expression now incorporates the possibility of substrate absorption. Indeed, the RP equation (without substrate absorption) can be alternatively derived now[We found an erratum in the RP formula presented in <cit.>. In particular, their Eq. 6 incorrectly omits a minus sign before the term “C_22 x sin(ϕ)”. This error can be confirmed by comparing it with Eq. A1 in <cit.>. Consequently, Eqs. 14 in <cit.>, corresponding to the formulae for the coefficients “K_1” and “K_2”, should also include a minus sign before “C_22 x”.] by simply setting κ_2 = 0 in our new Eq. <ref>.
Following the notation from RP, we then get the following simplified formulae A = 16 (n_1^2 + k_1^2) n_2, B = ((n_1 + 1)^2 + k_1^2)((n_1 + 1)(n_1 + n_2^2) + k_1^2), C_1 = 2((n_1^2 + k_1^2 - 1)(n_1^2 + k_1^2 - n_2^2) - 2 k_1^2 (n_2^2 + 1)), C_2 = 2 k_1 (2(n_1^2 + k_1^2 - n_2^2) + (n_1^2 + k_1^2 - 1)(n_2^2 + 1)), D = ((n_1 - 1)(n_1 - n_2^2) + k_1^2)((n_1 - 1)^2 + k_1^2), T^RP20 = A x_1/B - C_1 x_1 cos(δ_1) + C_2 x_1 sin(δ_1) + D x_1^2 F = (B + D x_1^2 + C_1 x_1) tan(δ_1^-/2) + C_2 x_1, G = (B + D x_1^2 + C_1 x_1) tan(δ_1^+/2) + C_2 x_1, H = B^2 - x_1^2 (C_1^2 + C_2^2 - 2BD - D^2 x_1^2), T^RP20_Δ d = 2 A x_1/(δ_1^+ - δ_1^-) √(H)[arctan(G/√(H)) + N^+ π - . . arctan(F/√(H)) - N^- π] Note that T^RP20 from Eq. <ref> corresponds to the transmittance of the sample with a uniform film thickness, without taking into account substrate absorption. Similarly, T^RP20_Δ d from Eq. <ref> represents the case in which the sample has a certain wedge Δ d > 0. We have confirmed numerically that, when no substrate absorption is considered (i.e., κ_2 = 0), the root mean square error (RMSE) is zero for any wedge parameter between (i) our new exact formula from Eq. <ref>, and (ii) the approximate expression from Eq. <ref>. When κ_2 = 10^-6 and Δ d = 30 nm, the RMSE of the approximated formulae by RP goes up to 0.465%. This finding clearly shows the importance of incorporating substrate absorption into the models. More results for different wedges are summarized in Table <ref>. One can see that, as the wedge increases, the error decreases. This happens because the smaller the oscillation amplitude, the smoother the curve, getting lower peak values. Similarly, we have checked for the case of uniform films (Δ d = 0) that the exact formula Eq. <ref> and Eq. <ref> produce exactly the same numerical results when no substrate absorption is considered. §.§ Reflectance Based on the foundational work of Grebenstikov et al. <cit.>, Minkov et al. <cit.> derived in 1989 an accurate expression for reflectance (which excludes substrate absorption). While Minkov's equation is only applies to films with uniform thickness, an analytical expression for wedge-shaped films can be simply derived if we further assume κ_1 ≪ n_1 (as will be explaine in detail in the next section). One can observe that Minkov's formula, Eq. <ref>, is not completely equivalent to the exact expression derived from Eq. <ref> when substrate absorption is disregarded (κ_2 = 0). However, the numerical discrepancy between them is minimal. For instance, we found a value of discrepancy error as low as 10^-4% when using our simulated sample. The discrepancy arises because, in addition to the assumption that κ_2 = 0, second-order approximations involving n_1 and κ_1 were also utilized in Minkov's expression. However, when considering the weakly absorbing substrate from our simulated sample, the RMSE of Minkov's formula rises up to 0.034%. It should be highlighted that the reflection formula, Eq. <ref>, is considerably less affected by the substrate absorption than the transmission formula from Eq. <ref> (see Table <ref>): As mentioned earlier, the overall reflectance and transmittance can be expressed as the addition of infinite light wave components, which appear due to the inter-reflections at each interface (see Fig. <ref>). When analyzing the overall reflectance, the two main components come from the first air-film interface and the second film-glass interface. 
The third component from the last substrate-glass interface (thus affected by the substrate absorption) is certainly weaker. In contrast, all transmission components account for the beam passing through the whole substrate volume. Therefore, transmittance measurements are notably affected by the existing substrate absorption. The main advantage of Minkov's reflectance formula, and the reason of its popularity <cit.>, is that it provides a simpler and more accessible expression than simply setting κ_2 = 0 in our new complicated formula, Eq. <ref>. Additionally, Minkov's formula facilitates the analysis of reflection for the envelope method, as explored in the next section. Minkov's formula is shown below: A' = ((n_1 - 1)^2 + k_1^2) ((n_1 + n_2)^2 + k_1^2) B_1' = 2 ((n_1^2 + k_1^2 - 1)(n_1^2 + k_1^2 - n_2^2) + 4 k_1^2 n_2) B_2' = 4 k_1 (n_2 (n_1^2 + k_1^2 - 1) - (n_1^2 + k_1^2 - n_2^2)) C' = ((n_1 + 1)^2 + k_1^2) ((n_1 - n_2)^2 + k_1^2) A” = ((n_1 + 1)^2 + k_1^2) ((n_1 + n_2)^2 + k_1^2) B_1” = 2 ((n_1^2 + k_1^2 - 1) (n_1^2 + k_1^2 - n_2^2) - 4 k_1^2 n_2) B_2” = 4 k_1 (n_2 (n_1^2 + k_1^2 - 1) + (n_1^2 + k_1^2 - n_2^2)) C” = ((n_1 - 1)^2 + k_1^2) ((n_1 - n_2)^2 + k_1^2) G' = 64 n_2 (n_2 - 1)^2 (n_1^2 + k_1^2)^2 D” = ((n_1 + 1)^2 + k_1^2) ((n_1 + 1)(n_1 + n_2^2) + k_1^2) E_1” = 2 ((n_1^2 + k_1^2 - 1) (n_1^2 + k_1^2 - n_2^2) - 2 k_1^2 (n_2^2 + 1)) E_2” = 2 k_1 ((n_1^2 + k_1^2 - n_2^2) + (n_2^2 + 1) (n_1^2 + k_1^2 - 1)) F” = ((n_1 - 1)^2 + k_1^2) ((n_1 - 1)(n_1 - n_2^2) + k_1^2) R^Mink89 = A' - (B_1' cos(δ_1) - B_2' sin(δ_1)) x_1 + C' x_1^2/A” - (B_1”cos(δ_1) - B_2”sin(δ_1)) x_1 + C” x_1^2 + G' x_1^2/A” - (B_1”cos(δ_1) - B_2”sin(δ_1)) x_1 + C” x_1^2× 1/D” - (E_1”cos(δ_1) - E_2”sin(δ_1)) x_1 + F” x_1^2 § THE SWANEPOEL APPROXIMATIONS To ensure the completeness of this work, and to facilitate comparison with our results, we will now examine other previously established formulae for T and R that have been widely employed in the literature. As mentioned before, Swanepoel refined an algebraic procedure, commonly known as the envelope method, to directly determine (n_1, κ_1, d). However, this method finds the optical properties only at specific wavelengths λ_i corresponding to the maxima or minima Fabry-Perot interferences. In particular, these critical points are situated near the peaks and valleys of the transmission spectra (see Figs. 5a and 5b). In practice, they are identified by the intersection of the experimental spectra with their respective envelopes (reason why these intersection points are often described as the “tangent points”). This envelope method operates under two assumptions: (i) the substrate does not absorb (κ_2 = 0), and (2) the extinction coefficient of the film is much weaker than the refractive index κ_1^2 ≪ n_1^2. These two conditions are known as the Swanepoel approximations. Note that we analyzed condition (i) only in Section 7. Condition (ii) is typically valid for spectral regions with medium-to-weak absorption, where the transmittance alone indeed contains sufficient information for characterization. This second assumption, however, does not hold in areas of strong absorption, particularly near the optical band gap, where the transmission T clearly decreases, eventually approaching and reaching zero. §.§ Transmission By setting κ_2 = 0 in Eq. 
<ref>, the transmission through a sample with a uniform-thickness film simplifies <cit.> to the following expression: A_0 = 16 n_1^2 n_2 B_0 = (n_1 + 1)^3 (n_1 + n_2^2) C_0 = 2 (n_1^2 - 1) (n_1^2 - n_2^2) D_0 = (n_1 - 1)^3 (n_1 - n_2^2) T^Swan83 = A_0 x_1/B_0 - C_0 x_1 cos(δ_1) + D_0 x_1^2 T_M^Swan83 = A_0 x_1/B_0 - C_0 x_1 + D_0 x_1^2 T_m^Swan83 = A_0 x_1/B_0 + C_0 x_1 + D_0 x_1^2 It must be pointed out that the upper and lower envelopes of the spectrum from Eqs. <ref> and <ref> are found by fixing the sinusoidal component of the transmittance in Eq. <ref> to its maximum or minimum value, that is, cos(δ_1) = ± 1. The set of expressions from Eq. <ref> were first derived in Swanepoel's seminal paper from 1983 <cit.>. When no substrate absorption is considered (κ_2=0), the RMSE between our exact Eq. <ref> and the approximated Eq. <ref> (which assumes κ_1^2 ≪ n_1^2) is 0.039% for our simulated thin-film sample. When substrate absorption is considered, the error increases to 0.472%. For non-uniform thin films with a certain wedge parameter Δ d>0, the expressions for the transmission and envelopes are rewritten as follows: F_0 = A_0 x_1/B_0 + D_0 x_1^2, G_0 = C_0 x_1/B_0 + D_0 x_1^2 I_0^+ = 1 + G_0/√(1 - G_0^2)tan(δ_1^+/2) I_0^- = 1 + G_0/√(1 - G_0^2)tan(δ_1^-/2) T^Swan84_Δ d = λ/4 π n_1 Δ dF_0/√(1 - G_0^2)[ arctan(I_0^+) + N^+ π - arctan(I_0^-) - N^- π] I_M = 1 + G_0/√(1 - G_0^2)tan(2 π n_1 Δ d/λ) I_m = 1 - G_0/√(1 - G_0^2)tan(2 π n_1 Δ d/λ) N_δ = round( δ_1^+ - δ_1^-/2 · 2 π) T^Swan84_M = λ/2 π n_1 Δ dF_0/√(1 - G_0^2) ( arctan(I_M) + N_δ π) T^Swan84_m = λ/2 π n_1 Δ dF_0/√(1 - G_0^2) ( arctan(I_m) + N_δ π) These formulae for wedge-shaped films were initially found in the subsequent seminal paper by Swanepoel from 1984 <cit.>. Please consider that we have also added the correction numbers N^+ and N^- to account for highly tilted films, Δ d > λ/(4 n), and we have also included a correction factor N_δ for the envelopes. An interesting detail is that when Δ d > λ/(4 n) holds, the upper and lower envelopes cross: the lower envelope will stay above the spectra and the upper envelope below (see Fig. 5c). Over a wide spectral range, one can see how the two envelopes alternate sequentially. Therefore, there are particular wavelengths at which the spectrum T^Swan84_Δ d and its two envelopes precisely coincide, as seen in Figs. <ref>c-d. The crossover points are located at λ_cross = 4 n Δ d/N, for N = 1, 2, 3, ..., as discussed in <cit.>. These particular points contain essential information and allow us to extract the optical properties accurately. The RMSE between the exact Eq. <ref> and the approximated Eq. <ref> is 0.465% when substrate absorption is taken into consideration and Δ d = 30 nm. The error from this formula is roughly equivalent to the error from the RP formula in Eq. <ref>. The RP formula (which removes the assumption κ_1^2 ≪ n_1^2) offers improvements only up to the fourth decimal place. Setting κ_2 = 0 and Δ d = 10^-5 nm in Eq. <ref> leads to a transmission error of 0.039%. As expected, the result obtained was identical to that previously obtained by using Eq. <ref> for uniform films. This consistency occurs because Eq. <ref> converges to Eq. <ref> as Δ d → 0. §.§ Reflection Setting κ_1^2 ≪ n_1^2 in Minkov's reflection formula (see Eq.
<ref>), we derive the following simplified expression for uniform films: a_0 = n_1 - 1, b_0 = n_1 + 1 c_0 = n_1 - n_2, d_0 = n_1 + n_2 e_0 = n_1 - n_2^2, f_0 = n_1 + n_2^2 g_0 = 64 n_2 (n_2 - 1)^2 n_1^4 R^RP01 = (a_0 d_0)^2 + (b_0 c_0 x_1)^2 - 2 a_0 b_0 c_0 d_0 x_1 cos(δ_1)/(b_0 d_0)^2 + (a_0 c_0 x_1)^2 - 2 a_0 b_0 c_0 d_0 x_1 cos(δ_1) + g_0 x_1^2/(b_0 d_0)^2 + (a_0 c_0 x_1)^2 - 2 a_0 b_0 c_0 d_0 x_1 cos(δ_1)× 1/b_0^3 f_0 + a_0^3 e_0 x_1^2 - 2 a_0 b_0 c_0 d_0 x_1 cos(δ_1) R^RP01_M/m = (a_0 d_0 ± b_0 c_0 x_1)^2/(b_0 d_0 ± a_0 c_0 x_1)^2 + g_0 x_1^2/(b_0 d_0 ± a_0 c_0 x_1)^2× 1/(b_0^3 f_0 + a_0^3 e_0 x_1^2 ± 2 a_0 b_0 c_0 d_0 x_1) We have verified that these results precisely match the exact formulae from Eq. <ref> when κ_2 = 0 and κ_1^2 ≪ n_1^2. Note that the way to effectively set this last approximation is to write κ_1^2 = 0 when that term is compared to n_1^2 within summations. The upper and lower envelopes defined in Eq. <ref> correspond to the positive (+) and negative (-) signs, respectively. These formulae were partially developed by Minkov et al. <cit.> in 1989. The RMSE of Eq. <ref> is 0.42% for absorbing substrates and 0.02% for transparent substrates. The work <cit.> by RP et al. from 2001 also incorporates equations that address the scenario of weakly-wedge shaped films: L_0 = b_0^2 d_0^2 + a_0^2 c_0^2 x_1^2, L_1 = b_0^3 f_0 + a_0^3 e_0 x_1^2 L_2 = 2 a_0 b_0 c_0 d_0 x_1, L_3 = √(L_2 + L_1) L_4 = √(L_1 - L_2), L_5 = a_0^2 d_0^2 + b_0^2 c_0^2 x_1^2 L_6 = √(L_0 + L_2), L_7 = √(L_0 - L_2) L_8 = tan(δ_1^-/2), L_9 = tan(δ_1^+/2), L_10= tan(δ_1^+ - δ_1^-/4) T_1 = arctan(L_3 L_9/L_4) + N^+π - arctan(L_3 L_8/L_4) - N^-π T_2 = arctan(L_6 L_9/L_7) + N^+π - arctan(L_6 L_8/L_7) - N^-π R^RP01_Δ d = 1 - 2/(δ_1^+ - δ_1^-) · (L_1 - L_0)[ g_0 x_1^2/L_3 L_4 T_1 + . . (L_0 - L_5) (L_1 - L_0) - g_0 x_1^2/L_6 L_7 T_2 ] T_1M = arctan( L_4 L_10/L_3) + N_δπ, T_2M = arctan( L_7 L_10/L_6) + N_δπ T_1m = arctan( L_3 L_10/L_4) + N_δπ, T_2m = arctan( L_6 L_10/L_7) + N_δπ R^RP01_Δ d, M/m = 1 - 4/(δ_1^+ - δ_1^-) (L_1 - L_0)( g_0 x_1^2/L_3 L_4 T_1M/1m. + . (L_0 - L_5)(L_1 - L_0) - g_0 x_1^2/L_6 L_7 T_2M/2m) It should be noted that we have now added the corresponding correction factors to the original reflection formula and its envelopes, accounting also for the strongly-wedge shaped films. Figs. <ref>d display the reflectance for a highly-tilted film. In this spectral range, one can also see one shift of the lower and upper envelopes for the reflectance spectrum. § SUBSTRATE TRANSMISSION AND REFLECTION The thin film is typically deposited on a commercial glass substrate, which is relatively thick (with a known d_2 ∼ 1 mm) and has plane-parallel surfaces, ensuring uniformity. Before performing the optical characterization of the film under study, it is a common practice to first characterize the optical properties of the substrate (n_2, k_2) using transmittance and/or reflectance measurements of the substrate alone, denoted {T_s^exp(λ_i), R_s^exp(λ_i)}. This preliminary step allows us to counteract the influence of the substrate on the overall transmittance and reflectance of the sample, allowing a precise determination of the desired properties of the thin film, namely (n_1, k_1, d_1). Under the approximation k_2 = 0, the thickness of the substrate d_s becomes irrelevant for the calculation of the transmission and reflection intensities, as the glass substrate will not absorb light. 
However, the real refractive index n_2 remains important because it determines the amount of light reflected on the film-to-substrate (interface 02) and substrate-to-air (interface 20), as seen in Fig. <ref>, and described in Section 4.A. Figure <ref> shows a representative diagram of the spectroscopic measurements for the substrate alone. In the simplistic scenario in which κ_2 = 0, n_2 can be uniquely determined <cit.> from either transmission-only or reflection-only measurements by using the following relations: T_s^approx = 2 n_2/n_2^2 + 1, n_2 = 1/T_s^approx + ( 1/(T_s^approx)^2 - 1 )^1/2 R_s^approx = (n_2 - 1)^2/n_2^2 + 1, n_2 = 1 + √(R_s^approx (2 - R_s^approx))/1 - R_s^approx When the glass substrate does absorb light, both transmission and reflection measurements of the substrate are necessary to uniquely determine n_2(λ), k_2(λ), and d_s. The formulae can be found using the same Abele transfer matrix formalism as in Section 4, and then using the spectral averaging method to account for the limited coherence length, L < 2 d_s. However, for the particular case of a simple slab surrounding by air, it becomes more convenient to use the traditional technique of infinite incoherent summation <cit.>: T_s = T_02^2 e^-α_2 d_2×∑_m=0^∞[ R_02 e^- α_2 d_2]^2 m = T_02^2 e^- α_2 d_2/1 - R_02^2 e^-2 α_2 d_2 R_s = R_02 + T_02^2 R_02 e^-2 α_2 d_2×∑_m=0^∞[ R_02 e^- α_2 d_2]^2 m = R_02 + R_02 T_02^2 e^-2 α_2 d_2/1 - R_02^2 e^-2 α_2 d_2 Here, R_02 = |𝐫_02|^2 = |𝐫_20|^2 and T_02 = |𝐭_02|^2 = |𝐭_20|^2 represent the square norm of the Fresnel's coefficients[From Eqs. <ref>, we can see that, at normal incidence, the normed square of these coefficients is the same for the air-to-substrate and substrate-to-air interfaces]. Following the diagram from Fig. <ref>, we can see that the first addend “R_02” from Eq. <ref> corresponds to the first bounce back of the light beam at the air-to-substrate interface. The second addend (for m=0) occurs when: (i) the light passes through the air-to-substrate interface (T_02), (ii) it then gets reflected at the substrate-to-air interface (R_02), and (iii) it is finally transmitted through the substrate-to-air (T_20=T_02) interface bouncing back to the detector above. During the round-trip of the beam within the slab, the Lambert-Beer law describes the amount of light absorption; this absorption is the responsible for the exponential term for m = 0 in the Eq. <ref>. Further interreflections (for m≥ 1) just include other addends that can be reasoned in a similar fashion. These direct formulae that define T_s and R_s as functions of the substrate optical properties have been known for more than a century. However, determining the substrate properties (n_2, k_2, d_2) from T_s and R_s is a more intricate inverse problem. The work by Nichelatti <cit.> in 2002 provided the first-ever analytical expressions for that scenario. The following relations were found: R_02(T_s, R_s) = 2 + T_s^2 - (1 - R_s)^2/2 (2 - R_s) - √(( 2 + T_s^2 - (1 - R_s)^2 )^2 - 4 R_s (2 - R_s))/2 (2 - R_s) k_2 = λ/4 π hln( R_02 T_s/R_s - R_02) n_2 = 1 + R_02/1 - R_02±√(4 R_02/(1 - R_02)^2 - k_2^2) Both the positive and negative signs in Eq. <ref> define correct mathematical solutions to Eq. <ref>. However, only one solution makes physical sense. While it is possible to discriminate the correct solution on a case-by-case basis by using commonly reported values, in our particular spectroscopic analysis, only the positive solution is physically meaningful. 
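The inverse relations above translate directly into code. The following sketch is a minimal round-trip test: the forward model builds T_s and R_s from assumed substrate properties via the incoherent-summation formulae (taking T_02 = 1 - R_02, i.e., a non-absorbing interface, which is the assumption underlying the inversion), and the Nichelatti-type expressions then recover n_2 and k_2. The numerical values of n_2, k_2, d_s and λ are placeholders, not properties of the simulated sample.

```python
import numpy as np

def substrate_properties(T_s, R_s, d_s, wavelength):
    """Invert substrate-only measurements (T_s, R_s) into (n_2, k_2); d_s and
    wavelength must be given in the same length units."""
    # interface reflectance of the bare air/substrate boundary
    A = 2.0 + T_s**2 - (1.0 - R_s)**2
    R02 = (A - np.sqrt(A**2 - 4.0 * R_s * (2.0 - R_s))) / (2.0 * (2.0 - R_s))
    # extinction coefficient from the internal single-pass transmittance
    k2 = wavelength / (4.0 * np.pi * d_s) * np.log(R02 * T_s / (R_s - R02))
    # real refractive index: only the positive square-root branch is physical here
    n2 = (1.0 + R02) / (1.0 - R02) + np.sqrt(4.0 * R02 / (1.0 - R02)**2 - k2**2)
    return n2, k2

def forward_substrate(n2, k2, d_s, wavelength):
    """Direct model: T_s and R_s of a thick slab from the incoherent summation."""
    R02 = ((n2 - 1.0)**2 + k2**2) / ((n2 + 1.0)**2 + k2**2)
    T02 = 1.0 - R02                      # no absorption at the interface itself
    x2 = np.exp(-4.0 * np.pi * k2 / wavelength * d_s)
    T_s = T02**2 * x2 / (1.0 - R02**2 * x2**2)
    R_s = R02 + R02 * T02**2 * x2**2 / (1.0 - R02**2 * x2**2)
    return T_s, R_s

# Round-trip check with placeholder values: n2 = 1.52, k2 = 1e-6, d_s = 1 mm, lambda = 600 nm
T_s, R_s = forward_substrate(1.52, 1.0e-6, 1.0e6, 600.0)   # lengths in nm
print(substrate_properties(T_s, R_s, 1.0e6, 600.0))         # recovers ~ (1.52, 1e-6)
```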
Note that the film can be analyzed in spectral regions of high absorption (when κ_1^2 ≮ n_1^2), as it can be very thin and still permit some transmission. However, using fully absorbing substrates with a high κ_2 in the spectral region of interest is not desirable, as their significant thickness would then prevent any transmission entirely. Therefore, κ_2^2 ≪ n_2^2 is always the practical scenario for the case of the substrate. That implies that n_2 from Eq. <ref> is greater than one for the positive square-root (the correct physical solution) and less than one for the negative square-root (which must be discarded). We can now compare the approximate formulae (with κ_2 = 0) from Eqs. <ref> and <ref> with the exact expressions from Eqs. <ref> and <ref>. It is important to note that for an absorbing substrate with κ_2 = 10^-6, the RMSE of the approximated formulae is 0.3% for transmittance and 0.2% for reflectance. Deriving n_2 from the simpler Eqs. <ref> and <ref> leads to an RMSE of 0.010% and 0.005%, respectively, for our simulated data. § CONCLUDING REMARKS In the present article, we derived the formulae for transmittance and reflectance of a normal-incident quasi-coherent light beam in a sample with a thin film, possibly with a high wedge profile, deposited on a thick absorbing substrate. Our model assumes a homogeneous, isotropic, and non-magnetic film material with linear response. In addition, we review other relevant formulae commonly used in the literature, examining their approximations and the level of accuracy. Rougher approximations lead to formulae that are easier to implement and compute. Then, a trade-off can be found between the accuracy of more complex expressions and the efficiency of approximate ones. Depending on the specific sample and the desired accuracy level, one can choose the most suitable model. For instance, for many conveniently-prepared films with uniform-thickness analyzed in the medium-to-low-absorption region, the simple Swanepoel formulae provide good accuracy with minimal computation time. However, our newly-developed formula proves to be more accurate in other more complicated scenarios: For instance, it is particularly effective when dealing with an `exotic' film surface, modeled with a large wedge parameter, Δ d > λ/(4 n), over a small illumination spot. Additionally, it is beneficial for the analysis across wide spectral ranges that encompass regions with strong film absorption (κ_1^2 ≮ n_1^2) and substrate absorption (κ_2 >0). A review of the formulae analyzed in our work can be found in Table <ref>. These expressions have been written with a consistent notation, tested numerically and analytically, and compared with each other. These formulae have been coded both in Python and in Matlab, and they are publicly available https://drive.google.com/drive/folders/1Mv0p9or5ePowgt37yitNnw2Xe449IFTG?usp=sharinglink. 
Appendix Following the notation from Swanepoel <cit.>, we define the following coefficients: 𝐫_ij = r_ij + i r_ij' r_ij = (n_i^2 - n_j^2) + (κ_i^2 - κ_j^2)/(n_i + n_j)^2 + (κ_i + κ_j)^2 r_ij' = 2(n_iκ_j - n_jκ_i)/(n_i + n_j)^2 + (κ_i + κ_j)^2 R_ij = |𝐫_ij|^2 = r_ij^2 + r_ij'^2 a = 1 + R_01 R_12 x_1^2 + R_01 R_23 x_1^2 x_2^2 + R_12 R_23 x_2^2 + 2(r_01 r_12 (1 + R_23 x_2^2) - r_01' r_12' (1 - R_23 x_2^2)) x_1 cos(ϕ_1) + 2(r_01 r_12' (1 - R_23 x_2^2) + r_01' r_12 (1 + R_23 x_2^2)) x_1 sin(ϕ_1) e = R_01 + R_12 x_1^2 + R_23 x_1^2 x_2^2 + R_01 R_12 R_23 x_2^2 + 2(r_01 r_12 (1 + R_23 x_2^2) + r_01' r_12' (1 - R_23 x_2^2)) x_1 cos(ϕ_1) + 2(r_01 r_12' (1 - R_23 x_2^2) - r_01' r_12 (1 + R_23 x_2^2)) x_1 sin(ϕ_1) b = r_12 r_23 (1 + R_01 x_1^2) - r_12' r_23' (1 - R_01 x_1^2) x_2 + (r_01 r_23 (1 + R_12) - r_01' r_23' (1 - R_12)) x_1 x_2 cos(ϕ_1) + (r_01 r_23' (1 - R_12) + r_01' r_23 (1 + R_12)) x_1 x_2 sin(ϕ_1) f = r_12 r_23 (x_1^2 + R_01) x_2 + r_12' r_23' (x_1^2 - R_01) x_2 + (r_01 r_23 (1 + R_12) + r_01' r_23' (1 - R_12)) x_1 x_2 cos(ϕ_1) + (r_01 r_23' (1 - R_12) - r_01' r_23 (1 + R_12)) x_1 x_2 sin(ϕ_1) c = (r_12 r_23' (1 + R_01 x_1^2) + r_12' r_23 (1 - R_01 x_1^2)) x_2 + (r_01 r_23' (1 + R_12) + r_01' r_23 (1 - R_12)) x_1 x_2 cos(ϕ_1) - (r_01 r_23 (1 - R_12) - r_01' r_23' (1 + R_12)) x_1 x_2 sin(ϕ_1) g = r_12 r_23' (x_1^2 + R_01) x_2 - r_12' r_23 (x_1^2 - R_01) x_2 + (r_01 r_23' (1 + R_12) - r_01' r_23 (1 - R_12)) x_1 x_2 cos(ϕ_1) - (r_01 r_23 (1 - R_12) + r_01' r_23' (1 + R_12)) x_1 x_2 sin(ϕ_1) u = 1 + R_01 R_12 x_1^2 - R_01 R_23 x_1^2 x_2^2 - R_12 R_23 x_2^2 + 2(r_01 r_12 (1 - R_23 x_2^2) - r_01' r_12' (1 + R_23 x_2^2)) x_1 cos(ϕ_1) + 2(r_01 r_12' (1 + R_23 x_2^2) + r_01' r_12 (1 - R_23 x_2^2)) x_1 sin(ϕ_1) w = 1 + R_01 R_12 x_1^2 + 2(r_01 r_12 - r_01' r_12') x_1 cos(ϕ_1) + 2(r_01 r_12' + r_01' r_12) x_1 sin(ϕ_1) h = ((1 + r_01)^2 + r_01'^2) ((1 + r_12)^2 + r_12'^2) ((1 + r_23)^2 + r_23'^2) x_1 x_2 a'_1 = 1 + R_01 R_12 x_1^2 - R_01 R_23 x_1^2 x_2^2 - R_12 R_23 x_2^2 b'_1 = ( r_01 r_12 (1 - R_23 x_2^2) - r'_01 r'_12 (1 + R_23 x_2^2) ) x_1 c'_1 = ( r_01 r'_12 (1 + R_23 x_2^2) + r'_01 r_12 (1 - R_23 x_2^2) ) x_1 § BIBLIOGRAPHY
http://arxiv.org/abs/2409.02835v2
20240904160153
Exploring nonlinear Rashba effect and spin Hall conductivity in Janus MXenes W2COX (X = S, Se, Te)
[ "Arjyama Bordoloi", "Sobhit Singh" ]
cond-mat.mtrl-sci
[ "cond-mat.mtrl-sci" ]
Department of Mechanical Engineering, University of Rochester, Rochester, New York 14627, USA [email protected] Department of Mechanical Engineering, University of Rochester, Rochester, New York 14627, USA Materials Science Program, University of Rochester, Rochester, New York 14627, USA § ABSTRACT Rashba spin-orbit coupling (RSOC) facilitates spin manipulation without relying on an external magnetic field, opening up exciting possibilities for advanced spintronic devices. In this work, we examine the effects of crystal momentum (k) nonlinearity and anisotropy on the conventional Rashba effect, with a particular focus on their impact on the spin Hall conductivity (SHC) in a newly predicted family of 2D Janus materials, W_2COX (X = S, Se, Te). Using first-principles density functional theory calculations, we confirm the dynamical and mechanical stability of the studied 2D materials. Strikingly, this materials family exhibits pronounced nonlinear Rashba spin splitting at the Γ point of Brillouin zone near the Fermi level, which cannot be adequately described by the linear-k Rashba model. Therefore, third-order momentum contributions (k^3) must be incorporated into the Rashba Hamiltonian. Our analysis reveals that among the studied systems, W_2COS exhibits the highest k^3 contribution of -45.9 eV Å^3, despite having the lowest linear Rashba constant. A detailed analysis of electronic structure reveals topological nontrivial behaviour in these 2D materials, yielding sizable SHC that is primarily governed by the nonlinear Rashba effect. Notably, these materials also exhibit large spin Hall angle (0.018 – 2.5), which is comparable to that of in bulk topological insulators like Bi_2Se_3 and Bi_2Te_3, and surpassing those in narrow bandgap bulk semiconductors GeTe and SnTe, as well as heavy metals such as Pt. Sizable SHC, large spin Hall angles, and the ability to tune SHC via electric fields without altering the topological properties, rooted in the crystal field splitting, underscore the potential of these materials for spintronic applications. Exploring nonlinear Rashba effect and spin Hall conductivity in Janus MXenes W_2COX (X = S, Se, Te) Sobhit Singh 0000-0002-5292-4235 September 9, 2024 =================================================================================================== § INTRODUCTION Two-dimensional topological insulators (TIs), also known as quantum spin Hall (QSH) insulators, are characterized by an insulating bulk state combined with conducting helical edge states. These helical edge states enable counter-propagating motion of electrons with opposite spins, creating a “two-lane highway" that allows for dissipationless electron transport  <cit.>. This unique property makes 2D TIs highly promising for low-power and multifunctional spintronic devices. Additionally, large band-gap 2D TIs are crucial for realizing topological superconductivity and Majorana fermions <cit.>. In recent years, there has been a surge in theoretical studies aimed at proposing efficient 2D TIs for spintronic applications, though only a few have been experimentally synthesized so far <cit.>. In the quest for experimentally viable 2D TIs, Weng et al. 
introduced a novel class of 2D materials known as M_2CO_2 (where M = Mo, W, Cr) <cit.>.This class is particularly intriguing due to its close association with the experimentally synthesized 2D MXenes, which are derived from the selective chemical etching of MAX phases—M_n+1AX_n (n=1, 2, 3,...), where M is a transition metal, A is a group 12–14 element, and X is carbon (C) or nitrogen (N) <cit.>. MXene surfaces are chemically active and commonly terminated with atoms or groups such as fluorine (F), oxygen (O), or hydroxyl (OH) <cit.>. Therefore, M_2CO_2 holds promise for experimental realization, potentially advancing the development of 2D TIs. In these materials, band inversion arises from crystal field splitting, while the band gap is opened by spin-orbit coupling (SOC) <cit.>. Among the M_2CO_2 monolayer family, W_2CO_2 stands out with the largest theoretically predicted band gap of 0.194 eV <cit.>, highlighting its potential applications in semiconductor spintronic technology. However, for a material to be suitable for spintronic device design, it is crucial that its electronic properties can be tuned via external perturbations, such as electric fields or strain, to enhance device functionality. Unlike intrinsic SOC, as found in W_2CO_2 monolayer, which lacks such tunability, Rashba spin-orbit coupling (RSOC) offers greater flexibility, making it more suitable for device applications  <cit.>. Therefore, in this work, we aim to induce RSOC in centrosymmetric W_2CO_2 by breaking its spatial-inversion symmetry through the substitution of one oxygen atom with a chalcogen atom X (X = S, Se, Te). This results in a new series of 2D Rashba materials, W_2COX. The Rashba effect, arising from the broken inversion symmetry, introduces an additional degree of freedom, thereby enhancing the potential of these materials for spintronic devices with more flexible and tailored design possibilities. In this work, we start with the most stable configuration of W_2CO_2 and substitute one of the oxygen atoms with a similar chalcogen atom X (S, Se, Te). Using first-principles methods, we investigate how these substitutions influence the electronic and topological properties compared to the parent compound. Our findings reveal that the substitution breaks inversion symmetry, leading to noticeable Rashba spin splitting in each material. Interestingly, due to the C_3v point group symmetry and deviations from a purely parabolic band structure, the standard linear in crystal momentum (k) Rashba-Bychkov model alone cannot adequately explain the observed Rashba spin splitting in these materials. Therefore, third-order momentum contributions (k^3) need to be incorporated into the Rashba Hamiltonian. We observe that all three monolayers show significant nonlinear RSOC, with W_2COS exhibiting the highest k^3 contribution (-45.9 eVÅ^3) in its conduction band near the Fermi level (E_F). In the context of topological properties, while W_2COS and W_2COSe monolayers remain topologically nontrivial (Z_2 = 1) after the substitution, W_2COTe monolayer becomes topologically trivial (Z_2 = 0). Further, we examine the spin Hall conductivity (SHC) for these monolayers, which is crucial given their non-trivial topological features and significant RSOC, making them promising candidates for achieving sizable SHC. We find W_2COS monolayer exhibits the highest transverse spin Hall conductivity, approximately 474 (ħ/e) S/cm^-1 near E_F, attributed to dominant k^3 contributions. 
Interestingly, despite showing a lower SHC compared to heavier metals like platinum <cit.> and tantalum <cit.>, W_2COX exhibits relatively larger spin Hall angles (0.018–2.5 at E_F), highlighting their potential for designing efficient spintronic devices. Additionally, since the band inversion in these materials is primarily due to crystal field splitting, it is also expected that the SHC can potentially be modulated by applying electric fields, without altering their topological electronic properties. § COMPUTATIONAL DETAILS First principles density functional theory (DFT) calculations <cit.> were performed using the Vienna Ab initio Simulation Package (VASP) <cit.> on free-standing monolayers of W_2COX (X=S, Se, Te). A vacuum thickness of 15 Å was maintained along the z-direction to avoid interlayer interactions. The projector augmented-wave (PAW) method <cit.> was employed with a kinetic energy cutoff set at 650 eV for the plane wave basis set. The exchange-correlation functional was treated within the generalized gradient approximation as parameterized by Perdew, Burke, and Ernzerhof for solids (PBEsol) <cit.>. All the lattice parameters and inner atomic coordinates were fully optimized until the ground state was reached, with an electronic energy convergence criterion of 10^-7 eV and residual Hellmann-Feynman forces of less than 10^-3 eV/Å per atom. A Γ-centered k-mesh of size 12 × 12 × 1 was used for both geometrical optimization and electronic structure calculations. Contributions from six valence electrons each from W (6s^25d^4), O (2s^22p^4), and S (3s^23p^4)/Se (4s^24p^4)/Te (5s^25p^4) were considered in the PAW pseudopotential, while four electrons were considered for C (2s^22p^2). The dynamical stability of the studied materials was tested using phonon calculations in 4 × 4 × 1 supercells within the finite displacement method in VASP. The post-processing of phonon calculations was performed using PHONOPY <cit.>. The elastic constants C_ij were computed using the stress-strain relationship as implemented in VASP, ensuring the convergence of the elastic constants with varying k-mesh sizes. The MechElastic Python package <cit.> was used for detailed analysis of all elastic constants and for checking the mechanical stability of the systems. SOC was considered in all calculations except the phonon calculations. VASPKIT <cit.> and the PyProcar package <cit.> was used for post-processing of the electronic structure data. Topological properties were investigated using Wannier tight-binding Hamiltonians with projections from the s- and d-orbitals of W, and the s- and p-orbitals of C, O, and Te atoms, computed using Wannier90 <cit.>. Subsequent calculations of Z_2 topological invariants from the Wannier tight-binding Hamiltonian were performed using WannierTools <cit.>. SHC for each of the investigated systems was also computed from the Maximally Localized Wannier Functions (MLWF), using a dense k-mesh of 401 × 401 × 1 as implemented in WannierTools <cit.>. The SHC values were converged to within less than 1% by varying the k-mesh size. § RESULTS AND DISCUSSIONS §.§ Crystal structure and dynamical and mechanical stability Figure <ref> illustrates the crystal structure of W_2COX (where X = S, Se, or Te) monolayer. These compounds are derived from the parent W_2CO_2 structure by substituting one oxygen atom with a chalcogen atom X (X = S, Se, Te). 
Previous studies <cit.> have shown that, among all possible configurations resulting from the termination of the transition metal atom site in W_2C with oxygen atoms, the BB-terminated configuration is energetically the most favorable one. Therefore, we start from the most stable BB-terminated configuration and substitute one oxygen atom with a chalcogen atom (X) at the B-site, as shown in Fig. <ref>(a), to break the spatial-inversion symmetry. Our investigation reveals that due to this substitution, all three systems—W_2COX (X = S, Se, Te)—crystallize with non-centrosymmetric p3m1 symmetry (layer group number 69), in contrast to W_2CO_2, which crystallizes in centrosymmetric p3̅m1 symmetry (layer group number 72). This is consistent with some of the the previously reported literature on similar type of systems including Mo_2COX (X=S, Se, Te) <cit.>. The optimized lattice parameters and relevant bond lengths for each of these systems are provided in Table <ref> in Appendix <ref>. For any theoretically predicted material, it is crucial to investigate its dynamical and mechanical stability. Figures <ref>(a), <ref>(b), and <ref>(c) represent the phonon band structure of W_2COS, W_2COSe, and W_2COTe monolayers, respectively, computed along the high-symmetry directions in the Brillouin zone. No imaginary phonon modes are present in the phonon band structure, except for small U-shaped features observed in the first acoustic branch near the Γ point, which is a characteristic of layered 2D systems <cit.>. This verifies the dynamical stability of all three monolayers. Next, we ensure the mechanical stability of these materials. We compute the elastic constants C_ij for each material and check if they satisfy the Born-Huang mechanical stability criteria <cit.>. According to these criteria a crystal is mechanically stable if it has the lowest Gibbs free energy in its relaxed state, i.e., in the absence of external loads, compared to any other state obtained by applying a small strain. This means the elastic stiffness matrix, C_ij, must be positive definite, with all eigenvalues positive, and the matrix must be symmetric. Our calculations reveal that all the three studied systems meet the necessary conditions for mechanical stability, with positive values satisfying the Born-Huang mechanical stability criteria for 2D hexagonal structures, i.e., C_11 + C_12 > 0 and C_11 - C_12 > 0. We also calculate the mechanical constants including Young's modulus, 2D layer modulus and shear modulus which are listed in table <ref> in Appendix <ref>. The values are comparable to those of other standard 2D materials like graphene and h-BN <cit.>. §.§ Electronic band structure Figure <ref> illustrates the electronic band structures of W_2COX (X=S, Se, Te) monolayers with projections from the d_x^2-y^2+d_xy, d_xz+d_yz, and d_z^2 orbitals of W. Notably, the W-5d orbitals are under the influence of a triangular prismatic crystal field of the C_3v point group, as shown in Fig. <ref>(c). The band structures are computed along the high symmetry directions of the Brillouin zone with the dashed line representing the Fermi level. Without SOC, all three systems exhibit semimetallic behavior. In W_2COS and W_2COSe, the valence band maximum (VBM) and conduction band minimum (CBM) touch each other at the Γ point at the Fermi level (E_F). However, in W_2COTe, the CBM forms an electron pocket at the Γ point, and the VBM forms a hole pocket near M point of the Brillouin zone. 
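Returning briefly to the Born-Huang analysis above: for a 2D hexagonal sheet the stability test and the derived in-plane moduli reduce to a few closed-form relations. The sketch below encodes these generic 2D-hexagonal expressions only; the C_11 and C_12 values in the example are illustrative placeholders, while the computed constants for W_2COX are those listed in Table <ref> of Appendix <ref>.

```python
import numpy as np

def is_stable_2d_hexagonal(c11, c12, c66=None):
    """Born-Huang criteria for a 2D hexagonal lattice: C11 + C12 > 0 and C11 - C12 > 0.
    For hexagonal symmetry C66 = (C11 - C12)/2, so it can be passed in as an
    optional consistency check on the computed stiffness matrix."""
    stable = (c11 + c12 > 0.0) and (c11 - c12 > 0.0)
    if c66 is not None:
        stable = stable and c66 > 0.0 and np.isclose(c66, 0.5 * (c11 - c12), rtol=0.05)
    return stable

def moduli_2d_hexagonal(c11, c12):
    """In-plane moduli of a 2D hexagonal sheet from C11 and C12 (units of N/m)."""
    youngs  = (c11**2 - c12**2) / c11   # in-plane Young's modulus
    layer   = 0.5 * (c11 + c12)         # 2D layer (area) modulus
    shear   = 0.5 * (c11 - c12)         # shear modulus C66
    poisson = c12 / c11                 # in-plane Poisson ratio
    return youngs, layer, shear, poisson

# Placeholder numbers in N/m (not the computed W2COX values):
print(is_stable_2d_hexagonal(c11=250.0, c12=70.0))
print(moduli_2d_hexagonal(c11=250.0, c12=70.0))
```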
Focusing specifically on the orbital contributions near the Fermi level, the primary contributions come from the W-5d orbitals. Even without SOC, the orbital-projected band structure indicates a band inversion between the d_xz+d_yz and d_z^2 orbitals for all three materials. Furthermore, as we move from S to Se to Te, the bands originating from the d_xz+d_yz and d_x^2-y^2+d_xy orbitals are pushed up in energy near the Γ point. This trend is primarily due to the decrease in electronegativity from S to Se to Te with increasing atomic number. In contrast, the d_z^2 orbitals are not significantly affected, largely because they do not interact directly with the ligands. The inclusion of SOC induces spin splitting in the electronic band structures. Due to the broken inversion symmetry, W_2COX monolayers exhibit a Rashba-type spin splitting at the Γ point, which was absent in the parent compound W_2CO_2. For practical spintronic device design, it is crucial to accurately determine the Rashba parameter of the material, as it provides a direct estimation of the spin precession length essential for device design. For the investigated systems, if we focus on the enlarged view of the Rashba bands (Figure 2) near the Γ point, the bands exhibit a non-quadratic behaviour. While most of the recent literature often overlooks this feature and uses the linear (in crystal momentum k) Rashba model to fit such bands, there are studies emphasizing the necessity of including higher-order terms in the Rashba Hamiltonian for systems with certain high symmetries <cit.>. This is crucial for accurately capturing spin splitting and avoiding underestimation or overestimation of the Rashba coupling strength. As reported in literature <cit.>, the Rashba spin splitting in materials with C_3v point group symmetry, which have non-parabolic bands, cannot be adequately described by the linear-k Rashba-Bychkov model alone. To accurately capture the Rashba spin splitting present in these materials, terms containing cubic orders of k need to be incorporated into the Rashba Hamiltonian. Hence, in the case of C_3v symmetry, the Rashba Hamiltonian takes the form Ĥ_R(k) = (α_Rk + α_3Rk^3)(cosϕσ_y - sinϕσ_x) + α_3R'^2 k^3 cos 3ϕσ_z , where ϕ = cos^-1(k_x/k) with k = √(k_x^2 + k_y^2). The k_x and k_y directions are denoted in the inset of Figs. <ref>(c,d). Furthermore, α_R represents the linear Rashba contribution, α_3R represents the isotropic third-order Rashba contribution, and α_3R' is the anisotropic third-order Rashba contribution. The energy eigenvalues of the full Hamiltonian for 2D free electron gas including Rashba term Ĥ(k) = Ĥ_0(k) + Ĥ_R(k) ≡k^2/2m + Ĥ_R(k) are E_±(k) = k^2/2m±√((α_Rk + α_3Rk^3)^2 + α_3R'^2 k^6 cos^2 3ϕ) . To determine the accurate values of these Rashba terms, we adopt a similar approach to that used by Vajna et al. <cit.>. First, we compute the square of the energy splitting of the two eigenvalues of the Rashba Hamiltonian, i.e., Δ E = [E_+(k) - E_-(k)] / 2. The square of this energy difference attains the form Δ E(k)^2 = (α_Rk + α_3Rk^3)^2 + α_3R'^2 k^6 cos^2 3ϕ . Figure <ref>(a) represents the variation of Δ E(k)^2 with respect to k, plotted along both the k_x and k_y directions for the representative case of W_2COTe monolayer. Similar plots for W_2COS and W_2COSe monolayers are included in the Appendix <ref>. The magenta line represents the data plotted along the x direction, while the dark yellow dashed line represents the data plotted along the y direction. 
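The extraction of (α_R, α_3R, α_3R') used for these plots can be cast as a small sequential least-squares problem, following the same three steps described above: a linear fit of the splitting close to Γ, an isotropic cubic fit of Δ E(k)^2 along k_y (where cos 3ϕ = 0), and an anisotropic fit along k_x (where cos 3ϕ = 1). The sketch below is illustrative only; the data-file names and the small-k window are placeholders standing in for the DFT-computed splitting.

```python
import numpy as np
from scipy.optimize import curve_fit

# Placeholder two-column files: |k| in 1/Angstrom and the splitting Delta E(k) in eV,
# sampled along k_x (phi = 0) and along k_y (phi = pi/2).
k_x, dE_x = np.loadtxt("splitting_kx.dat", unpack=True)
k_y, dE_y = np.loadtxt("splitting_ky.dat", unpack=True)

# Step 1: linear Rashba constant from the small-k region, Delta E ~ alpha_R * k.
small = np.abs(k_x) <= 0.03          # window quoted in the text for X = S (placeholder)
alpha_R = np.polyfit(k_x[small], dE_x[small], 1)[0]

# Step 2: isotropic cubic term along k_y, where cos(3*phi) = 0 removes the anisotropic part:
# Delta E(k)^2 = (alpha_R k + alpha_3R k^3)^2
def dE2_ky(k, alpha_3R):
    return (alpha_R * k + alpha_3R * k**3) ** 2

(alpha_3R,), _ = curve_fit(dE2_ky, k_y, dE_y**2, p0=[-1.0])

# Step 3: anisotropic cubic term along k_x, where cos(3*phi) = 1:
# Delta E(k)^2 = (alpha_R k + alpha_3R k^3)^2 + alpha_3Rp^2 k^6
def dE2_kx(k, alpha_3Rp):
    return (alpha_R * k + alpha_3R * k**3) ** 2 + (alpha_3Rp ** 2) * k**6

(alpha_3Rp,), _ = curve_fit(dE2_kx, k_x, dE_x**2, p0=[0.1])

print(f"alpha_R   = {alpha_R:.3f} eV A")
print(f"alpha_3R  = {alpha_3R:.2f} eV A^3")
print(f"alpha_3R' = {alpha_3Rp:.2e} eV A^3")
```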
For all three materials, the variations of Δ E(k)^2 along the k_x and k_y directions overlap on top of each other, indicating negligible anisotropy in the electronic band structure between the k_x and k_y directions. This observation aligns well with our results of the anisotropic third-order Rashba term. We calculate the linear-k Rashba term α_R by fitting the linear Rashba-Bychkov model to the DFT-computed data, as shown in Fig. <ref>(b). As can be seen from Figs. <ref>(b) and <ref>(b), the linear Rashba model fits well with the DFT-computed results only for | k_x |≤ 0.03 Å^-1, 0.09 Å^-1, and 0.07 Å^-1 for X = S, Se, and Te, respectively. Beyond these values, higher-order terms are required to accurately fit the DFT-computed bands. The fitting yields α_R values of 0.45, 0.59, and 0.85 eVÅ for X = S, Se, and Te, respectively. Next, we determine the isotropic cubic contribution α_3R by fitting the DFT data along the k_y direction, setting k_x = 0 and keeping α_R constant as obtained from the linear Rashba model [see Figure <ref>(c)]. In this case, ϕ = π/2. The fitting yields α_3R values of -45.9, -3.7, and -7.3 eVÅ^3 for X = S, Se, and Te, respectively. Strikingly, the CBM of W_2COS has the highest cubic contributions, which closely correlates with our spin Hall conductivity results discussed below. Finally, we compute the anisotropic contribution by fitting the DFT data along the k_x direction with k_y = 0, resulting in ϕ =0 [see Figure <ref>(d)]. We keep the values of α_R and α_3R fixed at the previously obtained values and observe that the anisotropic contributions are nearly zero, i.e., on the order of 10^-4. This agrees well with our observations in Figure <ref>(a), which shows the absence of anisotropy in electronic band structure. Another significant outcome of including SOC in the electronic band structure is the emergence of a gap between the VBM and CBM in all three cases. With the opening of this band gap, W_2COS and W_2COSe monolayers acquire a semiconducting nature, each with an indirect band gap of 36 meV and 24 meV, respectively. In contrast, W_2COTe retains its semimetallic nature even with SOC included, due to the presence of the electron and hole pockets. While the opening of this band gap along with the band inversion observed in the electronic band structure are indicative of non-trivial topology, confirmation through the calculation of the Z_2 topological invariant is essential, which is discussed in section <ref>. Figure <ref> shows constant energy 2D contour plots of spin texture calculated for all three systems in the k_x-k_y plane centered at the Γ point. The helical spin texture, typical of 2D Rashba systems, confirms the presence of the 2D Rashba effect in these materials. Strikingly, all three systems exhibit contributions from the S_z component along with S_x and S_y components. This suggests the presence of cubic terms of k in the Rashba Hamiltonian, leading to a hexagonal warping effect in the spin texture. The warping effect is relatively less pronounced in W_2COSe due to smaller values of the cubic Rashba term α_3R, whereas it becomes more prominent in the other two systems with an increase in the value of α_3R. §.§ Topological properties To analyze the topological properties of the studied configurations, we employ the Z_2 topological invariant calculated using the concept of Wannier charge centers (WCC) <cit.>. 
The evolution of the WCCs of maximally localized Wannier functions along the time-reversal invariant plane k_z = 0 is used to characterize their topological nature. Using WannierTools, we compute the Z_2 topological invariants for all three systems. Interestingly, W_2COS and W_2COSe exhibit topologically nontrivial behavior with Z_2 = 1, whereas W_2COTe exhibits topologically trivial behavior with Z_2 = 0. However, the parent compound W_2CO_2 shows topologically nontrivial behavior <cit.>. To understand the origin of the topological behavior of these materials, we analyze the energy band diagrams of these systems near the Fermi level. From the orbital-projected band structures of these monolayers, it is evident that W-5d orbitals dominate near the Fermi level in all three materials. These W-5d orbitals are under the influence of a triangular prismatic crystal field of the C_3v point group, as shown in Fig. <ref>(c). This crystal field splitting causes the doubly degenerate d_xz+d_yz orbitals to be higher in energy than the d_z^2 orbitals, with the Fermi level lying between these two energy levels. The d_x^2-y^2+d_xy orbitals, which are also doubly degenerate, on the other hand, are located well below the Fermi level. Hence, while analyzing the topological properties of these materials, we particularly focus on the d_xz+d_yz and d_z^2 orbitals. Since two W atoms are present in these systems, the W d-orbitals form bonding (+) and antibonding (-) states. As shown in Figure <ref>, a band inversion occurs between the antibonding d_z^2 orbitals and the bonding d_xz+d_yz orbitals, leading to the topologically nontrivial nature of W_2COS and W_2COSe. The further inclusion of SOC introduces a spin degree of freedom, which opens a band gap around the Fermi level, as depicted in the band structure plots (Figure <ref>). However, despite W_2COTe having a similar orbital-projected band structure to W_2COS and W_2COSe, it exhibits topologically trivial behavior and a semimetallic nature. This is likely due to the relatively stronger SOC in W_2COTe compared to the other two materials. The competitive interplay between SOC and crystal field splitting in W_2COTe probably leads to the loss of its topologically nontrivial characteristics <cit.>. . §.§ Spin Hall Conductivity Next, we compute the spin Hall conductivity (SHC) for studied monolayers. The spin Hall effect refers to the generation of a transverse pure spin current, perpendicular to the plane of charge and spin currents, induced by a longitudinal charge current <cit.>. This phenomenon is crucial for leveraging electron spins in information processing, a key objective in semiconductor spintronics. Rashba materials are particularly significant in this context due to their potential for electrically and non-volatilely controlling spins, enabling integration of storage, memory, and computing functionalities. Additionally, compared to heavy metals like platinum and tantalum, commonly used as spin-current generators, topological insulators are expected to consume less energy due to minimal electron backscattering, while achieving comparable spin Hall angles <cit.>. In this context, W_2COX (X = S, Se) monolayers are particularly significant because they not only possess non-trivial topological features but also exhibit significant RSOC, making them promising candidates for achieving sizable SHC. 
According to linear response theory, the SHC, denoted as σ_ij^k, is a tensor that relates the electric field E_j to a spin current density J_i^k, expressed as J_i^k = σ_ij^k E_j. The SHC is a third-rank tensor, with indices i and j representing perpendicular directions. Although, in general, the SHC tensor should have 27 components, symmetry constraints reduce the number of non-zero elements based on selection rules. For the p3m1 layer group, as determined using the Bilbao Crystallographic Server <cit.>, the SHC tensor has only six non-zero components: σ_xx^x, σ_yy^x, σ_xy^y, σ_yx^y, σ_xy^z, and σ_yx^z. Among these, only two are independent as the components are interrelated as σ_xx^x = -σ_yy^x = -σ_xy^y = -σ_yx^y and σ_xy^z = -σ_yx^z. Figure <ref> shows the SHC values in units of (ħ/e)S/cm^-1 for all three systems as a function of chemical potential near E_F, with the broadening parameter σ varying from 2 meV to 20 meV. However, lower broadening parameters in the range of 2-8 meV are anticipated to be more reasonable for these systems considering their small bandgap values. Our calculations reveal that the transverse component σ_xy^z of the SHC is 128.4, 127.5, and -90.2 (ħ/e) S/cm^-1 for W_2COS, W_2COSe, and W_2COTe monolayers, respectively, at E_F. This value remains robust against changes in the broadening parameter within the range of 2-8 meV. These values of SHC at E_F are higher compared to some other 2D MXene, such as V_2C ( -25 (ħ/e) S/cm^-1) and Nb_2C (-46 (ħ/e) S/cm^-1), and are slightly lower than Ta_2C (-187 (ħ/e) S/cm^-1) monolayers <cit.>. The longitudinal SHC component σ_xx^x, on the other hand, is nearly zero at the Fermi level for all three materials. Notably, for W_2COS, σ_xy^z spikes to 474.2 (403.9) (ħ/e)S/cm^-1 at 0.08 eV above the Fermi level, corresponding to a broadening parameter of 2 (4) meV. On the other hand, W_2COTe shows a σ_xy^z value of approximately -338 (ħ/e)S/cm^-1 below the Fermi level, likely due to the presence of Rashba bands. For W_2COSe, the SHC peaks at the Fermi level. Despite its low intrinsic SOC and linear Rashba effect, W_2COS exhibits the highest SHC value due to its largest cubic Rashba term among the three materials. Analysis of the orbital-projected electronic band structure suggests that the partially occupied W:5d orbitals predominantly contribute to the SHC in these materials. Although the SHC values in these semiconducting/semimetallic materials are lower than those of heavy metals like platinum or tantalum, their spin Hall angles (SHA) are relatively high and comparable to the theoretically predicted values for some standard topological insulators. This makes them promising candidates for spintronic device design. Due to the lack of experimental data for the exact calculation of spin Hall angles (SHA), we estimated the SHA for all three studied systems using ohmic conductivity (σ_yy) values ranging from 10^2 to 10^4 S/cm, which are typical for semiconducting materials <cit.>. Applying the formula θ_SH = 2e/ħ(|σ_xy^z/σ_yy|), we observed that the SHA for W_2COTe ranges from 1.8 to 0.018, while for W_2COS and W_2COSe it varies from 2.5 to 0.025. While these values are comparable to those of other bulk topological insulators such as Bi_2Se_3 and Bi_2Te_3 <cit.> (as shown in Figure <ref>), they are higher than those of narrow bandgap bulk semiconductors GeTe and SnTe <cit.>, as well as heavy metals such as Pt <cit.>. 
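The spin-Hall-angle estimate just quoted is a one-line ratio once σ_xy^z is expressed in (ħ/e) S/cm, since the (2e/ħ) prefactor then reduces to a factor of two. A minimal sketch, using the SHC values at E_F from the text and the assumed range of ohmic conductivities:

```python
import numpy as np

def spin_hall_angle(shc, sigma_yy):
    """theta_SH = (2e/hbar) * |sigma^z_xy / sigma_yy|; with sigma^z_xy in
    (hbar/e) S/cm the prefactor reduces to a factor of 2 and the result is dimensionless."""
    return 2.0 * np.abs(shc / sigma_yy)

# Transverse SHC at E_F quoted in the text, in (hbar/e) S/cm.
shc_at_ef = {"W2COS": 128.4, "W2COSe": 127.5, "W2COTe": -90.2}
# Assumed range of longitudinal (ohmic) conductivities for semiconducting samples, in S/cm.
sigma_range = np.array([1.0e2, 1.0e4])

for name, shc in shc_at_ef.items():
    low, high = spin_hall_angle(shc, sigma_range[::-1])
    print(f"{name}: theta_SH between {low:.3f} and {high:.3f}")
```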
Moreover, since the band inversion in these materials is primarily due to crystal field splitting, the SHC can likely be tuned by applying electric fields without altering their topological properties. The relatively high SHA, combined with the tunable SHC, significantly enhances the potential of these materials for practical spintronic applications. §.§ Conclusions In conclusion, we have investigated the electronic and topological properties of a newly predicted class of 2D Rashba materials, W_2COX (X = S, Se, Te), derived from the centrosymmetric W_2CO_2 by substituting an oxygen atom with a chalcogen atom X, thus breaking inversion symmetry. This symmetry breaking introduces Rashba spin splitting, adding a new degree of freedom for manipulating electronic properties of these materials, which enhances the potential of these materials for spintronic applications. Our first-principles DFT calculations confirm that these materials are dynamically and mechanically stable, with mechanical properties comparable to well-known 2D materials like graphene and h-BN. Without SOC, all three systems are semimetallic. However, SOC opens a band gap, rendering W_2COS and W_2COSe topologically nontrivial (Z_2 = 1) semiconductors, while W_2COTe remains topologically trivial (Z_2 = 0). The broken inversion symmetry leads to pronounced nonlinear Rashba spin splitting in all three systems. Notably, despite having the lowest linear Rashba constant, W_2COS exhibits the highest k^3 contribution of -45.9 eVÅ^3 near the Fermi level (E_F). This cubic term also causes a warping effect in the spin textures of these materials. Furthermore, the studied monolayers exhibit substantial SHC, with W_2COS achieving the highest SHC of approximately 474 (ħ/e) S/cm^-1 around 80 meV above the Fermi level owing to dominant k^3 contributions. While the SHC of W_2COX materials is lower than that of heavy metals like platinum and tantalum, these materials exhibit relatively large spin Hall angles (0.018-2.5 at E_F), comparable to other bulk topological insulators and exceeding some narrow bandgap semiconductors. The combination of high SHC, large spin Hall angles, and the ability to tune SHC via electric fields without altering the inherent topological properties, which arise from crystal field splitting, highlights the potential of W_2COX materials for advanced spintronic applications. § ACKNOWLEDGEMENTS Authors acknowledge support from the University Research Awards at the University of Rochester. SS is supported by the U.S. Department of Energy, Office of Science, Office of Fusion Energy Sciences, Quantum Information Science program under Award No. DE-SC-0020340. Authors thank the Pittsburgh Supercomputer Center (Bridges2) supported by the Advanced Cyberinfrastructure Coordination Ecosystem: Services & Support (ACCESS) program, which is supported by National Science Foundation grants #2138259, #2138286, #2138307, #2137603, and #2138296. § CRYSTALLOGRAPHIC PARAMETERS AND MECHANICAL CONSTANTS Appendix <ref> includes the list of crystallographic parameters and mechanical constants of W_2COX (X = S, Se, Te) monolayers. § NONLINEAR RSOC IN W_2COS AND W_2COSE Appendix <ref> presents the fitting of DFT data to various functions to determine the linear and third-order Rashba parameters for W_2COS and W_2COSe monolayers (Figure <ref>).
http://arxiv.org/abs/2409.02251v1
20240903192446
NoiseAttack: An Evasive Sample-Specific Multi-Targeted Backdoor Attack Through White Gaussian Noise
[ "Abdullah Arafat Miah", "Kaan Icer", "Resit Sendag", "Yu Bi" ]
cs.CV
[ "cs.CV", "cs.AI", "cs.CR", "cs.LG" ]
§ ABSTRACT Backdoor attacks pose a significant threat when using third-party data for deep learning development. In these attacks, data can be manipulated to cause a trained model to behave improperly when a specific trigger pattern is applied, providing the adversary with unauthorized advantages. While most existing works focus on designing trigger patterns (both visible and invisible) to poison the victim class, they typically result in a single targeted class upon the success of the backdoor attack, meaning that the victim class can only be converted to another class based on the adversary’s predefined value. In this paper, we address this issue by introducing a novel sample-specific multi-targeted backdoor attack, namely NoiseAttack. Specifically, we adopt White Gaussian Noise (WGN) with various Power Spectral Densities (PSD) as our underlying triggers, coupled with a unique training strategy to execute the backdoor attack. This work is the first of its kind to launch a vision backdoor attack with the intent to generate multiple targeted classes with minimal input configuration. Furthermore, our extensive experimental results demonstrate that NoiseAttack can achieve a high attack success rate (ASR) against popular network architectures and datasets, as well as bypass state-of-the-art backdoor detection methods. Our source code and experiments are available at https://github.com/SiSL-URI/NoiseAttack/tree/main. § INTRODUCTION Recent advancements in artificial intelligence (AI) technologies have revolutionized numerous applications, accelerating their integration into everyday life. Deep Neural Networks (DNNs) have been widely applied across various domains, including image classification <cit.>, object detection <cit.>, speech recognition <cit.>, and large language models <cit.>. DNN models often require vast amounts of training data to address diverse real-world scenarios, but collecting such data can be challenging. Leveraging various datasets during DNN training significantly enhances the models' performance and adaptability across a wide range of tasks. However, this necessity for diverse data sources introduces the risk of backdoor attacks <cit.>. Malicious actors can exploit this by embedding hidden backdoors in the training data, enabling them to manipulate the model's predictions. The danger of these attacks lies in their potential to trigger harmful behaviors in the deployed model, potentially disrupting system operations or even causing system failures. Given the serious threat posed by backdoor attacks to DNNs, a variety of strategies and techniques have been explored. Early backdoor attacks employed visible patterns as triggers, such as digital patches <cit.> and watermarking <cit.>. To increase the stealthiness of these triggers, recent backdoor attacks have utilized image transformation techniques, such as warping <cit.> and color quantization <cit.>, to create invisible and dynamic triggers. Beyond direct poisoning of training data, backdoor attacks can also implant hidden backdoors by altering model weights through transfer learning <cit.>. While the aforementioned works focus on spatial-based backdoor attacks, recent research has begun to explore trigger insertion in the frequency domain, aiming to further increase their imperceptibility <cit.>.
In response to the growing number of backdoor attacks, significant research efforts have been directed toward defense strategies, including detection-based defenses <cit.>, pruning-based defenses <cit.>, online defenses <cit.>, and GradCAM-based defenses <cit.>. Although these methods have proven effective against conventional backdoor attacks, they struggle against more sophisticated mechanisms, such as quantization-conditioned backdoor attacks <cit.> and non-spatial backdoors <cit.>. Moreover, when physical objects are used as trigger patterns, physical backdoors <cit.> can bypass existing detection methods and compromise the network. Motivated by the vulnerability of spatial backdoor attack against state-of-the-art defense methods <cit.>, this paper proposes an imperceptible, spatially-distributed backdoor trigger to address those challenges. Specifically, we introduce NoiseAttack, a novel backdoor attack method targeting image classification from a spatial perspective. An overview of the proposed attack is illustrated in Figure <ref>. In this approach, the power spectral density (PSD) of White Gaussian Noise (WGN) is employed as the trigger design pattern to subtly and invisibly incorporate the backdoor during the training phase. The proposed NoiseAttack, the first of its kind, is simple yet effective. The trigger, in the form of WGN, is embedded across all input samples of the provided image dataset, appearing imperceptible to the human eye with minimized the standard deviation of the WGN. NoiseAttack is designed to launch a sample-specific backdoor attack against an adversary-defined target label, indicating the poisoned model behaves maliciously only toward a pre-defined victim class, despite the globally applied WGN-based trigger pattern. Furthermore, our findings reveal that NoiseAttack can misclassify the victim class into multiple target labels, leading to a stealthy multi-targeted backdoor attack. In summary, the main contributions of this paper are as follows: * We propose NoiseAttack, a novel backdoor attack method that utilizes the power spectral density (PSD) of White Gaussian Noise (WGN) to achieve both evasiveness and robustness in a multi-targeted attack. * The proposed NoiseAttack is implemented by embedding WGN during the model training phase. The ubiquitously applied noise is activated only on a pre-defined specific sample. We carries out thorough theoretical analysis of the NoiseAttack. We further demonstrate the effectiveness and uniqueness of NoiseAttack by showing that the victim label can be misclassified into multiple target classes. * We conduct extensive experimental evaluations of our proposed NoiseAttack on four datasets and four model architectures, covering tasks in both image classification and object detection. The results demonstrate that NoiseAttack not only achieves high attack success rates but also effectively evades state-of-the-art detection methods. § RELATED WORKS Backdoor Attacks. Backdoor attacks are designed to embed a concealed 'backdoor' in deep neural networks (DNNs), undermining their integrity. The compromised model operates normally during regular use but generates an incorrect, attacker-specified output when a predetermined 'trigger' is included in the input. Arturo <cit.> was the first to provide theoretical evidence that a malicious payload can be concealed within a model. Subsequently, Liu et al. <cit.> demonstrated the first neural network Trojan attack by poisoning the training data. Gu et al. 
<cit.> demonstrated that backdoor could be inserted not only during model training but also during model fine-tuning by poisoning the hyperparameters. Many recent work has focused on stealthier backdoor attack through invisible and dynamic trigger designs <cit.>. <cit.> proposed a imperceptible backdoor trigger using image warping technique. <cit.> further optimized the backdoor design in the input space leading to more imperceptible trigger, while other approaches such as BppAttack <cit.> created backdoored samples using color quantization. Besides spatial domain backdoor attack, an uprising trend starts to explore the backdoor design in the frequency domain <cit.>. FIBA <cit.> creates triggers in the frequency domain by blending the low-frequency components of two images using fast Fourier transform (FFT) <cit.>. FTROJAN <cit.> first converts the clean images through UV or YUV color coding techniques, then applies discrete cosine transform with high frequency components to produce a poisoned images. Backdoor Defense. On the defense end, the first approach involves backdoor detection, which aims to identify backdoor within the DNN model and reconstruct the trigger present in the input. Wang et al. <cit.> introduced "Neural Cleanse," the pioneering work in detecting backdoor in a given DNN. It utilizes optimization techniques to discover a small trigger that causes any input with this trigger to be classified into a fixed class. Chen et al. <cit.> demonstrated that the detection process can be applied to black-box models. They employed conditional generators to produce potential trigger patterns and used anomaly detection to identify the backdoor patterns. Gao et al. <cit.> proposed a method to deliberately perturb the inputs and examined the entropy of the model predictions to detect backdoor. Their insight was that the model's output for a backdoored input remains unchanged even if it is perturbed. They also extended this approach to text and audio domains <cit.>. Azizi et al. <cit.> presented "T-Miner," a sequence-to-sequence generator that produces text sequences likely to contain backdoor triggers in the text domain. However, each of these approaches relies on specific assumptions about known types of backdoor, such as backdoor pattern size and insertion techniques. Consequently, they may not be effective in detecting new and unknown backdoor attacks. § METHODOLOGY §.§ Attack Model Attacker's Capabilities. In line with previous assumptions regarding data poisoning-based backdoor attacks <cit.>, the adversary in our proposed method has partial access to the training phase, including the datasets and training schedule, but lacks authorization to modify other training components such as the model architecture and loss function. At the deployment stage, the attacker possesses the ability to modify the input samples (e.g., applying WGN to the test input samples) of the outsourced poisoned models. Attacker's Objectives. The goal of an effective backdoor attack is to cause the outsourced model to make incorrect label predictions on poisoned input samples while maintaining its performance and accuracy on clean inputs. Specifically, our proposed NoiseAttack should be, and can only be, activated when WGN is applied to the input images. §.§ Problem Definition Consider an image classification function f_θ : X→Y, where the function is designed to map the input (i.e., training) data space to a set of labels. 
Here, θ represents the model's weights or hyperparameters, X is the input data space, and Y is the label space. Let the dataset be defined as D = {(x_i, y_i) : x_i ∈X, y_i ∈Y, i = 0,1,2,…,n}, and let Φ_c denote a clean model. Under normal conditions, θ should be optimized such that Φ_c(x_i) = y_i. In a traditional backdoor attack, there exists a trigger function τ and a target label y_t. The trigger function modifies the input data sample, resulting in τ(x_i) = x_i^t. The attacker then constructs a poisoned dataset D_p = {(x_i^t, y_t) : x_i^t ∈X, y_t ∈Y, i = 0,1,2,…,n} and fine-tunes the clean model Φ_c into a backdoored model Φ_b by optimizing the weights θ to θ_b. The backdoored model θ_b performs correctly on clean inputs but assigns the attacker-specified target label y_t to triggered inputs. This label flipping achieves the backdoor effect. In our proposed attack scenario, we design an attack that allows for a flexible number of target labels while remaining input-specific; only the victim class associated with the trigger is misclassified. Consider the input samples of the victim class as (x_i^v, y_i) ∈ (X, Y) for i = 0,1,2,…,n. Inspired by the tunable nature of noise signals, we design a trigger function using a White Gaussian Noise generator W, which produces noise with adjustable standard deviations W_i ∼𝒩(0, σ_i^2) for multiple targets. The hyperparameter space θ is optimized such that for each target label y_i^t, the conditions Φ_b(W_i(x_i^v)) = y_i^t and Φ_b(W_i(x_i)) = y_i hold true. §.§ Trigger Function White Gaussian Noise is a widely used statistical model and can be implemented in various image processing techniques. As a discrete-time signal, WGN can be expressed as a random vector whose components are statistically independent. The amplitude of the WGN is distributed over the Gaussian probability distribution with zero mean and variance (σ^2). Deep Neural Networks can be trained to distinguish different noises with different Power Spectral Density, and we took this opportunity to use WGN directly as a trigger for the foundation of our NoiseAttack. The Power Spectral Density of the WGN is the Fourier transform of the autocorrelation function, which can be expressed as: r[k]=E{w[n]w[n+k]}= σ^2 δ[k] δ[k] is delta function and E is the expectation operator. PSD for the WGN is constant over all frequencies and can be expressed by the following equation: P(f)= ∑_k=-∞^∞σ^2 δ[k]e^-j2 π fk = σ^2 From this equation, we can see that, for WGN, the PSD is directly proportional to the standard deviation (σ) of the noise. So, the standard deviation purely controls the strength of the WGN over the signals (i.e. images). In a muti-targeted attack scenario, designing separate triggers for each target is a complex task. The application of WGN gives us the flexibility to design any number of triggers by simply controlling the standard deviations of the noise. To further illustrate PSD effect on neural network model, suppose an input image has a resolution a × b. Let a WGN 𝐰∼𝒩(0, σ^2 𝐈_ab × ab) where t[n] = w[n] for n = 0, 1, 2, ..., ab-1. The trigger matrix 𝐗 can be defined as: 𝐗(σ)_a × b × cc=[ t[0].1_1 × cc t[a ].1_1 × cc ⋯ t[a-1].1_1 × cc t[2a-1].1_1 × cc ⋮ ⋱ ⋮ t[ab-a].1_1 × cc ⋯ t[ab-1].1_1 × cc ] Here, cc is the number of color channels of the input image. 
So the trigger function 𝐖 can be expressed as follows: 𝐖(𝐘_a × b × cc, σ_1 × p) = 𝐗(σ_i)_a × b × cc + 𝐘_a × b × cc for i = 0, 1, …, p-1 where Y is the image and p indicates the number of the target classes, and σ_1 × p= [σ_0 σ_1 σ_2 ⋯ σ_p-1 ]. §.§ Backdoor Training With the above analysis, our NoiseAttack adapts the conventional label-poisoning backdoor training process but modify it to achieve the sample-specific and muti-targeted attacks as shown in Figure <ref>. Here, we describe a formal training procedure to optimize the backdoored model's parameters and minimize the loss function. We can split the input data space X into two parts: victim class data space (X^V) and non-victim class data space (X^C). Similarly, we can split input label space Y into target label space (Y^T) and clean label space (Y^C). For a single victim class, p number of target classes, and s number of total samples in one class, we can construct the backdoor training dataset D_train^* as follows: D_train^clean≈ (x_i, y_i): x_i ∈ X, y_i ∈ Y D_train^victim≈ (W(x_i^v, σ_1 × p), y_i^t_j): x_i^v ∈ X^V, y_i^t_j∈ Y^T D_train^non-victim≈ (W(x_i^c, σ_1 × p), y_i): x_i^c ∈ X^C, y_i ∈ Y^C D_train^* = D_train^clean∪ D_train^victim∪ D_train^non-victim Here i = 1, 2, 4, ..., s, j = 1, 2, 4, ..., p and W is the trigger generator function. The training objective of the NoiseAttack can be expressed by the following equation: minℒ (D_train^clean, D_train^victim, D_train^non-victim, Φ_b) = ∑_x_i ∈ D_train^cleanℓ(Φ_b(x_i), y_i) + ∑_x_j ∈ D_train^victim∑_m=0^p-1ℓ(Φ_b(W(x_j, σ_1 × p(m))), y_t_1 × p(m)) + ∑_x_k ∈ D_train^non-victim∑_m=0^p-1ℓ(Φ_b(W(x_k, σ_1 × p(m))), y_k) In this equation Φ_b is the backdoored model and l is the cross-entropy loss function. An overview of the detailed poisoned dataset preparation is illustrated in Figure <ref> for one victim class (Class V) and two target classes (Class T_1 and T_2). One main advantage of the NoiseAttack backdoor training is that we can progressively poison the model to result in multiple targeted classes other than a single one simply by manipulating standard deviations of white Gaussian noise. Therefore, our poisoning equations <ref> and <ref> provide a theoretical foundation to generate a variety of attacking results depending on the adversary's needs, which are further addressed in Experimental Analysis. § EXPERIMENTAL ANALYSIS §.§ Experimental Setup Datasets, Models and Baselines. We evaluate NoiseAttack by carrying out the experiments through two main tasks: image classification and object detection. For image classification, we utilize three well-known datasets: CIFAR-10 <cit.>, MNIST <cit.>, and ImageNet <cit.>. CIFAR-10 and ImageNet are commonly used for general image recognition, while MNIST is specifically designed for handwritten digit recognition. To reduce computational time for ImageNet, we simply select 100 classes out of the original 1,000 classes. For object detection, we employ the common Microsoft COCO <cit.> dataset. Besides, we evaluate the performance of our attack on four deep neural network models: ResNet50 <cit.>, VGG16 <cit.>, and DenseNet <cit.> for classification as well as Yolo for object detection. Our proposed NoiseAttack is compared against three baseline attacks: BadNet <cit.>, Blend <cit.> and WaNet <cit.>. For better comparisons against relevant attacks, we use the same training strategy but design the NoiseAttack resulting only one poisoned target class. 
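For concreteness, the trigger function 𝐖 and the assembly of the poisoned training set D_train^* defined in the Methodology section can be sketched as follows. This is an illustrative NumPy implementation rather than the authors' released code; the function names, the assumption of 8-bit H×W×C image arrays, and the clipping behaviour are ours.

```python
import numpy as np

def wgn_trigger(image, sigma, rng=None):
    """Add white Gaussian noise with standard deviation sigma to one image.

    A single 2-D noise field t[n] is drawn and broadcast across the colour
    channels, mirroring the trigger matrix X(sigma) defined in the Methodology.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w, c = image.shape
    noise = rng.normal(0.0, sigma, size=(h, w))
    noise = np.repeat(noise[:, :, None], c, axis=2)
    return np.clip(image.astype(np.float32) + noise, 0, 255).astype(np.uint8)

def build_poisoned_set(images, labels, victim_class, target_labels, sigmas, rng=None):
    """Assemble D*_train = D_clean U D_victim U D_non-victim for one victim class.

    For each sigma_i, every image receives WGN with that standard deviation;
    victim-class images are relabelled to target_labels[i], all other images
    keep their original labels, and the clean set is retained as well.
    """
    rng = np.random.default_rng() if rng is None else rng
    xs, ys = list(images), list(labels)              # D_clean
    for sigma, target in zip(sigmas, target_labels):
        for x, y in zip(images, labels):
            xs.append(wgn_trigger(x, sigma, rng))
            ys.append(target if y == victim_class else y)
    return np.stack(xs), np.array(ys)
```

With, for example, sigmas set to [5, 10] and two target labels, this yields the two-target configuration used in the experiments below; training then proceeds with an ordinary cross-entropy objective, as in the training loss given above.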
Additionally, we implement three state-of-the-art defense methods, Grad-CAM <cit.>, STRIP <cit.>, and Neural Cleanse <cit.>, to evaluate the evasiveness and robustness of the proposed NoiseAttack. Evaluation Metrics. To evaluate the performance of our attack, we use four key metrics: Clean Accuracy (CA), Average Attack Success Rate (AASR), Average Confusion (AC), and Accuracy Excluding Victim Class (AEVC). A higher CA indicates greater backdoor stealthiness, as the attacked model behaves like a clean model when presented with clean inputs. Instead of using conventional ASR, We adapt the AASR for our attack performance evaluation to account for the multi-targeted attack. Consider G_X as an operator that adds White Gaussian Noise (WGN) to each pixel with a standard deviation of X. Suppose there is a victim class that becomes mislabeled under different noise conditions, while T_P is the target label which the attacker aims to achieve through WGN with standard deviation X. The same relationship applies to target label T_Q and standard deviation Y. Let Φ_b denote the backdoored model. Then, for each input x_i from victim class and total sample size S, the equations for AASR and AC for two target labels are defined as follows: AASR = ∑_i=1^sδ (Φ_b (G_X(x_i)), T_P) + ∑_i=1^sδ (Φ_b (G_Y(x_i)), T_Q)/2S AC = ∑_i=1^sδ (Φ_b (G_X(x_i)), T_Q) + ∑_i=1^sδ (Φ_b (G_Y(x_i)), T_P)/2S where δ(a,b) = 1 if a = b, and δ(a,b) = 0 if a ≠ b. A higher AASR indicates a more effective attack, while a lower AC suggests that the model experiences less confusion when predicting the target labels. A higher AEVC reflects the specificity of our attack to particular samples. §.§ Quantitative Analysis To demonstrate the effectiveness of our proposed NoiseAttack, we first evaluate CA, AASR, AC, and AEVC for two target labels across all three datasets and models. The parameter θ_train represents the standard deviations of the WGN used as triggers during fine-tuning. In this experiment, two standard deviations are employed for targeting two labels. For instance, in the CIFAR-10 dataset, the victim class is `airplane', with `bird' and `cat' as the target labels. Specifically, the standard deviation of `bird' target label is set to 5, while it is set to 10 for `cat' target label. Attack Effectiveness. As presented in Tabel <ref>, it is evident that NoiseAttack maintains high CAs across all datasets and models. The larger number of classes and higher image resolution of ImageNet likely attribute the slightly lower clean accuracy. Nevertheless, the consistent high AASRs across all experiments demonstrate the effectiveness of our NoiseAttack. Besides, the low AC values indicate that the backdoored models exhibit less confusion when predicting between the target labels. The AEVC values are also very close to the CA in all tests, implying that the models regard WGN as the trigger only when it is associated with images from the victim class. Therefore, it proves that NoiseAttack is both sample-specific and multi-targeted. We further observe that the highest ASR for the target label can be achieved at a standard deviation different from θ_train. The θ_test in Table <ref> are the testing standard deviation that yields the highest ASRs for the individual targets. We illustrate such phenomenon in Figure <ref>, where higher standard deviation θ_test can achieve higher ASR compared to original training θ_train. Attack on Multiple Victims. 
We extend our experiment to explore more victim classes with various training standard deviations θ_train and poisoning ratios P. We use CIFAR-10 dataset and VGG-16 architecture for this evaluation. As listed in Table <ref>, we can observe that when the training standard deviations are close to each other, the AASR tends to be slightly lower. As expected, AASR gradually increases with a higher poisoning ratio P, although CA remains relatively stable regardless of the larger poison rate. The results are consistent for both victim classes ('Airplane' and 'Truck'). Multi-Targeted Attack. Given NoiseAttack has ability to result in multi-targeted attack, we further evaluate the effectiveness shown in Table <ref>. We poison the victim class to a number of target labels N ranging from one to four. This experiment was conducted on the CIFAR-10 dataset using the ResNet-50 model. We can observe that NoiseAttack achieves high AASR for N varying from one to three. However, when fourth targets are used, the AASR decreases considerably. As the number of targets increases, more standard deviations are required, leading to closer values between them, which may negatively impact the AASR. The phenomenon can consistently be seen in the AC evaluation. §.§ Comparison with Prior Backdoor Attacks We also compare our NoiseAttack with state-of-the-art backdoor attacks (`BadNet' <cit.>, `Blend' <cit.> and `WaNet' <cit.>) as shown in Table <ref>. The experiment is conducted on the CIFAR-10 dataset using the ResNet-50 model with poison ratio of 10%. While the baseline attacks are designed sample-specific, we adjust our training strategy for the referenced attacks such that we could have a fair comparison. The results show that NoiseAttack achieves the highest AASR against all the relevant attack methods as well as the highest clean accuracy. We demonstrate that our proposed NoiseAttack can outperform the referenced work. §.§ Robustness to Defense Methods In order to demonstrate the evasiveness and robustness of our proposed method, we test NoiseAttack against three state-of-the-art defense methods: GradCam <cit.>, Neural Cleanse <cit.> and STRIP <cit.>. GradCam generates a heat map on the input image, highlighting the regions that are most influential in the model's decision-making process. As shown in Figure <ref>, we can observe that GradCam visualizations of poisoned input images remain almost unchanged with similar highlighting heat areas compared to clean images. Considering the spatially-distributed trigger design, NoiseAttack can effectively work around the GradCam. Neural Cleanse attempts to reverse-engineer the trigger from a triggered image. In Figure <ref>, we display the reconstructed triggers of our attack using Neural Cleanse. Since the noise is distributed across the entire image rather than being confined to a specific small area, Neural Cleanse struggles to effectively reconstruct the triggers, demonstrating its limited effectiveness against our attack. STRIP works by superimposing various images and patterns onto the input image and measuring entropy based on the randomness of the model's predictions. For instance, if an image exhibits low entropy, it is suspected to be malicious. Figure <ref> presents the entropy values of STRIP comparing clean inputs with inputs containing triggers. The results show negligible differences in entropy for both clean and poisoned input samples, indicating that NoiseAttack is robust against STRIP. 
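Before moving to object detection, the AASR and AC scores reported throughout this section can be computed as in the short sketch below. This is our illustration, not the authors' evaluation script; `model` stands for any classifier that returns a single predicted label, and clipping of the noisy inputs is omitted for brevity.

```python
import numpy as np

def aasr_and_ac(model, victim_images, sigma_to_target, rng=None):
    """Average Attack Success Rate (AASR) and Average Confusion (AC).

    sigma_to_target maps each trigger standard deviation to its intended target
    label, e.g. {5: T_P, 10: T_Q}; victim_images are clean samples of the victim
    class, and model(x) returns a predicted label.
    """
    rng = np.random.default_rng() if rng is None else rng
    targets = list(sigma_to_target.values())
    hits, confusions, total = 0, 0, 0
    for sigma, intended in sigma_to_target.items():
        others = [t for t in targets if t != intended]
        for x in victim_images:
            noisy = x + rng.normal(0.0, sigma, size=x.shape)   # G_X(x_i)
            pred = model(noisy)
            hits += int(pred == intended)        # contributes to AASR
            confusions += int(pred in others)    # contributes to AC
            total += 1
    return hits / total, confusions / total
```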
§.§ Effectiveness in Object Detection Models We further extend our experiments to visual object detection models. The results for the YOLOv5 (medium version) model on the MS-COCO dataset are presented in Table <ref>. For these experiments, we selected 20 classes from the MS-COCO dataset. Here, θ_1 and θ_2 represent the training standard deviations. NoiseAttack achieves consistently high AASR across all cases, demonstrating its effectiveness in object detection tasks. Figure <ref> shows a sample from the MS-COCO dataset, illustrating NoiseAttack in the object detection task. § CONCLUSION In this paper, we demonstrate that an adversary can execute a highly effective sample-specific multi-targeted backdoor attack by leveraging the power spectral density of White Gaussian Noise as a trigger. Detailed theoretical analysis further formalizes the feasibility and ubiquity of our proposed NoiseAttack. Extensive experiments show that NoiseAttack achieves high average attack success rates (AASRs) across four datasets and four models in both image classification and object detection, while maintaining comparable clean accuracy for non-victim classes. NoiseAttack also proves its evasiveness and robustness by bypassing state-of-the-art detection and defense techniques. We believe this novel backdoor attack paradigm offers a new realm of backdoor attacks and motivates further defense research.
http://arxiv.org/abs/2409.02375v1
20240904015137
How Privacy-Savvy Are Large Language Models? A Case Study on Compliance and Privacy Technical Review
[ "Xichou Zhu", "Yang Liu", "Zhou Shen", "Yi Liu", "Min Li", "Yujun Chen", "Benzi John", "Zhenzhen Ma", "Tao Hu", "Bolong Yang", "Manman Wang", "Zongxing Xie", "Peng Liu", "Dan Cai", "Junhui Wang" ]
cs.CL
[ "cs.CL" ]
How Privacy-Savvy Are Large Language Models? A Case Study on Compliance and Privacy Technical Review ============================================================================= § ABSTRACT The recent advances in large language models (LLMs) have significantly expanded their applications across various fields such as language generation, summarization, and complex question answering. However, their application to privacy compliance and technical privacy reviews remains under-explored, raising critical concerns about their ability to adhere to global privacy standards and protect sensitive user data. This paper seeks to address this gap by providing a comprehensive case study evaluating LLMs' performance in privacy-related tasks such as privacy information extraction (PIE), legal and regulatory key point detection (KPD), and question answering (QA) with respect to privacy policies and data protection regulations. We introduce a Privacy Technical Review (PTR) framework, highlighting its role in mitigating privacy risks during the software development lifecycle. Through an empirical assessment, we investigate the capacity of several prominent LLMs, including BERT, GPT-3.5, GPT-4, and custom models, in executing privacy compliance checks and technical privacy reviews. Our experiments benchmark the models across multiple dimensions, focusing on their precision, recall, and F1-scores in extracting privacy-sensitive information and detecting key regulatory compliance points. While LLMs show promise in automating privacy reviews and identifying regulatory discrepancies, significant gaps persist in their ability to fully comply with evolving legal standards. We provide actionable recommendations for enhancing LLMs' capabilities in privacy compliance, emphasizing the need for robust model improvements and better integration with legal and regulatory requirements. This study underscores the growing importance of developing privacy-aware LLMs that can both support businesses in compliance efforts and safeguard user privacy rights. § INTRODUCTION Large language models (LLMs) <cit.> have become increasingly influential in various sectors due to their proficiency in tasks such as language generation, summarization, and question answering. With advancements in natural language processing, LLMs are now being integrated into a wide range of applications <cit.>. However, their application in the domain of privacy compliance and technical privacy reviews has not been thoroughly examined. Moreover, the increasing reliance on LLMs in various applications has raised important questions about their privacy awareness <cit.>. As these models are deployed across sensitive industries <cit.>—such as healthcare <cit.>, finance <cit.>, and legal services <cit.>—ensuring their compliance with privacy regulations has become essential. Given the growing concerns about data privacy and security, ensuring that LLMs can effectively handle privacy-sensitive tasks is crucial. Regulatory frameworks such as the General Data Protection Regulation and the California Consumer Privacy Act have set stringent requirements for data protection, and as these models are applied to sensitive domains, their ability to comply with such regulations must be scrutinized. This paper aims to bridge this gap by conducting a case study that evaluates the performance of LLMs in privacy-related tasks.
Specifically, we assess how well these models perform in entity extraction, text classification, and question answering within the context of privacy compliance checks and technical privacy reviews. These tasks are integral to understanding whether LLMs can recognize and adhere to complex legal and technical privacy standards. By investigating the strengths and limitations of LLMs in these specific areas, we hope to provide a clearer picture of their current capabilities in privacy compliance. Furthermore, our study will offer recommendations for improving LLMs to better align with the evolving demands of privacy regulations and technical requirements in the future. Our contributions are summarized as follows: * Introduction of Privacy Technical Review (PTR) Process. We outline a detailed workflow for Privacy Technical Review (PTR), emphasizing its importance in identifying and mitigating privacy risks during the software development lifecycle. The PTR framework is illustrated as a comprehensive method to ensure compliance with privacy regulations. * Evaluation of LLMs in Privacy Compliance and Technical Privacy Reviews. We present an empirical evaluation of LLMs in the context of privacy compliance and technical privacy reviews. The study specifically focuses on tasks such as privacy information extraction (PIE), key point detection (KPD) related to legal and regulatory issues, and question answering (QA) regarding privacy policies. * Experimental Benchmarking of Models. The study benchmarks various LLMs, including BERT, GPT, and custom models, across the PIE, KPD, and QA tasks, providing insights into their precision, recall, and F1 scores, and highlighting the strengths and weaknesses of each model in privacy-related tasks. § RELATED WORKS LLMs development The development of Large Language Models (LLMs) marks a pivotal advancement in artificial intelligence <cit.>, particularly in the domain of natural language processing. Building on deep learning techniques, especially transformer architectures like GPT <cit.> and BERT <cit.>, LLMs have leveraged massive datasets and billions of parameters to achieve unprecedented capabilities in language understanding <cit.> and generation <cit.>. These models excel in tasks ranging from text generation <cit.> and translation to code creation and summarization <cit.>, and their scalability has enabled them to demonstrate remarkable proficiency across diverse applications. However, as LLMs continue to expand in capability, ethical concerns such as bias <cit.> and privacy issues <cit.> have come to the forefront, prompting a concerted effort toward more responsible AI development. This evolving landscape underscores the transformative potential of LLMs in both digital and real-world contexts while emphasizing the need for ongoing research in their ethical deployment. LLMs and Privacy Recent advancements in Large Language Models (LLMs) have spurred significant interest in their privacy and legal implications <cit.>. Previous works have explored the vulnerabilities <cit.> of LLMs to privacy risks <cit.>, such as data leakage <cit.>, model inversion attacks <cit.>, and membership inference <cit.>, which can expose sensitive information. To address these concerns, researchers have focused on improving model architectures, applying differential privacy techniques, and implementing access control mechanisms. Moreover, legal frameworks <cit.> such as the GDPR and CCPA have prompted studies on the compliance of LLMs with data protection regulations. 
These works highlight the growing importance of integrating technical solutions with legal obligations to ensure that LLMs not only perform effectively but also respect user privacy and align with regulatory standards. § BACKGROUND §.§ Compliance Data compliance refers to the adherence to laws, regulations, and guidelines that govern the collection, storage, processing, and sharing of data. It involves ensuring that an organization's data handling practices align with legal requirements such as data protection laws (e.g., GDPR, CCPA) and industry standards. The goal of data compliance is to protect personal and sensitive information, ensure privacy, and mitigate risks associated with data breaches and misuse. Compliance requires organizations to implement proper policies, procedures, and technical measures to manage data responsibly and ethically. §.§ Privacy Technical Review Privacy Technical Review (PTR) is a systematic technical assessment process aimed at identifying privacy risks that a company may have failed to effectively address during the design and implementation of its products or services. This evaluation is based on legal requirements and legal compliance assessments from a technical perspective. PTR is conducted to verify whether a product complies with relevant privacy regulations before and shortly after its launch. By performing PTR, product development teams can be guided to correctly understand and follow Privacy by Design principles, comprehend the privacy implications of their data processing activities, and identify and mitigate potential privacy risks. Ultimately, PTR enhances the level of user privacy protection and safeguards user privacy rights. PTR aims to identify and mitigate potential privacy risks in products and systems before they go live. It helps business teams discover these risks and resolve them within a closed-loop system prior to product launch. The core activities of PTR include reviewing requirement documents, technical solutions, code, and traffic. These reviews are based on global privacy and data protection laws, privacy design principles, relevant company systems, and legal opinions. PTR is a key method for implementing PDPS (Personal Data Protection System) and PLR (Privacy Legal Review). Through cross-functional collaboration, PTR ensures the effective enforcement of privacy and data protection requirements while enhancing business compliance. The significance of involving large models in PTR lies in their ability to efficiently handle complex and large-scale privacy review tasks. With their advanced natural language processing capabilities, large models can assist in identifying policies, specifications, and privacy risks, thus supporting business decision-making. These models can propose technical solutions based on upstream legal and business requirements, and assess whether these solutions effectively reduce privacy risks. Furthermore, through automation and intelligent processing, large models can alleviate the burden of PTR, reducing complexity and time consumption, making the privacy review process more efficient for businesses. §.§ PTR in Business In business, the Privacy Technical Review (PTR) plays a critical role in ensuring privacy compliance throughout the software development lifecycle (SDLC). As seen in the workflow diagram, PTR has expanded from the design phase to all stages of the SDLC, encompassing initial product design, technical solutions, implementation, and eventual updates or removal. 
This comprehensive approach allows for continuous privacy assessments, from conception to decommissioning, ensuring that all changes, updates, and optimizations are thoroughly evaluated for privacy risks. The PTR iteration process further highlights its significance, as it inputs key artifacts such as PRDs, ERDs, and legal tickets. Although the product may not be fully developed, PTR focuses on detecting potential deficiencies or risks in technical solutions that could compromise privacy. By following check methods that include general and specific checklists, the process filters, checks, updates, and summarizes findings to empower the team to address specific privacy concerns in a structured and methodical manner. § EVALUATION TASKS Privacy Information Extraction (PIE) This task focuses on extracting privacy-related information from protocol texts. The open-source test set consists of approximately 8,800 sentences. The annotation follows the BIOE labeling scheme, which is commonly used in Named Entity Recognition (NER) tasks. The goal is to identify the types of privacy data collected as declared in the text. The tag-to-index mapping is defined as O:0, B:1, I:3, E:2. Key Point Detection in Legal and Regulatory (KPD) The objective of this task is to detect key legal regulatory points. The dataset contains 10 key legal regulatory aspects, similar in scale to the Privacy Information Extraction dataset. These key points include: * ExceedLimit: Handling of personal information beyond its retention period. * StorageRegion: Information on the location of personal data storage. * StorageTime: Duration for which personal information is stored. * Aging: Timeliness of the privacy policy. * Query: Description of the right to query personal information (User Rights Protection). * Correct: Description of the right to correct personal information (User Rights Protection). * Delete: Description of the right to delete personal information (User Rights Protection). * Logout: Disposal of personal information upon account deletion (User Rights Protection). * SDK: Information on the usage of SDKs. * Repeal: Explanation of the method to revoke authorization (User Rights Protection). Question and Answer (QA) This task evaluates reading comprehension ability and follows the format of Extractive Machine Reading Comprehension (MRC) datasets. Given a combination of a protocol text and a query, the model must locate the appropriate answer within the protocol text. This dataset comprises approximately 2,300 protocol texts, each associated with one query. The key characteristic of this dataset is that the answer appears only once in the text and occupies a single continuous span of text. §.§ Evaluation Metrics PIE Metrics The evaluation for PIE uses the BIOE Macro-Averaging Metrics. The labels (B, I, O, E) are evaluated separately, with precision, recall, and F1-score calculated for each label. The final scores are obtained by averaging the precision, recall, and F1-scores across the four labels, resulting in Macro-Averaging Precision, Recall, and F1-Score. KPD Metrics The evaluation for KPD involves calculating: * Precision. Precision is computed for each label individually, and then the average across all labels is taken. * Recall. Recall is computed for each label individually, and then the average across all labels is taken. * F1-Score. The F1-score is calculated for each label individually, and then the average across all labels is taken. QA Metrics The evaluation for QA involves calculating: * ROUGE-L. 
This measures the fuzzy matching between the predicted answer and the ground truth answer at the word level. It does not treat the answers as "bags of words," but instead calculates the Longest Common Subsequence (LCS) between them. Based on this, precision (P), recall (R), and F1-score (F1) are computed. * Exact Match (EM). This is a binary metric that gives a score of 1 if the predicted answer exactly matches the ground truth answer, and 0 otherwise. * Re-85. This is a fuzzy matching metric where a prediction is considered correct if the fuzzy matching score is greater than 85. These metrics provide a comprehensive evaluation of each task by considering different aspects of accuracy, recall, and fuzzy matching. § EXPERIMENTS §.§ Dataset The datasets[<https://github.com/alipay/ComBERT>] used in this study encompass a range of text extraction and comprehension tasks across different domains. Here we give a brief introduction for these datasets: * Privacy Information Extraction Dataset. This dataset comprises approximately 8,800 sentences extracted from privacy policies and agreement texts. Each sentence is annotated using a BIOE tagging scheme to identify personal data types, such as names, gender, ID numbers, and transaction information. The annotations provide detailed information regarding the specific personal data mentioned in the text. * Legal and Regulatory Key Point Detection Dataset. This dataset focuses on identifying key legal and regulatory requirements within text. It defines 10 key legal concepts, including data retention periods, methods for deleting personal data, and the validity of privacy policies. Each sentence in the dataset is labeled with a binary value indicating whether it contains one or more of these key legal points. * Domain-Specific Question Answering Dataset. This dataset contains roughly 2,300 passages from agreement texts, each paired with a query. The task is to locate a contiguous span of text within the passage that directly answers the query. The dataset is well-suited for evaluating models in domain-specific question answering and information retrieval scenarios. These datasets provide a comprehensive foundation for evaluating text extraction and comprehension capabilities across diverse legal and privacy-related domains. §.§ Baselines Here we choose several LLMs as the baselines including GPT-3.5-turbo, GPT-4, GPT-4o[<https://platform.openai.com/docs/models>], Mistral_7b[<https://mistral.ai/news/announcing-mistral-7b/>], gemini-1.5-flash[<https://deepmind.google/technologies/gemini/flash/>], moonshot_8k_v1[<https://platform.moonshot.cn/>], doubao and doubao-pro[<https://www.volcengine.com/>]. §.§ Results and Discussions Table <ref> shows the performance of various models on PIE. In the PIE task, large language models, particularly those from the GPT series, demonstrate outstanding performance. For instance, GPT-3.5-turbo achieves an F1-score of 98.4, while GPT-4 and GPT-4o further improve this to 99.0 and 99.6, respectively. The gemini-1.5-flash model exhibits near-perfect performance with a Precision of 99.8 and Recall of 99.9, leading to an F1-score of 99.8. These results suggest that the latest generation of large models is capable of near-optimal accuracy across different metrics. In contrast, traditional baseline models such as BERT-base, roberta-wwm-ext-large, and ComBERT lag behind significantly, with the best-performing traditional model, ComBERT, reaching an F1-score of 91.8. 
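For reference, the BIOE Macro-Averaging protocol behind these PIE scores can be reproduced with a short sketch. This is an illustrative implementation rather than the benchmark's official scorer, and the flat, position-aligned tag sequences are our assumption.

```python
def bioe_macro_scores(gold_tags, pred_tags, labels=("B", "I", "O", "E")):
    """Per-label precision/recall/F1 for the BIOE tags, then macro-averaged.

    gold_tags and pred_tags are flat, position-aligned sequences of tag strings.
    """
    per_label = {}
    for lab in labels:
        tp = sum(g == lab and p == lab for g, p in zip(gold_tags, pred_tags))
        fp = sum(g != lab and p == lab for g, p in zip(gold_tags, pred_tags))
        fn = sum(g == lab and p != lab for g, p in zip(gold_tags, pred_tags))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        per_label[lab] = (prec, rec, f1)
    macro = tuple(sum(v[i] for v in per_label.values()) / len(labels) for i in range(3))
    return per_label, macro   # macro = (Macro-P, Macro-R, Macro-F1)
```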
This gap highlights the dramatic advancements made by large models in natural language processing tasks like PIE. Table <ref> shows the performance of various models on KPD task. Similarly, in the KPD task, large language models again outperform traditional baselines by a significant margin. GPT-4o stands out with an F1-score of 94.5, slightly ahead of GPT-4, which scores 94.8 in this task. While the Mistral_7b model shows a slight decline in performance compared to its counterparts, achieving an F1-score of 93.5, models like gemini-1.5-flash maintain high effectiveness with an F1-score of 94.4. It's noteworthy that even models like moonshot_8k_v1 and Doubao-pro manage to deliver competitive results, with F1-scores of 94.2 and 92.0, respectively. These findings further confirm the dominance of advanced models in complex NLP tasks, as they consistently surpass traditional models like BERT and ComBERT, which serve more as benchmarks rather than true competitors in these tasks. Table <ref> shows the performance of various models on a QA task. Overall, ComBERT stands out with strong performance across all metrics, particularly in F1-score (67.5) and RE_85 (96.9), demonstrating good balance. Doubao and Doubao-pro lead in recall (100.0) and F1-score (95.4 and 73.6). In contrast, the GPT series models excel in recall but fall slightly behind in precision and EM. Pretrained model alicemind and SiameseUIE perform the worst, struggling to adapt to this task. § CONCLUSION This paper provides a critical analysis of LLMs' performance in privacy-sensitive domains, highlighting both their potential and limitations in adhering to privacy standards. The findings suggest that while LLMs can assist in privacy reviews, further improvements are needed to fully align them with regulatory and technical privacy requirements. Recommendations for enhancing LLM capabilities are also provided, ensuring they can better meet the evolving demands of privacy compliance.
http://arxiv.org/abs/2409.03396v1
20240905101435
A Molecular Communication Perspective of Alzheimer's Disease: Impact of Amyloid Beta Oligomers on Glutamate Diffusion in the Synaptic Cleft
[ "Nayereh FallahBagheri", "Ozgur B. Akan" ]
q-bio.MN
[ "q-bio.MN" ]
A Molecular Communication Perspective of Alzheimer's Disease: Impact of Amyloid Beta Oligomers on Glutamate Diffusion in the Synaptic Cleft Nayereh FallahBagheri, Student Member, IEEE and Özgür B. Akan, Fellow, IEEE The authors are with the Center for neXt-generation Communications (CXC), Department of Electrical and Electronics Engineering, Koç University, Istanbul 34450, Turkey (e-mail: {nbagheri23, akan}@ku.edu.tr). O. B. Akan is also with the Internet of Everything (IoE) Group, Electrical Engineering Division, Department of Engineering, University of Cambridge, Cambridge CB3 0FA, UK (email: [email protected]). This work was supported in part by the AXA Research Fund (AXA Chair for Internet of Everything at Koç University). ================================================================================== § ABSTRACT Molecular communication (MC) within the synaptic cleft is vital for neurotransmitter diffusion, a process critical to cognitive functions. In Alzheimer's Disease (AD), beta-amyloid oligomers (Aβos) disrupt this communication, leading to synaptic dysfunction. This paper investigates the molecular interactions between glutamate, a key neurotransmitter, and Aβos within the synaptic cleft, aiming to elucidate the underlying mechanisms of this disruption. Through stochastic modeling, we simulate the dynamics of Aβos and their impact on glutamate diffusion. The findings, validated by comparing simulated results with existing experimental data, demonstrate that Aβos serve as physical obstacles, hindering glutamate movement and increasing collision frequency. The resulting impairment of synaptic transmission and long-term potentiation (LTP), caused by Aβos binding to receptors on the postsynaptic membrane, is further validated against known molecular interaction behaviors observed in similar neurodegenerative contexts. The study also explores potential therapeutic strategies to mitigate these disruptions. By enhancing our understanding of these molecular interactions, this research contributes to the development of more effective treatments for AD, with the ultimate goal of alleviating synaptic impairments associated with the disease. Alzheimer's Disease, Amyloid β Oligomers, Glutamate Diffusion, Synaptic Cleft, Stochastic Modeling, Neurodegenerative Disorders, Molecular Communication, Synaptic Transmission § INTRODUCTION Alzheimer's disease (AD) is a prevalent neurodegenerative disorder affecting over 32 million individuals globally <cit.>. This condition is marked by memory loss, cognitive decline, behavioral changes, and social challenges <cit.>.
A hallmark of AD pathology is the presence of beta-amyloid oligomers (Aβos), which trigger various toxic pathways leading to neuronal damage and widespread neural inflammation. These oligomers negatively impact synaptic functions and memory processes, contribute to abnormal tau protein modifications, and activate microglial cells <cit.>. The dysregulation of glutamatergic neural circuits is believed to play a critical role in the neurobiological foundation of AD <cit.>. AD progression is notably characterized by the degeneration of large glutamatergic pyramidal neurons <cit.>, <cit.>. Aβos disrupt glutamate transporters <cit.>, alter activity-dependent glutamate release <cit.>, and decrease the surface expression of synaptic N-Methyl-D-Aspartate receptors (NMDARs) <cit.>, which are essential for physiological neurotransmission <cit.>. Glutamate, the brain's primary excitatory neurotransmitter, plays a vital role in cognitive functions such as learning and memory <cit.>. Its concentration in the brain varies between 5 and 15 millimoles per kilogram of brain tissue, depending on the region, making it the most abundant amino acid in the brain <cit.>. Neurodegenerative and psychiatric disorders, including AD, are often linked to neuronal death caused by glutamate excitotoxicity or an imbalance between excitatory and inhibitory neuronal activities. This imbalance, particularly the abnormal ratio of excitatory/inhibitory inputs, is suggested to be a fundamental factor in neuropsychiatric conditions such as autism <cit.>, obsessive-compulsive disorder <cit.>, and schizophrenia <cit.>. The dynamic and heterogeneous nature of Aβos complicates therapeutic targeting, as highlighted by <cit.>. Despite extensive research, the transition from non-fibrillar to fibrillar oligomers remains poorly understood, indicating a significant gap in our knowledge of the biochemical mechanisms involved. Additionally, while various neurotoxic mechanisms of Aβos have been proposed, their exact contributions to AD pathology are still unclear, necessitating further investigation. Current studies emphasize the pivotal role of Aβos in AD pathogenesis, particularly their strong correlation with neuronal loss and synaptic dysfunction, even more so than amyloid plaques. This understanding is essential for exploring how Aβos disrupt glutamate diffusion and synaptic communication. However, clinical detection and quantification of Aβos are challenging due to the lack of standardized, high-sensitivity assays, leading to inconsistent results across studies. This variability may be attributed to the heterogeneity of Aβos and the diverse assay systems used. Moreover, the precise chemical composition and size-dependent properties of Aβos, likely critical to their role in AD, require further clarification <cit.>. Research indicates that Aβos specifically bind to NMDARs containing GluN2B subunits and metabotropic glutamate receptors, causing synaptic disruptions. However, the exact receptors mediating the neurotoxicity of soluble Aβos are not well-defined, posing challenges for developing targeted therapeutic strategies for AD. Furthermore, it remains unclear whether Aβ protofibrils interact with specific NMDARs and mGluR1 or if Aβ monomers engage with these receptors at all, highlighting another critical area for future research <cit.>. This study builds on these findings by employing a stochastic differential equation framework to model Aβo-glutamate interactions. 
By incorporating variable oligomer sizes and simulating their impact on glutamate diffusion within a realistic synaptic environment, we aim to address these limitations and provide new insights into the mechanisms underlying synaptic dysfunction in AD. The rest of this paper is structured as follows: In Sec. <ref>, the synaptic cleft and MC are introduced. The system is mathematically modeled in Sec. <ref>. In Sec. <ref>, the interactions between Aβos and glutamate, along with their impacts, are discussed. The final results are detailed in Sec. <ref>, and finally future directions and conclusions are outlined in Sec. <ref>. § THE SYNAPTIC CLEFT AND MOLECULAR COMMUNICATION The brain's myriad cells fall into two primary categories: information-processing neurons and the supportive glial cells, as illustrated in Fig. <ref>. Both are abundant and intricately designed, tightly arranged with tiny spaces between them. These intercellular gaps together form the brain's extracellular matrix (ECM), taking up roughly one-fifth of the brain's total volume <cit.>. Neuronal synapses are critical junctions between neurons, facilitating the transmission of signals. These synapses consist of presynaptic and postsynaptic terminals, with the synaptic cleft forming the extracellular space between these adjacent cell membranes <cit.>, as illustrated in Fig. <ref>. §.§ Structural Overview of the Synaptic Cleft The synaptic cleft is an essential component of neuronal communication, enabling the diffusion of neurotransmitters such as glutamate <cit.>. This region is heterogeneous <cit.>, with dimensions typically ranging from 0.3 to 0.5 μm in width and 1 to 1.5 μm in height <cit.>. In this study, we model the synaptic cleft as a cylindrical volume, as depicted in Fig. <ref>, within which neurotransmitter diffusion occurs <cit.>. §.§ Molecular Communication Dynamics in the Synaptic Cleft Molecular communication (MC) represents an advanced approach focused on the transmission of information via the exchange of molecules <cit.>. A key paradigm within MC is neuro-spike communication, which pertains to the signaling processes among neurons <cit.>. Within the synaptic cleft, the release of vesicles leads to interactions between glutamate and Aβos, with Aβos serving as diffusion obstacles. Research has shown that synaptic signaling is significantly disrupted in AD due to the presence of Aβos; however, the precise molecular mechanisms underlying this disruption remain only partially understood <cit.>. Exploring MC within this synaptic environment could provide valuable insights into potential therapeutic strategies. Glutamate plays a critical role as a neurotransmitter in neuronal signaling. Its diffusion within the synaptic cleft, as well as the presence of obstacles along its path, directly influences the outcome of signaling—whether beneficial or pathological. Understanding how glutamate moves through the synaptic cleft and identifying any barriers it encounters during diffusion is essential for unraveling the mechanisms underlying various neuronal disorders. Glutamate is stored within vesicles and released into the synaptic cleft upon neuronal excitation <cit.>. 
Based on Fick's law of diffusion and the cylindrical model of the synaptic cleft presented in <cit.>, the normal diffusion of glutamate within the synaptic cleft is described as: ∂ U(t, h, w)/∂ t = D Δ U(t, h, w), where D represents the diffusion coefficient, and Δ denotes the Laplace operator, indicating how the concentration U changes over time and space due to diffusion. The variable h denotes the distance from the cylinder axis to a point within the cleft, with a range of 0 ≤ h ≤ H. Similarly, w represents the distance along the cylinder axis from the presynaptic membrane to a point within the system, with a range of 0 ≤ w ≤ W. The normal diffusion and concentration of glutamate are illustrated in Fig. <ref>. However, obstacles significantly impact the diffusion process, with their characteristics, positioning, and movement playing crucial roles. The subsequent sections will explore these aspects in greater detail. § SYSTEM MODEL AND PROBLEM FORMULATION In this section, we first describe the characteristics of Aβos, followed by an analysis of their stochastic movement, using experimental data summarized in Table <ref>. §.§ Beta-Amyloid Oligomers Characteristics Aβos are widely recognized as a critical factor in the progression of AD, contributing to neuronal damage and cognitive decline <cit.>. In the brains of individuals affected by AD, both fibrillar and prefibrillar forms of amyloid β_1-42 oligomers have been observed <cit.>. Prefibrillar oligomers exhibit greater mobility compared to their fibrillar counterparts. Despite being formed from different peptides, Aβos share similar structural characteristics and display uniform diffusive behavior on the surface of living cells <cit.>. Aβos interact with various ions and proteins, leading to random forces that induce their stochastic movement and aggregation. Consequently, the presence of Aβos in the synaptic cleft is inherently stochastic. This observation prompts the use of stochastic methods to model their presence effectively. §.§ Stochastic Modeling of amyloid-beta Stochastic differential equations (SDEs) incorporate both stochastic (random) and deterministic components, as represented in (<ref>), where the functions b and a denote the stochastic and deterministic components, respectively. This distinguishes SDEs from ordinary differential equations (ODEs), which are purely deterministic. SDEs are particularly well-suited for systems influenced by random factors. The standard SDE formulation for modeling such systems is given as follows <cit.>: dY = a(Y_t, t)dt + b(Y_t, t)dW_t. In addition, the SDE framework is extendable. Given the cylindrical symmetry of the synaptic cleft, a two-dimensional approach is necessary to model the stochastic presence of Aβos. Therefore, we propose the following formulation: dX = [ a(X_t, Y_t, t)dt + b_11(X_t, Y_t, t)dW_t . . + b_12(X_t, Y_t, t)dZ_t ], dY = [ a(X_t, Y_t, t)dt + b_21(X_t, Y_t, t)dW_t . . + b_22(X_t, Y_t, t)dZ_t ], where X_t and Y_t represent the components of a two-dimensional stochastic process, with dW_t and dZ_t as independent Wiener processes. The function a(X_t, Y_t, t) denotes the drift function that influences both the X and Y axes. The coefficients b_11 and b_22 are the diffusion coefficients along the X and Y axes, respectively. The coefficient b_12 represents the diffusion along the X axis influenced by the Wiener process dZ_t, while b_21 accounts for the diffusion along the Y axis influenced by the Wiener process dW_t. 
Together, these functions and coefficients define the dynamics of the stochastic process. The first step in modeling the drift and diffusion functions is to monitor the movement of Aβos. In this context, the mean square displacement (MSD) function is an effective tool for tracking the motion of atoms within large and complex structures <cit.>. MSD can be utilized to model the diffusion matrix in a SDE for Aβos diffusion, as it is directly related to the diffusion coefficient (D), a critical component of the diffusion matrix. Analysis of MSD and the initial D have been reported in the literature <cit.>. According to <cit.>, MSD is determined as: MSD(n.dt) = 1/N-n∑_i=1^N-n[ (X_i+n - X_i)^2 + (Y_i+n - Y_i)^2 ], where X_i and Y_i denote a particle's coordinates in the i_th frame, dt is time intervals, N is the trajectory's total frame number, and n is time lag. This technique is crucial for analyzing particles' side-to-side movements, offering insights into both immediate (early D) and extended (movement types) behavior patterns. As the result of MSD calculation for both prefibrillar and fibrillar oligomers using (<ref>) in <cit.>, we have prefibrillar oligomers D as 4.0 × 10^-3μ m^2s^-1 which is nearly four times more that fibrillar oligomers D which is 9.4 × 10^-4μ m ^2s^-1. This finding from the literature allows for the formulation of the diffusion function matrix for prefibrillar oligomers as follows: B = [ 4 × 10^-3 0; 0 4 × 10^-3 ] μm^2/s, it is inferred that D is isotropic, exhibiting uniform behavior along both the X and Y axes. This suggests that the coefficient remains constant over time, showing no variation with temporal changes. The drift function represents the deterministic part of the particle movement. It captures how the system's behavior is influenced by its current state, including interactions with other molecules, local environmental conditions, or any factors specific to the system being modeled. We model the drift function of amyloid stochastic presence in the synaptic cleft based on its root-mean-square velocity (v_rms), which is calculated as: v_rms = √(2dD/Δ t) μm/s, in the above v_rms formula, d represents the dimension of motion, D denotes the diffusion coefficient, and Δ t refers to the time step. Consequently, the SDE describing the movement of amyloid oligomers in the synaptic cleft can be defined as: Δ X_t = -√(2×2 × 4× 10^-3/Δ t)Δ t + 4 × 10^-3Δ W_t, Δ Y_t = -√(2×2 ×4× 10^-3/Δ t)Δ t + 4 × 10^-3Δ Z_t. It can be concluded that the stochastic movement of Aβos indeed leads to stochastic distributions of these oligomers within the synaptic cleft. § INTERACTION BETWEEN GLUTAMATE AND BETA-AMYLOID OLIGOMERS In this section, first we analyze the dynamics of collisions; next, we explore the frequency of these collisions; and finally, we examine the consequences of these interactions within the channel. According to the findings presented in <cit.>, a random distribution of obstacles significantly impedes the diffusion of particles more than a normal distribution does, as it hinders the particles' ability to predict their paths effectively. Similarly, the random presence of Aβos within the synaptic cleft causes random collisions with glutamate. From the moment glutamate starts colliding with Aβos, it initiates Brownian motion, a process where particles move unpredictably due to collisions, as discussed in <cit.>. In this study, as illustrated in Fig. 
<ref>, we divide the synaptic cleft, the channel for neuronal transmission, into three main regions: the first region, where glutamate undergoes normal diffusion; the second region, where collisions occur; and the third region, where particle movements are influenced by the collisions in the second region. §.§ Collision Dynamic Fig. <ref> illustrates the collisions between glutamate and Aβos. It is evident that the collision between Aβos and glutamate adheres to the principle of momentum conservation, as their respective momenta prior to the collision are approximately equal. Under this assumption, the conservation of momentum for the two entities can be expressed as: m_A · v_Ax1 + m_B · v_Bx1 = m_A · v_Ax2 + m_B · v_Bx2, where m_A and m_B represent the masses of two particles. v_Ax1 and v_Bx1 denote the velocities of particles A and B before the collision, respectively, while v_Ax2 and v_Bx2 are their velocities after the collision. To analyze the nature of collisions and the behavior of particles after impact, it is essential to determine the coefficient of restitution, denoted as e. This coefficient represents the ratio of the relative velocity of the particles after the collision to their relative velocity before the collision, measured along the line of impact <cit.>. The value of e ranges between 0 and 1. When e = 1, the collision is elastic, meaning no kinetic energy is lost. Conversely, when e = 0, the collision is perfectly inelastic, causing the objects to stick together and move as one after the collision, resulting in the maximum possible loss of kinetic energy within the system. Values of e between 0 and 1 represent partially elastic collisions, where some kinetic energy is lost. e is defined as: e = |v_B,x,2 - v_A,x,2|/|v_B,x,1 - v_A,x,1|. §.§ Collision Frequency The frequency of collisions between two particles undergoing Brownian motion is calculated as follows <cit.>: F = ( 2k_B T/3μ) ( 1/r + 1/R) (r + R), where k_B is the Boltzmann constant, T represents the absolute temperature, μ denotes the fluid viscosity, and r and R are the radii of the particles. (<ref>) is based on the following assumptions: * The collision efficiency is defined as one. * The fluid motion is laminar. * All the particles are the same size, a condition known as monodispersity. * Breaking of particles is not considered in this scenario. * The particles are spherical and will remain the same after collision. * Only two particles are involved in the collision. In the analysis of Aβo and glutamate collisions, we apply the previously mentioned assumptions for collision frequency, with the exception of collision efficiency (α), which can range between 0 and 1. As a result, the number of successful collisions differs depending on the value of α. Consequently, we propose that the collision frequency can be given by: F = α( 2k_B T/3μ) ( 1/r + 1/R) (r + R). §.§ Effects of Amyloid Beta Oligomers and Glutamate Collisions §.§.§ Impact on Glutamate Diffusion The stochastic presence of Aβos within the synaptic cleft significantly influences glutamate diffusion and uptake. The obstacles posed by both the size and concentration of Aβos impede the mobility of glutamate molecules. The effective diffusion coefficient, D̃_eff, can be modeled by the equation presented in <cit.>: D̃_eff(ϕ) ≈(1 - ϕ/ϕ_c)^λϕ_c, where ϕ represents the excluded volume fraction, indicating the volume occupied by the obstacles. 
The parameter ϕ_c denotes the percolation threshold, which is the critical volume fraction at which diffusion is severely hindered. The parameter λ relates to the geometrical configuration and shape of the obstacles. Fig. <ref> visually demonstrates the impact of Aβo presence and distribution on glutamate diffusion over time. Initially, glutamate spreads relatively uniformly; however, as time progresses and Aβos are introduced, the diffusion pattern becomes disrupted, resulting in localized regions of reduced glutamate concentration corresponding to the locations of Aβos. This visualization reinforces the conclusion that Aβos can significantly hinder glutamate diffusion within the synaptic cleft. §.§.§ Impact on Beta-Amyloid Oligomer Movement When a collision occurs between glutamate and an Aβo, the resulting force impels the Aβo in the direction of glutamate's initial trajectory, typically towards the postsynaptic membrane. This directional movement is driven by the transfer of momentum during the collision. The kinematic equation x = vt can be employed to estimate the displacement of Aβos after the collision, where v is the velocity imparted to the oligomer and t is the time elapsed. This equation provides a simplified model for understanding how the Aβo moves toward the postsynaptic site following its interaction with glutamate. §.§.§ Impact on NMDAR Dysregulation NMDARs are essential for synaptic transmission, plasticity, and the functional maintenance of the nervous system. However, their dysfunction is associated with neurotoxicity, seizures, ischemic stroke, and neurodegenerative diseases such as AD <cit.>. Activation of NMDARs leads to an increase in cytosolic free intracellular calcium ([Ca^2+]), a crucial factor for Long Term Potentiation (LTP) <cit.>. Furthermore, emerging evidence increasingly suggests that Aβos disrupt glutamate receptor function, leading to disturbances in glutamatergic synaptic transmission, which are closely linked to early cognitive deficits. NMDARs play a central role in the pathophysiology associated with Aβos for several key reasons. First, NMDAR function is directly impacted by Aβos, making it a critical target. Second, NMDARs are essential mediators of the effects of Aβ on synaptic transmission and plasticity. Third, NMDARs may serve as receptors for Aβos, either through direct or indirect interactions. Finally, NMDAR activity could modulate Aβ production itself <cit.>. In vivo studies suggest a regulatory relationship in which lower NMDAR activation is correlated with increased Aβ production, while higher activation levels are associated with a reduction <cit.>. Consequently, as depicted in Fig. <ref>, the binding of Aβos to NMDARs impairs the receptors' ability to receive released neurotransmitters, thus negatively impacting LTP. ∙ Kinetic Model for Aβos Binding to NMDARs: We propose a binding rate of Aβos to NMDARs by applying the principles of mass-action kinetics. The rate of a first-order reaction A → C is given by k[A], and the rate of a second-order reaction A + B → C is k[A][B], where [X] represents the concentration (or the number of molecules, adjusted for volume) of species X <cit.>. The binding rate constant k is assumed to vary inversely with the size of the amyloid oligomer: k = k_base×(50/size), where k_base is a base rate constant chosen arbitrarily.
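As a rough illustration of this mass-action scheme with the size-dependent rate constant (the corresponding rate equations are written out explicitly just after this sketch), the bimolecular binding A + R → AR can be integrated numerically as below. The initial concentrations follow the values assumed later in the text (Aβos = 100, NMDARs = 50, arbitrary units); k_base and the time grid are illustrative choices only.

```python
import numpy as np
from scipy.integrate import solve_ivp

def binding_rate_constant(size_kda, k_base=1e-3):
    """Size-dependent rate constant k = k_base * (50 / size); k_base is arbitrary."""
    return k_base * (50.0 / size_kda)

def mass_action(t, y, k):
    """y = [A, R, AR]: free Aβo, free NMDAR, bound complex (arbitrary units)."""
    A, R, AR = y
    rate = k * A * R              # second-order binding A + R -> AR
    return [-rate, -rate, rate]

def simulate_binding(size_kda, A0=100.0, R0=50.0, t_end=200.0):
    """Integrate the 1:1 binding kinetics for an oligomer of the given size."""
    k = binding_rate_constant(size_kda)
    return solve_ivp(mass_action, (0.0, t_end), [A0, R0, 0.0], args=(k,))

if __name__ == "__main__":
    for size in (50, 250):        # small vs. large oligomer, in kDa
        sol = simulate_binding(size)
        print(size, "kDa -> final [AR] ≈", round(sol.y[2, -1], 2))
```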
The factor 50/size accounts for the hypothesis that larger oligomers have slower diffusion rates and greater steric hindrance, making it more difficult for them to move within the cleft and bind to receptors. The differential equations based on these reaction rates are: * d[A]/dt = -k[A][R] for Aβos, * d[R]/dt = -k[A][R] for NMDA receptors, * d[AR]/dt = k[A][R] for the complex. These equations collectively describe a bimolecular reaction where Aβos and NMDA receptors bind to form a complex, following second-order kinetics with a rate constant k. The formation of the complex decreases the concentrations of both free Aβos and NMDA receptors, and the stoichiometry of the reaction is 1:1, assuming no other pathways or reactions are involved. §.§.§ Impact on Signal-to-Noise Ratio (SNR) MC allows implantable devices to transmit information using molecules as carriers. However, a major challenge in MC is molecular noise, which increases the likelihood of communication errors. This noise is particularly relevant in pathological conditions like AD, where cellular disruptions lead to an imbalance in molecular processes, making affected cells more reactive and thus noisier <cit.>. The SNR is a critical metric for evaluating the efficiency of MC within the synaptic cleft, especially in the presence of Aβos. In this context, the signal refers to the successful diffusion and binding of glutamate molecules to postsynaptic receptors, while the noise is represented by the collisions with Aβos, which impede this diffusion process. The SNR quantitatively measures the ability to distinguish the signal from the noise, thereby illustrating the impact of Aβos on synaptic transmission. To calculate the SNR, the signal power is modeled as the MSD of glutamate in an obstacle-free environment, representing optimal neurotransmitter diffusion. In contrast, the noise power is derived from the variance in glutamate diffusion due to the stochastic presence of Aβos, which act as physical barriers within the synaptic cleft. The SNR is then expressed as the ratio of signal power to noise power, providing insights into the extent to which Aβos impair glutamate transmission. The SNR can thus be formulated as: SNR = MSD_g (without obstacles)/MSD_g (with obstacles) - MSD_g (without obstacles), where MSD_g represents the mean squared displacement of glutamate. § RESULTS AND DISCUSSION In this section, we discuss the results of our numerical simulations analyzing the interactions between glutamate and Aβos within the synaptic cleft. These simulations were conducted using a mathematical model developed in MATLAB, which builds upon the theoretical framework outlined in previous chapters. Our study focuses on evaluating how the size and concentration of Aβos affect several key parameters: collision frequency, glutamate diffusion and its MSD, movement of Aβos towards the postsynaptic membrane, their binding rate with NMDARs, and the SNR of the synaptic channel. Additionally, we examine the implications of these interactions for synaptic dysfunction, particularly in the context of AD. To initiate the analysis, we reference the stochastic differential equations (<ref>) and (<ref>) introduced in Sec. <ref>, which describe the random movement of Aβos within the synaptic cleft. Fig. <ref> illustrates the stochastic distribution of Aβos, highlighting three distinct scales of the Wiener process. 
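Trajectories of the kind shown in this figure can be generated with a simple Euler–Maruyama discretization of the SDE in (<ref>)–(<ref>). The sketch below is an illustrative approximation only — the time step, number of steps, and the Wiener-increment scale factor are arbitrary choices, not the settings of the MATLAB model used for the figure.

```python
import numpy as np

D = 4e-3          # prefibrillar Aβo diffusion coefficient [µm²/s], from the text
DT = 1e-3         # time step [s], arbitrary choice
N_STEPS = 5000    # number of steps, arbitrary choice

def simulate_abo_trajectory(wiener_scale=1.0, seed=0):
    """Euler–Maruyama integration of the 2-D oligomer SDE.
    Drift is -v_rms = -sqrt(2*d*D/dt) with d = 2; the noise term is scaled by D.
    `wiener_scale` rescales the Wiener increments to compare different scales."""
    rng = np.random.default_rng(seed)
    drift = -np.sqrt(2 * 2 * D / DT)          # deterministic part per unit time
    xy = np.zeros((N_STEPS + 1, 2))
    for i in range(N_STEPS):
        dW = wiener_scale * rng.normal(0.0, np.sqrt(DT), size=2)
        xy[i + 1] = xy[i] + drift * DT + D * dW
    return xy

if __name__ == "__main__":
    for scale in (0.5, 1.0, 2.0):             # three Wiener-process scales
        traj = simulate_abo_trajectory(scale)
        print(scale, "-> final position", np.round(traj[-1], 4))
```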
This figure demonstrates how variations in the Wiener process significantly impact the random motion, leading to diverse positioning of Aβos within the cleft. §.§ Collision Frequency In Sec. <ref>, the collision frequency has been analyzed, with the corresponding simulation results presented in Fig. <ref>. The parameters used for the simulation are detailed in Table <ref>. The results depict the collision frequency between Aβo and glutamate as a function of Aβo size, considering five different collision efficiencies. The data indicate that as the size of Aβo increases, the collision frequency also rises linearly. Moreover, the analysis suggests that higher collision efficiency (α) results in a proportionately higher collision frequency, with α = 1 yielding the highest frequency across all oligomer sizes. Notably, for α = 0, the collision frequency remains zero, regardless of the Aβo size, representing a boundary condition where no collisions occur. These findings underscore the importance of considering both the size of Aβo and the efficiency of collisions in evaluating their potential impact on synaptic function. §.§ Glutamate Diffusion In Sec. <ref>, we considered (<ref>) to study the effect of Aβos concentration and size on glutamate diffusion. As shown in Fig. <ref>, the graphs provided depict the effective diffusion coefficient of glutamate as a function of the excluded volume fraction (ϕ) occupied by amyloid obstacles, considering different values of the percolation threshold (ϕ_c) and a parameter λ, which relates to the geometrical configuration and shape of the obstacles. For Fig. <ref> with ϕ_c = 0.3, it is observed that for λ = 1, the effective diffusion coefficient decreases rapidly as ϕ increases, with a slight recovery after a certain point, suggesting a complex interaction between glutamate molecules and amyloid obstacles. For λ = 2 and λ = 3, the diffusion coefficient decreases steadily without recovery, indicating that a higher λ leads to a more substantial reduction in diffusion. This suggests that when ϕ_c is low, indicating a lower percolation threshold, the diffusion of glutamate is significantly hindered even at smaller excluded volume fractions. The effect is more pronounced for higher λ, implying that as the complexity of the obstacle geometry increases, the hindrance to glutamate diffusion also increases. If we consider the size of Aβo as an influential parameter on λ, we can conclude that larger Aβo sizes have a more significant negative impact on glutamate diffusion compared to smaller ones. For Fig. <ref>, with ϕ_c = 0.5, it is observed that for λ = 1, the diffusion coefficient decreases smoothly yet significantly. For λ = 2 and λ = 3, the curves exhibit a more complex behavior, with a noticeable change in slope around ϕ = 0.5, particularly for λ = 2. This indicates that with a higher percolation threshold of ϕ_c = 0.5, the initial diffusion of glutamate is less hindered compared to Fig. <ref>. However, as ϕ increases, the hindrance becomes more pronounced, especially for λ = 2 and λ = 3. This suggests that the geometrical configuration and shape of the obstacles play a critical role in determining diffusion behavior when the obstacles occupy a larger volume. In Fig. <ref>, with ϕ_c = 0.7, it is observed that for all values of λ, the effective diffusion coefficient decreases steadily as ϕ increases, with no recovery observed as in Fig. <ref>. The decrease is more gradual compared to Figs. <ref> and <ref>, particularly for lower values of λ. 
With the highest percolation threshold of ϕ_c = 0.7, the diffusion of glutamate is less affected by lower ϕ values, indicating that obstacles must occupy a larger volume fraction before significantly hindering diffusion. The steady decrease across all λ values suggests that at higher ϕ_c, the shape and geometrical configuration of the obstacles have a less pronounced effect on glutamate diffusion compared to lower ϕ_c values. In conclusion, both the concentration and size (as reflected by ϕ and λ) of amyloid obstacles significantly influence glutamate diffusion in the synaptic cleft. Lower percolation thresholds (ϕ_c) result in more substantial hindrance to diffusion, particularly at smaller excluded volume fractions, whereas higher thresholds require a more significant volume fraction to produce a similar effect. The parameter λ, which reflects the complexity of the obstacles' geometry, plays a crucial role in determining the rate at which the diffusion coefficient decreases with increasing ϕ. This result aligns with the findings of <cit.>, which indicate that as the number of obstacles increases, the dispersion of particles becomes more gradual. §.§ Glutamate MSD Fig. <ref> illustrates the influence of Aβos on MSD of glutamate molecules within the synaptic cleft, emphasizing the effects of both amyloid size and concentration. In Fig. <ref>, as the concentration of amyloid oligomers increases while their size remains constant, there is a marked reduction in the MSD of glutamate. This finding suggests that higher concentrations of Aβos create more obstacles, thereby impeding glutamate diffusion. Conversely, Fig. <ref> presents the scenario where the concentration of Aβos is constant, but their size varies according to the range of radii listed in Table <ref>. A similar trend is observed: larger amyloid oligomers cause a significant decrease in glutamate MSD, indicating that larger obstacles more effectively obstruct glutamate movement. These findings collectively demonstrate that both the size and concentration of Aβos are critical factors in disrupting glutamate diffusion, with larger and more numerous amyloid oligomers resulting in a more pronounced hindrance. §.§ Aβ Movement In Section <ref>, we explored how the kinematic equations for Aβos can be employed to estimate the distance these particles travel post-collision with glutamate molecules. Considering the stochastic nature of Aβo movement, which follows Brownian motion, we utilize the (v_rms) as a representative value for the relative velocity of both glutamate and Aβos prior to collisions which are listed in Table <ref>. These velocities are detailed in Table <ref>. Utilizing the equations for momentum (<ref>) and elasticity (<ref>), Fig. <ref> illustrates the results of three consecutive collisions for Aβos within the synaptic cleft. The simulation parameters used are detailed in Table <ref>. The graph shows a progressive increase in velocity with each collision, particularly evident in the first graph (Velocity of Aβo over Time). The velocity steps up distinctly after each collision, indicating that the kinetic energy imparted during the collisions propels the Aβos forward. However, it is crucial to observe that the magnitude of these velocity increases—and consequently the displacement—is dependent on the size of the Aβos. The second graph (Displacement of Aβos over Time) shows that smaller oligomers (e.g., 50 kDa) exhibit much greater displacement over time compared to larger ones (e.g., 250 kDa). 
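The stepwise velocity gains described here follow directly from the momentum-conservation and restitution relations (<ref>) and (<ref>); a schematic one-dimensional sketch is given below. The masses, the incoming glutamate speed, the restitution coefficient, and the time between collisions are all assumed, illustrative values chosen only to reproduce the qualitative size dependence.

```python
def post_collision_velocities(m_a, v_a, m_b, v_b, e=1.0):
    """1-D post-collision velocities of particles A (glutamate) and B (Aβo)
    obtained from momentum conservation and a restitution coefficient e in [0, 1]."""
    p = m_a * v_a + m_b * v_b                      # total momentum is conserved
    v_a2 = (p + m_b * e * (v_b - v_a)) / (m_a + m_b)
    v_b2 = (p + m_a * e * (v_a - v_b)) / (m_a + m_b)
    return v_a2, v_b2

def displacement_after_collisions(m_glu, v_glu, m_abo, n_collisions=3,
                                  dt_between=0.1, e=0.9):
    """Accumulate Aβo velocity and displacement (x = v·t between collisions),
    assuming each collision is with a fresh glutamate molecule at speed v_glu."""
    v_abo, x = 0.0, 0.0
    for _ in range(n_collisions):
        _, v_abo = post_collision_velocities(m_glu, v_glu, m_abo, v_abo, e)
        x += v_abo * dt_between
    return v_abo, x

if __name__ == "__main__":
    m_glu = 0.147                                  # glutamate ≈ 147 Da, in kDa-like units
    for m_abo in (50.0, 250.0):                    # oligomer masses in the same relative units
        v, x = displacement_after_collisions(m_glu, v_glu=10.0, m_abo=m_abo)
        print(f"{m_abo:.0f} kDa: final velocity {v:.4f}, displacement {x:.4f}")
```

Running this sketch reproduces the qualitative pattern discussed above: the 50 kDa oligomer acquires a noticeably larger post-collision velocity, and hence displacement, than the 250 kDa oligomer.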
This is attributed to their lower mass, which translates to less inertia, allowing them to be more easily accelerated by the collisions. Conversely, larger Aβos, while still experiencing increases in velocity, show markedly smaller displacements. This is due to their greater mass and inertia, which makes them less responsive to the forces exerted during collisions. These findings emphasize the significant role that both size and mass play in influencing the post-collision movement of Aβos within the synaptic cleft. The analysis suggests that smaller Aβos are more likely to travel further and potentially reach the postsynaptic membrane, potentially impacting synaptic function more than their larger counterparts. §.§ Aβ binding In Section <ref>, we propose that Aβos are likely to bind to NMDARs as a result of their movement after collisions. Fig. <ref> presents four plots that depict the concentrations of Aβos, NMDARs, and the Amyloid-NMDAR complex over time, with varying masses of Aβos. The initial concentrations are assumed to be: Aβos = 100 and NMDARs = 50. The figure illustrates a clear trend: smaller Aβ oligomers (50 kDa) exhibit a significantly higher propensity to bind to NMDARs compared to larger ones (e.g., 250 kDa). This trend is observed in the increasing concentration of the Amyloid-NMDAR complex as the mass of Aβos decreases. In contrast, larger Aβ oligomers show a relatively lower increase in complex concentration over the same time period. This behavior can be attributed to the increased mobility of smaller Aβ oligomers, which enhances their chances of encountering and binding to NMDARs. As these smaller Aβos bind to NMDARs, they obstruct further receptor activation, leading to a weakening of LTP. This interference with normal synaptic function can disrupt subsequent signaling processes, potentially culminating in cellular damage and death <cit.>. These findings are consistent with the observations in <cit.>, which assert that LTP is primarily mediated by low-n oligomers (small oligomers), rather than Aβ monomers or larger aggregates. The higher binding affinity of smaller Aβos to NMDARs suggests that they play a more critical role in the disruption of synaptic signaling, which is a hallmark of neurodegenerative processes. §.§ SNR Through simulations using mathematical modeling for SNR in Sec. <ref>, we assess the SNR under varying conditions, such as different concentrations and sizes of Aβos. The impact of Aβos on the synaptic cleft as a molecular communication channel was evaluated by analyzing the SNR, which serves as a critical metric for understanding the level of noise introduced into the communication channel by these obstacles. The SNR provides a quantitative measure of the channel's efficiency in transmitting signals (in this case, molecular signals such as glutamate) in the presence of noise-inducing factors Aβos. Two primary variables were considered in : the concentration (number) of Aβos and their size (radius). As shown in Fig. <ref>, the SNR decreases significantly with an increasing number of Aβos, indicating a rise in channel noise. At a low concentration of five Aβos, the channel exhibits a high level of noise, as evidenced by the highly negative SNR of approximately -70. This suggests that even a small number of obstacles can severely degrade the quality of molecular communication within the synaptic cleft. 
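The SNR values quoted in this subsection follow from the definition in (<ref>). As a minimal illustration of how that ratio behaves, the sketch below compares a free synthetic random walk with an artificially hindered one; the trajectories are synthetic stand-ins (hindrance is mimicked by shrinking the step size), not output of the study's MATLAB model.

```python
import numpy as np

def msd(traj):
    """Mean squared displacement of a 2-D trajectory relative to its starting
    point, averaged over all time points."""
    disp = traj - traj[0]
    return np.mean(np.sum(disp**2, axis=1))

def snr_from_trajectories(traj_free, traj_obstructed):
    """SNR as defined in the text: MSD without obstacles divided by the
    difference between the obstructed and the free MSD."""
    msd_free = msd(traj_free)
    msd_obst = msd(traj_obstructed)
    return msd_free / (msd_obst - msd_free)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    free = np.cumsum(rng.normal(0, 1.0, size=(2000, 2)), axis=0)
    obstructed = np.cumsum(rng.normal(0, 0.7, size=(2000, 2)), axis=0)
    print("SNR ≈", round(snr_from_trajectories(free, obstructed), 2))
```

Because the obstructed MSD is smaller than the free one, the denominator is negative and the resulting SNR is negative, consistent with the sign of the values reported here.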
As the number of Aβos increases to ten and fifteen, the SNR improves (becomes less negative), reflecting a non-linear relationship between obstacle concentration and channel noise. The saturation effect observed implies that while additional Aβos continue to introduce noise, their relative impact diminishes as the channel becomes increasingly congested. This trend indicates that the channel noise increases sharply at lower concentrations but stabilizes at higher concentrations, possibly due to the overlapping influence of multiple obstacles. The analysis of the impact of Aβos size on channel noise, as illustrated in Fig. <ref>, reveals that smaller oligomers (with a radius of 7.5 nm) introduce greater noise into the molecular communication channel, resulting in a lower SNR of approximately -7. As the size of the oligomers increases, the SNR improves, indicating a reduction in channel noise. Larger oligomers, although still obstructive, appear to create a more stable and less disruptive communication environment. This finding suggests that larger oligomers may cause more predictable and consistent noise, leading to a less severe impact on the overall communication process compared to smaller oligomers, which cause more random and disruptive noise patterns. Finally, the study highlights the significant role that both the concentration and size of Aβos play in determining the noise levels within the synaptic cleft as a molecular communication channel. While higher concentrations of Aβos substantially increase channel noise, the impact becomes less pronounced at higher concentrations, likely due to saturation effects. On the other hand, smaller amyloid Aβos generate more severe noise compared to larger ones, underscoring the importance of obstacle size in the modulation of channel noise. These findings provide crucial insights into the challenges of maintaining effective molecular communication in biological systems, particularly in pathological conditions such as AD, where Aβos are prevalent and contribute to impaired communication by introducing significant noise into the synaptic channel. Overall, the results of our study are align with the experiments in vitro and in vivo analysis of <cit.> which have demonstrated that cell-derived low-n (small oligomers) Aβos can trigger hippocampal synapse loss and may be important effectors of synaptic dysfunction in AD. § CONCLUSION AND FUTURE DIRECTIONS Our study utilized stochastic modeling to investigate the interactions between Aβos and glutamate within the synaptic cleft, with a particular emphasis on how the size and mass of Aβos influence glutamate diffusion, concentration, and displacement. The findings demonstrate that the size of Aβos is a pivotal factor in determining their impact on synaptic function. Smaller oligomers, due to their higher mobility, are more likely to bind with NMDARs, highlighting them as potential focal points for therapeutic interventions. In contrast, larger oligomers, which exhibit reduced mobility, are less likely to bind with postsynaptic membrane receptors but significantly disrupt glutamate dynamics by reducing its velocity following collisions. These insights emphasize the need to consider the diverse physical properties of Aβos in the development of targeted therapies aimed at mitigating synaptic disruptions in AD. The findings of this study invite further exploration into several promising areas. 
An in-depth analysis of the collision dynamics between Aβos and glutamate molecules could elucidate the synaptic disruptions observed in AD. Specifically, understanding how the size of Aβos affects their interaction with glutamate may unveil new therapeutic targets aimed at mitigating synaptic toxicity. Furthermore, strategies to shrink or stabilize larger oligomers should be investigated to reduce their obstructive impact on glutamate diffusion and, consequently, their contribution to synaptic dysfunction. Advancements in imaging technologies are crucial for progressing this research field. Developing high-resolution imaging systems capable of accurately measuring Aβos size in real-time would significantly enhance our ability to monitor the progression of AD and evaluate the effectiveness of therapeutic interventions. Additionally, the creation of a real-time analytical system to dynamically observe interactions within the synaptic cleft could provide invaluable insights into the mechanistic processes disrupting synaptic function. On the therapeutic front, targeted interventions should focus on manipulating NMDAR interactions with smaller oligomers. Enhancing these interactions could potentially offset the adverse effects caused by larger oligomers, thus preserving synaptic function. This approach warrants the development of pharmacological agents specifically tailored to the characteristics of Aβos, such as their size and mobility. Such specificity could lead to more effective treatments that directly address the pathological features of AD. [Author photograph] Nayereh FallahBagheri (PhD Student Member, IEEE) received her BSc in Electrical Engineering from Semnan State University, Semnan, Iran, in 2018. She is currently working as a research assistant at the Next-generation and Wireless Communications Laboratory and pursuing her Ph.D. degree in Electrical and Electronics Engineering at Koç University, under the supervision of Prof. Dr. Özgür Barış Akan. Her research interests include molecular communications, intrabody nanonetworks, and the Internet of Everything. [Author photograph] Özgür B. Akan (Fellow, IEEE) received the PhD from the School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, in 2004. He is currently the Head of the Internet of Everything (IoE) Group with the Department of Engineering, University of Cambridge, UK, and the Director of the Centre for neXt-generation Communications (CXC), Koç University, Turkey. His research interests include wireless, nano, and molecular communications and the Internet of Everything.
http://arxiv.org/abs/2409.02681v1
20240904131159
Neural Networks with LSTM and GRU in Modeling Active Fires in the Amazon
[ "Ramon Tavares" ]
cs.LG
[ "cs.LG", "cs.AI", "stat.AP" ]
Pointwise and uniform bounds for functions of the Laplacian on non-compact symmetric spaces [ September 9, 2024 =========================================================================================== § ABSTRACT This study presents a comprehensive methodology for modeling and forecasting the historical time series of fire spots detected by the AQUA_M-T satellite in the Amazon, Brazil. The approach utilizes a mixed Recurrent Neural Network (RNN) model, combining Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) architectures to predict monthly accumulations of daily detected fire spots. A summary of the data revealed a consistent seasonality over time, with annual maximum and minimum fire spot values tending to repeat at the same periods each year. The primary objective is to verify whether the forecasts capture this inherent seasonality through rigorous statistical analysis. The methodology involved careful data preparation, model configuration, and training using cross-validation with two seeds, ensuring that the data generalizes well to the test and validation sets, and confirming the convergence of the model parameters. The results indicate that the mixed LSTM and GRU model offers improved accuracy in forecasting 12 months ahead, demonstrating its effectiveness in capturing complex temporal patterns and modeling the observed time series. This research significantly contributes to the application of deep learning techniques in environmental monitoring, specifically in fire spot forecasting. In addition to improving forecast accuracy, the proposed approach highlights the potential for adaptation to other time series forecasting challenges, opening new avenues for research and development in machine learning and natural phenomenon prediction. Keywords: Time Series Forecasting, Recurrent Neural Networks, Deep Learning. § INTRODUÇÃO Séries temporais são amplamente utilizadas em diversas áreas, como economia, climatologia, e monitoramento ambiental, e contam com grandes referências como Box e Jenkins (1976) <cit.>, Hamilton (1994) <cit.>, e Brockwell e Davis (2002) <cit.>. De maneira geral, uma série temporal pode ser definida como um conjunto de informações fixadas no tempo e/ou no espaço de forma padronizada ou não. Quando tratamos de séries temporais de dados quantitativos discretos, onde o tempo é o principal fator de interesse, podemos entender essa série como um conjunto de observações que representam quantidades específicas, registradas ao longo do tempo. No contexto deste trabalho, focamos na série temporal dos focos ativos detectados pelo satélite AQUA_M-T na Amazônia, Brasil. Os focos ativos são detectados com base em anomalias de temperatura em pixels observados pelo satélite. Quando a temperatura de um pixel atinge níveis significativamente elevados, como, por exemplo, acima de 47°C — valor que, segundo o Sistema Estadual de Informações Ambientais e Recursos Hídricos (SEIA, 2024) <cit.>, caracteriza um foco de calor — o satélite registra a ocorrência de um foco ativo. Por exemplo, se mil focos ativos foram detectados em um determinado mês, isso significa que mil metros quadrados na região amazônica apresentaram temperaturas acima de 47°C. Esses dados, disponibilizados mensalmente pelo Instituto Nacional de Pesquisas Espaciais (INPE), oferecem uma visão histórica importante, embora seja sabido que os satélites como o AQUA_M-T possuem limitações em termos de precisão, devido à sua idade e tecnologia. 
Entretanto, mesmo com essas limitações, os dados são valiosos para a identificação de padrões sazonais e anomalias ao longo dos anos. Futuramente, espera-se que esses dados sejam atualizados com a entrada em operação de novos satélites, o que permitirá um monitoramento ainda mais preciso. Neste trabalho, utilizamos modelos de Redes Neurais Recorrentes (RNNs), especificamente as arquiteturas Long Short-Term Memory (LSTM) propostas por Schmidhuber (1997) <cit.> e Gated Recurrent Unit (GRU) propostas por Chung et al. (2014) <cit.>, para modelar e prever a quantidade de focos ativos na Amazônia. A LSTM é conhecida por sua capacidade de lidar com problemas de retenção de longo prazo, enquanto a GRU simplifica a estrutura da LSTM, aumentando a eficiência do modelo. A combinação dessas duas arquiteturas em um modelo misto oferece a robustez necessária para capturar os padrões complexos presentes na série temporal analisada. O lado positivo das redes neurais é a capacidade de um modelo bem treinado aprender padrões independentemente da escala dos dados. No caso dos focos ativos, a série temporal varia de um mínimo de 70 focos registrados em abril de 1999 a um máximo de 73.141 focos em setembro de 2007. Esse intervalo expressivo demonstra a importância de desenvolver uma arquitetura robusta e bem configurada para garantir que o modelo consiga aprender essas variações e realizar previsões com menores erros em comparação aos valores reais observados. Neste artigo, além de explorar a aplicação das RNNs, LSTM e GRU, você encontrará uma visão detalhada de como foi estruturado e treinado o modelo, quantos neurônios e épocas de treinamento foram utilizados, e como as previsões foram realizadas. Discutiremos a fundamentação teórica por trás das redes neurais recorrentes, analisaremos os dados históricos, identificando a sazonalidade dos focos ativos na Amazônia, e apresentaremos os resultados das previsões geradas pelo modelo treinado. Por fim, serão discutidas as implicações dessas previsões e as conclusões deste estudo. Dito isso, seguimos adiante com este estudo, detalhando cada etapa do processo de modelagem, treinamento, validação e previsão, para demonstrar a eficácia das redes neurais recorrentes na análise de séries temporais ambientais. § REFERENCIAL TEÓRICO De acordo com Graves et al. (2013) <cit.>, as Redes Neurais Recorrentes Recurrent Neural Networks (RNNs) são modelos poderosos para dados sequenciais. Elas são capazes de lidar com problemas de rotulagem de sequências onde o alinhamento entre entrada e saída é desconhecido. Esses modelos são construídos para aprender dependências temporais em dados sequenciais e mantêm uma memória interna para processar informações anteriores. §.§ Unidade RNN Dada uma sequência de entrada (x_t, x_t+1, x_t+2, …, x_t+n), uma Rede Neural Recorrente padrão computa a sequência de vetores ocultos (h_t-1, h_t, h_t+1, h_t+2, …, h_t+n) e a sequência de vetores de saída (y_t, y_t+1, y_t+2, …, y_t+n). 𝐡_t = A(𝐖_hh⊙𝐡_t-1 + 𝐖_xh⊙𝐱_t + 𝐛_h), 𝐲_t = (𝐖_yi⊙𝐡_t + 𝐛_y), em que W é a matriz de pesos e b é o viés, e o operador ⊙ representa a multiplicação elemento a elemento; o estado de saída y_t gerado no tempo t é determinado pela informação de entrada x_t e pelo estado oculto anterior h_t-1 no tempo t-1. A Equação (<ref>) mostra como o estado oculto atual h_t é calculado usando uma função de ativação (A), pesos (W_hh) e viés (b_h) correspondentes. 
Esse modelo de unidade de Redes Neurais Recorrentes é fundamental para compreender a propagação de informações ao longo do tempo em uma Rede Neural Recorrente. A estrutura interna da unidade RNN é exibida na Figura <ref>. §.§ Unidade LSTM No artigo Speech Recognition with Deep Recurrent Neural Networks Graves et al. (2013) <cit.>, os autores enfatizam que a arquitetura das Redes de Memória de Longo e Curto Prazo (Long Short-Term Memory, LSTM) é particularmente eficaz para tarefas que requerem o processamento de sequências temporais longas. As LSTMs se destacam pela capacidade de superar as limitações das Redes Neurais Recorrentes (RNNs) tradicionais, permitindo que informações relevantes sejam retidas por períodos mais prolongados. Isso é primordial para lidar com dependências temporais extensas. Enquanto as RNNs funcionam em sequências temporais mantendo uma memória interna, as LSTMs aprimoram essa capacidade ao utilizar gates portões para controlar o fluxo de informações. Esses portões facilitam uma retenção mais eficaz das informações a longo prazo, comparado às RNNs tradicionais, que enfrentam dificuldades em manter dependências temporais mais longas. Dessa forma, as LSTMs demonstram uma capacidade superior de generalização e previsão quando confrontadas com dados de entrada que se estendem por longos períodos de tempo. A arquitetura Long Short-Term Memory (LSTM), conforme descrito por Greff et al. (2017) <cit.>, é projetada para lidar com as limitações das Redes Neurais Recorrentes tradicionais em tarefas de aprendizado de sequências temporais. O bloco LSTM é composto por três componentes principais, como ilustrado na Figura <ref>: * Portão de Entrada: Este portão regula a quantidade de nova informação que será incorporada na célula de memória. Ele determina quais informações devem ser adicionadas ao estado da célula. * Portão de Esquecimento: Este portão decide quais informações presentes na célula de memória devem ser descartadas. Ele ajuda a manter a relevância dos dados ao longo do tempo, removendo informações que não são mais necessárias. * Portão de Saída: Este portão controla a quantidade de informação da célula de memória que será utilizada na saída do bloco LSTM. Ele decide quais informações da célula de memória serão passadas para a próxima etapa na sequência. Esses portões são responsáveis por regular o fluxo de informações dentro do bloco LSTM, permitindo a retenção e atualização eficaz de dados relevantes por longos períodos. A estrutura interna do LSTM permite que o modelo capture dependências temporais extensas e mantenha a precisão em tarefas que envolvem sequências longas e complexas. Seja 𝐱_t o vetor de entrada no tempo t, N o número de unidades LSTM na camada e M o número de entradas (aqui N × M representa a dimensão da matriz de pesos). Então, obtemos os seguintes pesos para uma camada LSTM: * Pesos de entrada: 𝐖_z, 𝐖_i, 𝐖_f, 𝐖_o ∈ℝ^N× M; * Pesos recorrentes: 𝐑_z, 𝐑_i, 𝐑_f, 𝐑_o ∈ℝ^N× N; * Pesos de viés: 𝐛_z, 𝐛_i, 𝐛_f, 𝐛_o ∈ℝ^N. Então, de acordo com Greff et al. (2017) <cit.>, as fórmulas vetoriais para uma passagem direta em uma camada LSTM podem ser escritas como: 𝐳̅_t = 𝐖_z 𝐱_t + 𝐑_z 𝐲_t-1 + 𝐛_z, 𝐳_t = g(𝐳̅_t) entrada do bloco; 𝐢̅_t = 𝐖_i 𝐱_t + 𝐑_i 𝐲_t-1 + 𝐛_i, 𝐢_t = σ(𝐢̅_t) porta de entrada; 𝐟̅_t = 𝐖_f 𝐱_t + 𝐑_f 𝐲_t-1 + 𝐛_f, 𝐟_t = σ(𝐟̅_t) porta de esquecimento; 𝐜_t = 𝐳_t ⊙𝐢_t + 𝐜_t-1⊙𝐟_t célula; 𝐨̅_t = 𝐖_o 𝐱_t + 𝐑_o 𝐲_t-1 + 𝐛_o, 𝐨_t = σ(𝐨̅_t) porta de saída; 𝐲_t = h(𝐜_t) ⊙𝐨_t saída do bloco. Em que σ, g e h são funções de ativação não lineares ponto a ponto. 
A função sigmoide (σ(x) = 1/1+e^-x) é usada como função de ativação da porta, e a tangente hiperbólica (g(x) = h(x) = tanh(x)) é comumente usada como função de ativação de entrada e saída do bloco. A multiplicação ponto a ponto de dois vetores é denotada por ⊙. §.§ Unidade Recorrente com Portas GRU As Unidades Recorrentes com Portas Gated Recurrent Units (GRU), introduzidas por Chung et al. (2014) <cit.>, são uma variação das LSTM. Enquanto as LSTM possuem três portões e uma célula de memória, as GRU simplificam essa estrutura ao fundir os portões de entrada e esquecimento em um único portão de atualização. Essa simplificação tem como objetivo tornar o treinamento mais eficiente e reduzir o número de parâmetros, mantendo um desempenho comparável às LSTM. As fórmulas vetoriais para uma passagem direta em uma camada GRU foram encontradas no artigo de Cheng et al. (2024) <cit.> de uma forma mais simplista que são: 𝐳̅_t = 𝐖_z 𝐱_t + 𝐑_z 𝐲_t-1 + 𝐛_z, 𝐳_t = σ(𝐳̅_t) portão de atualização; 𝐫̅_t = 𝐖_r 𝐱_t + 𝐑_r 𝐲_t-1 + 𝐛_r, 𝐫_t = σ(𝐫̅_t) portão de reset; 𝐡̃_t = 𝐖_h 𝐱_t + 𝐑_h (𝐫_t ⊙𝐲_t-1) + 𝐛_h, 𝐡_t = 𝐳_t ⊙𝐲_t-1 + (1 - 𝐳_t) ⊙𝐡̃_t estado oculto, em que W são matrizes de pesos; b são vetores de viés; σ é a função de ativação sigmoide e o ⊙ denota a multiplicação ponto a ponto. A figura <ref> mostra um esquema das Unidades Recorrentes com Portas (GRU) e a arquitetura típica dessa rede. §.§ Funções de Ativação As funções de ativação são componentes fundamentais em redes neurais, responsáveis por introduzir não-linearidades nas saídas das camadas, o que permite às redes neurais aprender e modelar relações complexas nos dados. Essas funções não possuem parâmetros ajustáveis e são fixas, usadas especificamente para introduzir não-linearidade nas redes neurais conforme Goodfellow, Bengio e Courville (2016) <cit.>. A Figura <ref> ilustra a transformação linear e a ativação linear em uma camada densa final de uma rede neural. §.§ Entendendo o Funcionamento das Camadas Densas em Redes Neurais Recorrentes Os neurônios em redes neurais recorrentes (RNNs) são unidades fundamentais que processam informações ao longo do tempo. Eles são responsáveis por realizar operações matemáticas nos dados de entrada e nos estados ocultos anteriores (previsão do bloco anterior) para gerar saídas e atualizar seus próprios estados. Uma camada densa é uma camada comumente usada em redes neurais, em que cada neurônio na camada está totalmente conectado a todos os neurônios na camada anterior. Os cálculos realizados em uma camada densa envolvem multiplicação de matriz entre a entrada dos dados e os pesos (parâmetros) da camada, seguida por uma função de ativação. Aqui estão os cálculos para uma camada densa: Seja 𝐱 a matriz de entrada de dimensão (m × n), em que m é o número de amostras e n é o número de características. Seja 𝐖 a matriz de pesos da camada densa de dimensão (n × p), com p sendo o número de neurônios na camada densa. Além disso, seja 𝐛 o vetor de viés da camada densa de dimensão (p × n). A saída da camada densa 𝐙 é calculada da seguinte forma: 𝐙 = 𝐱𝐖 + 𝐛, aqui, 𝐱𝐖 representa a multiplicação de matriz entre a entrada e os pesos da camada densa, e 𝐛 é o viés adicionado para produzir a saída final. É importante notar que após essa operação, geralmente é aplicada uma função de ativação aos elementos de 𝐙 para introduzir não linearidade na camada densa conforme Goodfellow, Bengio e Courville (2016) <cit.>. 
§.§ Algoritmo de Otimização Adam: Uma Visão Geral O algoritmo Adam, desenvolvido por Kingma e Ba (2014) <cit.>, utiliza médias móveis exponenciais dos gradientes para atualizar os parâmetros, acelerando a convergência e evitando que o modelo fique preso em mínimos locais. O Adam incorpora estimativas de primeira e segunda ordens com correções de viés para melhorar a eficácia da otimização. As configurações padrão para os problemas de aprendizado de máquina testados são α = 0.001, β_1 = 0.9, β_2 = 0.999 e ϵ = 10^-8. Todas as operações em vetores são realizadas elemento a elemento (matricialmente). Com β_t^1 e β_t^2 denotados como β_1 e β_2 elevados à potência t. §.§.§ Descrição dos Parâmetros De acordo com Kingma e Ba (2014) <cit.>, o algoritmo Adam é considerado uma técnica avançadda de otimização que calcula taxas de aprendizado adaptativas para cada parâmetro. Ele combina características dos métodos Adagrad e RMSprop, mantendo médias móveis exponenciais dos gradientes e dos gradientes ao quadrado para ajustar as taxas de aprendizado. “m_t e v_t são estimativas do primeiro momento (a média) e do segundo momento (a variância não centralizada) dos gradientes, respectivamente, daí o nome do método. Como m_t e v_t são inicializados como vetores de zeros, os autores do Adam observam que eles são tendenciosos para zero, especialmente durante os passos iniciais, e especialmente quando as taxas de decaimento são pequenas [...]," Ruder (2016) <cit.>. Eles contrabalançam esses vieses calculando estimativas corrigidas de viés para o primeiro e segundo momentos: m_t = β_1 m_t-1 + (1 - β_1) g_t v_t = β_2 v_t-1 + (1 - β_2) g_t^2 Aqui, m_t representa a estimativa da média dos gradientes e v_t a estimativa da variância não centralizada. Para corrigir o viés de inicialização dessas estimativas, são calculadas as correções de viés: m̂_t = m_t/1 - β_1^t v̂_t = v_t/1 - β_2^t Com essas estimativas corrigidas, a atualização dos parâmetros é dada por: θ_t+1 = θ_t - η/√(v̂_t) + ϵm̂_t Reiterando o que foi afirmado no início desta seção, os valores padrão sugeridos para os hiperparâmetros são β_1 = 0.9, β_2 = 0.999, e ϵ = 10^-8. O otimizador Adam é conhecido por sua eficácia em uma ampla gama de problemas de aprendizado de máquina, proporcionando uma atualização eficiente e eficaz dos parâmetros durante o treinamento de redes neurais. § METODOLOGIA Nesta seção, descrevemos o procedimento adotado para modelar e prever séries temporais utilizando redes neurais recorrentes. Utilizamos dados de contagem dos focos ativos detectados pelo satélite AQUA_M-T no bioma da Amazônia, abrangendo uma série histórica registrada desde junho de 1998 até 31 de agosto de 2024. Esses dados estão disponíveis no site do INPE (2024) <cit.>. O processo metodológico para modelar e prever essa série temporal segue práticas estabelecidas na literatura de séries temporais e aprendizado de máquina. Inicialmente, dividimos os dados em conjuntos de treino e teste para avaliar a performance do modelo, aplicando técnicas como validação cruzada para assegurar a robustez do modelo, conforme descrito por Kingma e Ba (2014) <cit.>. Após garantir que o modelo apresentava uma boa capacidade de generalização, optamos por treinar o modelo final utilizando 100% dos dados disponíveis. Essa abordagem visa maximizar a precisão das previsões, especialmente em cenários de passos à frente da última observação treinada, como indicado por Géron (2017) <cit.>. 
No contexto de deep learning, onde ajustes finos (fine-tuning) são comuns, o uso do conjunto completo de dados após validação é uma prática justificada para aprimorar o desempenho, conforme discutido por Goodfellow, Bengio e Courville (2016) <cit.>. Dessa forma, utilizamos o modelo treinado com 100% dos dados para realizar previsões de n passos à frente, garantindo que as previsões fossem baseadas na maior quantidade de informações possível. §.§ Preparação dos Dados Na preparação dos dados, adotamos uma abordagem de treino, validação e teste adaptada para séries temporais contínuas. Para garantir a eficácia da avaliação do modelo, seguimos o processo de divisão dos dados de frente para trás. Primeiramente, removemos os últimos 12 lags (meses) da série para o conjunto de teste, considerando a série completa menos esses 12 lags. Em seguida, removemos 24 lags adicionais para o conjunto de validação, o que deixou a série completa menos 36 lags para o treinamento. Embora tenhamos seguido a abordagem de divisão de dados, utilizamos validação cruzada com duas sementes para avaliar a performance do modelo. Conforme descrito por Géron (2017) <cit.>, a validação cruzada é essencial para garantir que o modelo generalize bem para novos dados. Foram utilizadas duas sementes distintas para criar dois modelos diferentes, o que permitiu avaliar a robustez e a estabilidade do modelo. A divisão final dos dados foi a seguinte: * Conjunto de Treino: Junho de 1998 até agosto de 2021; * Conjunto de Validação: Setembro de 2021 até agosto de 2023; * Conjunto de Teste: Setembro de 2023 até agosto de 2024. Essa abordagem, combinada com o uso de sementes fixas, garantiu a replicabilidade dos resultados e confirmou a consistência das generalizações para os conjuntos de treino, validação e teste. §.§ Configuração dos Modelos Foi utilizado a combinação dos modelos de redes neurais recorrentes LSTM+GRU essa arquitetura consiste em: * Camada de Entrada: Recebe os dados da série temporal em janelas fixadas previamente, que na nossa arquitetura escolhemos tamanho de 12 para que a partir de 12 meses se tenha a primeira previsão no 13° ponto. * Camada Recorrente: Para o modelo LSTM+GRU, foi configurada uma camada LSTM seguida por uma camada GRU, ambas com 256 neurônios. * Camada Densa: Uma camada densa com 256 neurônios e função de ativação ReLU. * Camada de Saída: Uma camada densa com 1 neurônio e ativação linear, fornecendo a previsão final para cada janela de entrada. Veja a figura <ref> que melhor ilustra essa configuração. A Figura <ref> ilustra uma arquitetura de rede neural que inclui as seguintes camadas: * Camada LSTM com 256 neurônios; * Camada densa com 256 neurônios; * Camada GRU com 256 neurônios; * Camada densa com 256 neurônios; * Camada densa de saída com 1 neurônio. A figura <ref> ilustra a transmissão de informações entre as camadas até a saída final e não aborda o funcionamento de dropout ou funções de ativação. A explicação da arquitetura é a seguinte: Cada entrada X_t no tempo t é inicialmente processada pela camada LSTM composta por 256 neurônios. A saída dessa camada LSTM, com dimensão (1 × 256) — considerando o nosso trabalho com uma única variável no tempo (focos ativos), resulta em (1 × 256) — é então passada para uma camada densa com 256 neurônios, resultando novamente em uma saída de dimensão (1 × 256). Esta saída é usada como entrada para a camada GRU, que também tem 256 neurônios. A saída da camada GRU é processada por outra camada densa com 256 neurônios, mantendo a dimensão (1 × 256). 
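As an illustrative Keras sketch of the architecture just described (the final one-neuron output layer is detailed in the sentence that follows), a possible definition is shown below. The layer sizes, the 12-lag input window, and the Adam learning rate of 0.001 follow the text; the choice of MAE as the training loss, the ReLU activations on both dense layers, and the use of return_sequences between the recurrent layers are assumptions about one reasonable way to wire these layers, not the study's exact implementation.

```python
from tensorflow import keras
from tensorflow.keras import layers

WINDOW = 12   # 12 monthly lags per input block, one variable (active fire counts)

def build_lstm_gru_model():
    model = keras.Sequential([
        keras.Input(shape=(WINDOW, 1)),
        layers.LSTM(256, return_sequences=True),   # keep the sequence for the next recurrent layer
        layers.Dense(256, activation="relu"),
        layers.GRU(256),                           # collapses the sequence to a single vector
        layers.Dense(256, activation="relu"),
        layers.Dense(1, activation="linear"),      # single-step forecast
    ])
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001),
                  loss="mae", metrics=["mae", "mse"])
    return model

if __name__ == "__main__":
    build_lstm_gru_model().summary()
```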
Finalmente, essa saída é alimentada na camada densa de saída com 1 neurônio, resultando em uma previsão única para o próximo ponto da série temporal. Para ilustrar o funcionamento, considere um bloco de dados com 12 valores, onde a entrada X_t é o vetor de valores de 1 a 12. A previsão é feita para o 13º valor. O processo é repetido ao mover a janela de entrada, de modo que o segundo bloco será de 2 a 13, e a previsão será para o ponto 14, e assim por diante até o final da série. Essa abordagem permite calcular métricas de erro, como a diferença entre as previsões e os valores reais da série. O erro médio é então utilizado para avaliar a convergência do modelo a cada época de treinamento, onde uma época é definida como o ciclo completo de treinamento através de todos os pontos da série temporal. O otimizador Adam foi utilizado para ajustar os parâmetros do modelo, com a taxa de aprendizado fixada em 0,001 para o modelo combinado de LSTM+GRU. §.§ Treinamento e Avaliação dos Modelos Os modelos foram treinados utilizando a linguagem de programação Python Software Foundation <cit.>, com as bibliotecas Scikit-learn <cit.> e Keras <cit.>, 2015. Cada modelo foi treinado por 1000 épocas, e as métricas Erro Absoluto Médio (MAE) e Raiz do Erro Quadrático Médio (RMSE) foram utilizadas para avaliar o desempenho. Para cada época, o modelo gerou previsões utilizando uma técnica conhecida como janela/bloco deslizante (ou sliding window). Este procedimento funciona da seguinte forma: 1. Janela de Entrada: Definimos uma janela de 12 lags, ou seja, o modelo usa os dados dos 12 períodos anteriores para prever o valor do próximo período. Por exemplo, se temos dados mensais e a janela é de 12 lags, o modelo usa os dados dos últimos 12 meses para prever o valor do mês seguinte. 2. Deslizamento da Janela: Após gerar uma previsão para o próximo período (o 13º), a janela é deslocada uma posição para frente. Isso significa que a previsão é feita usando os dados dos períodos de 2 a 13 para prever o 14º período. Esse processo é repetido até que todas as previsões sejam feitas para o restante da série temporal. 3. Avaliação das Previsões: As previsões geradas para cada época são comparadas com os dados reais da série temporal. Para avaliar a precisão das previsões, utilizamos as métricas MAE e RMSE. O MAE calcula a média dos erros absolutos das previsões, enquanto o RMSE calcula a raiz quadrada da média dos erros quadráticos. Os parâmetros iniciais dos modelos foram definidos utilizando uma distribuição de probabilidade específica, a distribuição normal de He (He normal). Essa distribuição é definida com média zero e desvio padrão √(2 / n), onde n é o número de unidades na camada de entrada. A escolha dessa distribuição ajuda a garantir uma inicialização adequada dos pesos, facilitando o treinamento eficaz de redes neurais profundas, conforme os Keras Developers (2024) <cit.>. Os dados foram divididos da seguinte forma: * Treino: Junho de 1998 até agosto de 2021; * Validação: Setembro de 2021 até agosto de 2023; * Teste: Setembro de 2023 até agosto de 2024. Utilizamos duas sementes distintas para garantir a replicabilidade dos resultados e a estabilidade das métricas de desempenho. O modelo que apresentou o menor erro médio nas métricas MAE e RMSE durante o treinamento foi selecionado como o modelo final. §.§ Como as Previsões são Calculadas? 
As previsões foram realizadas após o treinamento completo dos dados de focos ativos registrados na região da Amazônia, Brasil, disponíveis no site do INPE (2024)<cit.>, abrangendo o período de junho de 1998 até agosto de 2024. Após o treinamento, utilizamos a função predict do pacote Keras <cit.>, para gerar as previsões. O processo envolveu o uso do modelo treinado, que já contém todos os parâmetros otimizados e ajustados. O modelo, armazenado e salvo como o melhor obtido durante o treinamento, é utilizado com a função predict, que é chamada como modelo.predict(). Este modelo foi treinado com uma variável e um bloco de tamanho 12. A função predict segue a sequência dos dados, incorporando as previsões anteriores para gerar novos resultados, adicionando esses resultados à série temporal e prevendo o próximo ponto. Para detalhar o processo: a função predict utiliza as últimas 12 observações da série temporal (o tamanho da janela deslizante), que vão de setembro de 2023 a agosto de 2024, para prever o 13º ponto, que corresponde a setembro de 2024. A abordagem de "janela deslizante" é usada, permitindo que após a primeira previsão para setembro de 2024, o modelo integre essa previsão e gere uma nova previsão para o próximo mês. No segundo passo, por exemplo, ele utiliza as observações de outubro de 2023 a setembro de 2024, agora incluindo a previsão anterior (de setembro), para prever outubro de 2024 (um dado que ainda não existe na série temporal). No terceiro passo, o modelo usa os dados de novembro de 2023 a outubro de 2024, incluindo as previsões obtidas de setembro e outubro à série temporal, e assim por diante. Esse processo continua até que todas as previsões dos 12 meses sejam realizadas. Essa abordagem assegura que cada previsão mensal se baseie nos dados históricos mais recentes, juntamente com as previsões feitas nos passos anteriores, resultando em uma modelagem robusta para séries temporais de dados de contagem, conforme descrito na literatura e referenciado nesta seção. A ilustração detalhada desse processo está apresentada na Figura <ref>. § RESULTADOS DA ANÁLISE ESTATÍSTICA Nesta seção, apresentamos uma análise descritiva da série temporal de focos ativos na Amazônia, com ênfase na média, desvio padrão, variância e nos valores máximos e mínimos registrados ao longo dos anos, veja a Tabela <ref>. A análise foi realizada cuidadosamente, aplicando técnicas de análise de dados para identificar os pontos mais extremos de cada ano. Nosso objetivo é oferecer uma visão clara e direta desses valores, evitando a complexidade que seria introduzida por uma tabela detalhada. Ao invés disso, optamos por uma representação gráfica que facilita a visualização e compreensão dessas estatísticas importantes. A série temporal de junho de 1998 até agosto de 2024 apresenta dados mensais do total de focos ativos registrados pelo satélite de referência a cada mês. Como vemos na figura <ref>, os meses de agosto e setembro foram consistentemente registrados como aqueles com o maior número de focos ativos durante esse período de mais de 20 anos. A figura <ref> apresenta os pontos extremos de focos ativos de cada ano, enfatizando a sazonalidade existente na série temporal da Amazônia. Desde 1998 até 2024, observa-se que os maiores índices de focos ativos ocorrem consistentemente nos meses de agosto e setembro, enquanto os menores índices são registrados no primeiro semestre, principalmente nos meses de janeiro, fevereiro, maio e abril. 
Para explorar gráficos detalhados e obter uma visualização interativa da série histórica de focos ativos na Amazônia, você pode escanear o QR Code abaixo. Este QR Code direcionará para um aplicativo desenvolvido com o software R Core Team (2024) <cit.> e o Shiny <cit.>. A aplicação permite uma análise dinâmica e interativa dos dados, possibilitando a visualização detalhada de qualquer mês específico de janeiro a dezembro, bem como a análise dos pontos máximos e mínimos registrados ao longo da série histórica. § RESULTADOS DA ANÁLISE DE TREINAMENTO DO MODELO DE APRENDIZADO DE MÁQUINA Nesta seção, apresentamos e discutimos os resultados obtidos para os modelos de redes neurais recorrentes avaliados, especificamente a abordagem mista que combina LSTM e GRU. Vamos explorar o desempenho do modelo, utilizando métricas de avaliação, como Raiz do Erro Quadrático Médio (RMSE) e Erro Absoluto Médio (MAE), tanto para os conjuntos de treino quanto para os de teste. Cada modelo foi avaliado isoladamente, e os resultados obtidos serão apresentados em tabelas detalhadas. Utilizaremos essas métricas para comparar o desempenho dos modelos e determinar qual deles apresenta os menores valores de erro. O modelo que demonstrar melhor desempenho, com os menores valores de erro, será selecionado como o mais eficaz para a tarefa de previsão em questão. As implicações dos resultados serão discutidas, incluindo a análise da variação das métricas com diferentes sementes e configurações. Esta análise fornecerá uma visão abrangente da eficácia de cada modelo, permitindo a escolha do modelo mais adequado para realizar as previsões necessárias com base nos critérios estabelecidos. § RESULTADOS DO MODELO LSTM + GRU A Tabela <ref> e a Figura <ref> relacionadas ilustram o desempenho do modelo LSTM + GRU para os conjuntos de treino e teste, utilizando diferentes sementes de inicialização. As métricas de Erro Quadrático Médio (RMSE) e Erro Absoluto Médio (MAE) são fundamentais para avaliar a precisão das previsões do modelo. O Erro Absoluto Médio (MAE) mede a média das diferenças absolutas entre os valores reais e as previsões, e é definido pela seguinte fórmula: MAE = 1/n∑_i=1^n| y_i - ŷ_i |, em que n é o número total de meses, y_i são os valores reais de cada mês, e ŷ_i são as previsões do modelo para cada mês. Nesse contexto, conseguimos obter para cada época todas as diferenças entre os valores mensais reais e o que o modelo LSTM+GRU prevê para cada um desses meses. Depois, extraímos a média dessas diferenças, que nada mais é do que a soma dessas diferenças absolutas dividida pelo total de meses (n). Já o Erro Quadrático Médio (RMSE) leva em consideração o quadrado dessas diferenças, penalizando erros maiores de forma mais severa, e é dado por: RMSE = √(1/n∑_i=1^n( y_i - ŷ_i )^2), em que n é o número total de meses, y_i são os valores reais de cada mês, e ŷ_i são as previsões do modelo para cada mês. O RMSE calcula a raiz quadrada da média dos quadrados das diferenças entre os valores reais e as previsões. Isso penaliza erros maiores de forma mais intensa, fornecendo uma medida que reflete a magnitude dos erros em um nível mais severo do que o MAE. Essas métricas são utilizadas para selecionar o melhor modelo entre os 1000 treinamentos realizados. Em cada época, a diferença entre os valores reais e as previsões é calculada, e o modelo que apresenta a menor diferença média é escolhido como o melhor. 
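As a small, self-contained illustration of the two selection metrics defined above, the following sketch computes MAE and RMSE for a pair of observed and predicted series; the numbers are placeholders, not the study's fire-count data.

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error between observed and predicted monthly counts."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.mean(np.abs(y_true - y_pred))

def rmse(y_true, y_pred):
    """Root mean squared error; larger errors are penalized more heavily."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

if __name__ == "__main__":
    y_obs = [1200, 3400, 9800, 15000, 7300]   # illustrative observed counts
    y_hat = [1500, 3000, 9000, 17000, 6900]   # illustrative model predictions
    print("MAE :", mae(y_obs, y_hat))
    print("RMSE:", rmse(y_obs, y_hat))
```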
Observa-se que, embora haja uma diferença significativa nas previsões, especialmente no RMSE, o MAE nos fornece uma diferença média de pouco mais de 3000 focos ativos, em comparação a uma média histórica de 9000 focos. Isso sugere que, apesar de não ser extremamente preciso, o modelo ainda consegue capturar a tendência geral, com um erro que representa aproximadamente um terço da média histórica. A Figura <ref> ilustra a validação cruzada realizada com duas sementes distintas, 2024 e 2025, comparando dois conjuntos de treino e teste. Esta abordagem é fundamental para avaliar a capacidade de generalização do modelo em séries temporais, onde a sequência dos dados é extremamente importante. Ao utilizar sementes diferentes para os conjuntos de treino, teste e validação, garantimos que a validação cruzada considere variações na inicialização do modelo e na estimação dos parâmetros, permitindo uma avaliação mais robusta da generalização. Em séries temporais, a fixação das sementes ajuda a assegurar que a performance do modelo é consistente e não apenas um reflexo de uma configuração específica. Os resultados obtidos para ambas as sementes são bastante similares, indicando que o modelo generaliza bem e segue de forma confiável a tendência geral dos focos ativos na Amazônia. Essa similaridade sugere que o modelo tem uma boa capacidade de generalização e que, ao treinar com os dados completos, ele poderá fornecer previsões na direção correta para identificar períodos com maiores e menores tendências de focos ativos. A Figura <ref> mostra a comparação da perda (Loss) dos conjuntos de treinamento e validação para duas sementes diferentes: 2024 e 2025. A perda (Loss) é uma métrica que representa o erro médio entre os valores reais e as previsões do modelo em cada época durante o treinamento. A fórmula da perda (Loss) é diretamente relacionada às métricas de erro absoluto médio (MAE) e raiz do erro quadrático médio (RMSE), discutidas nas Equações <ref> e <ref>. Esses gráficos são fundamentais para a análise do desempenho do modelo. A perda (Loss) demonstra como os parâmetros do modelo são ajustados ao longo do tempo para minimizar o erro. O ponto onde a perda é minimizada indica a melhor configuração dos parâmetros do modelo para a previsão. A análise detalhada da perda nos conjuntos de treinamento e validação, conforme descrito na Seção <ref>, revela a eficácia do ajuste do modelo. Observando a perda ao longo das 1000 épocas, é possível avaliar se o modelo está generalizando bem para novos dados, o que é essencial para prever a tendência dos dados. Portanto, esses gráficos ilustram a evolução da perda e fornecem detalhes sobre a capacidade da convergência dos parâmetros a cada modelo de se ajustar aos dados, refletindo diretamente na qualidade das previsões e na efetividade do treinamento realizado. §.§ Treinamento para os dados completos A Figura <ref> apresenta o treinamento do modelo utilizando o conjunto completo de dados, abrangendo o período de junho de 1998 até agosto de 2024. Esta abordagem permite avaliar a performance do modelo em toda a série histórica, seguindo as melhores práticas para séries temporais em machine learning, onde é crucial treinar o modelo com todos os dados disponíveis para realizar previsões futuras. Os gráficos <ref> e <ref> mostram a comparação entre os dados reais e as previsões para o conjunto de treino, utilizando as sementes 2024 e 2025, respectivamente. Estes gráficos ilustram a capacidade do modelo em capturar a tendência geral dos dados ao longo do tempo. 
Plots <ref> and <ref> present the model's performance metrics for seeds 2024 and 2025, namely the Root Mean Squared Error (RMSE) and the Mean Absolute Error (MAE), respectively. These metrics are fundamental for assessing the accuracy of the model's predictions, with the RMSE providing a measure that penalizes larger errors and the MAE offering an overview of the average differences. Finally, plots <ref> and <ref> illustrate the evolution of the loss during training for seeds 2024 and 2025. The loss is an important metric representing the average error between the observed values and the model's predictions at each training epoch. These plots show how the loss varies across epochs, reflecting the continuous adjustment of the model parameters to minimize the error. Together, these analyses provide a comprehensive view of the performance of the model trained on the complete dataset, confirming the effectiveness of the adopted approach and the model's ability to generalize well for future predictions. §.§ Forecasting From the trained model, we identify the best model, i.e., the point with the lowest metric value, as illustrated in Figures <ref> and <ref>; this point represents the parameter configuration that yielded the lowest Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) values. This optimized model was used to generate forecasts 12 months ahead. Figure <ref> shows the time series from June 1999 to August 2024, and the generated forecasts extend from September 2024 to August 2025. For further details on the forecasting process and its implementation, see Section <ref> of this material. § CONCLUSION This study evaluated the performance of recurrent neural networks combining the Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) architectures in forecasting active fire hotspots detected by the AQUA_M-T satellite in the Amazon. The analysis showed that this combination is effective at capturing complex temporal patterns, with the final model choice based on the Mean Absolute Error (MAE). The forecasts generated by the combined models showed high performance, especially for time series with strong seasonality, as is the case for the historical series of active fire hotspots in the Amazon. This work also highlighted the importance of careful model configuration and of training with cross-validation to ensure good modeling practices. In addition, it simplified the understanding of neural networks and of the learning process, making the topic accessible to students and researchers without prior experience. The descriptive analysis identified the months of maximum and minimum incidence of active hotspots from 1998 to 2024, providing strategic insights into these events. Finally, the study reinforces that, when modeling univariate count data, the use of appropriate techniques and reliable references makes it possible to structure a highly skewed dataset and to train models that converge to forecasts following the general trend of the time series, regardless of the scale of the data. [title=References]
http://arxiv.org/abs/2409.03072v1
20240904204812
Space to Teach: Content-Rich Canvases for Visually-Intensive Education
[ "Jesse Harden", "Nurit Kirshenbaum", "Roderick Tabalba", "Ryan Theriot", "Michael Rogers", "Mahdi Belcaid", "Chris North", "Luc Renambot", "Lance Long", "Andrew Johnson", "Jason Leigh" ]
cs.HC
[ "cs.HC" ]
§ INTRODUCTION Content-rich canvases are virtual surfaces (often infinite) containing rich content arranged in spatially meaningful ways. Typically implemented on digital whiteboards, content-rich canvases provide a platform for collaborative learning and engagement. These technologies held a special role during the COVID-19 pandemic, when educators of all levels searched for ways to augment remote learning. Indeed, researchers documented many success stories of enhanced student-led learning activities (e.g. <cit.>) using such tools, especially for online classes. While learning has largely returned to in-person instruction, the use of shared digital canvases for co-located instruction sparks new opportunities, especially when combined with increasingly prevalent large displays. Indeed, we have long proposed using large displays for collaborative work and education <cit.>. Drawing on extensive experience teaching higher education courses with SAGE3 <cit.>, a content-rich canvas software whose development team we are part of, in classrooms with large displays, we identified patterns that show how content-rich canvases and large displays can augment education with the "Space to Teach" they provide, especially in classes that depend heavily on visuals and media, such as data science, game design, human-computer interaction, and, of course, data visualization. Visualization courses in higher education convey current theory in perception and data visualization, train students to critique and design visualizations, and cover practical aspects of programming frameworks for visualizations and/or relevant commercial tools <cit.>. With these broad topics in mind, many courses employ hands-on and collaborative activities in addition to traditional lectures. Visualization researchers and educators <cit.> identified a need for further research on educational methods that can efficiently help students develop the necessary skills for visualization work, as well as on novel tools and environments that are beneficial for visualization education. We posit that using content-rich canvases like SAGE3 in classrooms with large displays (see Figure <ref>) provides an edge for hands-on, collaborative course work like that in visualization and other visually-intensive courses. Kirshenbaum et al. <cit.> focused on content-rich canvases created via parallel content sharing during work meetings; they highlighted advantages of content-rich canvases in promoting information continuity, information clustering, and spatial memory. This work explores such advantages in higher education. Many patterns we discuss here can be mimicked with other digital whiteboard tools (e.g. Miro) on large displays, although SAGE3 has some notable advantages, which are discussed in the related works section. We aim to show that content-rich canvases work well with the nature of the material in visually-intensive courses like visualization, which usually relies on heavy use of images and interactive applications that showcase the variability and complexity of visualization examples. 
Our examples show content-rich canvases also support both instructor and student-led activities; both groups can freely share content, present, re-arrange, and sketch on such a canvas while engaging in educational activities; content has permanence (information continuity), relevancy (information clustering) and spatiality (spatial memory) characteristic of content-rich canvases. In this paper, we present six example patterns (Burst of Content, Side-by-Side, Content in Advance, Virtual Board Scroll, Bespoke Spaces, and Spatial Rearrangement), followed by student feedback evaluations and a discussion about the advantages, limitations, and future directions of using content-rich canvases and large displays in higher education. § RELATED WORKS To provide background for this work, we present some work relating to content-rich canvases and large displays, including benefits of large displays, online boards in education, and SAGE3. We then discuss challenges to visualization education and current efforts in that field. §.§ Large Displays & Content-Rich Canvases The benefits of using large, high-resolution displays are well researched. They can positively influence spatial performance <cit.>, visualization and navigation tasks <cit.>, sense-making <cit.>, data analysis <cit.>, and daily work <cit.>. However, there are many challenges when it comes to controlling and working from a large display <cit.>. Due in part to the COVID-19 pandemic, higher education saw a renaissance for online collaborative whiteboard platforms like Miro[https://miro.com/], ConceptBoard[https://conceptboard.com/], ExplainEverything[https://explaineverything.com/], Google Jamboard[https://jamboard.google.com/], Sketchboard[https://sketchboard.io/] and more <cit.>. Many publications describe experiences and study results from this period (e.g. <cit.>). These works show that using whiteboard platforms encourage collaboration and creativity, but have a learning curve. Modern whiteboard software supports file sharing (e.g. images and PDF files), use of colorful notes similar to physical Post-it notes, drawing features, and more. More features increases the richness of the resulting boards, hence our term of content-rich canvases; at the same time, tool mastery becomes more difficult. While some patterns described in this paper can be implemented on other online whiteboards, we emphasise SAGE3 since it differs from other whiteboard platforms in three significant ways: it was designed with large displays and varying display sizes in mind as seen in Figure <ref>, it is open-source software developed as an NSF funded research project and thus is, unlike commercial products, open to modification, and it has a backend designed to support computational notebook cells, a feature that was not used during the educational activities reported in this paper, but is likely to be prominent in future use. Finally, SAGE3 features many other applications, as seen in Figure <ref>, including a WebView app that enables internet browsing within a board, PDF viewer that can be extended to view multiple pages side by side, and screenshare (simultaneous multiple screenshares are supported). More information about SAGE3 is available in SAGE3's public repository <cit.>. §.§ Visualization Education Teaching and learning the skills of creating and analysing visualization spans a variety of topics, from designing visualization education tools for young children <cit.> to evaluating the relevance of existing tools to visualization courses <cit.>. 
For higher education, some challenges were verbalized by the ACM SIGGRAPH Education Committee in 2013 <cit.>. Educators at that time noted that: * there is an increase in nontechnical students learning visualization due to its relevancy to the modern workforce, * Hardware technology and visualization algorithms have progressed and may affect computer science students taking visualization courses, * visual analytics are increasingly used in practice, leading to a strong need for highly interactive visualization, * user-centered design and evaluation of visualizations is necessary. More recently, Bach et al. articulated challenges for visualization education<cit.>. They arrange these challenges around the themes: people, goals and assessments, motivation, methods, environment, materials, and change. Largely, many of the issues articulated in 2013 remain and are even amplified; A boom in data science education leads to a very varied body of students interested in visualization, the class settings have developed beyond the simple in-person lectures to online and hybrid classes, and the technological advancements lead to more complex visualization systems to learn, design, and most importantly, adapt to. Bach et al. raised 43 specific research questions based on the then-current state of visualization education. We would like to highlight the following questions from Bach et al.: * In their discussion on Methods and the need to foster core skills around visual representation and interaction, the authors ask (Q21-Q23) for ways to develop such skills, and doing so while leveraging play and/or using sophisticated novel tools. * In their discussion on Environment and the need for providing environments for hands-on, creative, and collaborative work, the authors ask (Q27, Q28) about affect and affordance of specific environments that can support data visualization education. We propose content-rich canvases like SAGE3 are a tool that can be used to answer these questions and the patterns described below provide opportunities for non-conventional activities appropriate for visually intensive material due to the affordance of space unique to an environment using content-rich canvases and large displays. Of course, there are many outside-the-box educational explorations in visualization. Beasley et al. <cit.> investigate how integrating peer feedback throughout the semester can improve students' engagement in visualization classes. Boucher et al. <cit.> identify the potential of using educational data comics, discussing how it supports visualization activities while appealing to a variety of audiences. Boucher and Adar <cit.> describe their workshops to help students through the visualization design process with inspiration, layout, and domain specific cards. Adar and Lee-Robbins <cit.> devised a framework around a visualization class' final project that overcomes some issues of traditional project assignments like the need to clean data and the difficulty of evaluating project outcomes by using an engaging vehicle of a game called Roboviz. There is a continued need for innovation in visualization education, and this paper helps fill this niche. § CONTENT-RICH CANVAS USAGE PATTERNS IN EDUCATION Classes (in higher education and otherwise) traditionally revolve around a single instructor-controlled view, which often consists of a slide deck presentation or the instructor screen-sharing their personal view. 
When students are asked to take part in the presentation, a ritualistic "passing of the cord" is performed from student to student as each gets access to the communal view (usually, by plugging their computer into a display) on their turn. This conventional, sequential, single-view mode of content sharing can be improved upon with the space provided by a content-rich canvas and large display setup. While teaching a variety of courses on topics such as visualization, VR, data science, and game design, members of our team have periodically written notes regarding how they use SAGE3 in their classrooms. These notes were analyzed by the authors using a grounded theory approach to identify several patterns and strategies of usage in the classroom. These patterns are not mutually exclusive, and the subset of patterns used in any specific course is influenced by the course topic and the instructor’s personal style of teaching. §.§ Elements of the Patterns As mentioned in prior work <cit.>, three properties of content-rich canvases are permanence, or information continuity, relevancy, or information clustering, and spatiality, or spatial memory. Below, we describe the base elements of orientation, spread, density, and time, which play a role in our patterns. We also add the element of lead actor, which nods to the education-specific hierarchy of instructor and students. Figure <ref> summarizes these elements. Orientation When creating a content-rich canvas, the placement of content may result in creating a mostly horizontal layout or a mostly vertical layout. Modern displays are usually wider than they are tall, so the horizontal layout is prominent. Some content could be mostly vertical in nature, such as data tables or linear computational notebooks. Vertical layouts may play another role - one that indicates the passage of time; see the virtual board scroll pattern below. Spread Multiple units of content (e.g. apps in SAGE3) can be spread across the space in a random, regular, or semantic layout. The random layout is characteristic of multiple collaborators adding content (either by uploading files or starting applications on the board) at approximately the same time without the help of bespoke spaces (see pattern below). Regular layouts are usually started by one user trying to establish a specific structure for an activity. Semantic layouts are formed when users move content around the board to indicate relationships between units of content (see the spatial rearrangement pattern). Density Layouts can be sparse or dense. Sparse boards can contain as little as a single piece of content (for example, a screen-share or a PDF of a deck of slides) or some other small number (for example, a PDF with the syllabus and a stickie with the assignments schedule). The dense boards have many more units of content on them; this can be useful for many activities, but can also be overwhelming and "busy" at times. The side-by-side pattern usually creates a sparse layout and the burst of content pattern is likely to be on the dense side. The advantage of an infinite board with the large display demarcating the current area in use is that a board can be sparse and dense in different areas of the canvas and by using navigation the instructor can control the extent of the content in focus. This is used extensively in classes (see the virtual board scroll pattern). 
Time The element of time is relevant in a content-rich canvas where we can expect time to affect the use of space (see "Traces of time through space" <cit.>). In the educational setting, we consider the key time points of before, during, and after class. Intuitively, the period of time during class is when the board is under heavy use, but since the content persists there is great value with preparing content in advance, which is one of the design patterns below, and material can be downloaded from a board or reviewed as needed after the class. Lead Actor Though we are not drawing a line between the instructor using the board and the students using it, educational activities are usually led by an instructor (i.e. lecture, demonstrating software, showing and critiquing examples, etc.) or by one or more students (i.e. project presentation, brainstorming in groups, solving problems in class etc.) The extent of freedom given to students during class is dependant on the instructor and their chosen activities. §.§ Patterns In this section we describe recurring patterns of use that we came across while teaching visually intensive classes using the content-rich canvas software SAGE3. §.§.§ Burst of Content We frequently find SAGE3 useful for brainstorming activities; for example, students in a video game design class were asked to break into small groups, with each group creating a board within the classroom's room that was used to discuss project ideas based on given criteria. Students could use their board how they wished, organizing reference documents and stickie notes. The instructor visited the board of each group and provided feedback. The students could share their board on the large display when asked to present their ideas. This is a common form of activity seen in classes that require group-work and employ a design process. In terms of the elements discussed above, this would be a student-led content-rich canvas that is created either during or after class, and is likely to take the form of a dense, semantic layout. However, this pattern goes beyond group brainstorming. The Burst of Content pattern indicates that many sources of content are shared at the same time; this can be useful when the instructor is posing a question or a task to all the students, and all the students are expected to respond more or less simultaneously. This would usually create a content-rich canvas that is initially led by the instructor with a random, dense layout formed as students post their responses, and later turned to a canvas with semantic layout either at the hands of the instructor or the students. Overall, this pattern works well for ideation, group discussion, simulation/play, and compare/contrast activities. §.§.§ Side-by-Side Often, teaching materials are incremental or referential: the slide about drawing a graph is built on the content of the slide containing an adjacency matrix, a slide about techniques for zooming relies on the definition of viewports a few slides back, or an instructor would like to compare the code for map() and reduce() which appear on different slides. Normally, instructors find themselves duplicating information between slides or flipping back and forth; a content-rich canvas with a large display eliminates this need. 
The advantage of content-rich canvases like SAGE3 is collaborative side-by-side placement of teaching materials: a video next to an equation it refers to, live screenshare next to step-by-step instructions, or multiple slides side-by-side for easier referral. This latter feature is enabled by SAGE3’s built-in PDF viewing app, which supports multi-page view and the physical navigation afforded by the large display. Some instructors use a spread of as many as 4 consecutive slides from their slide deck. Any app can also be duplicated when the instructor wants to show side-by-side different elements from the same source. This pattern can be used by an individual, be it an instructor or a student, or by multiple students working together. It naturally leads to horizontal layouts and often takes a sparse form. This pattern works well for narratives (i.e. presenting material in a linear order) and for compare/contrast activities. §.§.§ Content in Advance One author used to create a multi-tabbed browser window with each tab directed at an example to be explored during class. This method of teaching felt awkward; despite preparing in advance, they had to search for the "correct" tab when they reach the relevant material, and switching tabs would put away the lecture slides, making it harder to relate lecture content to the example. This issue can be solved by preparing content in advance on a content-rich canvas. This pattern supports a flexible sequence for presentation of content, and a presenter can use this pattern in the following way: before the presentation starts, the presenter loads the board with content, such as slides, webviews, and videos. While presenting, the presenter highlights content as needed. Spreading materials in this way gives students access to more modalities of the material; students can easily download files from the board, be it PDF files, images or video files. With the SAGE3 webview, the presenter can quickly switch between talking about an example to interacting with an example in a live webview within the board. Students can follow the interactive examples on the display or use the webview to launch the web pages in their own browsers. The main element relevant to this pattern is time: there are no limitations on the kind of layout the presenter creates, and the presenter can be either the instructor or a student. This pattern works well for narratives (linear or non-linear) and for compare/contrast activities. §.§.§ Virtual Board Scroll Many large classrooms contain vertically sliding blackboards, where instructors write their notes on a "fresh" board and slide it up when the board was filled revealing another layer of sliding board. When that one was filled, up it goes, and the previous board was brought down, cleaned, and used again for notes. This was a physical way to maintain persistence of the notes, at least for a while, giving students more time with them. The instructor didn't need to destroy old content because they could "scroll" in a way that gave them a new space to fill with new content, which is the essence of this pattern. While all content remains on the board, and students can access them as they wish, the instructor sets the large display's viewport into the board, virtually scrolling the material to their current focus. 
This pattern is mostly instructor led (unless a student is presenting), occurs during class, and can use horizontal and/or vertical space, if that verticality is divided into multiple semantically relevant horizontal displays the presenter would like to scroll through. §.§.§ Bespoke Spaces When facing an infinite canvas, users need a starting point; when using a large display, users may not know how to position their content, assuming that content does not cover the full screen, so it makes sense to divide space into areas dedicated to specific uses. In addition to showing their slide deck on the large display, instructors may have a part of the visible viewport show the agenda for the class, and another part for student questions. This approach relates to general trends of using abundant display real-estate, like how PC users with multiple displays tend to assign displays roles like "reference" and "active work"). Another way to use bespoke spaces is to have a section of the board for students to post content; this can be used with regularly placed and labelled stickie notes so every student knows exactly where on the board to post. Students can also create and use their own boards, as mentioned in the brainstorming scenario above, forming an organizational separation from all other material rather than a spatial separation. Using regular or semantic layout, this pattern is useful for students and instructors any time during or outside of class, and is appropriate for narrative, ideation, simulation/play, and compare/contrast activities. §.§.§ Spatial Rearrangement We keep going back to the importance of using the space, looking at different orientations, densities, and meaningful or random ways to arrange the spread of content on the board. This pattern draws attention to the act of moving content around, which can be done by both instructor and students at any time. Often, rearranging content is done to form a specific spread, such as changing a random spread into a semantic or regular one. Rearrangement can also be used to denote relevance or irrelevance of specific content; when content is irrelevant to the current discussion, the app window containing it can be closed or be moved away from the area in focus, out of sight but not necessarily out of mind, where it remains a referent available for access when needed. Rearrangement may also be needed to transition from one orientation to another, such as when the display being used as a communal wall is changed or the board's window is resized. This pattern is the bread and butter of any extensive use of content-rich canvases and can be used in any activity. §.§ Examples In this section we share some examples of activities and their accompanying boards taken directly from a visualization class delivered by one of the authors; the examples were brought up during an interview with them (see further detail in the Section 5.2.2). Brainstorming is very commonly performed on large displays, with one such activity given in the visualization class: Given a dataset, students were asked to come up with thoughtful questions they could answer with visualizations. After the questions were posted, the instructor and student clustered the questions based on similarity to enable exploration and analysis of the design space for visualizations of said dataset. The resulting board is shown in Figure <ref>. This is a simple use-case for the burst of content and spatial rearrangement patterns. 
Another activity using these patterns was given during another class; the students had to sketch a graph based on a provided adjacency matrix, a common task in visualization classes, such as shown by Kerren et al. <cit.>. The sketches were uploaded to the board and the instructor and students clustered them based on various characteristics such as graph layout. The resulting board is shown in Figure <ref>. Other examples of the burst of content pattern are shown in Figures <ref> and <ref>. In the former, students are given a dataset, explore it and sketch (using the annotation tool) what kind of chart they would create for it. The latter shows a more complex exercise: creating a collaborative multi-view visualization in which students had to post visualization around a theme, US election outcomes, and organize them according to a dimension, the election year. Together they created and analysed a small multiples example of visualization. An example of simulation activity that uses burst of content and spatial rearrangement can be seen in Figures <ref> and <ref>. During a class on Multi-dimentional Scaling (MDS), students received a dataset on animals for their review, and each student was assigned an animal from the data; on a new board, they perform an activity simulating MDS. Students place an image of their animal on the board, and then move their image closer to similar images and further away from dissimilar images, encouraging them to wrestle with the relationship between distances and similarity in two dimensions instead of higher dimensions. One of the interviewee's favorite methods uses the bespoke spaces pattern effectively. With an ordered array of stickies on the board, they assign students a safe space to work and track their class participation, such as shown in <ref>. During the first class, the instructor arranged stickie notes on a SAGE3 board to mimic the seats in class and asked students to write their name on the stickie corresponding to their seat; later, students found a visualization of interest and shared it with the class by placing their found image under their stickie. An example of a class that is more lecture-centric is shown in Figure <ref>, which is a good example of the side-by-side pattern along with content in advance. In this class the instructor focused on TreeMaps. Before class, the instructor prepared examples in the form of interactive apps that demonstrate their slide-deck material, and during class they would seamlessly transition to these interactive examples. These examples show glimpses of other content, meaning virtual board scroll was used to isolate the activity's area on the board. § USER FEEDBACK While our usage patterns are the main contribution in this paper, we would be remiss to not include students' feedback for SAGE3 and SAGE3-based classes. This section covers an evaluation survey administered in two visually intensive classes at the University of Hawaii and another evaluation survey administered in two visualization classes at Virginia Tech. This latter evaluation was followed by an interview session, where two of the authors interviewed the instructor, who is also an author of this paper. §.§ Evaluation 1 In Spring 2023, we distributed a survey among students of two classes that used SAGE3 for in-class instruction; both involved game design projects in small groups. One class was a Junior level class called "Computational Media System" and the other was a Sophomore level class called "Video Game Design and Development." 
Both classes met in-person on the University of Hawaii campus, and were taught by different instructors from the SAGE3 group. Both instructors used SAGE3 as the substrate for delivering lectures and student activities, although the university wide learning management system, Laulima, was also used for content delivery and assignment submissions. Surveys for evaluating and improving SAGE3 were brought up in class near the end of the semester, and it was clearly explained that submissions were anonymous and that there would be no repercussions for not submitting or for expressing negative views on SAGE3. 26 students from the computational media class and 23 students from the game design class submitted the survey, making a total of 49 respondents. The survey had two sections: the first gauged their experience with various SAGE3 features, and the second prompted attitudinal responses regarding the use of SAGE3. Following the Likert scale questions of the survey, students could write comments elaborating on their views. §.§.§ Results The survey responses for the first section on experience with features are summarised in Figure <ref>, and the responses for the second section on their attitude towards SAGE3 are summarised in Figure <ref>. We see a higher concentration of feature use for basic features like entering or navigating a board; for more advanced features, like annotation, SAGECells, the follow feature, and app duplication, students admitted to not using them, and were often unaware of them. The attitudinal questions mixed statements with negative and positive sentiment. Overall, results were positive. Looking at "I prefer to not use SAGE3," the 10 (20.4%) students who agreed had issues like "The biggest issue is lag/latency and speed of actions." and "make it faster? idk might just be my computer" regarding technical performance, or comments such as "Can't zoom while cursor is over any type of content" which showed some features went undiscovered by some students. Comments from the 14 (28.5%) students that disagreed with the statement "I prefer to not use SAGE3" included: * "good for collaboration, easy to have things visually present for ease of access" * "I like how it's really good for brainstorming" * "A much bigger screen and no switching between applications" * "It is more interactive compared to just simple powerpoint presentations. You can download material for the class directly from the board" * "SAGE3 made my learning experience easier for my teammates to collaborate. We also get to have fun and share different funny memes." The students that did not want to use SAGE3 used, on average, 14.2 of the 25 features we surveyed them on, while those that did want to use SAGE3 used 16.7 of its features on average. We don't know whether this difference reflects correlation or causation. The features whose use differed most between the groups are: navigation, switching pages in a PDF, using annotations, using a webview to share a URL, downloading files from the board, and using the wall outline for content placement. §.§ Evaluation 2 In Fall 2023 and Spring 2024, we surveyed students in an Information Visualization graduate class that used SAGE3 and a large display for in-class instruction and interviewed the instructor in Spring 2024 about their experiences. The class had collaborative projects, interactive discussions, and exercises designed to aid understanding of foundational information visualization concepts. 
It was conducted in-person on the Virginia Tech campus by the same instructor from the SAGE group both semesters. While SAGE3 was the substrate for delivering lectures and student activities, the university wide learning management system, Canvas, was also used for content delivery and assignment submissions. A new survey for evaluating and improving SAGE3 was brought up in class near the end of Fall 2023 and in the latter half of Spring 2024, with not submitting it or expressing negative views on SAGE3 having no repercussions. 20 students from Fall 2023 and 14 students from Spring 2024 submitted the survey, making a total of 34 respondents. The survey had three sections: the first briefly gauged how much they used SAGE3, the second prompted attitudinal responses on the use of SAGE3 with a big display, and the third focused on perceptions of the usability of SAGE3. Following the Likert scale questions of the survey, students could write comments elaborating on their views. §.§.§ Survey Results The survey responses for the first section on how frequently students used SAGE3 on their personal computer/laptop during class and outside class, are summarized below. The responses for the second section are summarized in Figures <ref>,<ref>, and <ref>, and the third section's responses also are summarized in Figure <ref>. All students used SAGE3 during class, but 6 (17.6%) never used it outside class. 4 of these 6 were fairly positive towards SAGE3; of these four, one felt an instructor projecting their laptop screen made for a better class experience than SAGE3, perhaps because they found using SAGE3 difficult based on their responses to the usability questions. The 2 other students thought having lots of content on SAGE3 was distracting and felt overwhelmed by the amount of content; of these two, one felt Zoom classes were a better medium than SAGE3 while the other liked SAGE3 classes more than the three alternatives given. These 2 students also found SAGE3 difficult to use and navigate in. The attitudinal questions began with a list of statements on potential benefits of SAGE3 with a large display. As seen in Figure <ref>, students tended to agree this setup could benefit students on engagement, focus, ability to participate and contribute, enjoyment, and learning. The next part of the attitudinal questions gave statements, some positive and others negative. As seen in Figure <ref>, the overall sentiment was positive, although a decent percentage of respondents at least slightly agreed that having lots of content on SAGE3 boards led to distraction (44.1%) or that they felt overwhelmed by the amount of content on a board with the large display (23.5%). After the statement parts of the attitudinal questions, respondents rated 4 usage patterns for SAGE3 with the large display. The patterns mentioned in this survey were not articulated as the more general patterns identified in this paper, although there is some relation between the questions and some of the patterns. Specifically, the first 2 questions validate the Side-by-Side pattern, while the latter 2 validate the Spatial Rearrangement and Bespoke Spaces patterns. As seen in Figure <ref>, almost every respondent rated each usage pattern as at least slightly helpful. We note that one respondent did not have a positive attitude to the patterns of dedicating space for students and clustering student contributions, explaining in the survey that "people can easily interrupt others' work. my work got deleted mistakenly by others. 
Sage3 is also not very fast and it takes time for it to update. (I would type something and it didn't show it until after a couple of minutes.)". The final part of the attitudinal questions asked respondents to compare SAGE3 on the big display with 3 other instruction mediums, as seen in Figure <ref>. 7 respondents in total (20.6%) rated SAGE3 as at least slightly worse than at least one of the three listed instruction mediums, with one of these respondents preferring all three of the listed instruction mediums over SAGE3; this respondent did not explain their ratings, unfortunately. This respondent took the class in Fall 2023, before certain improvements to SAGE3 were made. Next came the usability section, where respondents rated their agreement with a positive statement ("I found the SAGE3 user interface to be easy to use") and a negative one ("I found navigating within SAGE3 difficult"); the results of this are summarized in Figure <ref>. Overall, most students at least slightly agreed that the SAGE3 interface was easy to use, and a slight majority (52.9%) felt navigation was difficult. §.§.§ Interview Results We interviewed the Information Visualization class instructor on the following topics: comparing SAGE3 with a large display to prior teaching methods without SAGE3 or a large display, methods and activities for making the most of a large display with SAGE3, and benefits and challenges of using SAGE3 with a large display for visualization education. We summarize interview findings not already covered below. With SAGE3 and a large display, the instructor could show multiple slides, visualization examples, and more at the same time. Without such an environment, they tended towards slide decks and having many web browser tabs open, which the instructor noticed had 2 key disadvantages when compared to SAGE3 with a large display: the traditional instruction style required more virtual navigation and limited collaboration and interactivity. Specific observations include: * You can only show one item (slide, browser tab, etc.) at a time, limiting the ability to do tasks such as comparative analyses. * You waste time scrolling, changing tabs, and doing other virtual navigation methods that could be faster with physical navigation. * While doing virtual navigation (e.g. changing slides, tabs), you may forget part of what you were looking for, especially if you had to try several different tabs or slides to find the right one. * Interactive exercises may have the instructor try to show students' ideas instead of enabling students to show what they are thinking. * While some online boards like Miro or Google Jamboards give students space to sketch, such tools are limited by the smaller screen space available without a large display; this leads to virtual navigation being necessary, with all the problems that can bring. Given the expansive space a large display has and the affordances of content-rich canvases like SAGE3, our interview also covered best practices in such an environment. Some examples the instructor gave are detailed in the section on Patterns above. The instructor preferred methods that inspired collaboration via content contribution and content rearrangement and used the space not only for semantic reasons but for class administration as well, such as tracking student participation by having them post something relevant on the board in a bespoke space. 
Finally, we wrapped up with a discussion of the benefits of using SAGE3 with a large display for visualization education; the instructor noted improved student engagement and higher-level learning. They noticed students seemed much more engaged with the exercises during class and even expressed excitement after class. In previous semesters, the instructor felt they were "pulling teeth" to get students to engage; with SAGE3 and the large display, students were far more willing to talk about their ideas, contributions, and more. Furthermore, the instructor noted that the ability to collaboratively explore many example designs and wrestle with relevant theoretical questions and ideas may help enable higher-level learning and analysis of the semantics of a visualization design space. While this difference is the instructor's subjective experience, it does suggest the change in environment can provide, in their words, a "force multiplier" for learning. The instructor did note one trade-off of using SAGE3 with a large display: when many items are on a board, it becomes laggy and slow. Thus, the instructor would create a new board about every two weeks. § DISCUSSION AND FUTURE WORK Every person in our group uses a different mixture of these patterns in their classes, and this is influenced by both the instructor's preferences and the physical setups available in their classrooms. One group member has two large displays in their classroom, one at the front and one to the side. This layout makes it easy to use the Bespoke spaces pattern, with the side board relegated to "class status" content including the class syllabus, presentation schedule, and assignment due dates. The main display is used in a Side-by-Side manner to show lecture notes (given by this instructor as a web page) and example videos. Another team member relies primarily on the Content in Advance pattern to arrange PDF files, video files, and links strategically on the board. This instructor also uses Burst of Content and Spatial Rearrangement, asking groups of students to iterate over their designs based on new material discussed in class. During presentations, this instructor uses the Side-by-Side approach with the presentation schedule in a webview on one side of the display and the presenting group screensharing on the other side. One member of our group with a large display that supports touch interaction uses annotations regularly to sketch charts. Even using the same tool, the variability in classroom environments and instruction methods leads to different teaching experiences, yet the patterns described above are general enough to be incorporated whenever a content-rich canvas and large display combination is available. The unifying factor is the "space to teach" afforded by a content-rich canvas like SAGE3 and large displays. Our experience shows that our setups help us successfully engage students in learning required materials in ways that are collaborative and interactive, making headway in answering questions posed by Bach et al. <cit.> regarding methods and environments for visualization and visually-intensive education. A review of students' feedback does suggest weaknesses in our approach. Content-rich canvases, such as SAGE3 and Miro, have a learning curve <cit.> and many find navigation difficult. These difficulties could potentially have affected student motivation to learn a new system, which may have in turn affected, in our case, SAGE3 features explored and perceptions of usability like ease of navigation. 
Students who didn't use SAGE3 often tended to not like it; this may be due to the aforementioned problems leading to low motivation to learn how to use it. We intend to increase efforts to train students to use SAGE3, evaluate and improve SAGE3's design and features for usability issues, and study this confounding factor in future evaluations. Some students encountered technical problems, such as lag, which dampened their experience. This is a problem we are aware of; lag worsens if the board is too dense with content, and it is often better to branch into new boards when first experiencing delay. We should also note that SAGE3 is still under development and the software is improving all the time; many of the technical problems that were experienced in the Fall 2023 semester are already dealt with. For more information about SAGE3 and its development, check out the SAGE3 Github <cit.> and wiki <cit.>. The vast majority of students felt using a content-rich canvas like SAGE3 with large displays provided advantages over other forms of instructions: they appreciated that more could be shown on the large display, the support for group work, ability to download material, and how it left room for playfulness. The biggest question raised as a result of the student evaluations is How much content is too much? The surveys indicate that some students find the SAGE boards chaotic, distracting, even overwhelming. This perception suggests that there are factors in play with a large display plus content-rich canvas environment that could detract from the benefits such an environment can bring to educational settings. Finding a good balance of content and techniques that utilize the collaborative, interactive affordances of large displays and content-rich canvases, like SAGE3, while also not overwhelming students with too much content remains an interesting avenue for future research. Some example questions that could be answered along these lines include "how much content is too much", "what causes students to feel overwhelmed in a large display plus content-rich canvas environment", and "how can we address and mitigate the chaos while maintaining the benefits of large displays plus content-rich canvases?" In the future we also anticipate that the SAGECells will have a bigger role in visualization classes since they can provide computational capabilities similar to a standard computational notebook but with the advantage of using a spatial arrangement, a feature that shows promise according to some studies <cit.>. In addition, developers have been creating plugins for SAGE3 and experimenting with running AI on SAGE3's backend. This opens possibilities that can affect visualization education, and merits future research. Perhaps one interesting outcome from using a content-rich canvas like SAGE3 for education is that it creates a visual representation, a visualization, of the lessons. Viewing a board after class conveys a sense for class progress and activities in a way that cannot be replicated in traditional classes. § CONCLUSION This paper touches on challenges instructors come across in higher education, such as the need to utilize collaboration, interactivity, and play while teaching visually-intensive courses like visualization, especially with novel environments and tools that may be conducive for such courses. 
We bring up the concept of content-rich canvases, which are virtual surfaces containing many types of content meaningfully arranged spatially, and suggest using content-rich canvases in combination with large displays is beneficial for visually intensive courses. The heart of the paper details 6 usage patterns for an environment with at least 1 large display and a content-rich canvas software that we identified by analysing our own notes from teaching classes using the content-rich canvas software SAGE3 with large displays, which are Burst of Content, Side-by-Side, Content in Advance, Virtual Board Scroll, Bespoke Spaces, and Spatial Rearrangement. We discuss likely elements of the patterns in terms of layout, time, and lead actor. We also provide concrete examples for activities and their resulting boards taken directly from a visualization course. We also incorporated evaluations that share students' attitude toward classes that use content-rich canvases, specifically in the form of SAGE3 boards. The students' impressions are vastly positive, feeling it improves collaboration, interactivity, and engagement. However, they also expose some issues. Students that do not gain familiarity with the system seem to not be fond of content-rich canvases like SAGE3, indicating a need for more training as the semester starts as well as further study of what affects the difficulty of using a content-rich canvas in a collaborative setting and how such issues can be addressed through refined designs. Some technical issues like lag occur and can be a detriment; this can be solved by starting new boards when old boards become overburdened. Most importantly, some students found the content-rich canvas approach, as done with SAGE3, overwhelming, which leads us to ask questions like "how much content is too much", a question we need to explore in future work. This project is funded in part by the National Science Foundation awards: 2004014, 2003800, 2003387, 2149133, and the Academy for Creative Media System. abbrv-doi-hyperref
http://arxiv.org/abs/2409.02914v1
20240904175243
Can LVLMs Obtain a Driver's License? A Benchmark Towards Reliable AGI for Autonomous Driving
[ "Yuhang Lu", "Yichen Yao", "Jiadong Tu", "Jiangnan Shao", "Yuexin Ma", "Xinge Zhu" ]
cs.CV
[ "cs.CV" ]
§ ABSTRACT Large Vision-Language Models (LVLMs) have recently garnered significant attention, with many efforts aimed at harnessing their general knowledge to enhance the interpretability and robustness of autonomous driving models. However, LVLMs typically rely on large, general-purpose datasets and lack the specialized expertise required for professional and safe driving. Existing vision-language driving datasets focus primarily on scene understanding and decision-making, without providing explicit guidance on traffic rules and driving skills, which are critical aspects directly related to driving safety. To bridge this gap, we propose IDKB, a large-scale dataset containing over one million data items collected from various countries, including driving handbooks, theory test data, and simulated road test data. Much like the process of obtaining a driver's license, IDKB encompasses nearly all the explicit knowledge needed for driving from theory to practice. In particular, we conducted comprehensive tests on 15 LVLMs using IDKB to assess their reliability in the context of autonomous driving and provided extensive analysis. We also fine-tuned popular models, achieving notable performance improvements, which further validate the significance of our dataset. The project page can be found at: <https://4dvlab.github.io/project_page/idkb.html> § INTRODUCTION In recent years, Large Vision-Language Models (LVLMs) <cit.> have emerged as powerful tools in AI, showcasing impressive capabilities in areas such as visual dialogue and document understanding. Building on the general knowledge of LVLMs, some approaches <cit.> have leveraged these models to enhance the efficiency, robustness, and interpretability of autonomous vehicles, addressing the intricate challenges of autonomous driving in the open world. However, LVLMs are often trained on vast and generic datasets, lacking the specialized expertise required for the driving domain. This gap in domain-specific knowledge can lead to potential inaccuracies when these models are applied to self-driving systems, where precision and reliability are paramount. To address this issue, many vision-language driving datasets <cit.> for LVLM fine-tuning have been developed. Most of these datasets simply add textual annotations to traffic images from existing datasets, which limits the complexity and diversity of the scenarios they cover. While a few datasets <cit.> are specifically collected and annotated with more challenging driving scenarios, they still primarily focus on tasks like scene perception and decision-making, rather than providing structured driving knowledge. As a result, models built on these datasets can only implicitly learn driving knowledge through the supervision of driving decisions. However, this approach differs significantly from how humans learn to drive, which involves studying driving instructions, traffic laws, driving rules, driving skills, and methods for handling emergency situations. 
Consequently, these models often lack a comprehensive understanding of driving knowledge, leading to unstable and unreliable performance in real-world applications. To this end, we propose Intelligent Driving Knowledge Base (IDKB), the first large-scale vision-language dataset dedicated to professional driving knowledge and experience. Typically, humans learn to drive systematically by studying driving materials, taking theory tests, and practicing on the road. To enable LVLMs to effectively “earn a driver's license” and guarantee their driving safety, we compiled an extensive collection of driving handbooks and test questions from various countries, covering traffic laws, rules, driving techniques, and crisis management skills. In addition to theoretical data, we generated practical data by simulating diverse road scenarios in CARLA <cit.>, including variations in weather, lighting, traffic conditions, and more. This comprehensive effort resulted in a dataset of over 1 million data entries with various formats, spanning 15 countries, 9 languages, and 4 vehicle types. By capturing diverse driving regulations and practices from various regions, our dataset provides a solid foundation for thoroughly evaluating LVLMs and enhancing their ability to acquire safe and efficient driving capabilities. Based on IDKB, we conducted an extensive evaluation for 15 existing LVLMs to assess their degree of mastery in driving knowledge and skills. As shown in Fig. <ref>, the evaluated LVLMs generally lacked strong driving domain knowledge, underscoring the need for fine-tuning with high-quality, structured, and diverse driving knowledge data for effective application in autonomous driving. We also fine-tuned several of these models using our dataset, and the experimental results demonstrate that explicit and structured driving knowledge significantly enhances the performance of LVLMs, leading to more effective and accurate outcomes. Our findings highlight the importance of incorporating specialized, domain-specific knowledge into LVLMs to better equip them for the complex and safety-critical task of autonomous driving. Our key contributions can be summarized as follows: * We introduce IDKB, the first large-scale vision-language dataset explicitly containing both driving theory and practical knowledge. * We evaluate 15 existing LVLMs on our dataset and provide a comprehensive analysis of their driving abilities. * We offer fine-tuned, open-source LVLMs trained on our dataset, which possess enhanced professional driving expertise. § RELATED WORK §.§ LVLMs for Autonomous Driving Recently, research on Large Vision-Language Models (LVLMs) has surged, with multimodal models such as GPT-4V <cit.>, Qwen <cit.>, and LLaVA <cit.> demonstrating strong performance across a wide range of general tasks. Leveraging these generalized capabilities, several approaches have begun to integrate LVLMs with autonomous driving algorithms to enhance self-driving car performance and interpretability. For example, DriveGPT4 <cit.> processes multimodal input data and generates both text responses and vehicle control signals by fine-tuning a LVLM on an instruction-tuning dataset. AgentDriver <cit.> converts driving situations into textual descriptions with human-like intelligence, then uses an LLM to reason and plan. Similarly, DriveVLM <cit.> employs a LVLM to output planning trajectories through a Chain-of-Thought (CoT) reasoning process. 
However, due to being trained on vast amounts of general data, LVLMs often lack the specialized driving knowledge necessary for accuracy and reliability in driving. Therefore, a dataset that covers specific and comprehensive driving knowledge is crucial for both evaluating and enhancing LVLMs for autonomous driving. §.§ Vision-Language Driving Datasets With the rise of Large Vision-Language Models (LVLMs), numerous vision-language datasets for autonomous driving have been developed for better understanding driving scenes. The pioneering works BDD-X <cit.> and BDD-OIA <cit.> annotated video datasets with textual descriptions and explanations of ego car actions. Many subsequent multimodal datasets have relabeled existing self-driving datasets. Talk2Car <cit.> adds free-form, high-quality natural language commands to the nuScenes dataset. NuScenes-QA <cit.> creates 460,000 question-answer pairs based on 3D object relationships to evaluate models' understanding and reasoning abilities. DriveLM <cit.> constructs perception, prediction, and planning question-answer pairs in a graph structure to simulate human reasoning, thus enhancing end-to-end autonomous driving systems. VLAAD <cit.> utilizes GPT-4 to generate question-answer pairs from BDD-X <cit.>, producing an instruction-following dataset that features complex reasoning, detailed descriptions, and conversation. However, these datasets rely heavily on existing datasets and often lack complex scenarios. To address this, some datasets are collected and annotated from scratch. DRAMA <cit.> identifies critical objects in traffic scenarios and provides corresponding linguistic descriptions of driving risks. DriveVLM <cit.> presents SUP-AD, a scene understanding and planning dataset with annotations on challenging and long-tail scenarios. CODA-LM <cit.> is a large-scale multimodal self-driving dataset focusing on road corner cases. However, these works lack explicit knowledge of traffic regulations, rules, and driving techniques, limiting LVLMs in their ability to develop a comprehensive and abstract understanding of driving knowledge. In contrast, our dataset mirrors the human process of acquiring driving knowledge by collecting detailed annotations from driving handbooks and test questions, while also integrating theoretical learning with practical application through simulated road scenarios in CARLA. By explicitly presenting this knowledge, our approach closely aligns with human learning styles, enabling LVLMs to acquire and integrate driving knowledge more efficiently and reliably, ultimately enhancing their performance in driving-related tasks. § METHODS Intelligent Driving Knowledge Base (IDKB) is structured as a driving knowledge resource, mirroring the process individuals follow to acquire expertise when obtaining a driver's license. This process typically involves studying driving handbooks, taking theory tests, and practicing on the road. In this section, we introduce the data construction pipeline<ref> and present the statistics and characteristics of our dataset. §.§ Data Construction §.§.§ Driving Handbook Data Driving handbooks are highly structured and comprehensive resources, covering laws, regulations, techniques, safety, and more. They serve as the foundational step in learning to drive. By studying these handbooks, an intelligent system can develop a thorough and well-rounded understanding of the driving domain. 
We collected 206 documents, including traffic laws and driving handbooks, totalling 23,847 pages from 15 different countries via the Internet. As shown in Fig. <ref>, we collect and organize unordered data from these documents in systematic way. We first employ layout detector and Optical Character Recognition (OCR) technology to extract data blocks and the text within these blocks. Subsequently, we developed an algorithm to cluster and sequence the data blocks in a way that aligns with human readability. Finally, we filter out duplicate and irrelevant data. An example is presented in Fig. <ref>. §.§.§ Driving Test Data Driving test data represent an alternative format for the knowledge covered in driving handbooks. While the handbooks offer a structured and comprehensive overview, the test questions reorganize this knowledge into multiple-choice and short-answer formats. This approach allows an intelligent system to both reinforce what it has learned and assess its understanding in a more interactive manner. Engaging with these questions ensures that the system has thoroughly internalized the handbook content, making this process an essential part of learning. To construct this part of data, we extensively collected questions from driving tests of 15 different countries. As depicted in Fig. <ref>, the data collection process is similar with that of driving handbook data. We extract relevant information from various driving tests, and then reorganize these metadata into standard question-answer formats. Most entries we collected are multiple-choice questions, with one or more correct answers, while a smaller portion belongs to open QA questions. Two annotated examples are presented in Fig. <ref>. Data Augmentation. To enhance the diversity of data and expand the scale of the dataset, we employed GPT-4o model to augment Driving Test Data. To ensure the enhanced data is well-structured, we divide the item into the question stem, options, and explanation sections. Each section undergoes incremental enhancement three times using GPT-4o. After completing the enhancements, the sections are combined to form the enhanced question-answer pair. To assure quality and avoid duplication, we require GPT-4o to output three distinct enhanced versions of the data in a single response, following a specific format. Invalid data is identified and removed through format checking and manual screening afterwards. In addition, for each data entry containing images, we generate textual descriptions of the images using GPT-4o and incorporate these descriptions into the dataset for further application. Data Quality Control. To ensure high data quality, we implemented a two-step verification process at the end of the collection pipeline. Initially, an automated program filters out obvious low-quality data, such as images with extremely low resolution or very short text. After this automated removal, we conduct a manual review to further refine and ensure the quality of the remaining data. §.§.§ Driving Road Data After studying the driving handbooks and test questions, the intelligent system gains sufficient theoretical knowledge of driving. The next crucial step is to apply and reinforce this knowledge in real-world driving scenarios, which ensures that the system can effectively translate theoretical understanding into practical, real-world competence. 
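For illustration, the column clustering and reading-order recovery in the handbook extraction pipeline described above could be sketched as follows; the block format, the DBSCAN eps value, and the helper names are illustrative assumptions rather than the authors' exact implementation:

# Illustrative sketch: cluster OCR'd layout blocks into columns and order them
# for reading. Block format and thresholds are assumptions, not the authors' code.
from sklearn.cluster import DBSCAN
import numpy as np

def order_blocks(blocks, eps=80):
    """blocks: list of dicts with 'bbox' = (x0, y0, x1, y1) and 'text'."""
    if not blocks:
        return []
    # Cluster blocks by the horizontal centre of their bounding boxes so that
    # each column of a multi-column page falls into the same cluster.
    x_centers = np.array([[(b["bbox"][0] + b["bbox"][2]) / 2.0] for b in blocks])
    labels = DBSCAN(eps=eps, min_samples=1).fit_predict(x_centers)

    # Group blocks per column and sort columns left-to-right by mean x-centre.
    columns = {}
    for block, label in zip(blocks, labels):
        columns.setdefault(label, []).append(block)
    ordered_columns = sorted(
        columns.values(),
        key=lambda col: np.mean([(b["bbox"][0] + b["bbox"][2]) / 2.0 for b in col]))

    # Within each column, read top-to-bottom; drop duplicate text blocks.
    ordered, seen = [], set()
    for col in ordered_columns:
        for block in sorted(col, key=lambda b: b["bbox"][1]):
            text = block["text"].strip()
            if text and text not in seen:
                seen.add(text)
                ordered.append(block)
    return ordered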
However, existing datasets for real-world driving scenarios are often limited in scope, scale, and coverage of traffic signs, making it difficult to thoroughly understand road conditions. To address these limitations, we leverage the CARLA <cit.> simulator to generate a large, high-quality dataset that offers a more comprehensive understanding of traffic regulations at a low cost. Our approach begins with constructing custom simulated environments in the CARLA simulator to generate a batch of traffic sign understanding data. We further expand this dataset by extracting scenes containing traffic signs from the Bench2Drive (Jia et al., 2024) dataset, thereby creating additional annotated traffic sign data. As demonstrated in Fig. <ref>, our driving road data collection process involves the following three steps. Scene Construction. In the scene construction stage, we first collect high-resolution traffic signs from different countries and create 3D models for each traffic sign using the CARLA-UE4 editor. Then we select two of CARLA’s large maps and set the traffic signs at appropriate locations within the CARLA simulator. Camera Data Collection. In the camera data collection stage, we generate an ego vehicle equipped with camera sensors to collect image data by driving the vehicle manually in the CARLA simulator. To ensure the authenticity of the simulation data, we randomly generate a large number of vehicles and pedestrians in the CARLA world and control them through the Autopilot mode and the WalkerAIController provided by the CARLA, respectively, to simulate the real road conditions. To ensure the diversity and richness of the data, we set different weather conditions, times of day, and road surface slipperiness to simulate real camera views and driving scenes. While collecting the camera sensor data, we also record the position, bounding box, rotation and other necessary information of each actor in the CARLA world for the next step. In total, we drive the ego vehicle in the CARLA simulator for about 20 hours and obtain approximately 400,000 frames of camera data. Data Annotation. After obtaining the image data and actor information, we automatically pick out the frames containing traffic signs based on the position, distance and direction of the sign relative to the ego vehicle. Then we manually build a dictionary to define the descriptions and explanations for each sign, and attach the text annotations to each frame in an automated process. Finally, we obtained a total of 112,388 data samples, including both multiple-choice and question-and-answer formats. Data Quality Control. Images that lack traffic signs or are obscured by adverse weather conditions or occlusions are considered as low-quality data. To maintain high data quality, we manually check and remove any images that do not meet the required standards. §.§ Dataset Statistics In total, IDKB provides 1,016,956 data entries. As shown in Fig. <ref>(a), Driving Test Data constitutes the largest portion of the dataset (84.0%), with CARLA and Driving Handbook Data accounting for 11.1% and 5.0%, respectively. Our dataset spans multiple countries and multiple languages. As presented in Fig. <ref>(b)(c), apart from English-speaking countries, we also include driving knowledge from China, Italy, Germany, and others. 
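As a concrete illustration of the automatic frame selection in the Data Annotation step above, the following sketch keeps a frame only if a recorded sign lies within the distance and angle ranges reported in the supplementary material; the record layout and function names are assumptions, and atan2 is used in place of arctan for quadrant handling:

# Illustrative sketch of picking frames that contain a visible traffic sign,
# based on the sign's position and orientation relative to the ego vehicle.
import math

def wrap_deg(angle):
    """Normalize an angle in degrees to the range [-180, 180]."""
    angle = angle % 360.0
    return angle - 360.0 if angle > 180.0 else angle

def sign_is_visible(ego, sign, max_dist=30.0, loc_fov=35.0, rot_band=(160.0, 180.0)):
    """ego/sign: dicts with 'x', 'y' (meters) and 'yaw' (degrees); thresholds follow the paper."""
    dx, dy = sign["x"] - ego["x"], sign["y"] - ego["y"]
    distance = math.hypot(dx, dy)                                      # Euclidean distance in the x-y plane
    yaw_loc = wrap_deg(math.degrees(math.atan2(dy, dx)) - ego["yaw"])  # where the sign lies w.r.t. the ego heading
    yaw_rot = wrap_deg(sign["yaw"] - ego["yaw"])                       # how the sign face is oriented w.r.t. the ego
    return (0.0 < distance <= max_dist
            and abs(yaw_loc) <= loc_fov
            and rot_band[0] <= abs(yaw_rot) <= rot_band[1])

# Keep only frames where at least one recorded sign passes the check, e.g.:
# frames = [f for f in recorded_frames if any(sign_is_visible(f["ego"], s) for s in f["signs"])]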
In terms of vehicle types, we categorized the data into four classes: Car (standard passenger vehicles including sedan, jeep, ...), Truck (large vehicles including minivan, commercial, LGV, ...), Bus (including minibus, trailer, coaches, ...), and Moto (Motobike, Motocycle). Fig. <ref>(d) presents the distribution of the vehicle types. To better analyze the knowledge coverage of our dataset, we employed proprietary LVLMs to classify all the questions into four major categories according to their semantics, including Laws & Regulations (22.2%), Road Signs & Signals (38.6%), Driving Techniques (22.0%), and Defensive Driving (17.1%). More data details are provided in Supplementary. §.§ Data Characteristics As shown in Tab. <ref>, compared with existing vision-language autonomous driving datasets, IDKB possesses four main novel characteristics as follows. §.§.§ Diverse Data Type Our dataset contains diverse data types, encompassing both Question and Answer (QA) and Multiple-choice Question (MCQ). Most existing datasets typically focus on a single question format. By including both QA and MCQ formats, IDKB enhances its utility for various applications, such as training models for open-ended and structured queries, thus providing a more comprehensive testing ground for autonomous driving models. §.§.§ Diverse Data Source Our dataset integrates both real-world and synthetic data sources to provide a comprehensive coverage of driving scenarios. We collected driving manuals and test questions from various countries across the internet, which form the basis of our real-world data. To complement this, we enriched our dataset with data from CARLA-simulated road scenes. Relying solely on real-world road data has its limitations, as it may not cover all possible scenarios encountered in diverse driving environments. By incorporating CARLA simulations, we address this limitation and ensure that our dataset encompasses a wider range of scenarios. This combined approach allows the system to effectively translate theoretical knowledge from driving manuals and test questions into practical operational capabilities. §.§.§ Diverse Data Domain IDKB exhibits exceptional domain diversity, covering 15 different countries, 9 languages, and 4 types of vehicles. This extensive coverage makes the dataset particularly versatile, enabling its application to various regional contexts and linguistic environments. Most existing datasets are limited to a specific country or language, typically focusing on the US and English. In contrast, IDKB's global approach ensures that models trained on it are better equipped to handle international variations in driving conditions, regulations, and vehicle types, facilitating broader applicability in autonomous driving systems worldwide. §.§.§ Diverse Knowledge Domain Our dataset also offers comprehensive coverage of knowledge domains relevant to autonomous driving. It includes detailed information on Traffic Laws and Regulations, Road Signs and Signals, Vehicle Control and Driving Techniques, and Defensive Driving strategies. While other datasets may focus on one or two knowledge areas, IDKB provides a holistic view of the driving environment. This broad knowledge diversity is crucial for developing models that need to understand and navigate complex driving scenarios, making IDKB an invaluable resource for advancing autonomous driving technologies. 
§ EXPERIMENTS In this section, we evaluate 15 Large Vision-Language Models (LVLMs), both open-source and closed-source, using our proposed dataset. We begin by introducing the selected LVLMs and the tasks they are required to perform. Then, we outline the evaluation methods used for each task. Next, we present a quantitative evaluation of each task. §.§ Experiment Setup Selected LVLMs. We test 15 representative LVLMs that differ in terms of parameters, open-source availability, and their vision encoders (CLIP ViT <cit.>, EVA-CLIP-ViT <cit.>, SAM <cit.>, SigLIP <cit.>) as well as their LLMs (QWen <cit.>, Vicuna <cit.>, Yi <cit.>, DeepSeek <cit.>, InternLM <cit.>, LLaMA <cit.>, ChatGLM <cit.>, FLAN-T5 <cit.>). For a fair comparison, all LVLMs are used to infer questions from our dataset based on the same prompt. Further details on the selected LVLMs are provided in the Supplementary. Data Split. We selected all of the driving handbook data, along with ninety percent of the driving test data and driving road data, to form the training set. The remaining ten percent of the driving test data and driving road data were used to create the test set. For more detailed statistics on the training and test sets, please refer to the Supplementary. Tasks. We evaluate the LVLM's performance using two data sources: driving test data and driving road data, each containing multiple-choice questions (MCQ) and question-and-answer (QA) tasks. In the MCQ tasks, the LVLM must select the correct answer(s) from the provided options, while in the QA tasks, it is required to generate the most relevant response to a given question. §.§ Evaluation Details For MCQ tasks, we use regular expressions to extract options from the LVLM outputs and compare them with correct answers, measuring accuracy as the metric. Partially correct answers are not accepted in multi-answer questions. Rule-based extraction is challenging due to the free-form nature of LVLM outputs. To address this, we introduce a instruction-following test where the output must include the string “Option [A to F]” as prompted. For QA tasks, we use ROUGE <cit.> and SEMScore <cit.> to measure similarity between LVLM outputs and the reference answers. ROUGE evaluates N-gram overlap, while SEMScore assesses semantic similarity using sentence embeddings. We calculate average scores for both MCQ and QA metrics across data sources, creating Test Data and Road Data Scores. The mean of these two scores is the IDKB Score, reflecting the LVLM's overall mastery of driving knowledge. More details about metrics are provided in Supplementary. §.§ Main Results In this subsection, we analyze various LVLMs on our test set, focusing on overall performance, distinctions between driving test and road data, handling of single versus multiple answers, comparison between proprietary and open-source LVLMs, and adherence to instructions. We also highlight the impact of fine-tuning with our dataset, showcasing key improvements in model performance. More details and analysis will be provided in Supplementary. Overall Evaluation In Tab. <ref>, we present the performance results of various VLMs on our test set. GPT-4o achieved the highest overall performance with an IDKB score of 0.64. Among the open-source models, XComposer2 stood out with the best performance, achieving an IDKB score of 0.45. Most open-source models fell within an IDKB score range from 0.35 to 0.4. However, BLIP2, XComposer, Yi-VL and VisualGLM underperformed, with IDKB scores of 0.27, 0.28, 0.28 and 0.29, respectively. 
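For clarity, the MCQ scoring protocol described in the evaluation details can be summarized in a short sketch; the regular expression and helper names below are simplifying assumptions rather than the exact extraction logic:

# Minimal sketch of MCQ scoring: extract chosen options from free-form LVLM output,
# check instruction following, and compute exact-match accuracy.
import re

OPTION_PATTERN = re.compile(r"Option\s*([A-F])", re.IGNORECASE)

def extract_options(output):
    """Return the set of option letters the model committed to."""
    return {m.upper() for m in OPTION_PATTERN.findall(output)}

def follows_instructions(output):
    """Instruction-following test: the answer must contain 'Option <A-F>'."""
    return bool(OPTION_PATTERN.search(output))

def mcq_accuracy(outputs, answers):
    """Exact match only: partially correct multi-answer responses score 0."""
    correct = sum(extract_options(o) == set(a) for o, a in zip(outputs, answers))
    return correct / max(len(outputs), 1)

# Example: mcq_accuracy(["I choose Option A and Option C."], [["A", "C"]]) -> 1.0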
Overall, the evaluated LVLMs generally did not demonstrate strong driving domain knowledge, highlighting the need for high-quality, structured and diverse driving knowledge data for effective applications in autonomous driving. Driving Test Data vs. Driving Road Data LVLMs generally perform better on driving road data than on driving test data. Most open-source LVLMs scored around 0.25 on Test Data Score, while the average Road Data Score was 0.44. A similar trend was observed in the two proprietary LVLMs. This suggests that while many LVLMs have a basic understanding of traffic signs, they lack a deeper comprehension of traffic laws, regulations, and driving skills—areas more thoroughly assessed in driving test data. Single Answer vs. Multiple Answer The performance of LVLMs on multi-answer questions showed clear polarization. About half of the LVLMs outperformed on multi-answer questions compared to single-answer questions. Conversely, the other half performed significantly worse on multi-answer questions than on single-answer ones. Notably, models such as VisualGLM-6B, Qwen-VL, CogVLM, XComposer, and Monkey-chat struggled particularly with multi-answer questions, failing to provide accurate responses. This disparity in performance suggests that the complexity of driving knowledge required for multi-answer questions presents a substantial challenge for certain VLMs, underscoring the variability in their capabilities. Instruction Follow Ability Overall, proprietary models excel in adherence to instructions, with accuracy of 0.99, indicating minimal deviation from prescribed answer templates. In contrast, among open-source models, only VisualGLM-6B, XComposer, ShareGPT4V-6B, XComposer2, and Monkey-chat demonstrate a relatively high level of instruction-following capability. The remaining open-source models are prone to producing non-compliant text, with Yi-VL-6B exhibiting the most significant issue—only 11% of its outputs align with the input requirements. Some LVLMs may struggle with consistency and accuracy in adhering to specified instructions, underscoring the need for additional fine-tuning to enhance their ability to comply with instructions in practical applications. Proprietary vs. Open-Source Proprietary LVLMs generally outperform open-source LVLMs in evaluation results, likely due to their larger number of parameters and more extensive knowledge base. This advantage is more pronounced with driving test data, which requires external knowledge of laws and rules—areas where proprietary models excel. In contrast, when dealing with driving road data, which emphasizes traffic sign recognition, some open-source LVLMs like XComposer2 can achieve performance levels comparable to those of proprietary models. Significant Improvement through Fine-Tuning To better evaluate the impact of structured driving knowledge data on model performance in this domain, we fine-tuned four representative LVLMs, each with a different visual encoder or LLM. As shown in Table <ref>, the fine-tuned LVLMs achieved IDKB scores comparable to, or even matching, those of proprietary models with significantly larger parameters. The improvement in the Test Data Score is particularly striking, with MiniCPM-Llama3-V2.5's performance doubling in this metric. This suggests that our data equips LVLMs with valuable expertise in driving laws, rules, techniques, and handling special situations. 
Additionally, the model's ability to interpret traffic signs, as indicated by the Test Road Score, showed improvement over the original, suggesting an enhanced understanding of traffic regulations in road scenarios. These results underscore the importance of our dataset in enhancing LVLM competence within the driving domain. By equipping models with structured and diverse driving knowledge, our dataset plays a crucial role in strengthening LVLMs' expertise, ultimately contributing to the development of safer and more reliable autonomous driving systems. § BENEFITS OF DRIVING KNOWLEDGE FOR DOWNSTREAM AUTONOMOUS DRIVING TASKS In this section, we showcase the application of the Qwen-VL-chat model, fine-tuned on the IDKB dataset, for the planning task using the nuScenes <cit.> dataset. This demonstrates how integrating driving knowledge data can significantly enhance performance in downstream tasks. §.§ Setup Based on Agent-Driver <cit.>, we generated fine-tuning data for nuScenes trajectory planning and fine-tuned Qwen-VL-chat for this task. To demonstrate the value of IDKB data for autonomous driving, we also fine-tuned another Qwen-VL-chat model using a combination of the nuScenes planning data and our IDKB data. For inference, as shown in Fig. <ref>, we follow a prompt method similar to DriveVLM <cit.>. The LVLM first identifies the traffic signs on the road and then receives key information to predict the trajectory for the next three seconds. §.§ Results and analysis The nuScenes planning task relies on imitation learning from human-generated planning data, while the IDKB dataset is designed to align with the human model of driving knowledge acquisition. This alignment allows the LVLM fine-tuned with IDKB to gain a deeper understanding of driving knowledge, including traffic laws, regulations, and driving skills, which in turn leads to more rational and safer route planning. Our experimental results support this hypothesis. As shown in the table, the model fine-tuned with IDKB data demonstrates superior performance, with a 32% reduction in average L2 distance and a 65% decrease in collision metrics, indicating safer and more rational planning. Additionally, as illustrated in Fig. <ref>, the model fine-tuned with both nuScenes and IDKB data successfully identifies the 'ROAD WORK AHEAD' sign, understands the need to slow down, and drive with caution. The planned trajectory confirms this, with the decreasing offset between consecutive frames reflecting deceleration. § CONCLUSIONS In this paper, we introduced IDKB, a pioneering large-scale dataset designed to bridge the gap in domain-specific driving knowledge within LVLMs. IDKB includes over 1 million entries on driving regulations, scenarios, and practices from 15 countries and 9 languages. We evaluated 15 LVLMs using IDKB to assess their performance in the context of autonomous driving and provided extensive analysis. Additionally, we fine-tuned several popular models, achieving significant improvements in their performance, which further highlights the importance of our dataset. ieeenat_fullname The supplementary first provides more details of our dataset, covering data collection, augmentation, and key statistics, with sample presentations in Sec. <ref>. We then outline the experimental setup, including datails of selected LVLMs, dataset division, assessment methods, and metric calculations in Sec. <ref>. Next, we analyze the benchmark results in Sec. 
<ref>, focusing on LVLMs' performance across different data domains and knowledge coverage. Finally, in Sec. <ref>, we show that models fine-tuned with our dataset, in addition to nuScenes <cit.>, produce safer, human-compliant trajectory planning, highlighting the impact of enriched driving knowledge. § MORE DETAILS ABOUT DATASETS In this section, we present detailed information on the data collection process (Sec. <ref>), provide comprehensive statistics on the dataset (Sec. <ref>), and include additional data visualizations (Sec. <ref>). §.§ Data Construction §.§.§ Driving Handbook Data Collection Driving handbooks are highly structured and comprehensive resources, covering laws, regulations, techniques, safety, and more. To extract unordered data from these documents, we first utilize layout detection and Optical Character Recognition (OCR) technology to extract data blocks and the text within these blocks. Specifically, we use LayoutLMv3 <cit.> as our layout detector and PaddleOCR as our OCR toolkit. Next, we utilize DBSCAN <cit.> to cluster the blocks so that each column in a multi-column PDF falls into the same cluster. After that, we sequence the data blocks based on their positions and the columns they belong to. Finally, we filter out duplicate and irrelevant data. §.§.§ Driving Road Data Collection In CARLA, the position and rotation of an actor are represented by carla.Location and carla.Rotation, respectively. The carla.Location format is given as (x, y, z) in a rectangular coordinate system, measured in meters. While CARLA is based on UE4, carla.Rotation differs in its definition, using (pitch, yaw, roll) to describe orientation. Specifically, pitch represents rotation around the Y-axis, yaw represents rotation around the Z-axis, and roll represents rotation around the X-axis, all measured in degrees. During the Camera Data Collection stage, we captured the position and rotation of the ego vehicle and traffic signs. Subsequently, in the Data Annotation stage, images containing traffic signs were identified based on the position, distance, and rotation of the traffic signs relative to the ego vehicle. The distance is calculated as follows: distance = √((x_sign-x_ego)^2 + (y_sign-y_ego)^2) where x_sign, y_sign represent the location of the traffic sign and x_ego, y_ego represent the location of the ego vehicle. In CARLA, the yaw ranges from -180^∘ to 180^∘. A yaw of 0^∘ indicates that the actor is facing the positive X-axis. A positive yaw value corresponds to a counterclockwise rotation (left turn), while a negative yaw value indicates a clockwise rotation (right turn). The locational angle of the traffic sign relative to the ego vehicle is calculated as: yaw_loc = (arctan((y_sign - y_ego)/(x_sign - x_ego)) ×180/π) - yaw_ego where yaw_ego represents the rotation around the Z-axis of the ego vehicle. Then yaw_loc is normalized to the range [-180^∘, 180^∘] by: yaw_loc = yaw_loc mod 360, followed by yaw_loc = yaw_loc - 360 if yaw_loc > 180, and yaw_loc left unchanged otherwise. The directional angle of the traffic sign relative to the ego vehicle is calculated as follows: yaw_rot = yaw_sign - yaw_ego where yaw_sign represents the rotation around the Z-axis of the traffic sign. Then yaw_rot is normalized to the range [-180^∘, 180^∘] in the same way as above. After testing, we finally set the valid ranges of yaw_loc, distance, and yaw_rot to [-35^∘, 35^∘], (0, 30m], and [-180^∘, -160^∘]⋃[160^∘, 180^∘], respectively, and check these conditions in turn. §.§.§ Data Augmentation. 
To enhance the diversity of data and expand the scale of the dataset, we employed GPT-4o model to augment data. To ensure the enhanced data is well-structured, we divide the item into the question stem, options, and explanation sections and enhance them separately. After that enhancaed sections are combined to form the enhanced question-answer pair. Prompt we used are shown in Tab. <ref>. §.§ Data Statistics Tab. <ref> provides a summary of detailed statistics for IDKB, including numbers on dataset split, question types, image position, average text length, and other relevant statistics. Note that we calculate average question/option length on the unaugmented data, and special tokenizers are utilized to count words for Chinese, Korean and Japanese text. Tab. <ref> presents detailed numbers on country distribution. The annotation language of each country's data is also listed. To obtain a statistical view of the knowledge domains within IDKB, we designed specific prompts and employed proprietary LVLMs to classify all the questions into four major categories based on their semantics. Here we provide detailed definitions of the four kinds of knowledge domains: * Traffic Laws and Regulations: This category focuses exclusively on the statutory and legal aspects of driving. It includes questions about state and local traffic laws, understanding of legal driving ages, alcohol and drug influence laws, and penalties for traffic violations. This category ensures that drivers are aware of legal obligations and the consequences of their actions on the road, which can be a substantial part of a driving test. * Road Signs, Signals, and Lane Markings: This category is dedicated to testing knowledge of road signs (stop, yield, pedestrian crossing, etc.), traffic signals, and different lane markings. This includes interpreting what various signs and markings mean and how to respond to them while driving. The category serves to evaluate a driver’s ability to navigate roads safely and appropriately, understanding that correct responses to signs and signals are crucial for safe driving. * Vehicle Control and Driving Techniques: Here, the focus is on practical driving skills and vehicle management. Questions could include scenarios dealing with vehicle control in adverse weather conditions, the correct use of mirrors, turning and passing maneuvers, and parking techniques. This category tests a driver’s hands-on ability to operate a vehicle safely and maintain control in everyday driving situations as well as in unexpected conditions. * Driver Responsibility and Defensive Driving: This category assesses knowledge related to driver’s behavior and responsibilities. It includes defensive driving techniques, the importance of seat belts, methods to prevent distracted driving, and how to act responsibly in various driving situations (school zones, neighborhoods). Questions about accident procedures, emergency responses, and first aid may also be included. It underscores the importance of personal responsibility and proactive safety measures in driving. §.§ More Data Examples To provide a more intuitive overview of our IDKB dataset, we present additional data examples in this section, as illustrated in Fig.<ref>, <ref> and <ref>, for Driving Handbook Data, Driving Test Data, and Driving Road Data respectively. § MORE DETAILS ABOUT EXPERIMENT SETUP In this section, we provide additional information on the selected LVLMs in Sec. 
<ref>, outline the evaluation methods for multiple-choice and question-and-answer questions in Sec. <ref>, and explain the computation of the three overall metrics in Sec. <ref>. Then in Sec. <ref> we introduce the prompts we used in the paper. §.§ LVLMs Model Details Tab. <ref> provides details of the LVLMs used in this paper, including their parameter sizes, visual encoders, and LLMs. The selected open-source LVLMs have parameter sizes ranging from 6.6B to 12.1B, featuring 4 different vision encoders and 11 distinct LLMs. Finetuning and inference times also vary across models. For instance, finetuning Qwen-VL-chat with LoRA <cit.> on our dataset, using 8 NVIDIA A40 GPUs with a maximum token length of 2048, took 66 hours and required approximately 24GB of memory. §.§ Evaluation Details Multiple-Choice Questions For multiple-choice questions, we use regular expressions to extract answers according to the logic outlined in Algorithm <ref>. For instruction-following, we simply check whether the LVLM's output contains the pattern "Option + letter". Question-and-Answer For question-and-answer, we employ ROUGE <cit.> and SEMScore <cit.> to measure the similarity between LVLM outputs and the reference answers. ROUGE evaluates the N-gram overlap between outputs and the reference answers. For each question, we also treat the enhanced answers as references to deal with the free-form nature of LVLM outputs, especially for ROUGE. We compute ROUGE-1 and ROUGE-L between the output and each reference answer, and then select the maximum score for both ROUGE-1 and ROUGE-L. For SEMScore, we first utilize a sentence transformer <cit.> to embed the LVLM outputs and the reference answers. Specifically, we use "paraphrase-multilingual-mpnet-base-v2" as our model. Then we measure the cosine similarity between the embeddings, which reflects the agreement between the output and the reference answers at the semantic level. Similar to ROUGE, we take the maximum SEMScore as the SEMScore for the output. §.§ Overall Metrics The overall score for MCQs is calculated as the weighted accuracy of both single-choice and multiple-choice questions, with the quantity of each question type used as the weighting factor. The value of instruction-following is excluded from this calculation and is used only as a reference. For QAs, the overall score is a weighted combination of ROUGE-1, ROUGE-L, and SEMScore, with weights of 0.15, 0.15, and 0.70, respectively. Given the free-form nature of LVLM outputs, semantically similar responses may receive low ROUGE scores due to limited N-gram overlap. To address this, we assign a higher weight to SEMScore to better capture human-like evaluation. The Test Data Score and Road Data Score are calculated as the averages of the overall MCQ and QA scores from the driving test data and driving road data, respectively. The IDKB score is then derived as the average of the Test Data Score and Road Data Score. §.§ Prompt We include the country, vehicle type, and question type in our prompt to provide background knowledge to the LVLMs. Additionally, the prompt includes a detailed description of the required output format to ensure the responses meet the specified standards. The prompts we used to evaluate LVLMs are shown in Tab. <ref>. § FURTHER ANALYSIS OF LVLM RESULTS In this section, we analyze the entire test set, including both the Driving Test Data and Driving Road Data. We examine the performance of all evaluated models, focusing on overall performance (Sec. <ref>), performance across different countries (Sec. <ref>), languages (Sec. 
<ref>), and vehicle types (Sec. <ref>), as well as performance across various knowledge domains (Sec. <ref>). §.§ Evaluate LVLMs Performance on test set As shown in Fig. <ref> and Fig. <ref>, for MCQ questions, the proprietary model significantly outperformed the open-source models, with the accuracy gap between the best and worst open-source performers (CogVLM vs. VisualGLM) being nearly twofold. However, for QA questions, the gap is less pronounced, with many open-source models performing comparably to proprietary models. §.§ Evaluate LVLMs Performance by Country We provide the performance of different models on questions from different countries in the Fig. <ref> and Fig. <ref>. MCQ questions involve 14 countries, while QA questions cover 3 countries. For QA questions, the performance gap between models is consistent across the three countries. However, for questions from India, all models scored lower compared to the other two countries. In the case of MCQ questions, we observed significant variations in LVLM performance across countries. LLaVA-v1.5-7B, XComposer, and ShareGPT4V performed exceptionally well on Japanese questions, significantly outperforming other models. LLaVA-v1.5-7B and ShareGPT4V also delivered strong results on Italian questions. For UK questions, GPT-4 achieved an accuracy exceeding 80%, which is particularly impressive. However, on questions from Japan, Korea, and Spain, some LVLMs had an accuracy of 0. This disparity highlights the varying capabilities of different LVLMs across countries and underscores the value of our dataset's diversity in providing a comprehensive evaluation §.§ Evaluate LVLMs Performance by Language Fig. <ref> illustrate the performance of various models on questions across different languages. Since the QA questions in the test set are exclusively in English, this analysis focuses solely on the MCQ questions. The performance disparity between models is more pronounced for minority languages compared to English. LLaVA-v1.5-7B, XComposer, and ShareGPT4V performed exceptionally well on Japanese questions, whereas GPT-4 faced some challenges in this area. CogVLM and BLIP2 showed strong performance across several minority languages, including Spanish, German, French, Korean, and Italian. In contrast, MiniCPM-LLaMA-V2.5 struggled with minority languages, achieving a correct rate of 0 on Spanish and Korean questions, although it performed well on Traditional Chinese. These results demonstrate that models exhibit different strengths depending on the language and region. While some models excel in specific languages or regions, they may underperform in others. This variation underscores the importance of evaluating models across a diverse set of languages and regions to fully understand their capabilities and limitations. §.§ Evaluate LVLMs Performance by Vehicle Type Since all QA questions in the test set are of the “car” type, our analysis concentrates on the MCQ questions. Fig. <ref> illustrates the performance of various models across different vehicle types. Among the open-source models, BLIP2 demonstrates the strongest performance in the motor, truck, and bus categories, while LLaVA-v1.5-7B excels in the car category. Notably, Qwen-VL-chat shows consistent but average performance across motor, truck, and car categories. Interestingly, the models tend to perform better on questions related to motor, truck, and bus than on car-related questions, suggesting that the car category poses unique challenges that may require further exploration. 
§.§ Evaluate LVLMs Performance by Knowledge Coverage Fig. <ref> and Fig. <ref> present the performance of various models across different knowledge domains. For MCQ questions, proprietary models show higher accuracy in Driving Techniques and Defensive Driving compared to the other categories. In contrast, most open-source models underperform in these areas, indicating that proprietary models may possess better common-sense knowledge. For QA questions, the Signs & Signals category achieves a significantly higher average score than the other categories, although Yi-VL performs poorly in this area. Scores for Driving Techniques and Defensive Driving also surpass those for Laws & Regulations, highlighting that models tend to perform better in more application-oriented categories than in regulatory ones.
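As a compact reference for the scoring protocol used throughout this analysis, the following sketch aggregates per-question metrics into the overall QA, MCQ, and IDKB scores; the weights follow the Overall Metrics subsection, while the function and variable names are illustrative assumptions:

# Illustrative aggregation of the overall metrics (weights from the paper:
# ROUGE-1 0.15, ROUGE-L 0.15, SEMScore 0.70). Function names are assumptions.
def qa_score(rouge1, rougeL, semscore, w=(0.15, 0.15, 0.70)):
    """Weighted QA score for one question (each input is the max over reference answers)."""
    return w[0] * rouge1 + w[1] * rougeL + w[2] * semscore

def mcq_score(single_acc, multi_acc, n_single, n_multi):
    """Accuracy weighted by the number of single- and multi-answer questions."""
    total = n_single + n_multi
    return (single_acc * n_single + multi_acc * n_multi) / max(total, 1)

def source_score(mcq, qa):
    """Average of the overall MCQ and QA scores for one data source."""
    return (mcq + qa) / 2.0

def idkb_score(test_data_score, road_data_score):
    """Final IDKB score: mean of the Test Data and Road Data scores."""
    return (test_data_score + road_data_score) / 2.0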
http://arxiv.org/abs/2409.03487v1
20240905125224
ScreenMark: Watermarking Arbitrary Visual Content on Screen
[ "Xiujian Liang", "Gaozhi Liu", "Yichao Si", "Xiaoxiao Hu", "Zhenxing Qian", "Xinpeng Zhang" ]
cs.CV
[ "cs.CV" ]
ScreenMark: Watermarking Arbitrary Visual Content on Screen Xiujian Liang Gaozhi Liu Yichao Si Xiaoxiao Hu Zhenxing Qian Xinpeng Zhang =========================================================== § ABSTRACT Digital watermarking has demonstrated its effectiveness in protecting multimedia content. However, existing watermarking methods are predominantly tailored to specific media types, rendering them less effective for the protection of content displayed on computer screens, which is often multimodal and dynamic. Visual Screen Content (VSC) is particularly susceptible to theft and leakage via screenshots, a vulnerability that current watermarking methods fail to adequately address. To tackle these challenges, we propose ScreenMark, a robust and practical watermarking method designed specifically for arbitrary VSC protection. ScreenMark utilizes a three-stage progressive watermarking framework. Initially, inspired by diffusion principles, we initialize the mutual transformation between regular watermark information and irregular watermark patterns. Subsequently, these patterns are integrated with screen content using a pre-multiplication alpha blending technique, supported by a pre-trained screen decoder for accurate watermark retrieval. The progressively complex distorter enhances the robustness of the watermark in real-world screenshot scenarios. Finally, the model undergoes fine-tuning guided by a joint-level distorter to ensure optimal performance. To validate the effectiveness of ScreenMark, we compiled a dataset comprising 100,000 screenshots from various devices and resolutions. Extensive experiments across different datasets confirm the method's superior robustness, imperceptibility, and practical applicability. § INTRODUCTION With the continuous advancement of the Internet and computer technologies, an increasing amount of information is presented in the form of Visual Screen Content (VSC), including images, videos, texts, web pages, windows, and more. On users' personal computers, VSC is displayed on screens without exception. From a visual perspective, this mode of presentation is the most readily acceptable form of expression. At the same time, it also means that VSC can easily be leaked. For most enterprise and home computers, ensuring data security is a significant challenge. Traditional security measures, such as data encryption, firewalls, access control, and identity management, provide comprehensive protection against data leaks. However, these methods primarily manage permissions, allowing authorized users to capture screen content in real time using screenshot tools. Current specialized protection techniques for VSC often rely on non-learning watermarking methods, which are often even visible. These methods struggle to balance robustness and visual quality effectively, rendering them unsuitable for real-world applications. Therefore, this study focuses on VSC security in screenshot scenarios and aims to propose a universal learning-based screen watermarking method. In recent years, the rapid advancement of multimedia watermarking technology <cit.> has facilitated the protection of multimedia file content. However, it is important to recognize that current watermarking techniques are primarily designed for individual modalities, offering specialized protection for images, videos, text, and other media types. As depicted in Fig.<ref>, there are two main drawbacks: limited scope of protection and response time constraints. 
Fig.1(a) shows that while traditional watermarking can safeguard specific media content within a single modality or frame, it falls short in providing comprehensive protection across various file types and the vast number of files present on personal computers. Moreover, these methods struggle to counteract millisecond-level capture attacks under dynamic screen conditions. In contrast, the method proposed in this paper, illustrated in Fig.1(b), does not focus on protecting a single media file or specific screen frame at a given time. Instead, it integrates the watermark with the screen through a unique fusion process, offering comprehensive and real-time protection for arbitrary VSC displayed on the screen. This approach has been named ScreenMark. In ScreenMark, we introduce a three-stage watermarking framework utilizing progressive training<cit.>. This approach completes robustness training at various levels across different stages, resulting in a versatile screen watermarking solution for VSC sreenshot scenarios. Traditional watermarking methods often embed watermark information directly into media content via an encoder, leading to increased processing time and limited protection scope. To overcome these limitations, ScreenMark employs an irregular watermark pattern that blends more naturally and comprehensively with screen content, resembling a mask. Moreover, this irregular pattern makes it more challenging for unauthorized users to detect and remove the watermark, thereby enhancing the security of the protection mechanism. Building on this, the three-stage progressive training strategy further refines this approach. Each stage addresses specific challenges: from basic message diffusion and reversal to adaptive screen decoder training, and finally, handling composite distortion. Through progressively complex training scenarios, ScreenMark enhances system resilience in VSC capture and subsequent processing scenarios. Based on the above, the contribution of this paper can be summarized as follows: 1) We introduce a novel and practical multimedia protection scenario that addresses not only single-modal media content but also multi-modal VSC displayed on computer screens. And we point out the limitation of the mainstream single-modal watermarking methods in terms of protection scope and response time in this scenario. 2) To the best of our knowledge, we present the first learning-based watermarking framework specialized for VSC protection, named ScreenMark. In ScreenMark, regular watermarking information is diffused into irregular watermarking patterns and integrated with the screen display. We propose a three-stage progressive training strategy and design various levels of distorters tailored to different stage. 3) To enhance the applicability of ScreenMark to VSC protection, we have compiled a dataset of 100,000 screenshot images. These images were collected using different screenshot tools across a diverse range of devices and resolutions from SD (720x480) to 4K (3840x2160). 4) Extensive experiments demonstrate that ScreenMark matches or even surpasses the performance of four SOTA single-modal watermarking methods in screenshot scenarios in terms of robustness, invisibility, and applicability to real-world situations. § RELATED WORK §.§ Deep-learning-based Watermarking Deep-learning approaches have effectively addressed the limitations of hand-crafted features in watermarking. 
<cit.> introduced an end-to-end solution using an auto-encoder architecture, establishing a foundation in the domain. To enhance robustness against JPEG compression, MBRS<cit.> proposed a hybrid noise layer of real and simulated JPEG with a small batch strategy. <cit.> utilized a pre-trained model to create a transform-invariant latent space for watermark embedding, achieving higher robustness against various attacks. DWSF<cit.> offers a practical framework for decentralized watermarking, training an auto-encoder to resist non-geometric attacks and incorporating a watermark synchronization module for geometric attacks. Moreover, some recent methods<cit.> explore invertible neural networks for watermark embedding and extraction. However, these primarily protect specific media content in a single modality and are inadequate for the multi-modal VSC in real-time scenarios. §.§ Screen-related Watermarking The visibility of VSC to the public has sparked interest in its preservation among scholars. <cit.> utilized the Human Visual System to create dynamically adaptable watermarks, while <cit.> aimed to prevent screenshot data leakage through full-screen protection. These non-learning-based VSC watermarking methods struggled with balancing robustness and invisibility, making them easily detectable by attackers. Driven by the need for screen protection, screen-shooting resilient watermarking (SSRW) addresses cross-channel content leakage. <cit.> first modeled screen shooting distortions and proposed robust watermarking schemes based on DCT and SIFT. <cit.> introduced CDTF, a network simulating the screen-to-camera process using a multi-million dataset. <cit.> developed a noise layer for the printer-to-camera channel, addressing various distortions. <cit.> designed a 3D reconstruction-based noise layer for the camera-shot channel, achieving camera-shot robustness. Similarly, <cit.> created PIMoG, a noise layer for the screen-to-camera channel, enhancing screenshot robustness. To improve visual quality, <cit.> embedded information in sub-images and used a localization network to identify watermarked regions. To address distortion variability across different screens, <cit.> introduced DeNoL, an efficient decoupling noise layer that simulates distortions accurately with fewer samples by fine-tuning transform layer. However, SSRW, which focuses on modeling the physical distortion of recapture, protects only specified fixed content and is limited in scope and response time, making it ineffective for screen interception scenarios in VSC. In contrast, our work aims to protect arbitrary VSCs, providing real-time protection as the screen changes and multi-modal protection within a single watermark framework. § PROPOSED METHOD §.§ Motivation & Overview To address the limitations in protection scope and response times encountered by current watermarking techniques for screen content, this paper introduces a three-stage watermarking framework specifically designed for screen content. Inspired by the diffusion model<cit.>, which diffuses a regular image with noise to generate an irregular image, this framework adopts a novel approach. The diffusion process can be reversed through denoising, restoring the original image. This suggests directly integrating irregular watermark patterns obtained from the diffusion of regular watermark information with the screen content, rather than embedding information into the protected multimedia carrier using an encoder, as is common in traditional watermarking frameworks. 
This is particularly important for screen content protection because it allows real-time and cost-effective integration of watermark patterns with screen content. It offers a protection method that is unrestricted by scope and response time, integrating securely and naturally with screen content. Our approach transforms regular watermark information into irregular watermark patterns and integrates them with screen content, eliminating the need for encoder-based information embedding. With this in mind, we designed a three-stage watermarking framework, executing robustness training at different levels. The overall framework shows in Fig.<ref>. §.§ Stage-1: Pairwise Initialization In Stage-1, we initialize a pair of a Message Diffuser and a Message Reverser. These modules effectively facilitate the dissemination of watermark information into watermark patterns and their subsequent reverse recovery. During this stage, we introduce an image-level distorter to enhance robustness against image-level attacks that may occur with screen captures.The framework consists of a Message Diffuser M_D, Message Reverser M_R, and an Image-level Distorter D_I. §.§.§ Workflow Initially, we generate a batch of regular watermark messages I_w , with batch size N_0 and information length L. Subsequently, I_w is subjected to diffusion processing within the M_D, resulting in an irregular watermark pattern P_w.Upon acquiring P_w, each pattern in batch undergoes parallel image-level distortion processing by 𝒟_I^k ∈ D_I, where k ∈{1, 2, …, N_0}, yielding the distorted watermark pattern P_d. Finally, the reverse-processed watermark information I_r is retrieved through the M_R. §.§.§ Architecture The M_D is comprised of a linear layer, N_1 diffusion block, and a convolution Block connected in series. It receives a regular watermark information I_w of size ℝ^ 1× 1× L× N_0 and outputs a regular watermark pattern P_w of size ℝ^ H× W× 3× N_0. The diffusion blocks use upsampling and transposed convolution in parallel to suppress checkerboard effects and enhance feature representation without increasing network depth. Subsequently, in D_I, different batches of I_w are randomly subjected to one of three types of image-level distortions 𝒟_I^k ∈ D_I, specifically Resize distortion, Crop distortion , and Cropout distortion. These distortions are common in actual screenshot scenarios and cannot be restored by third parties without the attack parameters. Conversely, the M_R consists of an HRNet, N_2 reversal block, a double-convolution block, and two linear layers connected in series. It receives a distorted watermark pattern P_d of size ℝ^ H× W× 3× N_0 and outputs a reversed watermark information I_r of size ℝ^ 1× 1× L× N_0. §.§.§ Loss Function In this stage, the loss function serves two purposes: on one hand, control the security and stealthiness of the generated watermark patterns, and on the other hand, ensure the accuracy of the reversed information. To achieve the irregularity and invisibility of the watermark patterns, we propose four types of losses: near-zero loss, dispersion loss, variation loss, and channel loss. Near-zero loss reduce the interference of the watermark patterns on the original screen content. It minimize the mean squared error between the generated watermark pattern P_d and a tensor filled with zeros, signifying that the overall pixel values are close to zero , which can be formulated as: L_zero = MSE(P_d, 0) where 0 represents the zero matrix with shape as P_d. 
Dispersion loss prevents the watermark from concentrating in specific areas, improving robustness against image-processing attacks. It increases the dispersion of the watermark pattern by calculating the mean absolute value and variance, ensuring uniform distribution across the image, which can be formulated as: L_dispersion = mean(|P_d|) + var(P_d) where |P_d|, mean, and var represent the absolute value, the mathematical mean, and the variance of P_d, respectively. Variation loss reduces the visual noise introduced by the watermark. It encourages the spatial smoothness of the generated watermark pattern by minimizing drastic changes between adjacent pixels, making the watermark harder to detect by the naked eye, which can be formulated as: L_variation= ∑_i,j^W,H[√((P_d[i, j] - P_d[i+1, j])^2)+ √((P_d[i, j] - P_d[i, j+1])^2)] where i and j represent the row and column indices of pixels in P_d, and W and H represent the width and height of P_d. Variation loss considers pixel variations in both the vertical and horizontal directions, reducing high-frequency noise while preserving edge information. Channel balance loss reduces noticeable color distortion and maintains color balance. It minimizes the mean squared differences between the mean values of the R, G, and B color channels. This design is particularly important for color-sensitive screen content, which can be formulated as: L_Channel = mean((R_m - G_m)^2 + (R_m - B_m)^2 + (G_m - B_m)^2) where R_m, G_m, and B_m represent the means of the R, G, and B channels of P_d, respectively. Based on the considerations above, the corresponding loss function for the pattern can be written as follows: L_Pattern = λ_0L_zero + λ_1L_dispersion + λ_2L_variation + λ_3L_channel To ensure the consistency of the reversed information with the watermark, we design the loss function for the message using binary cross-entropy, which can be formulated as: L_Message = -1/N∑_i=1^N [I_w_ilog(I_d_i) + (1 - I_w_i) log(1 - I_d_i)] where N is the number of samples in the batch, I_d_i is the predicted value for the i^th sample, and I_w_i is the actual value for the i^th sample. The loss function in the initialization training process of Stage-1 can be formulated as: L_stage1 = β L_Pattern + γ L_Message where β and γ represent the relative importance of the watermark pattern and watermark message losses in this stage. §.§ Stage-2: Adaptive Pre-Training In Stage-2, we adaptively train the screen decoder D_S to accurately decode the irregular watermark patterns and reverse the message. We freeze the parameters of the model from Stage-1 and input the patterns into an Alpha blending rendering module R_α for integration with the computer screen. This stage also introduces a pixel-level distorter to increase robustness against pixel-level attacks. §.§.§ Workflow The process of transforming watermark information into watermark patterns remains consistent with Stage-1. Building on this, at this stage, the watermark patterns P_w and the screen content S_c are input into R_α. Through flattening or scaling, R_α integrates P_w with S_c to produce the watermarked screen content S_w. Specifically, the pattern forms a mask that floats above the screen through the α channel, so that a full-screen watermark can be realized on screens of any resolution without affecting the screen content. 
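As a concrete reference for the Stage-1 pattern objectives defined above, a minimal PyTorch sketch is given below; it mirrors the formulas up to constant scale factors (sums versus means) and is an illustrative re-implementation under assumed tensor shapes, not the authors' released code:

# Illustrative PyTorch sketch of the Stage-1 pattern losses (near-zero, dispersion,
# variation, channel balance). P_d is a batch of patterns with shape (N, 3, H, W).
import torch

def pattern_loss(P_d, lambdas=(1.0, 0.5, 0.1, 0.01)):
    l_zero = torch.mean(P_d ** 2)                              # keep pixel values close to zero
    l_dispersion = P_d.abs().mean() + P_d.var()                # spread the pattern uniformly
    l_variation = (                                            # penalize abrupt neighbour changes
        (P_d[..., 1:, :] - P_d[..., :-1, :]).abs().mean()
        + (P_d[..., :, 1:] - P_d[..., :, :-1]).abs().mean()
    )
    r_m, g_m, b_m = P_d[:, 0].mean(), P_d[:, 1].mean(), P_d[:, 2].mean()
    l_channel = (r_m - g_m) ** 2 + (r_m - b_m) ** 2 + (g_m - b_m) ** 2   # balance RGB means
    l0, l1, l2, l3 = lambdas
    return l0 * l_zero + l1 * l_dispersion + l2 * l_variation + l3 * l_channel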
After obtaining S_w, we employ Pixel-level Distorter 𝒟_P^k ∈ D_P to process with parallel distortion, where k ∈{1, 2, …, N_0}, resulting in the distorted watermarked screen content S_d. Finally, the distorted watermark pattern P_d is decoded from S_d using D_S. §.§.§ Architecture The R_α utilizes Direct3D for efficient graphics processing and the Windows API for window management, incorporating pre-multiplied alpha technology to optimize transparency handling. Initially, window creation and configuration are performed using Windows' CreateWindowEx, ensuring the window stays atop others while allowing mouse events to pass through, maintaining unobtrusive user interaction. Subsequently, Direct3D is utilized for GPU-accelerated rendering, enhancing efficiency and ensuring speed during terminal information switching. Finally, Pre-multiplied alpha image processing is applied before loading, where each pixel's color value is multiplied by its alpha value, simplifying transparency blending calculations. This computation can be described as follows: S_w = α P_w + (255 - α) S_c where α take the value of 5, meaning the watermark pattern affects less than 2% of screen content pixels. To optimize the image rendering, an advanced shader dynamically adjusts image transparency during rendering, leveraging pre-multiplied alpha techniques for automatic color and transparency blending. In D_P, batches of S_w are randomly selected among three types of image-level distortions, denoted as 𝒟_P^k ∈ D_P. These types include JPEG compression, Gaussian noise, and Gaussian blur. The D_S consists of three consecutive ConvBlock, two ResBlock, and an additional ConvBlock, linked in series. It receives S_d of size ℝ^ H× W× 3× N_0 and outputs a P_ds of the same size. §.§.§ Loss Function This stage facilitates the D_S in pretraining adaptively, building on the training foundation laid in the previous stage, to decode watermark patterns from screenshots of watermarked screens that have undergone pixel-level distortion attacks. The goal is to match the extracted pattern as closely as possible to the original pattern before distortion. With the weights from Stage-1 frozen, pre-training+ for the screen decoder is not interfered by other factors. The loss function in stage-2 is formalized as follows: L_stage2 = MSE(P_ds, P_w) §.§ Stage-3: Enhancement Fine-Tuning In Stage-3, we will synergistically fine-tune the model weights acquired from the Message Diffuser M_D, Message Reverser M_R, and Screen Decoder D_S from the previous two stages. Furthermore, we introduce an additional Joint-level Distortion Layer D_J that encompasses both image and pixel-level distortions. This enhances the model's robustness when integrated with screen content, effectively compensating for the limitations of the previous two stages. In order to make the model have just the right amount of robustness, the level of the distorter and its position are different. Based on the aforementioned model, we have named the complete network ScreenMark. The loss function during the enhancement fine-tuning process is as follows: L_stage3 = L_stage1 + L_stage2 § EXPERIMENTS §.§ Experimental Settings §.§.§ Benchmarks. We are the first learning-based watermarking specialized for VSC protection and have no directly relevant baseline model to compare against. 
In order to measure robustness, we compare our method with four state-of-the-art (SOTA) single-modal watermarking methods, i.e., StegaStamp <cit.>, PIMoG <cit.>, MBRS <cit.>, and DWSF <cit.>. §.§.§ Datasets. Given the absence of a suitable screenshot dataset for VSC protection, we created a dataset called ScreenImage, comprising 100,000 screenshots from various devices and resolutions ranging from SD (720x480) to 4K (3840x2160). We randomly selected 50,000 images as our training dataset. To evaluate ScreenMark, we randomly sample 1,000 images each from ImageNet <cit.> and ScreenImage (excluding the training split). Notably, MBRS only accepts a fixed input size post-training, necessitating the scaling of test images to 128x128. Detailed dataset categorization and collection are described in the APPENDIX. §.§.§ Implementations. Our method is implemented using PyTorch <cit.> and executed on an NVIDIA GeForce RTX 4090 GPU. In terms of experimental parameters, the information length L is 100. The height H and width W of the watermark pattern P_w are 512, optimized for blending with screens of different resolutions without implying a minimum size limit. The batch size N_0, the number of diffusion blocks N_1, and the number of reversal blocks N_2 are 16, 5, and 2, respectively. In Stage-1, the loss function weight factors β and γ are 0.1 and 1, respectively. For L_Pattern, λ_0, λ_1, λ_2 and λ_3 are set to 1.0, 0.5, 0.1 and 0.01, respectively. Through 8 sets of ablation studies, we balanced these choices for fast training and high visual quality. The α of the Alpha-Fusion Rendering Module R_α is set to 5, allowing us to control the watermark strength by adjusting α to achieve a trade-off. Robustness remains stable when the PSNR is between 36 and 42 dB (α∈[5,8]), which aligns with the PSNR range of the SOTAs. We use the Adam optimizer <cit.> with a learning rate of 1e-5 and set the number of training epochs to 100, while the compared methods adopt their default settings. Hyperparameter ablations are provided in the APPENDIX. §.§.§ Metrics. We use Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), and Learned Perceptual Image Patch Similarity (LPIPS) <cit.> to evaluate visual quality, and the Bit Accuracy Rate (BAR) to evaluate robustness. §.§ Robustness Performance In this section, we compare the robustness of our ScreenMark method with the four SOTAs across various attack types. The watermark length was set to 100 bits in our experiments. The types and implementation details of the attack settings also align with those used in the SOTAs. To ensure objective results, we used the ImageNet and ScreenImage datasets for our experiments. Further experiments on watermark length and on robustness against severe, hybrid, and real-world attacks are included in the APPENDIX. §.§.§ Robustness against Image-level Attacks We evaluate the robustness of ScreenMark and the SOTAs against image-level attacks with different factors. In Tab.<ref>, the headers represent the proportion of the Crop, Cropout, and Resize attacks applied to the original images, measured in percent. The attack methods used in the experiments align with those of the SOTAs for consistency and comparability. Notably, our ScreenMark consistently achieves over 94% bit accuracy, with an average performance of 97.81% across both datasets, regardless of the attack type. Although it does not outperform the best method in every case, its performance remains close to the top.
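The three image-level attacks can be reproduced with a few lines of code. The sketch below is a generic re-implementation; whether the reported percentage refers to area or to linear scale follows the conventions of the compared methods, and here crop/cropout are treated as area ratios and resize as a linear scale, as one reasonable reading.

    import cv2
    import numpy as np

    def crop_attack(img, ratio=0.5, rng=np.random):
        """Keep a random sub-region containing `ratio` of the original area."""
        h, w = img.shape[:2]
        ch, cw = int(h * np.sqrt(ratio)), int(w * np.sqrt(ratio))
        y, x = rng.randint(0, h - ch + 1), rng.randint(0, w - cw + 1)
        return img[y:y + ch, x:x + cw]

    def cropout_attack(img, ratio=0.5, rng=np.random):
        """Keep only a random sub-region (`ratio` of the area) and zero the rest."""
        h, w = img.shape[:2]
        ch, cw = int(h * np.sqrt(ratio)), int(w * np.sqrt(ratio))
        y, x = rng.randint(0, h - ch + 1), rng.randint(0, w - cw + 1)
        out = np.zeros_like(img)
        out[y:y + ch, x:x + cw] = img[y:y + ch, x:x + cw]
        return out

    def resize_attack(img, ratio=0.5):
        """Rescale the image to `ratio` of its original linear size."""
        h, w = img.shape[:2]
        return cv2.resize(img, (max(1, int(w * ratio)), max(1, int(h * ratio))))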
§.§.§ Robustness against Pixel-level Attacks In addition to image-level attacks, we also assessed robustness against pixel-level attacks, which are common in social networking scenarios. Tab.<ref> reports the bit error rate against pixel-level attacks in different factors. The table headers indicate the factors for each attack: JPEG compression quality factor (QF), Gaussian noise standard deviation (σ), and Gaussian blur kernel size (κ). Our ScreenMark method demonstrates exceptional stability and performance across various pixel-level attacks. In many cases, ScreenMark achieves the highest or second-highest bit accuracy rates, highlighting its robustness. Overall, ScreenMark consistently performs at a level comparable to the best method, and often surpasses other methods by approximately 1%, showcasing its reliability and effectiveness. §.§.§ Robustness in Real screenshot Scenarios To verify the practical application of ScreenMark, we tested its robustness in real screenshot scenarios. We collected screenshots using various publicly available tools, including Windows Screenshot, Snipaste, Greenshot, and WeChat Screenshot, across different resolutions. In experiments, watermarked VSC screenshots were randomly cropped with a fixed cropping size of 400*400 pixels, not exceeding 8% of the area of a 1080P image. The screenshots were saved in JPG format, with the compression quality determined by the default settings of each tool. The results are inclued in APPENDIX, indicating that ScreenMark maintains a bit accuracy rate above 94% across all resolutions and tools. This level of accuracy ensures that the watermark can be fully and correctly extracted in practical applications, especially when error-correcting codes are incorporated. §.§ Visual Quality The present work not only addresses the shortcomings of mainstream watermarking methods in VSC protection scenarios, demonstrating strong and stable robustness, but also achieves impressive visual quality. We verify the excellent performance of our work in terms of visual quality through qualitative visualization and quantitative metrics. §.§.§ Visualization of Watermarking Residuals As described in Stage-1, we control the generated watermark pattern to be an irregular image that is close to zero, evenly dispersed, gently varying, and as balanced in RGB channels as possible. This minimizes the impact on the original VSC quality while avoiding malicious recognition and erasure by watermark attackers. Our proposed ScreenMark is fused with arbitrary VSC, protecting massive media contents on screen in real time that SOTAs cannot. Considering this, we calculated and magnified the residuals of the watermarked image compared to the original image by 20 times, using a randomly selected test image from ScreenImage. The results of the visualization are shown in Fig.<ref>, confirming that our watermark pattern meets our design expectations. Richer visualization of ScreenMark is shown in APPENDIX. §.§.§ Quantification of Watermarked Images We introduced relevant image quality metrics from existing watermarking frameworks to assess visual quality. PSNR measures the image quality reference value between the maximum signal and background noise in dB, with higher values indicating less distortion. SSIM quantifies structural similarity between two images, with values from 0 to 1, and higher values indicating more similarity. LPIPS measures the difference between two images, with lower values indicating greater similarity. 
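Of these metrics, PSNR and the bit accuracy rate used in the robustness tables have simple closed forms; a generic reference implementation is given below (SSIM and LPIPS require dedicated packages and are omitted, and this is not the evaluation script used for the reported tables).

    import numpy as np

    def psnr(original, watermarked, peak=255.0):
        """Peak signal-to-noise ratio in dB between two uint8 images."""
        mse = np.mean((original.astype(np.float64) - watermarked.astype(np.float64)) ** 2)
        if mse == 0:
            return float("inf")
        return 10.0 * np.log10(peak ** 2 / mse)

    def bit_accuracy_rate(decoded_bits, true_bits):
        """Fraction of correctly recovered watermark bits (BAR)."""
        decoded_bits = np.asarray(decoded_bits).astype(int)
        true_bits = np.asarray(true_bits).astype(int)
        return float(np.mean(decoded_bits == true_bits))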
Table <ref> reports the visual quantification of watermarked images using different methods. ScreenMark achieves the best performance in both SSIM and LPIPS metrics, thanks to our pattern control and alpha fusion strategy. §.§ Other Comparison We have demonstrated the visual quality and robustness performance of ScreenMark, the key metrics for mainstream watermarkings. However, due to the unique scenarios addressed in this work, additional differences between ScreenMark and SOTAs need to be experimentally verified. The first key difference is the advantage of the proposed three-stage progressive training strategy in ScreenMark compared to the traditional end-to-end training approach. The second difference lies in the scope of protection, where ScreenMark offers a broader range of coverage than single-modality watermarking, as depicted in Fig.<ref>, so it will not be verified again. The third difference is the response time for watermark embedding in dynamic VSC changes, where ScreenMark significantly outperforms single-modality watermarking. §.§.§ The Ablation of Training Strategy One key advantage of ScreenMark is its three-stage progressive training strategy, compared to the traditional end-to-end training approach. This strategy provides two main benefits: it allows the model to gain experience with simpler tasks first, simplifying the learning process for more complex tasks and preventing premature convergence to local optima. Additionally, staged training enables the use of more focused and refined loss functions and optimization strategies at each stage. We conducted an ablation study to compare different training strategies, as shown in Figure <ref>. Our results indicate that the three-stage strategy achieves convergence by the 15th epoch, while E2E training exhibits oscillations in the loss function. §.§.§ The Comparison of Temporal Limitation Digital watermarking for VSC must be capable of real-time protection, as screenshot commands can be scripted to achieve millisecond-level interception. Existing single-modal watermarks can not embed in dynamic screen content in real time. We validated the responsiveness of SOTAs to VSC changes. As shown in Tab.<ref>, our watermark, fused directly to the screen, achieves a reaction time of 0 milliseconds, outperforming SOTAs. Notably, the time here is not the execution time of program, but the reaction time to reload the watermark information to the new VSC when it changes. § CONCLUSION In this paper, we propose ScreenMark, a robust deep-learning-based robust watermarking scheme for arbitrary VSC protection, using a three-stage progressive watermarking training strategy. The message diffuser and message reverser facilitate the transformation between regular watermarking information and irregular watermarking patterns. The alpha-fusion rendering module integrates these patterns into VSCs of any resolution, while the screen decoder extracts the watermark information from distorted watermarked screenshots. We built a dataset with 100,000 screenshots from various devices and resolutions. Extensive experiments demonstrate ScreenMark's effectiveness in robustness, imperceptibility, and practical applicability.
http://arxiv.org/abs/2409.02855v1
20240904162851
Vacuum Radiation Pressure Fluctuations on Electrons
[ "L. H. Ford" ]
hep-th
[ "hep-th", "hep-ph", "quant-ph" ]
Institute of Cosmology, Department of Physics and Astronomy, Tufts University, Medford, Massachusetts 02155, USA § ABSTRACT This paper is a continuation of a study of the properties and applications of quantum stress tensor fluctuations. Here we treat the vacuum fluctuations of the electromagnetic energy-momentum flux operator which has been averaged in space and time. The probability distribution of these fluctuations depends upon the details of this averaging and may allow fluctuations very large compared to the variance. The possibility of detecting their effects on electrons will be considered. The averaging of the flux operator will arise from the interaction of an electron with a wave packet containing real photons. The vacuum radiation pressure fluctuations can exert a force on the electron in any direction, in contrast to the effect of scattering by real photons. Some numerical estimates of the effect will be given. Vacuum Radiation Pressure Fluctuations on Electrons L. H. Ford September 9, 2024 ==================================================== § INTRODUCTION This paper will deal with quantum fluctuations of radiation pressure and their possible observable effects. The vacuum fluctuations of stress tensor operators have been a topic of several investigations in recent years <cit.>-<cit.>. A key result which has emerged is that the probability distribution for these fluctuations is very sensitive to the details of how they are measured. On a formal level, a quadratic operator, such as a stress tensor component, must be averaged in time in order to have a well defined probability distribution. Physically, this averaging is linked to the details of the measurement process. The rate of decay of the Fourier transform of the averaging functions with increasing frequency determines the asymptotic form of the probability distribution. Typically, this form is an exponential of a small fractional power. As a result, the probability of large fluctuations can be orders of magnitude larger than would have been predicted by a Gaussian distribution. Some possible observable effects of these large fluctuations might include enhancement of quantum tunneling rates <cit.>, or light scattering by zero point density fluctuations in a liquid <cit.>. In the present paper, a different process will be addressed: the effects of quantum radiation pressure fluctuations on the motion of electrons. A discussion of radiation pressure fluctuations on atoms was given in Ref. <cit.>. A key feature of the probability distribution for vacuum radiation pressure fluctuations is that it is symmetric; the pressure is equally likely to occur in any direction. In contrast, the radiation pressure due to real photons exerts a force in the direction in which the photons are traveling. Here a model will be presented in which space and time averaging of the electromagnetic momentum flux operator is produced by a localized wave packet containing real photons. The scattering of real photons by an electron can give the electron linear momentum in the direction of motion of the wave packet. However, the vacuum radiation pressure fluctuations can potentially contribute momentum in the opposite direction. If this contribution can be observed, this could constitute an observation of vacuum radiation pressure fluctuations. The outline of this paper is as follows: Section <ref> will review selected aspects of the quantum radiation pressure operator and its fluctuations. In particular, Sect. <ref> will deal with large fluctuations.
Section <ref> will present a model for the interaction of both real photons and vacuum fluctuations with an electron. Some numerical estimated of the magnitude of the vacuum effects will be given. The results of the paper will be summarized in Sect. IV. Units in which c = ħ =1 will be used unless otherwise noted. § THE RADIATION PRESSURE OPERATOR The electromagnetic momentum flux in the z-direction is T^tz(t, 𝐱) = (𝐄×𝐁)^z = E^x B^y -E^y B^x , where 𝐄 and 𝐁 are the quantized electric and magnetic field operators, respectively. Recall that in units where ħ = c = 1, the electromagnetic momentum flux operator is also the energy flux operator, the Poynting vector. §.§ Coherent States and the Classical Limit A classical electromagnetic wave may be considered to be a highly excited coherent state. For the case of a single excited mode, such a state may be defined as an eigenstate of the photon annihilation operator for this mode: a |z ⟩ = z |z ⟩ where z is an arbitrary complex number, The mean number of photons in this state is ⟨ a^† a ⟩ = |z|^2 , and the variance in this number is ⟨ (a^† a)^2 ⟩ - ⟨ a^† a ⟩^2 = |z|^2 . If |z| ≫ 1, then the fractional fluctuations are small, and the expectation values of the electric and magnetic field operators approximate classical solutions of Maxwell's equations. However, fluctuations around these mean values are always present on some level, and can produce physically observable effects including radiation pressure fluctuations on a mirror. The mean momentum flux in a coherent state, ⟨ z| T^tz |z ⟩, is the classical radiation pressure. In quantum theory, this pressure may be viewed as originating from the momentum carried by the individual photons in the coherent state. Similarly, quantum fluctuations around the mean value may be viewed as arising from fluctuations in photon number <cit.>. Consider the case of a coherent state for a traveling wave mode moving in the +z direction., in which case ⟨ z| T^tz |z ⟩ > 0. Fluctuations in the number of photons passing by per unit time lead to fluctuations around this mean value, but will never change its sign. In this picture, the minimum value of the radiation pressure is zero, and this value is only reached when no photons arrive. The situation is quite different in quantum field theory. Here the vacuum state can be a source of rich phenomenology, including the Casimir effect and the Lamb shift. §.§ Vacuum Fluctuations and Spacetime Averages The vacuum fluctuations of the radiation pressure operator are well-defined only if it has been averaged in time. Let S^z be the momentum flux sampled in both time and space with averaging functions f(t) and g(𝐱) S^z = ∫_-∞^∞ T^t z(t, x) f(t) g(𝐱) dt d^3 x . Note that the operators T^t z, and hence S^z, are automatically normal ordered, as ⟨0|T^t z|0⟩ = ⟨0|S^z|0⟩ =0. We assume that f(t) and g(𝐱) are infinitely differentiable, non-negative functions which are normalized so that ∫_-∞^∞ f(t) dt = ∫ g(𝐱) d^3 x = 1 . Here f(t) and g(𝐱) are interpreted as describing the effect of a physical measurement of the averaged momentum flux. Although spatial averaging is not essential for S^z to be well defined, here we assume it is present as a part of the measurement. Furthermore, we require that f = g=0 outside of a finite spacetime region, as a physical measurement should occur in such a region. As a consequence, f(t) and g(𝐱) cannot be analytic functions. Such functions which are nonzero in a finite interval are referred to as having compact support. 
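A standard example of such a compactly supported, infinitely differentiable sampling function is a normalized bump function. The short numerical sketch below (illustrative, not from the original paper) constructs one with characteristic width τ, enforces the unit-integral normalization, and evaluates its Fourier transform at a few frequencies to show the rapid falloff discussed next.

    import numpy as np

    tau = 1.0
    t = np.linspace(-tau, tau, 200001)[1:-1]      # open interval (-tau, tau)

    # Smooth bump: nonzero only for |t| < tau, infinitely differentiable everywhere
    f_raw = np.exp(-1.0 / (1.0 - (t / tau) ** 2))
    f = f_raw / np.trapz(f_raw, t)                # enforce  integral f(t) dt = 1

    def f_hat(omega):
        """Fourier transform f_hat(omega); f is even, so the transform is real."""
        return np.trapz(np.cos(omega * t) * f, t)

    for w in [0.0, 10.0, 30.0, 60.0, 100.0]:
        print(w, abs(f_hat(w)))
    # f_hat(0) = 1 by normalization; the magnitude then falls off rapidly with
    # omega, in the manner described in the text below.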
Define their Fourier transforms by f̂(ω) =∫_-∞^∞ dt e^-iω t f(t) , and ĝ(𝐤) = ∫ d^3 x e^i 𝐤·𝐱 g(𝐱) . Infinitely differentiable but compactly supported functions must have a Fourier transform which decays faster than any power of ω as ω→∞, but more slowly than an exponential. A class of such compactly supported functions is described in Sect. II of Ref.  <cit.>. For these functions, f̂(ω) decays as an exponential of a fractional power: f̂(ω) ∼γ e^- β (τω)^α . Here τ is the characteristic temporal duration of f(t), and the above form holds when τω≫ 1. This asymptotic form depends upon the constants 0 < α < 1, β >0, and γ. The value of α is especially crucial, as it determines the magnitude of the vacuum fluctuations of S^z. Smaller values of α cause f̂(ω) to fall more slowly as ω increases, leading to larger fluctuations. We expect ĝ(𝐤) to have a similar asymptotic form for large k, except with τ replaced by the characteristic spatial sampling scale, ℓ. Suppose that the electric and magnetic field operators are expanded in terms of plane wave modes, with photon creation and annihilation operator coefficients, a^†_j and a_j, where j =(𝐤, λ) is a mode label describing a photon's wavevector 𝐤 and polarization λ. In this basis, the averaged flux operator S^z is represented as S^z = ∑_i j (A_i j a^†_i a_j + B_i j a_i a_j + B^*_i j a^†_i a^†_j ) . Here A_i j = √(ω_i ω_j)/3 V f̂(ω_i -ω_j ) ĝ(𝐤_i-𝐤_j) , and B_i j = √(ω_i ω_j)/3 V f̂(ω_i +ω_j ) ĝ(𝐤_i+ 𝐤_j) , where V is a quantization volume and ω_j = |𝐤_𝐣| is the mode angular frequency. Note that Eqs, (<ref>) and  (<ref>) have the same form as the corresponding relations for the operator :φ̇^2:, given in Ref. <cit.> , where φ̇ is the time derivative of a massless scalar field. The origin of the numerical factors in Eqs. (<ref>) and (<ref>) is explained in Sect IIB of Ref. <cit.>. Recall that time averaging with τ > 0 is essential for a quadratic operator such as S^z, but space averaging is not. The averaged operator is still well defined in the limit where we average in time at a single spatial point, in which case g(𝐱) →δ(𝐱) and ĝ(𝐤) → 1. §.§ The Moments and Eigenstates of S^z For the case of a massless scalar field in two spacetime dimensions, it is possible to give explicit exact expressions for the probability distribution of vacuum flux fluctuations for selected choices of the temporal sampling function <cit.>. No such exact results are known in four dimensions, but there are at least two approaches for approximately determining the probability distribution P(x) for stress tensor fluctuations. Here x = τ^4 S^z is a dimensionless measure of the flux. The first approach involves calculation of the moments: μ_n = ⟨0|(S^z)^n|0⟩ , and was used in Refs. <cit.>. The rate of growth of μ_n as n increases may be used to infer asymptotic form, Eq. (<ref>). More specifically, the form of P(x) near a given value of x is determined by the μ_n where <cit.>. n ≈ x^α/3 in the world line limit, and n ≈ x^α in the spacetime averaged limit. Unfortunately, the moments approach suffers from the ambiguity that the moments grow too rapidly to satisfy the Hamburger moment criterion, which is the condition under which the set of moments {μ_n } uniquely determine P(x). However, the moments do determine the averaged features of the distribution. When two distinct distributions possess the same moments, they typically differ by an oscillatory function which averages to zero. Fortunately there is an alternative method which is free of this ambiguity. 
This is diagonalization of an operator of the form of Eq. (<ref>) by a Bogolubov transformation. This involves a linear transformation of the photon creation and annihilation operators a_j^† and a_j to a new basis with operators b_k^† and b_k in which S^z takes the diagonal form S^z = ∑_kλ_k b^†_k b_k + C , where C and the λ_k are constants. The eigenstates of S^z are number eigenstates in the new basis, | n_k ⟩_b, where b^†_k b_k | n_k ⟩_b = n_k | n_k ⟩_b. The corresponding eigenvalues are n_k λ_k +C. The probability distribution in the physical vacuum, |0⟩, is found from the probability amplitude, ⟨ 0 | n_k ⟩_b, to find the k th eigenvalue in a measurement on |0⟩. The b-mode eigenstates are multi-mode squeezed vacuum states in the a-mode basis, which is the basis of physical photon states. In practice, the diagonalization of S^z needs to be performed numerically. This was done for the operator :φ̇^2: in Refs. <cit.>, with results which agree well with the moments approach. For our present purposes, the eigenstates of S^z are the outcomes of a physical measurement of the spacetime averaged momentum flux, and hence of the radiation pressure on an electron, which will be the topic of the Sect. <ref>. §.§ Typical Fluctuations The typical vacuum flux fluctuations have a variance given by the second moment μ_2 = ⟨0|(S^z)^2|0⟩ = 2 ∑_ij B_ij B^*_ij , and a root-mean-square value given by S^z_ rms = √(μ_2) = x_ rms/τ^4. Here x_ rms is a dimensionless constant whose magnitude depends upon the specific choice of the sampling function. The explicit form of μ_2 in the V →∞ may be found from Eq. (<ref>) to be μ_2 = 1/288 π^6 ∫ d^3 k_1 d^3 k_2 ω_1 ω_2 |f̂(ω_i +ω_j )|^2 |ĝ(𝐤_i+ 𝐤_j)|^2 . Now consider the worldline limit, where τ≫ℓ. In this case we have μ_2 =1/18 π^4 ∫_0^∞ dω_1 dω_2 (ω_1 ω_2 )^3 |f̂(ω_1+ω_2)|^2 . As α decreases, μ_2 and hence x_ rms increase, due to an increasing contribution from high frequency modes, as may be seen from Eq. (<ref>). An explicit example of an f̂(ω) with α = 1/2 is the function L̂(ω) in Sect. IIB of Ref. <cit.>, which is approximated by the function ĥ_ fit(ω) in Appendix A of Ref. <cit.>. In this case, x_ rms≈ 2.5. Quantum fluctuations of both linear fields and stress tensors exhibit subtle correlations and anti-correlations. These may be described by a correlation function, which for the flux operator can take the form C(t,t') = ⟨0|S^z(t) S^z(t') |0⟩ . Here S^z(t) and S^z(t') denote flux operators averaged over different spacetime regions localized near times t and t', If C(t,t') > 0, then a flux measurement near t is positively correlated with one near t'. Similarly, C(t,t') < 0 implies anti-correlation. The correlations of vacuum energy density fluctuations is discussed in Refs. <cit.>, for example, where it was argued that these fluctuations tend to be anti-correlated. Thus negative energy density tends to be either proceeded of followed by positive energy density. We can expect a similar anti-correlation in vacuum energy flux fluctuations. Note that C(t,t') describes the correlations of typical fluctuations, and that C(t,t) = μ_2(t), the variance of the variance of the flux at t. §.§ Large Fluctuations Here we summarize previous results on the asymptotic probability distribution for stress tensor fluctuations. Recall that x, defined in Eq. (<ref>), is a dimensionless measure of a momentum flux fluctuation. Then x ≈ x_ rms = O(1) represents a typical fluctuation, and x ≫ 1 is a large fluctuation. 
For large x, the probability distribution P(x) has the form P(x) ∼ c_0 x^b e^-a x^c , where the constants a, b, c, and c_0 depend upon the choice of sampling function. Often we are more interested in the probability of fluctuations larger than a given magnitude. This is described by the complementary cumulative distribution P_>(x) = ∫_x^∞ P(y) dy . For x ≫ 1, the asymptotic form is P_>(x) ≈ [c_0/(a c)] x^1+b-c e^-a x^c . Hence, both P(x) and P_>(x) decrease with the same exponential of a fractional power. A natural question which arises is how to describe the correlations of large fluctuations. This is still an unanswered question. The usual correlation function approach, using functions such as C(t,t'), describes the correlations of typical, not large, fluctuations. Here an alternative approach will be suggested. Recall that the large fluctuations are determined by the large moments, as described by Eqs. (<ref>) and (<ref>). This motivates the definition of generalized correlation functions of the form C_m,n(t,t') = ⟨ 0|(S^z)^m(t) (S^z)^n(t') |0⟩ . Thus, C_11 is the usual correlation function, and C_nn is the correlation function for the operator (S^z)^n. It will be a topic of future research to study applications to the correlations of large fluctuations. §.§.§ Worldline Averaging In the limit when the spatial sampling scale is sufficiently small, we are essentially averaging the stress tensor along a timelike worldline. In this case, the exponent c in the asymptotic form of the probability distribution is given by c = α/3 . Because α < 1, this implies a slowly decreasing tail with c < 1/3, with an enhanced probability for large fluctuations. The criterion for the validity of the worldline approximation is x ≲ (τ/ℓ)^3 . §.§.§ Spacetime Averaging When x ≳ (τ/ℓ)^3 , the worldline approximation no longer holds, and the effects of spatial averaging become important. In this case, the exponent c becomes c ≈α . This behavior has been confirmed by the results of numerical diagonalization <cit.>. As x increases for fixed τ/ℓ≫ 1, the probability distribution P(x) makes a smooth transition from the form described by Eq. (<ref>) to that of Eq. (<ref>). The enhanced rate of decrease of P(x) reflects the role of spatial averaging in suppressing large vacuum fluctuations. Because α < 1, the probability distribution is still decreasing more slowly than an exponential function, and much more slowly than the Gaussian distribution which describes random processes. This is a reflection of the fact that quantum fluctuations are highly correlated. § EFFECTS ON AN ELECTRON §.§ Electron Momentum Fluctuations Now we take up the possibility of observing radiation pressure fluctuations by their effects on the motion of an electron. Assume that the electron is initially at rest in the laboratory frame, and is then subjected to radiation pressure with a duration of τ and a magnitude of |S^z| = x/τ^4. Here we assume that the photons, whether real or virtual, producing the radiation pressure have energies of the order of the electron rest mass m or smaller. In this case the photon-electron interaction may be treated as Thomson scattering, for which the cross section is σ_T = 8 π α_fs^2/(3 m^2) , where α_fs ≈ 1/137 is the fine structure constant. The characteristic change in the electron's linear momentum is Δ p ≈ S^z τ σ_T . If the electron's motion in the laboratory frame remains non-relativistic, this corresponds to a change in speed of order Δ v ≈ Δ p/m = 8 π α_fs^2 x/(3 (m τ)^3) , or Δ v ≈ 1.5 × 10^-4 x/(m τ)^3 .
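These relations are easy to evaluate numerically. The sketch below (an illustration, not part of the original paper) restores units to confirm that the Thomson cross section formula gives the familiar value of about 6.65 × 10^-25 cm^2, and then provides a trivial helper for the velocity change, taking the 1.5 × 10^-4 prefactor quoted in the text as given.

    import math

    alpha_fs = 1.0 / 137.036      # fine structure constant
    hbar_c = 197.327              # MeV * fm
    m_e = 0.511                   # electron mass, MeV

    # sigma_T = 8 pi alpha_fs^2 / (3 m^2); restoring units via (hbar c / m)^2
    sigma_T_fm2 = (8.0 * math.pi / 3.0) * alpha_fs**2 * (hbar_c / m_e) ** 2
    print(sigma_T_fm2 * 1e-26, "cm^2")   # 1 fm^2 = 1e-26 cm^2, giving ~6.65e-25 cm^2

    def delta_v(m_tau, x, coeff=1.5e-4):
        """Characteristic speed change for a flux fluctuation of size x,
        with the averaging time expressed through the product m*tau."""
        return coeff * x / m_tau**3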
The probability distribution for the resulting modification of the electron's momentum is given by Eq. (<ref>) for x ≫ 1, where x becomes a function of Δ p: x(Δ p) = 3 m^2 τ^3 Δ p/(8 π α_fs^2) . The resulting function P(Δ p) is an exponential function which depends upon a negative power of the coupling constant α_fs, and is hence a non-analytic function. Such a function cannot arise in any finite order of perturbation theory. §.§ Need for Spatial Averaging If we assume that the dimensions of the electron wave packet dictate the spatial averaging scale, then we need to enquire about possible transverse spreading of this wave packet. If the initial transverse dimension is of order ℓ, this implies a transverse momentum of order p_T ≈ 1/ℓ . In a time τ, this will lead to transverse spreading of the order of δℓ ≈ (p_T/m) τ . If we require that δℓ ≲ ℓ , then we have τ/ℓ ≲ m ℓ . We may rewrite Eq. (<ref>) as Δ v ≈ 1.5 × 10^-4 (ℓ/τ)^3 x/(m ℓ)^3 . In the worldline approximation, where Eq. (<ref>) holds, we have Δ v ≲ 1.5 × 10^-4/ (m ℓ)^3 . If we expect that ℓ is large compared to the electron Compton wavelength, m ℓ ≫ 1, this places a very strong constraint on the magnitude of Δ v which can be achieved in the worldline approximation. For this reason, we turn to the case where spatial averaging is important. §.§ Probability Estimates Here we wish to make some rough estimates of the probability for a stress tensor fluctuation of a given magnitude using results obtained in Refs. <cit.> and <cit.>. These references consider the case of the operator :φ̇^2:, and we assume that the probability of a large fluctuation of this operator is of the same order as those of S^z. We further assume that the spatial sampling function, g(𝐱), is spherically symmetric, and that its Fourier transform, ĝ(𝐤) = ĝ(k), decays for large k as does f̂(ω) in Eq. (<ref>), but with τ replaced by ℓ. As described in Sect. IIB of Ref. <cit.>, ĝ(k) is determined by a one-dimensional function ĥ(ω). The asymptotic distribution in the spacetime averaged limit is found to have the form P(x) ∝ e^-(1+√(2 s)) √(x/B) , for s = ℓ/τ ≲ 1, where α = 1/2. The constant B in this case is B = f(0)/(8 π |h''(0)| s^2) . Note that as s decreases, B increases, and P(x) falls more slowly with increasing x. This reflects the fact that for fixed x and decreasing s, P(x) will eventually transition to the worldline form given by Eq. (<ref>). The proportionality constant in Eq. (<ref>) is not uniquely determined by the moments approach used in Ref. <cit.>, but could be found by the numerical diagonalization approach of Refs. <cit.>. For the purposes of an order of magnitude estimate, we will assume that this proportionality constant is of order one. The probability of a fluctuation of x or greater is now found from Eq. (<ref>) to be of order P_> ≈ [2 √(B)/(1+√(2 s))] x^1/2 P(x) . If we use the specific sampling functions described in Ref. <cit.>, where f(0) ≈ 1.5 and |h''(0)| ≈ 0.076 , we find B ≈ 1.0/s^2 and P(x) ≈ e^-(1 +√(2 s)) s √(x) for the case α = 1/2 and b ≈ 0. This result appears to be supported by the numerical calculations in Ref. <cit.>, in the upper panel of Fig. 2, where s ≈ 0.14. Now we consider a few numerical examples: §.§.§ m τ =100 , ℓ =0.1 τ , and x = 10^4 Here we find Δ v ≈ 1.5 × 10^-6 and P_> ≈ 2.3 × 10^-4 . §.§.§ m τ =1 , ℓ = τ , and x = 10 In this case, where we are at the limit of the non-relativistic approximation, the results are Δ v ≈ 1.5 × 10^-3 and P_> ≈ 1.3 × 10^-3 .
Here we are considering a pulse of photons in the gamma ray energy range. Such pulses might be generated by the back scattering of optical frequency photons from high energy electrons <cit.>. §.§ What constitutes a measurement of S^z ? The mathematical definition of the averaged momentum flux operator is given in Eq. (<ref>). However, we would like to link the sampling functions f(t) and g(𝐱) to a physical measurement. One option is to let them be determined by the envelope function of a wave packet mode function. Let g(𝐱) be proportional to the envelope function at a fixed time t, and let f(t) be proportional to the envelope function at a fixed point in space as a function of time. Suppose that the electromagnetic field is in a coherent state of this mode. The scattering of the real photons in this state by an electron not only produces radiation pressure, but can also serve to measure changes in the motion of the electron. Potentially, this can include changes due to vacuum radiation pressure fluctuations. If the real photons in the wave packet are moving in the +z direction, a single photon-electron scattering event produces a recoil in the same direction, as discussed in the Appendix. In the presence of several electrons, an electron can recoil in the opposite direction due to back scattered photons, but this effect will depend upon the electron density. The vacuum fluctuations are equally likely to produce a force in the opposite direction, and hence electron recoil in the -z direction. A related question is how many photons need to be in the coherent state for the scattering of the photons to constitute a measurement of S^z. If we need a large mean number of photons, |z| ≫ 1, then the scattering by real photons may mask the effects of the vacuum fluctuations. An alternative may be that repeated measurements by wave packets with a relatively small number of photons may be effective in measuring S^z, and hence the vacuum radiation pressure fluctuations. If this is the case, then a mean number of photons which is small compared to one may suffice if the number of repeated measurements is large. In this case, vacuum fluctuations could dominate over the effects of photon number fluctuations. § SUMMARY A model for the detection of quantum radiation pressure fluctuations has been presented. The quantum radiation pressure operator is averaged in space and time by the interaction of a localized photon wave packet with an electron. This averaged operator will have vacuum fluctuations which are equally likely to be in any direction, and which have an asymptotic probability distribution that decays as an exponential of a fractional power, leading to the possibility of large fluctuations. If the quantum state of the electromagnetic field is a coherent state of real photons traveling in the +z-direction, the expectation value of the radiation pressure on an electron will also be in this direction, and is due to the effect of photon-electron scattering. However, there will be two sources of fluctuations in this pressure: (1) Fluctuations in the number of photons in the coherent state <cit.>. These cause variations in the magnitude of the pressure, but cannot change its sign. This is due to the fact, discussed in the Appendix, that an electron at rest scattering with a photon moving in the +z-direction will recoil in this direction. (2) Vacuum fluctuations of the radiation pressure operator, which are equally likely to impart momentum in either direction.
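Point (1) relies on the kinematic result derived in the Appendix, that an electron initially at rest cannot recoil against the direction of the incoming photon. A quick numerical cross-check of that statement (illustrative, not part of the original paper), using the standard Compton relation in units with ħ = c = m = 1:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100000
    omega0 = rng.uniform(0.01, 10.0, n)   # incident photon energy in units of m
    phi = rng.uniform(1e-3, np.pi, n)     # photon scattering angle

    # Compton relation for the scattered photon energy
    omega = omega0 / (1.0 + omega0 * (1.0 - np.cos(phi)))

    # Electron recoil from momentum and energy conservation
    px = omega0 - omega * np.cos(phi)
    py = -omega * np.sin(phi)
    E = 1.0 + omega0 - omega

    # Consistency check: the recoiling electron stays on shell, E^2 - p^2 = 1
    assert np.allclose(E**2 - (px**2 + py**2), 1.0)

    cos_theta = px / np.sqrt(px**2 + py**2)
    print(cos_theta.min())                # always >= 0: the electron never back-scatters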
If the operator averaging function may be taken to be the wave packet envelope function, then the probability of large fluctuations depends upon the asymptotic Fourier transform of the envelope function, and hence the asymptotic power spectrum of the photon pulse. Some numerical estimates of the probability of the vacuum fluctuations required for various electron speeds are given in Sect. <ref>. At the limit of the non-relativistic approximation we see that Δ v ≈ 10^-3 could occur with a probability of the order of a few times 10^-3. This might be observable if the vacuum fluctuation effect can be disentangled from the effects of scattering by real photons. This might be possible if the mean number of photons can be made small enough. This is a topic for further work. As noted in Sec. <ref>, the effect of large vacuum radiation pressure fluctuations on the state of the electron is non-perturbative. Another future topic will be the extension of the present model beyond the non-relativistic approximation. This approximation arises in part from the use of the Thomson cross section, Eq. (<ref>), to describe the scattering of virtual photons by an electron. However, the Compton cross section decreases slowly (as a logarithm) as the photon energy increases above the electron rest mass energy. This suggests that the non-relativistic approximation may be a good order of magnitude estimate at these higher energies. Other topics for further work include a better understanding of the correlations between large fluctuations discussed in Sect. <ref>, and of the non-perturbative correction to the electron's quantum state discussed in Sect. <ref>. I would like to thank Chris Fewster and Ken Olum for useful conversations. This work was supported in part by the National Science Foundation under Grant PHY-2207903. § ELECTRON-PHOTON SCATTERING KINEMATICS Here we review the relativistic kinematics of an electron-photon collision. We also derive the key result that an electron which is initially at rest in the laboratory frame cannot recoil into the direction from which the photon came. In this frame, an initial photon moving in the +x direction with energy ω_0 collides with an electron at rest, as illustrated in Fig. <ref>. The electron recoils with a speed v at an angle θ, and the photon with energy ω at an angle ϕ, as shown. The initial four-momenta of the electron and photon are p^μ_i = (m,0,0,0) and k^μ_i = ω_0 (1,1,0,0) , respectively. The corresponding final momenta are p^μ_f = m γ (1, v cosθ, v sinθ, 0) and k^μ_f = ω (1, cosϕ, sinϕ, 0) , where γ = (1-v^2)^-1/2, or v = (γ^2-1)^1/2/γ. Conservation of energy and momentum, p^μ_i + k^μ_i = p^μ_f + k^μ_f , leads to ω = 1 + ω_0 - γ , ω_0 = (γ^2-1)^1/2 cosθ + ω cosϕ , and (γ^2-1)^1/2 sinθ = ω sinϕ . Here units in which m = 1 have been adopted. Use of (<ref>) to eliminate ω leads to ω_0 = (γ^2-1)^1/2 cosθ + (1 + ω_0 - γ) cosϕ , and (γ^2-1)^1/2 sinθ = (1 + ω_0 - γ) sinϕ . Finally, use of cos^2 ϕ + sin^2 ϕ = 1 leads to the result cosθ = √((γ - 1)/(γ + 1)) (ω_0 + 1)/ω_0 . Because γ ≥ 1, this implies cosθ ≥ 0, and hence θ ≤ π/2. This is our key result, that the electron cannot back scatter into the direction from which the photon came. There is a simple intuitive explanation: Energy conservation requires ω < ω_0. The final photon energy must be less than the initial photon energy to provide kinetic energy to the electron. As a result, the final +x component of the photon's momentum will be less than its initial value.
This requires that the electron recoil in the +x direction to conserve momentum. 99 FewsterFordRoman:2010 C.J. Fewster, L.H. Ford and T.A. Roman, Probability Distributions of Smeared Quantum Stress Tensors, Phys. Rev. D 81, 121901(R) (2010), arXiv:1004.0179. FF15 C.J. Fewster and L.H. Ford, Probability Distributions for Quantum Stress Tensors Measured in a Finite Time Interval , Phys. Rev. D 92, 105008 (2015), arXiv:1508.02359. FFR2012 C. J. Fewster, L. H. Ford and T. A. Roman, Probability Distributions for Quantum Stress Tensors in Four Dimensions,” Phys. Rev. D 85, 125038 (2012), arXiv:1204.3570. FF2020 C. J. Fewster and L. H. Ford, Probability Distributions for Space and Time Averaged Quantum Stress Tensors, Phys. Rev. D 101, 025006 (2020), arXiv:1909.07295. SFF18 E. D. Schiappacasse, C. J. Fewster and L. H. Ford, Vacuum Quantum Stress Tensor Fluctuations: A Diagonalization Approach, Phys. Rev. D 97, 025013 (2018), arXiv:1711.09477. WSF21 P. Wu, E. D. Schiappacasse and L. H. Ford, Space and Time Averaged Quantum Stress Tensor Fluctuations, Phys. Rev. D 103, 125014 (2021), arXiv:2104.04446. WSF23 P. Wu, E. D. Schiappacasse and L. H. Ford, Frequency spectra analysis of space- and time-averaged quantum stress tensor fluctuations, Phys. Rev. D 107, 036013 (2023), arXiv:2211.12001. HF17 H. Huang and L. H. Ford, Vacuum Radiation Pressure Fluctuations and Barrier Penetration, Phys. Rev. D 96, 016003 (2017); arXiv:1610.01252. HF22 H. Huang and L. H. Ford, Vacuum decay induced by quantum fluctuations, Phys. Rev. D 105, 08025 (2022), arXiv:2005.08355. FS09 L. H. Ford and N. F. Svaiter, Quantum Density Fluctuations in Classical Liquids, Phys. Rev. Lett. 102, 030602 (2009), arXiv:0809.1851. WF20 P. Wu and L. H. Ford, Large Zero Point Density Fluctuations in Fluids, Phys. Rev. Research 2, 032028 (2020), arXiv:2005.04266. F21 L.H. Ford, Vacuum Radiation Pressure Fluctuations on Atoms, Phys. Rev. A 104, 012208 (2021), arXiv:2112.14285. Caves80 C.M. Caves, Quantum-Mechanical Radiation-Pressure Fluctuations in an Interferometer, Phys. Rev. Lett. 45 , 75 (1980). Caves81 C.M. Caves, Quantum-mechanical noise in an interferometer, Phys. Rev. D 23 , 1693 (1981). WF01 C.H. Wu and L.H. Ford, Quantum Fluctuations of Radiation Pressure, Phys. Rev. D, 64, 04510 (2001), arXiv:quant-ph/0012144. FF24 C. J. Fewster and L. H. Ford, Probability Distribution for Vacuum Energy Flux Fluctuations in Two Spacetime Dimensions, manuscript in preparation. FR05 L.H. Ford and T. A. Roman, Minkowski Vacuum Stress Tensor Fluctuations, Phys. Rev. D 72, 105010 (2005), arXiv:gr-qc/0506026. FR07 L.H. Ford and T. A. Roman, Phys. Rev. D 76, Energy Density-Flux Correlations in an Unusual Quantum State and in the Vacuum, Phys. Rev. D 76, 064012 (2007), arXiv:0706.1970. Milburn63 R. H. Milburn, Electron Scattering by an Intense Polarized Photon Field, Phys. Rev. Lett. 10, 75 (1963). Milburn65 C. Bemporad, R. H. Milburn, N. Tanaka, and M. Fotino, High-Energy Photons from Compton Scattering of Light on 6.0-GeV Electrons, Phys. Rev. 138, B1546 (1965).
http://arxiv.org/abs/2409.03655v1
20240905161031
Privacy versus Emotion Preservation Trade-offs in Emotion-Preserving Speaker Anonymization
[ "Zexin Cai", "Henry Li Xinyuan", "Ashi Garg", "Leibny Paola García-Perera", "Kevin Duh", "Sanjeev Khudanpur", "Nicholas Andrews", "Matthew Wiesner" ]
eess.AS
[ "eess.AS", "cs.LG" ]
Privacy versus Emotion Preservation Trade-offs in Emotion-Preserving Speaker Anonymization Zexin Cai, Henry Li Xinyuan, Ashi Garg, Leibny Paola García-Perera, Kevin Duh, Sanjeev Khudanpur, Nicholas Andrews, Matthew Wiesner September 9, 2024 ============================================================== § ABSTRACT Advances in speech technology now allow unprecedented access to personally identifiable information through speech. To protect such information, the differential privacy field has explored ways to anonymize speech while preserving its utility, including linguistic and paralinguistic aspects. However, anonymizing speech while maintaining emotional state remains challenging. We explore this problem in the context of the VoicePrivacy 2024 challenge. Specifically, we developed various speaker anonymization pipelines and find that approaches either excel at anonymization or preserving emotion state, but not both simultaneously. Achieving both would require an in-domain emotion recognizer. Additionally, we found that it is feasible to train a semi-effective speaker verification system using only emotion representations, demonstrating the challenge of separating these two modalities. voice privacy, emotion recognition, speaker verification, speech anonymization, voice conversion, speech synthesis § INTRODUCTION Voice-driven interaction has been integrated into various aspects of human life, making tasks more convenient and hands-free. This technology has seen significant growth in the modern era, with notable examples including virtual assistants on smart devices, wearable technology, and customer service applications. However, the increasing use of voice-driven interaction raises security and privacy concerns, particularly regarding the exposure of speech recordings to fraudsters and hackers when transmitted over untrusted public networks <cit.>. Consequently, the personally identifiable information in the raw speech signal can be susceptible to leakage or extraction <cit.>. To mitigate privacy concerns associated with the potential interception and misuse of speech data, speech anonymization is employed to protect the most sensitive information, speaker identity, within speech. Specifically, speech anonymization aims to suppress acoustic characteristics that could be used to identify the speaker while at the same time preserving other characteristics, chiefly linguistic content, within the speech. The field of speech anonymization is still nascent, with formal definitions and a comparison platform for solutions on standardized datasets and protocols recently established by the VoicePrivacy Challenge series <cit.>. Since speech anonymization inherently involves altering and transforming speech, most research has centered on techniques such as voice conversion (VC), speech synthesis, noise addition, and traditional signal processing methods to achieve anonymization <cit.>. Among the developed anonymization techniques, the x-vector-based method <cit.>, used as the baseline for the VoicePrivacy challenge, offers a flexible choice of pseudo-speaker and achieves adequate performance in privacy and utility assessments. Essentially, the x-vector-based method employs a framework similar to an any-to-any VC approach, synthesizing anonymized speech by conditioning the framework with x-vector <cit.> speaker representations to produce pseudo-speakers' voices.
Several subsequent studies have improved the x-vector-based method from various angles to boost its privacy protection ability <cit.>, such as constructing x-vectors via singular value modification <cit.> and using a generative model to sample pseudo-speakers in the x-vector space <cit.>. Beyond the approaches described above that achieve speech anonymization through acoustic models like those used in VC techniques, other research explores a speech synthesis-based method by cascading automatic speech recognition (ASR) and text-to-speech (TTS) systems <cit.>, which can significantly eliminate speaker identity footprints in speech. Recent developments in the speech anonymization community have presented a more complex anonymization scenario. Besides preserving linguistic content and hiding speaker identity, an anonymization system should also maintain unchanged paralinguistic attributes <cit.>. Under these conditions, researchers struggle to conceal speaker identity while retaining paralinguistic attributes, highlighting the trade-off between utility (paralinguistic attributes) and privacy (speaker identification) in this setting <cit.>. The VoicePrivacy Challenge 2024 emphasizes the preservation of emotional state <cit.>. Additionally, the challenge recognizes the risk that an attacker could access anonymized data and train a new speaker verification model on it. Therefore, understanding how emotion and speaker information are entangled in speech signals is essential in this anonymization context to overcome the privacy-utility trade-off. Earlier work on anonymization has explored the privacy-utility trade-off, but does not investigate its causes or potential solutions <cit.>. Aside from voice timbre, prosodic features such as melody, rhythm, and intensity—shaped by a speaker’s social environment and critical learning period—also provide significant information about their identity <cit.>. This theory is supported by findings that the source speaker can be recognized to a certain degree in voice-converted speech <cit.>. Inspired by the above research, this paper delves into the factors that might cause the leakage of speaker identity and investigates the relationship between speaker and emotion in speech. To achieve this, we apply various VC-based and cascaded ASR-TTS methods to the anonymization task in the VoicePrivacy Challenge 2024. Our study reveals that speech emotion recognition (SER) and automatic speaker verification (ASV) systems rely on overlapping speech attributes. Disentangling identity from acoustic properties is a non-trivial task, as these properties are closely related. While we can minimize the trade-off between privacy and emotion preservation given prior knowledge of the corresponding in-domain emotion recognizer, the challenge of separating speaker and emotion information in speech remains significant. Finally, our results suggest that emotion recognizers can serve as a reliable objective evaluation metric in emotional speech synthesis. § METHOD §.§ Task Definition and Evaluation Metrics In the semi-informed speech anonymization task a user supplies speech data and attempts to protect their identity using a speech anonymization system. An attacker, who has access to the anonymized data, attempts to discover the speaker's identity. A speech anonymization system is created to obscure the user’s identity while maintaining the linguistic content and paralinguistic attributes. 
In the VoicePrivacy Challenge 2024, the goal is to maintain the emotional state of the speech post-anonymization. As such, the anonymization performance is evaluated from two fronts: privacy evaluation and utility evaluation. The privacy metric measures how well the system conceals the original speakers’ identities, whereas the utility metrics access the retention of content and emotional state. Figure <ref> illustrates the anonymization and evaluation pipeline of this task. The core element is the anonymization module, which converts each original input audio into anonymized audio under the following conditions: 1. preserving linguistic content, 2. preserving emotional state, and 3. removing speaker identity information. The privacy evaluation pipeline adheres to a standard speaker verification process. The verification model is trained on anonymized data labeled with the original speakers’ identities. An effective anonymization system should sufficiently distort and obscure the original identities at the waveform level, preventing the speaker verification system from identifying different speakers. During evaluation, pairs of source speech from the evaluation dataset are anonymized and treated as enrollment and test speech. The speaker verification model then assesses whether the two utterances originate from the same original speaker. With a perfect anonymization system, the verification system, acting as the attacker, performs no better than random guessing. The main metric for privacy evaluation is the equal error rate (EER), calculated based on similarity scores from pairs of utterances in the anonymized evaluation set, known as trials. A lower EER indicates a greater risk of speaker re-identification, thus a higher EER indicates better performance in preserving voice privacy. For utility evaluation, the anonymized evaluation data is transcribed using a speech recognition system. The performance in preserving content is assessed by comparing these transcripts to the ground truth content from the source data and measuring the word error rate (WER). Similarly, an emotion recognizer is employed on the anonymized data to determine the emotional state of the anonymized speech. In this case, four emotion states—Happy, Neutral, Sad, and Angry—are evaluated. An anonymization system demonstrates good emotion preservation performance if the emotional state of the anonymized speech matches that of the original speech. Preservation of emotion state is measured using the Unweighted average recall (UAR) <cit.>. In general, A lower WER denotes superior preservation of linguistic content, whereas a higher UAR indicates superior preservation of emotion states. §.§ Anonymization Approaches We employ two primary synthesis approaches to achieve anonymization for the described task. The speech anonymization process is shown in Figure <ref>, where one method is based on voice conversion (VC) models, and the other employs a cascaded ASR-TTS pipeline. VC is a method that changes the voice of the source speech to match that of a target speaker, preserving the content and most prosodic features. This technique aligns closely with the objectives of the anonymization task. We explore two VC-based systems that can convert source utterances to a variety of target speakers. The first VC-based anonymization system, similar to the x-vector-based system from the VoicePrivacy Challenge, conditions on speaker representations to convert the source voice to a target speaker’s voice. 
The model uses the content representation extracted by a pre-trained self-supervised learning (SSL) model called ContentVec <cit.> as input. This approach uses a transformer-based VC system <cit.> to convert the input content representation into the target speaker’s Mel-spectrogram by conditioning on the target speaker’s representation vector. The audio waveform is then reconstructed from the Mel-spectrogram using a HiFi-GAN vocoder <cit.>. Another VC-based solution utilizes kNN VC <cit.>, functioning at the WavLM-feature <cit.> level. The kNN-VC system maps the WavLM features of the source utterance to those of the target speaker using k-nearest neighbor regression. Each frame from the source speech is replaced by the average of the k-nearest neighbor target WavLM features, followed by a HiFi-GAN vocoder to synthesize the target utterance. This approach, unlike the previous VC method, necessitates a target speech corpus for conversion. While VC-based systems can effectively modify the acoustic characteristics related to the timbre of source speakers, certain prosodic features, reflecting the speakers’ habitual speaking styles, remain unchanged. These prosodic features could be used to identify the speaker. To mitigate this, we employ a cascaded ASR-TTS pipeline to enhance anonymization by modifying the speaking style of the source speech. As shown in Figure <ref>, we first transcribe the source utterance using an ASR system. Subsequently, a multi-speaker TTS system generates the anonymized utterance, cloning the voice and speaking style from a prompt utterance. § EXPERIMENTS In this section, we explore the relationship between emotion and speaker information using the anonymization pipeline from the VoicePrivacy 2024 challenge, focusing on English corpora. We anonymized datasets using the systems in Section <ref> and evaluated their privacy and utility performance. For the cascaded approach, we used various utterance prompts, from randomized speech to audio containing some original speaking style, to study the impact on speaker identity exposure. After identifying the trade-off between privacy and emotion preservation, we explored strategies to mitigate it. Additionally, we analyzed the extraction of speaker information from emotion embeddings to understand their overlap and the challenges in disentangling them. §.§ Dataset The VoicePrivacy 2024 challenge uses subsets from the LibriSpeech <cit.> and IEMOCAP <cit.> corpora for development and evaluation. More details can be found in the data description section of the challenge’s evaluation plan <cit.>. There are 10 subsets specifically designated for the evaluation process. The subsets libri-dev-asr and libri-test-asr are used for ASR evaluation. The subsets libri-dev-enrolls, libri-dev-trials-f, libri-dev-trials-m, libri-test-enrolls, libri-test-trials-f, and libri-test-trials-m are used for evaluating privacy (speaker verification) performance. The subset libri-train-clean-360 is employed for training the speaker verification system following anonymization. For emotion preservation performance, the subsets IEMOCAP-dev and IEMOCAP-test are utilized. We also incorporate the LibriTTS <cit.> speech synthesis dataset in our experiments. This dataset comprises 585 hours of clean speech data at 24kHz from 2,456 speakers. We also utilize the VoxCeleb1 <cit.> dataset in our study, with the training data including 148,642 utterances from 1,211 speakers and the test set comprising 4,874 utterances from 40 speakers. 
There is no overlap between the training and testing data in both the source and target datasets. §.§ Experimental Details We train the speaker embedding-conditioned VC model outlined in Section <ref> using the LibriTTS training sets. Content features are extracted with the pre-trained ContentVec_legacy-500 model,[<https://github.com/auspicious3000/contentvec>] and speaker embeddings are obtained from an ECAPA-TDNN model <cit.> by SpeechBrain <cit.>. The VC conversion system, including the feature transformation and vocoder modules, is trained on audio recordings at a 24kHz sample rate. After anonymization, we downsample the synthesized audio to 16kHz for evaluation. For the kNN-VC method, k is set to 4. In the cascaded ASR-TTS approach, we employ the ‘medium-en’ model from Whisper[<https://github.com/openai/whisper>] <cit.> as our ASR system to transcribe the source utterance. The Whisper model achieves a WER of 3.38% on the libri-dev-asr set and 3.29% on the libri-test-asr set. For the study of privacy and emotion preservation, we choose the open-source synthesis model XTTS,[<https://github.com/coqui-ai/TTS>] which is a generative TTS model providing high-fidelity synthesis and capable of voice and style cloning based on a prompt audio segment. §.§ Anonymization Performance We anonymized the utterances from the datasets selected by the VoicePrivacy challenge[<https://github.com/Voice-Privacy-Challenge/Voice-Privacy-Challenge-2024>] using various approaches detailed in Section <ref>. The anonymization performance of different systems is summarized in Table <ref>, with the corresponding systems annotated as follows: * Origin: The original, unanonymized speech. * ConVec2Mel-VC: The speaker embedding-conditioned VC system we developed. During anonymization, the target embedding is extracted from a randomly selected utterance from LibriTTS. * kNN-VC: The kNN-based VC method. For each utterance, the WavLM feature pool is obtained from a randomly chosen target speaker in LibriTTS, with the target pool comprising at least 5 minutes of audio. * ConVec2Mel-VC-XTTS: This system utilizes the cascaded ASR-TTS method. For each source utterance, the prompt speech is the corresponding anonymized speech from ConVec2Mel-VC. * kNN-VC-XTTS: This system follows the cascaded ASR-TTS approach. For each source utterance, the prompt speech is the corresponding anonymized speech from the kNN-VC system. * XTTS: The cascaded ASR-TTS anonymization method, where the prompt utterance during inference is randomly chosen from the LibriTTS dataset. As indicated in the table, VC-based anonymization systems perform better in emotion preservation. Both ConVec2Mel-VC and kNN-VC show comparable performance, with an average UAR of around 49%, implying that the speech attributes retained by these systems support the emotion recognizer in identifying the target emotion. Nevertheless, some hidden speech characteristics tied to speaker identity remain unanonymized, allowing the speaker verification model to detect these patterns, resulting in an average EER of less than 20%. Specifically, ConVec2Mel-VC achieves an EER of 9.7% in privacy evaluation, while the kNN-VC approach attains an average EER of around 15.19%. Therefore, although the VC anonymization systems preserve some level of emotion, they also leak speaker information. Both systems maintain the original content well, as reflected in the WER results. 
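As a concrete illustration of the kNN-VC mapping used in these systems (with k = 4), the frame-level matching step can be written in a few lines of PyTorch. This is a schematic re-implementation of the published kNN-VC idea — each source WavLM frame is replaced by the average of its k nearest frames in the target speaker's feature pool — not the exact code used in our experiments; WavLM feature extraction and HiFi-GAN vocoding are assumed to happen outside this function.

    import torch
    import torch.nn.functional as F

    @torch.no_grad()
    def knn_vc_match(source_feats, target_pool, k=4):
        """Replace each source frame with the mean of its k nearest target frames.

        source_feats: (T_src, D) WavLM features of the utterance to anonymize.
        target_pool:  (T_tgt, D) WavLM features pooled from the target speaker
                      (at least ~5 minutes of audio in our setup).
        Returns:      (T_src, D) converted features, to be passed to the vocoder.
        """
        # Cosine similarity between every source frame and every target frame
        src = F.normalize(source_feats, dim=-1)
        tgt = F.normalize(target_pool, dim=-1)
        sim = src @ tgt.T                          # (T_src, T_tgt)
        # Indices of the k most similar target frames for each source frame
        knn_idx = sim.topk(k, dim=-1).indices      # (T_src, k)
        # Average the selected target frames in the original feature space
        return target_pool[knn_idx].mean(dim=1)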
The XTTS system achieves the highest privacy performance among all systems, with an EER close to 50%, by cloning a random voice and speaking style from another utterance. However, when the XTTS system is conditioned on an utterance with a modified voice but preserved prosodic attributes, resulting anonymization systems like ConVec2Mel-VC-XTTS and kNN-VC-XTTS can achieve higher emotion preservation scores. For instance, the ConVec2Mel-VC-XTTS system, which clones the speaking style from the ConVec2Mel-VC system, achieves an average UAR of 42.97%. This is lower than the UAR of ConVec2Mel-VC but higher than the XTTS approach, which has a UAR of 33.65%. Notably, the emotion preservation performance of kNN-VC-XTTS, while better than XTTS, is not significantly higher. This might be due to the lack of temporal coherence in the kNN-VC, leading to a distorted distribution compared to normal speech and causing the XTTS system to struggle with cloning the corresponding speaking style. This again leads to speaker identity leakage from these preserved attributes. Regarding privacy preservation, the ConVec2Mel-VC-XTTS system achieves an EER of about 34.65%, which is lower than the XTTS approach. This suggests that attributes other than voice timbre can reveal speaker information and are helpful in emotion recognition. §.§ Achieving the best of both worlds The systems discussed above demonstrate a clear trade-off between privacy and emotion preservation performance. The results are shown in Figure <ref>. As emotion preservation performance rises, speaker information leakage takes place, leading to a decrease in privacy performance. Based on the above results, we propose that randomly cloning a speaker’s voice with a different speaking style expressing the same emotion could break this trade-off. To test this, we use emotion embeddings extracted from emotion recognizers as a proxy to find a target utterance for voice and style cloning in the cascaded ASR-TTS approach. This study examines two types of emotion embeddings. The first is a concatenated emotion representation from embeddings extracted by five pre-trained emotion recognizers provided by the challenge. This in-domain representation, derived from systems trained with IEMOCAP, represents the optimal emotion preservation achievable when the anonymization system has access to the SER system. The second embedding is obtained from an out-of-domain extractor[<https://huggingface.co/audeering/wav2vec2-large-robust-12-ft-emotion-msp-dim>] <cit.> trained with the MSP-Podcast dataset <cit.>. For each utterance to be anonymized, the prompt audio is selected from the LibriTTS dataset using the following steps: 1. randomly select 5000 utterances from the dataset, 2. calculate the cosine similarity between the emotion representations of source utterance and all 5000 utterances, 3. randomly choose one target utterance from the top 10 utterances with the highest similarity scores. The last two lines of Table <ref> present the results of the anonymization system that employs the emotion-proxy anonymization strategy. Emo_MSP-XTTS is based on the out-of-domain emotion recognizer, while Emo_IEMOCAP-XTTS relies on the in-domain emotion recognizer. Emo_IEMOCAP-XTTS achieves strong emotion preservation performance with a UAR of 52.43% and, simultaneously, high privacy performance with an EER of 45.24%. This system is marked in Figure <ref> as the ideal system, breaking the privacy-emotion preservation trade-off shown earlier. 
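The emotion-proxy prompt selection described above reduces to a few lines of code. The sketch below assumes that utterance-level emotion embeddings have already been extracted (either the concatenated in-domain representation or the MSP-Podcast one); all function names, array layouts, and the use of NumPy are our own illustrative choices rather than part of the challenge toolkit.

import random
import numpy as np

def select_prompt(source_emb: np.ndarray,
                  pool_embs: np.ndarray,
                  pool_paths: list,
                  n_candidates: int = 5000,
                  top_k: int = 10,
                  seed: int = 0) -> str:
    """Choose a prompt utterance whose emotion embedding is close to the source.

    source_emb: (D,)   emotion embedding of the utterance to anonymize.
    pool_embs:  (N, D) emotion embeddings of the LibriTTS pool.
    pool_paths: list of N audio paths aligned with pool_embs.
    """
    rng = random.Random(seed)

    # Step 1: randomly sample a subset of the pool (5000 utterances).
    subset = rng.sample(range(len(pool_paths)), k=min(n_candidates, len(pool_paths)))
    cand = pool_embs[subset]

    # Step 2: cosine similarity between the source and every candidate.
    src = source_emb / np.linalg.norm(source_emb)
    cand = cand / np.linalg.norm(cand, axis=1, keepdims=True)
    sims = cand @ src

    # Step 3: pick one prompt at random among the top-k most similar candidates.
    top = np.argsort(-sims)[:top_k]
    chosen = subset[int(rng.choice(top.tolist()))]
    return pool_paths[chosen]

# The selected utterance is then given to XTTS as the voice/style prompt,
# together with the Whisper transcript of the source speech.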
However, this assumes that the anonymization system has prior knowledge of the emotion recognition system. In the alternative scenario, where the anonymization system possesses out-of-domain prior emotion knowledge, Emo_MSP-XTTS achieves privacy performance comparable to Emo_IEMOCAP-XTTS but struggles to find the best match for emotion, with an emotion preservation UAR of 36.94%. Despite this, the emotion preservation score is higher than that of the XTTS system, suggesting that the emotion-proxy strategy is effective in the anonymization task. §.§ Speaker-Identifying Information in Emotion Embeddings We extract emotion embeddings by models trained with IEMOCAP for 10 randomly selected speakers from libri-dev set and plot the embedding space by projecting it to 2D space using t-SNE. As observed from Figure <ref>, although the embeddings are learned by training models to classify four emotions, embeddings from distinct speakers are distributed apart, while embeddings from the same speaker are clustered together in the representation space. This suggests that emotion embeddings carry a certain amount of speaker information, leading to the trade-off in speech anonymization. To explore how much SER and ASV systems depend on the same speech characteristics for recognition, we employ the emotion recognizer as an utterance-level representation extractor and train a speaker verification model solely using the emotion representations. The emotion embeddings serve as input, followed by a hidden layer with 192 neurons, a dropout rate of 0.5, and the ReLU activation function to map the input to speaker embeddings. A speaker classifier is then employed to predict the target speaker. The training architecture and hyperparameter settings adhere to the challenge’s recipes, except that the input features are changed to emotion embeddings, and the speaker embedding extractor is simplified to a hidden layer instead of using the ECAPA-TDNN structure. The experiment is conducted on two datasets: libri-train-360 and VoxCeleb1. For both datasets, we split the data into training and validation subsets with a ratio of 0.9 and 0.1, respectively. For the model trained on the libri-train-360 dataset, we set the epochs to 50, while for the VoxCeleb1 dataset, we set the epochs to 200. After training, we select the model with the best performance on the validation set for evaluation. For each experiment, we use 5 different random seeds, training and evaluating the model on the corresponding data. The verification performance on the evaluation sets is shown in Table <ref>. Speakers from the test sets are distinguishable with models trained solely on emotion embeddings. Specifically, the model trained on libri-train-360 achieves an EER of 19.28% on the lib-dev-f set and EERs of less than 10% on other evaluation sets. The model trained with VoxCeleb1 achieves an EER of approximately 12.68% on the corresponding test set. These results indicate that a certain level of speaker information is embedded in the emotion embeddings. Such information can be extracted and learned by a single linear layer, highlighting the challenge of disentangling speaker and emotion attributes to fully conceal speaker identity while preserving the emotional state in speech anonymization. § DISCUSSIONS AND CONCLUSIONS To explore factors that expose speaker identity, we use VC approaches without paralinguistic attribute control in our study. 
Future work will investigate VC systems <cit.> that incorporate speaker-emotion disentanglement abilities in the anonymization setting. The XTTS system’s ability to clone the target speaker’s voice and speaking style, along with the better emotion preservation performance of the cascaded ASR-TTS system when cloning voice-converted utterances rather than random utterances, demonstrates the effectiveness of emotion cloning. Since there is no current standard for objective evaluation of emotion cloning in the speech synthesis field, our results indicate that the emotion recognition pipeline from the VoicePrivacy 2024 challenge could be well-suited for this purpose. While our experiments used prompt speakers and utterances from the LibriTTS dataset, using anonymized voices from more expressive corpora with richer emotions could provide a clearer insight into the entanglement between speaker information and emotion. Additionally, our study did not address the degree of speaker information exposure between speaking style and timbre, which remains unknown. The emotion recognizer trained with the IEMOCAP dataset, which has a limited number of speakers, contains retrievable speaker information. It would be interesting to see whether an emotion recognizer trained on a dataset with a larger number of speakers, like MSP-Podcast, retains more speaker information. This would be useful in understanding the speech attributes influencing data-driven speaker and emotion recognizers. In summary, this paper explores the entanglement between speaker and emotion in speech. Our experimental results on the speech anonymization task demonstrate that enhancing privacy preservation performance results in decreased emotion preservation, highlighting the trade-off between these two attributes. However, this trade-off can be overcome if the anonymization system incorporates a robust emotion recognizer. Furthermore, emotion recognizers also retain speaker information, suggesting that speaker and emotion recognizers depend on similar speech characteristics for recognition. Thus, disentangling emotion and speaker attributes from speech remains a challenging and significant task to address. §.§ Acknowledgement This work was supported by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via the ARTS Program under contract D2023-2308110001. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein. IEEEtran
http://arxiv.org/abs/2409.03043v1
20240904192756
Can Your Generative Model Detect Out-of-Distribution Covariate Shift?
[ "Christiaan Viviers", "Amaan Valiuddin", "Francisco Caetano", "Lemar Abdi", "Lena Filatova", "Peter de With", "Fons van der Sommen" ]
cs.CV
[ "cs.CV", "cs.AI", "cs.LG" ]
CovariateFlow C. Viviers Eindhoven University of Technology, Eindhoven, The Netherlands [email protected] Philips IGTs, Best, The Netherlands Can Your Generative Model Detect Out-of-Distribution Covariate Shift? Christiaan Viviers10000-0001-6455-0288 Amaan Valiuddin1Equal contribution0009-0005-2856-5841 Francisco Caetano110000-0002-6069-6084 Lemar Abdi1Lena Filatova2Peter de With10000-0002-7639-7716 Fons van der Sommen10000-0002-3593-2356 ========================================================================================================================================================================================================================================== § ABSTRACT Detecting Out-of-Distribution (OOD) sensory data and covariate distribution shift aims to identify new test examples with different high-level image statistics to the captured, normal and In-Distribution (ID) set. Existing OOD detection literature largely focuses on semantic shift with little-to-no consensus over covariate shift. Generative models capture the ID data in an unsupervised manner, enabling them to effectively identify samples that deviate significantly from this learned distribution, irrespective of the downstream task. In this work, we elucidate the ability of generative models to detect and quantify domain-specific covariate shift through extensive analyses that involves a variety of models. To this end, we conjecture that it is sufficient to detect most occurring sensory faults (anomalies and deviations in global signals statistics) by solely modeling high-frequency signal-dependent and independent details. We propose a novel method, CovariateFlow, for OOD detection, specifically tailored to covariate heteroscedastic high-frequency image-components using conditional Normalizing Flows (cNFs). Our results on CIFAR10 vs. CIFAR10-C and ImageNet200 vs. ImageNet200-C demonstrate the effectiveness of the method by accurately detecting OOD covariate shift. This work contributes to enhancing the fidelity of imaging systems and aiding machine learning models in OOD detection in the presence of covariate shift. The code for CovariateFlow is available at https://github.com/cviviers/CovariateFlowhttps://github.com/cviviers/CovariateFlow. § INTRODUCTION Identifying abnormal image statistics is critical for deploying precise sensing technology and reliable machine learning. Out-of-Distribution (OOD) detection methods model the available data or a set of In-Distribution (ID) features, to identify test examples drawn from a different distribution. Notably, generative models offer an unsupervised paradigm to model without making explicit assumptions on the form of the OOD data. With a plethora of possible covariates (abnormal variations in high-level image statistics) and potential downstream machine learning image applications, unsupervised generative modelling is a promising approach for general OOD detection. The prevailing approaches for OOD detection predominantly focus on the semantic contents of the image data. Therefore, this study elucidates covariate shifts, i.e. the change in distribution of high-level image statistics (covariates) subject to consistent low-level semantics. Likelihood-based methods, such as Normalizing Flows (NFs), offer an intuitive way of OOD detection by evaluating the likelihood of test samples. However, as evidenced in previous research <cit.>, NFs have exhibited limitations in effective OOD detection, often assigning higher likelihoods to OOD samples. 
Various works have explored this phenomenon and proposed alternative methods to direct likelihood estimation <cit.>. Recent theoretical investigations <cit.> indicate that these methodologies are inherently susceptible to certain types of OOD data. Moreover, the metrics employed for evaluation exhibit a predisposition towards specific categories of OOD data, suggesting an intrinsic limitation in the current approach to OOD detection. In this study we explore this phenomena while improving the covariate OOD detection capabilities of NFs. Additionally, we address this shortcoming by proposing to unify the log-likelihood (LL)-based metric with the typicality score <cit.> in a simple Normalized Score Distance (NSD). Other generative models have in various contexts been applied to the task of semantic OOD detection, ranging from density-based methods <cit.> to different reconstruction-based models <cit.>. However, OOD covariate shift within the context of generative models remains largely unexplored. We indicate two branches of covariate shift: (1) domain covariate shift, such as images in different styles (e.g. natural vs. sketch) and (2) domain-specific covariate shift (also known sensory anomalies <cit.>), images under different sensory conditions (e.g different lighting, cameras or sensory level degradation (Figure <ref>). Covariate shifts are recognized for their potential to significantly degrade the predictive performance of the model, where in some specialized imaging applications it can indicate system failure. Detecting these covariate factors and the distribution shifts under consistent semantic content <cit.>, will enhance the safety and reliability of imaging systems in diverse fields and the machine learning systems built on-top of these images <cit.>. This necessitates their detection, and if possible, the quantification of its severity. To this end, we are, to the best of our knowledge, the first to implement unsupervised, domain-specific OOD covariate shift detection. Images across different applications can demonstrate complex noise patterns and variability due to factors such as equipment variations, environmental conditions, and the specific nature of the imaged objects or scenes <cit.>. A novel and effective strategy for improving OOD detection should utilize the data-dependent (heteroscedastic) noise that is present in the signal. This inherent noise serves as a rich source of information that can be exploited to differentiate between ID and OOD samples. In fact, the noise patterns in images can encode subtle differences that may not be apparent from the semantic content in the image alone. To address these challenges and leverage the nuanced information encoded in noise patterns across various imaging applications, we propose a streamlined approach that models the conditional distribution between low-frequency and high-frequency signal components. This method contrasts with conventional techniques that attempt to model the entire signal distribution, which may inadvertently obscure critical covariate details. We employ a simple filtering approach that segregates the image into distinct low-frequency and high-frequency components. By focusing on the interaction between these frequency components, the proposed approach effectively detect covariate shifts. The research contributions of this work are as follows. 
* Unsupervised OOD covariate shift detection with comprehensive evaluation of generative model using CIFAR10(-C) and ImageNet200(-C) * CovariateFlow: A novel application of conditional Normalizing Flows for high-frequency heteroscedastic image-component density modelling. * Normalized Score Distance: which unifies Typicality and Log-likelihood as a general metric for OOD detection in normalizing flows. * Accurate detection of distribution shift and sensory-level changes indicating high sensitivity for potential fault detection. § BACKGROUND AND RELATED WORK §.§ Semantic Out-of-Distribution Detection Approaches to OOD detection are generally divided into two categories: supervised, which necessitates labels or OOD data, and unsupervised, which relies solely on in-distribution data. Although semantic OOD detection does not constitute the core focus of this study, we nevertheless provide a concise overview of the recent developments, since these methodologies hold the potential to translate to covariate OOD detection. For an in-depth exploration of OOD detection methodologies, we refer to the comprehensive review by Yang  <cit.>. §.§.§ Explicit Density Methods: A straightforward method for OOD detection involves the use of a generative model, p(x; θ) parameterized by θ and trained to fit a given distribution over data x. The process evaluates the likelihood of new, unseen samples under this model with the underlying assumption that OOD samples will exhibit lower likelihoods compared to those that are ID. The Evidence Lower Bound (ELBO) employed in Variational Auto Encoders (VAEs) <cit.> can be used for OOD detection by evaluating a lower bound on the likelihood of a test sample. Plumerault  <cit.> introduced the Adversarial VAE – a novel approach that marries the properties of VAEs with the image generation quality of Generative Adversarial Networks (GANs), thereby offering a robust auto-encoding model that synthesizes images of comparable quality to GANs while retaining the advantageous characteristics of VAEs. Unlike VAEs, Normalizing Flows (NFs) <cit.> offer exact and fully tractable likelihood computations. With the introduction of coupling layers <cit.>, NFs can be arbitrarily conditioned and seem to be excellent contenders for conditional OOD detection. However, as evidenced in previous research <cit.>, NFs have exhibited limitations in effective OOD detection, often assigning higher likelihoods to OOD samples. This limitation has been associated with an inherent bias in the flow model architectures, which tends to prioritize modeling local pixel correlations over the semantic content of the data <cit.>. Exploration by Gratwohl et al. <cit.> and Nalisnick et al. <cit.> posits that this phenomenon can be attributed to the fact that ID images are not high-likelihood samples but, rather, constituents of the typical set of the data distribution. Consequently, the investigation into methods that assess the typicality <cit.> of data instances, as an alternative to direct likelihood estimation, has gained traction. Despite empirical evidence demonstrating the efficacy of typicality in OOD benchmarks <cit.>, recent theoretical investigations <cit.> indicate that these methodologies have inherent susceptibilities to specific OOD types and an evaluative bias towards particular OOD categories, thereby underscoring the complexity of OOD detection. 
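To illustrate the difference between raw likelihood thresholding and a typicality test, the sketch below scores a test batch by its distance to a training-set entropy estimate, in the spirit of the typicality tests discussed above. It assumes a density model exposing a log_prob method (e.g., a normalizing flow); the function names and the bits-per-dimension normalization are our own choices, not a specific published implementation.

import torch

def nll_bits_per_dim(model, x: torch.Tensor) -> torch.Tensor:
    """Per-sample negative log-likelihood in bits/dim for a density model
    exposing log_prob (e.g., a normalizing flow)."""
    dims = x[0].numel()
    return -model.log_prob(x) / (dims * torch.log(torch.tensor(2.0)))

@torch.no_grad()
def typicality_score(model, train_loader, test_batch: torch.Tensor) -> torch.Tensor:
    """|empirical entropy of the test batch - training entropy estimate|.
    Large values indicate a batch that is not typical under the model."""
    nlls = []
    for batch in train_loader:
        xb = batch[0] if isinstance(batch, (tuple, list)) else batch
        nlls.append(nll_bits_per_dim(model, xb))
    h_hat = torch.cat(nlls).mean()                     # entropy estimate H(X)
    return (nll_bits_per_dim(model, test_batch).mean() - h_hat).abs()

# A batch is flagged as OOD when typicality_score(...) exceeds a threshold
# calibrated on held-out ID batches; thresholding the raw likelihood instead
# tends to rank low-complexity images as more "in-distribution" than ID data.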
§.§.§ Image Reconstruction-based Methods: These OOD detection methods are based on the principle that models are less effective at accurately reconstructing data that significantly deviates from the training distribution. Graham  <cit.> improves on an innovative approach to OOD detection that leverages the potent generative prowess of recent denoising diffusion probabilistic models (DDPMs) <cit.>. Unlike prior reconstruction-based OOD detection techniques that necessitated meticulous calibration of the model's information bottleneck <cit.>, their method utilizes DDPMs to reconstruct inputs subjected to varying degrees of noise. In this work, we implement this DDPM method as baseline for OOD covariate shift detection. §.§ Covariate shift In essence, covariate shift refers to the phenomenon where images share consistent semantic content (i.e. similar subjects), and yet, are captured under varying imaging conditions. The degree of variation in these conditions signifies the magnitude of the shift. For example, a minor shift might involve images of a subject under varying lighting conditions, while a more substantial shift, such as transitioning from natural images to graphical sketches of the same subject, exemplifies a transition towards domain shift (Figure <ref>). This paper concentrates on in-domain covariate shifts, as these scenarios represent instances where machine learning silently fails <cit.>. In related work on covariate shift detection, Averly . <cit.> adopt a model-centric approach to address both covariate and semantic shifts, suggesting a methodology for identifying instances that a deployed machine learning model, such as an image classifier, fails to accurately predict. This strategy implies that the decision to detect, and potentially exclude, a test example, is dependent on the specific model in question. While being effective for well-established machine learning models, this method inherently links the detection of shifts to the peculiarities of the individual model, resulting in each model having its unique set of criteria for rejecting data, which may vary broadly, even when applied to the same dataset. A significant drawback of this approach is its reliance on a robust pre-trained model, which poses a challenge for scenarios where identifying covariate shift is the primary objective, leaving such cases without a viable solution.Generalized ODIN <cit.> is another direction of work that adopts the model-centric approach. This method replaces the standard classification head and, instead, decomposes the output into scores to behave like the conditional probabilities for the semantic shift distribution and the covariate shift distribution. This approach is then only evaluated on out-of-domain covariate shift such as the DomainNet <cit.> benchmark. Follow-up work by Tian  <cit.> further explores calibrating the confidence functions proposed in <cit.>, which realizes improvements on both semantic and covariate OOD detection. They additionally apply their refinement on in-domain covariate shift, such as CIFAR10 vs. CIFAR10-C.Besides the above-mentioned work, covariate shift has been studied predominantly from a robustness perspective <cit.> or in a domain adaptation setting <cit.>. The defense against adversarial attacks <cit.> is another research direction that falls in the domain of covariate shift. 
This perspective stems from the recognition that adversarial examples, by nature, often represent data points that deviate significantly from the distribution observed during model training, thereby inducing a form of covariate shift. Researchers have leveraged insights from adversarial robustness <cit.> to devise methods that can identify and mitigate the effects of such shifts, focusing on enhancing model reliability and security against deliberately crafted inputs designed to deceive. Fortunately, the shift introduced is completely artificial and typically a shift targeted at a specific model. §.§ Normalizing Flows (NFs) Consider an image sampled from its intractable distribution as 𝐱∼P_𝐗. Additionally, let us introduce a simple, tractable distribution p_𝐙 of latent variable 𝐳 (which is usually Gaussian). Normalizing Flows utilize k consecutive bijective transformations f_k:ℝ^D→ℝ^D as f = f_K ∘…∘ f_k∘…∘ f_1, to express exact log-likelihoods log p(𝐱) = log p_Z(𝐳_0) - ∑_k=1^K log| d f_k(𝐳_k-1)/d𝐳_k-1|, where z_k and z_k-1 are intermediate variables, and 𝐳_0 = f^-1(x). Numerous bijections have been introduced which balance expressivity and have a simple evaluation of the Jacobian determinant in Equation (<ref>). Specifically, coupling flows have seen much success <cit.>. Since they can be parameterized through arbitrary complex functions, we explore conditioning the flow on the frequency components of an image. §.§ Typicality Examining sequences of N independent and identically distributed (i.i.d.) datapoints 𝐱_n, the typical set comprises all 𝐱_n that satisfy H(𝐗)-ϵ≤ -1/N∑_n=1^N log_2 p(𝐱_n) ≤ H(𝐗)+ϵ, where ϵ represents an arbitrarily small value and H(𝐗) denotes the Shannon entropy of the dataset. In other words, the empirical entropy of the set approaches to the entropy rate of the source distribution. Leveraging the Asymptotic Equipartition Property (AEP), it is deduced that 1/N∑_n=1^N log_2 p(𝐱_n)→ H(𝐗) s.t. N→∞, leading to the conclusion that the probability of any sequence of i.i.d. samples of sufficient length approaches unity. Thus, despite the typical set representing merely a small subset of all potential sequences, a sequence drawn from i.i.d. samples of adequate length will almost certainly be considered typical <cit.>. In various studies, indications have emerged that NFs perform poorly when the likelihood is utilized as a metric for detecting OOD samples <cit.>. It can be argued that datasets are a typical sequence of samples, rather than high in likelihood, also known as the Typical Set Hypothesis (TSH). Therefore, in the recent work by Nalisnick  <cit.>, an innovative approach is proposed for OOD detection that leverages typicality as an evaluation metric in lieu of likelihood. This methodology was further refined in subsequent studies <cit.>, introducing approximate mass. Motivated by the fact that typical samples are localized in high-mass areas on the PDF, the metric evaluates the gradient of the LL w.r.t the input data, also known as the score. It can be expressed mathematically as ∂ L(x;θ)/∂x, where x denotes the input, L the evaluated LL by the model parameterized by θ, and . represents the Euclidean norm. Despite some criticism on TSH <cit.>, this metric demonstrates superior performance in OOD detection across various benchmarks <cit.>. § APPROACH §.§ Definition of Covariate Shift Formally, semantic- and in-domain covariate shifts can be delineated as follows. 
Consider samples from the training distribution, x ∼ P_X, and anomalous data from an OOD source x̂∼ P_X̂, subject to a low-pass filter (l, l:ℝ→ℝ) to obtain the low-frequency components, x_L = l(x) and the high-frequency components x_H = x - l(x). Semantic shift is characterized by a discrepancy in the marginal probability distributions, P_X_L≠ P_X̂_L, when the conditional probability distributions of high-frequency components remain consistent, P_X_H|X_L≈ P_X̂_H| X̂_L. Conversely, covariate shift is identified when the conditional probability distributions diverge, P_X_H|X_L≠ P_X̂_H|X̂_L, but the marginal probability distributions of the low-frequency components remain the same P_X_L≈ P_X̂_L. Furthermore, these definitions hold with in the supervised setting with predefined targets (Y). §.§ CovariateFlow In the development of methodologies for detecting covariate shift within datasets, several critical factors must be meticulously considered to ensure efficacy and accuracy. Firstly, (1) the process of resizing images can significantly alter the distribution of high-frequency statistics, potentially obscuring key data characteristics. Secondly, (2) the inherent nature of encoding architectures, which essentially function as low-pass filters <cit.>, may limit their capacity to fully capture the complex distribution of noise present within the data. This limitation is particularly relevant as covariate shifts often manifest through alterations in the general image statistics, thereby necessitating a method capable of discerning such nuances. Thirdly, (3) the utilization of only log-likelihood-based evaluation in NFs, has proven to have a predisposition towards low-level semantics and is more sensitive to high-frequency statistics. An effective method should be sensitive to covariate shifts affecting all frequency bands, from noise degradations to contrast adjustments. In light of the above considerations, Normalizing Flows (NFs) emerge as a particularly suitable candidate for modeling the imaging features essential for detecting covariate shift. NFs are distinct in that they abstain from any form of down-sampling or encoding processes to preserve their bijective property. It is also recognized that NFs prioritize pixel correlations over semantic content <cit.>. However, given the expectation that covariate shift involves changes in high-frequency image statistics, accurately modeling the complete image distribution—including both low-frequency semantics and high-frequency components—presents significant challenges, particularly given the relatively limited capacity of NFs compared to more recent generative models <cit.>.To address these challenges, we introduce a novel method that simplifies the modeling of components critical for covariate shift detection. Our approach involves a filtering strategy that divides the image into separate low-frequency and high-frequency components, thereby allowing the detection system to concentrate specifically on the high-frequency elements to improve detection capabilities. Consider an input signal x and the low-pass filter l, the conditional distribution of high-frequency components (𝐱_H = x - l(x)) given the low-frequency components (𝐱_L = l(x)) . By recognizing that certain high-frequency components are correlated with low-frequency signals, we can model this relationship conditionally. 
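As a minimal sketch of the decomposition assumed in this definition, the following PyTorch snippet splits an image batch into x_L = l(x) and x_H = x - l(x) using a depthwise Gaussian blur as the low-pass filter l. The kernel size and reflection padding are illustrative choices; the σ used in our experiments is given in the next section.

import torch
import torch.nn.functional as F

def gaussian_kernel(ksize: int = 5, sigma: float = 1.0) -> torch.Tensor:
    """Normalized 2D Gaussian kernel of shape (1, 1, ksize, ksize)."""
    ax = torch.arange(ksize, dtype=torch.float32) - (ksize - 1) / 2.0
    g1d = torch.exp(-0.5 * (ax / sigma) ** 2)
    g2d = torch.outer(g1d, g1d)
    return (g2d / g2d.sum()).view(1, 1, ksize, ksize)

def decompose(x: torch.Tensor, sigma: float = 1.0, ksize: int = 5):
    """Split an image batch x of shape (B, C, H, W) into x_L = l(x) and
    x_H = x - l(x), with l a channel-wise (depthwise) Gaussian blur."""
    c = x.shape[1]
    k = gaussian_kernel(ksize, sigma).to(x).repeat(c, 1, 1, 1)
    pad = 4 * [ksize // 2]
    x_low = F.conv2d(F.pad(x, pad, mode="reflect"), k, groups=c)
    x_high = x - x_low
    return x_low, x_high

# The conditional flow then models p(x_H | x_L): a change in the marginal of
# x_L signals semantic shift, a change in this conditional signals covariate shift.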
On this premise, we develop CovariateFlow (Figure <ref>), a novel approach that models the conditional distribution between high-frequency and low-frequency components using conditional NFs as log p(𝐱_H|𝐱_L) = log p_Z(𝐳_0) - ∑_k=1^K log| d f_k(𝐳_k-1, 𝐱_L)/d𝐳_k-1|. This formulation sets the foundation for a detection system that is finely attuned to the nuances of covariate shift, enhancing its ability to identify and respond to shifts in high-frequency image statistics. The proposed model is predominantly defined by (1) a signal-dependent layer (SDL) <cit.>, (2) a conditional coupling flow <cit.>, (3) an unconditional 1×1 convolutional (conv.) layer <cit.> and (4) uniform dequantization. The SDL and conditional coupling layers are additionally conditioned on x_L. The 1×1-convolution and conditional coupling flow are repeated K times, depending on the dataset at hand. We employ a Gated ResNet <cit.> as f_Θ and a checkerboard masking strategy <cit.> in our coupling layers. Figure <ref> depicts a high-level overview of the model architecture. We employ a simple Gaussian filter for l to decompose the signal into low-frequency and high-frequency components; a conventional Gaussian kernel minimizes the assumptions made about the high-frequency components, and a standard deviation (σ) of one proved to yield the best performance. The coupling layers are depicted in Figure <ref> and the model totals a mere 945,882 trainable parameters with 16 coupling layers. For a detailed description of the training details, ablation experiments and inference-time comparisons, we refer to Section <ref> in the supplementary material. The code for the model will be made publicly available. §.§ Unifying Log-likelihood and Typicality The inductive bias of NFs towards structural complexity when evaluating with LL has been discussed in Section <ref>. As an alternative, evaluation on typicality using the gradient of the LL w.r.t. the input data has shown improvements in semantic OOD detection over LL <cit.>. However, it is understood that this metric and model are similarly biased towards certain categories of data <cit.>. As such, we propose to combine LL evaluation with the typicality score to overcome the limitations of each approach individually. Our approach standardizes both the LL and the typicality scores in terms of their respective training statistics. After standardization, we transform each metric into an absolute distance from the expected mean. The LL distance and gradient-score distance can then simply be added to obtain a unified distance. In this manner, the evaluation is sensitive to deviations in either direction, rather than only to lower scores, thereby reducing the effect of the biases of the respective metrics. The following paragraph gives the details of this approach. Consider a sample x∼ P_X with log-likelihood log p(x). Furthermore, we denote the magnitude of the gradients as ||∇_x log p(x)||, i.e. the approximate score. The mean of the empirical log-likelihoods is given by μ_L=𝔼_P_X[log p(x)], and that of the approximate scores by μ_T=𝔼_P_X[ ||∇_x log p(x)|| ]. Similarly, we denote their respective variances by σ^2_L=𝔼_P_X[(log p(x)-μ_L)^2] and σ^2_T=𝔼_P_X[(||∇_x log p(x)||-μ_T)^2]. We can then obtain the Normalized Score Distance (NSD) for a new sample x^* as the summation of the standardized L1-norms through NSD(x^*) = | (log p(x^*) - μ_L)/σ_L | + | (||∇_x log p(x^*)|| - μ_T)/σ_T |. Figure <ref> depicts this procedure for two degradation types on CIFAR10.
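For completeness, the NSD computation can be sketched as follows for any flow-based model that exposes a log_prob function (an assumption on the interface; the GLOW and CovariateFlow implementations may organize this differently). The validation statistics μ_L, σ_L, μ_T, σ_T are assumed to have been estimated beforehand on held-out ID data, and the gradient score requires one backward pass per test batch.

import torch

def ll_and_score(model, x: torch.Tensor):
    """Per-sample log-likelihood and gradient-norm (approximate score)."""
    x = x.clone().requires_grad_(True)
    log_px = model.log_prob(x)                       # (B,)
    grads, = torch.autograd.grad(log_px.sum(), x)    # (B, C, H, W)
    score = grads.flatten(1).norm(dim=1)             # ||grad_x log p(x)||, (B,)
    return log_px.detach(), score.detach()

def nsd(model, x, mu_L, sigma_L, mu_T, sigma_T):
    """Normalized Score Distance: standardized distance of the log-likelihood
    plus standardized distance of the gradient score, w.r.t. ID statistics."""
    log_px, score = ll_and_score(model, x)
    d_ll = ((log_px - mu_L) / sigma_L).abs()
    d_ty = ((score - mu_T) / sigma_T).abs()
    return d_ll + d_ty

# mu_L, sigma_L: mean/std of log p(x) on ID validation data;
# mu_T, sigma_T: mean/std of the gradient norm on the same data.
# A larger NSD indicates a sample that is more likely OOD.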
§.§ Datasets CIFAR10(-C) & ImageNet200(-C): CIFAR10 <cit.> and ImageNet200 with their respective corrupted counterparts, CIFAR10-C <cit.> and ImageNet200-C <cit.>, serve as exemplary datasets for developing and evaluating unsupervised covariate shift detection algorithms. CIFAR10 and ImageNet200 provide a collection of images that encompass a broad range of in-distribution covariate shifts, ensuring a suitable level of diversity. On the other hand, the corrupted versions introduce real-world-like (undesired) degradations, such as noise, blur, weather, and digital effects. Figure <ref> depicts 3 of the 15 effects employed in the ImageNet200(-C) dataset. Images are utilized in their original resolution at 64 × 64 pixels. CIFAR10-C consists of 19 corruptions in total with images at 32 × 32 pixels. This setup enables testing the covariate shift detecting performance across multiple distortion types and severity levels. In all our experiments we train the models only on the original dataset's training set and then test it against all of the corruptions at every severity level. For CIFAR10 this is the original dataset's test set (ID test) and CIFAR10-C's 19 corruptions at 5 severity levels (95 OOD test sets). Similarly, we treat the ImageNet200 test set as ID test and the 15 corruptions at 5 severity levels from ImageNet200-C as 75 OOD datasets. The datasets follow the OpenOOD <cit.> benchmarks[https://github.com/Jingkang50/OpenOODhttps://github.com/Jingkang50/OpenOOD]. § EXPERIMENTS The following section describes the conducted experiments and presents the key results obtained in our investigation. Further detailed experimental results can be found in the supplementary materials. Specifically, we present results on CIFAR10 (Section <ref>), ImageNet200 (Section <ref>) and extensive ablation experiments with the proposed CovariateFlow (Section <ref>). §.§ Evaluation Metrics & Models To evaluate the model's ability to detect OOD covariate shifts, we utilize metrics commonly found in related work: the Area Under the Receiver Operating Characteristic (AUROC) curve and the False Positive Rate (FPR) at a 95% True Positive Rate (TPR). In all our experiments with CIFAR10(-C) and ImageNet200(-C), we use the designated test set (10k samples) to compute each metric. Our contributions include contextualizing the VAE, AVAE, GLOW evaluated with log-likelihood and the DDPM with the reconstruction loss, within OOD covariate shift as baseline models. Furthermore, we evaluate GLOW using typicality and the proposed NSD and CovariateFlow with all the aforementioned metrics. Most models are trained from scratch on the ID data. For the VAE-FRL <cit.>, a method leading in semantic OOD detection, the available pretrained CIFAR10 weights[https://github.com/mu-cai/FRLhttps://github.com/mu-cai/FRL] are employed. A detailed description of the implemented models can be found in Section <ref> of the supplementary materials. §.§ Covariate Shift in CIFAR10 and ImageNet200 Table <ref> showcases various models and their averaged AUROC across all the degradations per CIFAR10-C/ImageNet-C severity level. While some models excel in handling specific types of degradation, only the overall performance is truly relevant, as one typically cannot predict the type of perturbation that will occur in real-world settings. A detailed breakdown of the results per perturbation is shown in Section <ref> of the supplementary materials. 
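For reference, both evaluation metrics can be obtained directly from the per-sample OOD scores of the ID test set and a corrupted test set. The sketch below uses scikit-learn and assumes that larger scores indicate "more OOD" (as for typicality and NSD); for plain LL evaluation the scores would first be negated.

import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def ood_metrics(id_scores: np.ndarray, ood_scores: np.ndarray):
    """AUROC and FPR at 95% TPR with OOD as the positive class,
    assuming larger scores indicate 'more OOD'."""
    y_true = np.concatenate([np.zeros_like(id_scores), np.ones_like(ood_scores)])
    y_score = np.concatenate([id_scores, ood_scores])

    auroc = roc_auc_score(y_true, y_score)

    fpr, tpr, _ = roc_curve(y_true, y_score)
    fpr95 = fpr[np.searchsorted(tpr, 0.95)]   # FPR at the first threshold reaching 95% TPR
    return auroc, fpr95

# Example: compare NSD scores of the ID test set against one CIFAR10-C
# corruption at one severity level; the reported numbers average the AUROC
# over all corruptions for each severity.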
In Table <ref> it can be seen that models preserving the data dimension and maintaining the high-frequency signal components, such as the DDPM and NF-based approaches, perform best. ImageNet200-C contains fewer noise-based degradations than CIFAR10-C. The NF models evaluated with LL generally perform well on noise perturbations (Table <ref> and Table <ref>) and because of this disparity in the types of degradations present in the datasets, LL evaluation exhibits a drop in average performance from CIFAR10 to ImageNet200. The VAE-FRL is designed to focus on semantic content and thus fails to accurately detect a change in general image statistics. It can be observed that CovariateFlow with NSD consistently outperforms the other methods at every severity level, realizing an average improvement of 5.6% over GLOW on CIFAR10 and 7.8% over GLOW on ImageNet200 when evaluated with NSD. This shows the strength of the proposed NSD metric, consistently improving over just LL or Typicality on both the GLOW model and CovariateFlow. Figure <ref> showcases an explicit example of how NSD consistently performs well under different degradations. Section <ref> and Section <ref> presents a comprehensive evaluation of various methods for every OOD covariate shift type between the CIFAR10(-C) and ImageNet200(-C) datasets. Table <ref> focuses on the models' performances across three specific degradations (Gaussian Noise, Gaussian Blur, and Contrast) at five severity levels, that summarize the general results seen across all degradations. ImageNet200-C does not contain Gaussian Blur, but in general, the same trend can be observed between the two datasets for all the employed models. A complete comparison between all the models and their average performance per degradation type (averaged over severity levels) can be seen in Table <ref>. To evaluate the impact of filter kernel size on performance, we conducted an experiment using CovariateFlow. Figure <ref> illustrates the average AUROC achieved with varying Gaussian filter sizes. The results indicate that a smaller filter yields the highest average performance. Example evaluations from the CovariateFlow (NSD) model are presented in Figure <ref>. Notably, the evaluated scores increase with each severity level, although the rate of increase is not linear or consistently increasing between the different degradation types. The CovariateFlow model is fully invertible and, as such, can generate heteroscedastic high-frequency components. Figure <ref> depicts an example with sampled high-frequency components, the reconstructed image and a comparison between the reconstructed image and the original image. Importantly, the sampling process is stochastic and the sampling range is not limited in the example. § DISCUSSION The findings from our analyses validate our hypothesis that OOD covariate shifts can be effectively identified by explicitly modeling the conditional distribution between low-frequency and high-frequency components. The proposed CovariateFlow, designed to specifically capture this distribution, surpasses other methodologies in detecting covariate shifts in CIFAR10 and ImageNet200. Given the diverse array of subjects and covariate conditions within the corrupted datasets, focusing on this conditional distribution streamlines the model's task, allowing it to concentrate on the most relevant distribution for the detection process. r0.45 < g r a p h i c s > The average AUROC obtained with CovariateFlow on ImageNet200 vs. 
ImageNet200-C (all corruptions) at different filter sizes. The figure depicts the score obtained with evaluating using log-likelihood, typicality and the proposed NSD. Extending on the analysis with Table <ref>, the VAE-based models show adequate performance in detecting noisy degradations due to their inductive bias towards modeling low-frequency image components. On the other hand, the model falls short for this exact reason when exposed to any blurring or color degradations in the images. The DDPM with the LPIPS + MSE metric, present strong performance on noise and blurring-based covariate shift, but struggles when exposed to color shift. This is likely due to color reconstructions happening earlier in the reconstruction schedule. Consistent with existing literature <cit.>, the NF-based methods evaluated using LL are extremely sensitive to noisy degradations. However, any blurring or color shift is evaluated as being highly probable under the modelled distribution, highlighting the bias of LL-based evaluation towards lower textural content. Employing the newly proposed typicality metric shows the exact opposite behaviour. Both GLOW and the proposed CovariateFlow, fail at detecting noise-based covariate shift, but show remarkable improvements on both blurring and color-based covariate shifts when evaluated with typicality. Combining typicality and LL in the newly proposed NSD metric, accentuates the strengths of each, enabling strong detecting performance across most of the covariates with CovariateFlow. NSD enhances the OOD detection capabilities of both the standard GLOW model and the proposed CovariateFlow, establishing it as a general and robust metric for OOD detection in NF-based models. On the higher resolution images from ImageNet200, the model also shows some effectiveness in distinguishing JPEG compression as OOD, a notoriously difficult perturbation to detect. When to use CovariateFlow: Despite GLOW (LL)’s slightly superior performance in general noise detection, CovariateFlow, leveraging NSD, proves to be better overall. This provides a clear and general recommendation to the reader: LL is preferred in case strictly noise-based shifts are expected. Without a priori knowledge on the OOD shift type (which is usually the case), CovariateFlow with NSD is optimal. This work demonstrates that it is possible to detect (even slight) perturbations in a target domain without introducing biases or prior knowledge of these perturbations into the model, unlike some contrastive learning approaches <cit.>. It only assumes access to a sufficiently large dataset that captures the in-distribution covariate shifts and aims to detect any covariate shift outside of this distribution. § FUTURE WORK & LIMITATIONS Some concerns can be raised about the complexity of the typicality computation, since test time inference requires a forward pass to compute the LL followed by a backpropagation computation per sample. This increases the memory requirements when deploying the model and decreases the overall inference speed. However, in scenarios where accurate OOD covariate shift is essential, CovariateFlow provides the best accuracy vs. speed trade-offs (see Section <ref>). This work primarily focuses on detecting covariate shift, with explicit covariate shifts introduced to assess performance. Many publicly available datasets exhibit both semantic and potential covariate shifts. Although the proposed approach demonstrates effectiveness in CIFAR10 vs. 
SVHN (Table <ref>), future work should explore domain-specific datasets with limited ID covariate conditions to test the sensitivity of the proposed approach. As depicted in Figure <ref>, the scores acquired through evaluation with CovariateFlow (NSD) correctly increase with each severity level, however not at the same rate for each degradation type. Future work should explore the latent representations of each degradation to potentially aligning these scores with image quality metrics <cit.> for blind image quality assessment applications. § CONCLUSION This paper explores Out-of-Distribution (OOD) detection, specifically targeting covariate shifts caused by changes in general image statistics. This work introduces CovariateFlow, a novel approach utilizing conditional Normalizing Flows (cNFs) for effectively targeting heteroscedastic high-frequency image components, demonstrating its superior efficacy in detecting OOD shifts across diverse datasets such as CIFAR10(-C) (74.9 % AUROC) and ImageNet200(-C) (72.2 % AUROC). Our analysis reveals that by meticulously modeling the conditional distribution between low-frequency and high-frequency components, CovariateFlow outperforms existing models, particularly when employing the Normalized Score Distances (NSD) metric, which is a synthesis of log-likelihood and typicality evaluations. This approach not only highlights the importance of addressing covariate shifts for enhancing the fidelity of imaging systems, but also underscores the potential of unsupervised generative models in improving machine learning models' robustness against OOD data. Acknowledgments: This research was funded by the Philips IGTs and the Xecs Eureka TASTI Project. splncs04 § SUPPLEMENTARY The supplementary material is organized as follows: Section <ref> describes the implementation details of all the models employed in this paper. Section <ref> has a step-by-step rundown on how we obtain the Normalized Score Distance. Section <ref> provides detailed results on CIFAR10 and CIFAR10-C of the experiments and Section <ref> results on ImageNet200 and ImageNet200-C as described in the Experiments section of the main paper. Finally, we provide a series of additional ablation experiments in Section <ref>. §.§ Implementation Details In this section, we detail the unsupervised training methodologies employed for five distinct baseline models and CovariateFlow aimed at OOD detection. VAE and Adversersial VAE: The VAE is trained to minimize the standard ELBO <cit.> loss. Model evaluations using SSIM and KL-divergence presented the best AUROC results. The AVAE model integrates adversarial training <cit.> into the variational autoencoder framework to enhance its capability in generating realistic samples. For OOD detection, one can leverage the reconstruction loss (Mean Squared Error (MSE)), the KL-divergence and the discriminative loss to compute a OOD score. We adopt the implementation described in <cit.>. In both the VAE and AVAE we employ a 4 layer deep network with a latent dim = 1024. The models were trained for 200 epochs following a cosine annealing learning rate scheduler. VAE-FRL: The VAE with frequency-regularized learning (FRL) <cit.> introduces decomposition and training mechanism which incorporates high-frequency information into training and guides the model to focus on semantically relevant features. This proves effective in semantic OOD detection. 
We employ the pretrained model as publicly available[https://github.com/mu-cai/FRL/tree/mainhttps://github.com/mu-cai/FRL/tree/main]. For the CIFAR10 experiments, the model consists of a standard 3 layer deep VAE with strided convolutional down-sampling layer, transposed convolutional up-sampling and ReLu non-linear functions. The model has a latent dimension of 200. The OOD score is obtained by the log-likelihood (lower bound in the case of the VAE) minus the image complexity. The formulation is given as S(x) = -log p_θ(x) - L(x), where L(x) is the complexity score derived from data compressors <cit.>, such as PNG. Denoising Diffusion Probabilistic Model: We implemented the Denoising Diffusion Probabilistic Model (DDPM) following the specifications outlined in <cit.> and as publicly available [https://github.com/marksgraham/ddpm-oodhttps://github.com/marksgraham/ddpm-ood]. The method employs a time-conditioned UNet <cit.> architecture with a simplified training objective where the variance is set to time-dependent constants and the model is trained to directly predict the noise ϵ at each timestep t: L(θ) = 𝔼_t,x_0,ϵ[ ϵ - ϵ_θ(x_t)^2 ]. We aim to reconstruct an input x_t across multiple time steps (t), utilizing the DDPM sampling strategy which necessitates t steps for each reconstruction x̂_0,t, with each step involving a model evaluation. To enhance efficiency, we leverage the PLMS sampler <cit.>, a recent advancement in fast sampling for diffusion models, which significantly decreases the number of required sampling steps while preserving or enhancing the quality of samples. For evaluating the reconstructions, we employ both the mean-squared error (MSE) between the reconstructed and the input image, and the Learned Perceptual Image Patch Similarity (LPIPS) metric <cit.>, the latter of which assesses perceptual similarity through deep feature distances. For each of the N reconstructions we compute these 2 similarity measurements. Finally we average these scores (over the two metrics and all the reconstructions) to derive an OOD score for each input, integrating both quantitative and perceptual accuracy assessments. The model architecture is implemented exactly as described in <cit.>. For training, we set T=1000 and employed a linear noise schedule, with β_t ranging from 0.0015 to 0.0195. The training process spanned 300 epochs, utilizing the Adam optimizer with a learning rate of 2.5e^-5. During the testing, we utilized the PLMS sampler configured to 100 timesteps and, in line with AnoDDPM <cit.>, we only test reconstructions from T = 250. Since we do not intend to detect semantic anomalies in this work and are more interested in high frequency image components, we focus on reconstructions later in the schedule. Finally, we experiment with the DDPM model trained on CIFAR10 and evaluated at different reconstruction starting points. Figure <ref> depicts our results obtained with different reconstruction starting points and the average AUROC across all the degradations in CIFAR10-C. GLOW: Normalizing Flows enable OOD detection by modeling the ID data distributions with invertible transformations through a maximize the log-likelihood training objective. We employ the GLOW <cit.> architecture, as publicly available [https://github.com/y0ast/Glow-PyTorchhttps://github.com/y0ast/Glow-PyTorch], in this study. Additionally, following the recent work in typicality (Section <ref>), we train our model with the approximate mass augmented log-likelihood objective as described in <cit.>. 
We incorporate the approximate mass as a component in the loss function formulation. Let L(x; θ) = log(p(x; θ)) denote the average log-likelihood (LL) of the model, parameterized by θ, evaluated over a batch of input data x. Our revised training objective is expressed as: min_θ( -L(x; θ) + α∂ L(x; θ)/∂ x) where α > 0 signifies a hyperparameter that balances the trade-off between local enhancement of the likelihood and reduction of the gradient magnitude. We employ α = 2 in the GLOW implementation. At test time, we compute the per sample LL and gradient score. These components are used to compute the NSD as described in Section <ref>. CovariateFlow: Section <ref> describes the CovariateFlow model proposed in this work. Figure <ref> depicts the architecture and general flow of information during training and when computing the OOD scores. Figure <ref> illustrates a detailed diagram of the low-frequency conditioned coupling steps employed in the model. Additionally, following the image decomposition through the Gaussian filtering, we encode the individual components as 16-bit depth data to avoid information loss. Our model is completely invertible and can thus also generate signal-dependent high-frequency components. The models are prepared following the typicality augmented training objective (Equation <ref>). We use an Adam optimizer (starting lr = 5e^-4) with a one-cycle annealing learning rate scheduler for 300 epochs across all our experiments. The code for the model is available at https://github.com/cviviers/CovariateFlowhttps://github.com/cviviers/CovariateFlow. §.§ Detailed analysis of the normalized score distance (NSD) This section details the computation of the NSD from the LL and typicality score. Figure <ref> depicts this process through the evaluation of the GLOW model applied to three different OOD covariate shifts. In Figure <ref> the LL and typicality (gradient score) of the model subject to Gaussian Noise can be seen. Following the process described in Section <ref>, column 2 depicts the standardization of both scores using validation statistics. This is followed by converting the scores to absolute distance from the expected mean in column 3. The LL distance and gradient score distance can then simply be added to obtain a unified distance (Figure <ref>). The same flow is depicted in Figure <ref> and Figure <ref> for the model subject to Gaussian Blur and Figure <ref> and Figure <ref> for Contrast change. Following this standardized approach, the change in each measure (LL and gradient score) w.r.t. the validation statistics are utilized and combined to provide a single and effective OOD score. All the results depicted in The Figure <ref> depicts the ID CIFAR10 test scores vs. the OOD CIFAR10-C scores. §.§ Detailed Results on CIFAR10 vs. CIFAR10-C The following section presents detailed results obtained with various models on our experiments with ID CIFAR10 and CIFAR10-C as OOD. Our analysis examines the reconstruction capabilities of the DDPM across various initial time steps, T. Figure <ref> presents the mean AUROC curve calculated for reconstructions assessed using the LPIPS, MSE, or a combination of LPIPS and MSE metrics at each time step. Notably, at larger time steps (e.g., T=250), the distinction in average reconstruction error between the ID CIFAR10 test set and the OOD CIFAR10-C dataset becomes less pronounced, leading to inferior OOD detection performance. 
This phenomenon is attributable to the high-level image perturbations characteristic of OOD data, which are predominantly addressed in the final stages of the diffusion process. In Contrast, initial diffusion stages focus on generating lower-level image semantics, resulting in reconstructions that significantly diverge from the test image, particularly in terms of low-frequency components. Figures <ref>, <ref>, and Table <ref>, highlight the distinct sensitivities of log-likelihood (LL) and gradient scores when applied to GLOW under severe Gaussian Noise conditions, as depicted in Figure <ref>. These metrics diverge in their assessment, with LL clearly identifying distorted images as OOD, whereas gradient scores suggest such images are more typical than even the ID data. Conversely, Figure <ref> demonstrates the opposite trend for blurred images, where LL overestimates their likelihood relative to ID data, but gradient scores accurately classify them as OOD. These observations corroborate Zhang 's theoretical insights <cit.> about the propensity of certain model-metric combinations to misjudge the probability of natural images. To address these discrepancies, we introduce the NSD metric, which synthesizes LL and gradient movements into a unified OOD detection metric. Figures <ref>, <ref> and <ref> validate the NSD metric's effectiveness in discerning OOD samples across both conditions, with extended results available in the supplementary material. Table <ref> depicts the AUROC for 3 degradations (each severity) from CIFAR-10C that summarizes the performance of all the models employed in this work. Figure <ref> additionally depicts the average AUROC of all the models at each severity. We also present the complete performance evaluation of all the models on CIFAR10-C on all the degradtions and at every severity level. The results are depicted in order of presentation: DDPM T150-LPIPS (<ref>), DDPM T20-LPIPS+MSE (<ref>), VAE (Table <ref>), AVAE (Table <ref>), GLOW-LL (Table <ref>), GLOW-Typicality (Table <ref>), GLOW-NSD (Table <ref>), CovariateFlow-LL (Table <ref>), CovariateFlow-Typicality (Table <ref>) and CovariateFlow-NSD (Table <ref>). §.§ Detailed Results on ImageNet200 vs. ImageNet200-C The following section depicts detailed results obtained with various models on our experiments with ID ImageNet200 and ImageNet200-C as OOD. The results are depicted in order of presentation: DDPM T20-LPIPS+MSE (<ref>), GLOW-LL (Table <ref>), GLOW-Typicality (Table <ref>), GLOW-NSD (Table <ref>), CovariateFlow-LL (Table <ref>), CovariateFlow-Typicality (Table <ref>) and CovariateFlow-NSD (Table <ref>). §.§ Ablation Study This section details a series of ablation experiments conducted, including an analysis of the effect of the individual components in CovariateFlow on the detection performance (Table <ref>), mean scores per severity of the models and resource aspects are depicted in Table <ref>, model performance on a typically semantic OOD detection problem in Table <ref> and finally an example (Figure <ref>) of heteroscedastic high-frequency components sampled from the fully invertible CovariateFlow. In our ablation experiments, we test the effect of explicitly modelling the conditional distribution between the low-frequency and high-frequency signal components as described in Section <ref>. 
This is achieved by training and evaluating the CovariateFlow model in four different settings: (1) unconditional coupling flows with the full input image, (2) unconditional coupling flows subject to only the high-frequency components of the image, (3) unconditional coupling flows subject to the high-frequency components and a conditional signal-dependent layer additionally subject to the low-frequency image components and, finally, (4) the high-frequency image components applied to the conditional coupling flows and a signal-dependent layer subject to the low-frequency components. For each of these implementations, we follow the exact same training methodologies as described in Section <ref>. All the images are encoded at 16-bit depth during dequantization to ensure comparability. From Table <ref> it can be seen that, while model 1 is limited in modelling the complete data distribution (11.32 bits per dimension (BPD)), it performs well on detecting OOD covariate shift with NSD (AUROC 67.8%), comparable to the performance obtained with GLOW. Only using the high-frequency image components in an unconditional setting (model 2) yields a somewhat lower OOD detection performance of 65.5% AUROC. Introducing the SDL (model 3) lowers the mean BPD and improves on LL-based OOD detection (58.4%), but adversely affects the NSD evaluation (62.6%). While the SDL does not show improvement in the detection performance, it significantly aided in stabilizing model training. Finally (model 4), conditioning every coupling flow in the network on the low-frequency content significantly improves modelling the high-frequency components (9.77 BPD → 5.48 BPD), indicating the value of the additional information. Modelling this conditional relation between the low-frequency and high-frequency components also proves very effective in detecting OOD covariate shift. The model achieves a mean AUROC of 74.9% at detecting covariate factors across all variations and degradations when evaluated with NSD. Table <ref> presents additional information about each of the models employed in this research. This table showcases the mean distance measurements (CIFAR10-C), taken under different evaluation criteria, across increasing severity levels of covariate shifts within the dataset. Such a detailed breakdown allows for a nuanced understanding of each model's resilience and adaptability to changes in input data distribution. Notably, the LL evaluations of GLOW at the highest severity level encountered numerical stability issues, leading to the substitution of some results with the maximum representable floating-point number. This adjustment, while necessary, underscores the challenges in maintaining computational integrity under extreme conditions and the importance of implementing robust handling mechanisms for such anomalies. It is evident from the data that there is a consistent trend of increasing mean distance scores across all models as the severity level escalates, highlighting the impact of covariate shift on model performance. This trend underscores the ability to quantify covariate shift, although it is only briefly evaluated here. Furthermore, the table delineates the model size, quantified by the number of trainable parameters, and the inference speed, measured in milliseconds.
These metrics are critical for understanding the trade-offs between model complexity, computational efficiency, and performance. The data presented in Table <ref> not only elucidates the ability to quantify covariate shifts, but also emphasizes the importance of balancing model complexity and computational efficiency when considering the model deployment conditions.

Table: Performance of various models on detecting SVHN <cit.> as OOD when trained on CIFAR10 as ID (AUROC ↑, in %). * indicates values taken from the published paper.

Reconstruction
  DDPM <cit.>                        97.9* / 95.8
Explicit Density
  Vanilla VAE (SSIM + KL Div)        24.4
  AVAE (MSE + KL Div + Adv Loss)     32.0
  VAE-FRL <cit.>                     85.4*
  GLOW-FRL <cit.>                    91.5*
  GLOW (LL)                          0.7
  GLOW (Typicality)                  91.3
  GLOW (NSD)                         89.9
  CovariateFlow (LL)                 0.3
  CovariateFlow (Typicality)         89.9
  CovariateFlow (NSD)                90.0

Modeling the conditional distribution between the low-frequency and high-frequency components using CovariateFlow is highly effective in detecting out-of-distribution (OOD) covariate shifts. The CIFAR10 dataset, known for its diversity, encompasses a range of in-distribution (ID) covariate conditions. When assessing CovariateFlow in the context of a semantic OOD detection problem, such as distinguishing between the CIFAR10 and SVHN datasets, it is plausible that some covariate conditions in CIFAR10 overlap with those in the SVHN dataset. Despite this potential overlap, CovariateFlow demonstrates robust performance in identifying the OOD covariate conditions present in the SVHN dataset, as evidenced by the results shown in Table <ref>. Although the DDPM (utilizing all 1000 starting points) achieves the best performance, CovariateFlow offers competitive results. This is notable given its significantly smaller size and its specific design focus on covariate conditions rather than semantic content.
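To make the NSD computation described in this section concrete, a minimal sketch is given below. It assumes that per-sample log-likelihoods and gradient-norm (typicality) scores have already been computed for a validation set and a test set; the variable and function names are our own illustration, not the released implementation.

import numpy as np

def nsd(ll_test, grad_test, ll_val, grad_val):
    # Standardize both scores with validation-set statistics.
    z_ll = (ll_test - ll_val.mean()) / ll_val.std()
    z_grad = (grad_test - grad_val.mean()) / grad_val.std()
    # Absolute distance from the expected (validation) mean, then combine.
    return np.abs(z_ll) + np.abs(z_grad)

# Toy usage with random placeholder scores.
scores = nsd(np.random.randn(8), np.random.randn(8),
             np.random.randn(256), np.random.randn(256))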
http://arxiv.org/abs/2409.02866v1
20240904164716
Hybrid-Segmentor: A Hybrid Approach to Automated Fine-Grained Crack Segmentation in Civil Infrastructure
[ "June Moh Goo", "Xenios Milidonis", "Alessandro Artusi", "Jan Boehm", "Carlo Ciliberto" ]
cs.CV
[ "cs.CV", "eess.SP" ]
June Moh Goo (corresponding author, [email protected]): Department of Civil, Environmental and Geomatic Engineering, University College London, Gower Street, London, WC1E 6BT, United Kingdom. Xenios Milidonis and Alessandro Artusi: DeepCamera MRG, CYENS Centre of Excellence, Nicosia, Cyprus. Jan Boehm: Department of Civil, Environmental and Geomatic Engineering, University College London, Gower Street, London, WC1E 6BT, United Kingdom. Carlo Ciliberto: Department of Computer Science, University College London, Gower Street, London, WC1E 6BT, United Kingdom. § ABSTRACT Detecting and segmenting cracks in infrastructure, such as roads and buildings, is crucial for safety and cost-effective maintenance. In spite of the potential of deep learning, there are challenges in achieving precise results and handling diverse crack types. With the proposed dataset and model, we aim to enhance crack detection and infrastructure maintenance. We introduce Hybrid-Segmentor, an encoder-decoder based approach that is capable of extracting both fine-grained local and global crack features. This allows the model to improve its generalization capabilities in distinguishing cracks of various shapes, surfaces, and sizes. To keep the computational requirements low for practical purposes, while maintaining the high generalization capability of the model, we incorporate a self-attention model at the encoder level, while reducing the complexity of the decoder component. The proposed model outperforms existing benchmark models across 5 quantitative metrics (accuracy 0.971, precision 0.804, recall 0.744, F1-score 0.770, and IoU score 0.630), achieving state-of-the-art status. Deep Learning Applications, Semantic Segmentation, Convolutional Neural Networks, Transformers, Hybrid Approach, Crack Detection, Crack Dataset, Fine-Grained Details § INTRODUCTION Cracks in roads, pavements, and buildings pose a serious threat to public safety, causing accidents and damage to vehicles on roads and pavements, and creating safety hazards and financial burdens in buildings. Traditionally, manual inspections have been used to identify cracks in civil infrastructure, but these methods are labor-intensive, subjective, and prone to human error, resulting in inconsistent results and potential disasters. Therefore, automated crack detection is necessary to provide an objective and highly accurate alternative. Machine learning methods, such as deep learning models, can be used to detect, segment, or classify damage to civil infrastructure, which can be facilitated by the widespread deployment of surveillance and traffic-monitoring cameras. However, training accurate models for crack segmentation is challenging due to a scarcity of well-annotated and diverse datasets, which impacts model robustness and generalizability. Our research aims to address this crucial data gap and develop automated crack detection to prevent dangers and reduce financial risks to communities. Progress in this direction could lead to real-time identification of cracks in the future, ensuring a more dependable and safe utilization of critical concrete structures. The main contributions of this work are: * Combine and refine publicly available crack datasets to create an enhanced and comprehensive crack segmentation dataset. * Introduce a data refinement methodology to combine publicly available datasets using image processing techniques.
* Introduce the Hybrid-Segmentor model to efficiently detect cracks in infrastructures, which is based on the encoder-decoder architecture that convolutional neural networks (CNNs) and transformers have efficiently used in the past. * Emphasize the remarkable ability of the proposed model to perform effectively across a diverse range of surface types and under challenging imaging conditions, such as blurred images and areas with complex crack contours. * The code, trained weights of the model, and the full dataset for experiments are publicly available and can be accessed here: https://github.com/junegoo94/Hybrid-Segmentorhttps://github.com/junegoo94/Hybrid-Segmentor § RELATED WORK One of the earliest methods for crack detection was a CNN-based model for pixel-level crack detection using FCN <cit.>. This approach achieves end-to-end crack detection, significantly reducing training time compared to CrackNet <cit.>, a CNN-based model that was the State-Of-The-Art (SOTA) in 2017 without using a pooling layer. While thin cracks can be accurately predicted across a variety of scenes, further enhancements are needed to capture real-time level predictions. In a similar aspect, DeepCrack <cit.> improves on the generalization of FCN architecture by incorporating batch normalization and side networks for faster convergence. Additionally, this research proposes the publicly available DeepCrack dataset <cit.>, which enhances crack detection precision across diverse scenes. Cheng et al. propose a full crack segmentation model based on U-Net <cit.>. Subsequent research further demonstrated that the U-Net is particularly suited for crack segmentation tasks <cit.>. Some researchers pinpoint that using classical image classification structures as encoders, pre-trained with data such as ImageNet <cit.>, strengthens feature extraction in crack segmentation networks, enhancing crack detection performance <cit.>. In addition, various encoder-decoder models have been introduced in the field. Amongst these, DeepCrack2 (not to be confused with `DeepCrack' in <cit.> bearing the same name; we refer to this model as `DeepCrack2' from this point onwards to avoid confusion) is a deep convolutional neural network designed to facilitate automated crack detection through end-to-end training <cit.>. It primarily focuses on acquiring high-level features that effectively represent cracks. This approach involves the integration of multi-scale deep convolutional features obtained from hierarchical convolutional stages. This fusion enables the capture of intricate line structures, with finer-grained objects in larger-scale feature maps and more holistic representations in smaller-scale feature maps. DeepCrack2 adopts an encoder-decoder architecture similar to SegNet <cit.> and employs pairwise feature fusion between the encoder and decoder networks at corresponding scales. DeepCrack2 is one of the most benchmarked models in the crack segmentation community. Despite the abundance of studies that either employ existing deep learning models or enhance them, these approaches may not always result in effective or efficient results in real-world scenarios (Fig.<ref>). Recently, HrSegNet <cit.> was proposed as an approach to consistently maintain high resolution in the images, distinguishing itself from methods that restore high-resolution features from low-resolution ones. 
Furthermore, the model enhances contextual information by leveraging low-resolution semantic features to guide the reconstruction of high-resolution features <cit.>. These features helped HrSegNet-B64 reach SOTA accuracy and inference speed in crack segmentation. § DATASET We introduce a large refined dataset with the aim of creating a significantly larger and more diverse resource for crack segmentation compared to what is currently available in literature. Since the existing datasets contain a relatively small number of images compared to other well-known tasks in computer vision, large-scale deep learning models are at a high risk of overfitting in these settings. In contrast to most datasets for crack segmentation that collect data based on a single type of surface, the refined comprehensive dataset includes a wide range of surfaces to enhance the robustness and generalizability of trained models. Additionally, due to the characteristics of some cracks, each existing image has a small proportion of crack pixels, which could result in a form of class imbalance. To counteract this bias, we employed a data augmentation strategy to increase the number of crack pixels in our dataset. §.§ Sub-Dataset Details We identified 13 open-source datasets that include different surfaces of pavements, walls, stone, and bricks. Table <ref> shows the details of each dataset. Some datasets provide samples either collected with specific acquisition systems and under diverse background settings (e.g. Aigle-RN, ESAR, and LCMS that collectively form the AEL Dataset) <cit.>; or acquired with smartphone cameras (e.g. CRACK500) <cit.>. A number of small datasets provide road and pavements images, including CrackTree260, CRKWH100, CrackLS315 and Stone331 <cit.>. (e.g. CrackTree260 is a dataset of 260 visible-light road pavement images constructed based on the CrackTree206 <cit.>) DeepCrack <cit.> is a large dataset created as a publicly available benchmark dataset consisting of crack images captured across various scales and scenes, specifically designed to evaluate the performance of crack detection systems. The German Asphalt Pavement Distress (GAPs) dataset, introduced in <cit.>, addresses the comparability issue in pavement distress research, offering a standardised dataset with 1,969 high-quality gray valued images. It covers various distress classes, including cracks, potholes, and inlaid patches. The images have a resolution of 1,920 × 1,080 pixels with a per-pixel resolution of 1.2 mm × 1.2 mm. To enable pixel-wise crack prediction, 384 images are manually selected from GAPs and annotated, forming the GAPs384 dataset <cit.>. Masonry is created consisting of images captured from masonry structures, which exhibit intricate backgrounds and a diverse range of crack types and sizes <cit.>. CrackForest dataset (CFD), one of the most benchmarked datasets, is a labeled collection of road crack images, designed to represent the typical conditions of urban road surfaces <cit.>. Finally, SDNET2018 is a dataset comprising more than 56,000 images of cracked and non-cracked concrete bridge decks, walls, and pavements, with crack widths ranging from 0.06 mm to 25 mm. Since the dataset does not contain ground truth masks, we use this dataset only for the collection of non-cracked image data <cit.>. §.§ Data Refinement Ground truth masks in existing datasets were generated using different methods, leading to varying resolutions, distortions, and discontinuity. 
To address this inconsistency, masks were manually inspected and refined using basic image processing where deemed necessary to ensure no irregularities were present, based on a process described previously <cit.>. Due to the inconsistency of the AEL datasets with the rest of the datasets (inverted and not binary), dedicated processing steps were performed. First, the values in the masks were inverted. Pixels were then converted to either black or white based on a threshold of 255/2. All images included in our dataset were then cropped to 256 × 256 resolution without overlapping. Finally, due to the reduced number of images with cracks, we augmented our dataset with a significant portion of cracks to address class imbalance. Specifically, images with masks containing over 5000 crack pixels were selected for augmentation, where Gaussian noise was added, and a random rotation of 90^∘, 180^∘, or 270^∘ was applied. Non-crack data from the SDNET2018 dataset <cit.> were also added. Fig. <ref> shows how the original ground truth improved after the refinement process. Irregularities such as small holes, discontinuity, and thinness were corrected. Furthermore, adding the augmented dataset increased the proportion of crack pixels by 5.8%, which helps mitigate class imbalance problems. As a result, we created a comprehensive refined dataset with a total of 12,000 images, which, to the best of our knowledge, is the largest crack segmentation dataset. § MODEL DESIGN This section provides an in-depth overview of our Hybrid-Segmentor, an end-to-end crack segmentation model. As illustrated in Fig. <ref>, the input images to our model go through two distinct encoders: the CNN path (ResNet-50 <cit.>) and the Transformer path (SegFormer <cit.>). Each of these encoders generates 5 multi-scale feature maps, which are then fused together at each of the 5 intermediate stages. In the last step, the fused feature maps are utilized to produce the final output (simplified decoder). By combining the two different deep learning architectures, Hybrid-Segmentor gains both the ability to detect local details and a global structural understanding, while the spatial hierarchy leads to more accurate crack detection. Integrating features at different scales from both paths enables effective recognition of cracks of various sizes and shapes, leveraging the strengths of both local and global analysis. This ensures higher accuracy and robustness in detecting cracks in diverse types of surfaces. Sections <ref> and <ref> further describe the benefits introduced by the CNN and transformer paths of our architecture, respectively. §.§ CNN Path The use of a CNN architecture is guided by the fact that we would like to capture local features from the input image. These features are both fine-grained local details, e.g., small cracks or textures, and high-level features, such as abstract shapes. This is achieved through the spatial hierarchy property of the ResNet-50 model used in our Hybrid-Segmentor, which allows the detection of various image features at multiple scales. Additionally, its translation-invariance property helps with extracting features regardless of the crack position within the input image. Finally, its capability to preserve high-resolution details makes our model more effective at detecting small cracks or local variations.
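As an illustration of how five multi-scale feature maps can be taken from a ResNet-50 backbone (the stem plus the four residual stages), a minimal PyTorch sketch is given below. The module and variable names are ours, and the channel/resolution comments reflect the standard torchvision ResNet-50, not necessarily the exact configuration used in the released Hybrid-Segmentor code.

import torch
import torchvision

class ResNet50Encoder(torch.nn.Module):
    def __init__(self):
        super().__init__()
        r = torchvision.models.resnet50(weights=None)  # pretrained weights could be loaded here
        self.stem = torch.nn.Sequential(r.conv1, r.bn1, r.relu)  # 64 ch,   1/2 resolution
        self.pool = r.maxpool
        self.layer1 = r.layer1   # 256 ch,  1/4
        self.layer2 = r.layer2   # 512 ch,  1/8
        self.layer3 = r.layer3   # 1024 ch, 1/16
        self.layer4 = r.layer4   # 2048 ch, 1/32

    def forward(self, x):
        f1 = self.stem(x)
        f2 = self.layer1(self.pool(f1))
        f3 = self.layer2(f2)
        f4 = self.layer3(f3)
        f5 = self.layer4(f4)
        return [f1, f2, f3, f4, f5]  # five multi-scale feature maps

feats = ResNet50Encoder()(torch.randn(1, 3, 256, 256))
print([tuple(f.shape) for f in feats])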
§.§ Transformer Path The use of a transformer in crack segmentation aims to extract global features from the input image, which are crucial for capturing the overall shape and appearance of a crack. Here, we use as our base key concepts from the SegFormer model <cit.>. Through its self-attention mechanism, this model can recognise the continuity and structure of cracks that span distant regions, understanding how different parts of the crack relate to each other across the image (Long-range Dependency Capture). SegFormer also incorporates a spatial hierarchy, similar to CNNs, by processing features at different scales, making it able to capture both fine details and global structures. Another important property of the SegFormer is its global consistency, which by analysing the image entirely, it provides insights into how cracks are distributed across the entire image, ensuring a coherent understanding of the crack patterns. The transformer of our proposed model utilizes three additional key concepts, which are explained below: Overlapping Patch Embedding <ref>, Efficient Self-Attention<ref>, and Mix-Feed Forward Network (FFN)<ref>. §.§.§ Overlapping Patch Embedding Local continuity is crucial for preserving fine-grained details and spatial coherence, which is important for accurate semantic segmentation. The first iterations of vision transformers used non-overlapping patch embeddings, which could lead to a loss of local continuity between patches. However, to address this, we utilized overlapping patch embedding, as introduced by SegFormer <cit.>, which better preserves local continuity. The Vision Transformer (ViT) <cit.> is an innovative approach to computer vision. It treats images as sequences of patches and processes them similarly to how transformers handle sequences of words in natural language processing. In a typical ViT architecture, an image is divided into N × N patches, which are then linearly embedded into 1 × 1 × C vectors. While this method enables the model to effectively capture global context, it can still be challenging to maintain local continuity among patches when N × N × 3 image patches are represented as 1 × 1 × C vectors. To address this issue, SegFormer employed Overlapping Patch Embedding. Instead of simply dividing the image into non-overlapping 4×4 patches for vector embedding, Overlapping Patch Embedding takes inspiration from how CNNs use sliding windows with defined parameters such as kernel size (K), stride (S), and padding (P). It predefines these parameters to split the input image into patches of size B × C × K^2 × N, where B represents the batch size, C is the number of channels times the stride squared, and N is the number of patches. Merging operations are then performed to transform the reshaped patches to B × C × W × H, where W and H represent the width and height of the merged patches, respectively. As a result, the model captures both fine-grained local details and broader global features more effectively, addressing the issue of losing local continuity while still maintaining global context. §.§.§ Efficient Self-Attention Especially in models like SegFormer with smaller patch sizes like 4 × 4, the self-attention layer presents computational challenges. The traditional multi-head attention process involves creating matrices for query (Q), key (K), and value (V), all of which have dimensions N (H × W) × C, and performing computations using the scaled dot-product attention equation as shown in equation <ref>. 
Attention(Q, K, V) = Softmax(QK^T/√(d_head))V When dealing with large input images, the computational complexity of the provided equation <ref> can lead to a significant increase in model weight. Therefore, the method that reduces the N (H × W) channels of K and V by applying a sequence reduction process based on a predefined reduction ratio is proposed <cit.>. It is possible to reshape the equation by dividing N by R and multiplying C by R. C × R dimensions can be reduced to C dimensions by linear operation, resulting in N/R× C dimensions for Key and Value matrices. Especially useful for tasks like semantic segmentation, this method efficiently manages computational complexity while preserving representation power (equation <ref>). [ K̂ = Reshape(N/R, C · R)(K); K = Linear(C · R, C)(K̂) ] §.§.§ Mix-FFN ViT <cit.> uses positional encoding for local information, which comes with fixed input resolution constraints and suffers performance drops as resolution changes. In order to overcome this issue, researchers replace positional encoding with a Convolutional 3 × 3 kernel in the FFN, asserting its non-essential role and providing flexibility without resolution restrictions. x_out = MLP(GELU(Conv_3×3(MLP(x_in)))) + x_in In this regard, the equation <ref> simply adds a Convolutional 3×3 layer to the existing FFN within the Transformer encoder. By replacing traditional positional encoding with this adaptation, the model performance is maintained while fewer parameters are required. §.§ Decoder The CNN and transformer paths of our model result in substantial model complexity and size. In order to balance this, the decoder is designed to be as simple as possible. A 256 × 256 × 1 feature map is generated by concatenating the outputs from both paths, as shown in Fig. <ref>. Feature maps from each stage are combined to create a multi-scale, multi-layer feature map, which is then used to create the final output. By integrating the strengths of both paths, this approach optimizes performance and efficiency while simplifying the decoder. § EXPERIMENTAL SETTINGS §.§ Training Setup Models were trained and tested with a batch size of 16 on a GPU cluster with 8 nodes, each with 8 NVIDIA RTX A5000 (24 GB on-board GPU memory), running Rocky Linux 8.5 and using Python 3.10 and PyTorch 1.13. For all models, we use early stopping with patience of 10 epochs to ensure convergence and avoid over-fitting. §.§ Data The refined dataset contains 12,000 images with and without cracks, along with the ground truth for each image. A random shuffling method was used to distribute the dataset between training, testing, and validation sets, with a ratio of 8:1:1. As a result, our dataset consists of 9,600 samples for the training, 1,200 samples for the testing, and 1,200 samples for the validation. § EXPERIMENTS §.§ Benchmarks To assess the performance improvement of our model over traditional segmentation models, we compare it against FCN <cit.> and UNet <cit.>. Additionally, we include the DeepCrack2 model <cit.>, a widely benchmarked crack detection model, as well as SegFormer <cit.> and HrSegNet-B64 <cit.>, which represent SOTA models in semantic segmentation and crack segmentation, respectively. The performance of our model is assessed both quantitatively and qualitatively. For all models, we used the Adam optimizer, set the initial learning rate to 1.00e-04, and used a batch size of 16. Other hyperparameters are provided in Table <ref>. 
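A schematic of the shared training configuration reported above (Adam with an initial learning rate of 1e-4, batch size 16, and early stopping with a patience of 10 epochs) is sketched below in PyTorch; the model, data loaders, and criterion are placeholders rather than the authors' actual training script.

import torch

def train(model, train_loader, val_loader, criterion, max_epochs=300):
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    best_val, patience, bad_epochs = float("inf"), 10, 0
    for epoch in range(max_epochs):
        model.train()
        for images, masks in train_loader:
            opt.zero_grad()
            loss = criterion(model(images), masks)
            loss.backward()
            opt.step()
        model.eval()
        with torch.no_grad():
            val = sum(criterion(model(x), y).item() for x, y in val_loader)
        if val < best_val:
            best_val, bad_epochs = val, 0      # a best checkpoint would be saved here
        else:
            bad_epochs += 1
            if bad_epochs >= patience:         # early stopping
                break
    return model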
§.§ Loss Functions Experimentally, it has been demonstrated that class imbalance in datasets may be effectively addressed not only by using a well-designed architecture, but also by using a well-designed loss function <cit.>. To improve the robustness of our model, we evaluated the performance of various loss functions: Binary Cross Entropy (BCE) <cit.>, Dice <cit.>, the fusion of BCE and Dice <cit.>, and Recall Cross Entropy (RecallCE) <cit.>. The BCE loss has been chosen for its ability to handle skewed pixel distributions effectively. In scenarios where one class significantly outweighs others, BCE loss computes an individual loss for each pixel to ensure proportional class contribution, mitigating any dataset bias. By treating pixels equally, the model is able to focus on accurately classifying minority classes, such as crack pixels, without being biased by dominant classes. BCE loss is described by the following equation: BCE(y, ŷ) = -1/N ∑_i=1^N ( y_i log(ŷ_i) + (1-y_i) log(1-ŷ_i) ) where N is the total number of elements (pixels in the case of segmentation), y_i is the ground truth label (0 or 1) for the i-th element, and ŷ_i is the predicted probability for the i-th element. Dice loss, which is equivalent to the F1-score, addresses class imbalance by focusing on capturing the overlap between predicted and ground truth masks, which helps address the challenge of minority class representation. By emphasizing object boundaries and assigning non-vanishing gradients to the minority class, Dice loss ensures accurate prediction and better learning for smaller classes. Dice Loss = 1 - 2 · Intersection/Union Its ability to sensitively measure the similarity between prediction and ground truth makes it particularly useful for precise segmentation. It can be used alongside other losses such as the BCE loss to keep a balance between handling class imbalance and capturing fine details <cit.>. Here, we used a combination of BCE and Dice losses as follows: BCE-DICE = λ * BCE loss + (1-λ) * Dice loss where λ represents the weight (importance) attributed to the two loss terms and takes values between 0 and 1. Previous methods attempt to improve the standard cross-entropy loss in segmentation tasks by incorporating weighted factors. However, this approach can lead to issues such as reduced precision and an increased false positive rate for minority classes. To address this problem, the RecallCE loss is proposed as a hard-class mining solution. It reshapes the traditional cross-entropy loss by dynamically adjusting class-specific loss weights according to a real-time recall score, offering a more effective way to handle class imbalance and improve segmentation precision <cit.>. We evaluate the performance of our model by comparing the RecallCE loss with the other losses previously mentioned, to determine if it enhances our model's effectiveness. The equation for RecallCE loss is as follows: RecallCE = -∑_c=1^C ∑_n: y_n=c (1-R_c,t) log(p_n,t) where R_c,t represents the recall of class c during optimisation iteration t. § EVALUATION We carry out two prior studies to examine specific aspects of our model: (1) assessing the impact of individual encoder paths and (2) evaluating the performance of various loss functions. Initially, we aim to understand how distinctively each encoder extracts features. Then, we investigate which of the aforementioned individual loss functions and the combination of BCE and Dice losses (by assigning different weights) yields the best results in crack segmentation.
Once we determine the loss function providing the optimal performance, we compare our final model against SOTA crack segmentation models. §.§ Encoder Paths We conduct an experiment involving the training and testing of the two different encoders to assess their abilities in feature extraction. Specifically, we aim to determine whether convolutional layers perform well at extracting local features while transformers are adept at capturing global features. Each path was trained as an independent network by removing the influence of the other, and was compared against the fused network. An identical loss function was used for all networks for a fair comparison (BCE-DICE loss with λ = 0.5). The results presented in Table <ref> indicate that the CNN path achieves a higher precision score than the transformer path, while the latter excels in terms of recall. This suggests that the transformer tends to produce more false positives, mistakenly predicting non-crack pixels as cracks. On the other hand, the CNN path tends to produce more false negatives, possibly misclassifying crack pixels as non-cracks. These results suggest that the transformer path captures broader areas as cracks, while the CNN path captures finer details. Combining the two paths into a fused model leverages the power of both and improves the accuracy and precision of crack segmentation, without significantly sacrificing recall. Fig. <ref>, shows example segmentations produced by each of the two encoders, further illustrating the differences in their performance. §.§ Loss functions We utilize various types of losses (BCE, Dice, and RecallCE) for addressing class imbalances and capturing fine-grained details. Our experiments reveal that combining BCE and Dice losses provides a balance between recognizing dominant classes and accurately segmenting minority groups, resulting in a more effective model for imbalanced data than when using the loss functions individually (Table <ref>). We assess these aspects by varying the weights assigned to BCE and DICE loss functions. When BCE and DICE loss weights are roughly equal, the model generally performs better. A BCE-DICE loss with λ = 0.2 outperforms other values in all metrics except for precision. Precision peaks at 0.817, but this trades off with recall, resulting in relatively lower performance in other metrics. Expectedly, RecallCE results in the highest recall score, as it penalizes the model heavily for false negatives while producing well-balanced results for the other metrics. However, this loss is behind BCE-DICE in terms of accuracy and precision, indicating that it may be less effective at addressing class imbalance. In summary, the BCE-DICE loss with λ = 0.2 exhibits the best model performance, and was chosen as the loss function for our final model. §.§ Comparison against SOTA models We compare our best model using BCE-DICE loss (λ = 0.2) to the benchmark models in our experiment. As demonstrated in Table <ref>, our model notably outperforms the other five models. Our model achieved an accuracy of 0.971, a precision of 0.804, a recall of 0.744, an F1-score of 0.770, and an IOU score of 0.630. These results demonstrate the model's exceptional proficiency in crack segmentation tasks. Qualitatively, our model exhibits significant improvements relative to existing models (Fig. <ref>). As shown by rows (A) and (C), our model handles crack discontinuity more accurately. Furthermore, in (B), our model excels at identifying vague cracks that other models fail to detect. 
When it comes to cracks on different types of surfaces, the proposed model works effectively regardless of the surface. While crack detection on brick surfaces is challenging due to the ambiguity between cracks and brick borders and resulting shadows, as shown in (D), our model is adept at handling such scenarios. On the other hand, models such as FCN incorrectly predict brick borders as cracks. Additionally, a challenge in crack detection involves identifying non-crack areas within cracked regions, which our model effectively addresses, as evident in (E) and (G). Example (H) demonstrates that our model works relatively well on blurred images. Furthermore, (F) demonstrates the superiority of our model in detecting intricate crack contours, in comparison to the other models that have significantly more false positives. § LIMITATIONS Although our model outperforms other benchmarked models in performance, it still exhibits certain limitations. Two primary shortcomings of our model have been identified and presented in Fig. <ref>. Our model outperforms the other models in detecting thicker cracks within web-shaped crack patterns, except for UNet; however, it still faces challenges in identifying the thinner branches in these patterns. As illustrated in example images (A) to (D), while our model successfully detects the most prominent cracks, it struggles with extremely fine and delicate ones. This indicates an area where further improvement is possible. Secondly, our model is sensitive to disruptions caused by distortions, such as occlusions and watermarks. (E) illustrates a situation where the watermark located within the crack is identified as a non-crack area. However, in (F), even with the presence of a watermark, our model fails to predict the cracks hidden by a translucent occlusion. Furthermore, (G) demonstrates an issue where the model does not recognize letters on the road as part of the background. The variation in model performance may be attributed to the clear color contrast between the letters and the background, causing confusion for the model. It should be emphasized that all these challenges are common to all crack detection models. However, our model demonstrates better overall precision and elaboration in crack detection compared to others. We believe that there is room for improvement to the model architecture for addressing these limitations. Additionally, techniques such as Generative Adversarial Networks (GANs) and meta-learning can be harnessed to generate synthetic data during the pre-processing phase to further deal with the lack of data and potential class imbalance issues. Furthermore, recognizing the increasing need for 3D crack image segmentation, the creation of high-quality 3D crack image datasets becomes imperative to advance this domain. § CONCLUSION In this research, we have proposed a novel model for crack segmentation called Hybrid-Segmentor. This architecture incorporates two distinct encoder paths, namely the CNN path and the Transformer path. For the CNN path, we use the well-established ResNet-50 architecture<cit.>, which is renowned for its ability to extract local features. Additionally, we introduce the concept of Overlapping Patch Embedding, Efficient Self-Attention, and Mix-FFN in the Transformer path, derived from the SegFormer <cit.> model. In combining two encoders, these additions optimize computational efficiency and model size, thereby soothing capacity problems. 
We further simplify the model with a relatively simple decoder to minimize its size. Through experimentation, Hybrid-Segmentor emerges as a SOTA model, outperforming other renowned benchmark models. Our model effectively takes advantage of the two encoder paths, as shown by the preliminary studies evaluating its performance in extracting local and global crack features. Based on the findings of these studies, the BCE-DICE loss, weighted at 0.2 on the BCE term, yields the best performance. In the qualitative analysis, our model improves in addressing discontinuities, detecting small non-cracked areas within cracks, and recognizing cracks even in low-quality images and on diverse surfaces. It is capable of capturing more details in crack contours than previous models. Furthermore, our study introduces a data refinement methodology for combining publicly available datasets, comprising 13 open-source crack datasets with refined ground truths. Since these datasets initially used diverse standards for creating ground truth, we merge and improve them to ensure equivalence, thereby increasing their reliability and precision. In addition, we employ a specific data augmentation approach in order to address the issue of class imbalance within our dataset. By extracting data containing cracks with more than 5000 pixels and augmenting them, we are able to incorporate these samples into our dataset. Our effort resulted in a dataset consisting of 12,000 crack images, each with its corresponding ground truth. To enhance our crack detection model, future work needs to concentrate on enhancing the architecture to efficiently recognize thin, web-shaped cracks and those that are hidden by occlusions. We could also explore the possibility of using GANs and meta-learning to create synthetic data to overcome data scarcity, particularly in the development of 3D crack image segmentation datasets. § ACKNOWLEDGMENT The research work of Dr. Alessandro Artusi and Dr. Xenios Milidonis has been partially funded by the European Union's Horizon 2020 research and innovation programme under grant agreement No. 739578 and by the Government of the Republic of Cyprus through the Deputy Ministry of Research, Innovation and Digital Policy.
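For reference, a minimal PyTorch-style sketch of the BCE-Dice objective adopted in this work (with λ weighting the BCE term, and λ = 0.2 in the final model) is given below; it is our own illustrative reconstruction under those assumptions, not the authors' released implementation.

import torch
import torch.nn.functional as F

def bce_dice_loss(logits, target, lam=0.2, eps=1e-6):
    # Binary cross-entropy computed on the raw logits.
    bce = F.binary_cross_entropy_with_logits(logits, target)
    # Soft Dice term computed from the predicted probabilities.
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum()
    dice = 1.0 - (2.0 * inter + eps) / (prob.sum() + target.sum() + eps)
    return lam * bce + (1.0 - lam) * dice

loss = bce_dice_loss(torch.randn(4, 1, 256, 256),
                     torch.randint(0, 2, (4, 1, 256, 256)).float())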
http://arxiv.org/abs/2409.02448v1
20240904050634
Detecting Korean Food Using Image using Hierarchical Model
[ "Hoang Khanh Lam", "Kahandakanaththage Maduni Pramuditha Perera" ]
cs.CV
[ "cs.CV", "cs.AI" ]
Detecting Korean Food Using Image using Hierarchical Model Hoang Khanh Lam, Department of Computer Engineering, Dong-A University, Busan, South Korea ([email protected]); Kahandakanaththage Maduni Pramuditha Perera, Department of Computer Engineering, Dong-A University, Busan, South Korea ([email protected]) § ABSTRACT A solution was made available for Korean food lovers with dietary restrictions to identify Korean food before consuming it. Just by uploading a clear photo of the dish, people can get to know what they are eating. Image processing techniques together with machine learning helped to come up with this solution. Image Processing, Image Multimedia, Food Detection, Machine Learning, Korean Food § INTRODUCTION With the immense popularity of Korean culture, Korean food has gained huge recognition and popularity throughout the world. Many foreign tourists come to South Korea to enjoy the Korean food culture. On the other hand, since most of them do not speak or understand the Korean language, selecting food items suitable for them has become a challenge. Moreover, health problems related to eating and food in general are increasing in both number and severity. In particular, many foreigners living in South Korea have dietary restrictions due to religion, health-related issues, or personal preferences. In addition, accidentally eating unfamiliar ingredients in new dishes, especially when traveling, can easily trigger allergies and lead to unpredictable consequences. Food image recognition involves the utilization of visual data to determine the name or label of a food. It has the potential to contribute to various applications, such as dietary assessment, food inspection, food recognition, and food recommendation. § CHALLENGES Obtaining a suitable dataset was the very first challenge faced. After obtaining a proper dataset, the model training phase was initiated. Similarity among different types of Korean food made the training phase more of a challenge. Differentiating food with similar color and texture was difficult. At the same time, dealing with poorly taken or incomplete pictures was also an obstacle. Since most Korean dishes come with several side dishes, detecting multiple foods at the same time was also an added difficulty. § RELATED WORK Research in the field of computer vision and food image recognition, including the identification of specific cuisines such as Korean food, is continuously evolving. In 2019, the Korea Food Research Institute in Wanju conducted a study on "The development of food image detection and recognition model of Korean food for mobile dietary management". According to the results of this study, a high accuracy of 91.3% was obtained with a 0.4 ms recognition time. In this study, the authors collected food images by taking pictures or by searching for web images and built an image dataset for use in training a complex recognition model for Korean food.
Augmentation techniques were performed in order to increase the dataset size <cit.>. In October 2022, a group of researchers from the Department of Computer Science and Engineering, University of Seoul, conducted a study on "Development of Korean Food Image Classification Model Using Public Food Image Dataset and Deep Learning Methods". Their model can be used in a system that automatically classifies the type of food that the user has consumed. This work will benefit the community of users and researchers using trained models. It also benefits users of systems that automatically classify and record the types of food they consume. Users can classify images of similar Korean food into 150 classes using the model and record what they eat. The key contribution of this study to the research community is that they created a pre-processed dataset for training a Korean food image classification model. Also, they used several pre-trained deep neural networks to train models that classify Korean food images into 150 classes and evaluated the classification accuracy and the time required for model training <cit.>. § DATASET The images uploaded for detection have to be very clear. At the same time, the photos should be taken with a mobile phone, as the images used for training and deploying the system were also obtained through the camera of a mobile phone. Since mobile phones are portable, it is easy for users to capture a wide range of food images. This ensures a large and diverse dataset for training food image recognition models. Mobile phone photos are captured in real-world scenarios with different lighting, angles, and backgrounds. This variety aids in training models to identify food in diverse environments, enhancing their practical usability. Using mobile phone photos for food recognition is convenient for users, as they can easily snap pictures of their meals without the need for specialized equipment. This ease of use encourages greater user participation and contributes to the creation of extensive datasets. §.§ Public Data Once the background research was done, obtaining a proper dataset was a major challenge. Several options, such as aihub and Roboflow, were considered. Though there were several suitable datasets on aihub, obtaining such data was difficult for foreigners. After reviewing many datasets, one suitable for our system was obtained from Roboflow. §.§ Data-set Construction As a basic approach, categorizing food based on visual aspects was initiated. A mechanism to group food based on color, texture, contrast, size, shape, arrangement, temperature, layering, and theme was taken into account. Korean food was divided into four main categories, namely Main Dish, Rice, Soup, and Side Dish. Each category was divided further into subcategories accordingly. After completing the background research, the application of a hierarchical model to group the data was considered. To create the Korean food recognition system, the following steps were followed: defining the scope of the dataset, compiling a list of dishes, gathering high-quality images, ensuring diversity, and using data augmentation techniques; labeling each image with the corresponding dish name, organizing the dataset into training, validation, and testing sets, and balancing the dataset. Applicable additional metadata was included. Documentation describing the dataset was created, including the number of images, dish categories, and relevant details. The system was trained and evaluated using machine learning or deep learning techniques.
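To illustrate the two-level grouping described above (food types at the top level, individual dishes below), a minimal sketch of such a label hierarchy is shown below; the dish names are placeholders of our own choosing, not the actual classes in the dataset.

FOOD_HIERARCHY = {
    "Main Dish": ["bulgogi", "galbi"],
    "Rice": ["bibimbap", "kimbap"],
    "Soup": ["kimchi_jjigae", "seolleongtang"],
    "Side Dish": ["kimchi", "japchae"],
}

# Map every food item to its parent food type; the type labels are what the
# first training stage of the hierarchical model would use.
ITEM_TO_TYPE = {item: ftype
                for ftype, items in FOOD_HIERARCHY.items()
                for item in items}

assert ITEM_TO_TYPE["bibimbap"] == "Rice"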
§.§ Hierarchical Dataset A hierarchical model is a structure that organizes elements into levels or layers, where each level represents a specific degree of abstraction or detail. The arrangement of elements reflects their hierarchical relationships and dependencies. Hierarchical models are often represented as trees, where the topmost level is the root, and subsequent levels branch out like the limbs of a tree. This visual representation makes it easy to understand the relationships and navigate through the structure. A hierarchical model could be structured based on ingredients, allowing for the categorization of dishes according to shared components. For example, dishes made with similar base ingredients like kimchi, red chili paste, or rice could be grouped together. Also can provide a user-friendly interface, allowing users to navigate through categories and subcategories easily. In the context of designing a menu, a hierarchical model can assist in creating a well-structured and visually appealing layout. Dishes can be grouped logically, making it easier for customers to navigate and choose their preferences. § OUR METHOD In this part, we present a system for categorizing food products. The approach attempts to address two main problems: the dataset's class imbalance and the absence of obvious visual relationships between various food products. In order to overcome these difficulties, we establish a tier of hierarchy between "food types" and "food items". This is achieved in the training phase by updating the Convolution Neural Network (CNN) model by multi-stage transfer learning and clustering visually related food items iteratively. The trained model in the multi-stage transfer learning step is used directly for food image classification during the validation and testing stages. Besides, to prove the effectiveness of the hierarchical model, we also train the flat classification model as a baseline model. §.§ Flat Classification This research proposes a YOLOv8-based method for smart city food image detection. The suggested methodology aims to mitigate some constraints of prior research and offer enhanced precision, instantaneous identification, adaptability, diminished false alerts, and economical viability. The YOLO collection of algorithms has gained interest in computer vision. Because it maintains a high level of accuracy while maintaining a small model size, YOLO is quite popular. YOLO models may be trained on a single GPU, making them accessible to a broad range of developers. It may be reasonably installed by machine learning specialists on edge hardware or in the cloud. YOLOv8, the most advanced and recent YOLO technique, can be used for segmentation, object recognition, and image classification. Ultralytics, the same company that created the influential YOLOv5 model that helped to define the industry, is the producer of Yolo v8. YOLOv8 contains a few architectural improvements and updates over YOLOv5<cit.>. The YOLOv8 model does not use anchors. It suggests that the object's center is explicitly estimated rather than the item's distance from a known anchor box. After inference, a difficult post-processing step called Non-Maximum Suppression (NMS) is sped up by anchorfree detection, which reduces the quantity of box predictions. Five models are available for identification, segmentation, and classification (YOLOv8n, YOLOv8s, YOLOv8m, YOLOv8l, and YOLOv8x, respectively). 
YOLOv8 Nano is the smallest and fastest of them all, whereas YOLOv8x is the most accurate but slowest of them all. The following are the differences from YOLOv5: * C2f module used in place of the C3 module. * Change the Backbone's initial 6x6 Conv to a 3x3 Conv. * Remove Convs Nos. 10 and 14 from the YOLOv5 configuration. * Change the initial 1x1 Conv in the bottleneck to a 3x3 Conv. * Remove the objectness step using the decoupled head. The fundamental building block was changed, using C2f in place of C3, and the stem's original 6x6 conv was swapped out for a 3x3. The module is summarized in Figure <ref>, where "f" is the total number of features, "e" the growth rate, and CBS is a block consisting of a Conv, a BatchNorm, and a SiLU thereafter. The kernel dimension of the first convolution was changed from 1x1 to 3x3; however, the bottleneck remains the same as in YOLOv5. §.§ Hierarchical Classification We use multi-stage hierarchical transfer learning to take advantage of the hierarchical structure that has been built and to address the problem of class imbalance in food products. By using this method, we can move knowledge from the top level (food types) to the lower level (food items). The pipeline of our multi-stage hierarchical transfer learning approach is depicted in Fig. <ref>. Our training procedure is iterative and consists of two stages. * First stage: We use a CNN model (YOLOv8) that has been pre-trained on ImageNet for the first iteration [8]. Next, using food types as labels, this model is connected to a fully connected layer with a size of T = 4. We keep the parameters in the core of the CNN model that was trained in the previous iteration for use in later iterations. Using food types as labels, a new CNN model is built from scratch and connected to a fully connected layer with a size of T = 4. * Second stage: We build a new CNN model by leveraging the parameters from the foundation of the CNN model trained in the first stage. This model is trained with food items as labels and is linked to a fully connected layer with a size of N = 32. We use the cross-entropy loss, which is expressed as follows, as our loss function at each training stage: L = -∑_i=1^N y_s,i log(p_s,i) Here, s denotes the stage at which the model is being trained, y_s,i represents the ground truth label i at stage s, and p_s,i indicates the confidence score for predicting label i at stage s. The CNN model is then used in the food item merging process to produce new merging results after being improved through this multi-stage hierarchical transfer learning process. The next iteration of the multi-stage hierarchical transfer learning process is then started by transferring the parameters from the model's core. This loop continues until either a maximum of 5 iterations is reached, to mitigate the risk of overfitting, or the validation loss on food item classification stops decreasing in subsequent rounds. Ultimately, predictions about food items are retrieved using the CNN model that was trained in the last iteration of the second stage. The model preserved from the first stage is utilized to forecast food categories for experimental comparison with related research. § EXPERIMENT We evaluate our proposed method based on the average classification accuracy of predicting food items. We divide the food image dataset using an 8:1:1 ratio into training, validation, and testing sets in order to train the CNN model.
The validation set is used to evaluate whether the trained model should be saved for further testing, whereas the training set is used to train the model. The testing set is set aside for the final assessment of the model. The CNN architecture of our suggested solution is based on YOLOv8. We employ the Adam optimizer for optimization, starting with a 0.001 learning rate. The multi-stage hierarchical transfer learning process is trained over 100 epochs for each stage. If the loss does not decrease after an iteration, we reduce the learning rate by a factor of 0.03. At first, the flat classification model was used to train the model, and an accuracy of 85.32% was obtained. Since the data was not arranged and sorted in a proper manner, we decided to come up with a neater and more organized classification using a hierarchical model. The hierarchical approach makes it easier to interpret and explain the relationships between different classes. This can be important in applications where understanding the reasoning behind the classification is crucial. The hierarchical approach can also reduce the complexity of the classification task by breaking it down into a series of simpler subtasks. Each level of the hierarchy deals with a more specific aspect of the classification, making it easier to manage and interpret. After training the model using the hierarchical approach, an accuracy of 88.50% was achieved, which outperformed the flat classification. § CONCLUSION In this work, we took advantage of a Korean food image dataset, which assigns a related food item to each food image. Within the dataset, we treat food items as subclasses of food types, creating a hierarchical structure. We can obtain the relevant nutritional composition data by predicting food items, which advances the objective of image-based dietary evaluation. We used a multi-stage hierarchical transfer learning approach to update the CNN model for extracting image features iteratively during the training phase in order to create visual relations among food items. This approach can also address the problem of class imbalance across food items. Despite these efforts, there exist additional plausible approaches that may enhance the precision of food item classification even further. In future work, our goal is to apply concepts from multi-modal learning to improve the way we classify food products and, consequently, to refine food image categorization on food items.
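To make the two-stage training described in the method section concrete, a minimal PyTorch sketch is given below. The backbone is a toy placeholder rather than the YOLOv8 classifier, and only the structure (a 4-way food-type head in stage one, a 32-way food-item head in stage two, both trained with cross-entropy) reflects the paper.

import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten())
type_head = nn.Linear(32, 4)    # stage 1: food types (T = 4)
item_head = nn.Linear(32, 32)   # stage 2: food items (N = 32)
ce = nn.CrossEntropyLoss()

def stage_loss(images, labels, head):
    # Shared backbone features followed by the stage-specific classifier head.
    return ce(head(backbone(images)), labels)

# Stage 1: optimize backbone + type_head with food-type labels;
# Stage 2: reuse the backbone parameters and optimize item_head with
#          food-item labels, iterating as described above.
print(stage_loss(torch.randn(2, 3, 64, 64), torch.tensor([0, 2]), type_head))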
http://arxiv.org/abs/2409.02724v1
20240904135940
Surgical Task Automation Using Actor-Critic Frameworks and Self-Supervised Imitation Learning
[ "Jingshuai Liu", "Alain Andres", "Yonghang Jiang", "Xichun Luo", "Wenmiao Shu", "Sotirios Tsaftaris" ]
cs.RO
[ "cs.RO" ]
Surgical Task Automation Using Actor-Critic Frameworks and Self-Supervised Imitation Learning § ABSTRACT Surgical robot task automation has recently attracted great attention due to its potential to benefit both surgeons and patients. Reinforcement learning (RL) based approaches have demonstrated promising ability to provide solutions to automated surgical manipulations on various tasks. To address the exploration challenge, expert demonstrations can be utilized to enhance the learning efficiency via imitation learning (IL) approaches. However, the success of such methods normally relies on both states and action labels. Unfortunately, action labels can be hard to capture, or their manual annotation is prohibitively expensive owing to the requirement for expert knowledge. It therefore remains an appealing and open problem to leverage expert demonstrations composed of pure states in RL. In this work, we present an actor-critic RL framework, termed AC-SSIL, to overcome this challenge of learning with state-only demonstrations collected by following an unknown expert policy. It adopts a self-supervised IL method, dubbed SSIL, to effectively incorporate demonstrated states into RL paradigms by retrieving from demonstrations the nearest neighbours of the query state and utilizing the bootstrapping of actor networks. We showcase through experiments on an open-source surgical simulation platform that our method delivers remarkable improvements over the RL baseline and exhibits comparable performance against action-based IL methods, which implies the efficacy and potential of our method for expert demonstration-guided learning scenarios. Surgical task automation, deep reinforcement learning, imitation learning. § INTRODUCTION Reinforcement learning (RL), which is a specialized branch within the broad field of artificial intelligence (AI) and provides a class of machine learning techniques for automated decision-making <cit.>, has witnessed rapid advancements and impactful innovations in various domains, with medical surgical assistance being one prominent area of application <cit.>. In recent years, deep learning models have achieved great success in diverse domains via the spectacular learning capacity and rich representation of deep neural network architectures <cit.>. Reinforcement learning, empowered by advances in deep learning, is transforming medical surgical assistance and automated decision-making <cit.>, and the evolution of these technologies continuously enhances the accuracy, efficiency, and reliability of medical treatments and interventions <cit.>, in the pursuit of improved patient outcomes. The learning of RL models traditionally depends on large amounts of data collected via online interactions or extensive exploration to optimize the policy for decision-making. The quality and efficiency of model training are associated with the exploration capacity, which can be improved by incorporating expert demonstrations into the learning process <cit.>. Despite the success achieved by imitation learning methods with expert demonstrations <cit.>, it can be impossible to obtain the actual actions of the expert in many scenarios. For instance, sensors used to record actions can be affected by noise and introduce inaccuracies that are detrimental to action label reliability.
Moreover, demonstrated actions can be executed continuously and are thus hard to convert into actions suitable for robot learning. Manually annotating actions, especially in a detailed and precise manner, is prohibitively expensive and time-consuming, which particularly holds for complex robotic tasks that require expert knowledge. It is thus intriguing and promising to explore avenues for utilizing demonstrations which are collected with an unknown expert policy and composed of pure states <cit.>. We therefore propose a novel approach to incorporating state-only imitation learning into reinforcement learning paradigms. We can then drive the agent policy towards the demonstrated policy to improve the learning efficiency while maintaining the exploration efficacy to retrieve optimal solutions. As we detail in the related work section in Sect. <ref>, our method is different from other works that are trained in an adversarial framework or only perform guidance on action value approximation <cit.>. Our method leverages state-only demonstrations to guide the agent exploration in a self-supervised manner, as demonstrated in Fig. <ref>. Our method shows performance improvements and circumvents known issues with adversarial methods such as instability and limited generalization leading to performance degradation <cit.>. In this paper, we are particularly interested in addressing the challenge of learning with expert demonstrations only consisting of states for surgical task automation. To tackle this challenge, we present an actor-critic RL framework, termed AC-SSIL, which adopts a novel self-supervised imitation learning method, dubbed SSIL, to guide the learning process by retrieving from demonstrations the nearest neighbours of the states visited by the agent and using the target actor network to produce pseudo action labels for exploration guidance. Different from self-imitation learning proposed in <cit.> to reproduce the past good experiences, our method utilizes expert demonstrations that only have state observations to aid agent learning. The contributions of this work are summarized as follows: * An actor-critic RL framework, termed AC-SSIL, to learn policies for automated surgical tasks by incorporating expert demonstrations into RL paradigms to enhance model exploration; * A self-supervised imitation learning method, dubbed SSIL, to leverage demonstration data consisting of pure states to improve the agent training while mitigating the necessity of action annotations; * Our experiments demonstrate that the proposed AC-SSIL yields significant improvements compared to the RL baseline, and outperforms or is on par with other existing approaches which rely on action labels. Our ablation studies assess the impact of algorithmic designs and show the efficacy of our method in improving model learning and its insensitivity to specific parameters. § RELATED WORKS In this section, we begin with a review of RL methods for surgical task automation and assistance, and then introduce methods for imitating expert behaviour which fall under one of two categories: learning with expert data including demonstrated actions and learning from pure observations.
§.§ Reinforcement Learning for Surgical Assistance The evolution of reinforcement learning, a form of machine learning, has facilitated advancements in automated decision-making and assistance in medical surgeries and provided a powerful paradigm where an agent is trained via interactions with an environment to make decisions that can maximize the cumulative rewards, e.g. the success rate and patient recovery <cit.>. Reinforcement learning approaches enable agents to acquire necessary skills to perform surgical tasks from data collected via online interactions or from previous trials, and have demonstrated improvements in model applicability, flexibility, and generalization capacity in automating surgical tasks <cit.>. An RL framework for learning surgical manipulation skills was proposed in <cit.>, which deploys an agent in am implicit curriculum learning scheme and leverages the knowledge of the prior critic via Q-value function transfer. To deal with long-horizon tasks in surgeries, an approach was introduced in <cit.> to divide a task into several sub-tasks and respectively train sub-task policies which are smoothly connected via a value-informed skill chaining method. Despite the promising progress achieved, executing RL methods in surgical tasks can still be challenging due to exploration problems, e.g. sample efficiency and the balance between exploration and exploitation, which potentially impact the performance of RL algorithms. We therefore develop a framework to guide and enhance model learning by leveraging a few expert demonstrations. §.§ Imitation Learning with Expert Demonstrations Expert demonstrations have proven useful in improving the exploration efficiency of RL models by emulating the expert behaviour via imitation learning (IL) approaches <cit.> which show to help the model learn skills to accomplish complicated tasks. Behaviour cloning (BC) <cit.> emerged as a method for imitating expert behaviours from demonstrations by minimizing the distance between the actions proposed by the agent and the demonstrated actions, which can be implemented with offline datasets or incorporated into RL algorithms to offer a regularization for model learning. The work in <cit.> learned a deep neural network control policy for high-speed driving by applying IL to mimic an expert policy. The works in <cit.> explored the designs of adding BC term to RL and showed remarkable improvements for offline RL, owing to the behaviour-regularized actor-critic algorithms. In <cit.>, it was evidently demonstrated that the utilization of a few demonstration trajectories is conducive to the performance of RL models via the combination of the BC loss and the RL objective to facilitate the learning procedure. An advantage weighted actor-critic framework, termed AWAC <cit.>, was proposed to transfer knowledge from previously collected experiences to prevent inefficient exploration and offer a less conservative training paradigm by re-weighting the objective via the estimated action values. The method introduced in <cit.> guides model learning by incorporating demonstrations into agent exploration to improve training efficiency. Although useful in practice, those protocols normally require actions in tandem with the demonstration data. Such actions can be expensive and implausible to collect and annotate. Hence such protocols cannot be applied in state-only regimes where only the environment observations are recorded in expert demonstrations and no action annotations are available. 
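To make the action-based recipe discussed above concrete, the following is a minimal PyTorch-style sketch of a behaviour-cloning-regularized actor loss in the spirit of the BC-plus-RL combinations cited here; the actor/critic callables, the concatenated critic input, the batch shapes, and the weighting constant are illustrative assumptions rather than details taken from any of the cited works. It also makes explicit why such objectives need demonstrated actions, which is precisely what the state-only setting lacks.

import torch

def bc_regularized_actor_loss(actor, critic, states, demo_states, demo_actions, bc_weight=1.0):
    # RL term: make the actor propose actions the critic values highly.
    rl_term = -critic(torch.cat([states, actor(states)], dim=-1)).mean()
    # BC term: regress towards demonstrated actions -- this is where action labels are required.
    bc_term = ((actor(demo_states) - demo_actions) ** 2).mean()
    return rl_term + bc_weight * bc_term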
To extend demonstration guidance to state-only scenarios, we propose to guide RL exploration with pure states through a self-supervised imitation learning approach. §.§ Learning from Observations To apply imitation learning in state-only scenarios where expert demonstrations only contain states, state cloning (SC) <cit.> was proposed to enforce the agent to produce state transitions that are similar to those picked up from demonstrated trajectories <cit.>. A similar method performs behavior cloning using the actions inferred by an inverse dynamics model <cit.>. Although they offer an alternative to BC approaches, such methods involve training a separate model to capture the environment dynamics, which can cause training instability and lead to performance degradation when the prediction accuracy is limited, e.g. the environment changes are too complex to learn or the amount of samples is insufficient to develop a high-precision model. Additionally, they may require establishing a series of task-specific dynamics models when considering diverse tasks that have distinct state and action spaces and demand various capacities to model the environment dynamics. Alternative approaches that can leverage state-only demonstrations were introduced in <cit.>, which extend a generative adversarial imitation learning (GAIL) framework <cit.> to optimize the policy agent towards producing state transitions that are indistinguishable from demonstration data. Inverse reinforcement learning (IRL) methods <cit.> were proposed to learn and improve upon the demonstrated behaviour by inferring a reward function from demonstrations which allows an agent to make decisions according to the inferred rewards. Although providing methods for imitation learning without explicit action labels, they can suffer from training instability and difficulties in convergence, which makes it challenging to effectively develop agents <cit.>. The model performance can be significantly affected by the quality and quantity of demonstrations, particularly in complex environments with a large state space, and the learned policy might not generalize well to unseen situations when a large amount of examples that are adequately representative of the task space are unavailable <cit.>. Advances in reward engineering for RL were made to leverage demonstrated state trajectories to refine the reward function used to train the agent, by measuring the difference from demonstrations in the state space to regularize the action value approximation <cit.>. Despite efficient implementations and efficacy in reshaping the reward function, those avenues aim to enhance the estimation of action values and provide no guidance on updating policy agents and penalizing actions deviating from the demonstrated behaviour. These factors impede the implementation of imitation learning techniques and motivate us to devise an effective and efficient method for learning from state-only demonstrations. We develop a self-supervised method for utilizing demonstrated states to guide the exploration of the agent, while avoiding the issues of methods based on adversarial training or the refinement of action value approximation. Our method retrieves from demonstrations the nearest neighbours of the query state and bootstraps the learned actor network to provide demonstration guidance which shows to improve the agent performance. The recipe of learning from observations enables utilizing broad expert demonstration resources that only provide state information. 
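As a point of reference for the reward-engineering line of work mentioned above, the following numpy sketch adds a bonus that decays with the distance between the agent's state transition and its nearest demonstrated transition; the exponential form, the scale parameter, and the array layout are assumptions made purely for illustration, not a reconstruction of any cited method.

import numpy as np

def shaped_reward(env_reward, s_t, s_next, demo_s, demo_s_next, scale=1.0):
    # demo_s, demo_s_next: (N, state_dim) arrays of demonstrated state transitions.
    demo_pairs = np.concatenate([demo_s, demo_s_next], axis=-1)
    query = np.concatenate([s_t, s_next], axis=-1)
    d_min = np.min(np.linalg.norm(demo_pairs - query, axis=-1))
    # The bonus shrinks as the agent's transition drifts away from the demonstrations.
    return env_reward + scale * np.exp(-d_min)

Note that such shaping only reweights the value estimation and gives the actor no direct target, which is the limitation the self-supervised approach below is designed to address.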
§ METHODOLOGY We develop a framework, as summarized in Fig. <ref>, to achieve surgical task automation, which is established based on reinforcement learning (RL) algorithms where an agent learns manipulation skills by interacting with a simulated environment, i.e. take actions, get feedback, and improve over time. The learning process is enhanced with expert demonstrations where the exact actions taken by an unknown expert policy are unavailable and only states are collected. Our framework uses a self-supervised method to guide agent training without needing detailed action information in demonstrations which is typically necessary in traditional imitation learning (IL) approaches. In the following, we formulate the problem of reinforcement learning with expert demonstrations consisting of pure states in Sect. <ref> and introduce the preliminaries of actor-critic RL in Sect. <ref>. The proposed self-supervised imitation learning method, termed SSIL, is described in Sect <ref> which is followed by the introduction of the actor-critic SSIL training framework, dubbed AC-SSIL, in Sect <ref>. §.§ Problem Formulation: Learning with State-Only Demonstrations We address the policy learning in an online interactive environment which is formulated by a Markov decision process. The agent takes an action a_t at the t-th time step based on the currently observed state s_t and its policy π. The environment responds to the executed action by returning a reward, termed r_t, and transiting to the successive state s_t+1 which elicits action a_t+1 for the next time step. The action-making process and state transitions are stored as experimental experience into a replay buffer <cit.>, dubbed D_π, in the form of tuples {(s_t,a_t,r_t,s_t+1,a_t+1)}_t. Meanwhile, experiences generated by an unknown expert policy are maintained in a expert demonstration buffer D_E={(s_t'^E,s_t'+1^E)}_t' with state-only recordings where action annotations and reward labels are inaccessible, as demonstrated in Fig. <ref>. We utilize the replay buffer for policy optimization via reinforcement learning (RL) algorithms and propose to guide the learning process using state-only demonstrations. Since the expert buffer has no action labels, conventional RL and imitation learning approaches are inapplicable, which motivates us to develop the self-supervised imitation learning method for harnessing the demonstration knowledge to steer and enhance agent exploration in state-only scenarios. §.§ Deep Reinforcement Learning Fundamentals Within the spectrum of AI technologies, reinforcement learning algorithms are developed to make a decision on action selection when taking as input an observable or partially observable state. They are optimized to take a series of actions which are intricately coordinated to reach a desired state or maximize the cumulative reward. RL agents are trained on a dataset of tuples {(s_t,a_t,r_t,s_t+1,a_t+1)}_t which are recorded online or collected from the previous trials, where s denotes the observable state, a refers to the action, r is the reward, and t denotes the time step. In our experiments, the state and action spaces are continuous and respectively composed of the object and robot states and Cartesian-space control. In a deep RL framework, an actor network is responsible for proposing actions by learning a policy that maps states to actions, and a critic network evaluates the quality of actions selected by the actor. 
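For concreteness, a minimal PyTorch sketch of such an actor and critic pair is given below; the four-layer, 256-unit MLP shape mirrors the architecture reported later in the experiments, while the state and action dimensions are placeholders rather than values from the paper.

import torch.nn as nn

def mlp(in_dim, out_dim, hidden=256, depth=4):
    layers, d = [], in_dim
    for _ in range(depth - 1):
        layers += [nn.Linear(d, hidden), nn.ReLU()]
        d = hidden
    layers.append(nn.Linear(d, out_dim))
    return nn.Sequential(*layers)

state_dim, action_dim = 19, 5              # placeholder sizes
actor = mlp(state_dim, action_dim)         # pi(s) -> a, Cartesian-space control
critic = mlp(state_dim + action_dim, 1)    # Q(s, a) -> scalar value estimate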
Their combination can enhance the learning efficiency by leveraging both policy gradients and value function estimates <cit.>. The goal of training the actor network is to maximize the cumulative reward defined as follows: max_θ𝔼_π_θ[∑_t=0^∞γ^t r_t], where π_θ is the actor network and θ denotes its parameters, γ is the discount factor, e.g. γ=0.99 as adopted in our experiments, and 𝔼 denotes the expectation. To provide a more efficient sampling and learning paradigm, Q-learning function approximation <cit.> is adopted to estimate the accumulated reward via bootstrapping approaches where the current action value estimate is updated based on other estimates via temporal difference (TD). Neural network models are normally adopted for value estimation due to their excellent learning and generalization capacities, which forms deep Q-network algorithms <cit.>. This practice allows RL models to learn from partial sequences of data rather than complete episodes and leads to faster convergence in many scenarios <cit.>. The action value is estimated via a critic network Q_ϕ^π which is updated by: Q_ϕ^π = min_Q^π𝔼_(s_t,a_t)[(Q̂(s_t, a_t) - Q^π(s_t, a_t))^2], where ϕ denotes the parameters of the critic, tuples (s_t,a_t,r_t,s_t+1) are generated by taking a series of actions following the policy of actor π_θ and stored in a buffer, and Q̂ is the target Q-value and computed via bootstrapping: Q̂(s_t, a_t) ≜ r_t + γ𝔼_a'∼π_θ^∗[Q_ϕ^∗^π(s_t+1, a')], where θ^∗ and ϕ^∗ are the parameters of the target actor and critic networks which are utilized to stabilize the learning process and alleviate the critical issue of overestimation, due to function approximation error which occurs when the estimated Q-values for taking a specific action are higher than their true values and can result in sub-optimal policies and slower learning <cit.>. The target network parameters are updated via the exponential moving average: θ^∗ ←τθ + (1-τ)θ^∗, ϕ^∗ ←τϕ + (1-τ)ϕ^∗, where τ is the target update rate and normally set to be 0.005. The objective of the actor is to optimize the policy of action-making to maximize the values estimated by the critic: π_θ = max_π𝔼_s_t[Q_ϕ^π(s_t, π(s_t))]. §.§ Self-Supervised Imitation Learning (SSIL) As our expert demonstrations only contain state information, we need a mechanism to integrate such knowledge into the training process mapping states to actions. Hence, we introduce the self-supervised imitation learning method, termed SSIL and demonstrated in Fig. <ref> and <ref>. SSIL improves the exploration efficiency and leads to better performance. §.§.§ K-Nearest Neighbour Matching Given a query state from the replay buffer, i.e. s_t∈ D_π, we retrieve from the expert buffer D_E its nearest neighbours via the Euclidean distance metric and use them to elicit actions to guide the policy learning process. We utilize the learned actor network to produce a pseudo action label for exploration guidance, which we expect can compete against methods that rely on action labels from demonstrations. §.§.§ Pseudo Action Labeling To guide the training of RL agents, potentially in the updates of both the actor and critic, the pseudo action label for a given query state s_t, dubbed a^SSIL_t', is calculated as follows: a^SSIL_t'(s_t) = 𝔼_s_t'∈ NN(s_t,D_E)[π_θ^∗(s_t')], where NN(s_t,D_E) denotes the set of the K-nearest neighbours of s_t and π_θ^∗ refers to the target actor network.
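A compact sketch of this pseudo-action labelling step is given below: it retrieves the K nearest demonstrated states of each query state and averages the target actor's outputs on them. Tensor shapes, the batch interface, and the default K=5 (the value used later in the experiments) are assumptions of the sketch rather than prescriptions from the paper.

import torch

def ssil_pseudo_action(query_states, demo_states, target_actor, k=5):
    # query_states: (B, s_dim) from the replay buffer; demo_states: (N, s_dim) from the expert buffer.
    dists = torch.cdist(query_states, demo_states)        # pairwise Euclidean distances, (B, N)
    knn_idx = dists.topk(k, largest=False).indices        # indices of the K nearest neighbours, (B, k)
    neighbours = demo_states[knn_idx]                     # gathered demonstrated states, (B, k, s_dim)
    with torch.no_grad():                                 # bootstrap the *target* actor, not the current one
        acts = target_actor(neighbours.reshape(-1, neighbours.shape[-1]))
    return acts.reshape(knn_idx.shape[0], k, -1).mean(dim=1)   # averaged pseudo label a^SSIL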
[We empirically found that using the current actor for action labeling can incur training instability and result in performance degradation.] The SSIL method produces guidance with state-only expert demonstrations in a self-supervised manner which jointly depends on the similarities to demonstrated states and the bootstrapping of the policy agent. The pseudo actions are then leveraged to regularize the agent behaviour, as introduced in the next section. §.§ Actor-Critic SSIL Training Framework (AC-SSIL) We use the pseudo action labels produced by SSIL to regularize the learning process of RL models, aiming to incorporate knowledge from expert data into RL exploration. The actor-critic SSIL training method, dubbed AC-SSIL and illustrated in Fig. <ref>, is introduced here. Firstly, it adopts the RL update to retrieve optimal policies by learning from interactions with the environment. Secondly, it utilizes the SSIL method to incorporate expert demonstrations into the RL paradigm to enhance model learning. The training steps are implemented based on a deep deterministic policy gradient framework <cit.> with the replay buffer D_π and the expert buffer D_E for network update. The objective functions for the actor and critic networks are presented in the following. §.§.§ Behaviour Regularized Actor Objective The training of the actor network is implemented by maximizing the action value estimated by the critic network and minimizing the distance between the demonstrated action elicited via the SSIL method and the policy action, yielding: π_θ = max_π𝔼_s_t[Q_ϕ^π(s_t, π(s_t)) - α d(π(s_t), a^SSIL_t'(s_t))], where a^SSIL_t' is the pseudo action label of the query state s_t and computed via (<ref>), d(.,.) refers to the Euclidean distance that measures the similarity between actions, and α is the weight which balances the strengths of the RL and SSIL terms. The actor objective leverages both the action value estimation and imitation learning from state-only demonstrations in aid of exploring optimal solutions. §.§.§ Behaviour Regularized Critic Objective To overcome the overestimation problem of action value approximation which can be caused by error accumulation when the policy actions are out-of-distribution for the critic <cit.>, the SSIL term is added to the target Q-value to advise the critic of guidance information from expert demonstrations. The objective function for the critic is therefore given by: Q̂(s_t, a_t) ≜ r_t + γ𝔼_a'∼π_θ^∗[Q_ϕ^∗^π (s_t+1, a') - α d(a', a^SSIL_t'+1(s_t+1))], Q_ϕ^π = min_Q^π𝔼_(s_t,a_t)[(Q̂(s_t, a_t) - Q^π(s_t, a_t))^2], which mitigates the over-estimation problem of the ordinary critic that is inimical to agent exploration, by reducing the values of undesired actions through the distance regularization and thus propagating guidance from demonstrated states. § EXPERIMENTS §.§ Experiment Configurations We conduct experiments in an open-source simulation platform, dubbed SurRol <cit.>, which is designed for surgical robot learning and provides manipulation tasks with varying degrees of complexity to facilitate relevant research. The transferability of the simulated environment was validated in <cit.>, showing that the agents trained in SurRol can be transferred to accomplish tasks in a real-world da Vinci Research Kit (dVRK) platform. We evaluate the model performance on four manipulation tasks as illustrated in Fig. <ref>.
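Returning to the two behaviour-regularized objectives defined above, the following PyTorch-style sketch shows how they could be combined into a single AC-SSIL update step; the pseudo_action callable stands for the SSIL labelling of the previous subsection, the optimizer handling and batch interface are schematic assumptions, and the defaults α=5, γ=0.99, τ=0.005 follow the values quoted in the text.

import torch
import torch.nn.functional as F

def ac_ssil_update(batch, actor, critic, target_actor, target_critic,
                   actor_opt, critic_opt, pseudo_action,
                   alpha=5.0, gamma=0.99, tau=0.005):
    s, a, r, s_next = batch["s"], batch["a"], batch["r"], batch["s_next"]

    # Critic: TD target penalized by the distance to the SSIL pseudo action at s_{t+1}.
    with torch.no_grad():
        a_next = target_actor(s_next)
        penalty = alpha * torch.norm(a_next - pseudo_action(s_next), dim=-1, keepdim=True)
        q_target = r + gamma * (target_critic(torch.cat([s_next, a_next], dim=-1)) - penalty)
    critic_loss = F.mse_loss(critic(torch.cat([s, a], dim=-1)), q_target)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Actor: maximize the estimated value while staying close to the pseudo action at s_t.
    a_pi = actor(s)
    actor_loss = (-critic(torch.cat([s, a_pi], dim=-1))
                  + alpha * torch.norm(a_pi - pseudo_action(s), dim=-1, keepdim=True)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

    # Target networks: exponential moving average (Polyak) update.
    for tgt, src in ((target_actor, actor), (target_critic, critic)):
        for p_t, p in zip(tgt.parameters(), src.parameters()):
            p_t.data.mul_(1 - tau).add_(tau * p.data)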
These tasks have been selected because they encompass a broad variety of manipulation skills and exhibit varying levels of complexity, which is beneficial to demonstrate the capacity and dexterity of agent models to undertake surgical tasks. The tasks are: 1) NeedlePick: approach and pick a needle using a robot arm, 2) GauzeRetrieve: retrieve and pick a suture gauze and sequentially place it at the target position, 3) PegTransfer: pick an object from one peg and move it to the target peg, and 4) NeedleRegrasp: hand over the held needle from one robot arm to the other. All tasks are goal-conditioned with sparse reward functions indicating success. We adopt object state including 3D Cartesian positions and 6D pose and robot proprioceptive state including jaw status and end-effector position as state representation, and use Cartesian-space control as action space. In our experiments, the parameterized actor and critic networks are composed of four fully-connected layers of hidden dimension 256 with ReLU activations in between. The training procedure is actuated using an ADAM optimizer <cit.> with β_1=0.9, β_2=0.999, and a learning rate of 10^-3. We empirically set K=5 and α=5 and sample 100 successful episodes as demonstrations via the expert policy provided by the environment SurRol, and assess the manipulation performance of different methods on each task after 150K environment steps as we found that increasing the training time did not remarkably enhance the model capacity. §.§ Comparison Results We want to evaluate the performance of the AC-SSIL method which only relies on states to guide agent learning and compare it with action-based imitation learning methods. Setup We compare with a method, dubbed AMP <cit.>, which was proposed to leverage state-only demonstrations and present comparisons with methods for integrating demonstrated actions into agent training, including DDPGBC <cit.>, AWAC <cit.>, CoL <cit.>, and DEX <cit.>. The methods we compare with are briefly summarized below: * AMP <cit.> that extends GAIL <cit.> to state-only demonstrations and adversarially imitates the expert behaviour using a discriminator to refine the reward; * DDPGBC <cit.> that regularizes the actor objective with a Q-filtered BC loss; * AWAC <cit.> that pre-trains an RL agent with demonstrations offline and fine-tunes it online with a Q-value based conservative constraint on policy learning; * CoL <cit.> that adopts as initialization the agent pre-trained with BC offline and incorporates the BC regression into RL to maintain model performance and prevent the brittle degradation as training continues; * DEX <cit.> that propagates the demonstration guidance to the actor and critic updates during training by utilizing both states and actions from demonstrations. Evaluations are over 10 runs with different random seeds, where each averages 20 trials with environment variations, e.g. the initial and target positions of the needle are varying. Results The comparison results are summarized in Table <ref>, where it can be found that our method achieves competitive or better performance on different surgical tasks and surpasses all comparison methods on the challenging NeedleRegrasp task. It is also shown that the method AMP based on adversarial training leads to inferior performance, for it can suffer from the issues of training instability, inaccurate discrimination of states, and limited generalization <cit.>. The evolution of return over training is present in Fig. 
<ref> (a), which showcases the benefits of our method in enhancing model learning using demonstrations consisting of pure states to facilitate RL exploration. §.§ Analysis on Self-Supervised Imitation Learning We now want to analyze and verify the efficacy of the proposed self-supervised imitation learning approach. Setup We compare with the baseline model, termed Base AC, which implements the actor-critic RL framework without using SSIL and expert demonstrations for training guidance <cit.>, and the model, termed AC-BC, which leverages expert data by combining behaviour cloning <cit.> with RL and inevitably requires action labels. Additionally, we also compare with a state-based IL method, termed AC-STD, which regularizes the action value estimation via the distance between the state transition experienced by the agent and the nearest neighbour in demonstrations <cit.>. Results The comparison results are present in Fig. <ref>, where it is evident that the proposed SSIL significantly improves the model performance on all tasks compared to the baseline model, e.g. the success rate on PegTransfer task rises from 0.81 to 0.94, and incurs more prominent improvements over behaviour cloning which can exhibit poor generalization capability on complex tasks, e.g. the success rate is enhanced from 0.33 to 0.83 on NeedleRegrasp task. Our method can consistently outperform the AC-STD that fails to directly steer the actor behaviour, which demonstrates the capacity of SSIL to effectively guide the agent training only using state observations. The evolution of return in Fig. <ref> (b) confirms the advantage of SSIL over other candidates. Those findings validate the advances of our method for leveraging state-only demonstrations to improve RL exploration. §.§ Ablation on AC-SSIL Training Method To verify the effectiveness of the proposed AC-SSIL training framework introduced in Sect. <ref>, we compare the Base AC and AC-SSIL models with the variant, termed Actor-SSIL, which only exploits the SSIL to regularize the actor objective. It reveals from the comparison results in Fig. <ref> (a) that the incorporation of SSIL in Actor-SSIL delivers a considerable improvement over Base AC, which verifies the efficacy of SSIL in enhancing agent learning. The performance gap between Actor-SSIL and AC-SSIL indicates that the impediments of demonstration guidance of ordinary critic has an unfavourable impact on model performance, which can be alleviated by regularizing the critic objective with the SSIL term. Those observations validate the effectiveness of the AC-SSIL training method. §.§ Sensitivity Analysis We investigate the influences of the number K of nearest neighbours and the amount of expert demonstrations. The results in Fig. <ref> (b) and (c) show that an intermediate value of K around 5 works well and too large or small values can lead to a slight performance drop. Our method exhibits stable performance over an appropriate range of demonstration amount and more expert demonstrations can consistently enhance model performance. Those findings verify its insensitivity and robustness to those parameters. § CONCLUSION An RL exploration framework is introduced in this work for the automation of surgical tasks, which can potentially assist surgeons in making informed decisions during the surgical operations. 
To enhance agent exploration with demonstrations that consist of pure state observations, and thus make the framework applicable to expert data resources that provide only state information, a novel self-supervised imitation learning method is devised and utilized to guide the training process. The improved performance over the baseline RL model showcases the effectiveness of our method, e.g. the success rates on the PegTransfer and NeedleRegrasp tasks have been considerably enhanced from 0.81/0.02 to 0.94/0.83. The comparison results indicate that the method can compete with approaches that require action labels for behaviour imitation. Appealing research directions include extending our method to offline RL configurations and scaling it to more complex long-horizon tasks such as wound suturing, which can validate its usefulness and generalization to various artificial intelligence-assisted surgical operation scenarios including surgical training, planning and rehearsal.
http://arxiv.org/abs/2409.02319v1
20240903222502
Topological characterization of modified Kane-Mele-Rashba models via local spin Chern marker
[ "Sebastião dos Anjos Sousa Júnior", "Marcus V. de S. Ferraz", "José P. de Lima", "Tarik P. Cysne" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall" ]
[email protected] Department of Physics, University of Houston, Houston, Texas 77004 Departamento de Física, Universidade Federal do Piauí, 64049-550 Teresina, Piauí, Brazil Departamento de Física, Universidade Federal do Piauí, 64049-550 Teresina, Piauí, Brazil [email protected] Instituto de Física, Universidade Federal Fluminense, 24210-346 Niterói RJ, Brazil § ABSTRACT In this work, we use the local spin Chern marker (LSCM) recently introduced by Baù and Marrazzo [Phys. Rev. B 110, 054203 (2024)] to analyze the topology of the ground-state electronic wave functions in a finite honeycomb lattice flake described by three distinct models. The models considered here are characterized by strong Rashba spin-orbit interaction, which leads to non-conservation of the spin operator, i.e., [ℋ,ŝ_z]≠ 0. The three spin-orbit couplings associated with the topological aspects of the models are: 1) Standard Kane-Mele coupling, 2) Sublattice-dependent Kane-Mele coupling, and 3) In-plane (ŝ_y) polarized Kane-Mele coupling. These couplings occur in graphene grown on suitable substrates and are relevant for modeling its van der Waals heterostructures. A particular topological phase diagram characterizes each of these spin-orbit interactions, and our calculations of LSCM successfully capture its general features. We also performed a detailed analysis of the spectral properties of the energy and valence-projected spin matrix eigenvalues, which shows that both exhibit a gap that protects the marker. Our results expand the applicability of the spin Chern number method to a class of Hamiltonians with experimental relevance and may contribute to future research on the real-space topology of realistic materials. Topological characterization of modified Kane-Mele-Rashba models via local spin Chern marker Tarik P. Cysne September 9, 2024 – Version 1.0 ============================================================================================ § INTRODUCTION Insulating materials in a non-trivial topological phase are characterized by gapped bulk energy spectra, which are indexed by a topological invariant. This invariant remains unchanged under smooth modifications in the electronic Hamiltonian and is robust against weak disorder and electronic interactions <cit.>. Much of our knowledge about the topological nature of matter has emerged from studying systems in the thermodynamic limit with translational symmetry, i.e., in the band theory paradigm. In these systems, the crystalline Hamiltonian can be mapped into a Bloch Hamiltonian ℋ( k) defined in k-space within the Brillouin Zone (BZ), which has the topology of a torus. From this perspective, topological invariants are regarded as global properties of Bloch states and are expressed through integrals of geometric quantities in k-space. For instance, Chern insulators are characterized by a nonzero integer invariant known as the Chern number C, which can be computed by integrating the Berry curvature over the first BZ <cit.>. The occurrence of a Chern insulator phase requires the breaking of time-reversal symmetry and is associated with gapless boundary states that support Hall conductance. On the other hand, quantum spin Hall insulators (QSHI) are characterized by a ℤ_2-index whose definition requires the presence of time-reversal symmetry <cit.>. These topological phases are distinguished by boundary modes that enable spin-Hall conductance. In Ref. <cit.>, Prodan introduced the spin Chern number as an alternative topological invariant to characterize the QSHI phase. 
Unlike the ℤ_2-invariant, the definition of the spin Chern number does not explicitly rely on the presence of time-reversal symmetry, but it does require a gap in the eigenvalue spectrum of the valence-projected spin matrix. Although such a gap is not a universal feature of spin-orbit coupled Hamiltonians, the spin Chern number is a genuine topological index. In time-reversal symmetric systems, it is widely accepted that it essentially contains the information of the ℤ_2-invariant <cit.>. This topological invariant was introduced within the framework of band theory (thermodynamic limit) and assumes a quantized value even for Hamiltonians where spin is not a conserved quantity, i.e., [ℋ( k), ŝ_z]≠ 0. Prodan's method has also been applied to define Chern numbers related to other observables, such as the isospin Chern number <cit.> and the orbital Chern number <cit.>, demonstrating the usefulness of this concept. An interesting aspect of this topological index is its direct connection to response functions, such as orbital Hall conductivity <cit.> and isospin Hall conductivity <cit.>. The requirement for the existence of a BZ when defining a topological index constrains its application to pristine and uniform solids. In particular, a BZ cannot be defined in a finite system or in a system with a high degree of disorder. This compromises the topological characterization of systems with significant practical interest <cit.>. To overcome these limitations, the tool of topological local markers was introduced. These local markers are mathematical objects that encode information about the topology of the ground state's electronic wave functions as real-space quantities. Currently, markers to characterize distinct topological phases are available <cit.>. For instance, Bianco and Resta introduced in Ref. <cit.> a formula for the local Chern marker that allows the real-space indexation of Chern insulators. Extensions of this scheme to treat periodic superlattices <cit.> and inclusion of thermal effects <cit.> have been made. Local Chern markers have also been applied to study the robustness of the topological phase against disorder <cit.> and the topological Anderson transition <cit.>. Recently, Baù and Marrazzo <cit.> introduced a real-space quantity called the Local Spin Chern Marker (LSCM), which combines the Bianco-Resta and Prodan methods to capture the topological information associated with the QSHI phase. The authors validate the method by using the standard Kane-Mele-Rashba model and comparing it to the universal ℤ_2-local marker. In this work, we employ LSCM to analyze the topology of generalized Kane-Mele-Rashba models in real space. The models considered here exhibit a strong Rashba effect. In addition to the standard Kane-Mele coupling, we also consider two generalizations that are relevant for modeling graphene van der Waals heterostructures: the sublattice-resolved Kane-Mele and the in-plane Kane-Mele interactions. The sublattice-dependent Kane-Mele coupling occurs in graphene grown on transition metal dichalcogenides (TMDs) in the 2H structural phase <cit.>. On the other hand, the in-plane polarized Kane-Mele coupling occurs in graphene proximate to a low-symmetry substrate with strong spin-orbit interaction <cit.>. This latter type of interaction has also been predicted to occur in a low-symmetry structural phase of WTe_2 <cit.>. We calculate the topological phase diagram of the three models mentioned above using the LSCM in a flake geometry. 
The models studied here present very distinct and rich phase diagrams, and we show how the LSCM successfully captures their characteristics. We also discuss the features of the energy and valence-projected spin spectra, showing that both exhibit a gap that protects the topological characterization of each model according to this index <cit.>. Our results expand the applicability of the spin Chern number to describe the topology of electronic Hamiltonians and may represent progress toward describing Chern markers related to other observables <cit.> and to applications in complex materials. § LOCAL SPIN CHERN MARKER For Hamiltonians that satisfy [ℋ,ŝ_z ]=0, the z-component of electronic spin is a good quantum number. In this situation, the Hilbert space of the Hamiltonian can be separated into two decoupled sectors, each associated with an eigenvalue s=↑ and ↓ of the spin operator ŝ_z. One can assign Chern numbers C_↑ and C_↓ to each subspace, and the quantity 1/2(C_↑-C_↓) is an integer quantity, and its non-zero value indicates the QSHI phase. For systems in the thermodynamic limit and with translational symmetry, one can map ℋ into a Bloch Hamiltonian defined in the BZ ℋ( k)=e^i k· rℋe^-i k· r and express the Chern number associated to each sector of Hilbert space as a k-integral of the spin Berry curvature: C_↑,↓=(2π)^-2∫_ BZd^2 kΩ_↑,↓( k). In models where [ℋ ( k),ŝ_z ]≠ 0, the Hilbert space cannot be decoupled into two independent sectors, and the quantity 1/2( C_↑-C_↓) is no longer an integer. Then, a more elaborate procedure must be used to define an integral Chern number that represents information about the topology of the QSHI phase. The spin Chern number, first introduced in Ref. <cit.> and later mathematically formalized in thermodynamic limit by Prodan in Ref. <cit.>, allows for the definition of a quantized invariant even in cases where [ℋ( k),ŝ_z ]≠ 0. The method consists of constructing the valence-band projector 𝒫( k) and diagonalizing the ground-state projected spin matrix M^s_z_v.b.( k)=𝒫( k)ŝ_z𝒫( k). For time-reversal symmetric Hamiltonians the eigenvalues ξ^s_z( k) of the matrix M^s_z_v.b.( k) are symmetrically distributed around zero, within the interval ξ^s_z( k)∈ [-1,1]. Given that σ=sign(ξ^s_z( k)) is the signal of ξ^s_z( k) [σ=+ for positive ξ^s_z( k) and σ=- for negative ξ^s_z( k)] one defines the projector 𝒫_σ( k)=∑_ξ( k)| σ|ϕ_ξ, k⟩⟨ϕ_ξ, k| where |ϕ_ξ, k⟩ are the eigenvectors of M^s_z_v.b.( k) with eigenvalue ξ^s_z( k). Using 𝒫_σ( k), one can assign an integer Chern number C_σ to each projected subspace, provided that there is a spectral gap Δ^s_z in the set of eigenvalues ξ^s_z( k). The spin Chern number is defined by C_s=1/2( C_+-C_-). A non-zero spin Chern number indicates the existence of a QSHI phase and, in time-reversal symmetric systems, is equivalent to the universal ℤ_2-invariant <cit.>. This invariant is topologically protected by the energy spectrum gap Δ^ E and the gap Δ^s_z. This method has been successfully applied to study the topological properties of uniform insulating materials and other systems from their band structures <cit.>. As discussed in the introduction, it is not possible to construct a BZ in finite systems. Instead, real-space markers must be used to quantify the topological properties of the electronic wave functions. Recently, the Ref. <cit.> introduced a combination of Prodan and Bianco-Resta methods to define the LSCM, ℭ_s( r)=ℭ_+( r)-ℭ_-( r)/2, where, ℭ_σ( r)=4πIm⟨ r|𝒫_σX̂𝒬_σŶ𝒫_σ - 𝒫_σŶ𝒬_σX̂𝒫_σ| r⟩. In Eq. 
(<ref>), X̂ and Ŷ are the Cartesian component of the position operator r. Analogous to the procedure described in the previous paragraph for k-space, one defines the projector into the valence subspace (occupied states) by 𝒫=∑^N_occ_n=1|u_n⟩⟨u_n|, where |u_n⟩ is the eigenstate of real-space Hamiltonian ℋ with state index n. Then, we construct the spin matrix projected onto the valence subspace M^s_z_v.s.=𝒫ŝ_z𝒫 and perform its diagonalization M^s_z_v.s.|ϕ_v⟩=ξ_v^s_z|ϕ_v⟩, where (v=1,..., N_occ). For time-reversal symmetric models, the N_occ eigenvalues ξ_v^s_z are symmetrically distributed around zero within the interval [-1,1]. The branches of this spectrum can be separated by the sign of their eigenvalues [σ=+ for positive ξ_v^s_z and σ=- for negative ξ_v^s_z] if there is a finite gap Δ^s_z. The vector |ϕ_v⟩ has dimension N_occ and can be written as |ϕ_v⟩ = ∑^N_occ_α=1β_v,α|α⟩, where |α⟩ is the basis of N_occ-dimensional matrix M^s_z_ v.s.. We then define the states, |ψ_v⟩ =∑^N_occ_α=1β_v,α|u_α⟩. in terms of the valence eigenstates |u_α⟩ of the flake Hamiltonian ℋ. With these states, the projectors used in Eq. (<ref>) are given by 𝒫_σ=∑_ξ^s_z_v|σ|ψ_v⟩⟨ψ_v|. Finally, the complementary matrices in Eq. (<ref>) are defined as 𝒬_σ = 1- 𝒫_σ. Here, we adopt an equivalent, but slightly different, notation than Baù and Marrazzo. In Ref. <cit.> authors also demonstrate that the LSCM defined in Eqs. (<ref>-<ref>) is compatible with the ℤ_2 marker in the standard Rashba-Kane-Mele model with sublattice potential. § STANDARD KANE-MELE-RASHBA HAMILTONIAN In this section, we review the application of the LSCM to the standard Kane-Mele-Rashba model in a finite flake <cit.>, complementing this with an analysis of the energy and valence-projected spin spectra. The standard Kane-Mele-Rashba Hamiltonian can be cast as, ℋ_ 𝓈𝓉𝒹=ℋ_t+ℋ_ AB+ℋ_ R+ℋ_ KM. We follow Ref. <cit.>, considering this Hamiltonian within a flake of linear size L subjected to periodic boundary conditions. ℋ_t represents the nearest neighbor hopping term of electrons in the honeycomb structure of Fig. <ref>, as given by ℋ_t=t∑_s=↑,↓∑_⟨ i,j ⟩c^†_i,sc_j,s, Here, c^†_i,s and c_j,s are the fermionic creation and annihilation operators for electrons at sites i and j of the flake and spin s=↑,↓. ⟨ i,j ⟩ indicate that the sum runs over the nearest neighbor sites. In all results presented in this work, we set the hopping amplitude t as the unit of energy [t=1]. ℋ_ AB is the sublattice potential ℋ_ AB=V_ AB∑_s=↑,↓∑_iτ_ic^†_i,sc_i,s, where τ_i=+1 for sites i belonging to sublattice A, and τ_i=-1 for sites i belonging to sublattice B. ℋ_ KM is the standard Kane-Mele spin-orbit coupling <cit.> given by ℋ_ KM=iλ_ KM∑_s,s'∑_⟨⟨ i,j⟩⟩ν_ij c^†_i,s(ŝ_z)_s,s'c_j,s'. Here ν_ij=sign( d_1× d_2)_z=± 1, where d_1 and d_2 are unit vectors along the two bonds that the electron traverses when hopping from the site j to the next nearest neighbor i [indicated Eq.(<ref>) by ⟨⟨ i,j⟩⟩]. Finally the Rashba spin-orbit coupling ℋ_ R can be cast as, ℋ_ R=iλ_ R∑_s,s'∑_⟨ i,j⟩c^†_i,s[( ŝ×𝐝_ij)_z ]_s,s'c_j,s' , where 𝐝_ij is the unity vector along the direction connecting site i to site j. ŝ=ŝ_x x+ŝ_y y+ŝ_z z is related to physical spin of electrons (up to a factor ħ/2) and ŝ_x,y,z are Pauli matrices. The Rashba coupling breaks (z→ -z)-symmetry <cit.> and is responsible for the non-conservation of spin in the full Hamiltonian [ℋ_ R, ŝ_z]≠ 0. We show in Fig. <ref> (a) the LSCM calculated along the line indicated by the black arrow in Fig. <ref>. 
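To make the construction above concrete, the following numpy sketch evaluates the marker for a finite flake, assuming the flake Hamiltonian H, the spin matrix sz, and the diagonal position operators X and Y have already been assembled in the same site-and-spin basis; the half-filling choice n_occ and all variable names are illustrative conventions of the sketch, not prescriptions from the paper.

import numpy as np

def local_spin_chern_marker(H, sz, X, Y, n_occ):
    # Valence (occupied) subspace of the flake Hamiltonian.
    _, evecs = np.linalg.eigh(H)
    U = evecs[:, :n_occ]
    # Valence-projected spin matrix and its eigenpairs xi_v, |phi_v>; a gap at xi = 0 is required.
    xi, phi = np.linalg.eigh(U.conj().T @ sz @ U)
    psi = U @ phi                                 # the states |psi_v> defined above
    markers = {}
    for sign in (+1, -1):
        P = psi[:, sign * xi > 0] @ psi[:, sign * xi > 0].conj().T
        Q = np.eye(H.shape[0]) - P
        markers[sign] = 4 * np.pi * np.imag(np.diag(P @ X @ Q @ Y @ P - P @ Y @ Q @ X @ P))
    # Marker per basis state; spin components belonging to the same site can be summed if needed.
    return 0.5 * (markers[+1] - markers[-1])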
The blue squares are the LSCM for V_ AB and λ_ R adjusted to match a QSHI phase, while the red squares are the results for parameters in the trivial phase. For the system in the QSHI phase, as shown by the blue curve in Fig. <ref> (a), the LSCM assumes a quantized value +1, except for anomalies near the edges, which also occur in usual Chern markers <cit.>. We emphasize that this quantization of the LSCM [Eq. (<ref>)] occurs in the presence of a finite Rashba coupling, which is set to λ_ R=√(3)λ_ KM in Fig. <ref>. If one simply calculates 1/2(ℭ_↑-ℭ_↓), a non-quantized value is obtained, as indicated by the yellow squares in Fig. <ref> (a). We present in Fig. <ref> (b) and (d) the energy (E_n) and valence-state projected spin spectra (ξ^s_z_v) for the flake in the QSHI phase. It is worth noting that both spectra exhibit a gap, which is responsible for the topological protection of the LSCM. The red squares in Fig. <ref> (a) represent the results for the marker in the topologically trivial phase, which is zero, as expected. Panels (c) and (e) of Fig. <ref> show the corresponding energy and valence-projected spin spectra, which also display a gap, thereby enabling the definition of the marker. We follow Refs. <cit.> and use the central site of the flake in Fig. <ref> [r= 0 in Eq. (<ref>)] to calculate the topological phase diagram of LSCM. We show the results for ℭ_s( 0) in the λ_ R-V_ AB parameter space in Fig. <ref>. Setting aside the finite size effects, which are more pronounced near the topological phase transition, the phase diagram shown in Fig. <ref> aligns with the one obtained using the k-space formula for the ℤ_2-invariant <cit.>. § SUB-LATTICE-DEPENDENT KANE-MELE COUPLING We now examine the first generalization of the standard Kane-Mele-Rashba model that we consider in this paper. When a material with strong spin-orbit coupling breaks the C_3v symmetry of the honeycomb lattice, a sublattice-dependent Kane-Mele spin-orbit coupling emerges <cit.>, ℋ_ KM(ab)=i∑_s,s'∑_⟨⟨ i,j⟩⟩λ^ij_ KMν_ij c^†_i,s(ŝ_z)_s,s'c_j,s'. where λ^ij_ KM=λ^ A_ KM for ⟨⟨ i,j⟩⟩∈ sublatice A, and λ^ij_ KM=λ^ B_ KM for ⟨⟨ i,j⟩⟩∈ sublatice B, with λ^ A_ KM≠λ^ B_ KM. This type of coupling occurs in graphene grown on TMDs in 2H structural phase, leading to intriguing spin-transport and topological features <cit.>. The complete Hamiltonian considered in this section is given by ℋ_ 𝒢ℯ𝓃 1=ℋ_t+ℋ_ AB+ℋ_ R+ℋ_ KM(ab). where, ℋ_t, ℋ_ AB and ℋ_ R are the terms defined in Eqs. (<ref>, <ref>) and (<ref>). The topological phase diagram of the model in Eq. <ref> was calculated in Ref. <cit.> using the k-space formula for the ℤ_2 invariant. Here, we perform analogous calculations to those detailed in the previous section to index the topology of this model in real space using the LSCM defined in Eqs. (<ref>, <ref>). We also study their energetic and spin spectral properties. Figure <ref> (a) shows the topological phase diagram of the Hamiltonian in Eq. (<ref>), based on calculations of the LSCM at the central site of the flake ℭ_s( 0). In this figure, we set fixed values for V_ AB=0.1 t and λ_ R=0.05 t and calculated the markers for different values in λ^ A_ KM-λ^ B_ KM space. The LSCM-diagram in Fig. <ref> (a) matches the ℤ_2-diagram in Ref. <cit.>. In the two quadrants where sign(λ^ A_ KM)=sign(λ^ B_ KM), a QSHI phase with a quantized LSCM is obtained. Notably, the LSCM in each topological region have opposite signals. 
The signal of the spin Chern number does not provide additional information about the topology of the ground state electronic wave function; it is simply a matter of definition in the projection procedure <cit.> described in Sec. <ref>. The two regions correspond to the same topological order, and the connection of LSCM with the local ℤ_2 marker is given by ν ( 0)=ℭ_s( 0) mod 2 <cit.>. The rest of the diagram is characterized by two insulating phases with zero LSCM, in agreement with Ref. <cit.>. These phases exhibit interesting properties due to the presence of what is known as valley Zeeman coupling, including pseudo-helical states <cit.> and unconventional spin-transport phenomena <cit.>. The distinct regions of the phase diagram can be reached by fabricating heterostructures of graphene combined with different members of the 2H-TMD family <cit.>. In panels (b) and (c) of Fig. <ref>, we present the energy (Δ^ E) and valence-projected spin-matrix (Δ^s_z) gaps throughout the parameter space. Except in regions close to the phase transition lines, the two gaps are finite, which ensures the topological protection of LSCM in each phase. § IN-PLANE KANE-MELE COUPLING Now we consider a second generalization of the Kane-Mele coupling that may occur when a honeycomb lattice interacts with a low-symmetry environment. This generalization consists of a sub-lattice-dependent in-plane polarized (ŝ_y) Kane-Mele coupling ℋ_ y(ab)=i∑_s,s'∑_⟨⟨ i,j⟩⟩λ^ij_ yν_ij c^†_i,s(ŝ_y)_s,s'c_j,s', and λ^ij_ y=λ^ A_ y for ⟨⟨ i,j⟩⟩∈ sublatice A, and λ^ij_ y=λ^ B_ y for ⟨⟨ i,j⟩⟩∈ sublatice B, with λ^ A_ y≠λ^ B_ y. This effect has been predicted to occur when graphene is grown on top of materials with a low-symmetry crystal field and strong spin-orbit interaction <cit.>. A similar type of in-plane polarized coupling has also been predicted in the monolayer of WTe_2 in the low-symmetric 1T_d structural phase <cit.>. The Rashba interaction tends to destroy the QSHI state in the case of ŝ_z-polarized Kane-Mele couplings by closing its energy gap, making it a frequently cited obstacle to observing this phase in experiments <cit.>. On the other hand, the energy gap created by ŝ_y-Kane-Mele coupling is robust against the presence of Rashba interaction <cit.>, making the study of its properties appealing for the practical realization of topological phases. In this section, we focus on the study of the robustness of this topological phase against the Rashba effect, examining the Hamiltonian of the form: ℋ_ 𝒢ℯ𝓃 2=ℋ_t+ℋ_ R+ℋ_ y(ab), where, ℋ_t and ℋ_ R are given by Eqs. (<ref>, <ref>). The QSHI phase produced by the in-plane Kane-Mele coupling has a spin-Hall current polarized in the y-direction. Therefore, the projection procedure described in Sec. <ref> must be performed using the ŝ_y operator instead of ŝ_z. This means that, for the calculations of the LSCM for the model in Eq. (<ref>), we obtain the states of Eq. (<ref>) by diagonalizing the matrix M^s_y_ v.s.=𝒫ŝ_y𝒫: M^s_y_v.s.|ϕ_v⟩=ξ_v^s_y|ϕ_v⟩, where (v=1,..., N_occ). Using these states, we construct the projectors 𝒫_σ in Eq. (<ref>) and apply them to the calculations of the markers in Eqs. (<ref>) and (<ref>). The remaining discussion follows closely in analogy to the previous case. The LSCM is protected by the gap in the energy spectrum and in the valence-projected ŝ_y-spectrum. In Fig. <ref> (a), we present the LSCM calculated for the model of Eq. (<ref>), along the line indicated by the black arrow in Fig. <ref>. In the results shown in Fig. 
<ref>, we use λ^ A_y = 0.19 t, λ^ B_y = 0.21 t, and two different values for λ_ R. For λ_ R = 3√(3)λ̅_ KM/2 [λ̅_ KM=0.2 t], the LSCM shows the expected profile indicative of a QSHI, which means it is quantized in the central region, with anomalies near the edges of the system due to finite-size effects. The quantization of LSCM in the central region for this case is protected by large gaps in both the energy and ξ^s_y spectra [shaded rectangles in Fig. <ref> (b) and (d)]. For λ_ R = 5√(3)λ̅_ KM/2, the finite-size effects are magnified due to the small gap Δ^s_y in the ξ^s_y spectra reported in Fig. <ref> (e). Nevertheless, the gap remains finite, and the central region with a quantized LSCM increases as the size of the flake grows. It is worth mentioning that we used λ^ A_y and λ^ B_y slightly differently because, for this model, the gap in the ξ^s_y spectra closes when λ^ A_y = λ^ B_y. This is a limitation of the method, as there is no symmetry imposition that guarantees the existence of a finite gap in ξ^s_y spectra. As mentioned above, an interesting feature of the spin-orbit coupling in Eq. (<ref>) is that it leads to the formation of a QSHI phase, which persists even at high Rashba coupling intensities. Figure <ref> (a, b) shows the LSCM at the central site ℭ_s( 0) and the corresponding gaps Δ^ E and Δ^s_y for the model in Eq. (<ref>), a function of λ_ R/λ̅_ KM [λ̅_ KM=0.2 t] and different flake sizes L. By inspecting the curves for L=24 in Fig. <ref> (b), one can observe two distinct behaviors in the gaps Δ^ E and Δ^s_y. As λ_ R increases from 0 to a value close to 2√(3)λ̅_ KM, the gap Δ^ E decreases from a finite value to zero, and the gap Δ^s_y decreases from 2 to ≈ 1. As the λ_ R continues to increase, the gap Δ^ E reappears and then oscillates around a finite value, while the gap Δ^s_y jumps to a small finite value and remains nearly constant even under strong λ_ R. The finite gaps Δ^ E and Δ^s_y in the two regimes are reflected in a quantized LSCM at the central site across a wide range of Rashba coupling intensities. Due to the non-zero values of Δ^ E and Δ^s_y, the LSCM is a well-defined topological invariant for all values of Rashba coupling, except at the specific value [λ_ R≈ 2√(3)λ̅_ KM] where Δ^ E vanishes. The small values of the gaps in the region where λ_ R≳ 2√(3)λ̅_ KM make the LSCM susceptible to finite-size effects. As illustrated by Fig. <ref> (a), quantization of LSCM is achieved only for larger flakes. However, the construction of projectors of Eq. (<ref>) remains formally well-defined <cit.> despite the small values of Δ^ E and Δ^s_y. To contrast with the physics discussed above, we show analogous results for the model in Eq. (<ref>) with V_ AB=0 in Fig. <ref> (c, d). The difference in behavior for λ_ R≳ 2√(3)λ̅_ KM, compared to the case discussed in the previous paragraph, is evident. The LSCM of the central site deviates from its quantized value and oscillates around zero, and both gaps Δ^ E and Δ^s_z vanish as the size of the flake increases. The distinct behaviors of the energy gaps shown in Fig. <ref> are already present in the band structure of the models in the thermodynamic limit <cit.> and are reflected here in the spectra for the flake geometry and the LSCM. § FINAL REMARKS AND CONCLUSIONS In summary, we have used the local spin Chern marker to study the real-space topology of electrons described by three distinct Hamiltonians in a honeycomb lattice with finite flake geometry. 
The Hamiltonians considered here find applications in the field of graphene van der Waals heterostructures and are characterized by strong Rashba spin-orbit interaction, which leads to the non-conservation of the spin operator. We show that the local spin Chern marker successfully describes the topological phase diagram for the three spin-orbit couplings responsible for the manifestation of quantum spin Hall insulator physics considered in this work. The phase diagrams obtained are consistent with those calculated using the k-space formula for the ℤ_2-invariant. Additionally, we calculate the energy spectrum and the valence-projected spin spectrum, demonstrating that in most of the parameter space, both exhibit gaps that protect the marker against smooth variations in the model Hamiltonian. Among the results presented in this work, we highlight those shown in Figs. <ref> and <ref>. As mentioned in the text, the robustness of the gaps Δ^ E and Δ^s_y produced by in-plane Kane-Mele coupling [Eq. (<ref>)] against the presence of the Rashba effect strengthens the possibility of observing the quantum spin Hall insulator phase in experiments. In this situation, the quantum spin-Hall insulator phase is characterized by an in-plane polarized spin current produced by this unconventional type of spin-orbit coupling. Our work broadens the scope of spin Chern numbers <cit.> as an index that accurately captures the topology associated with the quantum spin Hall insulator phase. This is achieved using the real-space version of the index, demonstrating the flexibility of this concept. The results presented here may be useful for future studies on the real-space characterization of the topology of complex materials. § ACKNOWLEDGMENTS The authors are grateful to the Brazilian Agencies Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq), Coordenação de Aperfeiçoamento de Pessoal de Ensino Superior (CAPES).
http://arxiv.org/abs/2409.03298v1
20240905070741
On the construction of ultra-light MDS matrices
[ "Yu Tian", "Xiutao Feng", "Guangrong Li" ]
cs.CR
[ "cs.CR" ]
Article Title]On the construction of ultra-light MDS matrices[This research was supported by National Key Research and Development Project under Grant No. 2018YFA0704705 and CAS Project for Young Scientists in Basic Research (Grant No. YSBR-035).] 1]Yu Tian [2]Xiutao [email protected] 1]Guangrong Li [1]College of Artificial Intelligence, Nanning Vocational and Technical University Naning, China *[2]Key Laboratory of Mathematics Mechanization, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing, China In recent years, the Substitution-Permutation Network has emerged as a crucial structure for constructing symmetric key ciphers. Composed primarily of linear matrices and nonlinear S-boxes, it offers a robust foundation for cryptographic security. Among the various metrics used to assess the cryptographic properties of linear matrices, the branch number stands out as a particularly important index. Matrices with an optimal branch number are referred to as MDS matrices and are highly prized in the field of cryptography. In this paper we delve into the construction of lightweight MDS matrices. We commence implementation trees of MDS matrices, which is a vital tool for understanding and manipulating their implementations, and then present an algorithm that efficiently enumerates all the lightest MDS matrices based on the word representation. As results, we obtain a series of ultra-lightweight 4× 4 MDS matrices, remarkably, 4-bit input MDS matrices with 35 XOR operations and 8-bit input ones with 67 XOR operations . These matrices represent the most comprehensive lightweight MDS matrices available to date. Furthermore, we craft some involution 4× 4 MDS matrices with a mere 68 XOR gates.To our best knowledge, they are the best up to date. In the realm of higher-order MDS matrices, we have successfully constructed 5× 5 and 6× 6 matrices with 114 and 148 XOR gates respectively. These findings outperform the current state-of-the-art. [ [ September 5, 2024 ===================== § INTRODUCTION The Substitution-Permutation Network (SPN, for brevity) has emerged as a pivotal component in the construction of symmetric ciphers, particularly in recent years. Due to the prevalence of the block cipher AES <cit.>, this structure is extensively studied and typically comprises three primary elements: a key schedule, a small-sized (usually 4- or 8-bits) nonlinear function known as the S-box, and a larger (often 32-, 64- or 128-bits) linear function. The S-box serves to mix the bits within a 4- or 8-bit word, while the linear function mixes words together. According to the wide trail design strategy <cit.>, the resilience of SPN ciphers against classical attacks, particularly differential and linear attacks, can be assessed by examining their constituent components. Specifically, for a nonlinear S-box, a high nonlinearity and low differential uniformity are desirable; S-boxes achieving optimal differential uniformity are termed Almost Perfect Nonlinear (APN). As for the linear function, a high branch number is expected to establish strong diffusion between input and output words; matrices corresponding to linear functions with an optimal branch number are known as MDS matrices (related to Maximum Distance Separable codes). Besides their use in SPN ciphers like AES, MDS matrices also find application in Feistel ciphers (e.g., Camellia <cit.>, SMS4 <cit.>), hash functions (e.g., Whirlpool <cit.>, Grostl <cit.>), and stream ciphers (e.g., ZUC <cit.>). 
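Since the branch number is the key metric in this discussion, the following Python sketch measures the differential branch number of a small matrix over GF(2^4) by brute force; the reduction polynomial x^4+x+1 and the example matrix are illustrative choices, not taken from this paper, and an output of k+1 (here 5) would certify the matrix as MDS.

from itertools import product

POLY = 0b10011  # x^4 + x + 1, one common reduction polynomial for GF(2^4)

def gf16_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b10000:
            a ^= POLY
        b >>= 1
    return r

def weight(v):
    return sum(1 for e in v if e)

def branch_number(M):
    k = len(M)
    best = 2 * k
    for x in product(range(16), repeat=k):
        if any(x):
            y = [0] * k
            for i, row in enumerate(M):
                for m, v in zip(row, x):
                    y[i] ^= gf16_mul(m, v)
            best = min(best, weight(x) + weight(y))
    return best  # equals k + 1 exactly when M is MDS

M = [[1, 1, 2, 1], [1, 2, 1, 1], [2, 1, 1, 1], [1, 1, 1, 2]]  # arbitrary illustrative matrix
print(branch_number(M))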
With the increasing miniaturization of electronic devices handling sensitive data, there is a growing need for novel cryptographic primitives that offer low implementation costs. In contrast to standardized, robust primitives like AES, lighter cryptographic solutions are often preferred in resource-constrained environments. To reduce the cost of the SPN structure, one approach is to optimize its main components: the S-box and the linear function. Numerous studies have explored lightweight alternatives for these components, as evidenced by works on S-boxes <cit.> and linear functions <cit.>. These findings have facilitated the design of various new cipher proposals aimed at balancing cost and security in constrained settings, such as Present <cit.>, KATAN <cit.>, LED <cit.>, LBlock <cit.>, Prince <cit.>, Skinny <cit.>, among others. These examples underscore the advantages of low-cost cryptographic primitives. MDS matrices, with their optimal branch number and excellent diffusion properties, are well-suited for linear diffusion layers and are widely used in cryptographic designs. However, it's important to note that MDS matrices can be relatively dense, resulting in higher hardware implementation costs(larger circuit areas). Therefore, to incorporate MDS matrices as linear diffusion layers in lightweight ciphers, it becomes necessary to optimize existing MDS matrices or construct new lightweight MDS matrices that strike a balance between security and efficiency. §.§ Related works There has been a significant amount of research dedicated to constructing lightweight MDS matrices or near-MDS matrices. One common approach for constructing MDS matrices involves selecting a matrix with a specific structure, such as Hadamard, Cauchy, Vandermonde, Toeplitz, circulant, or other matrices with special properties. Then, specific values are assigned to the variable parameters of the matrix to ensure that all its submatrices are nonsingular, resulting in an MDS matrix. This method reduces the search space by focusing on one or a few specific types of matrices. Additionally, for matrices with special structures, there are often many repeated values in the determinants of their submatrices, which reduces the computational complexity when verifying the nonsingularity of the submatrices. Remarkably, for Cauchy matrices, their structure guarantees that they are MDS matrices regardless of the parameter values. In summary, searching for MDS matrices within matrices with special structures is computationally efficient and often successful. There is a considerable amount of literature on this topic, including works based on Cauchy matrices <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, Vandermonde matrices <cit.>, <cit.>, <cit.>, <cit.>, Hadamard matrices <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, Toeplitz matrices <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, and circulant matrices <cit.>, <cit.>, <cit.>, <cit.>, <cit.>. However, this approach of limiting the search to specific matrix structures has some drawbacks. The implementation cost of the resulting matrices is often difficult to control. Early methods focused on optimizing the implementation of matrix elements, known as local optimization. For example, in a circulant matrix, where the entire matrix is determined by its first row consisting of k elements (which are n× n invertible matrices over F_2), local optimization involves optimizing the implementation of these k matrices while preserving the XOR operations between them. 
This approach cannot avoid the overhead of (k-1)kn XOR operations for the entire matrix. Relevant works include <cit.>, <cit.>, <cit.>. Specifically, in <cit.>, Kölsch provided the specific structure of all matrix elements (treated as matrices over a field of characteristic 2) whose multiplication with a vector can be implemented using only 2 XORs. Local optimization methods have a smaller computational footprint because they only focus on optimizing matrix elements, but their effectiveness is limited as they do not consider the overall matrix structure. To improve optimization efficiency, a natural approach is to treat a k× k matrix with n× n invertible matrices as elements as a single kn× kn matrix over F_2 and optimize the entire matrix, known as global optimization <cit.>. This involves the so-called SLP (Straight Line Program) problem. In <cit.>, Boyar et al. reduced the point set covering problem to a subproblem of the SLP problem (SLP problem with Hamming weight limited to 3 for each row) and proved that the SLP problem is NP-hard. This indicates that finding a universally applicable and efficient algorithm for it is challenging. Currently, heuristic algorithms are commonly used to address this issue, including the Paar algorithm <cit.>, the BP algorithm <cit.> and its variants <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>. Recently, Yang et al. introduced a new search method for involutory MDS matrices and obtained a 4× 4 involutory MDS matrix implementable with only 35 XORs (4-bit input) and 70 XORs (8-bit input) <cit.>. Here it should be pointed out that their 35-XOR result is optimal for 4-bit inputs. Pehlivanoğlu et al. further considered 3-input and higher XOR gates in their variant of the BP algorithm and achieved MDS matrices with optimized circuit area <cit.>. An alternative approach to constructing lightweight MDS matrices is to limit the implementation cost of the matrix, then derive the matrix from its implementation, and finally check if it satisfies the MDS condition, i.e., all submatrices are nonsingular. This method was first introduced by Duval and Leurent in <cit.> and extended by Zeng et al. <cit.>. Roughly speaking, it considers matrices of the form M = A_1A_2⋯ A_t with an upper bound t, where A_1,A_2,⋯,A_t have low implementation costs. By means of a careful selection of A_i, some 4× 4 MDS matrices with 35 XOR gates (4-bit input) and 67 XOR gates (8-bit input) are constructed, which have been proven to be optimal under a word-based structure <cit.>. In <cit.>, Sajadieh and Mousavi further applied it to search higher-order MDS matrices with a General Feistel Structure and obtained some 6× 6 and 8× 8 MDS matrices with relatively low costs. §.§ Our contributions In this study, our main objective is to construct the lightest MDS matrices. We first investigate the structures of MDS matrices with low implementation costs and give all possible structures with optimal costs. Based on these structures, we then generate a significant number of new ultra-light MDS matrices. More specifically, we introduce the concept of an "implementation tree". Based on the implementation tree, we develop a new method that starts with several unit vectors and iteratively generates rows for the MDS matrix. As each new row is generated, we check if it satisfies the MDS condition. This verification step is crucial as it eliminates a large number of unproductive branches during the search process. 
Through this refined approach, we are able to identify all MDS matrix structures with optimal costs (based on word metrics). By leveraging these structures, we successfully find all 60 nonequivalent 4× 4 MDS matrices that can be implemented with just 67 XOR operations on the ring F_2[x]/(x^8 + x^2 + 1) and 35 XOR operations on the ring F_2[x]/(x^4 + x + 1). Notably, our results encompass the 10 matrices in <cit.> and the 52 matrices in <cit.>. Here it should be pointed out that our method is also capable of producing a significant quantity of 4× 4 MDS matrices with 36-41 XOR gates for 4-bit input and 68-80 XOR gates for 8-bit input by modestly relaxing the implementation cost constraints. Due to space limitations, we only list the counts of those with the lowest cost for various depths in Table <ref>. Furthermore, we identify 4× 4 involutory MDS matrices on the same ring that require only 68 XOR operations for implementation. To the best of our knowledge, this is the best result currently available. To demonstrate the effectiveness of our algorithm, we conduct a comparative analysis with the methodology described in <cit.>. The results of this comparison are presented in Table <ref>.
Table: The number of MDS matrices of different depths
Ring      Depth  Cost  <cit.>  <cit.>  Our results
F_2[α]    -      67    2       52      60
F_2[α]    5      67    2       4       4
F_2[α]    4      69    1       4       7
F_2[α]    3      77    1       2       3
F_2[β]    -      35    2       52      60
F_2[β]    5      35    2       4       4
F_2[β]    4      37    1       4       7
F_2[β]    3      41    1       2       3
* α is the companion matrix of x^8+x^2+1, β is the companion matrix of x^4+x+1.
Table: Comparison of the efficiency of our algorithm with previous results
Ring      Cost  Depth  Previous Algorithm (Memory, Time)  Our Algorithm (Memory, Time)  Ref.
F_2[α]    67    5      30.9G, 19.5h                       4M, <10ms                     <cit.>
F_2[α]    68    5      24.3G, 2.3h                        4M, <10ms                     <cit.>
F_2[α]    69    4      274G, 30.2h                        4M, <10ms                     <cit.>
* α is the companion matrix of x^8+x^2+1.
As for higher-order MDS matrices, we successfully construct 5× 5 and 6× 6 matrices for 8-bit inputs with 114 and 148 XOR gates respectively, which are the best results to date; see Table <ref>.
Table: The XOR gates required to implement MDS matrices of different orders
Size of matrix  Previous results  Our results  Ref.
5               129               114          <cit.>
6               156               148          <cit.>
§.§ Organization of the paper In Section 2, we introduce several essential concepts that serve as the foundation for our subsequent discussions. In Section 3, we propose a novel algorithm aimed at efficiently searching for ultra-light 4× 4 MDS (Maximum Distance Separable) matrices and involutory MDS matrices. In Section 4, we extend our focus to the construction of higher-order MDS matrices, especially for 5× 5 and 6× 6 MDS matrices. Finally, in Section 5 we summarize our key findings and conclusions. § PRELIMINARIES Since linear functions defined over vector spaces can be equivalently expressed as matrix-vector multiplications, we restrict our attention to the corresponding matrix representations in the following discussion. Let n and k be positive integers, and let F_2 denote the binary field consisting of the elements 0 and 1. In the context of SPN ciphers, linear functions often operate on the concatenated outputs of multiple S-boxes. Here, we specifically consider a k× k matrix whose entries are n× n invertible matrices over F_2. In this setting, k represents the number of S-boxes, while n represents the size of a single S-box. Alternatively, this structure can be interpreted as a larger nk× nk matrix defined over F_2. 
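As a concrete illustration of this identification (a sketch of our own, not code from the paper; the helper name to_binary_matrix is ours), the following Python snippet flattens a k× k matrix of n× n blocks over F_2 into the corresponding nk× nk binary matrix.

def to_binary_matrix(blocks):
    # blocks: k x k list of lists; each entry is an n x n list of 0/1 rows over F_2
    k, n = len(blocks), len(blocks[0][0])
    big = [[0] * (k * n) for _ in range(k * n)]
    for i in range(k):
        for j in range(k):
            for r in range(n):
                for c in range(n):
                    big[i * n + r][j * n + c] = blocks[i][j][r][c]
    return big

# Example: k = 2, n = 2, with entries I and the companion matrix A of x^2 + x + 1.
I = [[1, 0], [0, 1]]
A = [[0, 1], [1, 1]]
for row in to_binary_matrix([[I, A], [A, I]]):
    print(row)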
For simplicity and consistency, we further constrain these n× n matrices to belong to a commutative ring R_n that is a subset of GL(n,2) - the general linear group of n× n invertible matrices over the binary field F_2. This restriction applies throughout the remainder of this paper. As a concrete example, one might consider the case where R_n=F_2[α] and α is the companion matrix of a polynomial of degree n. We denote the set of all k× k invertible matrices with entries in R_n by M_k(R_n). §.§ MDS matrix The branch number is a significant cryptographic parameter used to measure the diffusion properties of linear functions. Given that any arbitrary linear function L can be represented in matrix form, i.e., there exists a matrix M such that L(x) = Mx, the concept of the branch number can be extended to any matrix. In the following, we introduce the notion of the branch number for matrices. Let M ∈ M_k(R_n) be a given matrix. Its differential and linear branch numbers are defined respectively as: B_d(M) = min_x ≠ 0{wt(x) + wt(Mx)} and B_l(M) = min_x ≠ 0{wt(x) + wt(M^Tx)}, where wt(x) denotes the weight of x, i.e., the number of nonzero components in x, and M^T represents the transpose of M. It is evident that for any matrix M in M_k(R_n), its differential and linear branch numbers are at most k+1. If either of these branch numbers equals k+1, the matrix is designated as an MDS matrix. Notably, if the differential branch number of a matrix achieves optimality, its linear branch number does so as well, and vice versa. For a given matrix M, the following theorem provides a necessary and sufficient condition for M to be an MDS matrix. Let R_n be a commutative ring and M be a k × k matrix over R_n. Then M is an MDS matrix if and only if all of its minors are invertible. Let 𝐞_i denote the i-th unit vector, that is, 𝐞_i = (0,…,0,i1,0,…,0) with 1 in the i-th position. A permutation matrix is an invertible matrix whose rows consist of unit vectors. Let M be an MDS matrix and P, Q be two permutation matrices. Then M' = PMQ is also an MDS matrix. For any given MDS matrix M, an MDS matrix M' is said to be equivalent to M if there exist two permutation matrices P and Q such that M' = PMQ, denoted by M' ∼ M. Since the inverse of a permutation matrix is also a permutation matrix, this equivalence relation is symmetric, meaning if M' ∼ M, then M ∼ M'. Let [M] represent the set of all MDS matrices equivalent to M. It is straightforward to observe that [M] comprises (k!)^2 MDS matrices. §.§ Cost and depth of binary matrices The differential and linear branch numbers constitute two pivotal cryptographic indicators of a matrix. When a matrix realized as a circuit, its depth and cost emerge as two additional essential parameters. Subsequently, we study the circuit realization of a general binary matrix, emphasizing its depth and cost implications. Consider any nonzero binary m× n matrix M. An implementation I of M refers to a circuit constructed solely using XOR gates that can compute y=Mx for any input vector x, where x = (x_1,x_2,…,x_n)∈ F_2^n and y = (y_1,y_2,…,y_m)∈ F_2^m. As an illustrative example, let us consider a 4×4 binary matrix M=[ 1 0 0 0; 1 1 0 0; 1 1 1 0; 1 1 1 1 ]. Its implementation I can be expressed as: y_1 = x_1, y_2 = x_1 ⊕ x_2, y_3 = (x_1 ⊕ x_2) ⊕ x_3, y_4 = (x_1 ⊕ x_2) ⊕ (x_3 ⊕ x_4). Figure <ref> depicts the structure of I, where intermediate terms x_5 = x_1 ⊕ x_2 and x_7 = x_3 ⊕ x_4 are introduced for clarity. 
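The implementation above is small enough to check exhaustively. The following Python sketch (our own illustration; the function names are ours) evaluates the straight-line program for all 16 inputs and confirms that its 4 XOR gates compute y = Mx over F_2.

import itertools

M = [[1, 0, 0, 0],
     [1, 1, 0, 0],
     [1, 1, 1, 0],
     [1, 1, 1, 1]]

def straight_line(x1, x2, x3, x4):
    x5 = x1 ^ x2            # 1 XOR gate
    x7 = x3 ^ x4            # 1 XOR gate
    y1 = x1
    y2 = x5                 # reuses x5, no extra gate
    y3 = x5 ^ x3            # 1 XOR gate
    y4 = x5 ^ x7            # 1 XOR gate -> 4 XOR gates in total
    return [y1, y2, y3, y4]

def mat_vec(M, x):          # y = Mx over F_2
    return [sum(M[i][j] & x[j] for j in range(4)) % 2 for i in range(4)]

for x in itertools.product([0, 1], repeat=4):
    assert straight_line(*x) == mat_vec(M, list(x))
print("the 4 XOR gates of I compute y = Mx for all 16 inputs")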
Since I comprises solely of XOR gates, it gives rise to a binary tree B_i rooted at y_i for each y_i (1 ≤ i ≤ m), known as the spanning tree of y_i. In B_i, nodes other than the root y_i are denoted as descendants of y_i. For a given binary matrix M with an implementation I, the cost of I is defined as the total count of XOR gates employed in I. Given a binary m× n matrix M and its implementation I, the depth of I is determined as the maximum height attained among all the trees B_i rooted at y_i (1 ≤ i ≤ m). Considering the aforementioned 4×4 binary matrix M and its implementation I, we observe that the depth and cost of I are 2 and 4, respectively. Notably, a binary matrix M may admit multiple distinct implementations, each potentially exhibiting different costs and depths. For instance, Figure <ref> presents an alternative implementation I' of the same 4×4 matrix M, with a depth and cost of 3. In our work, we prioritize implementations with the lowest cost for any given matrix M. Let M and M' be matrices related by M' = PMQ, where P and Q are permutation matrices, and let I be an implementation of M. An implementation I' of M' is said to be a corresponding implementation of I if it is obtained solely by rearranging the input and output variables in accordance with P and Q. Thus we have the following conclusion. Given equivalent matrices M and M', with I being an implementation of M and I' being the corresponding implementation of I for M', it follows that I and I' have identical cost and depth. By Theorem <ref>, it becomes evident that all matrices belonging to the same equivalence class share a common cost and depth. Therefore, in our search for ultra-light MDS matrices, we can confine our attention to the distinct equivalence classes. §.§ Metric In this section, we discuss the method to compute the cost of a k × k matrix M over R_n: [ y_1; y_2; ⋮; y_k ] = [ a_11 a_12 ⋯ a_1k; a_21 a_22 ⋯ a_2k; ⋮ ⋮ ⋱ ⋮; a_k1 a_k2 ⋯ a_kk ][ x_1; x_2; ⋮; x_k ], where x_i, y_i ∈ F_2^n, a_ij∈ R_n, 1 ≤ i, j ≤ k. Here, we only consider block implementations of M, treating each x_i and y_i as individual blocks. For a given matrix M and its block implementation I, two fundamental operations exist: * Word-wise XOR: X X_1 ⊕ X_2, * Scalar multiplication over R_n: X aX, where X, X_1, X_2 are n-dimensional vectors over F_2 and a ∈ R_n. Word-wise XOR requires n XOR gates; the cost of scalar multiplication aX over R_n depends on the specific value of a. Any block implementation I of matrix M can be expressed as a linear straight-line program, as exemplified below. [ y_1; y_2; y_3; y_4 ] = [ 1 α 1 0; 1 0 1 1; 0 α 1 1; 1 1 1 0 ][ x_1; x_2; x_3; x_4 ], where x_i, y_i ∈ F_2^8 (1 ≤ i ≤ 4), and α is the companion matrix of x^8 + x^2 + 1: α = [ 0 0 0 0 0 0 0 1; 1 0 0 0 0 0 0 0; 0 1 0 0 0 0 0 1; 0 0 1 0 0 0 0 0; 0 0 0 1 0 0 0 0; 0 0 0 0 1 0 0 0; 0 0 0 0 0 1 0 0; 0 0 0 0 0 0 1 0; ]. One implementation of M is given by: y_1 = (x_1 ⊕ x_3) ⊕α x_2, y_2 = (x_1 ⊕ x_3) ⊕ x_4, y_3 = (x_3 ⊕ x_4) ⊕α x_2, y_4 = (x_1 ⊕ x_3) ⊕ x_2. This can be rearranged into a linear straight-line program I as follows: x_5 = x_1 ⊕ x_3, x_6 = x_3 ⊕ x_4, x_7 = x_5 ⊕α x_2 = y_1, x_8 = x_5 ⊕ x_4 = y_2, x_9 = x_6 ⊕α x_2 = y_3, x_10 = x_5 ⊕ x_2 = y_4. Note that each step of I has the form: X = a · X_1 ⊕ b · X_2, where X_1, X_2 are either intermediate results x_i(i≥ 5) calculated previously or input variables x_i (1 ≤ i ≤ 4). 
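In the byte-level Python sketch below (our own encoding, not code from the paper), each x_i is an 8-bit word standing for an element of F_2[x]/(x^8 + x^2 + 1), and the two kinds of steps in I are directly visible: six word-wise XORs and the single scalar multiplication α x_2, whose result is computed once and reused.

MOD = 0x105                      # x^8 + x^2 + 1, so x^8 is reduced to x^2 + 1

def times_alpha(v):              # multiplication by alpha needs a single conditional xor
    v <<= 1
    if v & 0x100:
        v ^= MOD
    return v

def implementation(x1, x2, x3, x4):
    x5 = x1 ^ x3                 # word-wise XOR (8 bit-xors)
    x6 = x3 ^ x4                 # word-wise XOR
    ax2 = times_alpha(x2)        # scalar multiplication, computed once and reused
    x7 = x5 ^ ax2                # = y_1
    x8 = x5 ^ x4                 # = y_2
    x9 = x6 ^ ax2                # = y_3
    x10 = x5 ^ x2                # = y_4
    return [x7, x8, x9, x10]     # 6 word XORs (48 bit-xors) + 1 xor for alpha*x_2 = 49

print(implementation(0x57, 0x83, 0x1A, 0xC6))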
Two distinct types of operations are observed in this form: word-wise XOR, denoted as word-xor(expressed as XOR) to distinguish it from bit-wise xor(expressed as xor), and scalar multiplication, which refers to matrix-vector multiplication over R_n. To compute the cost of I, we proceed as follows: First, the linear straight-line program comprises 6 steps, yielding an XOR cost of 6 × 8 = 48. Second, considering scalar multiplication, the non-trivial multiplication in I is α x_2, which appears twice. Since the latter occurrence of α x_2 can reuse the result of the first, only one calculation is needed when assessing cost. Therefore, the cost of scalar multiplication is 1. In conclusion, the total cost of I is 48 + 1 = 49 xor operations. From the above example, it is evident that when determining the cost of an implementation I of a k × k matrix M over R_n, we follow a two-step process. Firstly, we ascertain the number of steps s in I as a linear straight-line program, dictating the number of XORs necessary in I. Secondly, we identify all unique and non-trivial scalar multiplications in I and tally the total number of xors t required for these scalar multiplications. Ultimately, the cost of I is given by ns + t. § THE CONSTRUCTION OF LIGHTWEIGHT 4×4 MDS MATRICES In this section, we mainly focus on the construction of the 4× 4 MDS matrix. Because of its small size, we can discuss it in more details. First, we will introduce the concept of implementation tree. §.§ Implementation tree In Example <ref>, we observe that under the word-based structure, a matrix implementation can be conceptualized as a linear straight-line program. Each such program uniquely determines a matrix. The lightweight MDS matrix can be constructed by searching for the linear straight-line program with the lowest cost capable of implementing an MDS matrix. Next, we investigate the properties of these linear straight-line programs. Consider a linear straight-line program realizing a matrix: t_1 = a_1 t_1,1⊕ b_1 t_1,2, t_2 = a_2 t_2,1⊕ b_2 t_2,2, …, t_m = a_m t_m,1⊕ b_m t_m,2, where t_i,j (1 ≤ i ≤ m, 1 ≤ j ≤ 2) are either initial variables x_p or intermediate results t_q (1 ≤ q < i). We designate t_q (1 ≤ q ≤ m) as a term of the program, and the number of terms m as its length. A partial order relation can be established among the terms t_1, …, t_m: if t_p = a_pt_q ⊕ b_pt_p,2 or t_p = a_pt_p,1⊕ b_pt_q appears in the program, then t_q ≺ t_p. Furthermore, if t_q ≺ t_p and t_r ≺ t_q, then t_r ≺ t_p. For instance, if t_5 = at_3 + bt_4 and t_4 = ct_1 + dt_2, then t_i ≺ t_5 for 1 ≤ i ≤ 4. The relation t_q ≺ t_p signifies that t_q is an essential component of t_p. Given the partial order relationship, suppose the following linear straight-line program can implement some k × k matrix on R_n, with a_pt_p,1⊕ b_pt_p,2 = t_p where a_p ≠ 0 and b_p ≠ 0: a_1t_1,1⊕ b_1t_1,2 = t_1, ⋮ a_i_1t_i_1,1⊕ b_i_1t_i_1,2 = t_i_1 = y_1, a_i_1+1t_i_1+1,1⊕ b_i_1+1t_i_1+1,2 = t_i_1+1, ⋮ a_i_2t_i_2,1⊕ b_i_2t_i_2,2 = t_i_2 = y_2, ⋮ a_i_kt_i_k,1⊕ b_i_kt_i_k,2 = t_i_k = y_k. This program exhibits the following properties: * If t_p is not an output (i.e., t_p ≠ y_i for all 1 ≤ i ≤ k), then t_p ≺ y_i for some 1 ≤ i ≤ k. * If 1 ≤ q < p ≤ i_k, then either t_q ≺ t_p or t_q and t_p are incomparable. If t_q and t_p are incomparable, their order in the program can be swapped. Let us elaborate on these properties: * The partial order relation t_b ≺ t_a indicates that t_b is necessary for computing t_a, that is to say, t_a cannot be computed before t_b. 
Conversely, if t_b ⊀t_a, whether t_b has been generated is irrelevant to the computation of t_a. For Property 1, if there exists a non-output t_p such that t_p ⊀y_i for all 1 ≤ i ≤ k, this implies that t_p is not required for computing any outputs y_i (1 ≤ i ≤ k). Therefore, removing t_p from the matrix implementation would not affect its functionality, and we can reduce the program's length and simplify the implementation. Thus we always assume that no such a t_p exists in a matrix implementation. * Consider t_q and t_p (1 ≤ q < p ≤ i_k). If t_p ≺ t_q, it means that t_p must be used to compute t_q. However, since t_p is generated after t_q, this leads to a contradiction. Therefore, their relationship can only be t_q ≺ t_p or they are incomparable. If t_p and t_p+1 are incomparable, swapping their positions in the program has no impact on the overall implementation. This is because there are no terms between t_p and t_p+1, so the swap does not affect the generation of these two terms or any subsequent terms. However, it should be noted that if t_p and t_p+1 are incomparable and p - q ≥ 2, such a swap may not be feasible. For example, if there exists q < s < p such that t_s ≺ t_p, swapping t_q and t_p would place t_s after t_p, preventing the generation of t_p and leading to a contradiction. For brevity, we represent the above linear straight-line program as [t_1, …, t_i_1 = y_1, …, t_i_k = y_k]. Applying properties 1 and 2, we arrive at the following theorem: For any given k × k MDS matrix M on R_n, an implementation I = [t_1, …, t_i_1 = y_1, …, t_i_k = y_k] of M can be rearranged to I' = [t_1', …, t_i_1'' = y_1, …, t_i_k' = y_k] by swapping the relative positions of t_p (1 ≤ p ≤ i_k) such that I' satisfies: t_1' ≺ y_1, …, t_i_1'-1' ≺ y_1, t_i_1'+1' ≺ y_2, …, t_i_2'-1' ≺ y_2, ⋮ t_i_k-1'+1' ≺ y_k, …, t_i_k-1' ≺ y_k. Consider the implementation I of M, which we divide into k segments as follows: t_1, ⋯,t_i_1-1,t_i_1=y_1, t_i_1+1, ⋯,t_i_2-1,t_i_2=y_2, ⋮ t_i_k-1+1 ⋯,t_i_k-1,t_i_k=y_k. Utilizing Property 2, we commence the adjustment process segment by segment, starting from the first. In the first segment, we initiate the downward search from t_i_1-1. Suppose the first term encountered that is incomparable with y_1 is t_j_1. Then, under the partial order relation ≺, t_j_1 is incomparable with t_j_1+1,⋯,t_i_1=y_1. This arises because t_j_1 is the inaugural term discovered in the descending search that cannot be compared with y_1. If there existed some p such that t_j_1≺ t_p (j_1 < p < i_1) held true, then since t_p≺ y_1, we would have t_j_1≺ y_1?a contradiction. Hence, t_j_1 is indeed incomparable with t_j_1+1,⋯,t_i_1=y_1. By Property 2, we can successively exchange t_j_1 upward, akin to bubble sorting, until it is positioned behind y_1. This process is repeated until the remaining terms t_1',⋯,t_i_1'-1' satisfy t_s≺ y_1 for all 1≤ s < i_1'. Thus, the first segment is appropriately adjusted. Segments 2 to k-1 are adjusted analogously to the first segment. After the preceding adjustments, any term t_p in segment k necessarily satisfies t_p⊀y_s for all 1≤ s≤ k-1. By virtue of Property 1, t_p≺ y_k ensures that the last segment naturally conforms to the requirements subsequent to the adjustments of the preceding k-1 segments. After the aforementioned adjustments, I' fulfills the theorem's conditions, thereby the conclusion follows. Below we designate I' as a normal implementation. Notably, our adjustments solely alter the order of terms within the linear straight-line program. 
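The normalisation used in the proof can be phrased as a short routine. The sketch below (our own code; the scalar coefficients a_p, b_p are omitted because only the order of terms matters) reorders the terms of a straight-line program so that every term between two outputs feeds the next output, as in Theorem <ref>; it assumes, as in Property 1, that every non-output term feeds some output.

def normalize(k, terms, outputs):
    # terms[t] = (p, q): operand indices; indices < k are the inputs x_1..x_k,
    # index k + s refers to term s.  outputs lists the term indices of y_1, ..., y_k.
    ancestors = []                        # ancestors[t] = terms strictly needed to build term t
    for p, q in terms:
        anc = set()
        for op in (p, q):
            if op >= k:
                anc.add(op - k)
                anc |= ancestors[op - k]
        ancestors.append(anc)

    order, placed = [], set()
    for y in outputs:                     # place the unplaced ancestors of y_i, then y_i itself
        for t in range(len(terms)):       # the original relative order is preserved
            if t not in placed and (t in ancestors[y] or t == y):
                placed.add(t)
                order.append(t)
    new_pos = {old: new for new, old in enumerate(order)}

    def remap(op):
        return op if op < k else k + new_pos[op - k]

    new_terms = [(remap(terms[t][0]), remap(terms[t][1])) for t in order]
    return new_terms, [new_pos[y] for y in outputs]

# Example <ref>: x5=x1+x3, x6=x3+x4, x7=x5+a*x2=y1, x8=x5+x4=y2, x9=x6+a*x2=y3, x10=x5+x2=y4
terms = [(0, 2), (2, 3), (4, 1), (4, 3), (5, 1), (4, 1)]
print(normalize(4, terms, outputs=[2, 3, 4, 5]))

The returned program computes exactly the same outputs as the original; only the order of its terms differs.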
Therefore they are identical when they are realized as circuits. To illustrate Theorem <ref>, we employ the matrix and implementation from Example <ref>. In this instance, the implementation I takes the following form: x_1⊕ x_3 =x_5, x_3⊕ x_4 =x_6, x_5⊕α x_2 =x_7=y_1, x_5⊕ x_4 =x_8=y_2, x_6⊕α x_2 =x_9=y_3, x_5⊕ x_2 =x_10=y_4. The partial order relationships among the terms are as follows: x_5≺ x_7,x_5≺ x_8,x_5≺ x_10,x_6≺ x_9. Adjusting I according to Theorem <ref>: Since x_6 is incomparable with x_7=y_1, we exchange x_6 and x_7 and finish the first segment's adjustment. Next, as x_6 is incomparable with x_8, we swap x_6 and x_8 in the second segment's adjustment. For the third segment, the relationship x_6≺ x_9=y_3 already exists, we do nothing. Similarly, the fourth segment requires no modification. After renumbering the terms, we obtain I' as follows: x_1⊕ x_3 =x_5, x_5⊕α x_2 =x_6=y_1, x_5⊕ x_4 =x_7=y_2, x_3⊕ x_4 =x_8, x_8⊕α x_2 =x_9=y_3, x_5⊕ x_2 =x_10=y_4. The distinction between I and I' lies solely in the ordering of terms, as both ultimately express the same set of relationships: y_1=(x_1⊕ x_3)⊕α x_2, y_2=(x_1⊕ x_3)⊕ x_4, y_3=(x_3⊕ x_4)⊕α x_2, y_4=(x_1⊕ x_3)⊕ x_2. Based on the preceding discussion, when an MDS matrix is implemented, it suffices to consider a normal implementation akin to the form I' in Theorem <ref>. We call the vector (i_1', i_2' - i_1', …, i_k' - i_k-1') as a characteristic of I'. Once the order among y_i is fixed, its characteristic remains constant. To further illustrate the concept, let us consider the MixColumn matrix of the AES algorithm as an example. We introduce the notion of an implementation tree through this example. The MixColumn matrix of AES is given by: M_AES = [ [ α α+1 1 1; 1 α α+1 1; 1 1 α α+1; α+1 1 1 α ]], where α denotes the companion matrix of x^8 + x^4 + x^3 + x + 1. An implementation I of M_AES can be expressed as follows: [H] 130mm! x_5 = 1 · x_1 ⊕ 1 · x_2, x_6 = 1 · x_2 ⊕α· x_5, x_7 = 1 · x_3 ⊕ 1 · x_4, x_8 = 1 · x_6 ⊕ 1 · x_7=y_1, x_9 = 1 · x_2 ⊕ 1 · x_3, x_10 = 1 · x_3 ⊕α· x_9, x_11 = 1 · x_1 ⊕ 1 · x_4, x_12 = 1 · x_10⊕ 1 · x_11=y_2, x_13 = 1 · x_4 ⊕α· x_7, x_14 = 1 · x_5 ⊕ 1 · x_13=y_3, x_15 = 1 · x_1 ⊕α· x_11, x_16 = 1 · x_9 ⊕ 1 · x_15=y_4. The cost associated with I is 12 × 8 + 4 × 3 = 108. Notably, the cost of XOR operations is 96 and comprises the majority of the total implementation cost. If we abstract away the specific values of scalar multiplications, the implementation I can be generalized as follows: [H] 130mm! x_5 = a_1 · x_1 ⊕ a_2 · x_2, x_6 = a_3 · x_2 ⊕ a_4 · x_5, x_7 = a_5 · x_3 ⊕ a_6 · x_4, x_8 = a_7 · x_6 ⊕ a_8 · x_7=y_1, x_9 = a_9 · x_2 ⊕ a_10· x_3, x_10 = a_11· x_3 ⊕ a_12· x_9, x_11 = a_13· x_1 ⊕ a_14· x_4, x_12 = a_15· x_10⊕ a_16· x_11=y_2, x_13 = a_17· x_4 ⊕ a_18· x_7, x_14 = a_19· x_5 ⊕ a_20· x_13=y_3, x_15 = a_21· x_1 ⊕ a_22· x_11, x_16 = a_23· x_9 ⊕ a_24· x_15=y_4. This abstraction highlights the role of XOR operations in the implementation. It can be visually represented using a binary tree, as shown in Figure <ref>. From Figure <ref>, we can observe the following: * The tree is regular, meaning each node is either a leaf node or has two non-empty subtrees. * Each node in the tree is generated from two nodes with smaller serial numbers, defining the order of implementation. Consider a given k × k MDS matrix M on R_n. Let I = [t_1, …, t_i_1 = y_1, …, t_i_k = y_k] be a normal implementation of M. We stipulate that x_1 = t_0, x_2 = t_-1, …, x_k = t_-(k-1). 
For any term t_p = a_p t_m ⊕ b_p t_n in I, it induces a triple T_p = (m, n, p). If t_s = y_i, we mark its corresponding triple as T_s → y_i. By arranging such triples in the order of their appearance in the original implementation I, we obtain a sequence T = [T_1, …, T_i_1→ y_1, …, T_i_k→ y_k]. We refer to T as the implementation tree of M. Each triple T_p (1 ≤ p ≤ i_k) is called a node of T. The vector (i_1, i_2 - i_1, …, i_k - i_k-1) is designated as the type of T, and i_k is termed the capacity of T. Among all k × k MDS matrices on R_n, the implementation tree with the minimum capacity is deemed the simplest tree. In Definition <ref>, the implementation tree is represented as triples in the form T_p = (m,n,p). Alternatively, it can be expressed using XOR operations with parameters. Let x_1=T_0, x_2=T_-1, …, x_k=T_-(k-1) and the implementation tree be denoted as T=[T_1, …, y_1=T_i_1, …, y_k=T_i_k], where T_p = a_pT_m ⊕ b_pT_n corresponds to T_p=(m,n,p). Here, a_p and b_p are formal parameters without specific values. Notably, the partial order relationship between terms in I can be seamlessly translated to the nodes in T. We define the implementation tree as follows: matrix → matrix implementation → implementation tree of matrix. Initially, we have the matrix, from which we derive the word-based implementation of the matrix, and finally extract the concept of the implementation tree. When constructing a matrix, we follow the reverse process: first, we construct an implementation tree, then determine the parameter values for each scalar multiplication to achieve a concrete implementation, and finally derive the matrix from this implementation. Before presenting the specific algorithm, we estimate the capacity of the MDS matrix implementation tree. For a given k × k MDS matrix M on R_n, let I=[t_1, …, t_i_1=y_1, …, t_i_k=y_k] be an implementation of M. When fully expanding t_i_p=y_p (1 ≤ p ≤ k) to the input variable x_q (1 ≤ q ≤ k), each x_q must appear. Each t_i_p=y_p (1 ≤ p ≤ k) corresponds to row p of M. Since M is an MDS matrix, the elements in row p are non-zero. Therefore, each x_q (1 ≤ q ≤ k) must appear in the expansion of t_i_p=y_p. The following theorem provides a lower bound for the capacity of MDS matrix implementations. For a given k × k (k ≥ 3) MDS matrix M on R_n, let T=[T_1, …, T_i_1→ y_1, …, T_i_k→ y_k] be an implementation tree of M. Then, i_k ≥ 2k-1. Assume an implementation I=[t_1, …, t_i_1=y_1, …, t_i_k=y_k] of M corresponds to T. By Lemma <ref>, we have i_1 ≥ k-1. Since each output variable t_i_p=y_p (2 ≤ p ≤ k) must be generated, it follows that i_p - i_p-1≥ 1 (2 ≤ p ≤ k). Thus, i_k ≥ 2k-2. Equality holds only if i_1 = k-1 and i_p - i_p-1 = 1 (2 ≤ p ≤ k). We show that this situation does not arise, implying i_k ≥ 2k-1. If i_1 = k-1 and i_p - i_p-1 = 1 (2 ≤ p ≤ k), by Lemma <ref>, each x_q appears exactly once in the expansion of t_i_1=y_1. Consider y_2 = t_i_2 = a_i_2t_i_2,1⊕ b_i_2t_i_2,2 with i_2,1 < i_2,2. Then, t_i_2,2≺ y_1 or t_i_2,2 = y_1 since y_2 is generated immediately after y_1. We analyze two cases: * Consider the case where t_i_2,1 and t_i_2,2 are not comparable under the partial order relation ≺. It follows that t_i_2,2≺ y_1. Suppose that y_1 = t_i_1 = a_i_1t_i_2,1⊕ b_i_1t_i_2,2. If this is not the case, then y_1 = t_i_1 = a_i_1t_i_1,1⊕ b_i_1t_i_1,2, which implies that either t_i_2,1≺ t_i_1,1 or t_i_2,2≺ t_i_1,1 or t_i_2,1≺ t_i_1,2 or t_i_2,2≺ t_i_1,2. 
This would lead to the omission of some x_q when expanding y_2 in terms of the input variables, contradicting Lemma <ref>. Assuming y_1 = t_i_1 = a_i_1t_i_2,1⊕ b_i_1t_i_2,2, since k ≥ 3, at least one of t_i_2,1 or t_i_2,2 must contain more than two input variables. Let x_m and x_n be two such variables. Considering the 2 × 2 matrix M_2 of M corresponding to x_m, x_n; y_1, y_2, we find that the determinant of M_2 is 0, contradicting the fact that M is an MDS matrix. Therefore, the first case does not hold. * Now consider the case where t_i_2,1≺ t_i_2,2. In this scenario, it must be that t_i_2,2 = y_1. Otherwise, there would be some x_q missing when expanding y_2 in terms of the input variables, contradicting Lemma <ref>. Assuming t_i_2,2 = y_1, if t_i_2,1 contains more than two input variables, let x_m and x_n be two such variables. If not, choose x_m and x_n from the missing terms of t_i_2,1. Considering the 2 × 2 submatrix M_2' of M corresponding to x_m, x_n; y_1, y_2, we find that the determinant of M_2' is 0, contradicting the fact that M is an MDS matrix. Therefore, the second case does not hold. In both cases, we reach a contradiction. Therefore, i_k ≥ 2k-1. §.§ An algorithm for searching the simplest tree In the preceding section, we introduced the concept of the implementation tree. We can leverage this structure to construct the k × k MDS matrix over R_n. As exemplified in Example <ref>, the cost of the word-xor operation constitutes a significant portion of the overall cost in implementing an MDS matrix. Therefore, our approach prioritizes searching for implementation trees with minimal capacity. Subsequently, we assign parameters for scalar multiplications to derive the specific implementation, and ultimately yield a low-cost MDS matrix. Our algorithm is divided into two main parts: * Firstly we treat the parameters a, b of each node Y = a · Y_1 ⊕ b · Y_2 in an implementation tree as undetermined coefficients. We perform a traversal search to identify all the simplest trees which have the potential to form MDS matrices. * Secondly we assign values to the undetermined parameters within the simplest trees. This instantiation process will result in a specific MDS matrix. By Theorem <ref>, for a given k × k (k ≥ 3) MDS matrix M over R_n, any implementation T = [T_1, ⋯, y_1 = T_i_1, ⋯, y_k = T_i_k] of M satisfies the conditions i_k ≥ 2k - 1, i_1 ≥ k - 1, and i_p - i_p-1≥ 1 (2 ≤ p ≤ k). Therefore, we commence our search with i_k = 2k - 1 and look for the simplest trees with this capacity. If there does not exist such a simplest tree, we increment i_k and repeat the search. Next, we outline the procedure for finding a possible implementation tree with a capacity of i_k as below: * We solve the indefinite equation s_1 + s_2 + ⋯ + s_k = i_k under the constraints s_1 ≥ k - 1 and s_i ≥ 1 (2 ≤ i ≤ k) to obtain the set of all possible types, denoted by I_i_k, where each type represents the number of non-leaf nodes in its corresponding implementation tree. * For each type (i_1, i_2 - i_1, ⋯, i_k - i_k-1) ∈ I_i_k, we initiate the search with input variables x_1, x_2, ⋯, x_k. Leveraging the property that each node T_p in T = [T_1, ⋯, y_1 = T_i_1, ⋯, y_k = T_i_k] is the sum of preceding nodes, we perform a traversal search and sequentially generate k binary trees (corresponding to rows of the matrix) of the specified type. Concurrently, we verify if the matrix associated with T is an MDS matrix. For simplicity, we set x_1 = T_0, x_2 = T_-1, ⋯, x_k = T_-(k-1). 
To generate the first binary tree, rooted at y_1: * We recursively generate the tree starting from T_1. Each T_1 is expressed as T_1 = a_1T_p_1⊕ b_1T_p_2 with -(k-1) ≤ p_1 < p_2 ≤ 0. This process yields C_k^2 distinct T_1 options. Each T_1 can then serve as a child node in subsequent trees. * We proceed to generate T_2 using a similar approach: T_2 = a_2T_p_1⊕ b_2T_p_2 with -(k-1) ≤ p_1 < p_2 ≤ 1. For each T_1, we obtain C_k+1^2 different T_2 options, resulting in a total of C_k^2 · C_k+1^2 unique (T_1, T_2) sequences. * We repeat this process until we generate y_1 = T_i_1. At this point, we have generated C_k^2 · C_k+1^2 ⋯ C_k+i_1-1^2 sequences of the form (T_1, T_2, ⋯, y_1 = T_i_1). At this stage, each generated sequence ensures that each term T_p is expressed as the sum of its predecessors. However, we must further verify the following conditions to determine if these sequences can constitute the first tree in the implementation tree: * T_1 ≺ y_1, T_2 ≺ y_1, ⋯, T_i_1-1≺ y_1. This condition ensures the normality of the implementation and can be checked relatively easily. * Whether the 1 × k matrix corresponding to y_1 satisfies the MDS condition (i.e., all submatrix determinants are non-zero). Since the scalar multiplications in the implementation tree lack specific values, the elements of the 1 × k matrix are polynomials in a_1, b_1, ⋯, a_i_1, b_i_1. In practice, we verify if the determinants of all submatrices of the 1 × k matrix are not divisible by 2 by means of symbolic calculation. If a determinant is divisible by 2, it implies that regardless of the specific values assigned to the parameters a_1, b_1, ⋯, a_i_1, b_i_1, they cannot form an MDS matrix. Such sequences are subsequently discarded. Let S_1 denote the set of all sequences that pass the above test. Each sequence (T_-(k-1),⋯,T_i_1) in S_1 serves as the foundation for generating the second tree in the implementation tree hierarchy. Following a similar approach to the generation of the first tree, we obtain | S_1|· C_k+i_1^2⋯ C_k+i_1+i_2-1^2 candidate sequences. Subsequently, we evaluate whether they meet two conditions: * T_i_1+1≺ y_2, T_i_1+2≺ y_2, ⋯, T_i_1+i_2-1≺ y_2 * Whether the 2× k matrix associated with y_1, y_2 satisfies the MDS condition, i.e., all submatrix determinants are nonzero. The set of sequences that fulfill the above two conditions is denoted as S_2. By extension, all subsequent trees in the implementation tree hierarchy can be generated analogously. This iterative process terminates in two scenarios: * At row p, if S_p = ∅, it indicates the absence of an implementation tree corresponding to the type (i_1, i_2-i_1, ⋯, i_k-i_k-1) (or, more precisely, the absence of an implementation tree for types whose first p terms equal (i_1, i_2-i_1, ⋯, i_p-i_p-1)). * If S_k ≠∅, then we obtain a successful construction of an implementation tree for the MDS matrix of the given type. Since the search is exhaustive, we can obtain all implementation trees corresponding to the type (i_1, i_2-i_1, ⋯, i_k-i_k-1). Finally, by iterating over all possible values of (i_1, i_2-i_1, ⋯, i_k-i_k-1) in I_i_k, we either obtain all implementation trees with capacity i_k or determine that no such trees exist. At this point, we increment i_k by 1 and repeat all the above procedures. §.§ Feasible structures We executed Algorithm <ref> on matrices of sizes 3× 3, 4× 4, and 5× 5 defined over the ring R_n and obtained the following results. Of particular interest is the 4× 4 MDS matrix over R_n. 
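To make the first step concrete, the short Python sketch below (our own code) enumerates the candidate type set I_{i_k} for a given k and capacity; every type returned is then passed to the row-by-row tree generation and MDS filtering described above.

from itertools import product

def candidate_types(k, capacity):
    # all (s_1, ..., s_k) with s_1 + ... + s_k = capacity, s_1 >= k - 1 and s_i >= 1
    return [s for s in product(range(1, capacity + 1), repeat=k)
            if sum(s) == capacity and s[0] >= k - 1]

print(len(candidate_types(4, 7)), len(candidate_types(4, 8)))   # 4 and 10 candidate types
print(candidate_types(4, 8))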
Guided by Theorem <ref>, our traversal search began with an implementation tree of capacity 7. Utilizing Algorithm <ref>, we swiftly discovered, within milliseconds, that no tree of this capacity existed. Subsequently, we tested the trees with a capacity of 8 and successfully located some viable types. A comprehensive list of all the simplest trees with a capacity of 8 is provided in Appendix A. §.§ The 4× 4 MDS matrices with the lowest cost on R_8 In this section we establish the ring R_8 as F_2[α], where α denotes the companion matrix associated with the polynomial α^8 + α^2 + 1. Explicitly, α is represented by the following matrix: α = [ 0 0 0 0 0 0 0 1; 1 0 0 0 0 0 0 0; 0 1 0 0 0 0 0 1; 0 0 1 0 0 0 0 0; 0 0 0 1 0 0 0 0; 0 0 0 0 1 0 0 0; 0 0 0 0 0 1 0 0; 0 0 0 0 0 0 1 0; ]. The choice of this ring is motivated by the fact that it contains a unique zero divisor, α^4 + α + 1, and the multiplication by α has a minimal cost of just one xor operation. To construct MDS matrices with the lowest possible cost, we must assign values to the simplest trees described in section 3.3. During the assignment process, we restrict the values of all parameters to the set {1, α, α^2, α^3, α^-1, α^-2, α^-3}, whose elements have the cost of no more than 2 xor operations. We go through all cases with low costs and obtain 60 distinct 4 × 4 MDS matrices over F_2[α] with 67 xor operations. 30 of them are listed in the following tables, and another 30 MDS matrices can be got by replacing α with α^-1. The two optimal 4× 4 matrices, as cited in <cit.>, are encompassed within matrices labeled 21 and 22; the ten matrices documented in <cit.> are integrated into matrices numbered 17, 19, 20, 21, and 22; the 52 matrices detailed in <cit.> are represented in the matrices numbered 1-8 and 13-30. The rest eight matrices labeled 9-12 represent newly reported findings. Furthermore, if we replace α with the companion matrix of the polynomial x^4 + x + 1, we can naturally obtain 60 new MDS matrices with a word length of 4 that require only 35 XOR operations for their implementation. It is important to note that substituting α with any invertible 8 × 8 matrix over the finite field F_2, whose minimal polynomial is α^8 + α^2 + 1, might yield some new results. However, as long as the substituted matrix can be implemented using a single xor operation, the resulting matrices will retain their MDS property and can still be implemented using 67 xor operations. It should be emphasized that the matrix obtained through such a substitution is not equivalent to the original matrix. Furthermore, while there exist numerous substitution methods that satisfy these conditions, we refrain from enumerating them here due to their extensive nature. By slightly relaxing the requirements on the implementation cost, our method can yield a large number of 4× 4 MDS matrices that can be implemented with 36-41 XOR gates (for 4-bit input) and 68-80 XOR gates (for 8-bit input). If we limit the circuit depth, we can obtain MDS matrices with optimal costs for different depths. These matrices are included in Appendix B. §.§ Construction of 4× 4 Involutory MDS Matrices In this section, we endeavor to construct lightweight involutory MDS matrices by assigning specific parameters to the eight simplest trees obtained in Section 3.3. As a result, we have identified six involutory MDS matrices that can be efficiently implemented using only 68 XOR gates. To the best of our knowledge, this represents the current optimal result in terms of implementation cost. 
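During the parameter assignments, every candidate has to be tested against Theorem <ref>. Over F_2[x]/(x^8 + x^2 + 1), which is not a field since x^8 + x^2 + 1 = (x^4 + x + 1)^2, an element is invertible exactly when it is coprime to the modulus, so it suffices to check that every minor has gcd 1 with the modulus. The Python sketch below is our own helper code; the AES MixColumn matrix of Example <ref>, with its own modulus x^8 + x^4 + x^3 + x + 1, is used only as a sanity check.

from itertools import combinations

def mul(a, b, mod):                 # polynomial multiplication over F_2 reduced mod `mod` (degree 8)
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= mod
    return r

def gcd(a, b):                      # gcd of polynomials over F_2 in bit representation
    while b:
        while a and a.bit_length() >= b.bit_length():
            a ^= b << (a.bit_length() - b.bit_length())
        a, b = b, a
    return a

def det(m, mod):                    # Laplace expansion; characteristic 2, so no signs are needed
    if len(m) == 1:
        return m[0][0]
    d = 0
    for j in range(len(m)):
        sub = [row[:j] + row[j + 1:] for row in m[1:]]
        d ^= mul(m[0][j], det(sub, mod), mod)
    return d

def is_mds(m, mod):                 # all square submatrices must have a determinant coprime to mod
    k = len(m)
    for s in range(1, k + 1):
        for rows in combinations(range(k), s):
            for cols in combinations(range(k), s):
                minor = [[m[i][j] for j in cols] for i in rows]
                if gcd(det(minor, mod), mod) != 1:
                    return False
    return True

aes = [[0x02, 0x03, 0x01, 0x01],
       [0x01, 0x02, 0x03, 0x01],
       [0x01, 0x01, 0x02, 0x03],
       [0x03, 0x01, 0x01, 0x02]]
print(is_mds(aes, 0x11B))           # True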
Since row permutations and row multiplications affect the involutory properties of a matrix, we must consider these operations during our search. Specifically, when searching for an involutory matrix, we must account for the original 8× 2 = 16 parameters, as well as additional 4 parameters to represent row multiplications and 4! = 24 possible row orders. It is important to note that the order of the matrix columns does not need to be considered. If P and Q are permutation matrices such that (PAQ)^2 = I, then right-multiplying both sides of the equation by Q^-1 and left-multiplying by Q yields QPAQPA = I. This means that if there exists an involutory matrix in the equivalence class of A, it can be obtained simply by swapping the rows of A. We define the equivalence class of an involutory MDS matrix as follows: If A is an involutory MDS matrix and A' = P^-1AP, where P is a permutation matrix, then A and A' belong to the same equivalence class. It is evident that the implementation of A' differs from that of A only in the order of the input and output variables, and both matrices are either involutory or non-involutory. Our search is restricted to the ring F[x]/(x^8 + x^2 + 1), and the parameters are powers of α, where α is the companion matrix of x^8 + x^2 + 1. We calculate the cost of α^n as |n| and consider the sum of these costs as the total cost t. Scalar multiplications can be reused, potentially reducing the practical cost below t. During the search, we select s (where s ≤ 6) out of 20 parameters for assignment, fixing the unselected parameters at 1. We limit the total cost of the parameters to t ≤ 8. Under these conditions, we find that only the simplest trees 3 and 4 can produce involutory matrices. These involutory matrices satisfy s ≥ 4 and t ≥ 5. Focusing on the lightest involutory MDS matrices with t = 5, we identify 6 matrices from the simplest tree 3 and 12 matrices from the simplest tree 4. We provide these 18 matrices satisfying t = 5 below. Simplest tree 3: y_1=(a_5(a_3(a_0x_1⊕ a_1x_2)⊕ a_2x_3)⊕ a_4x_4)· a_16, y_2=(a_10x_3⊕ a_11(a_9(a_7(a_3(a_0x_1⊕ a_1x_2)⊕ a_2x_3)⊕ a_6x_1)⊕ a_8y_1))· a_17, y_3=(a_12(a_0x_1⊕ a_1x_2)⊕ a_13(a_9(a_7(a_3(a_0x_1⊕ a_1x_2)⊕ a_2x_3)⊕ a_6x_1)⊕ a_8y_1))· a_18, y_4=(a_15y_3⊕ a_14(a_7(a_3(a_0x_1⊕ a_1x_2)⊕ a_2x_3)⊕ a_6x_1))· a_19. The six MDS matrices, associated with the simplest tree 3 and adhering to the condition t=5, are designated as follows: * a_6=α,a_7=α^2,a_12=α^-1,a_13=α. * a_3=α,a_9=α^2,a_12=α^-1,a_14=α. * a_3=α,a_6=α,a_7=α,a_9=α,a_12=α^-1. * a_7=α,a_9=α,a_12=α^-1,a_13=α,a_14=α. * a_0=α^-1,a_6=α,a_7=α,a_12=α^-1,a_15=α^-1. * a_0=α^-1,a_9=α,a_12=α^-1,a_14=α,a_15=α^-1. The aforementioned six matrices exhibit a specific order among their rows, which is as follows: P=[ [ 0 0 1 0; 0 0 0 1; 1 0 0 0; 0 1 0 0 ] ]. The resulting matrix M is expected to have the order (y_3, y_4, y_1, y_2). Among the six involutory MDS matrices with t=5 mentioned previously, the fourth and sixth assignments demonstrate scalar multiplication reuse, which are listed below. Since the terms multiplied by these scalars are identical, the multiplication cost is calculated only once. Therefore, the total cost of the entire matrix operation is determined to be 8 × 8 + 4 = 68. 4:[ [ α^3+α^2+α+α^-1 α^3+α+α^-1 α^3+α α; α^3+α^-1 α^3+α^2+α+α^-1 α^3+α^2+α α; 1 1 1 1; α^2+α+1 α^2+1 α^2 1 ] ], 6:[ [ α+1+α^-1+α^-2 α+1+α^-1 α+1 1; α+α^-1+α^-2+α^-3 α+1+α^-1+α^-2 α+1+α^-1 α^-1; α^-1 1 1 1; α+1+α^-1 α+1 α 1 ] ]. 
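Checking the involutory property itself is equally mechanical: square the candidate matrix over the ring and compare it with the identity. A minimal sketch follows (our own code; the 2× 2 matrix at the end is only a toy sanity check over a ring of characteristic 2, not one of the matrices above).

MOD = 0x105                                    # x^8 + x^2 + 1

def mul(a, b):                                 # multiplication in F_2[x]/(x^8 + x^2 + 1)
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= MOD
    return r

def xor_sum(vals):
    r = 0
    for v in vals:
        r ^= v
    return r

def mat_mul(A, B):                             # product of square matrices over the ring
    k = len(A)
    return [[xor_sum(mul(A[i][t], B[t][j]) for t in range(k)) for j in range(k)] for i in range(k)]

def is_involutory(M):
    k = len(M)
    identity = [[1 if i == j else 0 for j in range(k)] for i in range(k)]
    return mat_mul(M, M) == identity

print(is_involutory([[1, 1], [0, 1]]))         # True: [[1,1],[0,1]] squares to I in characteristic 2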
Simplest tree 4: y_1=(a_4(a_0x_1⊕ a_1x_2)⊕ a_5(a_2x_3⊕ a_3x_4))· a_16, y_2=(a_11(a_9(a_6x_1⊕ a_7(a_2x_3⊕ a_3x_4))⊕ a_8x_3)⊕ a_10(a_0x_1⊕ a_1x_2))· a_17, y_3=(a_12y_1⊕ a_13(a_9(a_6x_1⊕ a_7(a_2x_3⊕ a_3x_4))⊕ a_8x_3))· a_18, y_4=(a_15y_2⊕ a_14(a_6x_1⊕ a_7(a_2x_3⊕ a_3x_4)))· a_19. Here we present the twelve assignments of MDS matrices corresponding to the simplest tree 4 satisfying the condition (t=5). These assignments are enumerated as follows: * a_4=α,a_5=α^-1,a_8=α,a_9=α^2. * a_6=α,a_7=α^2,a_10=α^-1,a_11=α. * a_5=α^-1,a_7=α,a_11=α^2,a_13=α. * a_4=α,a_9=α^2,a_10=α^-1,a_14=α. * a_4=α,a_6=α,a_7=α,a_9=α,a_10=α^-1. * a_5=α^-1,a_7=α,a_8=α,a_9=α,a_11=α. * a_2=α^-1,a_5=α^-1,a_8=α,a_9=α,a_12=α^-1. * a_4=α,a_5=α^-1,a_9=α,a_11=α,a_13=α. * a_2=α^-1,a_5=α^-1,a_11=α,a_12=α^-1,a_13=α. * a_7=α,a_9=α,a_10=α^-1,a_11=α,a_14=α. * a_0=α^-1,a_6=α,a_7=α,a_10=α^-1,a_15=α^-1. * a_0=α^-1,a_9=α,a_10=α^-1,a_14=α,a_15=α^-1. The order between the rows for the above assignments is given by the permutation matrix: P=[ [ 0 0 1 0; 0 0 0 1; 1 0 0 0; 0 1 0 0 ] ], which implies that the resulting matrix M should have the order (y_2, y_4, y_1, y_3). Among the twelve involutory MDS matrices presented above, the 8th, 9th, 10th, and 12th assignments exhibit scalar multiplication reuse and have the cost of 68: [ [ α^2+1 1 α^2+α α^2; α^2 1 α^2+α+1 α^2+1; α α α^-1 α^-1; α^2+α α α^2+α+α^-1 α^2+α^-1 ]] [ [ α+1 1 α+1 α; α 1 α+1+α^-1 α+1; 1 1 α^-2 α^-1; α+α^-1 α^-1 α+1+α^-3 α+α^-2 ] ], (8) (9) [ [ α^2+α^-1 α^-1 α^3+α α^3; α^2+α+α^-1 α^-1 α^3+α^2+α α^3+α^2; 1 1 1 1; α+1 1 α^2 α^2+1 ] ] [ [ α+α^-2 α^-1 α+1 α; α+1+α^-3 α^-2 α+1+α^-1 α+1; α^-1 1 1 1; α+α^-1 1 α α+1 ] ]. (10) (12) § CONSTRUCTION OF HIGHER ORDER MDS MATRICES Due to the high efficiency of our algorithm, we are able to search for higher-order MDS matrices. Below, we present the implementation tree structures for MDS matrices of order 5 and 6, along with the actual matrices. Specifically, we demonstrate a 5th-order MDS matrix with a word length of 8 that can be implemented using 114 xor operations, and a 6th-order MDS matrix with a word length of 8 that can be implemented using 148 xor operations. §.§ 5th-order MDS Matrix Using Algorithm <ref>, we can obtain the implementation tree of a 5th-order MDS matrix. Figure <ref> illustrates such an implementation tree for a 5× 5 MDS matrix. When dealing with matrices of higher orders, we arrive the case that the number of parameters to be assigned increases significantly, so we necessitate a filtering process. Consider the implementation tree of the 5× 5 MDS matrix depicted in Figure <ref> as an example. This tree is comprised of 12 XOR operations, and totally 24 parameters (a_1, a_2, …, a_24) need to be assigned, where each a_i represents an n× n matrix over F_2 and n denotes the word length: x_6 = a_1· x_1 ⊕ a_2· x_2, x_7 = a_3· x_3 ⊕ a_4· x_4, x_8 = a_5· x_6 ⊕ a_6· x_7, x_9 = a_7· x_5 ⊕ a_8· x_8 = y_1, x_10 = a_9· x_1 ⊕ a_10· x_3, x_11 = a_11· x_5 ⊕ a_12· x_8, x_12 = a_13· x_2 ⊕ a_14· x_11, x_13 = a_15· x_7 ⊕ a_16· x_12 = y_2 , x_14 =a_17· x_4 ⊕ a_18· x_11, x_15 = a_19· x_6 ⊕ a_20· x_14 = y_3, x_16 = a_21· x_8 ⊕ a_22· x_12 = y_4, x_17 = a_23· x_8 ⊕ a_24· x_14 = y_5, To minimize the implementation cost of the MDS matrix, we aim to maximize the number of identity matrices among the assigned parameters (a_1, a_2, …, a_24). This simplification process involves searching for the minimum number of non-identity matrices required to form an MDS matrix implementation tree. Initially, we assume that at least s non-identity matrices are needed. 
We start with s = 1 and iterate through all C_24^s possible combinations of positions for these non-identity matrices. If a valid MDS matrix can be formed with this assignment, we have a simplified implementation tree. Otherwise, we increment s and repeat the process until a valid tree is found. In practice, it is found that an implementation tree for a 5× 5 MDS matrix can be constructed using only 5 non-identity matrices among the 24 positions. Such a tree is given by the following equations: x_6 = x_1 ⊕ b_1 · x_2, x_7 = x_3 ⊕ x_4, x_8 = b_2 · x_6 ⊕ b_3 · x_7, x_9 = x_5 ⊕ x_8 = y_1, x_10 = x_1 ⊕ x_3, x_11 = x_5 ⊕ x_8, x_12 = x_2 ⊕ b_4 · x_11, x_13 = x_7 ⊕ x_12 = y_2, x_14 = x_4 ⊕ b_5 · x_11, x_15 = x_6 ⊕ x_14 = y_3, x_16 = x_8 ⊕ x_12 = y_4, x_17 = x_8 ⊕ x_14 = y_5. The algorithm for simplifying the MDS matrix implementation tree structure is outlined in Algorithm <ref>. From the preceding example, it is evident that Algorithm <ref> significantly reduces the number of parameters that require assignment. For these reduced parameters, we can directly assign values. Consider the implementation tree of the 5× 5 MDS matrix presented in the previous section (Figure <ref>) as a case study. Assuming a word length of 8, each parameter takes an 8× 8 matrix over F_2. Initially, we select the matrix α as our basis, whose minimal polynomial is x^8+x^6+x^5+x^3+1: α = [ 0 1 0 0 0 0 0 1; 1 0 0 0 0 0 0 0; 0 1 0 0 1 0 0 0; 0 0 1 0 0 0 0 0; 0 0 0 1 0 0 0 0; 0 0 0 0 1 0 0 0; 0 0 0 0 0 1 0 0; 0 0 0 0 0 0 1 0; ]. This polynomial is primitive of degree 8 and requires only 2 xor operations for implementation. To minimize the cost of matrix implementation, we opt for parameter values from lower powers of α, such as α^-4,α^-3,α^-2,α^-1,α^1,α^2,α^3,α^4. For a given 5× 5 MDS matrix, we need to go through 8^5 = 2^15 possible assignments. Running Algorithms <ref> and <ref>, we get a concrete implementation of the 5× 5 MDS matrix on GL(2,8): x_6 = x_1 ⊕α· x_2, x_7 = x_3 ⊕ x_4, x_8 = α^-2· x_6 ⊕α^2 · x_7, x_9 = x_5 ⊕ x_8 = y_1, x_10 = x_1 ⊕ x_3, x_11 = x_5 ⊕ x_8, x_12 = x_2 ⊕α^-3· x_11, x_13 = x_7 ⊕ x_12 = y_2, x_14 = x_4 ⊕α· x_11, x_15 = x_6 ⊕ x_14 = y_3, x_16 = x_8 ⊕ x_12 = y_4, x_17 = x_8 ⊕ x_14 = y_5, where α is as defined earlier, with a minimal polynomial of x^8+x^6+x^5+x^3+1. The overall implementation cost is 12 × 8 + 9 × 2 = 114 xor gates. §.§ 6th-order MDS Matrix We treat the 6th-order matrix similarly to the 5th-order one. An implementation tree of a 6th-order MDS matrix is depicted below: Its corresponding implementation is: x_7 = x_1 ⊕ x_2, x_8 = x_3 ⊕α· x_7, x_9 = x_4 ⊕α^3· x_8, x_10 = x_5 ⊕α^-1· x_9, x_11 = x_6 ⊕ x_10 = y_1, x_12 = x_6 ⊕ x_7, x_13 = x_9 ⊕ x_12, x_14 = x_2 ⊕α^-1· x_10, x_15 = α^2· x_13⊕ x_14, x_16 = x_4 ⊕ x_15 = y_2, x_17 = x_3 ⊕ x_15, x_18 = x_5 ⊕α^2· x_17, x_19 = x_14⊕ x_18 = y_3, x_20 = x_12⊕ x_17 = y_4, x_21 = x_8 ⊕ x_18 = y_5, x_22 = x_13⊕ x_21 = y_6, where α=[ [ 0 1 0 0 0 0 0 1; 1 0 0 0 0 0 0 0; 0 1 0 0 1 0 0 0; 0 0 1 0 0 0 0 0; 0 0 0 1 0 0 0 0; 0 0 0 0 1 0 0 0; 0 0 0 0 0 1 0 0; 0 0 0 0 0 0 1 0; ]] and its minimal polynomial is x^8+x^6+x^5+x^3+1. The cost of the entire implementation is 16× 8 + 10× 2 = 148 xor gates. § CONCLUSION In this paper we present a traversal algorithm tailored for the discovery of lightweight 4× 4 MDS matrices. As a result, we successfully generate all implementation trees for 4× 4 MDS matrices using only 8 word-XOR operations. Based on this systematic approach, we derive a series of MDS matrices that exhibit the lowest computational cost. 
Additionally, we obtain the lowest-cost 4× 4 involutory MDS matrix currently known over GL(2,8). Looking beyond the scope of 4× 4 matrices, we extend our method to higher-order MDS matrices. Specifically, we employ our method to obtain the lowest-cost 5× 5 and 6× 6 MDS matrices as well. § THE SIMPLEST TREES WITH CAPACITY 8 Trees 1-5 have the type (3, 3, 1, 1) and trees 6-8 have the type (4, 2, 1, 1). § THE 4× 4 MDS MATRICES FOR DIFFERENT DEPTHS In Table <ref>, α represents the companion matrix of either x^8+x^2+1 or x^4+x+1. The 4× 4 MDS matrices presented in Table <ref> for depth 4 can be implemented using 69 XOR operations (for 8-bit input) or 37 XOR operations (for 4-bit input), representing the most cost-effective solution at this depth. As the depth decreases to 3, the minimum cost increases to 77 XOR operations (for 8-bit input) or 41 XOR operations (for 4-bit input). Owing to the strict limitation on depth, the available parameter options become constrained. For depth 3, there are three matrices that achieve the lowest cost. These matrices are enumerated below.
1:[ [ 1 1 α^-1 α^-1+α; 1 1+α α α; α α^-1 1+α^-1 1; 1+α 1 1 1+α ] ] x_5 = x_1 ⊕ x_2, x_6 = α· x_4 ⊕ x_5, x_7 = x_3 ⊕ x_4, x_8 = x_6 ⊕α^-1· x_7 = y_1, x_9 = x_2 ⊕ x_3, x_10 = x_6 ⊕α· x_9 = y_2, x_11 = α· x_1 ⊕ x_7, x_12 = α^-1· x_9 ⊕ x_11 = y_3, x_13 = x_6 ⊕ x_11 = y_4.
2:[ [ 1 1 α α^-1+α; 1 1+α^-1 α^-1 α^-1; α^-1 α 1+α 1; 1+α^-1 1 1 1+α^-1 ] ] x_5 = x_1 ⊕ x_2, x_6 = α^-1· x_4 ⊕ x_5, x_7 = x_3 ⊕ x_4, x_8 = x_6 ⊕α· x_7 = y_1, x_9 = x_2 ⊕ x_3, x_10 = x_6 ⊕α^-1· x_9 = y_2, x_11 = α^-1· x_1 ⊕ x_7, x_12 = α· x_9 ⊕ x_11 = y_3, x_13 = x_6 ⊕ x_11 = y_4.
3:[ [ α^-1 α 1 α^-1; α^-1+1 1+α 1 1; 1 1 α^-1+1 α^-2+1; 1+α 1 1+α 1 ] ] x_5 = α^-1· x_1 ⊕α· x_2, x_6 = x_3 ⊕α^-1· x_4, x_7 = x_5 ⊕ x_6 = y_1, x_8 = x_1 ⊕ x_3, x_9 = x_2 ⊕α^-1· x_4, x_10 = x_8 ⊕ x_9, x_11 = x_5 ⊕ x_10 = y_2, x_12 = α^-1· x_6 ⊕ x_10 = y_3, x_13 = α· x_8 ⊕ x_11 = y_4.
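Such listings are easy to cross-check mechanically. The self-contained Python sketch below (our own encoding: bytes stand for elements of F_2[x]/(x^8+x^2+1), with α = 0x02 and α^-1 = x^7 + x = 0x82) verifies matrix 1 above against its straight-line implementation on random inputs; the implementation uses 9 word XORs plus 5 scalar multiplications by α or α^-1, in line with the 77-xor figure for depth 3.

import random

MOD = 0x105                          # x^8 + x^2 + 1

def mul(a, b):                       # multiplication in F_2[x]/(x^8 + x^2 + 1)
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= MOD
    return r

A, AI = 0x02, 0x82                   # alpha and alpha^{-1}
M1 = [[0x01, 0x01, AI, AI ^ A],
      [0x01, 0x01 ^ A, A, A],
      [A, AI, 0x01 ^ AI, 0x01],
      [0x01 ^ A, 0x01, 0x01, 0x01 ^ A]]

def impl(x1, x2, x3, x4):            # the depth-3 implementation of matrix 1
    x5 = x1 ^ x2
    x6 = mul(A, x4) ^ x5
    x7 = x3 ^ x4
    y1 = x6 ^ mul(AI, x7)
    x9 = x2 ^ x3
    y2 = x6 ^ mul(A, x9)
    x11 = mul(A, x1) ^ x7
    y3 = mul(AI, x9) ^ x11
    y4 = x6 ^ x11
    return [y1, y2, y3, y4]

for _ in range(1000):
    x = [random.randrange(256) for _ in range(4)]
    ref = [mul(M1[i][0], x[0]) ^ mul(M1[i][1], x[1]) ^ mul(M1[i][2], x[2]) ^ mul(M1[i][3], x[3])
           for i in range(4)]
    assert impl(*x) == ref
print("implementation 1 matches its matrix")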