source (sequence) | source_labels (sequence) | rouge_scores (sequence) | paper_id (stringlengths 9-11) | ic (unknown) | target (sequence)
---|---|---|---|---|---
[
"Normalising Flows (NFs) are a class of likelihood-based generative models that have recently gained popularity.",
"They are based on the idea of transforming a simple density into that of the data.",
"We seek to better understand this class of models, and how they compare to previously proposed techniques for generative modeling and unsupervised representation learning.",
"For this purpose we reinterpret NFs in the framework of Variational Autoencoders (VAEs), and present a new form of VAE that generalises normalising flows.",
"The new generalised model also reveals a close connection to denoising autoencoders, and we therefore call our model the Variational Denoising Autoencoder (VDAE).",
"Using our unified model, we systematically examine the model space between flows, variational autoencoders, and denoising autoencoders, in a set of preliminary experiments on the MNIST handwritten digits.",
"The experiments shed light on the modeling assumptions implicit in these models, and they suggest multiple new directions for future research in this space.",
"Unsupervised learning offers the promise of leveraging unlabeled data to learn representations useful for downstream tasks when labeled data is scarce BID47 , or even to generate novel data in domains where it is costly to obtain BID15 .",
"Generative models are particularly appealing for this as they provide a statistical model of the data, x, usually in the form of a joint probability density p (x).",
"The model's density function, its samples and representations can then be leveraged in applications ranging from semi-supervised learning and speech and (conditional) image synthesis BID44 BID30 BID14 BID26 to gene expression analysis BID13 and molecule design BID10 .In",
"practice, data x is often high-dimensional and the optimization associated with learning p (x) can be challenging due to an abundance of local minima BID39 and difficulty in sampling from rich high-dimensional distributions BID34 . Despite",
"this, generative modelling has undergone a surge of advancements with recent developments in likelihood-based models BID25 BID8 BID44 and Generative Adversarial Networks (GANs; BID11 ). The former",
"class is particularly attractive, as it offers (approximate) likelihood evaluation and the ability to train models using likelihood maximisation, as well as interpretable latent representations.Autoencoders have a rich history in the unsupervised learning literature owing to their intuitive and simple construction for learning complex latent representations of data. Through fitting",
"a parameterised mapping from the data through a lower dimensional or otherwise constrained layer back to the same data, the model learns to summarise the data in a compact latent representation. Many variants of",
"autoencoders have been proposed to encourage the model to better encode the underlying structure of the data though regularising or otherwise constraining the model (e.g., BID38 BID1 .Denoising Autoencoders",
"(DAEs) are a variant of the autoencoder under which noise is added to the input data that the model must then output noise-free, i.e. x = f θ (x + ) where is sampled from a, possibly structured BID48 BID49 , noise distribution ∼ q( ). They are inspired by the",
"idea that a good representation z would be robust to noise corrupting the data x and that adding noise would discourage the model from simply learning the identity mapping. Although DAEs have been",
"cast as generative models , sampling and computing likelihoods under the model remains challenging.Variational Autoencoders (VAEs) instead assume a probabilistic latent variable model, in which n-dimensional data x correspond to m-dimensional latent representations z following some tractable prior distribution, i.e. x ∼ p φ (x|z) with z ∼ p (z) BID25 . The task is then to learn",
"parameters φ, which requires maximising the log marginal likelihood DISPLAYFORM0 In the majority of practical cases (e.g. p φ (x|z) taken to be a flexible neural network-conditional distribution) the above integral is intractable. A variational lower bound",
"on the marginal likelihood is constructed using a variational approximation q θ (z|x) to the unknown posterior p (z|x): DISPLAYFORM1 The right-hand side of (2), denoted L (θ, φ), is known as the evidence lower bound (ELBO). It can be jointly optimised",
"with stochastic optimisation w.r.t. parameters θ and φ in place of (1).Conditionals q θ (z|x) and p",
"φ (x|z) can be viewed respectively as probabilistically encoding data x in the latent space, and reconstructing it from samples of this encoding. The first term of the ELBO encourages",
"good reconstructions, whereas the second term encourages the model's latent variables to be distributed according to the prior p (z). Generating new data using this model",
"is accomplished by reconstructing samples from the prior.Normalising Flows (NFs) suppose that the sought distribution p (x) can be obtained by warping a simple base density p (z), e.g. a normal distribution BID36 . They make use of the change of variables",
"formula to obtain p (x) through a learned invertible transformation z = f θ (x) as DISPLAYFORM2 Typically, f θ : R n → R n is obtained by stacking several simpler mappings, i.e. DISPLAYFORM3 and the log-determinant obtained as the sum of log-determinants of these mappings.This formulation allows for exact maximum likelihood learning, but requires f θ to be invertible and to have a tractable inverse and Jacobian determinant. This restricts the flexibility of known",
"transformations that can be used in NFs BID8 BID3 and leads to large and computationally intensive models in practice BID26 .NFs can also be thought of as VAEs with",
"encoder and decoder modelled as Dirac deltas p θ (x|z) = δ (f θ (z)) and q θ (z|x) = δ f −1 θ (x) , constructed using a restricted set of transformations. Furthermore, because NFs model continuous",
"density, to prevent trivial solutions with infinite point densities discrete data must be dequantised by adding random noise BID42 BID40 .The contribution of this work is two-fold.",
"First, we shed new light on the relationship",
"between DAEs, VAEs and NFs, and discuss the pros and cons of these model classes. Then, we also introduce several extensions of",
"these models, which we collectively refer to as the Variational Denoising Autoencoders (VDAEs).In the most general form VDAEs generalise NFs",
"and DAEs to discrete data and learned noise distributions. However, when the amount of injected noise is",
"small, VDAE attains a form that allows for using non-invertible transformations (e.g. f θ : R n → R m , with m n). We demonstrate these theoretical advantages through",
"preliminary experimental results on the binary and continuous versions of the MNIST dataset.",
"We introduced Variational Denoising Autoencoders (VDAEs), a family of models the bridges the gap between VAEs, NFs and DAEs.",
"Our model extends NFs to discrete data and non-invertible encoders that use lower-dimensional latent representations.",
"Preliminary experiments on the MNIST handwritten digits demonstrate that our model can be successfully applied to data with discrete support, attaining competitive likelihoods and generating plausible digit samples.",
"We also identified a failure mode of our models, in which their performance does not scale well to cases when latent and input dimensionalities are the same (i.e. when a flow-based encoder is used).Future",
"work should address limitations of the method identified in our experiments. In particular",
", replacing additive coupling blocks with the more powerful invertible convolutions, affine coupling blocks and invertible residual blocks BID9 BID26 BID3 can significantly improve the variational posterior for high dimensions. It can also",
"be interesting to explicitly condition the transformation f θ used for defining the posterior sampling procedure on the data x, for example by defining f θ (x, ) ≡ f x,θ ( ) using a hyper-network BID16 ."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0
] | [
0.24242423474788666,
0.1875,
0.09999999403953552,
0.24390242993831635,
0.25,
0.22727271914482117,
0.09756097197532654,
0.07843136787414551,
0.1395348757505417,
0.037735845893621445,
0.07843136787414551,
0.08888888359069824,
0.09999999403953552,
0.13333332538604736,
0.13636362552642822,
0.1269841194152832,
0.21739129722118378,
0.11267605423927307,
0.07407406717538834,
0.07017543166875839,
0.0555555522441864,
0.08888888359069824,
0.09756097197532654,
0.18518517911434174,
0.07894736528396606,
0.0952380895614624,
0.11999999731779099,
0,
0.1538461446762085,
0.21621620655059814,
0.10526315122842789,
0.12121211737394333,
0.12765957415103912,
0.13333332538604736,
0.3333333134651184,
0.1818181723356247,
0.17391303181648254,
0.1538461446762085,
0.06451612710952759,
0.08888888359069824,
0.08510638028383255
] | HklKEUUY_E | true | [
"We explore the relationship between Normalising Flows and Variational- and Denoising Autoencoders, and propose a novel model that generalises them."
] |
[
"We investigate methods to efficiently learn diverse strategies in reinforcement learning for a generative structured prediction problem: query reformulation.",
"In the proposed framework an agent consists of multiple specialized sub-agents and a meta-agent that learns to aggregate the answers from sub-agents to produce a final answer.",
"Sub-agents are trained on disjoint partitions of the training data, while the meta-agent is trained on the full training set.",
"Our method makes learning faster, because it is highly parallelizable, and has better generalization performance than strong baselines, such as\n",
"an ensemble of agents trained on the full data.",
"We evaluate on the tasks of document retrieval and question answering.",
"The\n",
"improved performance seems due to the increased diversity of reformulation strategies.",
"This suggests that multi-agent, hierarchical approaches might play an important role in structured prediction tasks of this kind.",
"However, we also find that it is not obvious how to characterize diversity in this context, and a first attempt based on clustering did not produce good results.",
"Furthermore, reinforcement learning for the reformulation task is hard in high-performance regimes.",
"At best, it only marginally improves over the state of the art, which highlights the complexity of training models in this framework for end-to-end language understanding problems.",
"Reinforcement learning (RL) has proven effective in several language tasks, such as machine translation (Wu et al., 2016; Ranzato et al., 2015; BID1 , question-answering BID12 Hu et al., 2017) , and text summarization (Paulus et al., 2017) .",
"In RL efficient exploration is key to achieve good performance.",
"The ability to explore in parallel a diverse set of strategies often speeds up training and leads to a better policy (Mnih et al., 2016; Osband et al., 2016) .In",
"this work, we propose a simple method to achieve efficient parallelized exploration of diverse policies, inspired by hierarchical reinforcement learning BID7 Lin, 1993; Dietterich, 2000; Dayan & Hinton, 1993) . We",
"structure the agent into multiple sub-agents, which are trained on disjoint subsets of the training data. Sub-agents",
"are co-ordinated by a meta-agent, called aggregator, that groups and scores answers from the sub-agents for each given input. Unlike sub-agents",
", the aggregator is a generalist since it learns a policy for the entire training set. We argue that it",
"is easier to train multiple sub-agents than a single generalist one since each sub-agent only needs to learn a policy that performs well for a subset of examples. Moreover, specializing",
"agents on different partitions of the data encourages them to learn distinct policies, thus giving the aggregator the possibility to see answers from a population of diverse agents. Learning a single policy",
"that results in an equally diverse strategy is more challenging. Since each sub-agent is",
"trained on a fraction of the data, and there is no communication between them, training can be done faster than training a single agent on the full data. Additionally, it is easier",
"to parallelize than applying existing distributed algorithms such as asynchronous SGD or A3C (Mnih et al., 2016) , as the sub-agents do not need to exchange weights or gradients. After training the sub-agents",
", only their actions need to be sent to the aggregator.We build upon the works of Nogueira & Cho (2017) and Buck et al. (2018b) . Hence, we evaluate our method",
"on the same tasks: query reformulation for document retrieval and question-answering. We show that it outperforms a",
"strong baseline of an ensemble of agents trained on the full dataset. We also found that performance",
"and reformulation diversity are correlated (Sec. 5.5). Our main contributions are the",
"following:• A simple method to achieve more diverse strategies and better generalization performance than a model average ensemble.• Training can be easily parallelized",
"in the proposed method.• An interesting finding that contradicts",
"our, perhaps naive, intuition: specializing agents on semantically similar data does not work as well as random partitioning. An explanation is given in Appendix F.• We",
"report new state-of-the art results on",
"several datasets using BERT (Devlin et al., 2018) .However results improve marginally using reinforcement",
"learning and on the question answering task we see no improvements.",
"3 A query is the title of a paper and the ground-truth answer consists of the papers cited within.",
"Each document in the corpus consists of its title and abstract.",
"We proposed a method to build a better query reformulation system by training multiple sub-agents on partitions of the data using reinforcement learning and an aggregator that learns to combine the answers of the multiple agents given a new query.",
"We showed the effectiveness and efficiency of the proposed approach on the tasks of document retrieval and question answering.",
"We also found that a first attempt based on semantic clustering did not produce good results, and that diversity was an important but hard to characterize reason for improved performance.",
"One interesting orthogonal extension would be to introduce diversity on the beam search decoder BID11 Li et al., 2016) , thus shedding light on the question of whether the gains come from the increased capacity of the system due to the use of the multiple agents, the diversity of reformulations, or both.",
"Furthermore, we found that reinforcement learning for the reformulation task is hard when the underlying system already performs extremely well on the task.",
"This might be due to the tasks being too constrained (which makes it possible for machines to almost reach human performance), and requires further exploration.",
"AGGREGATOR: The encoder f q0 is a word-level two-layer CNN with filter sizes of 9 and 3, respectively, and 128 and 256 kernels, respectively.",
"D = 512.",
"No dropout is used.",
"ADAM is the optimizer with learning rate of 10 −4 and mini-batch of size 64.",
"It is trained for 100 epochs."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.260869562625885,
0.19999998807907104,
0.2380952388048172,
0.1702127605676651,
0.2222222238779068,
0.31578946113586426,
0.21052631735801697,
0.17777776718139648,
0.2222222238779068,
0.307692289352417,
0.15686273574829102,
0.07017543166875839,
0.054054051637649536,
0.1111111044883728,
0.14035087823867798,
0.23255813121795654,
0.1702127605676651,
0.27272728085517883,
0.2222222238779068,
0.15094339847564697,
0.14999999105930328,
0.25925925374031067,
0.1090909019112587,
0.1428571343421936,
0.3636363446712494,
0.2790697515010834,
0.20512820780277252,
0.11764705181121826,
0.10810810327529907,
0.15094339847564697,
0.060606058686971664,
0.04878048226237297,
0.21052631735801697,
0.23255813121795654,
0.15789473056793213,
0.4406779706478119,
0.2857142686843872,
0.25,
0.1818181723356247,
0.3829787075519562,
0.15686273574829102,
0.12244897335767746,
0,
0.06451612710952759,
0.24390242993831635,
0.12121211737394333
] | BJeypMU5wE | true | [
"We use reinforcement learning for query reformulation on two tasks and surprisingly find that when training multiple agents diversity of the reformulations is more important than specialisation."
] |
[
"We study continuous action reinforcement learning problems in which it is crucial that the agent interacts with the environment only through safe policies, i.e.,~policies that keep the agent in desirable situations, both during training and at convergence.",
"We formulate these problems as {\\em constrained} Markov decision processes (CMDPs) and present safe policy optimization algorithms that are based on a Lyapunov approach to solve them.",
"Our algorithms can use any standard policy gradient (PG) method, such as deep deterministic policy gradient (DDPG) or proximal policy optimization (PPO), to train a neural network policy, while guaranteeing near-constraint satisfaction for every policy update by projecting either the policy parameter or the selected action onto the set of feasible solutions induced by the state-dependent linearized Lyapunov constraints.",
"Compared to the existing constrained PG algorithms, ours are more data efficient as they are able to utilize both on-policy and off-policy data.",
"Moreover, our action-projection algorithm often leads to less conservative policy updates and allows for natural integration into an end-to-end PG training pipeline.",
"We evaluate our algorithms and compare them with the state-of-the-art baselines on several simulated (MuJoCo) tasks, as well as a real-world robot obstacle-avoidance problem, demonstrating their effectiveness in terms of balancing performance and constraint satisfaction.",
"The field of reinforcement learning (RL) has witnessed tremendous success in many high-dimensional control problems, including video games (Mnih et al., 2015) , board games (Silver et al., 2016) , robot locomotion (Lillicrap et al., 2016) , manipulation (Levine et al., 2016; Kalashnikov et al., 2018) , navigation (Faust et al., 2018) , and obstacle avoidance (Chiang et al., 2019) .",
"In RL, the ultimate goal is to optimize the expected sum of rewards/costs, and the agent is free to explore any behavior as long as it leads to performance improvement.",
"Although this freedom might be acceptable in many problems, including those involving simulators, and could expedite learning a good policy, it might be harmful in many other problems and could cause damage to the agent (robot) or to the environment (objects or people nearby).",
"In such domains, it is absolutely crucial that while the agent optimizes long-term performance, it only executes safe policies both during training and at convergence.",
"A natural way to incorporate safety is via constraints.",
"A standard model for RL with constraints is constrained Markov decision process (CMDP) (Altman, 1999) , where in addition to its standard objective, the agent must satisfy constraints on expectations of auxiliary costs.",
"Although optimal policies for finite CMDPs with known models can be obtained by linear programming (Altman, 1999) , there are not many results for solving CMDPs when the model is unknown or the state and/or action spaces are large or infinite.",
"A common approach to solve CMDPs is to use the Lagrangian method (Altman, 1998; Geibel & Wysotzki, 2005) , which augments the original objective function with a penalty on constraint violation and computes the saddle-point of the constrained policy optimization via primal-dual methods (Chow et al., 2017) .",
"Although safety is ensured when the policy converges asymptotically, a major drawback of this approach is that it makes no guarantee with regards to the safety of the policies generated during training.",
"A few algorithms have been recently proposed to solve CMDPs at scale while remaining safe during training.",
"One such algorithm is constrained policy optimization (CPO) (Achiam et al., 2017) .",
"CPO extends the trust-region policy optimization (TRPO) algorithm (Schulman et al., 2015a) to handle the constraints in a principled way and has shown promising empirical results in terms scalability, performance, and constraint satisfaction, both during training and at convergence.",
"Another class of these algorithms is by Chow et al. (Chow et al., 2018) .",
"These algorithms use the notion of Lyapunov functions that have a long history in control theory to analyze the stability of dynamical systems (Khalil, 1996) .",
"Lyapunov functions have been used in RL to guarantee closed-loop stability (Perkins & Barto, 2002; Faust et al., 2014) .",
"They also have been used to guarantee that a model-based RL agent can be brought back to a \"region of attraction\" during exploration (Berkenkamp et al., 2017) .",
"Chow et al. (Chow et al., 2018) use the theoretical properties of the Lyapunov functions and propose safe approximate policy and value iteration algorithms.",
"They prove theories for their algorithms when the CMDP is finite with known dynamics, and empirically evaluate them in more general settings.",
"However, their algorithms are value-function-based, and thus are restricted to discrete-action domains.",
"In this paper, we build on the problem formulation and theoretical findings of the Lyapunov-based approach to solve CMDPs, and extend it to tackle continuous action problems that play an important role in control theory and robotics.",
"We propose Lyapunov-based safe RL algorithms that can handle problems with large or infinite action spaces, and return safe policies both during training and at convergence.",
"To do so, there are two major difficulties that need to be addressed:",
"1) the policy update becomes an optimization problem over the large or continuous action space (similar to standard MDPs with large actions), and",
"2) the policy update is a constrained optimization problem in which the (Lyapunov) constraints involve integration over the action space, and thus, it is often impossible to have them in closed-form.",
"Since the number of Lyapunov constraints is equal to the number of states, the situation is even more challenging when the problem has a large state space.",
"To address the first difficulty, we switch from value-function-based to policy gradient (PG) algorithms.",
"To address the second difficulty, we propose two approaches to solve our constrained policy optimization problem (a problem with infinite constraints, each involving an integral over the continuous action space) that can work with any standard on-policy (e.g., proximal policy optimization (PPO) (Schulman et al., 2017) ) and off-policy (e.g., deep deterministic policy gradient (DDPG) (Lillicrap et al., 2016) ) PG algorithm.",
"Our first approach, which we call policy parameter projection or θ-projection, is a constrained optimization method that combines PG with a projection of the policy parameters onto the set of feasible solutions induced by the Lyapunov constraints.",
"Our second approach, which we call action projection or a-projection, uses the concept of a safety layer introduced by (Dalal et al., 2018) to handle simple single-step constraints, extends this concept to general trajectorybased constraints, solves the constrained policy optimization problem in closed-form using Lyapunov functions, and integrates this closed-form into the policy network via safety-layer augmentation.",
"Since both approaches guarantee safety at every policy update, they manage to maintain safety throughout training (ignoring errors resulting from function approximation), ensuring that all intermediate policies are safe to be deployed.",
"To prevent constraint violations due to function approximation errors, similar to CPO, we offer a safeguard policy update rule that decreases constraint cost and ensures near-constraint satisfaction.",
"Our proposed algorithms have two main advantages over CPO.",
"First, since CPO is closely connected to TRPO, it can only be trivially combined with PG algorithms that are regularized with relative entropy, such as PPO.",
"This restricts CPO to on-policy PG algorithms.",
"On the contrary, our algorithms can work with any on-policy (e.g., PPO) and off-policy (e.g., DDPG) PG algorithm.",
"Having an off-policy implementation is beneficial, since off-policy algorithms are potentially more data-efficient, as they can use the data from the replay buffer.",
"Second, while CPO is not a back-propagatable algorithm, due to the backtracking line-search procedure and the conjugate gradient iterations for computing natural gradient in TRPO, our algorithms can be trained end-to-end, which is crucial for scalable and efficient implementation (Hafner et al., 2017) .",
"In fact, we show in Section 3.1 that CPO (minus the line search) can be viewed as a special case of the on-policy version (PPO version) of our θ-projection algorithm, corresponding to a specific approximation of the constraints.",
"We evaluate our algorithms and compare them with CPO and the Lagrangian method on several continuous control (MuJoCo) tasks and a real-world robot navigation problem, in which the robot must satisfy certain constraints, while minimizing its expected cumulative cost.",
"Results show that our algorithms outperform the baselines in terms of balancing the performance and constraint satisfaction (during training), and generalize better to new and more complex environments.",
"We used the notion of Lyapunov functions and developed a class of safe RL algorithms for continuous action problems.",
"Each algorithm in this class is a combination of one of our two proposed projections:",
"θ-projection and a-projection, with any on-policy (e.g., PPO) or off-policy (e.g., DDPG) PG algorithm.",
"We evaluated our algorithms on four high-dimensional simulated robot locomotion MuJoCo tasks and compared them with several baselines.",
"To demonstrate the effectiveness of our algorithms in solving real-world problems, we also applied them to an indoor robot navigation problem, to ensure that the robot's path is optimal and collision-free.",
"Our results indicate that our algorithms",
"1) achieve safe learning,",
"2) have better data-efficiency,",
"3) can be more naturally integrated within the standard end-to-end differentiable PG training pipeline, and",
"4) are scalable to tackle real-world problems.",
"Our work is a step forward in deploying RL to real-world problems in which safety guarantees are of paramount importance.",
"Future work includes",
"1) extending a-projection to stochastic policies and",
"2) extensions of the Lyapunov approach to model-based RL and use it for safe exploration."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1304347813129425,
0,
0.06557376682758331,
0,
0.05882352590560913,
0.04444444179534912,
0.11320754140615463,
0,
0.08510638028383255,
0.0555555522441864,
0.2857142686843872,
0.1860465109348297,
0.0416666641831398,
0.0363636314868927,
0.05128204822540283,
0.06896550953388214,
0,
0.0833333283662796,
0,
0.05714285373687744,
0.0624999962747097,
0,
0,
0.1764705777168274,
0,
0.04444444179534912,
0,
0,
0,
0.10256409645080566,
0.060606054961681366,
0,
0,
0.04651162400841713,
0.09677419066429138,
0.0476190447807312,
0,
0,
0,
0,
0,
0,
0.07843136787414551,
0.08695651590824127,
0.04255318641662598,
0.05405404791235924,
0.06666666269302368,
0.07692307233810425,
0,
0,
0.04878048226237297,
0,
0,
0,
0,
0,
0.12903225421905518,
0,
0,
0.07407406717538834
] | HkxeThNFPH | true | [
"A general framework for incorporating long-term safety constraints in policy-based reinforcement learning"
] |
[
"Generative networks are known to be difficult to assess.",
"Recent works on generative models, especially on generative adversarial networks, produce nice samples of varied categories of images.",
"But the validation of their quality is highly dependent on the method used.",
"A good generator should generate data which contain meaningful and varied information and that fit the distribution of a dataset.",
"This paper presents a new method to assess a generator.",
"Our approach is based on training a classifier with a mixture of real and generated samples.",
"We train a generative model over a labeled training set, then we use this generative model to sample new data points that we mix with the original training data.",
"This mixture of real and generated data is thus used to train a classifier which is afterwards tested on a given labeled test dataset.",
"We compare this result with the score of the same classifier trained on the real training data mixed with noise.",
"By computing the classifier's accuracy with different ratios of samples from both distributions (real and generated) we are able to estimate if the generator successfully fits and is able to generalize the distribution of the dataset.",
"Our experiments compare the result of different generators from the VAE and GAN framework on MNIST and fashion MNIST dataset.",
"Generative network approaches have been widely used to generate samples in recent years.",
"Methods such as GAN BID2 , WGAN BID0 , CGAN BID6 , CVAE BID15 and VAE BID3 have produced nice samples on various image datasets such as MNIST, bedrooms BID10 or imageNet BID8 .One",
"commonly accepted tool to evaluate a generative model trained on images is visual assessment to validate the realistic character of samples. One",
"case of this method is called 'visual Turing tests', in which samples are visualized by humans who try to guess if the images are generated or not. It",
"has been used to assess generative models of images from ImageNet BID1 and also on digit images BID4 . BID13",
"proposes to automate this method with the inception score, which replaces the human judgment by the use of a pretrained classifier to assess the variability of the samples with an entropy measure of the predicted classes and the confidence in the prediction. Unfortunately",
", those two methods do not indicate if the generator collapses to a particular mode of the data distribution. Log-likelihood",
"based evaluation metrics were widely used to evaluate generative models but as shown in Lucas BID5 , those evaluations can be misleading in high dimensional cases.The solution we propose to estimate both sample quality and global fit of the data distribution is to incorporate generated data into the training phase of a classifier before evaluating it. Using generated",
"samples for training has several advantages over using only the original dataset. First, it can make",
"training more efficient when the amount of data is low. As shown in BID7 ,",
"where the conditional distribution P (Y |X)(X represents the samples and Y the classes) learned by a generative model is compared to the same conditional distribution learned by a discriminative model, the generative model performs better in learning this conditional distribution by regularizing the model when the amount of data is low. Secondly, once the",
"generative model is trained, it can sample as much images as needed and can produce interpolations between two images which will induce less risk of overfitting on the data. Other works use generative",
"models for data augmentation BID11 or to produce labeled data BID14 in order to improve the training of discriminative models, but their intention is not to use it to evaluate or compare generative neural networks.Our method evaluates the quality of a generative model by assessing its capacity to fit the real distribution of the data. For this purpose, we use the",
"samples generated by a given trained generative model. Our work aims to show how this",
"data augmentation can benefit the training of a classifier and how we can use this benefit as an evaluation tool in order to assess a generative model. This method evaluates whether",
"the information of the original distribution is still present in the generated data and whether the generator is able to produce new samples that are eventually close to unseen data. We compare classifiers trained",
"over mixtures of generated and real data with varying ratios and with varying total amounts of data. This allows us to compare generative",
"models in various data settings (i.e., when there is few or many data points).The next section will present the related",
"work on generative models, the exploitation of the generated samples and their evaluation. We then present our generative model evaluation",
"framework before presenting experimental results on several generative models with different datasets.",
"This paper introduces a new method to assess and compare the performances of generative models on various labeled datasets.",
"By training a classifier on several mixture of generated and real data we can estimate the ability of a generative model to generalize.",
"When addition of generated data into the training set achieved better data augmentation than traditional data augmentation as Gaussian noise or random dropout, it demonstrates the ability of generative models to create meaningful samples.",
"By varying the number of training data, we compute a data augmentation capacity Ψ G for each model on MNIST and fashion-MNIST datasets.",
"Ψ G is a global estimation of the generalization capacity of a generative model on a given dataset.",
"The results presented here are produced on image datasets but this method can be used on all kinds of datasets or generative models as long as labeled data is available.A ADDITIONAL RESULTS It represents the overfitting capacity of a VAE.",
"All four samples set look good, but for example, the top left trained with only 50 different data often produce similar images (as the samples on top right trained with 100 images).",
"When the number of training images increases the variability seems good afterwards but as we can see in FIG2 when τ = 1 the generator generalizes better the distribution when n is < 1000 than when > 1000 is high.",
"Relative accuracy improvement between the baseline trained on original data and the accuracy with generated or noise data augmentation in training.",
"τ is the ratio between the number of generated data and the total number of data used for training."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.10526315122842789,
0.1538461446762085,
0.17391303181648254,
0.06666666269302368,
0,
0.07692307233810425,
0.11764705181121826,
0.12121211737394333,
0.1428571343421936,
0,
0.0714285671710968,
0,
0.04878048226237297,
0.1249999925494194,
0,
0.20689654350280762,
0,
0.06666666269302368,
0.0952380895614624,
0,
0.07692307233810425,
0.08510638028383255,
0.14999999105930328,
0.24137930572032928,
0.07692307233810425,
0.1538461446762085,
0.05128204822540283,
0.13793103396892548,
0.12121211737394333,
0.2142857164144516,
0.260869562625885,
0.19999998807907104,
0.1875,
0.20000000298023224,
0.23529411852359772,
0.23076923191547394,
0.2083333283662796,
0.10526315122842789,
0,
0.20689654350280762,
0.07999999821186066
] | HJ1HFlZAb | true | [
"Evaluating generative networks through their data augmentation capacity on discrimative models."
] |
[
"We propose Automating Science Journalism (ASJ), the process of producing a press release from a scientific paper, as a novel task that can serve as a new benchmark for neural abstractive summarization.",
"ASJ is a challenging task as it requires long source texts to be summarized to long target texts, while also paraphrasing complex scientific concepts to be understood by the general audience.",
"For this purpose, we introduce a specialized dataset for ASJ that contains scientific papers and their press releases from Science Daily.",
"While state-of-the-art sequence-to-sequence (seq2seq) models could easily generate convincing press releases for ASJ, these are generally nonfactual and deviate from the source.",
"To address this issue, we improve seq2seq generation via transfer learning by co-training with new targets:",
"(i) scientific abstracts of sources and",
"(ii) partitioned press releases.",
"We further design a measure for factuality that scores how pertinent to the scientific papers the press releases under our seq2seq models are.",
"Our quantitative and qualitative evaluation shows sizable improvements over a strong baseline, suggesting that the proposed framework could improve seq2seq summarization beyond ASJ.",
"Neural text summarization (Rush et al., 2015) has undergone an exciting evolution recently: from extractive (Nallapati et al., 2017) through abstractive (Nallapati et al., 2016) to hybrid (See et al., 2017) models; from maximum likelihood to reinforcement learning objectives (Celikyilmaz et al., 2018; Chen & Bansal, 2018) ; from small to large datasets (Grusky et al., 2018) that are also abstractive (Sharma et al., 2019) ; from short to orders of magnitude longer sources and targets (Liu et al., 2018) ; from models trained from scratch to using pre-trained representations (Edunov et al., 2019; Liu & Lapata, 2019) .",
"Such evolution was largely supported by the emergence of seq2seq models (Cho et al., 2014; Sutskever et al., 2014) .",
"These advances are yet to be challenged with a seq2seq summarization task that summarizes a long source to a long target with extreme paraphrasing.",
"Below we argue that ASJ is a natural testbed for such a challenge.",
"Science journalism is one of the few direct connections between scientific research and the general public, lead by media outlets such as Science Daily, Scientific American, and Popular Science.",
"Their journalists face an incredibly difficult task: not only must they carefully read the scientific papers and write factual summaries, but they also need to paraphrase complex scientific concepts using a language that is accessible to the general public.",
"To emulate what a journalist would do, we present a dataset of about 50,000 scientific papers paired with their corresponding Science Daily press releases, and we seek to train a seq2seq model to transform the former into the latter, i.e., an input scientific paper into an output popular summary.",
"Ideally, our model would both identify and extract the relevant points in a scientific paper and it would present them in a format that a layman can understand, just as science journalists do.",
"We now ask: would such a model be successful without a factual and accurate representation of scientific knowledge?",
"Recent work suggests that even simple training of word embeddings could capture certain scientific knowledge from 3.3 million scientific abstracts (Tshitoya et al., 2019) .",
"Therefore, here we propose to transfer knowledge from domains from which a seq2seq model would be able to extract factual knowledge using transfer learning (Caruana, 1997; Ruder, 2019) .",
"We frame our approach as multitask learning (MTL).",
"We perform co-training using both scientific abstracts and parts of the target press releases, and we view these additional domains as potential training sources for representation of scientific facts within the seq2seq model, which ideally would be helpful to ASJ.",
"We demonstrate that MTL improves factuality in seq2seq summarization, and we measure this automatically using a novel evaluation measure that extracts random fragments of the source and evaluates the likelihood of the target given these fragments.",
"We believe that the insights from our experiments can guide future work on a variety of seq2seq tasks.",
"The contributions of our work can be summarized as follows:",
"1. We present a novel application task for seq2seq modelling: automating science journalism (ASJ).",
"2. We present a novel, highly abstractive dataset for summarization for the ASJ task with long source and target texts, where complex scientific notions in the source are paraphrased and explained in simple terms in the target text.",
"3. We propose a transfer learning approach that significantly improves the factuality of the generated summaries.",
"4. We propose an automatic evaluation measure that targets factuality.",
"The rest of this paper is organized as follows: Section 2 discusses previous work.",
"Section 3 describes the new data that we propose for ASJ.",
"Section 4 presents our models.",
"Section 5 introduces our transfer learning experiments for summarization.",
"Section 6 describes our evaluation setup.",
"Section 7 discusses our experiments and the results.",
"Section 8 concludes and points to promising directions for future work.",
"In this section, we focus qualitatively on the advantages and the limitations of our experiments of using transfer learning for summarization.",
"In this work, we proposed a novel application for seq2seq modelling (ASJ), presented a highly abstractive dataset for summarization with long sources and targets (SD), proposed MTL as a means of improving factuality (AB and PART), and proposed a novel factuality-based evaluation (RA).",
"Our transfer learning approach and our random access evaluation measure are in principle domainagnostic and hence are applicable and could be potentially useful for a variety of summarizationrelated seq2seq tasks.",
"Our experimental results have demonstrated that MTL via special tagging for seq2seq models is a helpful step for summarization.",
"In future work, we plan to address the limitations of the current state of AB and PART by equipping our models with pre-trained representations on large corpora, e.g., from the Web, and to use these pre-trained models as knowledge bases (Petroni et al., 2019) , thus expanding our transfer learning objectives for better factual seq2seq summarization.",
"Ilya Sutskever, Oriol Vinayals, and Quoc V Le.",
"Sequence to sequence learning with neural networks.",
"In"
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0
] | [
0.08695651590824127,
0.04444443807005882,
0,
0,
0.1764705777168274,
0.0833333283662796,
0,
0.14999999105930328,
0.09756097197532654,
0.09638553857803345,
0.1111111044883728,
0.10810810327529907,
0,
0.04651162400841713,
0.037735845893621445,
0.09836065024137497,
0,
0.05714285373687744,
0.0476190410554409,
0.1904761791229248,
0.07692307233810425,
0.1111111044883728,
0.1702127605676651,
0.1111111044883728,
0.0714285671710968,
0.1875,
0.0833333283662796,
0.1818181723356247,
0.2142857164144516,
0.0624999962747097,
0,
0,
0.14814814925193787,
0.0833333283662796,
0,
0.06896550953388214,
0.1621621549129486,
0.26923075318336487,
0.2666666507720947,
0.0555555522441864,
0.1492537260055542,
0,
0.1599999964237213
] | SJeRVRVYwS | true | [
"New: application of seq2seq modelling to automating sciene journalism; highly abstractive dataset; transfer learning tricks; automatic evaluation measure."
] |
[
"The interpretability of an AI agent's behavior is of utmost importance for effective human-AI interaction.",
"To this end, there has been increasing interest in characterizing and generating interpretable behavior of the agent.",
"An alternative approach to guarantee that the agent generates interpretable behavior would be to design the agent's environment such that uninterpretable behaviors are either prohibitively expensive or unavailable to the agent.",
"To date, there has been work under the umbrella of goal or plan recognition design exploring this notion of environment redesign in some specific instances of interpretable of behavior.",
"In this position paper, we scope the landscape of interpretable behavior and environment redesign in all its different flavors.",
"Specifically, we focus on three specific types of interpretable behaviors -- explicability, legibility, and predictability -- and present a general framework for the problem of environment design that can be instantiated to achieve each of the three interpretable behaviors.",
"We also discuss how specific instantiations of this framework correspond to prior works on environment design and identify exciting opportunities for future work.",
"The design of human-aware AI agents must ensure that its decisions are interpretable to the human in the loop.",
"Uninterpretable behavior can lead to increased cognitive load on the human -from reduced trust, productivity to increased risk of danger around the agent BID7 .",
"BID5 emphasises in the Roadmap for U.S. Robotics -\"humans must be able to read and recognize agent activities in order to interpret the agent's understanding\".",
"The agent's behavior may be uninterpretable if the human: (1) has incorrect notion of the agent's beliefs and capabilities BID15 BID1 ) FORMULA3 is unaware of the agent's goals and rewards BID6 BID12 (3) cannot predict the agent's plan or policy BID8 BID12 .",
"Thus, in order to be interpretable, the agent must take into account the human's expectations of its behavior -i.e. the human mental model BID0 ).",
"There are many ways in which considerations of the human mental model can affect agent behavior.",
"* equal contribution",
"We will now highlight limitations of the proposed framework and discuss how they may be extended in the future.Multiple decision making problems.",
"The problem of environment design, as studied in this paper, is suitable for settings where the actor performs a single repetitive task.",
"However, our formulation can be easily extended to handle an array of tasks that the agent performs in its environment by considering a set of decision making problems for the actor , where the worst-case score is decided by taking either minimum (or average) over the wci(·) for the set of problems.Interpretability Score.",
"The three properties of interpretable agent behavior are not mutually exclusive.",
"A plan can be explicable, legible and predictable at the same time.",
"In general, a plan can have any combination of the three properties.",
"In Equation 2, Int(·) uses one of these properties at a time.",
"In order to handle more than one property at a time, one could formulate Int(·) as a linear combination of the three properties.",
"In general, the design objective would be to minimize the worst-case interpretability score such that the scores for each property are maximized in the modified environment, or at least allow the designer pathways to trade off among potentially competing metrics.Cost of the agent.",
"In Section 1.3 we mentioned an advantage of the design process in the context of interpretabilitythe ability to offload the computational load on the actor, in having to reason about the observer model, to the offline design stage.",
"However, there is never any free lunch.",
"The effect of environment design is more permanent than operating on the human mental model.",
"That is to say, interpretable behavior while targeted for a particular human in the loop or for a particular interaction, does not (usually) affect the actor going forward.",
"However, in case of design of environment, the actor has to live with the design decisions for the rest of its life.",
"That means, for example, if the environment has been designed to promote explicable behavior, the actor would be incurring additional cost for its behaviors (than it would have had in the original environment).",
"This also affects not only a particular decision making problem at hand, but also everything that the actor does in the environment, and for all the agents it interacts with.",
"As such there is a \"loss of autonomy\" is some sense due to environment design, the cost of which can and should be incorporated in the design process."
] | [
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.06451612710952759,
0.11764705181121826,
0.5238094925880432,
0.1860465109348297,
0.1666666567325592,
0.25,
0.14999999105930328,
0.22857142984867096,
0.15789473056793213,
0.14999999105930328,
0.11764705181121826,
0.1463414579629898,
0.1818181723356247,
0,
0.10256409645080566,
0.10256409645080566,
0.19672130048274994,
0.1428571343421936,
0.06896550953388214,
0.06896550953388214,
0,
0.10526315122842789,
0.2545454502105713,
0.1304347813129425,
0,
0.1249999925494194,
0.1463414579629898,
0.11764705181121826,
0.17391303181648254,
0.09090908616781235,
0.1904761791229248
] | rkxg4a3m9N | true | [
"We present an approach to redesign the environment such that uninterpretable agent behaviors are minimized or eliminated."
] |
[
"Inference models, which replace an optimization-based inference procedure with a learned model, have been fundamental in advancing Bayesian deep learning, the most notable example being variational auto-encoders (VAEs).",
"In this paper, we propose iterative inference models, which learn how to optimize a variational lower bound through repeatedly encoding gradients.",
"Our approach generalizes VAEs under certain conditions, and by viewing VAEs in the context of iterative inference, we provide further insight into several recent empirical findings.",
"We demonstrate the inference optimization capabilities of iterative inference models, explore unique aspects of these models, and show that they outperform standard inference models on typical benchmark data sets.",
"Generative models present the possibility of learning structure from data in unsupervised or semisupervised settings, thereby facilitating more flexible systems to learn and perform tasks in computer vision, robotics, and other application domains with limited human involvement.",
"Latent variable models, a class of generative models, are particularly well-suited to learning hidden structure.",
"They frame the process of data generation as a mapping from a set of latent variables underlying the data.",
"When this mapping is parameterized by a deep neural network, the model can learn complex, non-linear relationships, such as object identities (Higgins et al. (2016) ) and dynamics (Xue et al. (2016) ; Karl et al. (2017) ).",
"However, performing exact posterior inference in these models is computationally intractable, necessitating the use of approximate inference methods.Variational inference (Hinton & Van Camp (1993) ; Jordan et al. (1998) ) is a scalable approximate inference method, transforming inference into a non-convex optimization problem.",
"Using a set of approximate posterior distributions, e.g. Gaussians, variational inference attempts to find the distribution that most closely matches the true posterior.",
"This matching is accomplished by maximizing a lower bound on the marginal log-likelihood, or model evidence, which can also be used to learn the model parameters.",
"The ensuing expectation-maximization procedure alternates between optimizing the approximate posteriors and model parameters (Dempster et al. (1977) ; Neal & Hinton (1998) ; Hoffman et al. (2013) ).",
"Amortized inference (Gershman & Goodman (2014) ) avoids exactly computing optimized approximate posterior distributions for each data example, instead learning a separate inference model to perform this task.",
"Taking the data example as input, this model outputs an estimate of the corresponding approximate posterior.",
"When the generative and inference models are parameterized with neural networks, the resulting set-up is referred to as a variational auto-encoder (VAE) (Kingma & Welling (2014) ; Rezende et al. (2014) ).We",
"introduce a new class of inference models, referred to as iterative inference models, inspired by recent work in learning to learn (Andrychowicz et al. (2016) ). Rather",
"than directly mapping the data to the approximate posterior, these models learn how to iteratively estimate the approximate posterior by repeatedly encoding the corresponding gradients, i.e. learning to infer. With inference",
"computation distributed over multiple iterations, we conjecture that this model set-up should provide improved inference estimates over standard inference models given sufficient model capacity. Our work is presented",
"as follows: Section 2 contains background on latent variable models, variational inference, and inference models; Section 3 motivates and introduces iterative inference models; Section 4 presents this approach for latent Gaussian models, showing that a particular form of iterative inference models reduces to standard inference models under mild assumptions; Section 5 contains empirical results; and Section 6 concludes our work."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.08888888359069824,
0.2631579041481018,
0.0476190410554409,
0.2380952388048172,
0.11538460850715637,
0.25806450843811035,
0.1249999925494194,
0.04081632196903229,
0.2222222238779068,
0.3589743673801422,
0.09756097197532654,
0.0476190410554409,
0.27272728085517883,
0.25,
0.2083333283662796,
0.2926829159259796,
0.3255814015865326,
0.1428571343421936,
0.19354838132858276
] | B1Z3W-b0W | true | [
"We propose a new class of inference models that iteratively encode gradients to estimate approximate posterior distributions."
] |
[
"In artificial neural networks trained with gradient descent, the weights used for processing stimuli are also used during backward passes to calculate gradients.",
"For the real brain to approximate gradients, gradient information would have to be propagated separately, such that one set of synaptic weights is used for processing and another set is used for backward passes.",
"This produces the so-called \"weight transport problem\" for biological models of learning, where the backward weights used to calculate gradients need to mirror the forward weights used to process stimuli.",
"This weight transport problem has been considered so hard that popular proposals for biological learning assume that the backward weights are simply random, as in the feedback alignment algorithm.",
"However, such random weights do not appear to work well for large networks.",
"Here we show how the discontinuity introduced in a spiking system can lead to a solution to this problem.",
"The resulting algorithm is a special case of an estimator used for causal inference in econometrics, regression discontinuity design.",
"We show empirically that this algorithm rapidly makes the backward weights approximate the forward weights.",
"As the backward weights become correct, this improves learning performance over feedback alignment on tasks such as Fashion-MNIST and CIFAR-10.",
"Our results demonstrate that a simple learning rule in a spiking network can allow neurons to produce the right backward connections and thus solve the weight transport problem.",
"Any learning system that makes small changes to its parameters will only improve if the changes are correlated to the gradient of the loss function.",
"Given that people and animals can also show clear behavioral improvements on specific tasks (Shadmehr et al., 2010) , however the brain determines its synaptic updates, on average, the changes in must also correlate with the gradients of some loss function related to the task (Raman et al., 2019) .",
"As such, the brain may have some way of calculating at least an estimator of gradients.",
"To-date, the bulk of models for how the brain may estimate gradients are framed in terms of setting up a system where there are both bottom-up, feedforward and top-down, feedback connections.",
"The feedback connections are used for propagating activity that can be used to estimate a gradient (Williams, 1992; Lillicrap et al., 2016; Akrout et al., 2019; Roelfsema & Ooyen, 2005; Lee et al., 2015; Scellier & Bengio, 2017; Sacramento et al., 2018) .",
"In all such models, the gradient estimator is less biased the more the feedback connections mirror the feedforward weights.",
"For example, in the REINFORCE algorithm (Williams, 1992) , and related algorithms like AGREL (Roelfsema & Ooyen, 2005) , learning is optimal when the feedforward and feedback connections are perfectly symmetric, such that for any two neurons i and j the synaptic weight from i to j equals the weight from j to i, e.g. W ji = W ij (Figure 1 ).",
"Some algorithms simply assume weight symmetry, such as Equilibrium Propagation (Scellier & Bengio, 2017) .",
"The requirement for synaptic weight symmetry is sometimes referred to as the \"weight transport problem\", since it seems to mandate that the values of the feedforward synaptic weights are somehow transported into the feedback weights, which is not biologically realistic (Crick, 1989-01-12; Grossberg, 1987) .",
"Solving the weight transport problem is crucial to biologically realistic gradient estimation algorithms (Lillicrap et al., 2016) , and is thus an important topic of study.",
"Several solutions to the weight transport problem have been proposed for biological models, including hard-wired sign symmetry (Moskovitz et al., 2018) , random fixed feedback weights (Lillicrap et al., 2016) , and learning to make the feedback weights symmetric (Lee et al., 2015; Sacramento et al., 2018; Akrout et al., 2019; Kolen & Pollack, 1994) .",
"Learning to make the weights symmetric is promising because it is both more biologically feasible than hard-wired sign symmetry (Moskovitz et al., 2018) and it leads to less bias in the gradient estimator (and thereby, better training results) than using fixed random feedback weights (Bartunov et al., 2018; Akrout et al., 2019) .",
"However, of the current proposals for learning weight symmetry some do not actually work well in practice (Bartunov et al., 2018) and others still rely on some biologically unrealistic assumptions, including scalar value activation functions (as opposed to all-or-none spikes) and separate error feedback pathways with one-to-one matching between processing neurons for the forward pass and error propagation neurons for the backward pass Akrout et al. (2019) ; Sacramento et al. (2018) .",
"Interestingly, learning weight symmetry is implicitly a causal inference problem-the feedback weights need to represent the causal influence of the upstream neuron on its downstream partners.",
"As such, we may look to the causal infererence literature to develop better, more biologically realistic algorithms for learning weight symmetry.",
"In econometrics, which focuses on quasi-experiments, researchers have developed various means of estimating causality without the need to actually randomize and control the variables in question Angrist & Pischke (2008); Marinescu et al. (2018) .",
"Among such quasi-experimental methods, regression discontinuity design (RDD) is particularly promising.",
"It uses the discontinuity introduced by a threshold to estimate causal effects.",
"For example, RDD can be used to estimate the causal impact of getting into a particular school (which is a discontinuous, all-or-none variable) on later earning power.",
"RDD is also potentially promising for estimating causal impact in biological neural networks, because real neurons communicate with discontinuous, all-or-none spikes.",
"Indeed, it has been shown that the RDD approach can produce unbiased estimators of causal effects in a system of spiking neurons Lansdell & Kording (2019) .",
"Given that learning weight symmetry is fundamentally a causal estimation problem, we hypothesized that RDD could be used to solve the weight transport problem in biologically realistic, spiking neural networks.",
"Here, we present a learning rule for feedback synaptic weights that is a special case of the RDD algorithm previously developed for spiking neural networks (Lansdell & Kording, 2019) .",
"Our algorithm takes advantage of a neuron's spiking discontinuity to infer the causal effect of its spiking on the activity of downstream neurons.",
"Since this causal effect is proportional to the feedforward synaptic weight between the two neurons, by estimating it, feedback synapses can align their weights to be symmetric with the reciprocal feedforward weights, thereby overcoming the weight transport problem.",
"We demonstrate that this leads to the reduction of a cost function which measures the weight symmetry (or the lack thereof), that it can lead to better weight symmetry in spiking neural networks than other algorithms for weight alignment (Akrout et al., 2019) and it leads to better learning in deep neural networks in comparison to the use of fixed feedback weights (Lillicrap et al., 2016) .",
"Altogether, these results demonstrate a novel algorithm for solving the weight transport problem that takes advantage of discontinuous spiking, and which could be used in future models of biologically plausible gradient estimation.",
"In order to understand how the brain learns complex tasks that require coordinated plasticity across many layers of synaptic connections, it is important to consider the weight transport problem.",
"Here, we presented an algorithm for updating feedback weights in a network of spiking neurons that takes advantage of the spiking discontinuity to estimate the causal effect between two neurons (Figure 2 ).",
"We showed that this algorithm enforces weight alignment (Figure 3 ), and identified a loss function, R self , that is minimized by our algorithm (Figure 4) .",
"Finally, we demonstrated that our algorithm allows deep neural networks to achieve better learning performance than feedback alignment on Fashion-MNIST and CIFAR-10 ( Figure 5 ).",
"These results demonstrate the potential power of RDD as a means for solving the weight transport problem in biologically plausible deep learning models.",
"One aspect of our algorithm that is still biologically implausible is that it does not adhere to Dale's principle, which states that a neuron performs the same action on all of its target cells (Strata & Harvey) .",
"This means that a neuron's outgoing connections cannot include both positive and negative weights.",
"However, even under this constraint, a neuron can have an excitatory effect on one downstream target and an inhibitory effect on another, by activating intermediary inhibitory interneurons.",
"Because our algorithm provides a causal estimate of one neuron's impact on another, theoretically, it could capture such polysynaptic effects.",
"Therefore, this algorithm is in theory compatible with Dale's principle.",
"Future work should test the effects of this algorithm when implemented in a network of neurons that are explicitly excitatory or inhibitory.",
"A APPENDIX"
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.19999998807907104,
0.1702127605676651,
0.1904761791229248,
0.4444444477558136,
0.12903225421905518,
0.2857142686843872,
0.1621621549129486,
0.25806450843811035,
0.21052631735801697,
0.5,
0.1538461446762085,
0.09999999403953552,
0.060606054961681366,
0.21739129722118378,
0.15094339847564697,
0.1764705777168274,
0.2028985470533371,
0.0624999962747097,
0.25,
0.1860465109348297,
0.2666666507720947,
0.13114753365516663,
0.15584415197372437,
0.2857142686843872,
0.21052631735801697,
0.07843136787414551,
0,
0.13333332538604736,
0.09090908616781235,
0.1538461446762085,
0.23255813121795654,
0.43478259444236755,
0.4888888895511627,
0.1621621549129486,
0.23999999463558197,
0.3692307770252228,
0.3265306055545807,
0.2222222238779068,
0.3829787075519562,
0.1904761791229248,
0.1818181723356247,
0.4000000059604645,
0.11764705181121826,
0.1875,
0.04878048226237297,
0.052631575614213943,
0.0714285671710968,
0.25641024112701416
] | rJxWxxSYvB | true | [
"We present a learning rule for feedback weights in a spiking neural network that addresses the weight transport problem."
] |
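The record above describes a feedback-weight learning rule based on regression discontinuity design (RDD): a neuron's spiking discontinuity is used to estimate the causal effect of its spike on a downstream neuron, and the feedback weight is moved toward that estimate. The abstract gives no equations, so the sketch below is only a plausible reading under stated assumptions: downstream activity is regressed on the presynaptic drive separately just below and just above the spiking threshold, and the gap between the two fits at the threshold is taken as the causal effect. All names (`rdd_causal_effect`, `drive`, `window`, the 0.5 learning rate) are hypothetical and not taken from the paper.

```python
import numpy as np

def rdd_causal_effect(drive, downstream, threshold=1.0, window=0.2):
    """Regression-discontinuity estimate of a spike's causal effect.

    Uses only trials whose presynaptic drive lands within `window` of the
    spiking threshold, fits a line to the downstream activity on each side
    of the discontinuity, and returns the gap between the two fits at the
    threshold.  Illustrative sketch, not the authors' exact estimator.
    """
    near = np.abs(drive - threshold) < window
    below = near & (drive < threshold)     # trials where the neuron stayed silent
    above = near & (drive >= threshold)    # trials where the neuron spiked
    if below.sum() < 2 or above.sum() < 2:
        return 0.0
    lo = np.polyfit(drive[below], downstream[below], 1)
    hi = np.polyfit(drive[above], downstream[above], 1)
    return np.polyval(hi, threshold) - np.polyval(lo, threshold)

# Toy check: the downstream response jumps by w_forward whenever the neuron
# spikes, so the RDD estimate recovers the forward weight and the feedback
# weight can be nudged toward it.
rng = np.random.default_rng(0)
w_forward = 0.7
drive = rng.uniform(0.5, 1.5, size=5000)
spiked = (drive >= 1.0).astype(float)
downstream = 0.3 * drive + w_forward * spiked + 0.05 * rng.standard_normal(5000)
w_feedback = 0.0
w_feedback += 0.5 * (rdd_causal_effect(drive, downstream) - w_feedback)
print(round(w_feedback, 2))  # roughly 0.35 after one half-step toward ~0.7
```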
[
"Variational inference (VI) methods and especially variational autoencoders (VAEs) specify scalable generative models that enjoy an intuitive connection to manifold learning --- with many default priors the posterior/likelihood pair $q(z|x)$/$p(x|z)$ can be viewed as an approximate homeomorphism (and its inverse) between the data manifold and a latent Euclidean space.",
"However, these approximations are well-documented to become degenerate in training.",
"Unless the subjective prior is carefully chosen, the topologies of the prior and data distributions often will not match.\n",
"Conversely, diffusion maps (DM) automatically \\textit{infer} the data topology and enjoy a rigorous connection to manifold learning, but do not scale easily or provide the inverse homeomorphism.\n",
"In this paper, we propose \\textbf{a)} a principled measure for recognizing the mismatch between data and latent distributions and \\textbf{b)} a method that combines the advantages of variational inference and diffusion maps to learn a homeomorphic generative model.",
"The measure, the \\textit{locally bi-Lipschitz property}, is a sufficient condition for a homeomorphism and easy to compute and interpret.",
"The method, the \\textit{variational diffusion autoencoder} (VDAE), is a novel generative algorithm that first infers the topology of the data distribution, then models a diffusion random walk over the data.",
"To achieve efficient computation in VDAEs, we use stochastic versions of both variational inference and manifold learning optimization.",
"We prove approximation theoretic results for the dimension dependence of VDAEs, and that locally isotropic sampling in the latent space results in a random walk over the reconstructed manifold.\n",
"Finally, we demonstrate the utility of our method on various real and synthetic datasets, and show that it exhibits performance superior to other generative models.",
"Recent developments in generative models such as variational auto-encoders (VAEs, Kingma & Welling (2013) ) and generative adversarial networks (GANs, Goodfellow et al. (2014) ) have made it possible to sample remarkably realistic points from complex high dimensional distributions at low computational cost.",
"While their methods are very different -one is derived from variational inference and the other from game theory -their ends both involve learning smooth mappings from a user-defined prior distribution to the modeled distribution.",
"These maps are closely tied to manifold learning when the prior is supported over a Euclidean space (e.g. Gaussian or uniform priors) and the data lie on a manifold (also known as the Manifold Hypothesis, see Narayanan & Mitter (2010) ; Fefferman et al. (2016) ).",
"This is because manifolds themselves are defined by sets that have homeomorphisms to such spaces.",
"Learning such maps is beneficial to any machine learning task, and may shed light on the success of VAEs and GANs in modeling complex distributions.",
"Furthermore, the connection to manifold learning may explain why these generative models fail when they do.",
"Known as posterior collapse in VAEs (Alemi et al., 2017; Zhao et al., 2017; He et al., 2019; Razavi et al., 2019) and mode collapse in GANs (Goodfellow, 2017) , both describe cases where the forward/reverse mapping to/from Euclidean space collapses large parts of the input to a single output.",
"This violates the bijective requirement of the homeomorphic mapping.",
"It also results in degenerate latent spaces and poor generative performance.",
"A major cause of such failings is when Figure 1 : A diagram depicting one step of the diffusion process modeled by the variational diffusion autoencoder (VDAE).",
"The diffusion and inverse diffusion maps ψ, ψ −1 , as well as the covariance C of the random walk on M Z , are all approximated by neural networks.",
"the geometries of the prior and target data do not agree.",
"We explore this issue of prior mismatch and previous treatments of it in Section 3.",
"Given their connection to manifold learning, it is natural to look to classical approaches in the field for ways to improve VAEs.",
"One of the most principled methods is spectral learning (Schölkopf et al., 1998; Roweis & Saul, 2000; Belkin & Niyogi, 2002) which involves describing data from a manifold X ⊂ M X by the eigenfunctions of a kernel on M X .",
"We focus specifically on DMs, where Coifman & Lafon (2006) show that normalizations of the kernel approximate a very specific diffusion process, the heat kernel over M X .",
"A crucial property of the heat kernel is that, like its physical analogue, it defines a diffusion process that has a uniform stationary distribution -in other words, drawing from this stationary distribution draws uniformly from the data manifold.",
"Moreover, Jones et al. (2008) established another crucial property of DMs, namely that distances in local neighborhoods in the eigenfunction space are nearly isometric to corresponding geodesic distances on the manifold.",
"However, despite its strong theoretical guarantees, DMs are poorly equipped for large scale generative modeling as they are not easily scalable and do not provide an inverse mapping from the intrinsic feature space.",
"In this paper we address issues in variational inference and manifold learning by combining ideas from both.",
"Theory in manifold learning allows us to better recognize prior mismatch, whereas variational inference provides a method to learn the difficult to approximate inverse diffusion map.",
"Our contributions:",
"1) We introduce the locally bi-Lipschitz property, a sufficient condition for a homeomorphism, for measuring the stability of a mapping between latent and data distributions.",
"2) We introduce VDAEs, a class of variational autoencoders whose encoder-decoder feedforward pass approximates the diffusion process on the data manifold with respect to a user-defined kernel k.",
"3) We show that deep neural networks are capable of learning such diffusion processes, and",
"4) that networks approximating this process produce random walks that have certain desirable properties, including well defined transition and stationary distributions.",
"5) Finally, we demonstrate the utility of the VDAE framework on a set of real and synthetic datasets, and show that they have superior performance and satisfy the locally bi-Lipschitz property where GANs and VAEs do not."
] | [
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.25974026322364807,
0.0476190447807312,
0.12244897335767746,
0.23728813230991364,
0.3384615480899811,
0.16326530277729034,
0.25,
0.23999999463558197,
0.24137930572032928,
0.2142857164144516,
0.13698630034923553,
0.29032257199287415,
0.21333332359790802,
0.08510638028383255,
0.2142857164144516,
0.2083333283662796,
0.138888880610466,
0.04999999701976776,
0.09302325546741486,
0.145454540848732,
0.24137930572032928,
0.1428571343421936,
0.08695651590824127,
0.15686273574829102,
0.23880596458911896,
0.17241378128528595,
0.2769230604171753,
0.13333332538604736,
0.1269841194152832,
0.3265306055545807,
0.2857142686843872,
0.18867923319339752,
0.3103448152542114,
0.1702127605676651,
0.11538460850715637,
0.1904761791229248
] | rkg8FJBYDS | true | [
"We combine variational inference and manifold learning (specifically VAEs and diffusion maps) to build a generative model based on a diffusion random walk on a data manifold; we generate samples by drawing from the walk's stationary distribution."
] |
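The VDAE record above leans on the locally bi-Lipschitz property as an "easy to compute and interpret" measure of whether an encoder/decoder pair behaves like a homeomorphism. The paper's precise definition is not reproduced in the excerpt, so the snippet below is only one natural reading: for each point, compare latent-space and data-space distances over a k-nearest-neighbour neighbourhood and report the worst ratio in either direction. The function and parameter names (`local_bilipschitz`, `k`, the 1e-12 floor) are placeholders.

```python
import numpy as np

def local_bilipschitz(x, z, k=10):
    """Crude per-point bi-Lipschitz constants of an encoder x -> z.

    For each point, distances to its k nearest data-space neighbours are
    compared with the corresponding latent distances; the returned constant
    is the largest ratio in either direction.  Values near 1 indicate a
    locally near-isometric (and in particular injective) map, while very
    large values flag collapse.  Illustrative reading, not the paper's
    exact definition.
    """
    n = x.shape[0]
    dx = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    dz = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=-1)
    consts = np.empty(n)
    for i in range(n):
        nbrs = np.argsort(dx[i])[1:k + 1]            # skip the point itself
        ratios = dz[i, nbrs] / np.maximum(dx[i, nbrs], 1e-12)
        consts[i] = max(ratios.max(), 1.0 / max(ratios.min(), 1e-12))
    return consts

# Example: a well-behaved linear map versus a collapsing one.
rng = np.random.default_rng(1)
x = rng.standard_normal((200, 3))
good = local_bilipschitz(x, 2.0 * x)             # constants near 2
collapsed = local_bilipschitz(x, 0.0 * x[:, :1])  # huge constants (collapse)
print(good.mean(), collapsed.mean())
```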
[
"While deep learning and deep reinforcement learning systems have demonstrated impressive results in domains such as image classification, game playing, and robotic control, data efficiency remains a major challenge, particularly as these algorithms learn individual tasks from scratch.",
"Multi-task learning has emerged as a promising approach for sharing structure across multiple tasks to enable more efficient learning.",
"However, the multi-task setting presents a number of optimization challenges, making it difficult to realize large efficiency gains compared to learning tasks independently.",
"The reasons why multi-task learning is so challenging compared to single task learning are not fully understood.",
"Motivated by the insight that gradient interference causes optimization challenges, we develop a simple and general approach for avoiding interference between gradients from different tasks, by altering the gradients through a technique we refer to as “gradient surgery”.",
"We propose a form of gradient surgery that projects the gradient of a task onto the normal plane of the gradient of any other task that has a conflicting gradient.",
"On a series of challenging multi-task supervised and multi-task reinforcement learning problems, we find that this approach leads to substantial gains in efficiency and performance. ",
"Further, it can be effectively combined with previously-proposed multi-task architectures for enhanced performance in a model-agnostic way.",
"While deep learning and deep reinforcement learning (RL) have shown considerable promise in enabling systems to perform complex tasks, the data requirements of current methods make it difficult to learn a breadth of capabilities particularly when all tasks are learned individually from scratch.",
"A natural approach to such multi-task learning problems is to train a single network on all tasks jointly, with the aim of discovering shared structure across the tasks in a way that achieves greater efficiency and performance than solving the tasks individually.",
"However, learning multiple tasks all at once results in a difficult optimization problem, sometimes leading to worse overall performance and data efficiency compared to learning tasks individually (Parisotto et al., 2015; Rusu et al., 2016a) .",
"These optimization challenges are so prevalent that multiple multi-task RL algorithms have considered using independent training as a subroutine of the algorithm before distilling the independent models into a multi-tasking model Parisotto et al., 2015; Rusu et al., 2016a; Ghosh et al., 2017; Teh et al., 2017) , producing a multi-task model but losing out on the efficiency gains over independent training.",
"If we could tackle the optimization challenges of multi-task learning effectively, we may be able to actually realize the hypothesized benefits of multi-task learning without the cost in final performance.",
"While there has been a significant amount of research in multi-task learning (Caruana, 1997; Ruder, 2017) , the optimization challenges are not well understood.",
"Prior work has described varying learning speeds of different tasks (Chen et al., 2017) and plateaus in the optimization landscape (Schaul et al., 2019) as potential causes, while a range of other works have focused on the model architecture (Misra et al., 2016b; Liu et al., 2018) .",
"In this work, we instead hypothesize that the central optimization issue in multi-task learning arises from gradients from different tasks conflicting with one another.",
"In particular, we define two gradients to be conflicting if they point away from one another (i.e., have a negative cosine similarity).",
"As a concrete example, consider the 2D optimization landscapes of two task objectives shown in Figure 1 .",
"The optimization landscape of each task consists of a deep valley, as has been characterized of neural network optimization landscapes in the past (Goodfellow et al., 2014) .",
"When considering the combined optimization landscape for multiple tasks, SGD produces gradients that struggle to efficiently find the optimum.",
"This occurs due to a gradient thrashing phenomenon, where the gradient of one task destabilizes optimization in the valley.",
"We can observe this in Figure 1 (d) when the optimization reaches the deep valley of task 1, but is prevented from traversing the valley to an optimum.",
"In Section 6.2, we find experimentally that this thrashing phenomenon also occurs in a neural network multi-task learning problem.",
"The core contribution of this work is a method for mitigating gradient interference by altering the gradients directly, i.e. by performing \"gradient surgery\".",
"If two gradients are conflicting, we alter the gradients by projecting each onto the normal plane of the other, preventing the interfering components of the gradient from being applied to the network.",
"We refer to this particular form of gradient surgery as projecting conflicting gradients (PCGrad).",
"PCGrad is model-agnostic, requiring only a single modification to the application of gradients.",
"Hence, it is easy to apply to a range of problem settings, including multi-task supervised learning and multi-task reinforcement learning, and can also be readily combined with other multi-task learning approaches, such as those that modify the architecture.",
"We evaluate PCGrad on multi-task CIFAR classification, multi-objective scene understanding, a challenging multi-task RL domain, and goal-conditioned RL.",
"Across the board, we find PCGrad leads to significant improvements in terms of data efficiency, optimization speed, and final performance compared to prior approaches.",
"Further, on multi-task supervised learning tasks, PCGrad can be successfully combined with prior state-of-the-art methods for multi-task learning for even greater performance.",
"In this work, we identified one of the major challenges in multi-task optimization: conflicting gradients across tasks.",
"We proposed a simple algorithm (PCGrad) to mitigate the challenge of conflicting gradients via \"gradient surgery\".",
"PCGrad provides a simple way to project gradients to be orthogonal in a multi-task setting, which substantially improves optimization performance, since the task gradients are prevented from negating each other.",
"We provide some simple didactic examples and analysis of how this procedure works in simple settings, and subsequently show significant improvement in optimization for a variety of multi-task supervised learning and reinforcement learning problems.",
"We show that, once some of the optimization challenges of multi-task learning are alleviated by PCGrad, we can obtain the hypothesized benefits in efficiency and asymptotic performance that are believed to be possible in multi-task settings.",
"While we studied multi-task supervised learning and multi-task reinforcement learning in this work, we suspect the problem of conflicting gradients to be prevalent in a range of other settings and applications, such as meta-learning, continual learning, multi-goal imitation learning (Codevilla et al., 2018) , and multi-task problems in natural language processing applications (McCann et al., 2018) .",
"Due to its simplicity and model-agnostic nature, we expect that applying PCGrad in these domains to be a promising avenue for future investigation.",
"Further, the general idea of gradient surgery may be an important ingredient for alleviating a broader class of optimization challenges in deep learning, such as the challenges in the stability challenges in two-player games (Roth et al., 2017) and multi-agent optimizations (Nedic & Ozdaglar, 2009 ).",
"We believe this work to be a step towards simple yet general techniques for addressing some of these challenges.",
"Proof.",
"We will use the shorthand || · || to denote the L 2 -norm and ∇L = ∇ θ L, where θ is the parameter vector.",
"Let g 1 = ∇L 1 , g 2 = ∇L 2 , and φ be the angle between g 1 and g 2 .",
"At each PCGrad update, we have two cases: cos(φ) ≥ 0 or cos(φ < 0).",
"If cos(φ) ≥ 0, then we apply the standard gradient descent update using t ≤ 1 L , which leads to a strict decrease in the objective function value L(φ) unless ∇L(φ) = 0, which occurs only when θ = θ * (Boyd & Vandenberghe, 2004 ).",
"In the case that cos(φ) < 0, we proceed as follows:",
"Our assumption that ∇L is Lipschitz continuous with constant L implies that ∇ 2 L(θ) − LI is a negative semidefinite matrix.",
"Using this fact, we can perform a quadratic expansion of L around L(θ) and obtain the following inequality:",
"Now, we can plug in the PCGrad update by letting θ",
"We then get:",
"(Expanding, using the identity",
"(Expanding further and re-arranging terms)",
"(Note that cos(φ) < 0 so the final term is non-negative)",
"Plugging this into the last expression above, we can conclude the following:",
"2 will always be positive unless ∇L(θ) = 0.",
"This inequality implies that the objective function value strictly decreases with each iteration where cos(φ) > −1.",
"Hence repeatedly applying PCGrad process can either reach the optimal value L(θ) = L(θ * ) or cos(φ) = −1, in which case",
"Note that this result only holds when we choose t to be small enough, i.e. t ≤ 1 L ."
] | [
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.2295081913471222,
0.17777776718139648,
0.20408162474632263,
0.09302324801683426,
0.508474588394165,
0.17777776718139648,
0.3921568691730499,
0.22727271914482117,
0.27272728085517883,
0.2857142686843872,
0.17241378128528595,
0.10810810327529907,
0.23529411852359772,
0.23529411852359772,
0.20895521342754364,
0.2800000011920929,
0.11764705181121826,
0.1818181723356247,
0.1538461446762085,
0.17777776718139648,
0.1818181723356247,
0.19230768084526062,
0.1702127605676651,
0.23999999463558197,
0.1538461446762085,
0.1463414579629898,
0.20000000298023224,
0.2666666507720947,
0.1860465109348297,
0.19999998807907104,
0.260869562625885,
0.22727271914482117,
0.2790697515010834,
0.3333333134651184,
0.4000000059604645,
0.27586206793785095,
0.2857142686843872,
0.20408162474632263,
0.20895521342754364,
0.260869562625885,
0.12244897335767746,
0.14999999105930328,
0,
0.11594202369451523,
0.052631575614213943,
0.04255318641662598,
0.17777776718139648,
0.10526315122842789,
0.06666666269302368,
0.06451612710952759,
0.0624999962747097,
0.052631575614213943,
0.052631575614213943,
0,
0.045454539358615875,
0.12244897335767746,
0
] | HJewiCVFPB | true | [
"We develop a simple and general approach for avoiding interference between gradients from different tasks, which improves the performance of multi-task learning in both the supervised and reinforcement learning domains."
] |
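The PCGrad record above is concrete enough to sketch directly: two task gradients "conflict" when their cosine similarity is negative, and each conflicting gradient is projected onto the normal plane of the other before the update is applied. The snippet below is a minimal NumPy rendering of that rule on flattened gradient vectors; it omits details the excerpt does not pin down, such as the random task ordering used in the full algorithm, and the helper name `pcgrad` is ours.

```python
import numpy as np

def pcgrad(grads):
    """Project conflicting gradients (PCGrad) -- a minimal sketch.

    `grads` is a list of flattened per-task gradient vectors.  Whenever
    g_i conflicts with g_j (negative dot product, i.e. negative cosine
    similarity), the component of g_i along g_j is removed by projecting
    g_i onto the normal plane of g_j.  The surgically altered gradients
    are then summed into a single update direction.
    """
    projected = [g.astype(float).copy() for g in grads]
    for i, g_i in enumerate(projected):
        for j, g_j in enumerate(grads):
            dot = float(g_i @ g_j)
            if i != j and dot < 0.0:                  # conflicting gradients
                g_i -= dot / float(g_j @ g_j) * g_j   # remove interfering component
    return sum(projected)

# Two conflicting task gradients (cosine similarity < 0):
g1 = np.array([1.0, 1.0])
g2 = np.array([-1.0, 0.5])
print(pcgrad([g1, g2]))   # altered combined update, here [-0.15, 1.95]
```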
[
"In this paper we propose to view the acceptance rate of the Metropolis-Hastings algorithm as a universal objective for learning to sample from target distribution -- given either as a set of samples or in the form of unnormalized density.",
"This point of view unifies the goals of such approaches as Markov Chain Monte Carlo (MCMC), Generative Adversarial Networks (GANs), variational inference.",
"To reveal the connection we derive the lower bound on the acceptance rate and treat it as the objective for learning explicit and implicit samplers.",
"The form of the lower bound allows for doubly stochastic gradient optimization in case the target distribution factorizes (i.e. over data points).",
"We empirically validate our approach on Bayesian inference for neural networks and generative models for images.",
"Bayesian framework and deep learning have become more and more interrelated during recent years.",
"Recently Bayesian deep neural networks were used for estimating uncertainty BID6 , ensembling BID6 and model compression BID20 .",
"On the other hand, deep neural networks may be used to improve approximate inference in Bayesian models BID13 .Learning",
"modern Bayesian neural networks requires inference in the spaces with dimension up to several million by conditioning the weights of DNN on hundreds of thousands of objects. For such",
"applications, one has to perform the approximate inference -predominantly by either sampling from the posterior with Markov Chain Monte Carlo (MCMC) methods or approximating the posterior with variational inference (VI) methods.MCMC methods provide the unbiased (in the limit) estimate but require careful hyperparameter tuning especially for big datasets and high dimensional problems. The large",
"dataset problem has been addressed for different MCMC algorithms: stochastic gradient Langevin dynamics BID28 , stochastic gradient Hamiltonian Monte Carlo , minibatch MetropolisHastings algorithms BID15 BID1 . One way to",
"address the problem of high dimension is the design of a proposal distribution. For example",
", for the Metropolis-Hastings (MH) algorithm there exists a theoretical guideline for scaling the variance of a Gaussian proposal BID24 BID25 . More complex",
"proposal designs include adaptive updates of the proposal distribution during iterations of the MH algorithm BID12 BID7 . Another way",
"to adapt the MH algorithm for high dimensions is combination of adaptive direction sampling and the multiple-try Metropolis algorithm as proposed in BID17 . Thorough overview",
"of different extensions of the MH algorithm is presented in BID18 .Variational inference",
"is extremely scalable but provides a biased estimate of the target distribution. Using the doubly stochastic",
"procedure BID27 BID11 VI can be applied to extremely large datasets and high dimensional spaces, such as a space of neural network weights BID14 BID5 . The bias introduced by variational",
"approximation can be mitigated by using flexible approximations BID22 and resampling BID9 .Generative Adversarial Networks BID8",
") (GANs) is a different approach to learn samplers. Under the framework of adversarial training",
"different optimization problems could be solved efficiently BID0 BID21 . The shared goal of \"learning to sample\" inspired",
"the connection of GANs with VI BID19 and MCMC BID26 .In this paper, we propose a novel perspective on",
"learning to sample from a target distribution by optimizing parameters of either explicit or implicit probabilistic model. Our objective is inspired by the view on the acceptance",
"rate of the Metropolis-Hastings algorithm as a quality measure of the sampler. We derive a lower bound on the acceptance rate and maximize",
"it with respect to parameters of the sampler, treating the sampler as a proposal distribution in the Metropolis-Hastings scheme.We consider two possible forms of the target distribution: unnormalized density (density-based setting) and a set of samples (sample-based setting). Each of these settings reveals a unifying property of the proposed",
"perspective and the derived lower bound. In the density-based setting, the lower bound is the sum of forward",
"and reverse KL-divergences between the true posterior and its approximation, connecting our approach to VI. In the sample-based setting, the lower bound admit a form of an adversarial",
"game between the sampler and a discriminator, connecting our approach to GANs.The closest work to ours is of BID26 . In contrast to their paper our approach (1) is free from hyperparameters; (",
"2) is able to optimize the acceptance rate directly; (3) avoids minimax problem in the density based setting.Our main contributions are as follows:1. We introduce a novel perspective on learning to sample from the target distribution",
"by treating the acceptance rate in the Metropolis-Hastings algorithm as a measure of sampler quality. 2. We derive the lower bound on the acceptance rate allowing for doubly stochastic",
"optimization of the proposal distribution in case when the target distribution factorizes (i.e. over data points). 3. For sample-based and density-based forms of target distribution we show the connection",
"of the proposed algorithm to variational inference and GANs.The rest of the paper is organized as follows. In Section 2 we introduce the lower bound on the AR. Special forms of target distribution",
"are addressed in Section 3. We validate our approach",
"on the problems of approximate Bayesian inference in the space",
"of high dimensional neural network weights and generative modeling in the space of images in Section 4. We discuss results and directions of the future work in Section 5.",
"This paper proposes to use the acceptance rate of the MH algorithm as the universal objective for learning to sample from some target distribution.",
"We also propose the lower bound on the acceptance rate that should be preferred over the direct maximization of the acceptance rate in many cases.",
"The proposed approach provides many ways of improvement by the combination with techniques from the recent developments in the field of MCMC, GANs, variational inference.",
"For example• The proposed loss function can be combined with the loss function from BID16 , thus allowing to learn the Markov chain proposal in the density-based setting.•",
"We can use stochastic Hamiltonian Monte Carlo for the loss estimation in Algorithm 1. •",
"In sample-based setting one can use more advanced techniques of density ratio estimation.Application of the MH algorithm to improve the quality of generative models also requires exhaustive further exploration and rigorous treatment."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.35555556416511536,
0.12121211737394333,
0.24242423474788666,
0.1764705777168274,
0,
0,
0,
0.19354838132858276,
0.15789473056793213,
0.06896551698446274,
0.052631575614213943,
0.1599999964237213,
0.25,
0.2142857164144516,
0.22857142984867096,
0.25,
0.14814814925193787,
0.0952380895614624,
0,
0.2222222238779068,
0.13793103396892548,
0.12903225421905518,
0.2702702581882477,
0.46666666865348816,
0.15686273574829102,
0.23999999463558197,
0.21621620655059814,
0.14999999105930328,
0.21739129722118378,
0.3888888955116272,
0.1111111044883728,
0.24390242993831635,
0,
0.1904761791229248,
0.11764705181121826,
0.42424240708351135,
0.3125,
0.11764705181121826,
0.10810810327529907,
0.07407406717538834,
0.1904761791229248
] | Hkg313AcFX | true | [
"Learning to sample via lower bounding the acceptance rate of the Metropolis-Hastings algorithm"
] |
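The record above proposes the Metropolis-Hastings acceptance rate as a universal measure of sampler quality and optimizes a lower bound on it. The exact bound is not given in the excerpt, so the sketch below only shows the underlying objective: a Monte-Carlo estimate of the average acceptance probability of an independent proposal q against an unnormalized target p, which equals 1 exactly when q matches p. All names (`mh_acceptance_rate`, `sample_q`, `log_q`) are illustrative.

```python
import numpy as np

def mh_acceptance_rate(log_p, sample_q, log_q, n=10000, rng=None):
    """Monte-Carlo estimate of the Metropolis-Hastings acceptance rate
    for an independent proposal q and unnormalised target density p.

    For a current sample x and a proposed sample x', both drawn from q,
    the MH acceptance probability is min(1, p(x')q(x) / (p(x)q(x'))).
    The average of this quantity is the sampler-quality measure treated
    as an objective above; the paper's lower bound is not reproduced here.
    """
    rng = rng or np.random.default_rng()
    x, x_new = sample_q(n, rng), sample_q(n, rng)
    log_ratio = (log_p(x_new) + log_q(x)) - (log_p(x) + log_q(x_new))
    return np.minimum(1.0, np.exp(log_ratio)).mean()

# Toy example: standard normal target, Gaussian proposals of two widths.
log_p = lambda x: -0.5 * x**2
for sigma in (1.0, 3.0):
    rate = mh_acceptance_rate(
        log_p,
        sample_q=lambda n, rng, s=sigma: s * rng.standard_normal(n),
        log_q=lambda x, s=sigma: -0.5 * (x / s) ** 2 - np.log(s),
    )
    print(sigma, round(float(rate), 2))  # the matched proposal has rate 1.0
```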
[
"This paper proposes a self-supervised learning approach for video features that results in significantly improved performance on downstream tasks (such as video classification, captioning and segmentation) compared to existing methods.",
"Our method extends the BERT model for text sequences to the case of sequences of real-valued feature vectors, by replacing the softmax loss with noise contrastive estimation (NCE).",
"We also show how to learn representations from sequences of visual features and sequences of words derived from ASR (automatic speech recognition), and show that such cross-modal training (when possible) helps even more.",
"Recently there has been a lot of progress in self-supervised representation learning for textual sequences, followed by supervised fine-tuning (using small labeled datasets) of shallow (often linear) decoders on various downstream NLP tasks, such as sentiment classification.",
"In this paper, we build on this work and propose a new method for self-supervised representation learning for videos, optionally accompanied by speech transcripts generated by automatic speech recognition (ASR).",
"We show that fine-tuning linear decoders together with our self-supervised video representations, can achieve state of the art results on various supervised tasks, including video classification, segmentation and captioning.",
"Our approach builds on the popular BERT (Bidirectional Encoder Representations from Transformers) model (Devlin et al., 2018) for text.",
"This uses the Transformer architecture (Vaswani et al., 2017) to encode long sentences, and trains the model using the \"masked language modeling\" (MLM) training objective, in which the model must predict the missing words given their bidirectional context.",
"The MLM loss requires that each token in the sequence be discrete.",
"The VideoBERT model of (Sun et al., 2019a) therefore applied vector quantization (VQ) to video frames before passing them (along with optional ASR tokens) to the BERT model.",
"Unfortunately, VQ loses fine-grained information that is often critical for downstream tasks.",
"More recently, several papers (e.g., VilBERT and LXMERT (Tan & Bansal, 2019) ) proposed to address this limitation by directly measuring the visual similarity between frames using pre-trained visual encoders.",
"In this paper, we propose a way to train bidirectional transformer models on sequences of realvalued vectors (e.g., video frames), x 1:T , using noise contrastive estimation (NCE), without needing pre-trained visual encoders.",
"We call our method \"Contastive Bidirectional Transformer\" or CBT.",
"We also develop a method that combines x 1:T with an optional sequence of discrete tokens, y 1:T (e.g., derived from ASR).",
"In contrast to the VideoBERT paper (Sun et al., 2019a) , we provide a \"lightweight\" way of combining these signals after training each modality separately.",
"In particular, we propose a cross-modal transformer to maximize the mutual information between x 1:T and y 1:T at the sequence level (rather than at the frame level).",
"This method is robust to small misalignments between the sequences (e.g., if the words at time t do not exactly correspond to what is visually present in frame t).",
"We demonstrate the effectiveness of the proposed approach for learning short-term visual representations, as well as longer term temporal representations.",
"For visual representations, we encode each window of K frames using a 3D convolutional neural network S3D (Xie et al., 2018) , and then pass this sequence of features to the CBT model for self-supervised pretraining with the NCE loss on the",
"We have shown how to extend the BERT model to learn representations from video in a self-supervised way, without needing vector quantization or pre-trained visual features.",
"We have also shown how to extend this to the cross-modal setting, when ASR is available.",
"Finally, we demonstrated that our method learns features that are far more useful than existing self-supervised methods for a variety of downstream video tasks, such as classification, captioning and segmentation.",
"We believe that the simplicity and modularity of our method will let us scale to much larger unlabeled video datasets, which we hope will let us finally surpass supervised video pretraining (e.g., on Kinetics), just as other methods (e.g., CPC++ (Hénaff et al., 2019) ) have recently surpassed supervised image pretraining (on ImageNet)."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0
] | [
0.20000000298023224,
0.11428570747375488,
0.1538461446762085,
0.08510638028383255,
0.1621621549129486,
0.1538461446762085,
0.12903225421905518,
0.04444444179534912,
0,
0.10526315122842789,
0.08695651590824127,
0.0476190447807312,
0.04444444179534912,
0,
0,
0,
0.11428570747375488,
0,
0.13793103396892548,
0.11999999731779099,
0.2222222238779068,
0.07692307233810425,
0.20000000298023224,
0.06779661029577255
] | rJgRMkrtDr | true | [
"Generalized BERT for continuous and cross-modal inputs; state-of-the-art self-supervised video representations."
] |
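The CBT record above replaces BERT's softmax over discrete tokens with a noise-contrastive loss over real-valued feature vectors at masked positions. The excerpt does not spell out the scoring function, so the snippet below is a generic InfoNCE-style stand-in: the model's output at a masked position is scored by dot product against the true feature and a set of negatives, and the loss is the negative log-probability of the true feature under a softmax over those scores. The names `masked_nce_loss`, `pred`, and `negatives` are placeholders, not the paper's API.

```python
import numpy as np

def masked_nce_loss(pred, target, negatives):
    """Contrastive loss for predicting a masked real-valued feature.

    `pred` is the model's output at the masked position, `target` the true
    feature vector there, and `negatives` a matrix of distractor features
    (e.g. taken from other positions or other videos).  The loss is the
    negative log-probability of the true feature under a softmax over
    dot-product scores -- a stand-in for BERT's softmax over a discrete
    vocabulary.  Generic sketch; CBT's exact scoring head may differ.
    """
    candidates = np.vstack([target[None, :], negatives])   # true item first
    scores = candidates @ pred                              # dot-product similarity
    scores -= scores.max()                                  # numerical stability
    log_probs = scores - np.log(np.exp(scores).sum())
    return -log_probs[0]

rng = np.random.default_rng(2)
target = rng.standard_normal(64)
negatives = rng.standard_normal((15, 64))
print(masked_nce_loss(pred=target + 0.1 * rng.standard_normal(64),
                      target=target, negatives=negatives))   # small loss
print(masked_nce_loss(pred=rng.standard_normal(64),
                      target=target, negatives=negatives))   # ~log(16) for a random guess
```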
[
"We present a generic dynamic architecture that employs a problem specific differentiable forking mechanism to leverage discrete logical information about the problem data structure.",
"We adapt and apply our model to CLEVR Visual Question Answering, giving rise to the DDRprog architecture; compared to previous approaches, our model achieves higher accuracy in half as many epochs with five times fewer learnable parameters.",
"Our model directly models underlying question logic using a recurrent controller that jointly predicts and executes functional neural modules; it explicitly forks subprocesses to handle logical branching.",
"While FiLM and other competitive models are static architectures with less supervision, we argue that inclusion of program labels enables learning of higher level logical operations -- our architecture achieves particularly high performance on questions requiring counting and integer comparison. We further demonstrate the generality of our approach though DDRstack -- an application of our method to reverse Polish notation expression evaluation in which the inclusion of a stack assumption allows our approach to generalize to long expressions, significantly outperforming an LSTM with ten times as many learnable parameters.",
"Deep learning is inherently data driven -visual question answering, scene recognition, language modeling, speech recognition, translation, and other supervised tasks can be expressed as: given input x, predict output y.",
"The field has attempted to model different underlying data structures with neural architectures, but core convolutional and recurrent building blocks were designed with only general notions of spatial and temporal locality.",
"In some cases, additional information about the problem can be expressed simply as an additional loss, but when hard logical assumptions are present, it is nonobvious how to do so in a manner compatible with backpropagation.Discrete logic is a fundamental component of human visual reasoning, but there is no dominant approach to incorporating such structural information into deep learning models.",
"For particular data structures and settings, there has been some success.",
"However, in prior work additional structure must either be learned implicitly and without additional annotations or is available at both train and test time.",
"For example, StackRNN BID7 ) allows recurrent architectures to push and pop from a stack.",
"While this approach works well without explicit stack trace supervision, implicit learning only goes so far: the hardest task it was tested on is binary addition.",
"Approaches such as recursive NN BID13 ) and TreeRNN BID15 ) allow inclusion of explicit tree structures available during both training and testing, but neither can be used when additional supervision is available only at training time.",
"We consider this the most general problem because it is not feasible to obtain good results without any additional supervision if the problem is sufficiently difficult.Our objective is to develop a general framework for differentiable, discrete reasoning over data structures, including as stacks and trees.",
"Our approach is flexible to differing degrees of supervision and demonstrates improved results when structural assumptions are available at test time.",
"We are less concerned with the no-supervision case because of limitations in scalability, as demonstrated by the scope of StackRNN.We present our framework in the context of two broader architectures: Neural Module Networks (NMN, BID0 ) and Neural Programmer-Interpreters (NPI, BID11 ).The",
"original NMN allows per-example dynamic architectures assembled from a set of smaller models; it was concurrently adapted in N2NMN BID4 ) and IEP as the basis of the first visual question answering (VQA) architectures successful on CLEVR BID5 ). The",
"NPI work allows networks to execute programs by directly maximizing the probability of a successful execution trace. In",
"the present work, we present two applications of our framework, which is a superset of both approaches. The",
"first is our CLEVR architecture, which introduces two novel behaviors. It",
"interleaves program prediction and program execution by using the output of each module to predict the next module; this is an important addition because it improves the differentiability of the model. For",
"IEP/N2NMN, the discrete program in the middle of the model breaks the gradient flow. For",
"our model, although the selection of modules is still a discrete non-differentiable choice, it is influenced by the loss gradient: the visual state gives a gradient pathway learnable through the question answer loss. The",
"second contribution of this architecture is a novel differentiable forking mechanism that enables our network to process logical tree structures through interaction with a stack of saved states. This",
"allows our model to perform a broad range of logical operations; DDRstack is the first architecture to obtain consistently strong performance across all CLEVR subtasks.We briefly discuss our rationale for evaluation on CLEVR as well as prior work on the task. Though",
"CLEVR is too easy with or without program supervision, it is the best-available proxy task for high-level reasoning. Its scale",
", diverse logical subtask categories, and program annotations make the dataset the best current option for designing discrete visual reasoning systems. By effectively",
"leveraging the additional program annotations, we improve over the previous state-of-the-art with a much smaller model -on the important Count and Compare Integer subtasks, we improve from 94.5 to 96.5 percent and 93.8 to 98.4 percent, respectively. However, our objective",
"is neither the last couple percentage points of accuracy on this task nor to decrease supervision, but to motivate more complex tasks over knowledge graphs. We expect that it is possible",
"to improve accuracy on CLEVR with a static architecture using less supervision. This is largely unrelated to",
"the objective of our work -we view CLEVR as a good first step towards increased supervision for the learning of complex logic. Human-level general visual reasoning",
"from scratch is less reasonable than from expressively annotated data: we consider improving and generalizing the ability of architectures to better leverage additional supervision to be the most likely means to this end.Prior work on CLEVR is largely categorized by dynamic and static approaches. IEP BID6 ) and N2NMN both generalized",
"the original neural module networks architecture and used the functional annotations in CLEVR to predict a static program which is then assembled into a tree of discrete modules and executed. IEP further demonstrated success when",
"program annotations are available for only a few percent of questions. These are most similar to our approach",
"; we focus largely upon comparison to IEP, which performs significantly better. RN BID12 ) and FiLM BID10 ), the latter",
"being the direct successor of CBN BID9 ) are both static architectures which incorporate some form of implicit reasoning module in order to achieve high performance without program annotations. In contrast, our architecture uses program",
"annotations to explicitly model the underlying question structure and jointly executes the corresponding functional representation. As a result, our architecture performs comparably",
"on questions requiring only a sequence of filtering operations, but it performs significantly better on questions requiring higher level operations such as counting and numerical comparison.We present DDRstack as a second application of our framework and introduce a reverse Polish notation (RPN) expression evaluation task. The task is solvable by leveraging the stack structure",
"of expression evaluation, but extremely difficult without additional supervision: a much larger LSTM baseline fails to attain any generalization on the task. We therefore use RPN as additional motivation for our",
"framework, which introduces a simple mechanism for differentiably incorporating the relevant stack structure. Despite major quantitative differences from CLEVR VQA",
", the RPN task is structurally similar. In the former, questions seen at training time contain",
"direct programmatic representations well modeled by a set of discrete logical operations and a stack requiring at most one recursive call. The latter is an extreme case with deep recursion requiring",
"a full stack representation, but this stack structure is also available at test time.In summary: the DDR framework combines the discrete modular behavior of NMN and IEP with an NPI inspired forking behavior to leverage structural information about the input data. Our approach resolves common differentibility issues and is",
"easily adapted to the specific of each problem: we achieve a moderate improvement over previous state-of-the-art on CLEVR and succeed on RPN where a much larger baseline LSTM fails to attain generalization.",
"Several models have far exceeded human accuracy on CLEVR -the task remains important for two reasons.",
"First, though CLEVR is large and yields consistent performance across runs, different models exhibit significantly different performance across question types.",
"Where every competitive previous work exhibits curiously poor performance on at least one important subtask, our architecture dramatically increases consistency across all tasks.",
"Second, CLEVR remains the best proxy task for high-level visual reasoning because of its discrete program annotations -this is far more relevant than raw accuracy to our work, which is largely concerned with the creation of a general reasoning framework.",
"However, we do achieve a modest improvement in raw accuracy over the previous state-of-the-art with a >5X smaller architecture.We presently consider RN, FiLM, IEP, and our architecture as competitive models.",
"From Table 1 , no architecture has particular difficulty with Exist, Query, or Compare questions; the main differentiating factors are Count and Compare Integer.",
"Though Compare Integer is the smallest class of questions and is therefore assigned less importance by the cross entropy loss, the IEP result suggests that this does not cause models to ignore this question type.",
"We therefore consider Count and Compare Integer to be the hardest unary and binary tasks, respectively, and we assign most important to these question subsets in our analysis.",
"We achieve strong performance on both subtasks and a significant increment over previous state-of-the-art on the Count subtask.We first compare to IEP.",
"Our model is 4x smaller than IEP (see Table 1 ) and resolves IEP's poor performance on the challenging Count subtask.",
"Overall, DDRprog performs at least 2x better across all unary tasks (+1.7 percent on Exist, +3.8 percent on Count, + 1.0 percent on Query) it closely matches binary performance (+0.2 percent on Compare, -0.3 percent on Compare Integer).",
"We believe that our model's lack of similar gains on binary task performance can be attributed to the use of a singular fork module, which is responsible for cross-communication during prediction of both branches of a binary program tree, shared across all binary modules.",
"We have observed that this module is essential to obtaining competitive performance on binary tasks; it is likely suboptimal to use a large shared fork module as opposed to a separate smaller cell for each binary cell.Our model surpasses RN in all categories of reasoning, achieving a 2.6x reduction in overall error.",
"RN achieves impressive results for its size and lack of program labels.",
"However, it is questionable whether the all-to-all comparison model will generalize to more logically complex questions.",
"In particular, Count operations do not have a natural formulation as a comparison between pairs of objects, in which case our model achieves a significant 6.4 percent improvement.",
"RN also struggles on the challenging Compare Integer subtask, where we achieve a 4.8 percent improvement.",
"Furthermore, it is unclear how essential high epoch counts are to the model's performance.",
"As detailed in Table 1 , RN was trained in a distributed setting for 1000 epochs.",
"Both our result and FiLM were obtained on single graphics cards and were only limited in number of epochs for practicality -FiLM had not fully converged, and our model was unregularized.Both IEP and our model achieve a roughly 4x improvement over FiLM on Compare Integer questions (4.9 and 4.6 percent, respectively), the difference being that our model eliminates the Count deficiency and is also 4X smaller than IEP.",
"The contrast between FiLM's Compare Integer and Exist/Query/Compare performance suggests a logical deficiency in the model -we believe it is difficult to model the more complex binary question structures using only implicit branching through batch normalization parameters.",
"FiLM does achieve strong Compare Attribute performance, but many such questions can be more easily resolved through a sequence of purely visual manipulations.",
"FiLM achieves 1.5x relative improvement over our architecture on Exist questions, but this is offset by our 1.5x relative improvement on Count questions.Given proximity in overall performance, FiLM could be seen as the main competitor to our model.",
"However, they achieve entirely different aims: DDRprog is an application of a general framework, >5X smaller, and achieves stable performance over all subtasks.",
"FiLM is larger and suffers from a significant deficiency on the Compare Integer subtask, but it uses less supervision.",
"As mentioned in the introduction, our model is part of a general framework that expands the ability of neural architectures to leverage discrete logical and structural information about the given problem.",
"In contrast, FiLM is a single architecture that is likely more directly applicable to low-supervision natural image tasks.",
"We argue that the LSTM fails on the RPN task.",
"This is not immediately obvious: from FIG2 , both the small and large LSTM baselines approximately match our model's performance on the first 5 subproblems of the n = 10 dataset.",
"From n = 6 to n = 10, the performance gap grows between our models -the small LSTM is unable to learn deep stack behavior, and performance decays sharply.The n = 30 dataset reveals the failure.",
"The LSTM's performance is far worse on the first few subproblems of this dataset than on the test set of the original task.",
"This is not an error: recall the question formatting [NUM]*(n + 1)-[OP]*n.",
"The leading subproblems do not correspond to the leading tokens of the question, but rather to a central crop.",
"For example, the first two subproblems of \"12345+-*/\" are given by \"345+-\", not \"12345\" -the latter is not a valid expression.",
"The rapid increase in error on the LSTM implies that it did not learn this property, let alone the stack structure.",
"Instead, it memorized all possible subproblems of length n ∈ {1, 2, 3} expressions preceding the first few OP tokens.",
"Performance quickly decays to L1 error greater than 2.0, which corresponds to mostly noise (the standard deviation of answers minus the first few subproblems is approximately 6.0).",
"In contrast, our model's explicit incorporation of the stack assumption results in a smooth generalization curve with a gradual decay in performance as problem length increases.We briefly address a few likely concerns with our reasoning.",
"First, one might argue that DDRstack cannot be compared to an LSTM, as the latter does not incorporate explicit knowledge of the problem structure.",
"While this evaluation is correct, it is antithetical to the purpose of our architecture.",
"The LSTM baseline does not incorporate this additional information because there is no obvious way to include it -the prevailing approach would be to ignore it and then argue the model's superiority on the basis that it performs well with less supervision.",
"This logic might suggest implicit reasoning approaches such as StackRNN, which attempt to model the underlying data structure without direct supervision.",
"However, we do not expect such approaches to scale to RPN: the hardest task on which StackRNN was evaluated is binary addition.",
"While StackRNN exhibited significantly better generalization compared to the LSTM baseline, the latter did not completely fail the task.",
"In contrast, RPN is a more complex task that completely breaks the baseline LSTM.",
"While we did not evaluate Stack-RNN on RPN (the original implementation is not compatible modern frameworks), we consider it highly improbably that StackRNN would generalize to RPN, which was intentionally designed to be difficult without additional supervision.",
"In contrast, our dynamic approach achieves a dramatic increase in performance and generalization precisely by efficiently incorporating additional supervision.",
"StackRNN is to DDRstack as FiLM is to DDRprog: one motive is to maximize performance with minimal supervision whereas our motive is to leverage structural data to solve harder tasks.",
"The DDR framework facilitates high level reasoning in neural architectures by enabling networks to leverage additional structural information.",
"Our approach resolves differentiability issues common in interfering with discrete logical data and is easily adapted to the specific of each problem.",
"Our work represents a clean synthesis of the modeling capabilities of IEP/NMN and NPI through a differentiable forking mechanism.",
"We have demonstrated efficacy through two applications of our framework.",
"DDRprog achieves a moderate improvement over previous state-of-the-art on CLEVR with greatly increased consistency and reduced model size.",
"DDRstack succeeds on RPN where a much larger baseline LSTM fails to attain generalization.",
"It is our intent to continue refining the versatility of our architecture, including more accurate modeling of the fork module, as mentioned in our CLEVR VQA discussion.",
"Our architecture and its design principles enable modeling of complex data structure assumptions across a wide class of problems where standard monolithic approaches would ignore such useful properties.",
"We hope that this increase in interoperability between discrete data structures and deep learning architectures aids in motivating higher level tasks for the continued progression and development of neural reasoning.",
"Table 2 : Architectural details of subnetworks in DDRprog as referenced in FIG0 and Algorithm 1.",
"Finegrained layer details are provided in tables 4-8.",
"Source will be released pending publication."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.6086956262588501,
0.10526315122842789,
0.15686273574829102,
0.14432989060878754,
0.07547169178724289,
0.11320754140615463,
0.12820512056350708,
0.11428570747375488,
0.08695651590824127,
0.1538461446762085,
0,
0.035087715834379196,
0.15625,
0.13333332538604736,
0.03333332762122154,
0.13333332538604736,
0.0952380895614624,
0.04999999701976776,
0.05714285373687744,
0.07843136787414551,
0,
0.038461532443761826,
0.2745097875595093,
0.16393442451953888,
0.04651162400841713,
0.04347825422883034,
0.09836065024137497,
0.07547169178724289,
0.19999998807907104,
0.0833333283662796,
0.11594202369451523,
0.178571417927742,
0.09756097197532654,
0.08888888359069824,
0.07017543166875839,
0.2222222238779068,
0.1428571343421936,
0.1111111044883728,
0.22727271914482117,
0,
0.07547169178724289,
0.1764705777168274,
0.18867923319339752,
0.04999999701976776,
0.09756097197532654,
0.04255318641662598,
0.09999999403953552,
0.11320754140615463,
0.08510638028383255,
0.1090909019112587,
0.08163265138864517,
0.13333332538604736,
0.04444443807005882,
0,
0.09677419066429138,
0.08695651590824127,
0.0555555522441864,
0.04999999701976776,
0.039215680211782455,
0.04878048226237297,
0.052631575614213943,
0.05128204822540283,
0.07594936341047287,
0.10169491171836853,
0.04255318641662598,
0.07017543166875839,
0.08510638028383255,
0.09302324801683426,
0.19230768084526062,
0.19512194395065308,
0.060606054961681366,
0.037735845893621445,
0.07407406717538834,
0,
0,
0.09999999403953552,
0.09090908616781235,
0.09090908616781235,
0,
0.038461532443761826,
0.072727270424366,
0.1702127605676651,
0.1621621549129486,
0.09677419066429138,
0.13333332538604736,
0.04444443807005882,
0.04878048226237297,
0.10526315122842789,
0.06896550953388214,
0.1395348757505417,
0.08695651590824127,
0.0476190410554409,
0.21739129722118378,
0.24390242993831635,
0,
0.1428571343421936,
0.10526315122842789,
0.12765957415103912,
0.23529411852359772,
0.11538460850715637,
0.05128204822540283,
0,
0
] | HypkN9yRW | true | [
"A generic dynamic architecture that employs a problem specific differentiable forking mechanism to encode hard data structure assumptions. Applied to CLEVR VQA and expression evaluation."
] |
[
"We propose Support-guided Adversarial Imitation Learning (SAIL), a generic imitation learning framework that unifies support estimation of the expert policy with the family of Adversarial Imitation Learning (AIL) algorithms.",
"SAIL addresses two important challenges of AIL, including the implicit reward bias and potential training instability.",
"We also show that SAIL is at least as efficient as standard AIL.",
"In an extensive evaluation, we demonstrate that the proposed method effectively handles the reward bias and achieves better performance and training stability than other baseline methods on a wide range of benchmark control tasks.",
"The class of Adversarial Imitation Learning (AIL) algorithms learns robust policies that imitate an expert's actions from a small number of expert trajectories, without further access to the expert or environment signals.",
"AIL iterates between refining a reward via adversarial training, and reinforcement learning (RL) with the learned adversarial reward.",
"For instance, Generative Adversarial Imitation Learning (GAIL) (Ho & Ermon, 2016) shows the equivalence between some settings of inverse reinforcement learning and Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) , and recasts imitation learning as distribution matching between the expert and the RL agent.",
"Similarly, Adversarial Inverse Reinforcement Learning (AIRL) (Fu et al., 2017) modifies the GAIL discriminator to learn a reward function robust to changes in dynamics or environment properties.",
"AIL mitigates the issue of distributional drift from behavioral cloning (Ross et al., 2011) , a classical imitation learning algorithm, and demonstrates good performance with only a small number of expert demonstrations.",
"However, AIL has several important challenges, including implicit reward bias (Kostrikov et al., 2019) , potential training instability (Salimans et al., 2016; Brock et al., 2018) , and potential sample inefficiency with respect to environment interaction (Sasaki et al., 2019) .",
"In this paper, we propose a principled approach towards addressing these issues.",
"Wang et al. (2019) demonstrated that imitation learning is also feasible by constructing a fixed reward function via estimating the support of the expert policy.",
"Since support estimation only requires expert demonstrations, the method sidesteps the training instability associated with adversarial training.",
"However, we show in Section 4.2 that the reward learned via support estimation deteriorates when expert data is sparse, and leads to poor policy performances.",
"Support estimation and adversarial reward represent two different yet complementary RL signals for imitation learning, both learnable from expert demonstrations.",
"We unify both signals into Supportguided Adversarial Imitation Learning (SAIL), a generic imitation learning framework.",
"SAIL leverages the adversarial reward to guide policy exploration and constrains the policy search to the estimated support of the expert policy.",
"It is compatible with existing AIL algorithms, such as GAIL and AIRL.",
"We also show that SAIL is at least as efficient as standard AIL.",
"In an extensive evaluation, we demonstrate that SAIL mitigates the implicit reward bias and achieves better performance and training stability against baseline methods over a series of benchmark control tasks.",
"In this paper, we propose Support-guided Adversarial Imitation Learning by combining support guidance with adversarial imitation learning.",
"Our approach is complementary to existing adversarial imitation learning algorithms, and addresses several challenges associated with them.",
"More broadly, our results show that expert demonstrations contain rich sources of information for imitation learning.",
"Effectively combining different sources of reinforcement learning signals from the expert demonstrations produces more efficient and stable algorithms by constraining the policy search space; and appears to be a promising direction for future research.",
"10413.1 ± 47.0 RED and SAIL use RND Burda et al. (2018) for support estimation.",
"We use the default networks from RED 4 .",
"We set σ following the heuristic in Wang et al. (2019) that (s, a) from the expert trajectories mostly have reward close to 1."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.6808510422706604,
0.1538461446762085,
0.05714285373687744,
0.145454540848732,
0.30188679695129395,
0.25641024112701416,
0.2666666507720947,
0.19999998807907104,
0.2641509473323822,
0.07407406717538834,
0.05714285373687744,
0.25531914830207825,
0.21052631735801697,
0.16326530277729034,
0.1395348757505417,
0.5263158082962036,
0.20512819290161133,
0.11428570747375488,
0.05714285373687744,
0.1538461446762085,
0.4000000059604645,
0.19999998807907104,
0.1538461446762085,
0.290909081697464,
0.14999999105930328,
0.12903225421905518,
0.08695651590824127
] | r1x3unVKPS | true | [
"We unify support estimation with the family of Adversarial Imitation Learning algorithms into Support-guided Adversarial Imitation Learning, a more robust and stable imitation learning framework."
] |
[
"We consider the task of few shot link prediction, where the goal is to predict missing edges across multiple graphs using only a small sample of known edges.",
"We show that current link prediction methods are generally ill-equipped to handle this task---as they cannot effectively transfer knowledge between graphs in a multi-graph setting and are unable to effectively learn from very sparse data.",
"To address this challenge, we introduce a new gradient-based meta learning framework, Meta-Graph, that leverages higher-order gradients along with a learned graph signature function that conditionally generates a graph neural network initialization.",
"Using a novel set of few shot link prediction benchmarks, we show that Meta-Graph enables not only fast adaptation but also better final convergence and can effectively learn using only a small sample of true edges.",
"Given a graph representing known relationships between a set of nodes, the goal of link prediction is to learn from the graph and infer novel or previously unknown relationships (Liben-Nowell & Kleinberg, 2003) .",
"For instance, in a social network we may use link prediction to power a friendship recommendation system (Aiello et al., 2012) , or in the case of biological network data we might use link prediction to infer possible relationships between drugs, proteins, and diseases (Zitnik & Leskovec, 2017) .",
"However, despite its popularity, previous work on link prediction generally focuses only on one particular problem setting: it generally assumes that link prediction is to be performed on a single large graph and that this graph is relatively complete, i.e., that at least 50% of the true edges are observed during training (e.g., see Grover & Leskovec, 2016; Kipf & Welling, 2016b; Liben-Nowell & Kleinberg, 2003; Lü & Zhou, 2011) .",
"In this work, we consider the more challenging setting of few shot link prediction, where the goal is to perform link prediction on multiple graphs that contain only a small fraction of their true, underlying edges.",
"This task is inspired by applications where we have access to multiple graphs from a single domain but where each of these individual graphs contains only a small fraction of the true, underlying edges.",
"For example, in the biological setting, high-throughput interactomics offers the possibility to estimate thousands of biological interaction networks from different tissues, cell types, and organisms (Barrios-Rodiles et al., 2005) ; however, these estimated relationships can be noisy and sparse, and we need learning algorithms that can leverage information across these multiple graphs in order to overcome this sparsity.",
"Similarly, in the e-commerce and social network settings, link prediction can often have a large impact in cases where we must quickly make predictions on sparsely-estimated graphs, such as when a service has been recently deployed to a new locale.",
"That is to say to link prediction for a new sparse graph can benefit from transferring knowledge from other, possibly more dense, graphs assuming there is exploitable shared structure.",
"We term this problem of link prediction from sparsely-estimated multi-graph data as few shot link prediction analogous to the popular few shot classification setting (Miller et al., 2000; Lake et al., 2011; Koch et al., 2015) .",
"The goal of few shot link prediction is to observe many examples of graphs from a particular domain and leverage this experience to enable fast adaptation and higher accuracy when predicting edges on a new, sparsely-estimated graph from the same domain-a task that can can also be viewed as a form of meta learning, or learning to learn (Bengio et al., 1990; 1992; Thrun & Pratt, 2012; Schmidhuber, 1987) in the context of link prediction.",
"This few shot link prediction setting is particularly challenging as current link prediction methods are generally ill-equipped to transfer knowledge between graphs in a multi-graph setting and are also unable to effectively learn from very sparse data.",
"Present work.",
"We introduce a new framework called Meta-Graph for few shot link prediction and also introduce a series of benchmarks for this task.",
"We adapt the classical gradient-based metalearning formulation for few shot classification (Miller et al., 2000; Lake et al., 2011; Koch et al., 2015) to the graph domain.",
"Specifically, we consider a distribution over graphs as the distribution over tasks from which a global set of parameters are learnt, and we deploy this strategy to train graph neural networks (GNNs) that are capable of few-shot link prediction.",
"To further bootstrap fast adaptation to new graphs we also introduce a graph signature function, which learns how to map the structure of an input graph to an effective initialization point for a GNN link prediction model.",
"We experimentally validate our approach on three link prediction benchmarks.",
"We find that our MetaGraph approach not only achieves fast adaptation but also converges to a better overall solution in many experimental settings, with an average improvement of 5.3% in AUC at convergence over non-meta learning baselines.",
"We introduce the problem of few-shot link prediction-where the goal is to learn from multiple graph datasets to perform link prediction using small samples of graph data-and we develop the Meta-Graph framework to address this task.",
"Our framework adapts gradient-based meta learning to optimize a shared parameter initialization for local link prediction models, while also learning a parametric encoding, or signature, of each graph, which can be used to modulate this parameter initialization in a graph-specific way.",
"Empirically, we observed substantial gains using Meta-Graph compared to strong baselines on three distinct few-shot link prediction benchmarks.",
"In terms of limitations and directions for future work, one key limitation is that our graph signature function is limited to modulating the local link prediction model through an encoding of the current graph, which does not explicitly capture the pairwise similarity between graphs in the dataset.",
"Extending Meta-Graph by learning a similarity metric or kernel between graphs-which could then be used to condition meta-learning-is a natural direction for future work.",
"Another interesting direction for future work is extending the Meta-Graph approach to multi-relational data, and exploiting similarities between relation types through a suitable Graph Signature function."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1818181723356247,
0.19607841968536377,
0.21276594698429108,
0.07692307233810425,
0.21276594698429108,
0.1355932205915451,
0.125,
0.11538460850715637,
0.16326530277729034,
0.08571428060531616,
0.178571417927742,
0.17777776718139648,
0.1249999925494194,
0.1463414579629898,
0.15686273574829102,
0.2631579041481018,
0.2380952388048172,
0.19230768084526062,
0.31372547149658203,
0.06896550953388214,
0.1071428507566452,
0.2083333283662796,
0.07407406717538834,
0.05405404791235924,
0.16393442451953888,
0.0952380895614624,
0.2222222238779068
] | BJepcaEtwB | true | [
"We apply gradient based meta-learning to the graph domain and introduce a new graph specific transfer function to further bootstrap the process."
] |
[
"Generative neural networks map a standard, possibly distribution to a complex high-dimensional distribution, which represents the real world data set.",
"However, a determinate input distribution as well as a specific architecture of neural networks may impose limitations on capturing the diversity in the high dimensional target space.",
"To resolve this difficulty, we propose a training framework that greedily produce a series of generative adversarial networks that incrementally capture the diversity of the target space.",
"We show theoretically and empirically that our training algorithm converges to the theoretically optimal distribution, the projection of the real distribution onto the convex hull of the network's distribution space.",
"Generative Adversarial Nets (GAN) BID5 is a framework of estimating generative models.",
"The main idea BID4 is to train two target network models simultaneously, in which one, called the generator, aims to generate samples that resemble those from the data distribution, while the other, called the discriminator, aims to distinguish the samples by the generator from the real data.",
"Naturally, this type of training framework admits a nice interpretation as a twoperson zero-sum game and interesting game theoretical properties, such as uniqueness of the optimal solution, have been derived BID5 .",
"It is further proved that such adversarial process minimizes certain divergences, such as Shannon divergence, between the generated distribution and the data distribution.Simply put, the goal of training a GAN is to search for a distribution in the range of the generator that best approximates the data distribution.",
"The range is often defined by the input latent variable z and its specific architecture, i.e., Π = {G(z, θ), θ ∈ Θ}.",
"When the range is general enough, one could possibly find the real data distribution.",
"However, in practice, the range is usually insufficient to perfectly describe the real data, which is typically of high dimension.",
"As a result, what we search for is in fact the I-projection BID2 of the real data distribution on Π."
] | [
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.31578946113586426,
0.1860465109348297,
0.2857142686843872,
0.3333333432674408,
0.19354838132858276,
0.18867924809455872,
0.1304347813129425,
0.2222222238779068,
0.045454539358615875,
0.25,
0.21621620655059814,
0.31578946113586426
] | ryekdoCqF7 | true | [
"We propose a new method to incrementally train a mixture generative model to approximate the information projection of the real data distribution."
] |
[
"Generative priors have become highly effective in solving inverse problems including denoising, inpainting, and reconstruction from few and noisy measurements.",
"With a generative model we can represent an image with a much lower dimensional latent codes.",
"In the context of compressive sensing, if the unknown image belongs to the range of a pretrained generative network, then we can recover the image by estimating the underlying compact latent code from the available measurements.",
"However, recent studies revealed that even untrained deep neural networks can work as a prior for recovering natural images.",
"These approaches update the network weights keeping latent codes fixed to reconstruct the target image from the given measurements.",
"In this paper, we optimize over network weights and latent codes to use untrained generative network as prior for video compressive sensing problem.",
"We show that by optimizing over latent code, we can additionally get concise representation of the frames which retain the structural similarity of the video frames.",
"We also apply low-rank constraint on the latent codes to represent the video sequences in even lower dimensional latent space.",
"We empirically show that our proposed methods provide better or comparable accuracy and low computational complexity compared to the existing methods.",
"Compressive sensing refers to a broad class of problems in which we aim to recover a signal from a small number of measurements [1] - [3] .",
"Suppose we are given a sequence of measurements for t = 1, . . . , T as y t = A t x t + e t ,",
"where x t denotes the t th frame in the unknown video sequence, y t denotes its observed measurements, A t denotes the respective measurement operator, and e t denotes noise or error in the measurements.",
"Our goal is to recover the video sequence (x t ) from the available measurements (y t ).",
"The recovery problem becomes especially challenging as the number of measurements (in y t ) becomes very small compared to the number of unknowns (in x t ).",
"Classical signal priors exploit sparse and low-rank structures in images and videos for their reconstruction [4] - [16] .",
"However, the natural images exhibits far richer nonlinear structures than sparsity alone.",
"We focus on a newly emerging generative priors that learn a function that maps vectors drawn from a certain distribution in a low-dimensional space into images in a highdimensional space.",
"The generative model and optimization problems we use are inspired by recent work on using generative models for compressive sensing in [17] - [23] .",
"Compressive sensing using generative models was introduced in [17] , which used a trained deep generative network as a prior for image reconstruction from compressive measurements.",
"Afterwards deep image prior (DIP) used an untrained convolutional generative model as a prior for solving inverse problems such as inpainting and denoising because of their tendency to generate natural images [22] ; the reconstruction problem involves optimization of generator network parameters.",
"Inspired by these observations, a number of methods have been proposed for solving compressive sensing problem by optimizing generator network weights while keeping the latent code fixed at a random value [19] , [20] .",
"Both DIP [22] and deep decoder [20] update the model weights to generate a given image; therefore, the generator can reconstruct wide range of images.",
"One key difference between the two approaches is that the network used in DIP is highly overparameterized, while the one used in deep decoder is underparameterized.",
"We observed two main limitations in the DIP and deep decoder-based video recovery that we seek to address in this paper.",
"(1) The latent codes in DIP and deep decoder methods are initialized at random and stay fixed throughout the recovery process.",
"Therefore, we cannot infer the structural similarities in the images from the structural similarities in the latent codes.",
"(2) Both of these methods train one network per image.",
"A naive approach to train one network per frame in a video will be computationally prohibitive, and if we train a single network to generate the entire video sequence, then their performance degrades.",
"Therefore, we propose joint optimization over network weights γ and the latent codes z t to reconstruct video sequence.",
"Thus we learn a single generator and a set of latent codes to represent a video sequence.",
"We observe that when we optimize over latent code alongside network weights, the temporal similarity in the video frames is reflected in the latent code representation.",
"To exploit similarities among the frames in a video sequence, we also include low-rank constraints on the latent codes.",
"An illustration of different types of representations we use in this paper are shown in Figure 1 ."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1111111044883728,
0.0624999962747097,
0.21739129722118378,
0.1111111044883728,
0.11764705181121826,
0.05128204822540283,
0.10256409645080566,
0,
0,
0.1538461446762085,
0.15789473056793213,
0.0476190410554409,
0.1249999925494194,
0.052631575614213943,
0.05882352590560913,
0,
0.14999999105930328,
0.09999999403953552,
0.24390242993831635,
0.1071428507566452,
0.20408162474632263,
0.1463414579629898,
0.10810810327529907,
0.05405404791235924,
0.05405404791235924,
0.06896550953388214,
0,
0.04444443807005882,
0,
0.1249999925494194,
0.052631575614213943,
0.05714285373687744,
0
] | BJgmnmn5Lr | true | [
"Recover videos from compressive measurements by learning a low-dimensional (low-rank) representation directly from measurements while training a deep generator. "
] |
[
"Magnitude-based pruning is one of the simplest methods for pruning neural networks.",
"Despite its simplicity, magnitude-based pruning and its variants demonstrated remarkable performances for pruning modern architectures.",
"Based on the observation that the magnitude-based pruning indeed minimizes the Frobenius distortion of a linear operator corresponding to a single layer, we develop a simple pruning method, coined lookahead pruning, by extending the single layer optimization to a multi-layer optimization.",
"Our experimental results demonstrate that the proposed method consistently outperforms the magnitude pruning on various networks including VGG and ResNet, particularly in the high-sparsity regime.",
"The \"magnitude-equals-saliency\" approach has been long underlooked as an overly simplistic baseline among all imaginable techniques to eliminate unnecessary weights from over-parametrized neural networks.",
"Since the early works of LeCun et al. (1989) ; Hassibi & Stork (1993) which provided more theoretically grounded alternative of magnitude-based pruning (MP) based on second derivatives of loss function, a wide range of methods including Bayesian / information-theoretic approaches (Neal, 1996; Louizos et al., 2017; Molchanov et al., 2017; Dai et al., 2018) , pregularization (Wen et al., 2016; Liu et al., 2017; Louizos et al., 2018) , sharing redundant channels (Zhang et al., 2018; Ding et al., 2019) , and reinforcement learning approaches (Lin et al., 2017; Bellec et al., 2018; He et al., 2018) has been proposed as more sophisticated alternatives.",
"On the other hand, the capabilities of MP heuristics are gaining attention once more.",
"Combined with minimalistic techniques including iterative pruning (Han et al., 2015) and dynamic reestablishment of connections (Zhu & Gupta, 2017) , a recent large-scale study by Gale et al. (2019) claims that MP can achieve a state-of-the-art trade-off of sparsity and accuracy on ResNet-50.",
"The unreasonable effectiveness of magnitude scores often extends beyond the strict domain of network pruning; a recent experiment by Frankle & Carbin (2019) suggests an existence of an automatic subnetwork discovery mechanism underlying the standard gradient-based optimization procedures of deep, overparametrized neural networks by showing that the MP algorithm finds an efficient trainable subnetwork.",
"These observations constitute a call to revisit the \"magnitude-equals-saliency\" approach for a better understanding of deep neural network itself.",
"As an attempt to better understand the nature of MP methods, we study a generalization of magnitude scores under a functional approximation framework; by viewing MP as a relaxed minimization of distortion in layerwise operators introduced by zeroing out parameters, we consider a multi-layer extension of the distortion minimization problem.",
"Minimization of the newly suggested distortion measure which 'looks ahead' the impact of pruning on neighboring layers gives birth to a novel pruning strategy, coined lookahead pruning (LAP).",
"In this paper, we focus on comparison of the proposed LAP scheme to its MP counterpart.",
"We empirically demonstrate that LAP consistently outperforms the MP under various setups including linear networks, fully-connected networks, and deep convolutional and residual networks.",
"In particular, the LAP consistently enables more than ×2 gain in the compression rate of the considered models, with increasing benefits under the high-sparsity regime.",
"Apart from its performance, the lookahead pruning method enjoys additional attractive properties: • Easy-to-use: Like magnitude-based pruning, the proposed LAP is a simple score-based approach agnostic to model and data, which can be implemented by computationally light elementary tensor operations.",
"Unlike most Hessian-based methods, LAP does not rely on an availability of training data except for the retraining phase.",
"It also has no hyper-parameter to tune, in contrast to other sophisticated training-based and optimization-based schemes.",
"• Versatility: As our method simply replaces the \"magnitude-as-saliency\" criterion with a lookahead alternative, it can be deployed jointly with algorithmic tweaks developed for magnitudebased pruning, such as iterative pruning and retraining (Han et al., 2015) or joint pruning and training with dynamic reconnections (Zhu & Gupta, 2017; Gale et al., 2019) .",
"The remainder of this manuscript is structured as follows: In Section 2, we introduce a functional approximation perspective toward MP and motivate LAP and its variants as a generalization of MP for multiple layer setups; in Section 3 we explore the capabilities of LAP and its variants with simple models, then move on to apply LAP to larger-scale models.",
"In this work, we interpret magnitude-based pruning as a solution to the minimization of the Frobenius distortion of a single layer operation incurred by pruning.",
"Based on this framework, we consider the minimization of the Frobenius distortion of multi-layer operation, and propose a novel lookahead pruning (LAP) scheme as a computationally efficient algorithm to solve the optimization.",
"Although LAP was motivated from linear networks, it extends to nonlinear networks which indeed minimizes the root mean square lookahead distortion assuming i.",
"τ fraction in all fully-connected layers, except for the last layer where we use (1 + q)/2 instead.",
"For FCN, we use (p, q) = (0, 0.5).",
"For Conv-6, VGGs ResNets, and WRN, we use (0.85, 0.8).",
"For ResNet-{18, 50}, we do not prune the first convolutional layer.",
"The range of sparsity for reported figures in all tables is decided as follows: we start from τ where test error rate starts falling below that of an unpruned model and report the results at τ, τ + 1, τ + 2, . . . for FCN and Conv-6, τ, τ + 2, τ + 4, . . . for VGGs, ResNet-50, and WRN, and τ, τ + 3, τ + 6, . . . for ResNet-18.",
"In this section, we show that the optimization in Eq. (3) is NP-hard by showing the reduction from the following binary quadratic programming which is NP-hard (Murty & Kabadi, 1987) :"
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.29999998211860657,
0.1818181723356247,
0.29999998211860657,
0.1249999925494194,
0,
0.12820512056350708,
0.1818181723356247,
0.1666666567325592,
0.1111111119389534,
0.2222222238779068,
0.25531914830207825,
0.24242423474788666,
0.1599999964237213,
0.13333332538604736,
0.12903225421905518,
0.1666666567325592,
0.1428571343421936,
0,
0.1071428507566452,
0.14814814925193787,
0.3333333432674408,
0.2702702581882477,
0.0624999962747097,
0.07407406717538834,
0,
0,
0.09999999403953552,
0.07017543911933899,
0.0555555522441864
] | ryl3ygHYDB | true | [
"We study a multi-layer generalization of the magnitude-based pruning."
] |
[
"Recent literature has demonstrated promising results on the training of Generative Adversarial Networks by employing a set of discriminators, as opposed to the traditional game involving one generator against a single adversary.",
"Those methods perform single-objective optimization on some simple consolidation of the losses, e.g. an average.",
"In this work, we revisit the multiple-discriminator approach by framing the simultaneous minimization of losses provided by different models as a multi-objective optimization problem.",
"Specifically, we evaluate the performance of multiple gradient descent and the hypervolume maximization algorithm on a number of different datasets.",
"Moreover, we argue that the previously proposed methods and hypervolume maximization can all be seen as variations of multiple gradient descent in which the update direction computation can be done efficiently.",
"Our results indicate that hypervolume maximization presents a better compromise between sample quality and diversity, and computational cost than previous methods.",
"Generative Adversarial Networks (GANs) BID13 offer a new approach to generative modeling, using game-theoretic training schemes to implicitly learn a given probability density.",
"Prior to the emergence of GAN architectures, realistic generative modeling remained elusive.",
"When offering unparalleled realism, GAN training remains fraught with stability issues.",
"Commonly reported shortcomings involved in the GAN game are the lack of useful gradients provided by the discriminator, and mode collapse, i.e. lack of diversity in the generator's samples.Considerable research effort has been devoted in recent literature in order to overcome training instability 1 within the GAN framework.",
"Some architectures such as BEGAN BID4 ) have applied auto-encoders as discriminators and proposed a new loss to help stabilize training.",
"Methods such as TTUR BID16 , in turn, have attempted to define schedules for updating the generator and discriminator differently.",
"The PacGAN algorithm (Lin et al., 2017) proposes to modify the discriminator's architecture which will receive m concatenated samples as input, while modifications to alternate updates in SGD were introduced in (Yadav et al., 2017) .",
"These samples are jointly classified as either real or generated, and authors show that this enforces sample diversity.",
"In SNGAN (Miyato et al., 2018) , authors introduce spectral normalization on the discriminator aiming to ensure Lipschitz continuity, which is empirically shown to consistently yield high quality samples when different sets of hyperparameters are used.Recent works have proposed to tackle GANs instability issues using multiple discriminators.",
"Neyshabur et al. (2017) propose a GAN variation in which one generator is trained against a set of discriminators, where each discriminator sees a fixed random projection of the inputs.",
"Prior work, including GMAN BID9 has also explored training against multiple discriminators.In this paper, we build upon Neyshabur et al.'s introduced framework and propose reformulating the average loss minimization aiming to further stabilize GAN training.",
"Specifically, we propose treating the loss signal provided by each discriminator as an independent objective function.",
"To achieve this, we simultaneously minimize the losses using multi-objective optimization techniques.",
"Namely, we exploit previously introduced methods in literature such as the multiple gradient descent algorithm (MGD) BID7 .",
"However, due to MGD's prohibitively high cost in the case of large neural networks, we propose the use of more efficient alternatives such as maximization of the hypervolume of the region defined between a fixed, shared upper bound on those losses, which we will refer to as the nadir point η * , and each of the component losses.In contrast to Neyshabur et al. (2017) 's approach, where the average loss is minimized when training the generator, hypervolume maximization (HV) optimizes a weighted loss, and the generator's training will adaptively assign greater importance to feedback from discriminators against which it performs poorly.Experiments performed on MNIST show that HV presents a good compromise in the computational cost-samples quality trade-off, when compared to average loss minimization or GMAN's approach (low quality and cost), and MGD (high quality and cost).",
"Also, the sensitivity to introduced hyperparameters is studied and results indicate that increasing the number of discriminators consequently increases the generator's robustness along with sample quality and diversity.",
"Experiments on CIFAR-10 indicate the method described produces higher quality generator samples in terms of quantitative evaluation.",
"Moreover, image quality and sample diversity are once more shown to consistently improve as we increase the number of discriminators.In summary, our main contributions are the following:1.",
"We offer a new perspective on multiple-discriminator GAN training by framing it in the context of multi-objective optimization, and draw similarities between previous research in GANs variations and MGD, commonly employed as a general solver for multi-objective optimization.",
"2.",
"We propose a new method for training multiple-discriminator GANs: Hypervolume maximization, which weighs the gradient contributions of each discriminator by its loss.The remainder of this document is organized as follows: Section 2 introduces definitions on multiobjective optimization and MGD.",
"In Section 3 we describe prior relevant literature.",
"Hypervolume maximization is detailed in Section 4, with experiments and results presented in Section 5.",
"Conclusions and directions for future work are drawn in Section 6.",
"In this work we have shown that employing multiple discriminators is a practical approach allowing us to trade extra capacity, and thereby extra computational cost, for higher quality and diversity of generated samples.",
"Such an approach is complimentary to other advances in GANs training and can be easily used together with other methods.",
"We introduced a multi-objective optimization framework for studying multiple discriminator GANs, and showed strong similarities between previous work and the multiple gradient descent algorithm.",
"The proposed approach was observed to consistently yield higher quality samples in terms of FID.",
"Furthermore, increasing the number of discriminators was shown to increase sample diversity and generator robustness.Deeper analysis of the quantity || K k=1 α k ∇l k || is the subject of future investigation.",
"We hypothesize that using it as a penalty term might reduce the necessity of a high number of discriminators.",
"In BID16 , authors proposed to use as a quality metric the squared Fréchet distance BID11 between Gaussians defined by estimates of the first and second order moments of the outputs obtained through a forward pass in a pretrained classifier of both real and generated data.",
"They proposed the use of Inception V3 (Szegedy et al., 2016) for computation of the data representation and called the metric Fréchet Inception Distance (FID), which is defined as: DISPLAYFORM0 where m d , Σ d and m g , Σ g are estimates of the first and second order moments from the representations of real data distributions and generated data, respectively.We employ FID throughout our experiments for comparison of different approaches.",
"However, for each dataset in which FID was computed, the output layer of a pretrained classifier on that particular dataset was used instead of Inception.",
"m d and Σ d were estimated on the complete test partitions, which are not used during training."
] | [
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.11999999731779099,
0.05405404791235924,
0.04651162400841713,
0.307692289352417,
0.2448979616165161,
0.24390242993831635,
0.0476190410554409,
0.060606054961681366,
0.1249999925494194,
0.16393442451953888,
0.09756097197532654,
0.1463414579629898,
0.038461532443761826,
0.1538461446762085,
0.1492537260055542,
0.1249999925494194,
0.10526315122842789,
0,
0,
0.10526315122842789,
0.1138211339712143,
0.260869562625885,
0.21052631735801697,
0.21276594698429108,
0.2545454502105713,
0.1666666567325592,
0,
0.23529411852359772,
0.1875,
0.23076923191547394,
0.25,
0.1860465109348297,
0.2222222238779068,
0.16326530277729034,
0.10526315122842789,
0.13333332538604736,
0.10666666179895401,
0.1395348757505417,
0.10526315122842789
] | S1MB-3RcF7 | true | [
"We introduce hypervolume maximization for training GANs with multiple discriminators, showing performance improvements in terms of sample quality and diversity. "
] |
[
"Designing of search space is a critical problem for neural architecture search (NAS) algorithms.",
"We propose a fine-grained search space comprised of atomic blocks, a minimal search unit much smaller than the ones used in recent NAS algorithms.",
"This search space facilitates direct selection of channel numbers and kernel sizes in convolutions.",
"In addition, we propose a resource-aware architecture search algorithm which dynamically selects atomic blocks during training.",
"The algorithm is further accelerated by a dynamic network shrinkage technique.\n",
"Instead of a search-and-retrain two-stage paradigm, our method can simultaneously search and train the target architecture in an end-to-end manner. \n",
"Our method achieves state-of-the-art performance under several FLOPS configurations on ImageNet with a negligible searching cost.\n",
"We open our entire codebase at: https://github.com/meijieru/AtomNAS.",
"Human-designed neural networks are already surpassed by machine-designed ones.",
"Neural Architecture Search (NAS) has become the mainstream approach to discover efficient and powerful network structures (Zoph & Le (2017) ; Pham et al. (2018) ; ; ).",
"Although the tedious searching process is conducted by machines, humans still involve extensively in the design of the NAS algorithms.",
"Designing of search spaces is critical for NAS algorithms and different choices have been explored.",
"Cai et al. (2019) and Wu et al. (2019) utilize supernets with multiple choices in each layer to accommodate a sampled network on the GPU.",
"Chen et al. (2019b) progressively grow the depth of the supernet and remove unnecessary blocks during the search.",
"Tan & Le (2019a) propose to search the scaling factor of image resolution, channel multiplier and layer numbers in scenarios with different computation budgets.",
"Stamoulis et al. (2019a) propose to use different kernel sizes in each layer of the supernet and reuse the weights of larger kernels for small kernels.",
"; Tan & Le (2019b) adopts Inverted Residuals with Linear Bottlenecks (MobileNetV2 block) (Sandler et al., 2018) , a building block with light-weighted depth-wise convolutions for highly efficient networks in mobile scenarios.",
"However, the proposed search spaces generally have only a small set of choices for each block.",
"DARTS and related methods Chen et al., 2019b; use around 10 different operations between two network nodes.",
"; Cai et al. (2019) ; Wu et al. (2019) ; Stamoulis et al. (2019a) search the expansion ratios in the MobileNetV2 block but still limit them to a few discrete values.",
"We argue that more fine-grained search space is essential to find optimal neural architectures.",
"Specifically, the searched building block in a supernet should be as small as possible to generate the most diversified model structures.",
"We revisit the architectures of state-of-the-art networks ; Tan & Le (2019b) ; He et al. (2016) ) and find a commonly used building block: convolution -channel-wise operation -convolution.",
"We reinterpret such structure as an ensemble of computationally independent blocks, which we call atomic blocks.",
"This new formulation enables a much larger and more fine-grained search space.",
"Starting from a supernet which is built upon atomic blocks, the search for exact channel numbers and various operations can be achieved by selecting a subset of the atomic blocks.",
"For the efficient exploration of the new search space, we propose a NAS algorithm named AtomNAS to conduct architecture search and network training simultaneously.",
"Specifically, an importance factor is introduced to each atomic block.",
"A penalty term proportional to the computation cost of the atomic block is enforced on the network.",
"By jointly learning the importance factors along with the weights of the network, AtomNAS selects the atomic blocks which contribute to the model capacity with relatively small computation cost.",
"Training on large supernets is computationally demanding.",
"We observe that the scaling factors of many atomic blocks permanently vanish at the early stage of model training.",
"We propose a dynamic network shrinkage technique which removes the ineffective atomic blocks on the fly and greatly reduce the computation cost of AtomNAS.",
"In our experiment, our method achieves 75.9% top-1 accuracy on ImageNet dataset around 360M FLOPs, which is 0.9% higher than state-of-the-art model (Stamoulis et al., 2019a) .",
"By further incorporating additional modules, our method achieves 77.6% top-1 accuracy.",
"It outperforms MixNet by 0.6% using 363M FLOPs, which is a new state-of-the-art under the mobile scenario.",
"In summary, the major contributions of our work are:",
"1. We propose a fine-grained search space which includes the exact number of channels and mixed operations (e.g., combination of different convolution kernels).",
"2. We propose an efficient end-to-end NAS algorithm named AtomNAS which can simultaneously search the network architecture and train the final model.",
"No finetuning is needed after AtomNAS finishes.",
"3. With the proposed search space and AtomNAS, we achieve state-of-the-art performance on ImageNet dataset under mobile setting.",
"In this paper, we revisit the common structure, i.e., two convolutions joined by a channel-wise operation, and reformulate it as an ensemble of atomic blocks.",
"This perspective enables a much larger and more fine-grained search space.",
"For efficiently exploring the huge fine-grained search space, we propose an end-to-end algorithm named AtomNAS, which conducts architecture search and network training jointly.",
"The searched networks achieve significantly better accuracy than previous state-of-the-art methods while using small extra cost.",
"Table 4 : Comparision with baseline backbones on COCO object detection and instance segmentation.",
"Cls denotes the ImageNet top-1 accuracy; detect-mAP and seg-mAP denotes mean average precision for detection and instance segmentation on COCO dataset.",
"The detection results of baseline models are from Stamoulis et al. (2019b) .",
"SinglePath+ (Stamoulis et al., 2019b) In this section, we assess the performance of AtomNAS models as feature extractors for object detection and instance segmentation on COCO dataset (Lin et al., 2014) .",
"We first pretrain AtomNAS models (without Swish activation function (Ramachandran et al., 2018) and Squeeze-and-Excitation (SE) module (Hu et al., 2018) ) on ImageNet, use them as drop-in replacements for the backbone in the Mask-RCNN model (He et al., 2017a) by building the detection head on top of the last feature map, and finetune the model on COCO dataset.",
"We use the open-source code MMDetection (Chen et al., 2019a) .",
"All the models are trained on COCO train2017 with batch size 16 and evaluated on COCO val2017.",
"Following the schedule used in the open-source implementation of TPU-trained Mask-RCNN , the learning rate starts at 0.02 and decreases by a scale of 10 at 15-th and 20th epoch respectively.",
"The models are trained for 23 epochs in total.",
"Table 4 compares the results with other baseline backbone models.",
"The detection results of baseline models are from Stamoulis et al. (2019b) .",
"We can see that all three AtomNAS models outperform the baselines on object detection task.",
"The results demonstrate that our models have better transferability than the baselines, which may due to mixed operations, a.k.a multi-scale are here, are more important to object detection and instance segmentation.",
"https://github.com/tensorflow/tpu/tree/master/models/official/mask_ rcnn"
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.0952380895614624,
0,
0,
0,
0,
0,
0.1599999964237213,
0,
0,
0,
0,
0.08695651590824127,
0.06666666269302368,
0,
0,
0.06451612710952759,
0.10256409645080566,
0.0833333283662796,
0,
0,
0,
0,
0.0555555522441864,
0,
0.09999999403953552,
0.05714285373687744,
0.06666666269302368,
0,
0.17391303181648254,
0,
0.13333332538604736,
0,
0.06666666269302368,
0.11428570747375488,
0,
0.23076923191547394,
0,
0,
0,
0,
0.307692289352417,
0,
0,
0,
0.0833333283662796,
0.09090908616781235,
0.14814814925193787,
0,
0.10526315122842789,
0.072727270424366,
0,
0.08695651590824127,
0,
0.11764705181121826,
0,
0,
0.08695651590824127,
0
] | BylQSxHFwr | true | [
"A new state-of-the-art on Imagenet for mobile setting"
] |
[
"We introduce Lyceum, a high-performance computational ecosystem for robotlearning. ",
"Lyceum is built on top of the Julia programming language and theMuJoCo physics simulator, combining the ease-of-use of a high-level program-ming language with the performance of native C. Lyceum is up to 10-20Xfaster compared to other popular abstractions like OpenAI’sGymand Deep-Mind’sdm-control. ",
"This substantially reduces training time for various re-inforcement learning algorithms; and is also fast enough to support real-timemodel predictive control with physics simulators. ",
"Lyceum has a straightfor-ward API and supports parallel computation across multiple cores or machines.",
"The code base, tutorials, and demonstration videos can be found at: https://sites.google.com/view/lyceum-anon.",
"Progress in deep learning and artificial intelligence has exploded in recent years, due in large part to growing computational infrastructure.",
"The advent of massively parallel GPU computing, combined with powerful automatic-differentiation tools like TensorFlow (Abadi et al., 2016) and PyTorch (Paszke et al., 2017b) , have lead to new classes of algorithms by enabling what was once computational intractable.",
"These tools, alongside fast and accurate physics simulators like MuJoCo (Todorov et al., 2012) and associated frameworks like OpenAI's Gym (Brockman et al., 2016 ) and DeepMind's dm_control (Tassa et al., 2018) , have similarly transformed various aspects of robotic control like Reinforcement Learning (RL), Model-Predictive Control (MPC), and motion planning.",
"These platforms enable researchers to give their ideas computational form, share results with collaborators, and deploy their successes on real systems.",
"From these advances, simulation to real-world (sim2real) transfer has emerged as a promising paradigm for robotic control.",
"A growing body of recent work suggests that robust control policies trained in simulation can successfully transfer to the real world (Lowrey et al., 2018a; OpenAI, 2018; Rajeswaran et al., 2016; Sadeghi & Levine, 2016; Tobin et al., 2017) .",
"Despite these advances, many algorithms are computationally inefficient and have been unable to scale to complex problem domains.",
"Training control policies with state-of-the-art RL algorithms often takes hours to days of compute time.",
"For example, OpenAI's extremely impressive Dactyl work (OpenAI, 2018) required 50 hours of training time across 6144 CPU cores and 8 powerful NVIDIA V100 GPUs.",
"Such computational budgets are available only to a select few labs.",
"Furthermore, such experiments are seldom run only once in deep learning and especially in deep RL.",
"Indeed, RL algorithms are notoriously sensitive to choices of hyper-parameters and require reward shaping (Henderson et al., 2017; Rajeswaran et al., 2017; 2018; Mania et al., 2018) .",
"Thus, many iterations of the learning process may be required, with humans in the loop, to improve reward and hyperparameters, before deploying solutions in real world.",
"This computational bottleneck often leads to a scarcity of hardware results, relative to the number of papers that propose new algorithms on highly simplified and well tuned benchmark tasks (Brockman et al., 2016; Tassa et al., 2018) .",
"Exploring avenues to reduce experiment turn around time is thus crucial to scaling up to harder tasks and making resource-intensive algorithms and environments accessible to research labs without massive cloud computing budgets.",
"In a similar vein, computational considerations have also limited progress in model-based control algorithms.",
"For real-time model predictive control, the computational restrictions manifest as the requirement to compute controls in bounded time with limited local resources.",
"As we will show, existing frameworks such as Gym and dm_control, while providing a convenient abstraction in Python, are too slow to meet this real-time computation requirement.",
"As a result, most planning algorithms are run offline and deployed in open-loop mode on hardware.",
"This is unfortunate, since it does not take feedback into account which is well known to be critical for stochastic control.",
"Our Contributions: Our goal in this work is to overcome the aforementioned computational restrictions to enable faster training of policies with RL algorithms, facilitate real-time MPC with a detailed physics simulator, and ultimately enable researchers to engage more complex robotic tasks.",
"To this end, we develop Lyceum, a computational ecosystem that uses the Julia programming language and the MuJoCo physics engine.",
"Lyceum ships with the main OpenAI gym continuous control tasks, along with other environments representative of challenges in robotics.",
"Julia's unique features allow us to wrap MuJoCo with zero-cost abstractions, providing the flexibility of a high-level programming language to enable easy creation of environments, tasks, and algorithms while retaining the performance of native C/C++.",
"This allows RL and MPC algorithms implemented in Lyceum to be 10-100X faster compared to Gym and dm_control.",
"We hope that this speedup will enable RL researchers to scale up to harder problems without increased computational costs, as well as enable real-time MPC that looks ahead through a simulator.",
"We intruced Lyceum, a new computational ecosystem for robot learning in Julia that provides the rapid prototyping and ease-of-use benefits of a high-level programming language, yet retaining the performance of a low-level language like C. We demonstrated that this ecosystem can obtain 10-20X speedups compared to existing ecosystems like OpenAI gym and dm_control.",
"We also demonstrated that this speed up enables faster experimental times for RL algorithms, as well as real-time model predictive control.",
"In the future, we hope to port over algorithmic infrastructures like OpenAI's baselines (Dhariwal et al., 2017) as well as environments like hand manipulation suite (Rajeswaran et al., 2018) and DoorGym (Urakami et al., 2019) ."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
0,
0.09302325546741486,
0.060606054961681366,
0.08695651590824127,
0.08695651590824127,
0.07407406717538834,
0.04444444179534912,
0.039215683937072754,
0.06896550953388214,
0.07692307233810425,
0.09090908616781235,
0.07692307233810425,
0,
0.05882352590560913,
0,
0.08695651590824127,
0.0624999962747097,
0.060606054961681366,
0.04651162400841713,
0.054054051637649536,
0,
0,
0.0555555522441864,
0.07999999821186066,
0,
0.04444444179534912,
0.0714285671710968,
0.07407406717538834,
0.09999999403953552,
0.07999999821186066,
0,
0.07547169178724289,
0,
0.05128204822540283
] | SyxytxBFDr | true | [
"A high performance robotics simulation and algorithm development framework."
] |
[
"There is a strong incentive to develop versatile learning techniques that can transfer the knowledge of class-separability from a labeled source domain to an unlabeled target domain in the presence of a domain-shift.",
"Existing domain adaptation (DA) approaches are not equipped for practical DA scenarios as a result of their reliance on the knowledge of source-target label-set relationship (e.g. Closed-set, Open-set or Partial DA).",
"Furthermore, almost all the prior unsupervised DA works require coexistence of source and target samples even during deployment, making them unsuitable for incremental, real-time adaptation.",
"Devoid of such highly impractical assumptions, we propose a novel two-stage learning process.",
"Initially, in the procurement-stage, the objective is to equip the model for future source-free deployment, assuming no prior knowledge of the upcoming category-gap and domain-shift.",
"To achieve this, we enhance the model’s ability to reject out-of-source distribution samples by leveraging the available source data, in a novel generative classifier framework.",
"Subsequently, in the deployment-stage, the objective is to design a unified adaptation algorithm capable of operating across a wide range of category-gaps, with no access to the previously seen source samples.",
"To achieve this, in contrast to the usage of complex adversarial training regimes, we define a simple yet effective source-free adaptation objective by utilizing a novel instance-level weighing mechanism, named as Source Similarity Metric (SSM).",
"A thorough evaluation shows the practical usability of the proposed learning framework with superior DA performance even over state-of-the-art source-dependent approaches.",
"Deep learning models have proven to be highly successful over a wide variety of tasks (Krizhevsky et al., 2012; Ren et al., 2015) .",
"However, a majority of these remain heavily dependent on access to a huge amount of labeled samples to achieve a reliable level of generalization.",
"A recognition model trained on a certain distribution of labeled samples (source domain) often fails to generalize when deployed in a new environment (target domain) in the presence a discrepancy in the input distribution (Shimodaira, 2000) .",
"Domain adaptation (DA) algorithms seek to minimize this discrepancy either by learning a domain invariant feature representation (Long et al., 2015; Kumar et al., 2018; Ganin et al., 2016; Tzeng et al., 2015) , or by learning independent domain transformations (Long et al., 2016) to a common latent representation through adversarial distribution matching (Tzeng et al., 2017; Nath Kundu et al., 2018) , in the absence of target label information.",
"Most of the existing approaches (Zhang et al., 2018c; Tzeng et al., 2017 ) assume a common label-set shared between the source and target domains (i.e. C s = C t ), which is often regarded as Closed-Set DA (see Fig. 1 ).",
"Though this assumption helps to analyze various insights of DA algorithms, such an assumption rarely holds true in real-world scenarios.",
"Recently researchers have independently explored two broad adaptation settings by partly relaxing the above assumption.",
"In the first kind, Partial DA (Zhang et al., 2018b; Cao et al., 2018a; b) , the target label space is considered as a subset of the source label space (i.e. C t ⊂ C s ).",
"This setting is more suited for large-scale universal source datasets, which will almost always subsume the label-set of a wide range of target domains.",
"However, the availability of such a universal source is highly questionable for a wide range of input domains and tasks.",
"In the second kind, regarded as Open-set DA (Baktashmotlagh et al., 2019; Ge et al., 2017) , the target label space is considered as a superset of the source label space (i.e. C t ⊃ C s ).",
"The major challenge in this setting is attributed to detection of target samples from the unobserved categories in a fully-unsupervised scenario.",
"Apart from the above two extremes, certain works define a partly mixed scenario by allowing \"private\" label-set for both source and target domains (i.e. C s \\ C t = ∅ and C t \\ C s = ∅) but with extra supervision such as few-shot labeled data (Luo et al., 2017) or access to the knowledge of common categories (Panareda Busto & Gall, 2017) .",
"Most of the prior approaches consider each scenario in isolation and propose independent solutions.",
"Thus, they require access to the knowledge of label-set relationship (or category-gap) to carefully choose a DA algorithm, which would be suitable for the problem in hand.",
"Furthermore, all the prior unsupervised DA works require coexistence of source and target samples even during deployment, hence not source-free.",
"This is highly impractical, as labeled source data may not be accessible after deployment due to several reasons such as, privacy concerns, restricted access to proprietary data, accidental loss of source data or other computational limitations in real-time deployment scenarios.",
"Acknowledging the aforementioned shortcomings, we propose one of the most convenient DA frameworks which is ingeniously equipped to address source-free DA for all kinds of label-set relationships, without any prior knowledge of the associated category-gap (i.e. universal-DA).",
"We not only focus on identifying the key complications associated with the challenging problem setting, but also devise insightful ideas to tackle such complications by adopting learning techniques much different from the available DA literature.",
"This leads us to realize a holistic solution which achieves superior DA performance even over prior source-dependent approaches.",
"a) Comparison with prior arts.",
"We compare our approach with UAN You et al. (2019) , and other prior methods.",
"The results are presented in Table 1 and Table 2 .",
"Clearly, our framework achieves state- Relative freq.",
"We have introduced a novel source-free, universal domain adaptation framework, acknowledging practical domain adaptation scenarios devoid of any assumption on the source-target label-set relationship.",
"In the proposed two-stage framework, learning in the Procurement stage is found to be highly crucial, as it aims to exploit the knowledge of class-separability in the most general form with enhanced robustness to out-of-distribution samples.",
"Besides this, success in the Deployment stage is attributed to the well-designed learning objectives effectively utilizing the source similarity criterion.",
"This work can be served as a pilot study towards learning efficient inheritable models in future."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0
] | [
0.1249999925494194,
0.1538461446762085,
0.21739129722118378,
0.05882352590560913,
0.1395348757505417,
0.13333332538604736,
0.12765957415103912,
0.1090909019112587,
0.09756097197532654,
0,
0,
0.07999999821186066,
0.0845070406794548,
0.09836065024137497,
0.04999999329447746,
0.1666666567325592,
0.07692307233810425,
0.09090908616781235,
0.1538461446762085,
0.07692307233810425,
0.04878048226237297,
0.10256409645080566,
0.11428570747375488,
0.04347825422883034,
0.19512194395065308,
0.07017543166875839,
0.14814814925193787,
0.037735845893621445,
0,
0,
0.0555555522441864,
0.06666666269302368,
0,
0.3255814015865326,
0.039215680211782455,
0.10256409645080566,
0
] | B1gd0nEFwS | true | [
"A novel unsupervised domain adaptation paradigm - performing adaptation without accessing the source data ('source-free') and without any assumption about the source-target category-gap ('universal')."
] |
[
"One of the long-standing challenges in Artificial Intelligence for learning goal-directed behavior is to build a single agent which can solve multiple tasks.",
"Recent progress in multi-task learning for goal-directed sequential problems has been in the form of distillation based learning wherein a student network learns from multiple task-specific expert networks by mimicking the task-specific policies of the expert networks.",
"While such approaches offer a promising solution to the multi-task learning problem, they require supervision from large expert networks which require extensive data and computation time for training.\n",
"In this work, we propose an efficient multi-task learning framework which solves multiple goal-directed tasks in an on-line setup without the need for expert supervision.",
"Our work uses active learning principles to achieve multi-task learning by sampling the harder tasks more than the easier ones.",
"We propose three distinct models under our active sampling framework.",
"An adaptive method with extremely competitive multi-tasking performance.",
"A UCB-based meta-learner which casts the problem of picking the next task to train on as a multi-armed bandit problem.",
"A meta-learning method that casts the next-task picking problem as a full Reinforcement Learning problem and uses actor-critic methods for optimizing the multi-tasking performance directly.",
"We demonstrate results in the Atari 2600 domain on seven multi-tasking instances: three 6-task instances, one 8-task instance, two 12-task instances and one 21-task instance.",
"Deep Reinforcement Learning (DRL) arises from the combination of the representation power of Deep learning (DL) BID10 BID3 ) with the use of Reinforcement Learning (RL) BID28 objective functions.",
"DRL agents can solve complex visual control tasks directly from raw pixels BID6 BID12 BID24 BID11 BID23 BID13 BID29 BID30 BID2 BID26 BID7 .",
"However, models trained using such algorithms tend to be task-specific because they train a different network for different tasks, however similar the tasks are.",
"This inability of the AI agents to generalize across tasks motivates the field of multi-task learning which seeks to find a single agent (in the case of DRL algorithms, a single deep neural network) which can perform well on all the tasks.",
"Training a neural network with a multi-task learning (MTL) algorithm on any fixed set of tasks (which we call a multi tasking instance (MTI)) leads to an instantiation of a multi-tasking agent (MTA) (we use the terms Multi-Tasking Network (MTN) and MTA interchangeably).",
"Such an MTA would possess the ability to learn task-agnostic representations and thus generalize learning across different tasks.",
"Successful DRL approaches to the goal-directed MTL problem fall into two categories.",
"First, there are approaches that seek to extract the prowess of multiple task-specific expert networks into a single student network.",
"The Policy Distillation framework BID20 and Actor-Mimic Networks BID16 fall into this category.",
"These works train k task-specific expert networks (DQNs ) and then distill the individual task-specific policies learned by the expert networks into a single student network which is trained using supervised learning.",
"While these approaches eventually produce a single a network that solves multiple tasks, individual expert networks must first be trained, and this training tends to be extremely computation and data intensive.The second set of DRL approaches to multi-tasking are related to the field of transfer learning.",
"Many recent DRL works BID16 BID21 BID18 BID4 attempt to solve the transfer learning problem.",
"Progressive networks BID21 ) is one such framework which can be adapted to the MTL problem.",
"Progressive networks iteratively learn to solve each successive task that is presented.",
"Thus, they are not a truly on-line learning algorithm.",
"Progressive Networks instantiate a task-specific column network for each new task.",
"This implies that the number of parameters they require grows as a large linear factor with each new task.",
"This limits the scalability of the approach with the results presented in the work being limited to a maximum of four tasks only.",
"Another important limitation of this approach is that one has to decide the order in which the network trains on the tasks.",
"In this work we propose a fully on-line multi-task DRL approach that uses networks that are comparable in size to the single-task networks.In particular, our contributions are the following:",
"1) We propose the first successful on-line multi-task learning framework which operates on MTIs that have many tasks with very visually different high-dimensional state spaces (See FIG0 for a visual depiction of the 21 tasks that constitute our largest multi-tasking instance).",
"2) We present three concrete instantiations of our MTL framework: an adaptive method, a UCB-based meta-learning method and a A3C based meta-learning method.",
"3) We propose a family of robust evaluation metrics for the multi-tasking problem and demonstrate that they evaluate a multi-tasking algorithm in a more sensible manner than existing metrics.",
"4) We provide extensive analyses of the abstract features learned by our methods and argue that most of the features help in generalization across tasks because they are task-agnostic.",
"5) We report results on seven distinct MTIs: three 6-task instances, one 8-task instance, two 12-task instances and one 21-task instance.",
"Previous works have only reported results on a single MTI.",
"Our largest MTI has more than double the number of tasks present in the largest MTI on which results have been published in the Deep RL literature BID20 .",
"6) We hence demonstrate how hyper-parameters tuned for an MTI (an instance with six tasks) generalize to other MTIs (with up to 21 tasks).",
"We propose a framework for training MTNs which , through a form of active learning succeeds in learning to perform on-line multi-task learning.",
"The key insight in our work is that by choosing the task to train on, an MTA can choose to concentrate its resources on tasks in which it currently performs poorly.",
"While we do not claim that our method solves the problem of on-line multi-task reinforcement learning definitively, we believe it is an important first step.",
"Our method is complementary to many Figure 6 : Turn Off analysis heap-maps for the all agents.",
"For BA3C since the agent scored 0 on one of the games, normalization along the neuron was done only across the other 5 games.of the existing works in the field of multi-task learning such as: BID20 and BID16 .",
"These methods could potentially benefit from our work.",
"Another possible direction for future work could be to explicitly force the learned abstract representations to be task-agnostic by imposing objective function based regularizations.",
"One possible regularization could be to force the average firing rate of a neuron to be the same across the different tasks."
] | [
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.2857142686843872,
0.2083333283662796,
0.21276594698429108,
0.23255813121795654,
0.1621621549129486,
0,
0.07407406717538834,
0.37837836146354675,
0.1904761791229248,
0.1860465109348297,
0.04878048226237297,
0,
0.2380952388048172,
0.23529411852359772,
0.27586206793785095,
0.21621620655059814,
0.12903225421905518,
0.1538461446762085,
0,
0.12765957415103912,
0.1355932205915451,
0.11764705181121826,
0.11428570747375488,
0.12903225421905518,
0.0714285671710968,
0.19999998807907104,
0.15789473056793213,
0.21052631735801697,
0.25641024112701416,
0.22727271914482117,
0.21052631735801697,
0.10256409645080566,
0.22727271914482117,
0.08888888359069824,
0.051282044500112534,
0.13793103396892548,
0.1428571343421936,
0.1463414579629898,
0.25641024112701416,
0.2916666567325592,
0.1395348757505417,
0.1666666567325592,
0.19607841968536377,
0,
0.1463414579629898,
0.1621621549129486
] | B1nZ1weCZ | true | [
"Letting a meta-learner decide the task to train on for an agent in a multi-task setting improves multi-tasking ability substantially"
] |
[
"Numerous machine reading comprehension (MRC) datasets often involve manual annotation, requiring enormous human effort, and hence the size of the dataset remains significantly smaller than the size of the data available for unsupervised learning.",
"Recently, researchers proposed a model for generating synthetic question-and-answer data from large corpora such as Wikipedia.",
"This model is utilized to generate synthetic data for training an MRC model before fine-tuning it using the original MRC dataset.",
"This technique shows better performance than other general pre-training techniques such as language modeling, because the characteristics of the generated data are similar to those of the downstream MRC data.",
"However, it is difficult to have high-quality synthetic data comparable to human-annotated MRC datasets.",
"To address this issue, we propose Answer-containing Sentence Generation (ASGen), a novel pre-training method for generating synthetic data involving two advanced techniques, (1) dynamically determining K answers and (2) pre-training the question generator on the answer-containing sentence generation task.",
"We evaluate the question generation capability of our method by comparing the BLEU score with existing methods and test our method by fine-tuning the MRC model on the downstream MRC data after training on synthetic data.",
"Experimental results show that our approach outperforms existing generation methods and increases the performance of the state-of-the-art MRC models across a range of MRC datasets such as SQuAD-v1.1, SQuAD-v2.0, KorQuAD and QUASAR-T without any architectural modifications to the original MRC model.",
"Machine reading comprehension (MRC), which finds an answer to a given question from given paragraphs called context, is an essential task in natural language processing.",
"With the use of high-quality human-annotated datasets for this task, such as SQuAD-v1.1 (Rajpurkar et al., 2016) , SQuAD-v2.0 (Rajpurkar et al., 2018) , and KorQuAD (Lim et al., 2019) , researchers have proposed MRC models, often surpassing human performance on these datasets.",
"These datasets commonly involve finding a short snippet within a paragraph as an answer to a given question.",
"However, these datasets require a significant amount of human annotation to create pairs of a question and its relevant answer from a given context.",
"Often the size of the annotated data is relatively small compared to that of data used in other unsupervised tasks such as language modeling.",
"Hence, researchers often rely on the two-phase training method of transfer learning, i.e., pre-training the model using large corpora from another domain in the first phase, followed by fine-tuning it using the main MRC dataset in the second phase.",
"Most state-of-the-art models for MRC tasks involve such pre-training methods.",
"Peters et al. (2018) present a bidirectional contextual word representation method called ELMo, which is pre-trained on a large corpus, and its learned contextual embedding layer has been widely adapted to many other MRC models.",
"Devlin et al. (2019a) show that pre-training with a masked language model on a large corpus and then fine-tuning on a downstream dataset results in significant performance improvements.",
"However, pre-training on another domain task and then fine-tuning on a downstream task may suffer from performance degradation, depending on which pre-training task is used in the first phase.",
"For example, Yang et al. (2019) show that the pre-training task of next sentence classification decreases performance on the downstream MRC tasks.",
"To handle this problem, generating synthetic data similar to the those of a downstream task is crucial to obtain a properly pre-trained model.",
"Recently, researchers have studied a model for generating synthetic MRC data from large corpora such as Wikipedia.",
"This is essentially a form of transfer learning, by training a generation model and using this model to create synthetic data for training the MRC model, before fine-tuning on the downstream MRC dataset.",
"Golub et al. (2017) suggest a two-stage synthesis network that decomposes the process of generating question-answer pairs into two steps, generating a fixed number (K) of answers conditioned on the paragraph, and question generation conditioned on the paragraph and the generated answer.",
"Devlin et al. (2019b) introduced a pre-training technique for the question generator of this method by pretraining on the generation of next-sentence that follows the paragraph.",
"However, choosing a fixed number (K) of candidate answers from each paragraph will lead to missing candidates if K is too small, and will lead to having lower-quality candidates if K is too big.",
"Moreover, the next sentence generation task is not conditioned on the answer, despite the answer being a strong conditional restriction for question generation task.",
"Also, the next sentence that follows a paragraph may have little relevance to the questions or answers from within the paragraph, and hence is not the ideal candidate for pre-training question generation.",
"To address these issues, we propose Answer-containing Sentence Generation (ASGen), a novel method for a synthetic data generator with two novel processes, (1) dynamically predicting K answers to generate diverse questions and (2) pre-training the question generator on answer-containing sentence generation task.",
"We evaluate the question generation capability of our method by comparing the BLEU score with existing methods and test our method by fine-tuning the MRC model on downstream MRC datasets after training on the generated data.",
"Experimental results show that our approach outperforms existing generation methods, increasing the performance of the state-ofthe-art MRC models across a wide range of MRC datasets such as SQuAD-v1.1, SQuAD-v2.0, KorQuAD, and QUASAR-T (Dhingra et al., 2017) without any architectural modifications to the MRC model.",
"We propose two advanced training methods for generating high-quality and diverse synthetic data for MRC.",
"First, we dynamically choose the K top answer spans from an answer generator and then generate the sentence containing the corresponding answer span as a pre-training task for the question generator.",
"Using the proposed methods, we generate 43M synthetic training samples and train the MRC model before fine-tuning on the downstream MRC dataset.",
"Our proposed method outperforms existing questions generation methods achieving new state-of-the-art results on SQuAD question generation and consistently shows the performance improvement for the state-of-the-art models on SQuAD-v1.1, SQuAD-v2.0, KorQuAD, and QUASAR-T datasets without any architectural modification to the MRC model.",
"We also remove all pages with less than 500 characters, as these pages are often low-quality stub articles, which removes a further 16% of the articles.",
"We remove all \"meta\" namespace pages such as talk, disambiguation, user pages, portals, etc. as these often contain irrelevant text or casual conversations between editors.",
"In order to extract usable text from the wiki-markup format of the Wikipedia articles, we remove extraneous entities from the markup including table of contents, headers, footers, links/URLs, image captions, IPA double parentheticals, category tables, math equations, unit conversions, HTML escape codes, section headings, double brace templates such as info-boxes, image galleries, HTML tags, HTML comments and all other tables.",
"We then split the cleaned text from the pages into paragraphs, and remove all paragraphs with less than 150 characters or more than 3500 characters.",
"Paragraphs with the number of characters between 150 to 500 were sub-sampled such that these paragraphs make up 16.5% of the final dataset, as originally done for the SQuAD dataset.",
"Since the majority of the paragraphs in Wikipedia are rather short, of the 60M paragraphs from the final 2.4M articles, our final Wikipedia dataset contains 8.3M paragraphs."
] | [
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.21739129722118378,
0.3030303120613098,
0.1666666567325592,
0.09302324801683426,
0.13333332538604736,
0.4814814627170563,
0.1818181723356247,
0.03703703358769417,
0.14999999105930328,
0.03703703358769417,
0.060606054961681366,
0.052631575614213943,
0.052631575614213943,
0.07692307233810425,
0.14814814925193787,
0.07999999821186066,
0.0952380895614624,
0.09756097197532654,
0.052631575614213943,
0.21052631735801697,
0.29411762952804565,
0.17777776718139648,
0.07999999821186066,
0.19999998807907104,
0.04651162400841713,
0.10810810327529907,
0.1304347813129425,
0.4285714328289032,
0.13333332538604736,
0.03389830142259598,
0.3870967626571655,
0.1428571343421936,
0.0555555522441864,
0.07407406717538834,
0.0952380895614624,
0.04878048226237297,
0,
0.05128204822540283,
0.04444443807005882,
0
] | H1lFsREYPS | true | [
"We propose Answer-containing Sentence Generation (ASGen), a novel pre-training method for generating synthetic data for machine reading comprehension."
] |
[
"Deep neural networks (DNNs) dominate current research in machine learning.",
"Due to massive GPU parallelization DNN training is no longer a bottleneck, and large models with many parameters and high computational effort lead common benchmark tables.",
"In contrast, embedded devices have a very limited capability.",
"As a result, both model size and inference time must be significantly reduced if DNNs are to achieve suitable performance on embedded devices.\n",
"We propose a soft quantization approach to train DNNs that can be evaluated using pure fixed-point arithmetic.",
"By exploiting the bit-shift mechanism, we derive fixed-point quantization constraints for all important components, including batch normalization and ReLU.",
"Compared to floating-point arithmetic, fixed-point calculations significantly reduce computational effort whereas low-bit representations immediately decrease memory costs.",
"We evaluate our approach with different architectures on common benchmark data sets and compare with recent quantization approaches.",
"We achieve new state of the art performance using 4-bit fixed-point models with an error rate of 4.98% on CIFAR-10.",
"Deep neural networks (DNNs) are state of the art in many machine learning challenges, pushing recent progress in computer vision, speech recognition and object detection (Deng & Yu (2014) ; Lecun et al. (2015) ; Karki et al. (2019) ).",
"However, the greatest results have been accomplished by training large models with many parameters using large amounts of training data.",
"As a result, modern DNNs show an extensive memory footprint and high-precision floating-point multiplications are especially expensive in terms of computation time and power consumption.",
"When deployed on embedded devices, the complexity of DNNs is necessarily restricted by the computational capability.",
"Therefore, efforts have been made to modify DNNs to better suit specific hardware instructions.",
"This includes both the transfer from floating-point to fixed-point arithmetic and the reduction in bit-size.",
"This process is termed fixed-point quantization and especially low-bit representations simultanouesly reduce memory cost, inference time, and energy consumption.",
"A survey is given in Sze et al. (2017) .",
"Furthermore, ternary-valued weights or even binary-valued weights allow replacement of many multiplications with additions 1 .",
"However, most quantization approaches do not fit to the common structure in modern DNNs.",
"State of the art architectures (such as ResNet, DenseNet, or MobileNetV2) consist of interconnected blocks that combine a convolution or fully-connected layer, a batch normalization layer and a ReLU activation function.",
"Each block can be optionally extended by a pooling layer, as shown in Figure 1 .",
"Since both convolution and fully-connected layers perform weighted sums, we summarize the two as a Linear component.",
"In contrast to the block structure, recent quantization approaches focus on the Linear component while preserving floating-point batch normalization (BN) layers.",
"This is crucial, since BN layers are folded into the preceding layer after the training and consequently destroy its fixed-point representation.",
"Even when performed separately, channel-wise floating-point multiplications make a pure fixed-point representation impossible.",
"Furthermore, many quantization methods strictly binarize activations which only works for very large models.",
"In this paper, we propose a soft quantization approach to learn pure fixed-point representations of state of the art DNN architectures.",
"Thereby, we follow the block structure and transfer all individual components into fixed-point representations before combining them appropriately.",
"We follow the same approach as Enderich et al. (2019) and formulate bit-size dependent fixed-point constraints for each component before transferring these constraints into regularization terms.",
"To the best of our knowledge, we are the first to provide a soft quantization approach to learn pure fixed-point representations of DNNs.",
"We extensively validate our approach on several benchmark data sets and with state of the art DNN architectures.",
"Although our approach is completely flexible in bit-size, we test two special cases:",
"• A pure fixed-point model with 4-bit weights and 4-bit activations which performs explicitly well, outperforming the floating-point baseline in many cases.",
"• A model with ternary-valued weights and 4-bit activations that can be evaluated using additions, bit shifts and clipping operations alone (no multiplications needed).",
"Soft quantization aims to reduce the complexity of DNNs at test time rather than at training time.",
"Therefore, training remains in floating-point precision, but maintains consideration of dedicated quantization constraints.",
"In this paper, we propose a novel soft quantization approach to learn pure fixed-point representations of state of the art DNN architectures.",
"With exponentially increasing fixed-point priors and weight clipping, our approach provides self-reliant weight adaptation.",
"In detailed experiments, we achieve new state of the art quantization results.",
"Especially the combination of 4-bit weights, 4-bit activations and fixed-point batch normalization layers seems quite promising"
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1818181723356247,
0.05405404791235924,
0,
0.0555555522441864,
0.3448275923728943,
0.12903225421905518,
0.20689654350280762,
0.13793103396892548,
0.1249999925494194,
0.1249999925494194,
0.06666666269302368,
0.0555555522441864,
0.07407406717538834,
0.07999999821186066,
0.1538461446762085,
0.19999998807907104,
0,
0.07692307233810425,
0.1538461446762085,
0.05128204822540283,
0,
0,
0.1249999925494194,
0.0624999962747097,
0.1599999964237213,
0.07692307233810425,
0.5,
0.13333332538604736,
0.10810810327529907,
0.5,
0.13333332538604736,
0.07999999821186066,
0.12121211737394333,
0,
0.29629629850387573,
0.1599999964237213,
0.4848484694957733,
0.1599999964237213,
0.1666666567325592,
0.14814814925193787
] | rJgKzlSKPH | true | [
"Soft quantization approach to learn pure fixed-point representations of deep neural networks"
] |
[
"\tWe present a new approach and a novel architecture, termed WSNet, for learning compact and efficient deep neural networks.",
"Existing approaches conventionally learn full model parameters independently and then compress them via \\emph{ad hoc} processing such as model pruning or filter factorization.",
"Alternatively, WSNet proposes learning model parameters by sampling from a compact set of learnable parameters, which naturally enforces {parameter sharing} throughout the learning process.",
"We demonstrate that such a novel weight sampling approach (and induced WSNet) promotes both weights and computation sharing favorably.",
"By employing this method, we can more efficiently learn much smaller networks with competitive performance compared to baseline networks with equal numbers of convolution filters.",
"Specifically, we consider learning compact and efficient 1D convolutional neural networks for audio classification.",
"Extensive experiments on multiple audio classification datasets verify the effectiveness of WSNet.",
"Combined with weight quantization, the resulted models are up to \\textbf{180$\\times$} smaller and theoretically up to \\textbf{16$\\times$} faster than the well-established baselines, without noticeable performance drop.",
"Despite remarkable successes in various applications, including e.g. audio classification, speech recognition and natural language processing, deep neural networks (DNNs) usually suffer following two problems that stem from their inherent huge parameter space.",
"First, most of state-of-the-art deep architectures are prone to over-fitting even when trained on large datasets BID42 .",
"Secondly, DNNs usually consume large amount of storage memory and energy BID17 .",
"Therefore these networks are difficult to embed into devices with limited memory and power (such as portable devices or chips).",
"Most existing networks aim to reduce computational budget through network pruning BID16 BID1 BID11 , filter factorization BID23 BID28 , low bit representation BID36 for weights and knowledge transfering BID20 .",
"In contrast to the above works that ignore the strong dependencies among weights and learn filters independently based on existing network architectures, this paper proposes to explicitly enforce the parameter sharing among filters to more effectively learn compact and efficient deep networks.In this paper, we propose a Weight Sampling deep neural network (i.e. WSNet) to significantly reduce both the model size and computation cost of deep networks, achieving more than 100× smaller size and up to 16× speedup at negligible performance drop or even achieving better performance than the baseline (i.e. conventional networks that learn filters independently).",
"Specifically, WSNet is parameterized by layer-wise condensed filters from which each filter participating in actual convolutions can be directly sampled, in both spatial and channel dimensions.",
"Since condensed filters have significantly fewer parameters than independently trained filters as in conventional CNNs, learning by sampling from them makes WSNet a more compact model compared to conventional CNNs.",
"In addition, to reduce the ubiquitous computational redundancy in convolving the overlapped filters and input patches, we propose an integral image based method to dramatically reduce the computation cost of WSNet in both training and inference.",
"The integral image method is also advantageous because it enables weight sampling with different filter size and minimizes computational overhead to enhance the learning capability of WSNet.In order to demonstrate the efficacy of WSNet, we conduct extensive experiments on the challenging acoustic scene classification and music detection tasks.",
"On each test dataset, including MusicDet200K (a self-collected dataset, as detailed in Section 4), ESC-50 (Piczak, 2015a) , UrbanSound8K BID40 and DCASE BID45 , WSNet significantly reduces the model size of the baseline by 100× with comparable or even higher classification accuracy.",
"When compressing more than 180×, WSNet is only subject to negligible accuracy drop.",
"At the same time, WSNet significantly reduces the computation cost (up to 16×).",
"Such results strongly establish the capability of WSNet to learn compact and efficient networks.",
"Although we detailed experiments mostly limited to 1D CNNs in this paper, we will explore how the same approach can be naturally generalized to 2D CNNs in future work.",
"In this paper, we present a class of Weight Sampling networks (WSNet) which are highly compact and efficient.",
"A novel weight sampling method is proposed to sample filters from condensed filters which are much smaller than the independently trained filters in conventional networks.",
"The weight sampling in conducted in two dimensions of the condensed filters, i.e. by spatial sampling and channel sampling.",
"TAB2"
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.774193525314331,
0.0555555522441864,
0.1621621549129486,
0.24242423474788666,
0.05405404791235924,
0.5,
0,
0.05405404791235924,
0.1666666567325592,
0.06451612710952759,
0.07692307233810425,
0.12121211737394333,
0.1860465109348297,
0.190476194024086,
0.05128204822540283,
0.1428571343421936,
0.045454539358615875,
0.06896551698446274,
0.037735845893621445,
0,
0,
0.2857142686843872,
0,
0.375,
0.10810810327529907,
0.06451612710952759
] | H1I3M7Z0b | true | [
"We present a novel network architecture for learning compact and efficient deep neural networks"
] |
[
"Recent work in adversarial machine learning started to focus on the visual perception in autonomous driving and studied Adversarial Examples (AEs) for object detection models.",
"However, in such visual perception pipeline the detected objects must also be tracked, in a process called Multiple Object Tracking (MOT), to build the moving trajectories of surrounding obstacles.",
"Since MOT is designed to be robust against errors in object detection, it poses a general challenge to existing attack techniques that blindly target objection detection: we find that a success rate of over 98% is needed for them to actually affect the tracking results, a requirement that no existing attack technique can satisfy.",
"In this paper, we are the first to study adversarial machine learning attacks against the complete visual perception pipeline in autonomous driving, and discover a novel attack technique, tracker hijacking, that can effectively fool MOT using AEs on object detection.",
"Using our technique, successful AEs on as few as one single frame can move an existing object in to or out of the headway of an autonomous vehicle to cause potential safety hazards.",
"We perform evaluation using the Berkeley Deep Drive dataset and find that on average when 3 frames are attacked, our attack can have a nearly 100% success rate while attacks that blindly target object detection only have up to 25%.",
"Since the first Adversarial Example (AE) against traffic sign image classification discovered by Eykholt et al. (Eykholt et al., 2018) , several research work in adversarial machine learning (Eykholt et al., 2017; Xie et al., 2017; Lu et al., 2017a; b; Zhao et al., 2018b; Chen et al., 2018; Cao et al., 2019) started to focus on the context of visual perception in autonomous driving, and studied AEs on object detection models.",
"For example, Eykholt et al. (Eykholt et al., 2017) and Zhong et al. (Zhong et al., 2018) studied AEs in the form of adversarial stickers on stop signs or the back of front cars against YOLO object detectors (Redmon & Farhadi, 2017) , and performed indoor experiments to demonstrate the attack feasibility in the real world.",
"Building upon these work, most recently Zhao et al. (Zhao et al., 2018b) leveraged image transformation techniques to improve the robustness of such adversarial sticker attacks in outdoor settings, and were able to achieve a 72% attack success rate with a car running at a constant speed of 30 km/h on real roads.",
"While these results from prior work are alarming, object detection is in fact only the first half of the visual perception pipeline in autonomous driving, or in robotic systems in general -in the second half, the detected objects must also be tracked, in a process called Multiple Object Tracking (MOT), to build the moving trajectories, called trackers, of surrounding obstacles.",
"This is required for the subsequent driving decision making process, which needs the built trajectories to predict future moving trajectories for these obstacles and then plan a driving path accordingly to avoid collisions with them.",
"To ensure high tracking accuracy and robustness against errors in object detection, in MOT only the detection results with sufficient consistency and stability across multiple frames can be included in the tracking results and actually influence the driving decisions.",
"Thus, MOT in the visual The complete visual perception pipeline in autonomous driving, i.e., both object detection and Multiple Object Tracking (MOT) (Baidu; Kato et al., 2018; 2015; Zhao et al., 2018a; Ess et al., 2010; MathWorks; Udacity) .",
"perception of autonomous driving poses a general challenge to existing attack techniques that blindly target objection detection.",
"For example, as shown by our analysis later in §4, an attack on objection detection needs to succeed consecutively for at least 60 frames to fool a representative MOT process, which requires an at least 98% attack success rate ( §4).",
"To the best of our knowledge, no existing attacks on objection detection can achieve such a high success rate (Eykholt et al., 2017; Xie et al., 2017; Lu et al., 2017a; b; Zhao et al., 2018b; Chen et al., 2018) .",
"In this paper, we are the first to study adversarial machine learning attacks considering the complete visual perception pipeline in autonomous driving, i.e., both object detection and object tracking, and discover a novel attack technique, called tracker hijacking, that can effectively fool the MOT process using AEs on object detection.",
"Our key insight is that although it is highly difficult to directly create a tracker for fake objects or delete a tracker for existing objects, we can carefully design AEs to attack the tracking error reduction process in MOT to deviate the tracking results of existing objects towards an attacker-desired moving direction.",
"Such process is designed for increasing the robustness and accuracy of the tracking results, but ironically, we find that it can be exploited by attackers to substantially alter the tracking results.",
"Leveraging such attack technique, successful AEs on as few as one single frame is enough to move an existing object in to or out of the headway of an autonomous vehicle and thus may cause potential safety hazards.",
"We select 20 out of 100 randomly sampled video clips from the Berkeley Deep Drive dataset for evaluation.",
"Under recommended MOT configurations in practice (Zhu et al., 2018) and normal measurement noise levels, we find that our attack can succeed with successful AEs on as few as one frame, and 2 to 3 consecutive frames on average.",
"We reproduce and compare with previous attacks that blindly target object detection, and find that when attacking 3 consecutive frames, our attack has a nearly 100% success rate while attacks that blindly target object detection only have up to 25%.",
"Contributions.",
"In summary, this paper makes the following contributions:",
"• We are the first to study adversarial machine learning attacks considering the complete visual perception pipeline in autonomous driving, i.e., both object detection and MOT.",
"We find that without considering MOT, an attack blindly targeting object detection needs at least a success rate of 98% to actually affect the complete visual perception pipeline in autonomous driving, which is a requirement that no existing attack technique can satisfy.",
"• We discover a novel attack technique, tracker hijacking, that can effectively fool MOT using AEs on object detection.",
"This technique exploits the tracking error reduction process in MOT, and can enable successful AEs on as few as one single frame to move an existing object in to or out of the headway of an autonomous vehicle to cause potential safety hazards.",
"• The attack evaluation using the Berkeley Deep Drive dataset shows that our attack can succeed with successful AEs on as few as one frame, and only 2 to 3 consecutive frames on average, and when 3 consecutive frames are attacked, our attack has a nearly 100% success rate while attacks that blindly target object detection only have up to 25%.",
"• Code and evaluation data are all available at GitHub (Github).",
"Implications for future research in this area.",
"Today, adversarial machine learning research targeting the visual perception in autonomous driving, no matter on attack or defense, uses the accuracy of objection detection as the de facto evaluation metric (Luo et al., 2014) .",
"However, as concretely shown in our work, without considering MOT, successful attacks on the detection results alone do not have direct implication on equally or even closely successful attacks on the MOT results, the final output of the visual perception task in real-world autonomous driving (Baidu; Kato et al., 2018) .",
"Thus, we argue that future research in this area should consider: (1) using the MOT accuracy as the evaluation metric, and (2) instead of solely focusing on object detection, also studying weaknesses specific to MOT or interactions between MOT and object detection, which is a highly under-explored research space today.",
"This paper marks the first research effort towards both directions.",
"Practicality improvement.",
"Our evaluation currently are all conducted digitally with captured video frames, while our method should still be effective when applied to generate physical patches.",
"For example, our proposed adversarial patch generation method can be naturally combined with different techniques proposed by previous work to enhance reliability of AEs in the physical world (e.g., non-printable loss (Sharif et al., 2016) and expectation-over-transformation (Athalye et al., 2017) ).",
"We leave this as future work.",
"Generality improvement.",
"Though in this work we focused on MOT algorithm that uses IoU based data association, our approach of finding location to place adversarial bounding box is generally applicable to other association mechanisms (e.g., appearance-based matching).",
"Our AE generation algorithm against YOLOv3 should also be applicable to other object detection models with modest adaptations.",
"We plan to provide reference implementations of more real-world end-to-end visual perception pipelines to pave the way for future adversarial learning research in self-driving scenarios.",
"In this work, we are the first to study adversarial machine learning attacks against the complete visual perception pipeline in autonomous driving, i.e., both object detection and MOT.",
"We discover a novel attack technique, tracker hijacking, that exploits the tracking error reduction process in MOT and can enable successful AEs on as few as one frame to move an existing object in to or out of the headway of an autonomous vehicle to cause potential safety hazards.",
"The evaluation results show that on average when 3 frames are attacked, our attack can have a nearly 100% success rate while attacks that blindly target object detection only have up to 25%.",
"The source code and data is all available at (Github).",
"Our discovery and results strongly suggest that MOT should be systematically considered and incorporated into future adversarial machine learning research targeting the visual perception in autonomous driving.",
"Our work initiates the first research effort along this direction, and we hope that it can inspire more future research into this largely overlooked research perspective."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.25,
0.1860465109348297,
0.09836065024137497,
0.290909081697464,
0.04444443807005882,
0.1111111044883728,
0.17391304671764374,
0.10169491171836853,
0.0937499925494194,
0.1515151411294937,
0.08695651590824127,
0.08510638028383255,
0.1599999964237213,
0,
0.038461532443761826,
0.0833333283662796,
0.22580644488334656,
0.06896550953388214,
0.09090908616781235,
0.03999999538064003,
0.1764705777168274,
0,
0.08163265138864517,
0.0833333283662796,
0.3720930218696594,
0.072727270424366,
0.05714285373687744,
0.038461532443761826,
0.0615384578704834,
0,
0.08695651590824127,
0.16326530277729034,
0.06779660284519196,
0.03389830142259598,
0.1538461446762085,
0,
0.07017543166875839,
0.09090908616781235,
0.07692307233810425,
0.05882352590560913,
0.25,
0.35555556416511536,
0.06896550953388214,
0.04255318641662598,
0,
0.1904761791229248,
0.10256409645080566
] | rJl31TNYPr | true | [
"We study the adversarial machine learning attacks against the Multiple Object Tracking mechanisms for the first time. "
] |
[
"Self-supervised learning (SlfSL), aiming at learning feature representations through ingeniously designed pretext tasks without human annotation, has achieved compelling progress in the past few years.",
"Very recently, SlfSL has also been identified as a promising solution for semi-supervised learning (SemSL) since it offers a new paradigm to utilize unlabeled data.",
"This work further explores this direction by proposing a new framework to seamlessly couple SlfSL with SemSL.",
"Our insight is that the prediction target in SemSL can be modeled as the latent factor in the predictor for the SlfSL target.",
"Marginalizing over the latent factor naturally derives a new formulation which marries the prediction targets of these two learning processes.",
"By implementing this framework through a simple-but-effective SlfSL approach -- rotation angle prediction, we create a new SemSL approach called Conditional Rotation Angle Prediction (CRAP).",
"Specifically, CRAP is featured by adopting a module which predicts the image rotation angle \\textbf{conditioned on the candidate image class}.",
"Through experimental evaluation, we show that CRAP achieves superior performance over the other existing ways of combining SlfSL and SemSL.",
"Moreover, the proposed SemSL framework is highly extendable.",
"By augmenting CRAP with a simple SemSL technique and a modification of the rotation angle prediction task, our method has already achieved the state-of-the-art SemSL performance.",
"The recent success of deep learning is largely attributed to the availability of a large amount of labeled data.",
"However, acquiring high-quality labels can be very expensive and time-consuming.",
"Thus methods that can leverage easily accessible unlabeled data become extremely attractive.",
"Semisupervised learning (SemSL) and self-supervised learning (SlfSL) are two learning paradigms that can effectively utilize massive unlabeled data to bring improvement to predictive models.",
"SemSL assumes that a small portion of training data is provided with annotations and the research question is how to use the unlabeled training data to generate additional supervision signals for building a better predictive model.",
"In the past few years, various SemSL approaches have been developed in the context of deep learning.",
"The current state-of-the-art methods, e.g. MixMatch (Berthelot et al., 2019) , unsupervised data augmentation (Li et al., 2018) , converge to the strategy of combining multiple SemSL techniques, e.g. Π-Model (Laine & Aila, 2017) , Mean Teacher (Tarvainen & Valpola, 2017) , mixup (Zhang et al., 2018) , which have been proved successful in the past literature.",
"SlfSL aims for a more ambitious goal of learning representation without any human annotation.",
"The key assumption in SlfSL is that a properly designed pretext predictive task which can be effortlessly derived from data itself can provide sufficient supervision to train a good feature representation.",
"In the standard setting, the feature learning process is unaware of the downstream tasks, and it is expected that the learned feature can benefit various recognition tasks.",
"SlfSL also offers a new possibility for SemSL since it suggests a new paradigm of using unlabeled data, i.e., use them for feature training.",
"Recent work has shown great potential in this direction.",
"This work further advances this direction by proposing a new framework to seamlessly couple SlfSL with SemSL.",
"The key idea is that the prediction target in SemSL can serve as a latent factor in the course of predicting the pretext target in a SlfSL approach.",
"The connection between the predictive targets of those two learning processes can be established through marginalization over the latent factor, which also implies a new framework of SemSL.",
"The key component in this framework is a module that predicts the pretext target conditioned on the target of SemSL.",
"In this preliminary work, we implement this module by extending the rotation angle prediction method, a recently proposed SlfSL approach for image recognition.",
"Specifically, we make its prediction conditioned on each candidate image class, and we call our method Conditional Rotation Angle Prediction (CRAP).",
"The proposed framework is also highly extendable.",
"It is compatible with many SemSL and SlfSL approaches.",
"To demonstrate this, we further extend CRAP by using a simple SemSL technique and a modification to the rotation prediction task.",
"Through experimental evaluation, we show that the proposed CRAP achieves significantly better performance than the other SlfSL-based SemSL approaches, and the extended CRAP is on par with the state-of-the-art SemSL methods.",
"In summary, the main contributions of this paper are as follows:",
"• We propose a new SemSL framework which seamlessly couples SlfSL and SemSL.",
"It points out a principal way of upgrading a SlfSL method to a SemSL approach.",
"• Implementing this idea with a SlfSL approach, we create a new SemSL approach (CRAP) that can achieve superior performance than other SlfSL-based SemSL methods.",
"• We further extend CRAP with a SemSL technique and an improvement over the SlfSL task.",
"The resulted new method achieves the state-of-the-art performance of SemSL.",
"In this work, we introduce a framework for effectively coupling SemSL with SlfSL.",
"The proposed CRAP method is an implementation of this framework and it shows compelling performance on several benchmark datasets compared to other SlfSL-based SemSL methods.",
"Furthermore, two extensions are incorporated into CRAP to create an improved method which achieves comparable performance to the state-of-the-art SemSL methods."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0
] | [
0.10810810327529907,
0.10810810327529907,
0.06666666269302368,
0.06451612710952759,
0.1249999925494194,
0,
0.12903225421905518,
0.12121211737394333,
0.0952380895614624,
0.1666666567325592,
0.13333332538604736,
0.08695651590824127,
0,
0.1764705777168274,
0.1395348757505417,
0.13793103396892548,
0.03448275476694107,
0.07407406717538834,
0.0476190447807312,
0.17142856121063232,
0,
0,
0.06666666269302368,
0.05714285373687744,
0.10256409645080566,
0.19354838132858276,
0.05714285373687744,
0.1818181723356247,
0,
0.1818181723356247,
0.1818181723356247,
0.20512820780277252,
0.0833333283662796,
0.07999999821186066,
0,
0.0555555522441864,
0.27586206793785095,
0.08695651590824127,
0.07692307233810425,
0.10526315122842789,
0.060606054961681366
] | BJxoz1rKwr | true | [
"Coupling semi-supervised learning with self-supervised learning and explicitly modeling the self-supervised task conditioned on the semi-supervised one"
] |
[
"Ranking is a central task in machine learning and information retrieval.",
"In this task, it is especially important to present the user with a slate of items that is appealing as a whole.",
"This in turn requires taking into account interactions between items, since intuitively, placing an item on the slate affects the decision of which other items should be chosen alongside it.\n",
"In this work, we propose a sequence-to-sequence model for ranking called seq2slate.",
"At each step, the model predicts the next item to place on the slate given the items already chosen.",
"The recurrent nature of the model allows complex dependencies between items to be captured directly in a flexible and scalable way.",
"We show how to learn the model end-to-end from weak supervision in the form of easily obtained click-through data.",
"We further demonstrate the usefulness of our approach in experiments on standard ranking benchmarks as well as in a real-world recommendation system.",
"Ranking a set of candidate items is a central task in machine learning and information retrieval.",
"Many existing ranking systems are based on pointwise estimators, where the model assigns a score to each item in a candidate set and the resulting slate is obtained by sorting the list according to item scores ).",
"Such models are usually trained from click-through data to optimize an appropriate loss function BID17 .",
"This simple approach is computationally attractive as it only requires a sort operation over the candidate set at test (or serving) time, and can therefore scale to large problems.",
"On the other hand, in terms of modeling, pointwise rankers cannot easily express dependencies between ranked items.",
"In particular, the score of an item (e.g., its probability of being clicked) often depends on the other items in the slate and their joint placement.",
"Such interactions between items can be especially dominant in the common case where display area is limited or when strong position bias is present, so that only a few highly ranked items get the user's attention.",
"In this case it may be preferable, for example, to present a diverse set of items at the top positions of the slate in order to cover a wider range of user interests.",
"A significant amount of work on learning-to-rank does consider interactions between ranked items when training the model.",
"In pairwise approaches a classifier is trained to determine which item should be ranked first within a pair of items (e.g., BID13 BID17 BID6 .",
"Similarly, in listwise approaches the loss depends on the full permutation of items (e.g., BID7 BID47 .",
"Although these losses consider inter-item dependencies, the ranking function itself is pointwise, so at inference time the model still assigns a score to each item which does not depend on scores of other items.",
"There has been some work on trying to capture interactions between items in the ranking scores themselves (e.g., BID29 BID22 BID49 BID32 BID8 .",
"Such approaches can, for example, encourage a pair of items to appear next to (or far from) each other in the resulting ranking.",
"Approaches of this type often assume that the relationship between items takes a simple form (e.g., submodular) in order to obtain tractable inference and learning algorithms.",
"Unfortunately, this comes at the expense of the model's expressive power.",
"In this paper, we present a general, scalable approach to ranking, which naturally accounts for high-order interactions.",
"In particular, we apply a sequence-to-sequence (seq2seq) model BID35 to the ranking task, where the input is the list of candidate items and the output is the resulting ordering.",
"Since the output sequence corresponds to ranked items on the slate, we call this model sequence-to-slate (seq2slate).",
"The order in which the input is processed can significantly affect the performance of such models BID39 .",
"For this reason, we often assume the availability of a base (or \"production\") ranker with which the input sequence is ordered (e.g., a simple pointwise method that ignores the interactions we seek to model), and view the output of our model as a re-ranking of the items.To address the seq2seq problem, we build on the recent success of recurrent neural networks (RNNs) in a wide range of applications (e.g., BID35 .",
"This allows us to use a deep model to capture rich dependencies between ranked items, while keeping the computational cost of inference manageable.",
"More specifically, we use pointer networks, which are seq2seq models with an attention mechanism for pointing at positions in the input BID38 .",
"We show how to train the network end-to-end to directly optimize several commonly used ranking measures.",
"To this end, we adapt RNN training to use weak supervision in the form of click-through data obtained from logs, instead of relying on ground-truth rankings, which are much more expensive to obtain.",
"Finally, we demonstrate the usefulness of the proposed approach in a number of learning-to-rank benchmarks and in a large-scale, real-world recommendeation system.",
"We presented a novel seq2slate approach to ranking sets of items.",
"We found the formalism of pointer-networks particularly suitable for this setting.",
"We addressed the challenge of training the model from weak user feedback to improve the ranking quality.",
"Our experiments show that the proposed approach is highly scalable and can deliver significant improvements in ranking results.",
"Our work can be extended in several directions.",
"In terms of architecture, we aim to explore the Transformer network BID36 in place of the RNN.",
"Several variants can potentially improve the performance of our model, including beam-search inference BID44 , and training with Actor-Critic BID2 or SeaRNN BID21 ) and it will be interesting to study their performance in the ranking setting.",
"Finally, an interesting future work direction will be to study off-policy correction BID16 Since the terms are continuous (and smooth) in S for all j and π <j , so is the entire function."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0,
0,
0.04878048226237297,
0.08695651590824127,
0,
0,
0.13793103396892548,
0,
0,
0,
0.1538461446762085,
0,
0,
0,
0,
0.05128204822540283,
0.0714285671710968,
0,
0,
0,
0,
0.060606054961681366,
0,
0,
0.0714285671710968,
0,
0,
0,
0.029411761090159416,
0.060606054961681366,
0.12121211737394333,
0.07692307233810425,
0.0952380895614624,
0,
0,
0.09090908616781235,
0.07692307233810425,
0,
0,
0.07692307233810425,
0,
0.045454543083906174
] | HkgHk3RctX | true | [
"A pointer network architecture for re-ranking items, learned from click-through logs."
] |
[
"While momentum-based methods, in conjunction with the stochastic gradient descent, are widely used when training machine learning models, there is little theoretical understanding on the generalization error of such methods.",
"In practice, the momentum parameter is often chosen in a heuristic fashion with little theoretical guidance.",
"In this work, we use the framework of algorithmic stability to provide an upper-bound on the generalization error for the class of strongly convex loss functions, under mild technical assumptions.",
"Our bound decays to zero inversely with the size of the training set, and increases as the momentum parameter is increased.",
"We also develop an upper-bound on the expected true risk, in terms of the number of training steps, the size of the training set, and the momentum parameter.",
"A fundamental issue for any machine learning algorithm is its ability to generalize from the training dataset to the test data.",
"A classical framework used to study the generalization error in machine learning is PAC learning BID0 BID1 .",
"However, the associated bounds using this approach can be conservative.",
"Recently, the notion of uniform stability, introduced in the seminal work of Bousquet and Elisseeff BID2 , is leveraged to analyze the generalization error of the stochastic gradient method (SGM) BID3 .",
"The result in BID3 ) is a substantial step forward, since SGM is widely used in many practical systems.",
"This method is scalable, robust, and widely adopted in a broad range of problems.To accelerate the convergence of SGM, a momentum term is often added in the iterative update of the stochastic gradient BID4 .",
"This approach has a long history, with proven benefits in various settings.",
"The heavy-ball momentum method was first introduced by Polyak BID5 , where a weighted version of the previous update is added to the current gradient update.",
"Polyak motivated his method by its resemblance to a heavy ball moving in a potential well defined by the objective function.",
"Momentum methods have been used to accelerate the backpropagation algorithm when training neural networks BID6 .",
"Intuitively, adding momentum accelerates convergence by circumventing sharp curvatures and long ravines of the sublevel sets of the objective function BID7 .",
"For example, Ochs et al. has presented an illustrative example to show that the momentum can potentially avoid local minima BID8 .",
"Nesterov has proposed an accelerated gradient method, which converges as O(1/k 2 ) where k is the number of iterations (Nesterov, 1983) .",
"However, the Netstrov momentum does not seem to improve the rate of convergence for stochastic gradient (Goodfellow et al., 2016, Section 8.3.3) .",
"In this work, we focus on the heavy-ball momentum.Although momentum methods are well known to improve the convergence in SGM, their effect on the generalization error is not well understood.",
"In this work, we first build upon the framework in BID3 to obtain a bound on the generalization error of SGM with momentum (SGMM) for the case of strongly convex loss functions.",
"Our bound is independent of the number of training iterations and decreases inversely with the size of the training set.",
"Secondly, we develop an upper-bound on the optimization error, which quantifies the gap between the empirical risk of SGMM and the global optimum.",
"Our bound can be made arbitrarily small by choosing sufficiently many iterations and a sufficiently small learning rate.",
"Finally, we establish an upper-bound on the expected true risk of SGMM as a function of various problem parameters.",
"We note that the class of strongly convex loss functions appears in several important machine learning problems, including linear and logistic regression with a weight decay regularization term.Other related works: convergence analysis of first order methods with momentum is studied in (Nesterov, 1983; BID11 BID12 BID13 BID14 BID15 BID16 BID17 .",
"Most of these works consider the deterministic setting for gradient update.",
"Only a few works have analyzed the stochastic setting BID15 BID16 BID17 .",
"Our convergence analysis results are not directly comparable with these works due to their different assumptions regarding the properties of loss functions.",
"In particular, we analyze the convergence of SGMM for a smooth and strongly convex loss function as in BID3 , which is new.First-order methods with noisy gradient are studied in BID18 and references therein.",
"In BID18 , the authors show that there exists linear regression problems for which SGM outperforms SGMM in terms of convergence.Our main focus in this work is on the generalization, and hence true risk, of SGMM.",
"We are aware of only one similar work in this regard, which provides stability bounds for quadratic loss functions BID19 .",
"In this paper, we obtain stability bounds for the general case of strongly convex loss functions.",
"In addition, unlike BID19 , our results show that machine learning models can be trained for multiple epochs of SGMM with bounded generalization errors.",
"We study the generalization error and convergence of SGMM for the class of strongly convex loss functions, under mild technical conditions.",
"We establish an upper-bound on the generalization error, which decreases with the size of the training set, and increases as the momentum parameter is increased.",
"Secondly, we analyze the convergence of SGMM during training, by establishing an upper-bound on the gap between the empirical risk of SGMM and the global minimum.",
"Our proposed bound reduces to a classical bound on the optimization error of SGM BID20 for convex functions, when the momentum parameter is set to zero.",
"Finally, we establish an upper-bound on the expected difference between the true risk of SGMM and the global minimum of the empirical risk, and illustrate how it scales with the number of training steps and the size of the training set.",
"Although our results are established for the case when the learning rate is constant, they can be easily extended to the case when the learning rate decreases with the number of iterations.",
"We also present experimental evaluations on the notMNIST dataset and show that the numerical plots are consistent with our theoretical bounds on the generalization error and the convergence gap."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.11428571492433548,
0.1818181723356247,
0,
0.1599999964237213,
0.07407406717538834,
0,
0,
0,
0.125,
0,
0.1764705926179886,
0.1111111044883728,
0.20000000298023224,
0.07999999821186066,
0,
0.07999999821186066,
0.07407406717538834,
0.0714285671710968,
0.13333332538604736,
0.0624999962747097,
0.11428571492433548,
0.0952380895614624,
0,
0,
0,
0.07407407462596893,
0.11764705181121826,
0,
0.0714285671710968,
0.10256409645080566,
0,
0,
0,
0.06666666269302368,
0,
0.1428571343421936,
0,
0.06896551698446274,
0.05714285373687744,
0.06666666269302368,
0.06666666269302368
] | S1lwRjR9YX | true | [
"Stochastic gradient method with momentum generalizes."
] |
[
"Imitation learning from demonstrations usually relies on learning a policy from trajectories of optimal states and actions.",
"However, in real life expert demonstrations, often the action information is missing and only state trajectories are available.",
"We present a model-based imitation learning method that can learn environment-specific optimal actions only from expert state trajectories.",
"Our proposed method starts with a model-free reinforcement learning algorithm with a heuristic reward signal to sample environment dynamics, which is then used to train the state-transition probability.",
"Subsequently, we learn the optimal actions from expert state trajectories by supervised learning, while back-propagating the error gradients through the modeled environment dynamics.",
"Experimental evaluations show that our proposed method successfully achieves performance similar to (state, action) trajectory-based traditional imitation learning methods even in the absence of action information, with much fewer iterations compared to conventional model-free reinforcement learning methods.",
"We also demonstrate that our method can learn to act from only video demonstrations of expert agent for simple games and can learn to achieve desired performance in less number of iterations.",
"Reinforcement learning(RL) involves training an agent to learn a policy that accomplishes a certain task in an environment.",
"The objective of reinforcement learning is to maximize the expected future reward Sutton & Barto (1998) from a guiding signal.",
"BID11 showed that neural networks can be used to approximate state-action value functions used by an agent to perform discrete control based on a guiding reward.",
"This was demonstrated in Atari games where the score was used as the reward signal.",
"Similarly, continuous control of robotics arm was achieved by BID9 minimizing the distance between end-effector and target location.",
"Following these, other methods such as BID20 were proposed to improve the sample efficiency of modelfree algorithms with theoretical guarantees of policy improvement in each step.",
"These algorithms assume that a guiding reward signal is available for the agent to learn optimal behavior for a certain task.",
"However, in most cases of natural learning, such guiding signal is not present and learning is performed by imitating an expert behavior.Imitation learning involves copying the behavior of an expert agent to accomplish the desired task.",
"In the conventional imitation learning setting, a set of expert trajectories providing states and optimal actions τ = {s 0 , a 0 , s 1 , a 1 , ..., s n , a n ) performed by an expert agent π E are available but the reward (or cost function), r E (s, a) used to achieve the expert behavior is not available.",
"The goal is to learn a new policy π, which imitates the expert behavior by maximizing the likelihood of given demonstration trajectories.A straightforward way for imitation learning is to direct learn the optimal action to perform given the current state as proposed by Pomerleau (1991); BID2 .",
"The policy π can learn to imitate the expert behavior by maximizing likelihood of the condition distribution of action given states p(a|s).",
"This can be achieved by simply training a parameterized function (neural networks for instance) with state and action pairs from the expert trajectories.",
"Since this involves end-to-end supervised learning, training is much more sample-efficient compared to reinforcement learning and overcomes inherent problems in model-free methods such as credit assignment BID22 ).",
"However, since behavior cloning learns optimal action from a single state value only, it is unaware of the future state distribution the current action will produce.",
"Thus, errors are compounded in the future states leading to undesired agent behavior as shown by BID18 ; BID17 .",
"Therefore, numerous training samples are required for behavior cloning to reduce errors in action prediction required for satisfactory imitation learning.The second approach to imitation learning involves setting up exploration in a Markov Decision Process(MDP) setting.",
"The goal then is to recover a reward signal that best explains the expert trajectories.",
"BID12 first introduced Inverse Reinforcement Learning(IRL), where the goal is to find a reward signalr from the trajectories such that the expert is uniquely optimal.",
"After computing this estimated reward signal, usually, a model-free reinforcement learning performed to obtain the desired policy imitating the expert behavior by maximizing the expected discounted reward E π ( t γ tr (s t , a t )).",
"While this alleviates the problem of compounding errors as in behavior cloning, BID25 showed that estimating a unique reward function from state and action trajectories is an ill-posed problem.Following the success of Generative Adversarial Networks(GANs) BID3 ) in various fields of machine learning, adversarial learning has also been shown incorporated in the imitation learning framework.",
"The recent work on Generative Adversarial Imitation Leaning or GAIL by BID4 showed that model-free reinforcement learning using the discriminator as a cost function can learn to imitate the expert agent with much less number of demonstrated trajectories compared to behavior cloning.",
"Following the success of GAIL, there have extensions by BID0 to model-based generative imitation learning using a differentiable dynamics model of the environment.",
"Robust imitation policy strategies using a combination of variational autoencoders BID7 ; BID16 ) and GAIL has also been proposed by BID23 .The",
"previous works assume that the expert trajectories consist of both action and state values from the optimal agent. However",
", optimal actions are usually not available in real-world imitation learning. For example",
", we often learn tasks like skipping, jump rope, gymnastics, etc. just by watching other expert humans perform the task. In this case",
", the optimal expert trajectories only consist of visual input, in other words, the consecutive states of the expert human with no action information. We learn to",
"jump rope by trying to reproduce actions that result in state trajectories similar to the state trajectories observed from the expert. This requires",
"exploring the environment in a structured fashion to learn the dynamics of the rope (for jump rope) which then enables executing optimal actions to imitate the expert behavior. The recent work",
"of BID10 presents learning from observations only with focus to transferring skills learned from source domain to an unseen target domain, using rewards obtained by feature tracking for model-free reinforcement learning.Inspired by the above method of learning in humans, we present a principled way of learning to imitate an expert from state information only, with no action information available. We first learn",
"a distribution of the next state from the current state trajectory, used to estimate a heuristic reward signal enabling model-free exploration. The state, action",
"and next states information from modelfree exploration is used to learn a dynamics model of the environment. For the case of learning",
"in humans, this is similar to performing actions for replicating the witnessed expert state trajectories, which in turn gives information about the dynamics of the environment. Once this forward model",
"is learned, we try to find the action that maximizes the likelihood of next state. Since the forward model",
"gives a function approximation for the environment dynamics, we can back propagate errors through it to perform model-based policy update by end to end supervised learning. We demonstrate that our",
"proposed network can reach, with fewer iterations, the level close to an expert agent behavior (which is a pre-trained actor network or manually provided by humans), and compare it with reinforcement learning using a hand-crafted reward or a heuristics reward that is based on prediction error of next state learned from the optimal state trajectories of the expert.",
"We presented a model-based imitation learning method that can learn to act from expert state trajectories in the absence of action information.",
"Our method uses trajectories sampled from the modelfree policy exploration to train a dynamics model of the environment.",
"As model-free policy is enriched through time, the forward model can better approximate the actual environment dynamics, which leads to improved gradient flow, leading to better model-based policy update which is trained in a supervised fashion from expert state trajectories.",
"In the ideal case, when dynamics model perfectly approximates the environment, our proposed method is equivalent to behavior cloning, even in the absence of action information.",
"We demonstrate that the proposed method learns the desired policy in less number of iterations compared conventional model-free methods.",
"We also show that once the dynamics model is trained it can be used to transfer learning for other tasks in a similar environment in an end-to-end supervised manner.",
"Future work includes tighter integration of the model-based learning and the model-free learning for higher data efficiency by sharing information (1) between the model-free policy π mf and the model-based policy π mb and (2) between the next state predictor p(s t+1 |s t ) and the dynamics model p(s t+1 |s t , a t ) and (3) improving the limitations of compounding errors and requirement of large number of demonstration, by adversarial training which can maximize likelihood of future state distributions as well."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.3030303120613098,
0.1666666567325592,
0.277777761220932,
0.23255813121795654,
0.3589743673801422,
0.23076923191547394,
0.17391303181648254,
0.29411762952804565,
0.2631579041481018,
0.1428571343421936,
0.12903225421905518,
0.1111111044883728,
0.1860465109348297,
0.21621620655059814,
0.2916666567325592,
0.2769230604171753,
0.25,
0.2631579041481018,
0.1463414579629898,
0.1304347813129425,
0.19512194395065308,
0.1621621549129486,
0.1702127605676651,
0.24242423474788666,
0.25,
0.19607841968536377,
0.1818181723356247,
0.24137930572032928,
0.41025641560554504,
0.09756097197532654,
0.2222222238779068,
0.25806450843811035,
0.09756097197532654,
0.2926829159259796,
0.2702702581882477,
0.5333333015441895,
0.2647058665752411,
0.20512819290161133,
0.42105263471603394,
0.40909090638160706,
0.22857142984867096,
0.21739129722118378,
0.25,
0.4000000059604645,
0.4000000059604645,
0.26923075318336487,
0.3333333432674408,
0.1666666567325592,
0.3913043439388275,
0.1621621549129486
] | S1GDXzb0b | true | [
"Learning to imitate an expert in the absence of optimal actions learning a dynamics model while exploring the environment."
] |
[
"Recent research has proposed the lottery ticket hypothesis, suggesting that for a deep neural network, there exist trainable sub-networks performing equally or better than the original model with commensurate training steps.",
"While this discovery is insightful, finding proper sub-networks requires iterative training and pruning.",
"The high cost incurred limits the applications of the lottery ticket hypothesis.",
"We show there exists a subset of the aforementioned sub-networks that converge significantly faster during the training process and thus can mitigate the cost issue.",
"We conduct extensive experiments to show such sub-networks consistently exist across various model structures for a restrictive setting of hyperparameters (e.g., carefully selected learning rate, pruning ratio, and model capacity). ",
"As a practical application of our findings, we demonstrate that such sub-networks can help in cutting down the total time of adversarial training, a standard approach to improve robustness, by up to 49% on CIFAR-10 to achieve the state-of-the-art robustness.",
"Pruning has served as an important technique for removing redundant structure in neural networks (Han et al., 2015b; a; Li et al., 2016; He et al., 2017) .",
"Properly pruning can reduce cost in computation and storage without harming performance.",
"However, pruning was until recently only used as a post-processing procedure, while pruning at initialization was believed ineffective (Han et al., 2015a; Li et al., 2016) .",
"Recently, proposed the lottery ticket hypothesis, showing that for a deep neural network there exist sub-networks, when trained from certain initialization obtained by pruning, performing equally or better than the original model with commensurate convergence rates.",
"Such pairs of sub-networks and initialization are called winning tickets.",
"This phenomenon indicates it is possible to perform pruning at initialization.",
"However, finding winning tickets still requires iterative pruning and excessive training.",
"Its high cost limits the application of winning tickets.",
"Although shows that winning tickets converge faster than the corresponding full models, it is only observed on small networks, such as a convolutional neural network (CNN) with only a few convolution layers.",
"In this paper, we show that for a variety of model architectures, there consistently exist such sub-networks that converge significantly faster when trained from certain initialization after pruning.",
"We call these boosting tickets.",
"We observe the standard technique introduced in for identifying winning tickets does not always find boosting tickets.",
"In fact, the requirements are more restrictive.",
"We extensively investigate underlining factors that affect such boosting effect, considering three stateof-the-art large model architectures: VGG-16 (Simonyan & Zisserman, 2014) , ResNet-18 (He et al., 2016) , and WideResNet (Zagoruyko & Komodakis, 2016) .",
"We conclude that the boosting effect depends principally on three factors:",
"(i) learning rate,",
"(ii) pruning ratio, and",
"(iii) network capacity; we also demonstrate how these factors affect the boosting effect.",
"By controlling these factors, after only one training epoch on CIFAR-10, we are able to obtain 90.88%/90.28% validation/test accuracy (regularly requires >30 training epochs) on WideResNet-34-10 when 80% parameters are pruned.",
"We further show that the boosting tickets have a practical application in accelerating adversarial training, an effective but expensive defensive training method for obtaining robust models against adversarial examples.",
"Adversarial examples are carefully perturbed inputs that are indistinguishable from natural inputs but can easily fool a classifier (Szegedy et al., 2013; Goodfellow et al., 2015) .",
"We first show our observations on winning and boosting tickets extend to the adversarial training scheme.",
"Furthermore, we observe that the boosting tickets pruned from a weakly robust model can be used to accelerate the adversarial training process for obtaining a strongly robust model.",
"On CIFAR-10 trained with WideResNet-34-10, we manage to save up to 49% of the total training time (including both pruning and training) compared to the regular adversarial training process.",
"Our contributions are summarized as follows:",
"1. We demonstrate that there exists boosting tickets, a special type of winning tickets that significantly accelerate the training process while still maintaining high accuracy.",
"2. We conduct extensive experiments to investigate the major factors affecting the performance of boosting tickets.",
"3. We demonstrate that winning tickets and boosting tickets exist for adversarial training scheme as well.",
"4. We show that pruning a non-robust model allows us to find winning/boosting tickets for a strongly robust model, which enables accelerated adversarial training process.",
"2 BACKGROUND AND RELATED WORK",
"Not knowledge distillation.",
"It may seem that winning tickets and boosting tickets behave like knowledge distillation (Ba & Caruana, 2014; Hinton et al., 2015) where the learned knowledge from a large model is transferred to a small model.",
"This conjecture may explain the boosting effects as the pruned model quickly recover the knowledge from the full model.",
"However, the lottery ticket framework seems to be distinctive to knowledge distillation.",
"If boosting tickets simply transfer knowledge from the full model to the pruned model, then an FGSM-based adversarially trained model should not find tickets that improves the robustness of the sub-model against PGD attacks, as the full model itself is vulnerable to PGD attacks.",
"Yet in Section 4.1 we observe an FGSM-based adversarially trained model still leads to boosting tickets that accelerates PGD-based adversarial training.",
"We believe the cause of boosting tickets requires further investigation in the future.",
"Accelerate adversarial training.",
"Recently, Shafahi et al. (2019) propose to reduce the training time for PGD-based adversarial training by recycling the gradients computed for parameter updates and constructing adversarial examples.",
"While their approach focuses on reducing the computational time for each epoch, our method focuses more on the convergence rate (i.e., reduce the number of epochs required for convergence).",
"Therefore, our approach is compatible with theirs, making it a promising future direction to combine both to further reduce the training time.",
"In this paper, we investigate boosting tickets, sub-networks coupled with certain initialization that can be trained with significantly faster convergence rate.",
"As a practical application, in the adversarial training scheme, we show pruning a weakly robust model allows to find boosting tickets that can save up to 49% of the total training time to obtain a strongly robust model that matches the state-ofthe-art robustness.",
"Finally, it is an interesting direction to investigate whether there is a way to find boosting tickets without training the full model beforehand, as it is technically not necessary."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.20408162474632263,
0.0624999962747097,
0.13333332538604736,
0.2857142686843872,
0.2745097875595093,
0.14814814925193787,
0,
0.06451612710952759,
0.0952380895614624,
0.2222222238779068,
0.06896550953388214,
0.13333332538604736,
0.06666666269302368,
0.1428571343421936,
0.2448979616165161,
0.260869562625885,
0.0833333283662796,
0.17142856121063232,
0.07692307233810425,
0.07999999821186066,
0.13333332538604736,
0,
0.08695651590824127,
0.0624999962747097,
0.04081632196903229,
0.1702127605676651,
0.0476190410554409,
0.22857142984867096,
0.1860465109348297,
0.22727271914482117,
0,
0.23255813121795654,
0.23529411852359772,
0.05882352590560913,
0.3255814015865326,
0,
0,
0.19607841968536377,
0.1764705777168274,
0.13333332538604736,
0.22641508281230927,
0.09756097197532654,
0.19354838132858276,
0,
0.0952380895614624,
0.17777776718139648,
0.19999998807907104,
0.20512819290161133,
0.307692289352417,
0.27272728085517883
] | Sye2c3NYDB | true | [
"We show the possibility of pruning to find a small sub-network with significantly higher convergence rate than the full model."
] |
[
"Disentangled representations, where the higher level data generative factors are reflected in disjoint latent dimensions, offer several benefits such as ease of deriving invariant representations, transferability to other tasks, interpretability, etc.",
"We consider the problem of unsupervised learning of disentangled representations from large pool of unlabeled observations, and propose a variational inference based approach to infer disentangled latent factors.",
"We introduce a regularizer on the expectation of the approximate posterior over observed data that encourages the disentanglement.",
"We also propose a new disentanglement metric which is better aligned with the qualitative disentanglement observed in the decoder's output.",
"We empirically observe significant improvement over existing methods in terms of both disentanglement and data likelihood (reconstruction quality). \n\n",
"Feature representations of the observed raw data play a crucial role in the success of machine learning algorithms.",
"Effective representations should be able to capture the underlying (abstract or high-level) latent generative factors that are relevant for the end task while ignoring the inconsequential or nuisance factors.",
"Disentangled feature representations have the property that the generative factors are revealed in disjoint subsets of the feature dimensions, such that a change in a single generative factor causes a highly sparse change in the representation.",
"Disentangled representations offer several advantages -(i",
") Invariance: it is easier to derive representations that are invariant to nuisance factors by simply marginalizing over the corresponding dimensions, (",
"ii) Transferability: they are arguably more suitable for transfer learning as most of the key underlying generative factors appear segregated along feature dimensions,",
"(iii) Interpretability: a human expert may be able to assign meanings to the dimensions,",
"(iv) Conditioning and intervention: they allow for interpretable conditioning and/or intervention over a subset of the latents and observe the effects on other nodes in the graph.",
"Indeed, the importance of learning disentangled representations has been argued in several recent works BID5 BID37 BID50 .Recognizing",
"the significance of disentangled representations, several attempts have been made in this direction in the past BID50 . Much of the",
"earlier work assumes some sort of supervision in terms of: (i) partial",
"or full access to the generative factors per instance BID48 BID58 BID35 BID33 , (ii) knowledge",
"about the nature of generative factors (e.g, translation, rotation, etc.) BID29 BID11 , (iii) knowledge",
"about the changes in the generative factors across observations (e.g., sparse changes in consecutive frames of a Video) BID25 BID57 BID21 BID14 BID32 , (iv) knowledge",
"of a complementary signal to infer representations that are conditionally independent of it 1 BID10 BID41 BID53 . However, in most",
"real scenarios, we only have access to raw observations without any supervision about the generative factors. It is a challenging",
"problem and many of the earlier attempts have not been able to scale well for realistic settings BID51 BID15 BID13 ) (see also, ).Recently, BID9 proposed",
"an approach to learn a generative model with disentangled factors based on Generative Adversarial Networks (GAN) BID24 , however implicit generative models like GANs lack an effective inference mechanism 2 , which hinders its applicability to the problem of learning disentangled representations. More recently, proposed",
"an approach based on Variational AutoEncoder (VAE) BID34 for inferring disentangled factors. The inferred latents using",
"their method (termed as β-VAE ) are empirically shown to have better disentangling properties, however the method deviates from the basic principles of variational inference, creating increased tension between observed data likelihood and disentanglement. This in turn leads to poor",
"quality of generated samples as observed in .In this work, we propose a",
"principled approach for inference of disentangled latent factors based on the popular and scalable framework of amortized variational inference BID34 BID55 BID23 BID49 powered by stochastic optimization BID30 BID34 BID49 . Disentanglement is encouraged",
"by introducing a regularizer over the induced inferred prior. Unlike β-VAE , our approach does",
"not introduce any extra conflict between disentanglement of the latents and the observed data likelihood, which is reflected in the overall quality of the generated samples that matches the VAE and is much better than β-VAE. This does not come at the cost of",
"higher entanglement and our approach also outperforms β-VAE in disentangling the latents as measured by various quantitative metrics. We also propose a new disentanglement",
"metric, called Separated Attribute Predictability or SAP, which is better aligned with the qualitative disentanglement observed in the decoder's output compared to the existing metrics."
] | [
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.08163265138864517,
0.45454543828964233,
0.2857142686843872,
0.4324324131011963,
0.21052631735801697,
0.17142856121063232,
0.09090908616781235,
0.13636362552642822,
0,
0.04999999329447746,
0.1428571343421936,
0.1249999925494194,
0.23255813121795654,
0.1621621549129486,
0.17142856121063232,
0.06451612710952759,
0.05714285373687744,
0.1111111044883728,
0.1395348757505417,
0.10526315122842789,
0.10256409645080566,
0.13333332538604736,
0.23728813230991364,
0.2857142686843872,
0.1428571343421936,
0.1875,
0.3333333432674408,
0.1764705777168274,
0.15094339847564697,
0.4285714328289032,
0.0952380895614624
] | H1kG7GZAW | true | [
"We propose a variational inference based approach for encouraging the inference of disentangled latents. We also propose a new metric for quantifying disentanglement. "
] |
[
"In multiagent systems (MASs), each agent makes individual decisions but all of them contribute globally to the system evolution.",
"Learning in MASs is difficult since each agent's selection of actions must take place in the presence of other co-learning agents.",
"Moreover, the environmental stochasticity and uncertainties increase exponentially with the increase in the number of agents.",
"Previous works borrow various multiagent coordination mechanisms into deep learning architecture to facilitate multiagent coordination.",
"However, none of them explicitly consider action semantics between agents that different actions have different influences on other agents.",
"In this paper, we propose a novel network architecture, named Action Semantics Network (ASN), that explicitly represents such action semantics between agents.",
"ASN characterizes different actions' influence on other agents using neural networks based on the action semantics between them.",
"ASN can be easily combined with existing deep reinforcement learning (DRL) algorithms to boost their performance.",
"Experimental results on StarCraft II micromanagement and Neural MMO show ASN significantly improves the performance of state-of-the-art DRL approaches compared with several network architectures.",
"Deep reinforcement learning (DRL) (Sutton & Barto, 2018) has achieved a lot of success at finding optimal policies to address single-agent complex tasks (Mnih et al., 2015; Silver et al., 2017) .",
"However, there also exist a lot of challenges in multiagent systems (MASs) since agents' behaviors are influenced by each other and the environment exhibits more stochasticity and uncertainties (Claus & Boutilier, 1998; Hu & Wellman, 1998; Bu et al., 2008; Hauwere et al., 2016) .",
"Recently, a number of deep multiagent reinforcement learning (MARL) approaches have been proposed to address complex multiagent problems, e.g., coordination of robot swarm systems (Sosic et al., 2017) and autonomous cars (Oh et al., 2015) .",
"One major class of works incorporates various multiagent coordination mechanisms into deep multiagent learning architecture (Lowe et al., 2017; Yang et al., 2018; Palmer et al., 2018) .",
"Lowe et al. (2017) proposed a centralized actor-critic architecture to address the partial observability in MASs.",
"They also incorporate the idea of joint action learner (JAL) (Littman, 1994) to facilitate multiagent coordination.",
"Later, Foerster et al. (2018) proposed Counterfactual Multi-Agent Policy Gradients (COMA) motivated from the difference reward mechanism (Wolpert & Tumer, 2001) to address the challenges of multiagent credit assignment.",
"Recently, Yang et al. (2018) proposed applying mean-field theory (Stanley, 1971) to solve large-scale multiagent learning problems.",
"More recently, Palmer et al. (2018) extended the idea of leniency (Potter & Jong, 1994; Panait et al., 2008) to deep MARL and proposed the retroactive temperature decay schedule to address stochastic rewards problems.",
"However, all these works ignore the natural property of the action influence between agents, which we aim to exploit to facilitate multiagent coordination.",
"Another class of works focus on specific network structure design to address multiagent learning problems (Sunehag et al., 2018; Rashid et al., 2018; Sukhbaatar et al., 2016; Singh et al., 2019) .",
"Sunehag et al. (2018) designed a value-decomposition network (VDN) to learn an optimal linear value decomposition from the team reward signal based on the assumption that the joint actionvalue function for the system can be additively decomposed into value functions across agents.",
"Later, Rashid et al. (2018) relaxed the linear assumption in VDN by assuming that the Q-values of individual agents and the global one are also monotonic, and proposed QMIX employing a network that estimates joint action-values as a complex non-linear combination of per-agent values.",
"Recently, proposed the relational deep RL to learn environmental entities relations.",
"However, they considered the entity relations on the pixel-level of raw visual data, which ignores the natural property of the influence of actions between agents.",
"Tacchetti et al. (2019) proposed a novel network architecture called Relational Forward Model (RFM) for predictive modeling in multiagent learning.",
"RFM takes a semantic description of the state of an environment as input, and outputs either an action prediction for each agent or a prediction of the cumulative reward of an episode.",
"However, RFM does not consider from the perspective of the influence of each action on other agents.",
"OpenAI designed network structures to address multiagent learning problems in a famous Multiplayer Online Battle Arena (MOBA), Dota2.",
"They used a scaled-up version of PPO (Schulman et al. (2017) ), adopted the attention mechanism to compute the weight of choosing the target unit, with some of information selected from all information as input.",
"However, this selection is not considered from the influence of each action on other agents.",
"There are also a number of works designing network structures for multiagent communication (Sukhbaatar et al., 2016; Singh et al., 2019) .",
"However, none of the above works explicitly leverage the fact that an agent's different actions may have different impacts on other agents, which is a natural property in MASs and should be considered in the decision-making process.",
"In multiagent settings, each agent's action set can be naturally divided into two types: one type containing actions that affect environmental information or its private properties and the other type containing actions that directly influence other agents (i.e., their private properties).",
"Intuitively, the estimation of performing actions with different types should be evaluated separately by explicitly considering different information.",
"We refer to the property that different actions may have different impacts on other agents as action semantics.",
"We can leverage the action semantics information to improve an agent's policy/Q network design toward more efficient multiagent learning.",
"To this end, we propose a novel network architecture, named Action Semantics Network (ASN) to characterize such action semantics for more efficient multiagent coordination.",
"The main contributions of this paper can be summarized as follows:",
"1) to the best of our knowledge, we are the first to explicitly consider action semantics and design a novel network to extract it to facilitate learning in MASs;",
"2) ASN can be easily combined with existing DRL algorithms to boost its learning performance;",
"3) experimental results * on StarCraft II micromanagement (Samvelyan et al., 2019) and Neural MMO (Suarez et al., 2019) show our ASN leads to better performance compared with state-of-the-art approaches in terms of both convergence speed and final performance.",
"We propose a new network architecture, ASN, to facilitate more efficient multiagent learning by explicitly investigating the semantics of actions between agents.",
"To the best of our knowledge, ASN is the first to explicitly characterize the action semantics in MASs, which can be easily combined with various multiagent DRL algorithms to boost the learning performance.",
"ASN greatly improves the performance of state-of-the-art DRL methods compared with a number of network architectures.",
"In this paper, we only consider the direct action influence between any of two agents.",
"As future work, it is worth investigating how to model the action semantics among more than two agents.",
"Another interesting direction is to consider the action semantics between agents in continuous action spaces.",
"0.2 0.4 0.6 0.8 1.0 1.2 1.4",
"Step"
] | [
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.10526315122842789,
0.15789473056793213,
0.1249999925494194,
0,
0.4444444477558136,
0.19512194395065308,
0.9444444179534912,
0.05714285373687744,
0.1395348757505417,
0,
0.06779660284519196,
0.038461532443761826,
0,
0.11428570747375488,
0.11428570747375488,
0.08510638028383255,
0.0555555522441864,
0.07999999821186066,
0.19999998807907104,
0.045454539358615875,
0.14035087823867798,
0.10526315122842789,
0.13333332538604736,
0.25641024112701416,
0.051282044500112534,
0.09302324801683426,
0.3529411852359772,
0,
0.04081632196903229,
0.3529411852359772,
0,
0.1538461446762085,
0.178571417927742,
0.1111111044883728,
0.3888888955116272,
0.15789473056793213,
0.09302324801683426,
0,
0.13636362552642822,
0.05882352590560913,
0.07407406717538834,
0.19512194395065308,
0.1666666567325592,
0.11764705181121826,
0.29411762952804565,
0.21621620655059814,
0.3030303120613098,
0
] | ryg48p4tPH | true | [
"Our proposed ASN characterizes different actions' influence on other agents using neural networks based on the action semantics between them."
] |
[
"Spiking neural networks are being investigated both as biologically plausible models of neural computation and also as a potentially more efficient type of neural network.",
"While convolutional spiking neural networks have been demonstrated to achieve near state-of-the-art performance, only one solution has been proposed to convert gated recurrent neural networks, so far.\n",
"Recurrent neural networks in the form of networks of gating memory cells have been central in state-of-the-art solutions in problem domains that involve sequence recognition or generation.",
"Here, we design an analog gated LSTM cell where its neurons can be substituted for efficient stochastic spiking neurons.",
"These adaptive spiking neurons implement an adaptive form of sigma-delta coding to convert internally computed analog activation values to spike-trains.",
"For such neurons, we approximate the effective activation function, which resembles a sigmoid.",
"We show how analog neurons with such activation functions can be used to create an analog LSTM cell; networks of these cells can then be trained with standard backpropagation.",
"We train these LSTM networks on a noisy and noiseless version of the original sequence prediction task from Hochreiter & Schmidhuber (1997), and also on a noisy and noiseless version of a classical working memory reinforcement learning task, the T-Maze.",
"Substituting the analog neurons for corresponding adaptive spiking neurons, we then show that almost all resulting spiking neural network equivalents correctly compute the original tasks.",
"With the manifold success of biologically inspired deep neural networks, networks of spiking neurons are being investigated as potential models for computational and energy efficiency.",
"Spiking neural networks mimic the pulse-based communication in biological neurons, where in brains, neurons spike only sparingly -on average 1-5 spikes per second BID0 .",
"A number of successful convolutional neural networks based on spiking neurons have been reported BID7 BID13 BID6 BID15 BID12 , with varying degrees of biological plausibility and efficiency.",
"Still, while spiking neural networks have thus been applied successfully to solve image-recognition tasks, many deep learning algorithms use recurrent neural networks (RNNs), in particular using Long Short-Term Memory (LSTM) layers BID11 .",
"Compared to convolutional neural networks, LSTMs use memory cells to store selected information and various gates to direct the flow of information in and out of the memory cells.",
"To date, the only spike-based version of LSTM has been realized for the IBM TrueNorth platform Shrestha et al.: this work proposes a method to approximate LSTM specifically for the constrains of this neurosynaptic platform by means of a store-and-release mechanism that synchronizes the modules.",
"This translates to a frame-based rate coding computation, which is less biological plausible and energy efficient than an asynchronous approach, as the one proposed here.Here, we demonstrate a gated recurrent spiking neural network that corresponds to an LSTM unit with a memory cell and an input gate.",
"Analogous to recent work on spiking neural networks (O 'Connor et al., 2013; BID6 BID19 BID20 , we first train a network with modified LSTM units that computes with analog values, and show how this LSTMnetwork can be converted to a spiking neural network using adaptive stochastic spiking neurons that encode and decode information in spikes using a form of sigma-delta coding BID18 BID19 BID14 .",
"In particular, we develop a binary version of the adaptive sigma-delta coding proposed in BID19 : we approximate the shape of the transfer function that this model of fast-adapting spiking neurons exhibits, and we assemble the analog LSTM units using just this transfer function.",
"Since input-gating is essential for maintaining memorized information without interference from unrelated sensory inputs BID11 , and to reduce complexity, we model a limited LSTM neuron consisting of an input cell, input gating cell, a Constant Error Carousel (CEC) and output cell.",
"The resultant analog LSTM network is then trained on a number of classical sequential tasks, such as the noise-free and noisy Sequence Prediction and the T-Maze task BID11 BID1 .",
"We demonstrate how nearly all the corresponding spiking LSTM neural networks correctly compute the same function as the analog version.Note that the conversion of gated RNNs to spike-based computation implies a conversion of the neural network from a time step based behavior to the continuous-time domain: for RNNs, this means having to consider the continuous signal integration in the memory cell.",
"We solve the time conversion problem by approximating analytically the spiking memory cell behavior through time.Together, this work is a first step towards using spiking neural networks in such diverse and challenging tasks like speech recognition and working memory cognitive tasks.",
"Gating is a crucial ingredient in recurrent neural networks that are able to learn long-range dependencies BID11 .",
"Input gates in particular allow memory cells to maintain information over long stretches of time regardless of the presented -irrelevant -sensory input BID11 .",
"The ability to recognize and maintain information for later use is also that which makes gated RNNs like LSTM so successful in the great many sequence related problems, ranging from natural language processing to learning cognitive tasks BID1 .To",
"transfer deep neural networks to networks of spiking neurons, a highly effective method has been to map the transfer function of spiking neurons to analog counterparts and then, once the network has been trained, substitute the analog neurons with spiking neurons O'Connor et al. (2013); BID6 ; BID19 . Here",
", we showed how this approach can be extended to gated memory units, and we demonstrated this for an LSTM network comprised of an input gate and a CEC. Hence",
", we effectively obtained a low-firing rate asynchronous LSTM network.The most complex aspect of a gating mechanism turned out to be the requirement of a differentiable gating function, for which analog networks use sigmoidal units. We approximated",
"the activation function for a stochastic Adaptive Spiking Neurons, which, as many real neurons, approximates a half-sigmoid (Fig. 1) . We showed how the",
"stochastic spiking neuron has an effective activation even below the resting threshold ϑ 0 . This provides a gradient",
"for training even in that area. The resultant LSTM network",
"was then shown to be suitable for learning sequence prediction tasks, both in a noise-free and noisy setting, and a standard working memory reinforcement learning task. The learned network could",
"then successfully be mapped to its spiking neural network equivalent for at least 90% of the trained analog networks. Figure 6 : The values of",
"the analog CECs and spiking CECs for the noise-free Sequence Prediction (left, only one CEC cell was used) and noise-free T-maze (right, three CEC cells were used) tasks. The spiking CEC is the internal",
"state Ŝ of the output cell of the Adaptive Spiking LSTM. We also showed that some difficulties arise in the conversion of analog to spiking LSTM.",
"function is derived for steady-state adapted spiking neurons, and this difference causes an error that may be large for fast changing signals. Analog-valued spikes as explored",
"in BID19 could likely resolve this issue, at the expense of some loss of representational efficiency. Although the adaptive spiking LSTM implemented in this paper does not have output gates BID11 , they can be included by following the same approach used for the input gates: a modulation of the synaptic strength. The reasons for our approach are",
"multiple: first of all, most of the tasks do not really require output gates; moreover, modulating each output synapse independently is less intuitive and biologically plausible than for the input gates. A similar argument can be made for",
"the forget gates, which were not included in the original LSTM formulation: here, the solution consists in modulating the decaying factor of the CEC. Finally, which gates are really needed in an LSTM network is still an open question, with answers depending on the kind of task to be solved BID9 BID21 .",
"does not use gates to solve many working memory RL tasks BID16 . In addition, it has been shown by",
"BID4 ; BID9 that a combination of input and forget gates can outperform LSTM on a variety of tasks while reducing the LSTM complexity."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1666666567325592,
0.25,
0.10526315122842789,
0.24242423474788666,
0.1818181723356247,
0.0714285671710968,
0.19999998807907104,
0.13333332538604736,
0.21052631735801697,
0.10256409645080566,
0.052631575614213943,
0.0952380895614624,
0.17777776718139648,
0.1111111044883728,
0.15686273574829102,
0.4912280738353729,
0.20588235557079315,
0.16326530277729034,
0.15094339847564697,
0.1428571343421936,
0.3125,
0.15686273574829102,
0.3125,
0.05405404791235924,
0.15094339847564697,
0.19607843458652496,
0.2926829159259796,
0.2448979616165161,
0.1111111044883728,
0.1818181723356247,
0.23999999463558197,
0.1428571343421936,
0.20512820780277252,
0.04999999701976776,
0.2631579041481018,
0.14999999105930328,
0.0952380895614624,
0,
0.13333332538604736,
0.05882352590560913,
0.1666666567325592
] | rk8R_JWRW | true | [
" We demonstrate a gated recurrent asynchronous spiking neural network that corresponds to an LSTM unit."
] |
[
"CNNs are widely successful in recognizing human actions in videos, albeit with a great cost of computation.",
"This cost is significantly higher in the case of long-range actions, where a video can span up to a few minutes, on average.",
"The goal of this paper is to reduce the computational cost of these CNNs, without sacrificing their performance.",
"We propose VideoEpitoma, a neural network architecture comprising two modules: a timestamp selector and a video classifier.",
"Given a long-range video of thousands of timesteps, the selector learns to choose only a few but most representative timesteps for the video.",
"This selector resides on top of a lightweight CNN such as MobileNet and uses a novel gating module to take a binary decision: consider or discard a video timestep.",
"This decision is conditioned on both the timestep-level feature and the video-level consensus.",
"A heavyweight CNN model such as I3D takes the selected frames as input and performs video classification.",
"Using off-the-shelf video classifiers, VideoEpitoma reduces the computation by up to 50\\% without compromising the accuracy.",
"In addition, we show that if trained end-to-end, the selector learns to make better choices to the benefit of the classifier, despite the selector and the classifier residing on two different CNNs.",
"Finally, we report state-of-the-art results on two datasets for long-range action recognition: Charades and Breakfast Actions, with much-reduced computation.",
"In particular, we match the accuracy of I3D by using less than half of the computation.\n\n",
"A human can skim through a minute-long video in just a few seconds, and still grasp its underlying story (Szelag et al., 2004) .",
"This extreme efficiency of the human visual and temporal information processing beggars belief.",
"The unmatched trade-off between efficiency and accuracy can be attributed to visual attention (Szelag et al., 2004 ) -one of the hallmarks of the human cognitive abilities.",
"This raises the question: can we build an efficient, yet effective, neural model to recognize minutes-long actions in videos?",
"A possible solution is building efficient neural networks, which have a demonstrated record of success in the efficient recognition of images (Howard et al., 2017) .",
"Such models have been successful for recognizing short-range actions in datasets such as HMDB (Kuehne et al., 2011) and UCF-101 (Soomro et al., 2012) , where analysis of only a few frames would suffice (Schindler & Van Gool, 2008) .",
"In contrast, a long-range action can take up to a few minutes to unfold (Hussein et al., 2019a) .",
"Current methods fully process the long-range action video to successfully recognize it.",
"Thus, for long-range actions, the major computational bottleneck is the sheer number of video frames to be processed.",
"Another potential solution is attention.",
"Not only it is biologically plausible, but also it is used in a wide spectrum of computer vision tasks, such as image classification , semantic segmentation (Oktay et al., 2018) , action recognition (Wang et al., 2018) and temporal localization (Nguyen et al., 2018) .",
"Attention has also been applied to language understanding (Lin et al., 2017) and graph modeling (Veličković et al., 2017) .",
"Most of these methods use soft-attention, where the insignificant visual signals are least attended to.",
"However, such signals are still fully processed by the neural network and hence no reduction on the computation cost is obtained.",
"Neural gating is a more conceivable choice to realize the efficiency, by completely dropping the insignificant visual signals.",
"Recently, there has been a notable success in making neural gating differentiable (Maddison et al., 2016) .",
"Neural gating is applied to conditional learning, and is used to gate network layers (Veit & Belongie, 2018) , convolutional channels (Bejnordi et al., 2019) , and more (Shetty et al., 2017) .",
"That begs the question: can neural gating help in reducing the computational cost of recognizing minutes-long actions?",
"That is to say, can we learn a gating mechanism to consider or discard video frames, conditioned on their video?",
"Motivated by the aforementioned questions, we propose VideoEpitoma, a two-stage neural network for efficient classification of long-range actions without compromising the performance.",
"The first stage is the timestep selector, in which, many timesteps of a long-range action are efficiently represented by lightweight CNN, such as MobileNet (Howard et al., 2017; Sandler et al., 2018; Howard et al., 2019) .",
"Then, a novel gating module learns to select only the most significant timesteps -practically achieving the epitoma (Latin for summary) of this video.",
"In the second stage, a heavyweight CNN, such as I3D (Carreira & Zisserman, 2017) , is used to effectively represent only the selected timesteps, followed by temporal modeling for the video-level recognition.",
"This paper contributes the followings: i.",
"VideoEpitoma, a neural network model for efficient recognition of long-range actions.",
"The proposed model uses a novel gating module for timestep selection, conditioned on both the input frame and its context.",
"ii.",
"Off the shelf, our timestamp selector benefits video classification models and yields signification reduction in computation costs.",
"We also show that if trained end-to-end, the timestep selector learns better gating mechanism to the benefit of the video classifier.",
"iii.",
"We present state-of-the-art results on two long-range action recognition benchmarks: Charades (Sigurdsson et al., 2016) and Breakfast Actions (Kuehne et al., 2014) with significant reductions in the computational costs.",
"In this paper, we proposed VideoEpitoma, a neural model for efficient recognition of long-range actions in videos.",
"We stated the fundamental differences between long-range actions and their shortrange counterparts (Hussein et al., 2019a; b) .",
"And we highlighted how these differences influenced our way of find a solution for an efficient recognition of such videos.",
"The outcome of this paper is VideoEpitoma, a neural model with the ability to retain the performance of off-the-shelf CNN classifiers at a fraction of the computational budget.",
"This paper concludes the following.",
"Rather than building an efficient CNN video classifier, we opted for an efficient selection of the most salient parts of the video, followed by an effective classification of only these salient parts.",
"For a successful selection, we proposed a novel gating module, able to select timesteps conditioned on their importance to their video.",
"We experimented how this selection benefits off-the-shelf CNN classifiers.",
"Futher more, we showed how VideoEpitoma, i.e. both the selector and the classifier, improves even further when trained end-to-end.",
"Finally, we experimented VideoEpitoma on two benchmarks for longrange actions.",
"We compared against realted methods to hightight the efficiency of videoEpitoma for saving the computation, and its effectiveness of recognizing the long-range actions."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0
] | [
0,
0.04999999329447746,
0,
0.1818181723356247,
0.10810810327529907,
0.1818181723356247,
0.06666666269302368,
0.1764705777168274,
0.12121211737394333,
0.09090908616781235,
0.10810810327529907,
0.12121211737394333,
0.09756097197532654,
0.12903225421905518,
0.045454539358615875,
0,
0,
0.072727270424366,
0,
0.06666666269302368,
0.11428570747375488,
0,
0.11320754140615463,
0.11428570747375488,
0,
0.10526315122842789,
0.11428570747375488,
0.05714285373687744,
0.13636362552642822,
0.05882352590560913,
0.1621621549129486,
0.1538461446762085,
0.039215680211782455,
0.19999998807907104,
0.2083333283662796,
0,
0.06896550953388214,
0.21052631735801697,
0.17142856121063232,
0.1621621549129486,
0.043478257954120636,
0.05714285373687744,
0.0555555522441864,
0.05405404791235924,
0,
0,
0.2380952388048172,
0.1111111044883728,
0,
0.05405404791235924,
0.0714285671710968,
0.10526315122842789
] | Skx1dhNYPS | true | [
"Efficient video classification using frame-based conditional gating module for selecting most-dominant frames, followed by temporal modeling and classifier."
] |
[
"Human annotation for syntactic parsing is expensive, and large resources are available only for a fraction of languages.",
"A question we ask is whether one can leverage abundant unlabeled texts to improve syntactic parsers, beyond just using the texts to obtain more generalisable lexical features (i.e. beyond word embeddings).",
"To this end, we propose a novel latent-variable generative model for semi-supervised syntactic dependency parsing.",
"As exact inference is intractable, we introduce a differentiable relaxation to obtain approximate samples and compute gradients with respect to the parser parameters.",
"Our method (Differentiable Perturb-and-Parse) relies on differentiable dynamic programming over stochastically perturbed edge scores.",
"We demonstrate effectiveness of our approach with experiments on English, French and Swedish.",
"A dependency tree is a lightweight syntactic structure exposing (possibly labeled) bi-lexical relations between words BID77 BID24 , see Figure 1 .",
"This representation has been widely studied by the NLP community leading to very efficient state-of-the-art parsers BID30 BID12 BID43 , motivated by the fact that dependency trees are useful in downstream tasks such as semantic parsing BID66 , machine translation BID11 BID4 , information extraction BID9 BID42 , question answering BID8 and even as a filtering method for constituency parsing BID34 , among others.Unfortunately, syntactic annotation is a tedious and expensive task, requiring highly-skilled human annotators.",
"Consequently, even though syntactic annotation is now available for many languages, the datasets are often small.",
"For example, 31 languages in the Universal Dependency Treebank, 1 the largest dependency annotation resource, have fewer than 5,000 sentences, including such major languages as Vietnamese and Telugu.",
"This makes the idea of using unlabeled texts as an additional source of supervision especially attractive.In previous work, before the rise of deep learning, the semi-supervised parsing setting has been mainly tackled with two-step algorithms.",
"On the one hand, feature extraction methods first learn an intermediate representation using an unlabeled dataset which is then used as input to train a supervised parser BID35 BID83 BID7 BID73 .",
"On the other hand, the self-training and co-training methods start by learning a supervised parser that is then used to label extra data.",
"Then, the parser is retrained with this additional annotation BID68 BID25 BID50 .",
"Nowadays, unsupervised feature extraction is achieved in neural parsers by the means of word embeddings BID55 BID65 .",
"The natural question to ask is whether one can exploit unlabeled data in neural parsers beyond only inducing generalizable word representations.",
"Figure 1: Dependency tree example: each arc represents a labeled relation between the head word (the source of the arc) and the modifier word (the destination of the arc).",
"The first token is a fake root word.",
"Our method can be regarded as semi-supervised Variational Auto-Encoder (VAE, Kingma et al., 2014) .",
"Specifically, we introduce a probabilistic model (Section",
"3) parametrized with a neural network (Section 4).",
"The model assumes that a sentence is generated conditioned on a latent dependency tree.",
"Dependency parsing corresponds to approximating the posterior distribution over the latent trees within this model, achieved by the encoder component of VAE, see Figure 2a .",
"The parameters of the generative model and the parser (i.e. the encoder) are estimated by maximizing the likelihood of unlabeled sentences.",
"In order to ensure that the latent representation is consistent with treebank annotation, we combine the above objective with maximizing the likelihood of gold parse trees in the labeled data.Training a VAE via backpropagation requires marginalization over the latent variables, which is intractable for dependency trees.",
"In this case, previous work proposed approximate training methods, mainly differentiable Monte-Carlo estimation BID27 BID67 and score function estimation, e.g. REINFORCE BID80 .",
"However, REINFORCE is known to suffer from high variance BID56 .",
"Therefore, we propose an approximate differentiable Monte-Carlo approach that we call Differentiable Perturb-and-Parse (Section 5).",
"The key idea is that we can obtain a differentiable relaxation of an approximate sample by (1) perturbing weights of candidate dependencies and (2) performing structured argmax inference with differentiable dynamic programming, relying on the perturbed scores.",
"In this way we bring together ideas of perturb-and-map inference BID62 BID45 and continuous relaxation for dynamic programming BID53 .",
"Our model differs from previous works on latent structured models which compute marginal probabilities of individual edges BID26 ; BID41 .",
"Instead, we sample a single tree from the distribution that is represented with a soft selection of arcs.",
"Therefore, we preserve higher-order statistics, which can then inform the decoder.",
"Computing marginals would correspond to making strong independence assumptions.",
"We evaluate our semi-supervised parser on English, French and Swedish and show improvement over a comparable supervised baseline (Section 6).Our",
"main contributions can be summarized as follows: (1) we introduce a variational autoencoder for semi-supervised dependency parsing; (2) we propose the Differentiable Perturb-and-Parse method for its estimation; (3) we demonstrate the effectiveness of the approach on three different languages. In",
"short, we introduce a novel generative model for learning latent syntactic structures.",
"The fact that T is a soft selection of arcs, and not a combinatorial structure, does not impact the decoder.",
"Indeed, a GCN can be run over weighted graphs, the message passed between nodes is simply multiplied by the continuous weights.",
"This is one of motivations for using GCNs rather than a Recursive LSTMs BID74 in the decoder.",
"On the one hand, running a GCN with a matrix that represents a soft selection of arcs (i.e. with real values) has the same computational cost than using a standard adjacency matrix (i.e. with binary elements) if we use matrix multiplication on GPU.",
"13 On the other hand, a recursive network over a soft selection of arcs requires to build a O(n 2 ) set of RNN-cells that follow the dynamic programming chart where the possible inputs of a cell are multiplied by their corresponding weight in T, which is expensive and not GPU-friendly.",
"We presented a novel generative learning approach for semi-supervised dependency parsing.",
"We model the dependency structure of a sentence as a latent variable and build a VAE.",
"We hope to motivate investigation of latent syntactic structures via differentiable dynamic programming in neural networks.",
"Future work includes research for an informative prior for the dependency tree distribution, for example by introducing linguistic knowledge BID57 BID61 or with an adversarial training criterion BID46 .",
"This work could also be extended to the unsupervised scenario.where z is the sample.",
"As such, e ∼ N (0, 1) is an input of the neural network for which we do not need to compute partial derivatives.",
"This technique is called the reparametrization trick BID27 BID67 ."
] | [
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0,
0.04878048226237297,
0.07407406717538834,
0.11764705181121826,
0.307692289352417,
0.07999999821186066,
0,
0.025316452607512474,
0,
0,
0.09090908616781235,
0.0952380895614624,
0.05882352590560913,
0.0833333283662796,
0,
0.060606054961681366,
0,
0,
0.07407406717538834,
0,
0.09999999403953552,
0,
0.11428570747375488,
0,
0.15686273574829102,
0,
0.09090908616781235,
0.07692307233810425,
0.1702127605676651,
0.12903225421905518,
0,
0.06896550953388214,
0,
0.0952380895614624,
0.1249999925494194,
0.08510638028383255,
0,
0,
0.1249999925494194,
0,
0.04255318641662598,
0.1428571343421936,
0.08695651590824127,
0.07692307233810425,
0.2142857164144516,
0.05405404791235924,
0.07692307233810425,
0.1111111044883728,
0
] | BJlgNh0qKQ | true | [
"Differentiable dynamic programming over perturbed input weights with application to semi-supervised VAE"
] |
[
"DeConvNet, Guided BackProp, LRP, were invented to better understand deep neural networks.",
"We show that these methods do not produce the theoretically correct explanation for a linear model.",
"Yet they are used on multi-layer networks with millions of parameters.",
"This is a cause for concern since linear models are simple neural networks.",
"We argue that explanation methods for neural nets should work reliably in the limit of simplicity, the linear models.",
"Based on our analysis of linear models we propose a generalization that yields two explanation techniques (PatternNet and PatternAttribution) that are theoretically sound for linear models and produce improved explanations for deep networks.\n",
"Deep learning made a huge impact on a wide variety of applications BID8 BID24 BID9 BID15 BID10 BID18 and recent neural network classifiers have become excellent at detecting relevant signals (e.g., the presence of a cat) contained in input data points such as images by filtering out all other, nonrelevant and distracting components also present in the data.",
"This separation of signal and distractors is achieved by passing the input through many layers with millions of parameters and nonlinear activation functions in between, until finally at the output layer, these models yield a highly condensed version of the signal, e.g. a single number indicating the probability of a cat being in the image.While deep neural networks learn efficient and powerful representations, they are often considered a 'black-box'.",
"In order to better understand classifier decisions and to gain insight into how these models operate, a variety techniques have been proposed BID20 BID25 BID12 BID1 BID0 BID11 BID26 BID22 BID27 BID23 BID21 .",
"These methods for explaining classifier decisions operate under the assumption that it is possible to propagate the condensed output signal back through the classifier to arrive at something that shows how the relevant signal was encoded in the input and thereby explains the classifier decision.",
"Simply put, if the classifier detected a cat, the visualization should point to the cat-relevant aspects of the input image from the perspective of the network.",
"Techniques that are based on this principle include saliency maps from network gradients BID1 BID20 , DeConvNet (Zeiler & Fergus, 2014, DCN) , Guided BackProp (Springenberg et al., 2015, GBP) , Figure 1 : Illustration of explanation approaches.",
"Function and signal approximators visualize the explanation using the original color channels.",
"The attribution is visualized as a heat map of pixelwise contributions to the output Layer-wise Relevance Propagation (Bach et al., 2015, LRP) and the Deep Taylor Decomposition (Montavon et al., 2017, DTD) , Integrated Gradients BID23 and SmoothGrad BID21 .The",
"merit of explanation methods is often demonstrated by applying them to state-of-the-art deep learning models in the context of high dimensional real world data, such as ImageNet, where the provided explanation is intuitive to humans. Unfortunately",
", theoretical analysis as well as quantitative empirical evaluations of these methods are lacking.Deep neural networks are essentially a composition of linear transformations connected with nonlinear activation functions. Since approaches",
", such as DeConvNet, Guided BackProp, and LRP, back-propagate the explanations in a layer-wise fashion, it is crucial that the individual linear layers are handled correctly. In this work we",
"show that these gradient-based methods fail to recover the signal even for a single-layer architecture, i.e. a linear model. We argue that therefore",
"they cannot be expected to reliably explain a deep neural network and demonstrate this with quantitative and qualitative experiments. In particular, we provide",
"the following key contributions:• We analyze the performance of existing explanation approaches in the controlled setting of a linear model (Sections 2 and 3).• We categorize explanation",
"methods into three groups -functions, signals and attribution (see Fig. 1 ) -that require fundamentally different interpretations and are complementary in terms of information about the neural network (Section 3).• We propose two novel explanation",
"methods -PatternNet and PatternAttribution -that alleviate shortcomings of current approaches, as discovered during our analysis, and improve explanations in real-world deep neural networks visually and quantitatively (Sections 4 and 5).This presents a step towards a thorough",
"analysis of explanation methods and suggests qualitatively and measurably improved explanations. These are crucial requirements for reliable",
"explanation techniques, in particular in domains, where explanations are not necessarily intuitive, e.g. in health and the sciences BID16 .Notation and scope Scalars are lowercase letters",
"(i), column vectors are bold (u), element-wise multiplication is ( ). The covariance between u and v is cov [u, v] , the",
"covariance of u and i is cov [u, i] . The variance of a scalar random variable i is σ 2",
"i . Estimates of random variables will have a hat (û)",
". We analyze neural networks excluding the final soft-max",
"output layer. To allow for analytical treatment, we only consider networks",
"with linear neurons optionally followed by a rectified linear unit (ReLU), max-pooling or soft-max. We analyze linear neurons and nonlinearities independently such",
"that every neuron has its own weight vector. These restrictions are similar to those in the saliency map BID20",
", DCN (Zeiler & Fergus, 2014) , GBP (Springenberg Figure 2 : For linear models, i.e., a simple neural network, the weight vector does not explain the signal it detects BID5 . The data x = ya s + a d is color-coded w.r.t. the output y = w T",
"x. Only the signal s = ya s contributes",
"to y. The weight vector w",
"does not agree with the signal direction,",
"since its primary objective is canceling the distractor. Therefore, rotations of the basis vector a d of the distractor with",
"constant signal s lead to rotations of the weight vector (right). et al., 2015), LRP BID0 and DTD BID11 . Without loss of generality,",
"biases are considered constant neurons",
"to enhance clarity.",
"To evaluate the quality of the explanations, we focus on the task of image classification.",
"Nevertheless, our method is not restricted to networks operating on image inputs.",
"We used Theano BID2 and BID4 for our implementation.",
"We restrict the analysis to the well-known ImageNet dataset BID13 using the pre-trained VGG-16 model BID19 .",
"Images were rescaled and cropped to 224x224 pixels.",
"The signal estimators are trained on the first half of the training dataset.The vector v, used to measure the quality of the signal estimator ρ(x) in Eq. FORMULA5 , is optimized on the second half of the training dataset.",
"This enables us to test the signal estimators for generalization.",
"All the results presented here were obtained using the official validation set of 50000 samples.",
"The validation set was not used for training the signal estimators, nor for training the vector v to measure the quality.",
"Consequently our results are obtained on previously unseen data.The linear and the two component signal estimators are obtained by solving their respective closed form solutions (Eq.",
"(4) and Eq. FORMULA15 ).",
"With a highly parallelized implementation using 4 GPUs this could be done in 3-4 hours.",
"This can be considered reasonable given that several days are required to train the actual network.",
"The quality of a signal estimator is assessed with Eq. (1).",
"Solving it with the closed form solution is computationally prohibitive since it must be repeated for every single weight vector in the network.",
"Therefore we optimize the equivalent least-squares problem using stochastic mini-batch gradient descent with ADAM Kingma & Ba (2015) until convergence.",
"This was implemented on a NVIDIA Tesla K40 and took about 24 hours per optimized signal estimator.After learning to explain, individual explanations are computationally cheap since they can be implemented as a back-propagation pass with a modified weight vector.",
"As a result, our method produces explanations at least as fast as the work by BID3 on real time saliency.",
"However, our method has the advantage that it is not only applicable to image models but is a generalization of the theory commonly used in neuroimaging BID5 .",
"Measuring the quality of signal estimators In Fig. 3 we present the results from the correlation measure ρ(x), where higher values are better.",
"We use random directions as baseline signal estimators.",
"Clearly, this approach removes almost no correlation.",
"The filter-based estimator S w succeeds in removing some of the information in the first layer.",
"This indicates that the filters are similar to the patterns in this layer.",
"However, the gradient removes much less information in the higher layers.",
"Overall, it does not perform much better than the random estimator.",
"This implies that the weights do not correspond to the detected stimulus in a neural network.",
"Hence the implicit assumptions about the signal made by DeConvNet and Guided BackProp is not valid.",
"The optimized estimators remove much more of the correlations across the board.",
"For convolutional layers, S a and S a+− perform comparably in all but one layer.",
"The two component estimator S a+− is best in the dense layers.Image degradation The first experiment was a direct measurement of the quality of the signal estimators of individual neurons.",
"The second one is an indirect measurement of the quality, but it considers the whole network.",
"We measure how the prediction (after the soft-max) for the initially selected class changes as a function of corrupting more and more patches based on the ordering assigned by the attribution (see BID14 .",
"This is also related to the work by BID27 .",
"In this experiment, we split the image in non-overlapping patches of 9x9 pixels.",
"We compute the attribution and sum all the values within a patch.",
"We sort the patches in decreasing order based on the aggregate heat map value.",
"In step n = 1..100 we replace the first n patches with the their mean per color channel to remove the information in this patch.",
"Then, we measure how this influences the classifiers output.",
"We use the estimators from the previous experiment to obtain the function-signal attribution heat maps for evaluation.",
"A steeper decay indicates a better heat map.Results are shown in Fig. 4 .",
"The baseline, in which the patches are randomly ordered, performs worst.",
"The linear optimized estimator S a performs quite poorly, followed by the filter-based estimator S w .",
"The trivial signal estimator S x performs just slightly better.",
"However, the two component model S a+− leads to the fastest decrease in confidence in the original prediction by a large margin.",
"Its excellent quantitative performance is also backed up by the visualizations discussed next.Qualitative evaluation In FIG1 , we compare all signal estimators on a single input image.",
"For the trivial estimator S x , the signal is by definition the original input image and, thus, includes the distractor.",
"Therefore, its noisy attribution heat map shows contributions that cancel each other in the neural network.",
"The S w estimator captures some of the structure.",
"The optimized estimator S a results in slightly more structure but struggles on color information and produces dense heat maps.",
"The two component model S a+− on the right captures the original input during signal estimation and produces a crisp heat map of the attribution.",
"FIG2 shows the visualizations for six randomly selected images from ImageNet.",
"PatternNet is able to recover a signal close to the original without having to resort to the inclusion of additional rectifiers in contrast to DeConvNet and Guided BackProp.",
"We argue that this is due to the fact that the optimization of the pattern allows for capturing the important directions in input space.",
"This contrasts with the commonly used methods DeConvNet, Guided BackProp, LRP and DTD, for which the correlation experiment indicates that their implicit signal estimator cannot capture the true signal in the data.",
"Overall, the proposed approach produces the most crisp visualization in addition to being measurably better, as shown in the previous section.",
"Additonally, we also contrast our methods to the prediction-differences analysis by BID27 in the supplementary material.Relation to previous methods Our method can be thought of as a generalization of the work by BID5 , making it applicable on deep neural networks.",
"Remarkably, our proposed approach can solve the toy example in section 2 optimally while none of the previously published methods for deep learning are able to solve this BID0 BID11 BID21 BID23 BID27 BID3 BID26 BID22 .",
"Our method shares the idea that to explain a model properly one has to learn how to explain it with Zintgraf et al. FORMULA5 and BID3 .",
"Furthermore, since our approach is after training just as expensive as a single back-propagation step, it can be applied in a real-time context, which is also possible for the work done by BID3 but not for BID27 .",
"Understanding and explaining nonlinear methods is an important challenge in machine learning.",
"Algorithms for visualizing nonlinear models have emerged but theoretical contributions are scarce.",
"We have shown that the direction of the model gradient does not necessarily provide an estimate for the signal in the data.",
"Instead it reflects the relation between the signal direction and the distracting noise contributions ( Fig. 2) .",
"This implies that popular explanation approaches for neural networks (DeConvNet, Guided BackProp, LRP) do not provide the correct explanation, even for a simple linear model.",
"Our reasoning can be extended to nonlinear models.",
"We have proposed an objective function for neuron-wise explanations.",
"This can be optimized to correct the signal visualizations (PatternNet) and the decomposition methods (PatternAttribution) by taking the data distribution into account.",
"We have demonstrated that our methods constitute a theoretical, qualitative and quantitative improvement towards understanding deep neural networks."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.0833333283662796,
0.0714285671710968,
0,
0.1599999964237213,
0,
0.04878048226237297,
0.0615384578704834,
0.05714285373687744,
0.13636362552642822,
0.1702127605676651,
0.1249999925494194,
0,
0,
0.12765957415103912,
0.1395348757505417,
0.04999999701976776,
0.1463414579629898,
0.11764705181121826,
0.1764705777168274,
0.05714285373687744,
0,
0.04444444179534912,
0,
0,
0.05882352590560913,
0.13793103396892548,
0.09090908616781235,
0,
0,
0.06451612710952759,
0.06451612710952759,
0.13793103396892548,
0,
0.1111111044883728,
0,
0.1428571343421936,
0.11428570747375488,
0,
0.13333332538604736,
0,
0.1666666567325592,
0,
0.07692307233810425,
0.09999999403953552,
0.10256409645080566,
0.09090908616781235,
0,
0.06896550953388214,
0,
0,
0.07407406717538834,
0.0714285671710968,
0.17391303181648254,
0.12121211737394333,
0,
0.12244897335767746,
0.06451612710952759,
0.21621620655059814,
0,
0,
0,
0,
0.0833333283662796,
0,
0.08695651590824127,
0.14814814925193787,
0.07407406717538834,
0,
0.07692307233810425,
0.10526315122842789,
0.14814814925193787,
0.04999999701976776,
0.1904761791229248,
0,
0.08695651590824127,
0,
0.05714285373687744,
0,
0.07407406717538834,
0.07692307233810425,
0,
0.07692307233810425,
0,
0.12903225421905518,
0.09999999403953552,
0.06666666269302368,
0,
0,
0.0624999962747097,
0.05714285373687744,
0,
0.17142856121063232,
0.1249999925494194,
0,
0.06666666269302368,
0.1249999925494194,
0.08695651590824127,
0.22857142984867096,
0.13333332538604736,
0.25,
0,
0,
0.07407406717538834,
0.0555555522441864,
0.09999999403953552,
0,
0.0624999962747097,
0.06666666269302368
] | Hkn7CBaTW | true | [
"Without learning, it is impossible to explain a machine learning model's decisions."
] |
[
"Graph neural networks have shown promising results on representing and analyzing diverse graph-structured data such as social, citation, and protein interaction networks.",
"Existing approaches commonly suffer from the oversmoothing issue, regardless of whether policies are edge-based or node-based for neighborhood aggregation.",
"Most methods also focus on transductive scenarios for fixed graphs, leading to poor generalization performance for unseen graphs.",
"To address these issues, we propose a new graph neural network model that considers both edge-based neighborhood relationships and node-based entity features, i.e. Graph Entities with Step Mixture via random walk (GESM).",
"GESM employs a mixture of various steps through random walk to alleviate the oversmoothing problem and attention to use node information explicitly.",
"These two mechanisms allow for a weighted neighborhood aggregation which considers the properties of entities and relations.",
"With intensive experiments, we show that the proposed GESM achieves state-of-the-art or comparable performances on four benchmark graph datasets comprising transductive and inductive learning tasks.",
"Furthermore, we empirically demonstrate the significance of considering global information.",
"The source code will be publicly available in the near future.",
"Graphs are universal data representations that exist in a wide variety of real-world problems, such as analyzing social networks (Perozzi et al., 2014; Jia et al., 2017) , forecasting traffic flow (Manley, 2015; Yu et al., 2017) , and recommending products based on personal preferences (Page et al., 1999; Kim et al., 2019) .",
"Owing to breakthroughs in deep learning, recent graph neural networks (GNNs) (Scarselli et al., 2008) have achieved considerable success on diverse graph problems by collectively aggregating information from graph structures Xu et al., 2018; Gao & Ji, 2019) .",
"As a result, much research in recent years has focused on how to aggregate the feature representations of neighbor nodes so that the dependence of graphs is effectively utilized.",
"The majority of studies have predominantly depended on edges to aggregate the neighboring nodes' features.",
"These edge-based methods are premised on the concept of relational inductive bias within graphs (Battaglia et al., 2018) , which implies that two connected nodes have similar properties and are more likely to share the same label (Kipf & Welling, 2017) .",
"While this approach leverages graphs' unique property of capturing relations, it appears less capable of generalizing to new or unseen graphs (Wu et al., 2019b) .",
"To improve the neighborhood aggregation scheme, some studies have incorporated node information; They fully utilize node information and reduce the effects of relational (edge) information.",
"A recent approach, graph attention networks (GAT), employs the attention mechanism so that weights used for neighborhood aggregation differ according to the feature of nodes (Veličković et al., 2018) .",
"This approach has yielded impressive performance and has shown promise in improving generalization for unseen graphs.",
"Regardless of neighborhood aggregation schemes, most methods, however, suffer from a common problem where neighborhood information is considered to a limited degree (Klicpera et al., 2019) .",
"For example, graph convolutional networks (GCNs) (Kipf & Welling, 2017) only operate on data that are closely connected due to oversmoothing , which indicates the \"washing out\" of remote nodes' features via averaging.",
"Consequently, information becomes localized and access to global information is restricted (Xu et al., 2018) , leading to poor performance on datasets in which only a small portion is labeled .",
"In order to address the aforementioned issues, we propose a novel method, Graph Entities with Step Mixture via random walk (GESM), which considers information from all nodes in the graph and can be generalized to new graphs by incorporating random walk and attention.",
"Random walk enables our model to be applicable to previously unseen graph structures, and a mixture of random walks alleviates the oversmoothing problem, allowing global information to be included during training.",
"Hence, our method can be effective, particularly for nodes in the periphery or a sparsely labeled dataset.",
"The attention mechanism also advances our model by considering node information for aggregation.",
"This enhances the generalizability of models to diverse graph structures.",
"To validate our approach, we conducted experiments on four standard benchmark datasets: Cora, Citeseer, and Pubmed, which are citation networks for transductive learning, and protein-protein interaction (PPI) for inductive learning, in which test graphs remain unseen during training.",
"In addition to these experiments, we verified whether our model uses information of remote nodes by reducing the percentage of labeled data.",
"The experimental results demonstrate the superior performance of GESM on inductive learning as well as transductive learning for datasets.",
"Moreover, our model achieved enhanced accuracy for datasets with reduced label rates, indicating the contribution of global information.",
"The key contributions of our approach are as follows:",
"• We present graphs with step mixture via random walk, which can adaptively consider local and global information, and demonstrate its effectiveness through experiments on public benchmark datasets with few labels.",
"• We propose Graph Entities with Step Mixture via random walk (GESM), an advanced model which incorporates attention, and experimentally show that it is applicable to both transductive and inductive learning tasks, for both nodes and edges are utilized for the neighborhood aggregation scheme.",
"• We empirically demonstrate the importance of propagation steps by analyzing its effect on performance in terms of inference time and accuracy."
] | [
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.12121211737394333,
0.0624999962747097,
0,
0.30434781312942505,
0.4117647111415863,
0.13333332538604736,
0.10526315122842789,
0.08695651590824127,
0,
0.0714285671710968,
0.0833333283662796,
0.04999999701976776,
0.0714285671710968,
0.07692307233810425,
0.052631575614213943,
0.11428570747375488,
0.1463414579629898,
0.0714285671710968,
0.052631575614213943,
0.08695651590824127,
0.04999999701976776,
0.23529411852359772,
0.2926829159259796,
0,
0.07692307233810425,
0.17391303181648254,
0.04255318641662598,
0.05882352590560913,
0.06666666269302368,
0.12903225421905518,
0.09090908616781235,
0.1904761791229248,
0.15094339847564697,
0.1764705777168274
] | S1eWbkSFPS | true | [
"Simple and effective graph neural network with mixture of random walk steps and attention"
] |
[
"Basis pursuit is a compressed sensing optimization in which the l1-norm is minimized subject to model error constraints.",
"Here we use a deep neural network prior instead of l1-regularization.",
"Using known noise statistics, we jointly learn the prior and reconstruct images without access to ground-truth data.",
"During training, we use alternating minimization across an unrolled iterative network and jointly solve for the neural network weights and training set image reconstructions.",
"At inference, we fix the weights and pass the measurements through the network.",
"We compare reconstruction performance between unsupervised and supervised (i.e. with ground-truth) methods.",
"We hypothesize this technique could be used to learn reconstruction when ground-truth data are unavailable, such as in high-resolution dynamic MRI.",
"Deep learning in tandem with model-based iterative optimization [2] - [6] , i.e. model-based deep learning, has shown great promise at solving imaging-based inverse problems beyond the capabilities of compressed sensing [7] .",
"These networks typically require hundreds to thousands of examples for training, consisting of pairs of corrupted measurements and the desired ground-truth image.",
"The reconstruction is then trained in an end-to-end fashion, in which data are reconstructed with the network and compared to the ground-truth result.",
"in many cases, collecting a large set of fully sampled data for training is expensive, impractical, or impossible.",
"In this work, we present an approach to model-based deep learning without access to ground-truth data [8] - [10] .",
"We take advantage of (known) noise statistics for each training example and formulate the problem as an extension of basis pursuit denoising [11] with a deep convolutional neural network (CNN) prior in place of image sparsity.",
"During training, we jointly solve for the CNN weights and the reconstructed training set images.",
"At inference time, we fix the weights and pass the measured data through the network.",
"As proof of principle, we apply the technique to undersampled, multi-channel magnetic resonance imaging (MRI).",
"We compare our Deep Basis Pursuit (DBP) formulation with and without supervised learning, as well as to MoDL [6] , a recently proposed unrolled modelbased network that uses ground-truth data for training.",
"We show that in the unsupervised setting, we are able to approach the image reconstruction quality of supervised learning, thus opening the door to applications where collecting fully sampled data is not possible.",
"There are strong connections to iterative optimization and unrolled deep networks [10] , [18] , [19] .",
"Jointly optimizing over the images and weights could be viewed a non-linear extension to dictionary learning.",
"Nonetheless, there is a cost in reconstruction error when moving to unsupervised learning, highlighting the importance of a large training data set to offset the missing ground-truth information.",
"The choice of measurement loss function and data SNR may also greatly impact the quality.",
"Fortunately, in many practical settings there is an abundance of undersampled or corrupted measurement data available for training.",
"In conclusion, the combination of basis pursuit denoising and deep learning can take advantage of undersampled data and provide a means to learn model-based deep learning reconstructions without access to groundtruth images."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.05714285373687744,
0.13793103396892548,
0,
0.14999999105930328,
0,
0.25806450843811035,
0.10256409645080566,
0.23999999463558197,
0.10526315122842789,
0.1538461446762085,
0.0555555522441864,
0.277777761220932,
0.23076923191547394,
0.0624999962747097,
0,
0.060606054961681366,
0.16326530277729034,
0.1666666567325592,
0.12121211737394333,
0.05882352590560913,
0.09302324801683426,
0,
0.1111111044883728,
0.13333332538604736
] | BylRn72cUH | true | [
"We present an unsupervised deep learning reconstruction for imaging inverse problems that combines neural networks with model-based constraints."
] |
[
"Deep learning approaches usually require a large amount of labeled data to generalize.",
"However, humans can learn a new concept only by a few samples.",
"One of the high cogntition human capablities is to learn several concepts at the same time.",
"In this paper, we address the task of classifying multiple objects by seeing only a few samples from each category.",
"To the best of authors' knowledge, there is no dataset specially designed for few-shot multiclass classification.",
"We design a task of mutli-object few class classification and an environment for easy creating controllable datasets for this task.",
"We demonstrate that the proposed dataset is sound using a method which is an extension of prototypical networks.",
"Deep learning approaches are usually capable of solving a classification problem when a large labeled dataset is available during the training BID12 Sutskever et al., 2014; BID4 .",
"However, when a very few samples of a new category is shown to a trained classifier, it either fails to generalize or overfit on the new samples.",
"Humans, however, can easily generalize their prior knowledge to learn a new concept even with one sample.",
"Few-shot learning approaches are proposed to address this gap between human capablity of learning a new concept with only a very few labled samples and machine capablity in generalizing to a new concept.",
"mini-ImageNet (Vinyals et al., 2016) and tiered-Imagenet BID19 are two main datasets that are developed to help the research community addressing the problem of few-shot classification.",
"Although that human are capable of learning a new concept with only a very few samples.",
"Learning a few new concepts at the same time, and with only a very few samples of each is considered as a high cognition task BID1 and very challenging even for humans.It is yet an active area of study to know how human are capable of doing this.",
"There could be many factors involved in this high cognition process, and there are many hypothesis around this.",
"One popular hypothesis is that the brain is able to learn a good representation that has high capacity and can generalize well BID7 .",
"Studying the reasons behind human high cognitive capablity of learning a few new concepts in paralell and with only a very few samples, is out of the scope of this paper.",
"However, in this paper, we propose to extend the few shot learning problem to multi-class few shot classification problem and moving a step towards filling the gap between human cognitive capablity of learning multiple new concepts in paralel and with only a few samples, and machine learning approaches.To do so, our first step is to define a dataset and a setup to address this problem, and an evaluation metric to measure our progression towards solving this problem.We argue that the existing datasets are not desirable for this task.",
"Omniglot BID13 , mini-ImageNet, tiered-ImagaNet, are designed for single object classification.",
"Such datasets as, MS COCO BID15 and Pascal VOC BID5 have multiple object classes but they are not well suited for few-shot learning.",
"The issue is the high imbalance of class cooccurrence (for example, 'human' label occures with all other classes).",
"Therefore it is hard to prevent the learner from \"sneak peeking\" new concepts.To sum it up, this work's contribution is two-fold: 1.",
"We propose the new task of mutli-object few-shot classification to test model's ability to disentagle and represent several object on an image (see Section",
"3) and propose an extension to prototypical-style models to efficiently solve the task (Section 3.1);2.",
"We construct a new dataset which provides clean and controlled environment of multiobject images (see Section",
"4) and provide the framework, benchmarks and the code for the community to explore controlled scenarios and alternative few-shot classification tasks.",
"In this work we introduced a task of few-shot multi-object classification and an environment for generating datasets for this task.",
"We compared the proposed dataset to existing ones in singleobject case.",
"Then, we used a simple extension of prototypical networks to conduct experiments multi-object case.",
"We believe that this task will help diagnosing metric-learning models that need to disentangle several objects on an image.One of the future directions we are taking is to lift the limitation of known object order (Section 3.1).",
"Then we plan to use stronger feature extractors BID17 and extend the work to more natural data."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0
] | [
0.2142857164144516,
0.07692307233810425,
0.13333332538604736,
0.17142856121063232,
0.32258063554763794,
0.3636363446712494,
0.375,
0.2380952388048172,
0.21621620655059814,
0.0624999962747097,
0.19512194395065308,
0.14999999105930328,
0.19999998807907104,
0.2142857164144516,
0.06451612710952759,
0.1666666567325592,
0.24390242993831635,
0.23076923191547394,
0.07692307233810425,
0.21052631735801697,
0.12121211737394333,
0.1111111044883728,
0.2631579041481018,
0.13333332538604736,
0.3870967626571655,
0.1875,
0.375,
0.1538461446762085,
0.13793103396892548,
0.1599999964237213,
0.06451612710952759
] | H1gxgiA4uN | true | [
"We introduce a diagnostic task which is a variation of few-shot learning and introduce a dataset for it."
] |
[
"We present a new approach to defining a sequence loss function to train a summarizer by using a secondary encoder-decoder as a loss function, alleviating a shortcoming of word level training for sequence outputs.",
"The technique is based on the intuition that if a summary is a good one, it should contain the most essential information from the original article, and therefore should itself be a good input sequence, in lieu of the original, from which a summary can be generated.",
"We present experimental results where we apply this additional loss function to a general abstractive summarizer on a news summarization dataset.",
"The result is an improvement in the ROUGE metric and an especially large improvement in human evaluations, suggesting enhanced performance that is competitive with specialized state-of-the-art models.",
"Neural networks are a popular solution to the problem of text summarization, the task of taking as input a piece of natural language text, such as a paragraph or a news article, and generating a more succinct text that captures the most essential information from the original.",
"One popular type of neural network that has achieved state of the art results is an attentional encoderdecoder neural network See et al. (2017) ; Paulus et al. (2018) ; Celikyilmaz et al. (2018) .",
"In an encoder-decoder network, the encoder scans over the input sequence by ingesting one word token at a time to create an internal representation.",
"The decoder is trained to compute a probability distribution over next words conditioned on a sequence prefix.",
"A beam search decoder is typically used to find a high likelihood output sequence based on these conditional word probability distributions.",
"Since the next word depends heavily on previous words, the decoder has little hope of a correct distribution over next words unless it has the correct previous words.",
"Thus the decoder is typically trained using teacher forcing Williams & Zipser (1989) , where the reference sequence prefix is always given to the decoder at each decoding step.",
"In other words, regardless of what distributions are output by the decoder in training for timesteps (1, ..., t−1), at timestep t, it is given the reference sequence prefix (y Training at the sequence level can alleviate this discrepancy, but requires a differentiable loss function.",
"In the Related Work section we review previous efforts.",
"We present a novel approach to address the problem by defining a loss function at the sequence level using an encoder-decoder network as a loss function.",
"In training, the summarizer's beam search decoded output sequence is fed as input into another network called the recoder.",
"The recoder is an independent attentional encoder-decoder trained to produce the reference summary.",
"Our experiments show that adding the recoder as a loss function improves a general abstractive summarizer on the popular CNN/DailyMail dataset Hermann et al. (2015) ; Nallapati et al. (2016) , achieving significant improvements in the ROUGE metric and an especially large improvement in human evaluations.",
"We have presented the use of an encoder-decoder as a sophisticated loss function for sequence outputs in the problem of summarization.",
"The recoder allows us to define a differentiable loss function on the decoded output sequence during training.",
"Experimental results using both ROUGE and human evaluations show that adding the recoder in training a general abstractive summarizer significantly boosts its performance, without requiring any changes to the model itself.",
"In future work we may explore whether the general concept of using a model as loss function has wider applicability to other problems."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.5853658318519592,
0.11999999731779099,
0.4000000059604645,
0.052631575614213943,
0.19607843458652496,
0.09999999403953552,
0.21621620655059814,
0.12903225421905518,
0.1111111044883728,
0.1666666567325592,
0.09999999403953552,
0.178571417927742,
0.0833333283662796,
0.5,
0.12121211737394333,
0.2142857164144516,
0.2181818187236786,
0.5294117331504822,
0.3125,
0.17777776718139648,
0.3684210479259491
] | SylkzaEYPS | true | [
"We present the use of a secondary encoder-decoder as a loss function to help train a summarizer."
] |
[
"Existing unsupervised video-to-video translation methods fail to produce translated videos which are frame-wise realistic, semantic information preserving and video-level consistent.",
"In this work, we propose a novel unsupervised video-to-video translation model.",
"Our model decomposes the style and the content, uses specialized encoder-decoder structure and propagates the inter-frame information through bidirectional recurrent neural network (RNN) units.",
"The style-content decomposition mechanism enables us to achieve long-term style-consistent video translation results as well as provides us with a good interface for modality flexible translation.",
"In addition, by changing the input frames and style codes incorporated in our translation, we propose a video interpolation loss, which captures temporal information within the sequence to train our building blocks in a self-supervised manner.",
"Our model can produce photo-realistic, spatio-temporal consistent translated videos in a multimodal way.",
"Subjective and objective experimental results validate the superiority of our model over the existing methods.",
"Recent image-to-image translation (I2I) works have achieved astonishing results by employing Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) .",
"Most of the GAN-based I2I methods mainly focus on the case where paired data exists (Isola et al. (2017b) , , Wang et al. (2018b) ).",
"However, with the cycle-consistency loss introduced in CycleGAN , promising performance has been achieved also for the unsupervised image-to-image translation (Huang et al. (2018) , Almahairi et al. (2018) , Liu et al. (2017) , Mo et al. (2018) , Romero et al. (2018) , Gong et al. (2019) ).",
"While there is an explosion of papers on I2I, its video counterpart is much less explored.",
"Compared with the I2I task, video-to-video translation (V2V) is more challenging.",
"Besides the frame-wise realistic and semantic preserving requirements, which is also required in the I2I task, V2V methods additionally need to consider the temporal consistency issue for generating sequence-wise realistic videos.",
"Consequently, directly applying I2I methods on each frame of the video will not work out as those methods fail to utilize temporal information and can not assure any temporal consistency within the sequence.",
"In their seminal work, Wang et al. (2018a) combined the optical flow and video-specific constraints and proposed a general solution for V2V in a supervised way.",
"Their sequential generator can generate long-term high-resolution video sequences.",
"However, their vid2vid model (Wang et al. (2018a) ) relies heavily on labeled data.",
"Based on the I2I CycleGAN approach, previous methods on unsupervised V2V proposed to design spatio-temporal translator or loss to achieve more temporally consistent results while preserving semantic information.",
"In order to generate temporally consistent video sequences, Bashkirova et al. (2018) proposed a 3DCycleGAN method which adopts 3D convolution in the generator and discriminator of the CycleGAN framework to capture temporal information.",
"However, since the small 3D convolution operator (with temporal dimension 3) only captures dependency between adjacent frames, 3DCycleGAN can not exploit temporal information for generating long-term consistent video sequences.",
"Furthermore, the vanilla 3D discriminator is also limited in capturing complex temporal relationships between video frames.",
"As a result, when the gap between input and target domain is large, 3DCycleGAN tends to sacrifice the image-level reality and generates blurry and gray outcomes.",
"Recently, Bansal et al. (2018) designed a Recycle loss for jointly modeling the spatio-temporal relationship between video frames.",
"They trained a temporal predictor to predict the next frame based on two past frames, and plugged the temporal predictor in the cycle-loss to impose the spatio-temporal constraint on the traditional image translator.",
"As the temporal predictors can be trained from video sequences in source and target domain in a self-supervised manner, the recycle-loss is more stable than the 3D discriminator loss proposed by Bashkirova et al. (2018) .",
"The RecycleGAN method of Bansal et al. (2018) achieved state-of-the-art unsupervised V2V results.",
"Despite its success in translation scenarios with less variety, such as faceto-face or flower-to-flower, we have experimentally found that applying RecycleGAN to translate videos between domains with a large gap is still challenging.",
"We think the following two points are major reasons which affect the application of RecycleGAN method in complex scenarios.",
"On one hand, the translator in Bansal et al. (2018) processes input frames independently, which has limited capacity in exploiting temporal information; and on the other hand, its temporal predictor only imposes the temporal constraint between a few adjacent frames, the generated video content still might shift abnormally: a sunny scene could change to a snowy scene in the following frames.",
"In a concurrent work, incorporate optical flow to add motion cycle consistency and motion translation constraints.",
"However, their Motion-guided CycleGAN still suffers from the same two limitations as the RecycleGAN method.",
"In this paper, we propose UVIT, a novel method for unsupervised video-to-video translation.",
"We assume that a temporally consistent video sequence should simultaneously be:",
"1) long-term style consistent and",
"2) short-term content consistent.",
"Style consistency requires the whole video sequence to have the same style, it ensures the video frames to be overall realistic; while the content consistency refers to the appearance continuity of contents in adjacent video frames and ensures the video frames to be dynamically vivid.",
"Compared with previous methods which mainly focused on imposing short-term consistency between frames, we have considered in addition the long-term consistency issue which is crucial to generate visually realistic video sequences.",
"Figure 1: Overview of our proposed UVIT model: given an input video sequence, we first decompose it to the content by Content Encoder and the style by Style Encoder.",
"Then the content is processed by special RNN units-TrajGRUs (Shi et al., 2017) to get the content used for translation and interpolation recurrently.",
"Finally, the translation content and the interpolation content are decoded to the translated video and the interpolated video together with the style latent variable.",
"We depict here the video translation loss (orange), the cycle consistency loss (violet), the video interpolation loss (green) and the style encoder loss (blue).",
"To simultaneously impose style and content consistency, we adopt an encoder-decoder architecture as the video translator.",
"Given an input frame sequence, a content encoder and a style encoder firstly extract its content and style information, respectively.",
"Then, a bidirectional recurrent network propagates the inter-frame content information.",
"Updating this information with the single frame content information, we get the spatio-temporal content information.",
"At last, making use of the conditional instance normalization (Dumoulin et al. (2016) , Perez et al. (2018) ), the decoder takes the style information as the condition and utilizes the spatio-temporal content information to generate the translation result.",
"An illustration of the proposed architecture can be found in figure 1 .",
"By applying the same style code to decode the content feature for a specific translated video, we can produce a long-term consistent video sequence, while the recurrent network helps us combine multi-frame content information to achieve content consistent outputs.",
"The conditional decoder also provides us with a good interface to achieve modality flexible video translation.",
"Besides using the style dependent content decoder and bidirectional RNNs to ensure long-term and short-term consistency, another advantage of the proposed method lies in our training strategy.",
"Due to our flexible Encoder-RNN-Decoder architecture, the proposed translator can benefit from the highly structured video data and being trained in a self-supervised manner.",
"Concretely, by removing content information from frame t and using posterior style information, we use our Encoder-RNNDecoder translator to solve the video interpolation task, which can be trained by video sequences in each domain in a supervised manner.",
"In the RecycleGAN method, Bansal et al. (2018) proposed to train video predictors and plugged them into the GAN losses to impose spatio-temporal constraints.",
"They utilize the structured video data in an indirect way: using video predictor trained in a supervised way to provide spatio-temporal loss for training video translator.",
"In contrast, we use the temporal information within the video data itself, all the components, i.e. Encoders, RNNs and Decoders, can be directly trained with the proposed video interpolation loss.",
"The processing pipelines of using our Encoder-RNN-Decoder architecture for the video interpolation and translation tasks can be found in figure 2, more details of our video interpolation loss can be found in section 2.",
"The main contributions of our paper are summarized as follows:",
"1. a novel Encoder-RNN-Decoder framework which decomposes content and style for temporally consistent and modality flexible unsupervised video-to-video translation;",
"2. a novel video interpolation loss that captures the temporal information within the sequence to train translator components in a self-supervised manner;",
"3. extensive experiments showing the superiority of our model at both video and image level.",
"In this paper, we have proposed UVIT, a novel method for unsupervised video-to-video translation.",
"A novel Encoder-RNN-Decoder architecture has been proposed to decompose style and content in the video for temporally consistent and modality flexible video-to-video translation.",
"In addition, we have designed a video interpolation loss which utilizes highly structured video data to train our translators in a self-supervised manner.",
"Extensive experiments have been conducted to show the effectiveness of the proposed UVIT model.",
"Without using any paired training data, the proposed UVIT model is capable of producing excellent multimodal video translation results, which are image-level realistic, semantic information preserving and video-level consistent."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0
] | [
0.2857142686843872,
0.307692289352417,
0.0555555522441864,
0.21052631735801697,
0.21276594698429108,
0.2142857164144516,
0.06896550953388214,
0.05714285373687744,
0,
0.1304347813129425,
0,
0.1538461446762085,
0.09302324801683426,
0.045454539358615875,
0.1538461446762085,
0,
0,
0.1463414579629898,
0.260869562625885,
0.04651162400841713,
0.06451612710952759,
0.10526315122842789,
0.060606054961681366,
0.19999998807907104,
0.21276594698429108,
0.0714285671710968,
0.12765957415103912,
0.060606054961681366,
0.0952380895614624,
0.19999998807907104,
0,
0.2857142686843872,
0.23076923191547394,
0.20000000298023224,
0.10526315122842789,
0.09090908616781235,
0.045454539358615875,
0.04878048226237297,
0.10810810327529907,
0.1249999925494194,
0.1249999925494194,
0.06451612710952759,
0.13333332538604736,
0.07999999821186066,
0,
0.08695651590824127,
0.07407406717538834,
0.08510638028383255,
0.25806450843811035,
0.09999999403953552,
0.3684210479259491,
0.20000000298023224,
0.05405404791235924,
0.15789473056793213,
0.0952380895614624,
0.1463414579629898,
0,
0.5454545617103577,
0.17142856121063232,
0.06666666269302368,
0.27586206793785095,
0.4864864945411682,
0.2222222238779068,
0,
0.13636362552642822
] | HkevCyrFDS | true | [
"A temporally consistent and modality flexible unsupervised video-to-video translation framework trained in a self-supervised manner."
] |
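The UVIT entry above describes a decoder that applies a single per-video style code to spatio-temporal content features via conditional instance normalization (Dumoulin et al., 2016; Perez et al., 2018). Below is a minimal sketch of such a style-conditioned normalization layer; the class name, layer sizes, and tensor shapes are illustrative assumptions, not the authors' implementation.

```python
# Sketch only: a conditional instance normalization layer of the kind the
# UVIT decoder is described as using. Names and sizes are assumptions.
import torch
import torch.nn as nn

class ConditionalInstanceNorm2d(nn.Module):
    """Instance-normalize content features, then scale/shift them with
    affine parameters predicted from a per-video style code."""
    def __init__(self, num_features, style_dim):
        super().__init__()
        self.norm = nn.InstanceNorm2d(num_features, affine=False)
        self.affine = nn.Linear(style_dim, 2 * num_features)  # -> (gamma, beta)

    def forward(self, content_feat, style_code):
        gamma, beta = self.affine(style_code).chunk(2, dim=1)
        gamma = gamma.unsqueeze(-1).unsqueeze(-1)  # broadcast over H, W
        beta = beta.unsqueeze(-1).unsqueeze(-1)
        return (1 + gamma) * self.norm(content_feat) + beta

# Reusing the same style code for every frame of a clip is what is argued
# above to give long-term style consistency.
frames = torch.randn(8, 64, 32, 32)        # content features for 8 frames
style = torch.randn(1, 16).expand(8, -1)   # one shared style code
out = ConditionalInstanceNorm2d(64, 16)(frames, style)  # (8, 64, 32, 32)
```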
[
"Providing transparency of AI planning systems is crucial for their success in practical applications.",
"In order to create a transparent system, a user must be able to query it for explanations about its outputs.",
"We argue that a key underlying principle for this is the use of causality within a planning model, and that argumentation frameworks provide an intuitive representation of such causality.",
"In this paper, we discuss how argumentation can aid in extracting causalities in plans and models, and how they can create explanations from them.",
"Explainability of AI decision-making is crucial for increasing trust in AI systems, efficiency in human-AI teaming, and enabling better implementation into real-world settings.",
"Explainable AI Planning (XAIP) is a field that involves explaining AI planning systems to a user.",
"Approaches to this problem include explaining planner decision-making processes as well as forming explanations from the models.",
"Past work on model-based explanations includes an iterative approach BID14 as well as using explanations for more intuitive communication with the user BID5 .",
"With respect to human-AI teaming, the more helpful and illustrative the explanations, the better the performance of the system overall.Research into the types of questions and motivations a user might have includes work with contrastive questions BID9 .",
"These questions are structured as 'Why F rather than G?', where F is some part (i.e. action(s) in a plan) of the original solution and G is something the user imagines to be better.",
"While contrastive questions are useful, they do not consider the case when a user doesn't have something else in mind (i.e. G) or has a more general question about the model.",
"This includes the scenario in which the user's understanding of the model is incomplete or inaccurate.",
"Research in the area of model reconciliation attempts to address this knowledge gap BID1 .More",
"broadly, questions such as 'Why A?', where A is an action in the plan, or 'How G?', where G is a (sub)goal, must be answerable and explainable. Questions",
"like these are inherently based upon definitions held in the domain related to a particular problem and solution. The user's",
"motivation behind such questions can vary: he could think the action is unnecessary, be unsure as to its effects, or think there is a better option. Furthermore",
", questions regarding particular state information may arise, such as 'Why A here?' and 'Why can't A go here?'. For these,",
"explanations that include relevant state information would vastly improve their efficiency when communicating with a user BID9 . This is especially",
"true for long plans, when a user does not have access to a domain, or the domain is too complex to be easily understood. Thus, extracting relevant",
"information about action-state causality from the model is required.In the space of planning, causality underpins a variety of research areas including determining plan complexity BID6 and heuristics BID7 . Many planners also can create",
"causal graph visualizations of plans for a user to interact with BID12 . The general structure of causality",
"in planning is 'action causes state'. Indirectly, this can be seen as 'action",
"enables action', where the intermediary state is sufficient for the second action to occur. Hilton describes different 'causal chains",
"' which mirror the types of causality found in planning; action-state causality can be identified as either a 'temporal' or 'unfolding' chain, while action-action causality is similar to an 'opportunity chain' BID8 . For now, we will focus on these two types",
"of general causality.To represent the causality of a model, argumentation is a good candidate; as detailed by BID0 , argumentation frameworks and causal models can be viewed as two versions of one entity. A recent related work uses argumentation",
"for explainable scheduling (Cyras et al. 2019) . We consider an ASPIC + (Modgil and Prakken",
"2013) style framework with defeasible rules capturing the relationships between actions in a plan and strict rules capturing actionstate causality. This structure allows more than a causal representation",
"of a plan; it allows multiple types of causality to be distinguished and different causal 'chunks' to be created and combined to be used as justification for explanations.In this paper we present an initial approach for using argumentation to represent causality, which can then be used to form more robust explanations. In the following sections, a motivating scenario will be",
"introduced and used to showcase our current approaches of abstracting causalities and state information into argumentation frameworks.Consider a simple logistics scenario in which three trucks are tasked with delivering three packages to different locations. The user analyzing the planner output has the plan as well",
"as a general, non-technical understanding of the model and the goals of the problem; the user knows that trucks can move between certain waypoints that have connecting roads of differing lengths, there are refueling stations at waypoints B and E, and some subgoals of the problem are to have package 1 delivered to waypoint C, package 2 delivered to waypoint G, and package 3 delivered to waypoint D. The user is also aware that the three trucks and three packages are at waypoint A in the initial state. A basic map of the domain and plan are shown in FIG1 , respectively",
". Even with a simple and intuitive problem such as this, questions may",
"arise which cannot be answered trivially. One such question is 'Why drive truck 1 to waypoint E?'. Addressing",
"this question requires the causal consequence of applying",
"the action; in other words, how does driving truck 1 to waypoint E help in achieving the goal(s)?As discussed previously, tracking state information throughout a plan",
"can be useful for explanations. This is especially true when values of state variables are not obvious",
"at any given point in a plan and their relevance to a question is not known. A question such as 'Why drive truck 3 to waypoint B?' has this property",
". These two questions will be addressed in the following sections.",
"We acknowledge that this is a preliminary step and more work is required to expand on the ideas presented in this paper.",
"One such future work involves defining exactly what questions, which range from action-specific to model-based, can be answered and explained using our approach.",
"Also, how these questions are captured from a user is an open question.",
"The query, 'Why didn't truck 3 deliver any packages?' can be answered using the causal information captured in the framework, but how one converts this question to a form that the system understands requires further research.",
"Potential methods for communicating a user question include a dialogue system or Natural Language Processing techniques.",
"Along with expanding the set of questions that can be addressed, extensions to the argumentation framework itself should be considered.",
"Better methods for creating causal 'chunks' for specific user questions are needed.",
"It may be advantageous to use argumentation schemes to help identify relevant topics of chunks and which causal chains should be included from the framework.",
"This relates to the idea of 'context' and identifying the motivation of a question.",
"If the system can be more precise in extracting the relevant information, the explanations themselves will be more effective.Related to this is the need to explore other ways of presenting an explanation to a user.",
"Research into the efficacy of explanations and how to properly assess the effectiveness of the explanations in practice are future areas of research, and will require user studies.",
"Our starting point will be the approach outlined in Section 4.3 which has been shown empirically to be effective in contexts such as human-robot teaming BID13 .",
"In this paper we proposed an initial approach to explainable planning using argumentation in which causal chains are extracted from a plan and model and abstracted into an argumentation framework.",
"Our hypothesis is that this allows ease of forming and communicating explanations to a user.",
"Furthermore, causal 'chunks' can be created by combining relevant causal links from the chains which explain the causalities surrounding one 'topic'.",
"We believe these help with making more precise explanations, and that chunks can be used to provide hierarchical explanations.",
"Overall, the approach is a first step towards exploiting the intuitive functionality of argumentation in order to use causality for explanations."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.14814814925193787,
0.25806450843811035,
0.21052631735801697,
0.060606054961681366,
0.11764705181121826,
0.07407406717538834,
0.13793103396892548,
0.11764705181121826,
0.09302324801683426,
0.17777776718139648,
0.04651162400841713,
0.07407406717538834,
0.1428571343421936,
0.05128204822540283,
0.12121211737394333,
0.10526315122842789,
0,
0.060606054961681366,
0.15789473056793213,
0.09090908616781235,
0.27586206793785095,
0.07999999821186066,
0.12903225421905518,
0.15686273574829102,
0.21739129722118378,
0.0714285671710968,
0.052631575614213943,
0.2711864411830902,
0.1818181723356247,
0.07894736528396606,
0,
0.12903225421905518,
0.0952380895614624,
0.05405404791235924,
0.3333333432674408,
0.04999999701976776,
0.08695651590824127,
0.060606054961681366,
0.1111111044883728,
0.07692307233810425,
0.08510638028383255,
0.0714285671710968,
0.19354838132858276,
0.1666666567325592,
0.1666666567325592,
0.1599999964237213,
0.1904761791229248,
0.22857142984867096,
0.10526315122842789,
0.09999999403953552,
0.2142857164144516,
0.0624999962747097,
0.25,
0.3030303120613098
] | Byef4anQcE | true | [
"Argumentation frameworks are used to represent causality of plans/models to be utilized for explanations."
] |
[
"Point clouds, as a form of Lagrangian representation, allow for powerful and flexible applications in a large number of computational disciplines.",
"We propose a novel deep-learning method to learn stable and temporally coherent feature spaces for points clouds that change over time.",
"We identify a set of inherent problems with these approaches: without knowledge of the time dimension, the inferred solutions can exhibit strong flickering, and easy solutions to suppress this flickering can result in undesirable local minima that manifest themselves as halo structures.",
"We propose a novel temporal loss function that takes into account higher time derivatives of the point positions, and encourages mingling, i.e., to prevent the aforementioned halos.",
"We combine these techniques in a super-resolution method with a truncation approach to flexibly adapt the size of the generated positions.",
"We show that our method works for large, deforming point sets from different sources to demonstrate the flexibility of our approach.",
"Deep learning methods have proven themselves as powerful computational tools in many disciplines, and within it a topic of strongly growing interest is deep learning for point-based data sets.",
"These Lagrangian representations are challenging for learning methods due to their unordered nature, but are highly useful in a variety of settings from geometry processing and 3D scanning to physical simulations, and since the seminal work of Qi Charles et al. (2017) , a range of powerful inference tasks can be achieved based on point sets.",
"Despite their success, interestingly, no works so far have taken into account time.",
"Our world, and the objects within it, naturally move and change over time, and as such it is crucial for flexible point-based inference to take the time dimension into account.",
"In this context, we propose a method to learn temporally stable representations for point-based data sets, and demonstrate its usefulness in the context of super-resolution.",
"An inherent difficulty of point-based data is their lack of ordering, which makes operations such as convolutions, which are easy to perform for Eulerian data, unexpectedly difficult.",
"Several powerful approaches for point-based convolutions have been proposed (Qi et al., 2017; Hermosilla et al., 2018; Hua et al., 2018) , and we leverage similar neural network architectures in conjunction with the permutation-invariant Earth Mover's Distance (EMD) to propose a first formulation of a loss for temporal coherence.",
"In addition, several works have recognized the importance of training point networks for localized patches, in order to avoid having the network to rely on a full view of the whole data-set for tasks that are inherently local, such as normal estimation (Qi Charles et al., 2017) , and super-resolution ).",
"This also makes it possible to flexibly process inputs of any size.",
"Later on we will demonstrate the importance of such a patch-based approach with sets of changing cardinality in our setting.",
"A general challenge here is to deal with varying input sizes, and for super-resolution tasks, also varying output sizes.",
"Thus, in summary we target an extremely challenging learning problem: we are facing permutation-invariant inputs and targets of varying size, that dynamically move and deform over time.",
"In order to enable deep learning approaches in this context, we make the following key contributions: Permutation invariant loss terms for temporally coherent point set generation; A Siamese training setup and generator architecture for point-based super-resolution with neural networks; Enabling improved output variance by allowing for dynamic adjustments of the output size; The identification of a specialized form of mode collapse for temporal point networks, together with a loss term to remove them.",
"We demonstrate that these contributions together make it possible to infer stable solutions for dynamically moving point clouds with millions of points.",
"More formally, we show that our learning approach can be used for generating a point set with an increased resolution from a given set of input points.",
"The generated points should provide an improved discretization of the underlying ground truth shape represented by the initial set of points.",
"For the increase, we will target a factor of two to three per spatial dimension.",
"Thus, the network has the task to estimate the underlying shape, and to generate suitable sampling positions as output.",
"This is generally difficult due to the lack of connectivity and ordering, and in our case, positions that move over time in combination with a changing number of input points.",
"Hence it is crucial that the network is able to establish a temporally stable latent space representation.",
"Although we assume that we know correspondences over time, i.e., we know which point at time t moved to a new location at time t + ∆t, the points can arbitrarily change their relative position and density over the course of their movement, leading to a substantially more difficult inference problem than for the static case.",
"We have proposed a first method to infer temporally coherent features for point clouds.",
"This is made possible by a combination of a novel loss function for temporal coherence in combination with enabling flexible truncation of the results.",
"In addition we have shown that it is crucial to prevent static patterns as easy-to-reach local minima for the network, which we avoid with the proposed a mingling loss term.",
"Our super-resolution results above demonstrate that our approach takes an important first step towards flexible deep learning methods for dynamic point clouds.",
"Looking ahead, our method could also be flexibly combined with other network architectures or could be adopted for other applications.",
"Specifically, a combination with PSGN (Fan et al., 2016) could be used to generate point clouds from image sequences instead of single images.",
"Other conceivable applications could employ methods like Dahnert et al. (2019) with our approach for generating animated meshes.",
"Due to the growing popularity and ubiquity of scanning devices it will, e.g., be interesting to investigate classification tasks of 3D scans over time as future work.",
"Apart from that, physical phenomena such as elastic bodies and fluids (Li et al., 2019) can likewise be represented in a Lagrangian manner, and pose interesting challenges and complex spatio-temporal changes."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.12903225421905518,
0.42424240708351135,
0.07999999821186066,
0.20000000298023224,
0.19354838132858276,
0.25,
0.09999999403953552,
0.09677419066429138,
0,
0.05128204822540283,
0.21621620655059814,
0.05405404791235924,
0.1818181723356247,
0.13793103396892548,
0,
0.12903225421905518,
0.06666666269302368,
0,
0.16438356041908264,
0.23529411852359772,
0.21621620655059814,
0,
0.07407406717538834,
0.0714285671710968,
0.05128204822540283,
0.2142857164144516,
0.10344827175140381,
0.5384615063667297,
0.12121211737394333,
0.09999999403953552,
0.23529411852359772,
0.13793103396892548,
0.1666666567325592,
0.13333332538604736,
0,
0.0476190447807312
] | BJeKh3VYDH | true | [
"We propose a generative neural network approach for temporally coherent point clouds."
] |
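The entry above proposes a temporal loss built from higher time derivatives of generated point positions. The sketch below penalizes mismatches in first and second finite differences (velocity and acceleration) between generated and reference point trajectories; it assumes point correspondences across frames are given, whereas the paper pairs such terms with a permutation-invariant Earth Mover's Distance. All names and weights are illustrative assumptions.

```python
# Sketch only (not the paper's code): temporal-coherence penalty built from
# first and second finite differences of point positions over time.
import numpy as np

def temporal_coherence_loss(gen, ref, w_vel=1.0, w_acc=1.0):
    """gen, ref: (T, N, 3) arrays of corresponding point trajectories."""
    d_gen, d_ref = np.diff(gen, axis=0), np.diff(ref, axis=0)        # velocities
    dd_gen, dd_ref = np.diff(d_gen, axis=0), np.diff(d_ref, axis=0)  # accelerations
    vel_term = np.mean(np.sum((d_gen - d_ref) ** 2, axis=-1))
    acc_term = np.mean(np.sum((dd_gen - dd_ref) ** 2, axis=-1))
    return w_vel * vel_term + w_acc * acc_term

rng = np.random.default_rng(0)
ref = np.cumsum(rng.normal(scale=0.1, size=(5, 1024, 3)), axis=0)  # smooth motion
gen = ref + rng.normal(scale=0.01, size=ref.shape)                 # jittery copy
print(temporal_coherence_loss(gen, ref))
```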
[
"We study the problem of generating adversarial examples in a black-box setting in which only loss-oracle access to a model is available.",
"We introduce a framework that conceptually unifies much of the existing work on black-box attacks, and demonstrate that the current state-of-the-art methods are optimal in a natural sense.",
"Despite this optimality, we show how to improve black-box attacks by bringing a new element into the problem: gradient priors.",
"We give a bandit optimization-based algorithm that allows us to seamlessly integrate any such priors, and we explicitly identify and incorporate two examples.",
"The resulting methods use two to four times fewer queries and fail two to five times less than the current state-of-the-art.",
"The code for reproducing our work is available at https://git.io/fAjOJ.",
"Recent research has shown that neural networks exhibit significant vulnerability to adversarial examples, or slightly perturbed inputs designed to fool the network prediction.",
"This vulnerability is present in a wide range of settings, from situations in which inputs are fed directly to classifiers BID23 BID3 to highly variable real-world environments BID12 .",
"Researchers have developed a host of methods to construct such attacks BID7 BID17 BID2 BID15 , most of which correspond to first order (i.e., gradient based) methods.",
"These attacks turn out to be highly effective: in many cases, only a few gradient steps suffice to construct an adversarial perturbation.A significant shortcoming of many of these attacks, however, is that they fundamentally rely on the white-box threat model.",
"That is, they crucially require direct access to the gradient of the classification loss of the attacked network.",
"In many real-world situations, expecting this kind of complete access is not realistic.",
"In such settings, an attacker can only issue classification queries to the targeted network, which corresponds to a more restrictive black box threat model.",
"Recent work BID4 BID1 ) provides a number of attacks for this threat model.",
"BID4 show how to use a basic primitive of zeroth order optimization, the finite difference method, to estimate the gradient from classification queries and then use it (in addition to a number of optimizations) to mount a gradient based attack.",
"The method indeed successfully constructs adversarial perturbations.",
"It comes, however, at the cost of introducing a significant overhead in terms of the number of queries needed.",
"For instance, attacking an ImageNet BID21 classifier requires hundreds of thousands of queries.",
"Subsequent work improves this dependence significantly, but still falls short of fully mitigating this issue (see Section 4.1 for a more detailed analysis).",
"We develop a new, unifying perspective on black-box adversarial attacks.",
"This perspective casts the construction of such attacks as a gradient estimation problem.",
"We prove that a standard least-squares estimator both captures the existing state-of-the-art approaches to black-box adversarial attacks, and actually is, in a certain natural sense, an optimal solution to the problem.We then break the barrier posed by this optimality by considering a previously unexplored aspect of the problem: the fact that there exists plenty of extra prior information about the gradient that one can exploit to mount a successful adversarial attack.",
"We identify two examples of such priors: a \"time-dependent\" prior that corresponds to similarity of the gradients evaluated at similar inputs, and a \"data-dependent\" prior derived from the latent structure present in the input space.Finally, we develop a bandit optimization approach to black-box adversarial attacks that allows for a seamless integration of such priors.",
"The resulting framework significantly outperforms state-of-the-art by a factor of two to six in terms of success rate and query efficiency.",
"Our results thus open a new avenue towards finding priors for construction of even more efficient black-box adversarial attacks."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
0.2083333283662796,
0.22641508281230927,
0.2916666567325592,
0.19999998807907104,
0.08695651590824127,
0,
0.07999999821186066,
0.1111111044883728,
0.15094339847564697,
0.1818181723356247,
0.09302324801683426,
0,
0.07843136787414551,
0.0952380895614624,
0.16949151456356049,
0.05714285373687744,
0.045454539358615875,
0,
0.039215680211782455,
0.3684210479259491,
0.24390242993831635,
0.1904761791229248,
0.25,
0.2083333283662796,
0.21276594698429108
] | BkMiWhR5K7 | true | [
"We present a unifying view on black-box adversarial attacks as a gradient estimation problem, and then present a framework (based on bandits optimization) to integrate priors into gradient estimation, leading to significantly increased performance."
] |
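The entry above casts black-box attacks as estimating the gradient of a loss from query access alone. A minimal sketch of the finite-difference, antithetic-sampling estimator that this line of work builds on is given below; it illustrates the baseline estimator being analysed, not the bandit method itself, and the toy loss oracle is an assumption.

```python
# Sketch of a zeroth-order gradient estimate from a loss oracle.
import numpy as np

def estimate_gradient(loss_fn, x, n_queries=100, sigma=0.01, rng=None):
    """Estimate grad_x loss_fn(x) using only loss-value queries."""
    rng = np.random.default_rng() if rng is None else rng
    grad = np.zeros_like(x)
    pairs = n_queries // 2
    for _ in range(pairs):
        u = rng.normal(size=x.shape)
        delta = loss_fn(x + sigma * u) - loss_fn(x - sigma * u)
        grad += (delta / (2.0 * sigma)) * u
    return grad / pairs

# Sanity check on a quadratic loss whose true gradient is 2 * x.
x = np.array([1.0, -2.0, 0.5])
print(estimate_gradient(lambda z: np.sum(z ** 2), x, n_queries=2000))
```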
[
"Collecting high-quality, large scale datasets typically requires significant resources.",
"The aim of the present work is to improve the label efficiency of large neural networks operating on audio data through multitask learning with self-supervised tasks on unlabeled data.",
"To this end, we trained an end-to-end audio feature extractor based on WaveNet that feeds into simple, yet versatile task-specific neural networks.",
"We describe three self-supervised learning tasks that can operate on any large, unlabeled audio corpus.",
"We demonstrate that, in a scenario with limited labeled training data, one can significantly improve the performance of a supervised classification task by simultaneously training it with these additional self-supervised tasks.",
"We show that one can improve performance on a diverse sound events classification task by nearly 6\\% when jointly trained with up to three distinct self-supervised tasks.",
"This improvement scales with the number of additional auxiliary tasks as well as the amount of unsupervised data.",
"We also show that incorporating data augmentation into our multitask setting leads to even further gains in performance."
] | [
0,
1,
0,
0,
0,
0,
0,
0
] | [
0,
0.3529411852359772,
0.06451612710952759,
0.1666666567325592,
0,
0.0555555522441864,
0.0833333283662796,
0.07407406717538834
] | ryl-BQRisQ | false | [
"Improving label efficiency through multi-task learning on auditory data"
] |
[
"Information need of humans is essentially multimodal in nature, enabling maximum exploitation of situated context.",
"We introduce a dataset for sequential procedural (how-to) text generation from images in cooking domain.",
"The dataset consists of 16,441 cooking recipes with 160,479 photos associated with different steps.",
"We setup a baseline motivated by the best performing model in terms of human evaluation for the Visual Story Telling (ViST) task.",
"In addition, we introduce two models to incorporate high level structure learnt by a Finite State Machine (FSM) in neural sequential generation process by: (1) Scaffolding Structure in Decoder (SSiD) (2) Scaffolding Structure in Loss (SSiL).",
"These models show an improvement in empirical as well as human evaluation.",
"Our best performing model (SSiL) achieves a METEOR score of 0.31, which is an improvement of 0.6 over the baseline model.",
"We also conducted human evaluation of the generated grounded recipes, which reveal that 61% found that our proposed (SSiL) model is better than the baseline model in terms of overall recipes, and 72.5% preferred our model in terms of coherence and structure.",
"We also discuss analysis of the output highlighting key important NLP issues for prospective directions.\n",
"Interpretation is heavily conditioned on context.",
"Real world interactions provide this context in multiple modalities.",
"In this paper, the context is derived from vision and language.",
"The description of a picture changes drastically when seen in a sequential narrative context.",
"Formally, this task is defined as: given a sequence of images I = {I 1 , I 2 , ..., I n } and pairwise associated textual descriptions, T = {T 1 , T 2 , ..., T n }; for a new sequence I , our task is to generate the corresponding T .",
"Figure 1 depicts an example for making vegetable lasagna, where the input is the first row and the output is the second row.",
"We call this a 'storyboard', since it unravels the most important steps of a procedure associated with corresponding natural language text.",
"The sequential context differentiates this task from image captioning in isolation.",
"The narration of procedural content draws slight differentiation of this task from visual story telling.",
"The dataset is similar to that presented by ViST Huang et al. (2016) with an apparent difference between stories and instructional in-domain text which is the clear transition in phases of the narrative.",
"This task supplements the task of ViST with richer context of goal oriented procedure (how-to).",
"This paper attempts at capturing this high level structure present in procedural text and imposing this structure while generating sequential text from corresponding sequences of images.Numerous online blogs and videos depict various categories of how-to guides for games, do-ityourself (DIY) crafts, technology, gardening etc.",
"This task lays initial foundations for full fledged storyboarding of a given video, by selecting the right junctions/clips to ground significant events and generate sequential textual descriptions.",
"However, the main focus of this work is generating text from a given set of images.",
"We are going to focus on the domain of cooking recipes in the rest of this paper, leaving the exploration of other domains to future.",
"The two important dimensions to address in text generation are content and structure.",
"In this paper, we discuss our approach in generating more structural/coherent cooking recipes by explicitly modeling the state transitions between different stages of cooking (phases).",
"We address the question of generating textual interpretation of Figure 1 : Storyboard for the recipe of vegetable lasagna the procedure depicted as a sequence of pictures (snapped at different instances of time as the procedure progresses).",
"We introduce a framework to apply traditional FSMs to enhance incorporation of structure in neural text generation.",
"We plan to explore backpropable variants in place of FSMs in future to design structure aware generation models.The two main contributions of this paper are:1.",
"A dataset of 16k recipes targeted for sequential multimodal procedural text generation.",
"2.",
"Two models (SSiD: Structural Scaffolding in Decoder ,and SSiL: Structural Scaffolding in Loss) for incorporating high level structure learnt by an FSM into a neural text generation model to improve structure/coherence.The rest of the paper is organized as follows.",
"Section 2 describes the related work performed along the lines of planning while generating, understanding food and visual story telling.",
"Section 3 describes the data we gathered for this task and related statistics.",
"In Section 4, we describe our models: a baseline model (Glocal), SSiD and SSiL in detail.",
"Section 5 presents the results attained by each of these models both empirically and qualitatively.",
"Section 6 concludes this work and presents some future directions.",
"The two dimensions explored in clustering and FSM are the number of phases that are learnt in unsupervised manner (P) and the number of states attained through state splitting algorithm in FSM (S).",
"The results of searching this space for the best configuration are presented in Table 2 .",
"Table 2 : BLEU Scores for different number of phases (P) and states(S)The BLEU score BID25 is the highest when P is 40 and S is 100.",
"Fixing these values, we compare the models proposed in TAB4 .",
"The models with hard phases and hard states are not as stable as the one with soft phases since backpropagation affects the impact of the scaffolded phases.",
"Upon manual inspection, a key observation is that for SSiD model, most of the recipes followed a similar structure.",
"It seemed to be conditioned on a global structure learnt from all recipes rather than the current input.",
"However, SSiL model seems to generate recipes that are conditioned on the structure of each particular example.",
"Human Evaluation: We have also performed human evaluation by conducting user preference study to compare the baseline with our best performing SSiL model.",
"We randomly sampled generated outputs of 20 recipes and asked 10 users to answer two preference questions: (1) preference for overall recipe based on images, (2) preference for structurally coherent recipe.",
"For the second question, we gave examples of what structure and phases mean in a recipe.",
"Our SSiL model was preferred 61% and 72.5% for overall and structural preferences respectively.",
"This shows that while there is a viable space to build models that improve structure, generating an edible recipe needs to be explored to improve the overall preference.",
"Our main focus in this paper is instilling structure learnt from FSMs in neural models for sequential procedural text generation with multimodal data.",
"Recipes are being presented in the form of graphic novels reflecting the cultural change in expectations of presenting instructions.",
"With this change, a storyboard is a comprehensive representation of the important events.",
"In this direction, we gather a dataset of 16k recipes where each step has text and associated images.",
"The main difference between the dataset of ViST and our dataset is that our dataset is targeted at procedural how-to kind of text (specifically presenting cooking recipes in this work).",
"We setup a baseline inspired from the best performing model in ViST in the category of human evaluation.",
"We learn a high level structure of the recipe as a sequence of phases and a sequence of hard and soft representations of states learnt from a finite state machine.",
"We propose two techniques for incorporating structure learnt from this as a scaffold.",
"The first model imposes structure on the decoder (SSiD) and the second model imposes structure on the loss function (SSiL) by modeling it as a hierarchical multi-task learning problem.",
"We show that our proposed approach (SSiL) improves upon the baseline and achieves a METEOR score of 0.31, which is an improvement of 0.6 over the baseline model.",
"We plan on exploring backpropable variants as a scaffold for structure in future.",
"We also plan to extend these models to other domains present in these sources of data.",
"There is no standard way to explicitly evaluate the high level strcuture learnt in this task and we would like to explore evaluation strategies for the same."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.12121211737394333,
0.3529411852359772,
0.1249999925494194,
0.14999999105930328,
0.31372547149658203,
0.06666666269302368,
0.10256409645080566,
0.11764705181121826,
0.05714285373687744,
0,
0.0714285671710968,
0.06666666269302368,
0.25,
0.18867924809455872,
0,
0.1538461446762085,
0.19999998807907104,
0.24242423474788666,
0.19999998807907104,
0.0624999962747097,
0.37288135290145874,
0.1304347813129425,
0.3529411852359772,
0.1538461446762085,
0.375,
0.1395348757505417,
0.1702127605676651,
0.34285715222358704,
0.3333333432674408,
0.19354838132858276,
0.3571428656578064,
0.052631575614213943,
0,
0.11428570747375488,
0.11764705181121826,
0.06896550953388214,
0.1818181723356247,
0.1764705777168274,
0.0476190410554409,
0.06896550953388214,
0.10256409645080566,
0.1621621549129486,
0.21621620655059814,
0.1666666567325592,
0.0476190410554409,
0.1304347813129425,
0.22857142984867096,
0,
0.1395348757505417,
0.2926829159259796,
0.11428570747375488,
0.12903225421905518,
0.21621620655059814,
0.22727271914482117,
0.22857142984867096,
0.3414634168148041,
0.3125,
0.1428571343421936,
0.08888888359069824,
0.1875,
0.1818181723356247,
0.1818181723356247
] | rJeQE8LYdV | true | [
"The paper presents two techniques to incorporate high level structure in generating procedural text from a sequence of images."
] |
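The entry above describes SSiL as scaffolding FSM-derived phase structure into the generator's loss, treated as a hierarchical multi-task objective. The sketch below is only our reading of that idea: the usual token-level loss plus an auxiliary phase-prediction term against the FSM's phase labels. All names, shapes, and the choice of cross-entropy are assumptions, not the authors' implementation.

```python
# Illustrative sketch of a "structure in the loss" objective.
import torch
import torch.nn.functional as F

def ssil_loss(token_logits, token_targets, phase_logits, fsm_phases, alpha=0.5):
    """token_logits: (T, vocab); token_targets: (T,) gold token ids;
    phase_logits: (S, n_phases) predicted per recipe step;
    fsm_phases: (S,) phase labels produced by the FSM."""
    gen_loss = F.cross_entropy(token_logits, token_targets)
    phase_loss = F.cross_entropy(phase_logits, fsm_phases)
    return gen_loss + alpha * phase_loss

token_logits = torch.randn(20, 100)
token_targets = torch.randint(0, 100, (20,))
phase_logits = torch.randn(6, 40)            # 40 phases, as in the best config
fsm_phases = torch.randint(0, 40, (6,))
print(ssil_loss(token_logits, token_targets, phase_logits, fsm_phases))
```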
[
"Implicit models, which allow for the generation of samples but not for point-wise evaluation of probabilities, are omnipresent in real-world problems tackled by machine learning and a hot topic of current research.",
"Some examples include data simulators that are widely used in engineering and scientific research, generative adversarial networks (GANs) for image synthesis, and hot-off-the-press approximate inference techniques relying on implicit distributions.",
"The majority of existing approaches to learning implicit models rely on approximating the intractable distribution or optimisation objective for gradient-based optimisation, which is liable to produce inaccurate updates and thus poor models.",
"This paper alleviates the need for such approximations by proposing the \\emph{Stein gradient estimator}, which directly estimates the score function of the implicitly defined distribution.",
"The efficacy of the proposed estimator is empirically demonstrated by examples that include meta-learning for approximate inference and entropy regularised GANs that provide improved sample diversity.",
"Modelling is fundamental to the success of technological innovations for artificial intelligence.",
"A powerful model learns a useful representation of the observations for a specified prediction task, and generalises to unknown instances that follow similar generative mechanics.",
"A well established area of machine learning research focuses on developing prescribed probabilistic models BID8 , where learning is based on evaluating the probability of observations under the model.",
"Implicit probabilistic models, on the other hand, are defined by a stochastic procedure that allows for direct generation of samples, but not for the evaluation of model probabilities.",
"These are omnipresent in scientific and engineering research involving data analysis, for instance ecology, climate science and geography, where simulators are used to fit real-world observations to produce forecasting results.",
"Within the machine learning community there is a recent interest in a specific type of implicit models, generative adversarial networks (GANs) BID10 , which has been shown to be one of the most successful approaches to image and text generation BID56 BID2 BID5 .",
"Very recently, implicit distributions have also been considered as approximate posterior distributions for Bayesian inference, e.g. see BID25 ; BID53 ; BID22 ; BID19 ; BID29 ; BID15 ; BID23 ; BID48 .",
"These examples demonstrate the superior flexibility of implicit models, which provide highly expressive means of modelling complex data structures.Whilst prescribed probabilistic models can be learned by standard (approximate) maximum likelihood or Bayesian inference, implicit probabilistic models require substantially more severe approximations due to the intractability of the model distribution.",
"Many existing approaches first approximate the model distribution or optimisation objective function and then use those approximations to learn the associated parameters.",
"However, for any finite number of data points there exists an infinite number of functions, with arbitrarily diverse gradients, that can approximate perfectly the objective function at the training datapoints, and optimising such approximations can lead to unstable training and poor results.",
"Recent research on GANs, where the issue is highly prevalent, suggest that restricting the representational power of the discriminator is effective in stabilising training (e.g. see BID2 BID21 .",
"However, such restrictions often intro- A comparison between the two approximation schemes.",
"Since in practice the optimiser only visits finite number of locations in the parameter space, it can lead to over-fitting if the neural network based functional approximator is not carefully regularised, and therefore the curvature information of the approximated loss can be very different from that of the original loss (shown in",
"(a)).",
"On the other hand, the gradient approximation scheme",
"(b) can be more accurate since it only involves estimating the sensitivity of the loss function to the parameters in a local region.duce undesirable biases, responsible for problems such as mode collapse in the context of GANs, and the underestimation of uncertainty in variational inference methods BID49 .In",
"this paper we explore approximating the derivative of the log density, known as the score function, as an alternative method for training implicit models. An",
"accurate approximation of the score function then allows the application of many well-studied algorithms, such as maximum likelihood, maximum entropy estimation, variational inference and gradient-based MCMC, to implicit models. Concretely",
", our contributions include:• the Stein gradient estimator, a novel generalisation of the score matching gradient estimator BID16 , that includes both parametric and non-parametric forms; • a comparison of the proposed estimator with the score matching and the KDE plug-in estimators on performing gradient-free MCMC, meta-learning of approximate posterior samplers for Bayesian neural networks, and entropy based regularisation of GANs.",
"We have presented the Stein gradient estimator as a novel generalisation to the score matching gradient estimator.",
"With a focus on learning implicit models, we have empirically demonstrated the efficacy of the proposed estimator by showing how it opens the door to a range of novel learning tasks: approximating gradient-free MCMC, meta-learning for approximate inference, and unsupervised learning for image generation.",
"Future work will expand the understanding of gradient estimators in both theoretical and practical aspects.",
"Theoretical development will compare both the V-statistic and U-statistic Stein gradient estimators and formalise consistency proofs.",
"Practical work will improve the sample efficiency of kernel estimators in high dimensions and develop fast yet accurate approximations to matrix inversion.",
"It is also interesting to investigate applications of gradient approximation methods to training implicit generative models without the help of discriminators.",
"Finally it remains an open question that how to generalise the Stein gradient estimator to non-kernel settings and discrete distributions."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0
] | [
0.19230768084526062,
0.26923075318336487,
0.22641508281230927,
0.08888888359069824,
0.2083333283662796,
0.05714285373687744,
0.12765957415103912,
0.1249999925494194,
0.2083333283662796,
0.07999999821186066,
0.19354838132858276,
0.12244897335767746,
0.060606054961681366,
0.09090908616781235,
0.1355932205915451,
0.04081632196903229,
0,
0.0312499962747097,
0.13333332538604736,
0.15625,
0.13333332538604736,
0.1599999964237213,
0.260869562625885,
0.2702702581882477,
0.36666667461395264,
0.10526315122842789,
0.10526315122842789,
0.04444443807005882,
0.1904761791229248,
0.1428571343421936
] | SJi9WOeRb | true | [
"We introduced a novel gradient estimator using Stein's method, and compared with other methods on learning implicit models for approximate inference and image generation."
] |
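The entry above introduces the Stein gradient estimator for the score of an implicitly defined distribution. The NumPy sketch below implements a kernelized estimator of the form G_hat = -(K + eta*I)^(-1) <grad, K> with an RBF kernel, as we understand the construction; the bandwidth and regulariser values are illustrative assumptions, not the paper's settings.

```python
# Sketch of a kernelized Stein-type score estimator as we read it.
import numpy as np

def stein_gradient_estimator(X, sigma=2.0, eta=0.01):
    """X: (N, D) samples from an implicit distribution q.
    Returns an (N, D) estimate of grad_x log q at each sample."""
    N, _ = X.shape
    diff = X[:, None, :] - X[None, :, :]                        # (N, N, D)
    K = np.exp(-np.sum(diff ** 2, axis=-1) / (2 * sigma ** 2))  # RBF Gram matrix
    # sum_j d k(x_i, x_j) / d x_j = sum_j k(x_i, x_j) (x_i - x_j) / sigma^2
    grad_K = np.sum(K[:, :, None] * diff, axis=1) / sigma ** 2  # (N, D)
    return -np.linalg.solve(K + eta * np.eye(N), grad_K)

# Sanity check on a standard Gaussian, whose true score is -x.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
scores = stein_gradient_estimator(X)
print(np.mean((scores + X) ** 2))  # should be fairly small
```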
[
"Noise injection is a fundamental tool for data augmentation, and yet there is no widely accepted procedure to incorporate it with learning frameworks.",
"This study analyzes the effects of adding or applying different noise models of varying magnitudes to Convolutional Neural Network (CNN) architectures.",
"Noise models that are distributed with different density functions are given common magnitude levels via Structural Similarity (SSIM) metric in order to create an appropriate ground for comparison.",
"The basic results are conforming with the most of the common notions in machine learning, and also introduces some novel heuristics and recommendations on noise injection.",
"The new approaches will provide better understanding on optimal learning procedures for image classification.",
"Convolutional Neural Networks (CNNs) find an ever-growing field of application throughout image and sound processing tasks, since the success of AlexNet (Krizhevsky et al., 2012) in the 2012 ImageNet competition.",
"Yet, training these networks still keeps the need of an \"artistic\" touch: even the most cited state-of-the-art studies employ wildly varying set of solvers, augmentation and regularization techniques (Domhan et al., 2015) .",
"In this study, one of the crucial data augmentation techniques, noise injection, will be thoroughly analysed to determine the correct way of application on image processing tasks.",
"Adding noise to the training data is not a procedure that is unique to the training of neural architectures: additive and multiplicative noise has long been used in signal processing for regression-based methods, in order to create more robust models (Saiz et al., 2005) .",
"The technique is also one of the oldest data augmentation methods employed in the training of feed forward networks, as analysed by Holmstrom & Koistinen (1992) , yet it is also pointed out in the same study that while using additive Gaussian noise is helpful, the magnitude of the noise cannot be selected blindly, as a badly-chosen variance may actually harm the performance of the resulting network (see Gu & Rigazio (2014) and Hussein et al. (2017) for more examples).",
"The main reasons for noise injection to the training data can be listed as such in a non-excluding manner: first of all, injection of any noise type makes the model more robust against the occurrence of that particular noise over the input data (see Braun et al. (2016) and Saiz et al. (2005) for further reference), such as the cases of Gaussian additive noise in photographs, and Gaussian-Poisson noise on low-light charge coupled devices (Bovik, 2005) .",
"Furthermore, it is shown that the neural networks optimize on the noise magnitude they are trained on (Yin et al., 2015) .",
"Therefore, it is important to choose the correct type and level of the noise to augment the data during training.",
"Another reason for noise addition is to encourage the model to learn the various aspects of each class by occluding random features.",
"Generally, stochastic regularization techniques embedded inside the neural network architectures are used for this purpose, such as Dropout layers, yet it is also possible to augment the input data for such purposes as in the example of \"cutout\" regularization proposed by Devries & Taylor (2017) .",
"The improvement of the generalization capacity of a network is highly correlated with its performance, which can be scored by the accuracy over a predetermined test set.",
"There has been similar studies conducted on the topic, with the example of Koziarski & Cyganek (2017) which focuses on the effects of noise injection on the training of deep networks and the possible denoising methods, yet they fail to provide a proper methodology to determine the level of noise to be injected into the training data, and use PSNR as the comparison metric between different noise types which is highly impractical (see Section 3).",
"To resolve these issues, this study focuses on the ways to determine which noise types to combine the training data with, and which levels, in addition to the validity of active noise injection techniques while experimenting on a larger set of noise models.",
"In the structure of this work, the effect of injecting different types of noises into images for varying CNN architectures is assessed based on their performance and noise robustness.",
"Their interaction and relationship with each other are analyzed over (also noise-injected) validation sets.",
"Finally as a follow-up study, proper ways on adding or applying noise to a CNN for image classification tasks are discussed.",
"There are several confirmations to acquire from this set of results for the literature: first of all, there exists a trade-off between noise robustness and clean set accuracy.",
"Yet contrary to the common notion, we believe that the data presents a highly valid optimum for this exchange in our study.",
"As it can be seen from Figures 6 and 7; in order to create a robust model against particular kind of noise while maintaining the performance of the model, one must apply a level of degradation that results in 0.8 MSSIM over training data.",
"We believe that as long as the noise or perturbation is somewhat homogeneously distributed, this rule of thumb will hold for all image classification tasks.",
"However, the same thing cannot be said for non-homogeneously distributed noise models, as SSIM (and also PSNR as demonstrated in Section 3) fails to capture the level of degradation appropriately for such a verdict (see Occlusion results in Figures 6 and 7) .",
"A second confirmation of the current literature is the fact that the neural networks optimize on the noise level they are trained with, as seen again at Figures 6 and 7 , and also the diagonals of Figure 8 .",
"Yet, the level of this optimization is quite small after 0.5 MSSIM, featuring similar robustness for each trained model.",
"Therefore, it is not particularly necessary to determine the noise level of a dataset, or sample the noise from a predetermined interval, as long as the MSSIM does not drop below 0.5, in which case noise removal techniques need to be considered for better models.",
"As noted above, occlusion noise type will not be thoroughly analyzed in this section because of the fact that the quality metric has failed to provide sufficient comparative data for this discussion.",
"Yet, the performance data and the lack of robustness the other models exhibit towards this particular noise type shows that \"cutout\" regularization as presented by Devries & Taylor (2017) is a crucial part of data augmentation in addition to any other perturbation or noise injection technique.",
"A way to further extend the contribution of this method would be to alternate the intensity level of the patches from 0 to 255 for 8-bit images, which can be a topic of another research.",
"For the rest of the noise types; Gaussian, speckle and Poisson noises are observed to increase the performance of the model while boosting the robustness, and their effects exhibit the possibility of interchangeable usage.",
"For image classification tasks involving RGB images of daily objects, injection of only one of these noise types with above-mentioned level is believed to be sufficient as repetition of the clusters can be observed in Figure 8 .",
"Among these three, Gaussian noise is recom-mended considering the results of model performance.",
"S&P noise contamination, on the other hand, may not be resolved by injection of the former noise types as the other models are not sufficiently robust against it.",
"Therefore, at this point one of the two methodologies are suggested: either S&P noise can be removed by simple filtering techniques, or S&P noise can be applied in an alternating manner with Gaussian noise during data augmentation.",
"Former approach is recommended for the simplicity of the training procedure.",
"The constant behaviour of the models towards occlusion noise in Figures 6, 7 and 8 unfortunately does not have a satisfactory explanation, despite several diagnostics of the training procedure.",
"A longer training procedure, which was not feasible in our experiment because of the model count, may resolve these undesirable results.",
"In this study, an extensive analysis of noise injection to training data has conducted.",
"The results confirmed some of the notions in the literature, while also providing new rule of thumbs for CNN training.",
"As further targets of research, extension of \"cutout\" regularization as described in the above paragraphs, and the distribution behavior of the SSIM and PSNR metrics in Figure 2 with regards to the work of Horé & Ziou (2010) may be pursued."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1249999925494194,
0.13333332538604736,
0.054054051637649536,
0.05882352590560913,
0,
0,
0.04878048226237297,
0.17142856121063232,
0.1666666567325592,
0.0810810774564743,
0.1515151560306549,
0.06666666269302368,
0.37037035822868347,
0.13333332538604736,
0.12244897335767746,
0,
0.12121211737394333,
0.1818181723356247,
0.1111111044883728,
0,
0.20000000298023224,
0.1111111044883728,
0.12903225421905518,
0.1599999964237213,
0.05882352590560913,
0.0833333283662796,
0.04651162400841713,
0,
0.0833333283662796,
0.14999999105930328,
0.11999999731779099,
0.052631575614213943,
0.1111111044883728,
0.09302325546741486,
0.08695651590824127,
0.060606054961681366,
0.1428571343421936,
0.09999999403953552,
0.10810810327529907,
0.06451612710952759,
0.3333333432674408,
0.1428571343421936,
0.04651162400841713
] | SkeKtyHYPS | true | [
"Ideal methodology to inject noise to input data during CNN training"
] |
[
"State-of-the-art Unsupervised Domain Adaptation (UDA) methods learn transferable features by minimizing the feature distribution discrepancy between the source and target domains.",
"Different from these methods which do not model the feature distributions explicitly, in this paper, we explore explicit feature distribution modeling for UDA.",
"In particular, we propose Distribution Matching Prototypical Network (DMPN) to model the deep features from each domain as Gaussian mixture distributions.",
"With explicit feature distribution modeling, we can easily measure the discrepancy between the two domains.",
"In DMPN, we propose two new domain discrepancy losses with probabilistic interpretations.",
"The first one minimizes the distances between the corresponding Gaussian component means of the source and target data.",
"The second one minimizes the pseudo negative log likelihood of generating the target features from source feature distribution.",
"To learn both discriminative and domain invariant features, DMPN is trained by minimizing the classification loss on the labeled source data and the domain discrepancy losses together.",
"Extensive experiments are conducted over two UDA tasks.",
"Our approach yields a large margin in the Digits Image transfer task over state-of-the-art approaches.",
"More remarkably, DMPN obtains a mean accuracy of 81.4% on VisDA 2017 dataset.",
"The hyper-parameter sensitivity analysis shows that our approach is robust w.r.t hyper-parameter changes.",
"Recent advances in deep learning have significantly improved state-of-the-art performance for a wide range of applications.",
"However, the improvement comes with the requirement of a massive amount of labeled data for each task domain to supervise the deep model.",
"Since manual labeling is expensive and time-consuming, it is therefore desirable to leverage or reuse rich labeled data from a related domain.",
"This process is called domain adaptation, which transfers knowledge from a label rich source domain to a label scarce target domain (Pan & Yang, 2009 ).",
"Domain adaptation is an important research problem with diverse applications in machine learning, computer vision (Gong et al., 2012; Gopalan et al., 2011; Saenko et al., 2010) and natural language processing (Collobert et al., 2011; Glorot et al., 2011) .",
"Traditional methods try to solve this problem via learning domain invariant features by minimizing certain distance metric measuring the domain discrepancy, for example Maximum Mean Discrepancy (MMD) (Gretton et al., 2009; Pan et al., 2008; and correlation distance (Sun & Saenko, 2016) .",
"Then labeled source data is used to learn a model for the target domain.",
"Recent studies have shown that deep neural networks can learn more transferable features for domain adaptation (Glorot et al., 2011; Yosinski et al., 2014) .",
"Consequently, adaptation layers have been embedded in the pipeline of deep feature learning to learn concurrently from the source domain supervision and some specially designed domain discrepancy losses Long et al., 2015; Sun & Saenko, 2016; Zellinger et al., 2017) .",
"However, none of these methods explicitly model the feature distributions of the source and target data to measure the discrepancy.",
"Inspired from the recent works by Wan et al. (2018) and Yang et al. (2018) , which have shown that modeling feature distribution of a training set improves classification performance, we explore explicit distribution modeling for UDA.",
"We model the feature distributions as Gaussin mixture distributions, which facilitates us to measure the discrepancy between the source and target domains.",
"Our proposed method, i.e., DMPN, works as follows.",
"We train a deep network over the source domain data to generate features following a Gaussian mixture distribution.",
"The network is then used to assign pseudo labels to the unlabeled target data.",
"To learn both discriminative and domain invariant features, we fine-tune the network to minimize the cross-entropy loss on the labeled source data and domain discrepancy losses.",
"Specifically, we propose two new domain discrepancy losses by exploiting the explicit Gaussian mixture distributions of the deep features.",
"The first one minimizes the distances between the corresponding Gaussian component means between the source and target data.",
"We call it Gaussian Component Mean Matching (GCMM).",
"The second one minimizes the negative log likelihood of generating the target features from the source feature distribution.",
"We call it Pseudo Distribution Matching (PDM).",
"Extensive experiments on Digits Image transfer tasks and synthetic-to-real image transfer task demonstrate our approach can provide superior results than state-of-the-art approaches.",
"We present our proposed method in Section 3, extensive experiment results and analysis in Section 4 and conclusion in Section 5.",
"In this paper, we propose Distribution Matching Prototypical Network (DMPN) for Unsupervised Domain Adaptation (UDA) where we explicitly model and match the deep feature distribution of the source and target data as Gaussian mixture distributions.",
"Our work fills the gap in UDA where stateof-the-art methods assume the deep feature distributions of the source and target data are unknown when minimizing the discrepancy between them.",
"We propose two new domain discrepancy losses based on the Figure 4 : Sensitivity analysis on confidence threshold.",
"Fig.",
"4 shows the sensitivity analysis of our method on different values of confidence threshold on VisDA 2017 dataset.",
"The experiment results show that we can get similar accuracy results or even better when changing the confidence threshold in a certain range, demonstrating that our method is robust against hyper-parameter changes.",
"A.3",
"OFFICE-HOME TRANSFER Table 3 presents experiment results of state-of-the-art UDA methods and our method on OfficeHome dataset.",
"Our method gives the best accuracy results in all transfer tasks, showing the effectiveness of our method.",
"In this experiment, we train the network for 100 epochs.",
"The learning rate is initially set to be 1e-5 for all the parameters and is decayed by 0.1 at epoch 60 and 80.",
"1+e −γp respectively, where γ is set to be the default value 10, p is the training process changing from 0 to 1."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.3529411852359772,
0.2641509473323822,
0.307692289352417,
0.04444444179534912,
0.04651162400841713,
0.25531914830207825,
0.1666666567325592,
0.1111111044883728,
0.10256409645080566,
0.08695651590824127,
0.04444444179534912,
0,
0.21276594698429108,
0.23529411852359772,
0.11538460850715637,
0.11320754140615463,
0.09677419066429138,
0.11428570747375488,
0.2666666507720947,
0.07407406717538834,
0.20588234066963196,
0.4583333432674408,
0.158730149269104,
0.3921568691730499,
0.04878048226237297,
0.2916666567325592,
0.13636362552642822,
0.15094339847564697,
0.2448979616165161,
0.21739129722118378,
0.10256409645080566,
0.1702127605676651,
0.052631575614213943,
0.23076923191547394,
0.1702127605676651,
0.60317462682724,
0.38596490025520325,
0.0833333283662796,
0.04255318641662598,
0.06557376682758331,
0.25,
0.1304347813129425,
0.04878048226237297,
0.11320754140615463,
0.039215680211782455
] | r1eX1yrKwB | true | [
"We propose to explicitly model deep feature distributions of source and target data as Gaussian mixture distributions for Unsupervised Domain Adaptation (UDA) and achieve superior results in multiple UDA tasks than state-of-the-art methods."
] |
[
"Efficiently learning to solve tasks in complex environments is a key challenge for reinforcement learning (RL) agents.",
" We propose to decompose a complex environment using a task-agnostic world graphs, an abstraction that accelerates learning by enabling agents to focus exploration on a subspace of the environment.The nodes of a world graph are important waypoint states and edges represent feasible traversals between them",
". Our framework has two learning phases",
": 1) identifying world graph nodes and edges by training a binary recurrent variational auto-encoder (VAE) on trajectory data and",
"2) a hierarchical RL framework that leverages structural and connectivity knowledge from the learned world graph to bias exploration towards task-relevant waypoints and regions.",
"We show that our approach significantly accelerates RL on a suite of challenging 2D grid world tasks: compared to baselines, world graph integration doubles achieved rewards on simpler tasks, e.g. MultiGoal, and manages to solve more challenging tasks, e.g. Door-Key, where baselines fail.",
"Many real-world applications, e.g., self-driving cars and in-home robotics, require an autonomous agent to execute different tasks within a single environment that features, e.g. high-dimensional state space, complex world dynamics or structured layouts.",
"In these settings, model-free reinforcement learning (RL) agents often struggle to learn efficiently, requiring a large amount of experience collections to converge to optimal behaviors.",
"Intuitively, an agent could learn more efficiently by focusing its exploration in task-relevant regions, if it has knowledge of the high-level structure of the environment.",
"We propose a method to",
"1) learn and",
"2) use an environment decomposition in the form of a world graph, a task-agnostic abstraction.",
"World graph nodes are waypoint states, a set of salient states that can summarize agent trajectories and provide meaningful starting points for efficient exploration (Chatzigiorgaki & Skodras, 2009; Jayaraman et al., 2018; Ghosh et al., 2018) .",
"The directed and weighted world graph edges characterize feasible traversals among the waypoints.",
"To leverage the world graph, we model hierarchical RL (HRL) agents where a high-level policy chooses a waypoint state as a goal to guide exploration towards task-relevant regions, and a low-level policy strives to reach the chosen goals.",
"Our framework consists of two phases.",
"In the task-agnostic phase, we obtain world graphs by training a recurrent variational auto-encoder (VAE) (Chung et al., 2015; Gregor et al., 2015; Kingma & Welling, 2013) with binary latent variables (Nalisnick & Smyth, 2016) over trajectories collected using a random walk policy (Ha & Schmidhuber, 2018 ) and a curiosity-driven goal-conditioned policy (Ghosh et al., 2018; Nair et al., 2018) .",
"World graph nodes are states that are most frequently selected by the binary latent variables, while edges are inferred from empirical transition statistics between neighboring waypoints.",
"In the task-specific phase, taking advantage of the learned world graph for structured exploration, we efficiently train an HRL model (Taylor & Stone, 2009 ).",
"In summary, our main contributions are:",
"• A task-agnostic unsupervised approach to learn world graphs, using a recurrent VAE with binary latent variables and a curiosity-driven goal-conditioned policy.",
"• An HRL scheme for the task-specific phase that features multi-goal selection (Wide-thenNarrow) and navigation via world graph traversal.",
"4. On its traversal course to wide goal, agent hits final target and exits.",
": waypoints selected by the manager : waypoints initiates traversal : trajectories directly from worker actions : exit point : agent : final goal from manager close to selected waypoints : trajectories from world graph traversal",
"We have shown that world graphs are powerful environment abstractions, which, in particular, are capable of accelerating reinforcement learning.",
"Future works may extend their applications to more challenging RL setups, such as real-world multi-task learning and navigation.",
"It is also interesting to generalize the proposed framework to learn dynamic world graphs for evolving environments, and applying world graphs to multi-agent problems, where agents become part of the world graphs of other agents."
] | [
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.09999999403953552,
0.380952388048172,
0,
0.1860465109348297,
0.2978723347187042,
0.29032257199287415,
0.17241378128528595,
0.12765957415103912,
0.25531914830207825,
0.13793103396892548,
0.14814814925193787,
0.3684210479259491,
0.23728813230991364,
0.21621620655059814,
0.2142857164144516,
0.06666666269302368,
0.1621621549129486,
0.0833333283662796,
0.2916666567325592,
0,
0.2666666507720947,
0.2790697515010834,
0.052631575614213943,
0.1304347813129425,
0.1904761791229248,
0.0952380895614624,
0.23999999463558197
] | BkgRe1SFDS | true | [
"We learn a task-agnostic world graph abstraction of the environment and show how using it for structured exploration can significantly accelerate downstream task-specific RL."
] |
[
"We introduce the notion of property signatures, a representation for programs and\n",
"program specifications meant for consumption by machine learning algorithms.\n",
"Given a function with input type τ_in and output type τ_out, a property is a function\n",
"of type: (τ_in, τ_out) → Bool that (informally) describes some simple property\n",
"of the function under consideration.",
"For instance, if τ_in and τ_out are both lists\n",
"of the same type, one property might ask ‘is the input list the same length as the\n",
"output list?’.",
"If we have a list of such properties, we can evaluate them all for our\n",
"function to get a list of outputs that we will call the property signature.",
"Crucially,\n",
"we can ‘guess’ the property signature for a function given only a set of input/output\n",
"pairs meant to specify that function.",
"We discuss several potential applications of\n",
"property signatures and show experimentally that they can be used to improve\n",
"over a baseline synthesizer so that it emits twice as many programs in less than\n",
"one-tenth of the time.",
"Program synthesis is a longstanding goal of computer science research (Manna & Waldinger, 1971; Waldinger et al., 1969; Summers, 1977; Shaw; Pnueli & Rosner, 1989; Manna & Waldinger, 1975) , arguably dating to the 1940s and 50s (Copeland, 2012; Backus et al., 1957) .",
"Deep learning methods have shown promise at automatically generating programs from a small set of input-output examples (Balog et al., 2016; Devlin et al., 2017; Ellis et al., 2018b; 2019b) .",
"In order to deliver on this promise, we believe it is important to represent programs and specifications in a way that supports learning.",
"Just as computer vision methods benefit from the inductive bias inherent to convolutional neural networks (LeCun et al., 1989) , and likewise with LSTMs for natural language and other sequence data (Hochreiter & Schmidhuber, 1997) , it stands to reason that ML techniques for computer programs will benefit from architectures with a suitable inductive bias.",
"We introduce a new representation for programs and their specifications, based on the principle that to represent a program, we can use a set of simpler programs.",
"This leads us to introduce the concept of a property, which is a program that computes a boolean function of the input and output of another program.",
"For example, consider the problem of synthesizing a program from a small set of input-output examples.",
"Perhaps the synthesizer is given a few pairs of lists of integers, and the user hopes that the synthesizer will produce a sorting function.",
"Then useful properties might include functions that check if the input and output lists have the same length, if the input list is a subset of the output, if element 0 of the output list is less than element 42, and so on.",
"The outputs of a set of properties can be concatenated into a vector, yielding a representation that we call a property signature.",
"Property signatures can then be used for consumption by machine learning algorithms, essentially serving as the first layer of a neural network.",
"In this paper, we demonstrate the utility of property signatures for program synthesis, using them to perform a type of premise selection as in Balog et al. (2016) .",
"More broadly, however, we envision that property signatures could be useful across a broad range of problems, including algorithm induction (Devlin et al., 2017) , improving code readability (Allamanis et al., 2014) , and program analysis (Heo et al., 2019) .",
"More specifically, our contributions are:",
"• We introduce the notion of property signatures, which are a general purpose way of featurizing both programs and program specifications (Section 3).",
"• We demonstrate how to use property signatures within a machine-learning based synthesizer for a general-purpose programming language.",
"This allows us to automatically learn a useful set of property signatures, rather than choosing them manually (Sections 3.2 and 4).",
"• We show that a machine learning model can predict the signatures of individual functions given the signature of their composition, and describe several ways this could be used to improve existing synthesizers (Section 5).",
"• We perform experiments on a new test set of 185 functional programs of varying difficulty, designed to be the sort of algorithmic problems that one would ask on an undergraduate computer science examination.",
"We find that the use of property signatures leads to a dramatic improvement in the performance of the synthesizer, allowing it to synthesize over twice as many programs in less than one-tenth of the time (Section 4).",
"An example of a complex program that was synthesized only by the property signatures method is shown in Listing 1.",
"For our experiments, we created a specialized programming language, called Searcho 1 (Section 2), based on strongly-typed functional languages such as Standard ML and Haskell.",
"Searcho is designed so that many similar programs can be executed rapidly, as is needed during a large-scale distributed search during synthesis.",
"We release 2 the programming language, runtime environment, distributed search infrastructure, machine learning models, and training data from our experiments so that they can be used for future research.",
"Listing 1: A program synthesized by our system, reformatted and with variables renamed for readability.",
"This program returns the sub-list of all of the elements in a list that are distinct from their previous value in the list.",
"In this work, we have introduced the idea of properties and property signatures.",
"We have shown that property signatures allow us to synthesize programs that a baseline otherwise was not able to synthesize, and have sketched out other potential applications as well.",
"Finally, we have open sourced all of our code, which we hope will accelerate future research into ML-guided program synthesis.",
"The top-down synthesizer that we use as a baseline in this work.",
"In a loop until a satisfying program is found or we run out of time, we pop the lowest-cost partial program from the queue of all partial programs, then we fill in the holes in all ways allowed by the type system, pushing each new partial program back onto the queue.",
"If there are no holes to fill, the program is complete, and we check it against the spec.",
"The cost of a partial program is the sum of the costs of its pool elements, plus a lower bound on the cost of filling each of its typed holes, plus the sum of the costs of a few special operations such as tuple construction and lambda abstraction."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.4000000059604645,
0.0714285671710968,
0.13333332538604736,
0.06666666269302368,
0.08695651590824127,
0.07407406717538834,
0.06451612710952759,
0.1249999925494194,
0.1875,
0.1875,
0.0833333283662796,
0.1666666567325592,
0.19999998807907104,
0.12121211737394333,
0.09090908616781235,
0.2142857164144516,
0.17777776718139648,
0.29999998211860657,
0.1904761791229248,
0.5238094925880432,
0.25641024112701416,
0.25,
0.1621621549129486,
0.1249999925494194,
0.2222222238779068,
0.09999999403953552,
0.2666666507720947,
0.15094339847564697,
0,
0.29999998211860657,
0.22857142984867096,
0.25,
0.2745097875595093,
0.2857142686843872,
0.25,
0.15789473056793213,
0.09302324801683426,
0.15789473056793213,
0.08510638028383255,
0.12121211737394333,
0.1666666567325592,
0.19354838132858276,
0.22727271914482117,
0.1621621549129486,
0.19999998807907104,
0.1111111044883728,
0.17142856121063232,
0.16326530277729034
] | rylHspEKPr | true | [
"We represent a computer program using a set of simpler programs and use this representation to improve program synthesis techniques."
] |
[
"Social dilemmas are situations where individuals face a temptation to increase their payoffs at a cost to total welfare.",
"Building artificially intelligent agents that achieve good outcomes in these situations is important because many real world interactions include a tension between selfish interests and the welfare of others.",
"We show how to modify modern reinforcement learning methods to construct agents that act in ways that are simple to understand, nice (begin by cooperating), provokable (try to avoid being exploited), and forgiving (try to return to mutual cooperation).",
"We show both theoretically and experimentally that such agents can maintain cooperation in Markov social dilemmas.",
"Our construction does not require training methods beyond a modification of self-play, thus if an environment is such that good strategies can be constructed in the zero-sum case (eg. Atari) then we can construct agents that solve social dilemmas in this environment.",
"Bilateral cooperative relationships, where individuals face a choice to pay personal costs to give larger benefits to others, are ubiquitous in our daily lives.",
"In such situations mutual cooperation can lead to higher payoffs for all involved but there always exists an incentive to free ride.",
"In a seminal work BID3 asks a practical question: since social dilemmas are so ubiquitous, how should a person behave when confronted with one?",
"In this work we will take up a variant of that question: how can we construct artificial agents that can solve complex bilateral social dilemmas?",
"First, we must define what it means to 'solve' a social dilemma.",
"The simplest social dilemma is the two player, repeated Prisoner's Dilemma (PD).",
"Here each player chooses to either cooperate or defect each turn.",
"Mutual cooperation earns high rewards for both players.",
"Defection improves one's payoff but only at a larger cost to one's partner.",
"For the PD, BID2 suggest the strategy of tit-for-tat (TFT): begin by cooperating and in later turns copy whatever your partner did in the last turn.TFT and its variants (eg. Win-Stay-Lose-Shift, BID37 ) have been studied extensively across many domains including the social and behavioral sciences, biology, and computer science.",
"TFT is popular for several reasons.",
"First, it is able to avoid exploitation by defectors while reaping the benefits of cooperation with cooperators.",
"Second, when TFT is paired with other conditionally cooperative strategies (eg. itself) it achieves cooperative payoffs.",
"Third, it is error correcting because after an accidental defection is provides a way to return to cooperation.",
"Fourth, it is simple to explain to a partner and creates good incentives: if one person commits to using TFT, their partner's best choice is to cooperate rather than try to cheat.Our contribution is to expand the idea behind to TFT to a different environment: one shot Markov social dilemmas that require function approximation (eg. deep reinforcement learning).",
"We will work with the standard deep RL setup: at training time, our agent is given access to the Markov social dilemma and can use RL to compute a strategy.",
"At test time the agent is matched with an unknown partner and gets to play the game with that partner once.We will say that the agent can solve a social dilemma if it can satisfy the four TFT properties listed above.",
"We call our strategy approximate (because we use RL function approximation) Markov (because the game is Markov) tit-for-tat (amTFT) which we show can solve more complex Markov social dilemmas.The first issue amTFT needs to tackle is that unlike in the PD 'cooperation' and 'defection' are no longer simple labeled strategies, but rather sequences of choices.",
"amTFT uses modified self-play 1 to learn two policies at training time: a fully cooperative policy and a 'safe' policy (we refer to this as defection).",
"Humans are remarkably adapted to solving bilateral social dilemmas.",
"We have focused on how to give artificial agents this capability.",
"We have shown that amTFT can maintain cooperation and avoid exploitation in Markov games.",
"In addition we have provided a simple construction for this strategy that requires no more than modified self-play.",
"Thus, amTFT can be applied to social dilemmas in many environments.Our results emphasize the importance of treating agents as fundamentally different than other parts of the environment.",
"In particular, agents have beliefs, desires, learn, and use some form of optimization while objects follow simple fixed rules.",
"An important future direction for constructing cooperative agents is to continue to incorporate ideas from inverse reinforcement learning BID0 BID36 and cognitive science BID5 BID26 to construct agents that exhibit some theory of mind.There is a growing literature on hybrid systems which include both human and artificial agents BID10 BID50 .",
"In this work we have focused on defining 'cooperation' as maximizing the joint payoff.",
"This assumption seems reasonable in symmetric situations such as those we have considered, however, as we discuss in the introduction it may not always be appropriate.",
"The amTFT construction can be easily modified to allow other types of focal points simply by changing the modified reward function used in the training of the cooperative strategies (for example by using the inequity averse utility functions of BID15 ).",
"However moving forward in constructing agents that can interact in social dilemmas with humans will require AI designers (and their agents) to understand and adapt to human cooperative and moral intutions BID27 Yoeli et al., 2013; BID21 BID39 In a social dilemma there exists an equilibrium of mutual defection, and there may exist additional equilibria of conditional cooperation.",
"Standard self-play may converge to any of these equilibria.",
"When policy spaces are large, it is often the case that simple equilibria of constant mutual defection have larger basins of attraction than policies which maintain cooperation.We can illustrate this with the simple example of the repeated Prisoner's Dilemma.",
"Consider a PD with payoffs of 0 to mutual defection, 1 for mutual cooperation, w > 1 for defecting on a cooperative partner and −s for being defected on while cooperating.",
"Consider the simplest possible state representation where the set of states is the pair of actions played last period and let the initial state be (C, C) (this is the most optimistic possible setup).",
"We consider RL agents that use policy gradient (results displayed here come from using Adam BID25 , similar results were obtained with SGD though convergence speed was much more sensitive to the setting of the learning rate) to learn policies from states (last period actions) to behavior.Note that this policy space contains TFT (cooperate after (C, C), (D, C), defect otherwise), Grim Trigger (cooperate after (C, C), defect otherwise) and Pavlov or Win-Stay-Lose-Shift (cooperate after (C, C), (D, D), defect otherwise BID37 ) which are all cooperation maintaining strategies (though only Grim and WSLS are themselves full equilibria).Each",
"episode is defined as one repeated PD game which lasts a random number of periods with stopping probability of stopping .05 after each period. Policies",
"in the game are maps from the onememory state space {(C, C), (D, C), (C, D), (D, D)} to either cooperation or not. These policies",
"are trained using policy gradient and the REINFORCE algorithm (Williams, 1992) . We vary w and",
"set s = 1.5w such that (C, C) is the most efficient strategy always. Note that all",
"of these parameters are well within the range where humans discover cooperative strategies in experimental applications of the repeated PD BID7 . Figure 4 shows",
"that cooperation only robustly occurs when it is a dominant strategy for both players (w < 0) and thus the game is no longer a social dilemma. 10 .10 Note that",
"these results use pairwise learning and therefore are different from evolutionary game theoretic results on the emergence of cooperation BID38 . Those results show",
"that indeed cooperation can robustly emerge in these kinds of strategy spaces under evolutionary processes. Those results differ",
"because they rely on the following argument: suppose we have a population of defectors. This can be invaded",
"by mutants of TFT because TFT can try cooperation in the first round. If it is matched with",
"a defector, it loses once but it then defects for the rest of the time, if it is matched with another TFT then they cooperate for a long time. Thus, for Figure 4: Results",
"from training one-memory strategies using policy gradient in the repeated Prisoner's Dilemma. Even in extremely favorable",
"conditions self-play fails to discover cooperation maintaining strategies. Note that temptation payoff",
".5 is not a PD and here C is a dominant strategy in the stage game."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.6341463327407837,
0.11320754140615463,
0.1071428507566452,
0.25,
0.25806450843811035,
0.21739129722118378,
0.13333332538604736,
0.1304347813129425,
0.3478260934352875,
0.2222222238779068,
0.0555555522441864,
0.05882352590560913,
0,
0.2222222238779068,
0.029411759227514267,
0,
0.04878048226237297,
0.05128204822540283,
0.09999999403953552,
0.1666666567325592,
0.19607841968536377,
0.20689654350280762,
0.18666666746139526,
0.12765957415103912,
0.1818181723356247,
0.17142856121063232,
0.10526315122842789,
0.1428571343421936,
0.19999998807907104,
0.04651162400841713,
0.14492753148078918,
0.052631575614213943,
0.04255318641662598,
0.06896550953388214,
0.21052631735801697,
0.060606054961681366,
0.06779660284519196,
0.12244897335767746,
0.03999999538064003,
0.058252424001693726,
0.04255318641662598,
0.04347825422883034,
0,
0.04878048226237297,
0.04347825422883034,
0.11764705181121826,
0,
0.0952380895614624,
0.1428571343421936,
0.04878048226237297,
0.039215680211782455,
0,
0.1666666567325592,
0.052631575614213943
] | rJIN_4lA- | true | [
"How can we build artificial agents that solve social dilemmas (situations where individuals face a temptation to increase their payoffs at a cost to total welfare)?"
] |
[
"The reparameterization trick has become one of the most useful tools in the field of variational inference.",
"However, the reparameterization trick is based on the standardization transformation which restricts the scope of application of this method to distributions that have tractable inverse cumulative distribution functions or are expressible as deterministic transformations of such distributions.",
"In this paper, we generalized the reparameterization trick by allowing a general transformation.",
"Unlike other similar works, we develop the generalized transformation-based gradient model formally and rigorously.",
"We discover that the proposed model is a special case of control variate indicating that the proposed model can combine the advantages of CV and generalized reparameterization.",
"Based on the proposed gradient model, we propose a new polynomial-based gradient estimator which has better theoretical performance than the reparameterization trick under certain condition and can be applied to a larger class of variational distributions.",
"In studies of synthetic and real data, we show that our proposed gradient estimator has a significantly lower gradient variance than other state-of-the-art methods thus enabling a faster inference procedure.",
"Most machine learning objective function can be rewritten in the form of an expectation:",
"where θ is a parameter vector.",
"However, due to the intractability of the expectation, it's often impossible or too expensive to calculate the exact gradient w.r.t θ, therefore it's inevitable to estimate the gradient ∇ θ L in practical applications.",
"Stochastic optmization methods such as reparameterization trick and score function methods have been widely applied to address the stochastic gradient estimation problem.",
"Many recent advances in large-scale machine learning tasks have been brought by these stochastic optimization tricks.",
"Like in other stochastic optimzation related works, our paper mainly focus on variational inference tasks.",
"The primary goal of variational inference (VI) task is to approximate the posterior distribution in probabilistic models (Jordan et al., 1999; Wainwright & Jordan, 2008) .",
"To approximate the intractable posterior p(z|x) with the joint probability distribution p(x, z) over observed data x and latent random variables z given, VI introduces a parameteric family of distribution q θ (z) and find the best parameter θ by optimizing the Kullback-Leibler (KL) divergence D KL (q(z; θ) p(z|x)).",
"The performance of VI methods depends on the capacity of the parameteric family of distributions (often measured by Rademacher complexity) and the ability of the optimizer.",
"In this paper, our method tries to introduce a better optimizer for a larger class of parameteric family of distributions.",
"The main idea of our work is to replace the parameter-independent transformation in reparameterization trick with generalized transformation and construct the generalized transformation-based (G-TRANS) gradient with the velocity field which is related to the characteristic curve of the sublinear partial differential equation associated with the generalized transformation.",
"Our gradient model further generalizes the G-REP (Ruiz et al., 2016) and provides a more elegant and flexible way to construct gradient estimators.",
"We mainly make the following contributions:",
"1. We develop a generalized transformation-based gradient model based on the velocity field related to the generalized transformation and explicitly propose the unbiasedness constraint on the G-TRANS gradient.",
"The proposed gradient model provides a more poweful and flexible way to construct gradient estimators.",
"2. We show that our model is a generalization of the score function method and the reparameterization trick.",
"Our gradient model can reduce to the reparameterization trick by enforcing a transport equation constraint on the velocity field.",
"We also show our model's connection to control variate method.",
"3. We propose a polynomial-based gradient estimator that cannot be induced by any other existing generalized reparameterization gradient framework, and show its superiority over similar works on several experiments.",
"The rest of this paper is organized as follows.",
"In Sec.2 we review the stochastic gradient variational inference (SGVI) and stochastic gradient estimators.",
"In Sec.3 we propose the generalized transformation-based gradient.",
"In Sec.4 we propose the polynomial-based G-TRANS gradient estimator.",
"In Sec.5 we study the performance of our gradient estimator on synthetic and real data.",
"In Sec.6 we review the related works.",
"In Sec.7 we conclude this paper and discuss future work.",
"We proposed a generalized transformation-based (G-TRANS) gradient model which extends the reparameterization trick to a larger class of variational distributions.",
"Our gradient model hides the details of transformation by introducing the velocity field and provides a flexible way to construct gradient estimators.",
"Based on the proposed gradient model, we introduced a polynomial-based G-TRANS gradient estimator that cannot be induced by any other existing generalized reparameterization gradient framework.",
"In practice, our gradient estimator provides a lower gradient variance than other state-of-the-art methods, leading to a fast converging process.",
"For future work, We can consider how to construct G-TRANS gradient estimators for distributions that don't have analytical high-order moments.",
"We can also utilize the results from the approximation theory to find certain kinds of high-order polynomial functions that can approximate the test function effectively with cheap computations for the coefficients.",
"Constructing velocity fields with the optimal transport theory is also a promising direction."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.06896550953388214,
0.08695651590824127,
0.2222222238779068,
0.4285714328289032,
0.34285715222358704,
0.2978723347187042,
0.1904761791229248,
0.0714285671710968,
0.09999999403953552,
0.09302324801683426,
0.17142856121063232,
0,
0,
0.04999999701976776,
0.10344827175140381,
0.11764705181121826,
0.0624999962747097,
0.21276594698429108,
0.277777761220932,
0.20000000298023224,
0.555555522441864,
0.2857142686843872,
0.32258063554763794,
0.25,
0.0833333283662796,
0.380952388048172,
0,
0.2222222238779068,
0.43478259444236755,
0.4166666567325592,
0.2666666507720947,
0.09090908616781235,
0.07999999821186066,
0.42424240708351135,
0.29411762952804565,
0.3243243098258972,
0.1875,
0.11764705181121826,
0.09756097197532654,
0.14814814925193787
] | H1lqSC4YvB | true | [
"We propose a novel generalized transformation-based gradient model and propose a polynomial-based gradient estimator based upon the model."
] |
[
"The fault diagnosis in a modern communication system is traditionally supposed to be difficult, or even impractical for a purely data-driven machine learning approach, for it is a humanmade system of intensive knowledge.",
"A few labeled raw packet streams extracted from fault archive can hardly be sufficient to deduce the intricate logic of underlying protocols.",
"In this paper, we supplement these limited samples with two inexhaustible data sources: the unlabeled records probed from a system in service, and the labeled data simulated in an emulation environment.",
"To transfer their inherent knowledge to the target domain, we construct a directed information flow graph, whose nodes are neural network components consisting of two generators, three discriminators and one classifier, and whose every forward path represents a pair of adversarial optimization goals, in accord with the semi-supervised and transfer learning demands.",
"The multi-headed network can be trained in an alternative approach, at each iteration of which we select one target to update the weights along the path upstream, and refresh the residual layer-wisely to all outputs downstream.",
"The actual results show that it can achieve comparable accuracy on classifying Transmission Control Protocol (TCP) streams without deliberate expert features.",
"The solution has relieved operation engineers from massive works of understanding and maintaining rules, and provided a quick solution independent of specific protocols.",
"A telecommunications network is a collection of distributed devices, entirely designed and manufactured by humans for a variety of transmission, control and management tasks, striving to provide a transparent channel between external terminals, via an actual internal relay process node by node.",
"As a typical conversation in the style of client and server, the two linked nodes send their messages in the form of packets, encapsulated the load with miscellaneous attributes in headers to ensure the correctness, consistency, and smoothness of the entire process.",
"A typical header includes packet sequence number, source and destination addresses, control bits, error detection codes, etc.The large-scale network cannot always work ideally, due to its inherent complexity inside massive devices and their interactions.",
"When there is a malfunction of a device, either caused by the traffic overload, or software bugs, or hardware misconfiguration, or malicious attacks, it will be reflected on the packet streams that pass through, such as packet loss, timeout, out of order, etc.",
"System administrators captured those suspicious streams and sent back to the service center for cautious offline analysis, which is time-consuming and domain-specific.The primary challenge of automatic diagnosis is that, it is almost impossible to formalize all the logic inside the system and make them available to artificial intelligence.",
"A typical modern communication system consists of tens of thousands devices end-to-end and runs based on a list of hundreds of protocols layer-by-layer BID6 ).",
"If we could figure out the latent states of protocols by constructing specific features from raw bytes, the subsequent classification tasks would be quite straightforward and easy to implement.",
"For instance, the Transmission Control Protocol (TCP) relies on sequence numbers to judge the receiving order of packets, which may be just big integers roughly linearly growing from the view of machine learning models.",
"Another example is a few critical control bits may reside among much more useless bits, such as checksum codes, which is harmful noises for models.",
"Even we have the patience to dive into all the industrial protocols and build up an exhausted feature library; eventually, we will fail again to achieve the target of automation, one of the main advantages of the modern data-driven approach.Another difficulty is scarce of labeled samples.",
"In spite of there are seemingly numerous packet flows running through the Internet all the time, the real valid faults occur at random and occupy only a tiny portion of whole traffic volume.",
"The actual labeled data are usually collected from the archive of fault cases, which is hard to have enough samples for all possible categories, or cannot at least cover them completely.The previous works on this issue mainly follow two technical routes:",
"1) a traditional two-phase framework, using expert features and some general-propose classifiers BID1 );",
"2) an end-to-end approach based on deep learning for automatic feature extraction (Javaid et al. (2016) ).",
"All these prior arts seldom use the generative models, which is usually more promising for expressing structural relationship among random variables.",
"And they may fuse 1-2 data sources in semi-supervised setting (Javaid et al. (2016) ), but not scale to even more data sources.In this paper, we resort to a generative model to mimic the messages in a terminals conversation and enrich the target data domain from two abundant but different information sources: labeled but from simulation, and genuine but unlabeled.",
"The transfer and semi-supervised demands are integrated into an intuitive framework, composed of a connected graph of multiple simple Generative Adversarial Networks (GANs)' components, trained in an alternative optimization approach.",
"The contribution of this paper includes:",
"1) combine three kinds of data sources in a generative approach, to solve the small-sample problem with a simulation environment;",
"2) extend the two players in usual GANs to a system of multiple ones, still keeping its merit of end-to-end training;",
"3) verify its effect on our practice problem of packet sequence classification.The left of paper is organized as below: first, we introduce the previous work selectively in network anomaly detection and the research frontier in the generative neural network.",
"Next, we present the model and algorithm in detail with feature design at different levels.",
"The results of experiments are followed in Section 4.",
"Finally, we conclude the whole article.",
"In this paper, the widely used semi-supervised and transfer learning requirements have been implemented in an integrated way, via a system of cooperative or adversarial neural blocks.",
"Its effectiveness has been verified in our application of packet flow classification, and it is hopeful to be a widely adopted method in this specific domain.",
"The work also prompts us that, complex machine learning tasks and their compound loss functions can be directly mapped into connected networks, and their optimization process can be designed over an entire graph, rather than each individual's hierarchical layers.",
"In future work, we may study how to apply this approach to even larger scale tasks, and make a theoretical analysis of the existence of equilibrium and why we can always reach it."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0
] | [
0.2222222238779068,
0.10256409645080566,
0.13333332538604736,
0.29032257199287415,
0.07999999821186066,
0.052631575614213943,
0.1621621549129486,
0.15094339847564697,
0.12244897335767746,
0.07843136787414551,
0.18518517911434174,
0.10344827175140381,
0.25641024112701416,
0.08888888359069824,
0.1249999925494194,
0.04878048226237297,
0.072727270424366,
0.1702127605676651,
0.10344827175140381,
0.12903225421905518,
0.11764705181121826,
0,
0.0923076868057251,
0.2222222238779068,
0.08695651590824127,
0.1111111044883728,
0.1621621549129486,
0.19230768084526062,
0.0624999962747097,
0.07692307233810425,
0,
0.5909090638160706,
0.2857142686843872,
0.07692307233810425,
0.1304347813129425
] | SJjADecmf | true | [
"semi-supervised and transfer learning on packet flow classification, via a system of cooperative or adversarial neural blocks"
] |
[
"Our work addresses two important issues with recurrent neural networks: (1) they are over-parameterized, and (2) the recurrent weight matrix is ill-conditioned.",
"The former increases the sample complexity of learning and the training time.",
"The latter causes the vanishing and exploding gradient problem.",
"We present a flexible recurrent neural network model called Kronecker Recurrent Units (KRU).",
"KRU achieves parameter efficiency in RNNs through a Kronecker factored recurrent matrix.",
"It overcomes the ill-conditioning of the recurrent matrix by enforcing soft unitary constraints on the factors.",
"Thanks to the small dimensionality of the factors, maintaining these constraints is computationally efficient.",
"Our experimental results on seven standard data-sets reveal that KRU can reduce the number of parameters by three orders of magnitude in the recurrent weight matrix compared to the existing recurrent models, without trading the statistical performance.",
"These results in particular show that while there are advantages in having a high dimensional recurrent space, the capacity of the recurrent part of the model can be dramatically reduced.",
"Deep neural networks have defined the state-of-the-art in a wide range of problems in computer vision, speech analysis, and natural language processing BID28 BID36 .",
"However, these models suffer from two key issues.",
"(1) They are over-parametrized; thus it takes a very long time for training and inference.",
"(2) Learning deep models is difficult because of the poor conditioning of the matrices that parameterize the model.",
"These difficulties are especially relevant to recurrent neural networks.",
"Indeed, the number of distinct parameters in RNNs grows as the square of the size of the hidden state conversely to convolutional networks which enjoy weight sharing.",
"Moreover, poor conditioning of the recurrent matrices results in the gradients to explode or vanish exponentially fast along the time horizon.",
"This problem prevents RNN from capturing long-term dependencies BID22 BID5 .There",
"exists an extensive body of literature addressing over-parametrization in neural networks. BID31",
"first studied the problem and proposed to remove unimportant weights in neural networks by exploiting the second order information. Several",
"techniques which followed include low-rank decomposition BID13 , training a small network on the soft-targets predicted by a big pre-trained network BID2 , low bit precision training BID12 , hashing BID8 , etc. A notable",
"exception is the deep fried convnets BID44 which explicitly parameterizes the fully connected layers in a convnet with a computationally cheap and parameter-efficient structured linear operator, the Fastfood transform BID29 . These techniques",
"are primarily aimed at feed-forward fully connected networks and very few studies have focused on the particular case of recurrent networks BID1 .The problem of vanishing",
"and exploding gradients has also received significant attention. BID23 proposed an effective",
"gating mechanism in their seminal work on LSTMs. Later, this technique was adopted",
"by other models such as the Gated Recurrent Units (GRU) BID10 and the Highway networks BID39 for recurrent and feed-forward neural networks respectively. Other popular strategies include",
"gradient clipping BID37 , and orthogonal initialization of the recurrent weights . More recently BID1 proposed to use",
"a unitary recurrent weight matrix. The use of norm preserving unitary",
"maps prevent the gradients from exploding or vanishing, and thus help to capture long-term dependencies. The resulting model called unitary",
"RNN (uRNN) is computationally efficient since it only explores a small subset of general unitary matrices. Unfortunately, since uRNNs can only",
"span a reduced subset of unitary matrices their expressive power is limited BID42 . We denote this restricted capacity",
"unitary RNN as RC uRNN. Full capacity unitary RNN (FC uRNN",
") BID42 proposed to overcome this issue by parameterizing the recurrent matrix with a full dimensional unitary matrix, hence sacrificing computational efficiency. Indeed, FC uRNN requires a computationally",
"expensive projection step which takes O(N 3 ) time (N being the size of the hidden state) at each step of the stochastic optimization to maintain the unitary constraint on the recurrent matrix. BID35 in their orthogonal RNN (oRNN) avoided",
"the expensive projection step in FC uRNN by parametrizing the orthogonal matrices using Householder reflection vectors, it allows a fine-grained control over the number of parameters by choosing the number of Householder reflection vectors. When the number of Householder reflection vector",
"approaches N this parametrization spans the full reflection set, which is one of the disconnected subset of the full orthogonal set. BID25 also presented a way of parametrizing unitary",
"matrices which allows fine-grained control on the number of parameters. This work called as Efficient Unitary RNN (EURNN),",
"exploits the continuity of unitary set to have a tunable parametrization ranging from a subset to the full unitary set.Although the idea of parametrizing recurrent weight matrices with strict unitary linear operator is appealing, it suffers from several issues: (1) Strict unitary constraints severely restrict the search space of the model, thus making the learning process unstable. (2) Strict unitary constraints make forgetting irrelevant",
"information difficult. While this may not be an issue for problems with non-vanishing",
"long term influence, it causes failure when dealing with real world problems that have vanishing long term influence 4.7. BID20 have previously pointed out that the good performance of",
"strict unitary models on certain synthetic problems is because it exploits the biases in these data-sets which favors a unitary recurrent map and these models may not generalize well to real world data-sets. More recently BID41 have also studied this problem of unitary",
"RNNs and the authors found out that relaxing the strict unitary constraint on the recurrent matrix to a soft unitary constraint improved the convergence speed as well as the generalization performance.Our motivation is to address the problems of existing recurrent networks mentioned above. We present a new model called Kronecker Recurrent Units (KRU)",
". At the heart of KRU is the use of Kronecker factored recurrent",
"matrix which provide an elegant way to adjust the number of parameters to the problem at hand. This factorization allows us to finely modulate the number of",
"parameters required to encode N × N matrices, from O(log(N )) when using factors of size 2 × 2, to O(N 2 ) parameters when using a single factor of the size of the matrix itself. We tackle the vanishing and exploding gradient problem through",
"a soft unitary constraint BID26 BID20 BID11 BID41 . Thanks to the properties of Kronecker matrices BID40 , this constraint",
"can be enforced efficiently. Please note that KRU can readily be plugged into vanilla real space RNN",
", LSTM and other variants in place of standard recurrent matrices. However in case of LSTMs we do not need to explicitly enforce the approximate",
"orthogonality constraints as the gating mechanism is designed to prevent vanishing and exploding gradients. Our experimental results on seven standard data-sets reveal that KRU and KRU",
"variants of real space RNN and LSTM can reduce the number of parameters drastically (hence the training and inference time) without trading the statistical performance. Our core contribution in this work is a flexible, parameter efficient and expressive",
"recurrent neural network model which is robust to vanishing and exploding gradient problem.The paper is organized as follows, in section 2 we restate the formalism of RNN and detail the core motivations for KRU. In section 3 we present the Kronecker recurrent units (KRU). We present our experimental",
"findings in section 4 and section 5 concludes our work. DISPLAYFORM0",
"We have presented a new recurrent neural network model based on its core a Kronecker factored recurrent matrix.",
"Our core reason for using a Kronecker factored recurrent matrix stems from it's elegant algebraic and spectral properties.",
"Kronecker matrices are neither low-rank nor block-diagonal but it is multi-scale like the FFT matrix.",
"Kronecker factorization provides a fine control over the model capacity and it's algebraic properties enable us to design fast matrix multiplication algorithms.",
"It's spectral properties allow us to efficiently enforce constraints like positive semi-definitivity, unitarity and stochasticity.",
"As we have shown, we used the spectral properties to efficiently enforce a soft unitary constraint.Experimental results show that our approach out-perform classical methods which uses O(N 2 ) parameters in the recurrent matrix.",
"Maybe as important, these experiments show that both on toy problems ( § 4.1 and 4.2), and on real ones ( § 4.3, 4.4, , and § 4.6) , while existing methods require tens of thousands of parameters in the recurrent matrix, competitive or better than state-of-the-art performance can be achieved with far less parameters in the recurrent weight matrix.",
"These surprising results provide a new and counter-intuitive perspective on desirable memory-capable architectures: the state should remain of high dimension to allow the use of high-capacity networks to encode the input into the internal state, and to extract the predicted value, but the recurrent dynamic itself can, and should, be implemented with a low-capacity model.From a practical standpoint, the core idea in our method is applicable not only to vanilla recurrent neural networks and LSTMS as we showed, but also to a variety of machine learning models such as feed-forward networks BID46 , random projections and boosting weak learners.",
"Our future work encompasses exploring other machine learning models and on dynamically increasing the capacity of the models on the fly during training to have a perfect balance between computational efficiency and sample complexity."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.25641024112701416,
0.13793103396892548,
0.07407406717538834,
0.25806450843811035,
0.2666666507720947,
0.1249999925494194,
0.12903225421905518,
0.11999999731779099,
0.1395348757505417,
0.24390242993831635,
0,
0.1818181723356247,
0.12121211737394333,
0.2222222238779068,
0.14999999105930328,
0.1621621549129486,
0,
0.19999998807907104,
0.1621621549129486,
0.04255318641662598,
0.08510638028383255,
0.1904761791229248,
0.06666666269302368,
0.06451612710952759,
0.2380952388048172,
0.17142856121063232,
0.2857142686843872,
0.052631575614213943,
0.21621620655059814,
0.1666666567325592,
0,
0.08888888359069824,
0.07692307233810425,
0.1304347813129425,
0.0952380895614624,
0.1666666567325592,
0.1515151411294937,
0.06451612710952759,
0.045454539358615875,
0.178571417927742,
0.2295081913471222,
0.2142857164144516,
0.10256409645080566,
0.11764705181121826,
0.2222222238779068,
0,
0.19512194395065308,
0.0476190410554409,
0.23529411852359772,
0.20338982343673706,
0.1428571343421936,
0.23529411852359772,
0.277777761220932,
0.12121211737394333,
0.19999998807907104,
0.060606054961681366,
0.07843136787414551,
0.11940298229455948,
0.12765957415103912,
0.1702127605676651
] | H1YynweCb | true | [
"Out work presents a Kronecker factorization of recurrent weight matrices for parameter efficient and well conditioned recurrent neural networks."
] |
[
"This paper studies the undesired phenomena of over-sensitivity of representations learned by deep networks to semantically-irrelevant changes in data.",
"We identify a cause for this shortcoming in the classical Variational Auto-encoder (VAE) objective, the evidence lower bound (ELBO).",
"We show that the ELBO fails to control the behaviour of the encoder out of the support of the empirical data distribution and this behaviour of the VAE can lead to extreme errors in the learned representation.",
"This is a key hurdle in the effective use of representations for data-efficient learning and transfer.",
"To address this problem, we propose to augment the data with specifications that enforce insensitivity of the representation with respect to families of transformations.",
"To incorporate these specifications, we propose a regularization method that is based on a selection mechanism that creates a fictive data point by explicitly perturbing an observed true data point.",
"For certain choices of parameters, our formulation naturally leads to the minimization of the entropy regularized Wasserstein distance between representations.",
"We illustrate our approach on standard datasets and experimentally show that significant improvements in the downstream adversarial accuracy can be achieved by learning robust representations completely in an unsupervised manner, without a reference to a particular downstream task and without a costly supervised adversarial training procedure. \n",
"Representation learning is a fundamental problem in Machine learning and holds the promise to enable data-efficient learning and transfer to new tasks.",
"Researchers working in domains like Computer Vision (Krizhevsky et al., 2012) and Natural Language Processing (Devlin et al., 2018) have already demonstrated the effectiveness of representations and features computed by deep architectures for the solution of other tasks.",
"A case in point is the example of the FC7 features from the AlexNet image classification architecture that have been used for many other vision problems (Krizhevsky et al., 2012) .",
"The effectiveness of learned representations has given new impetus to research in representation learning, leading to a lot of work being done on the development of techniques for inducing representations from data having desirable properties like disentanglement and compactness (Burgess et al., 2018; Achille & Soatto, 2017; Bengio, 2013; Locatello et al., 2019) .",
"Many popular techniques for generating representation are based on the Variational AutoEncoders (VAE) model (Kingma & Welling, 2013; Rezende et al., 2014) .",
"The use of deep networks as universal function approximators has facilitated very rapid advancements which samples generated from these models often being indistinguishable from natural data.",
"While the quality of generated examples can provide significant convincing evidence that a generative model is flexible enough to capture the variability in the data distribution, it is far from a formal guarantee that the representation is fit for other purposes.",
"In fact, if the actual goal is learning good latent representations, evaluating generative models only based on reconstruction fidelity and subjective quality of typical samples is neither sufficient nor entirely necessary, and can be even misleading.",
"In this paper, we uncover the problematic failure mode where representations learned by VAEs exhibit over-sensitivity to semantically-irrelevant changes in data.",
"One example of such problematic behaviour can be seen in Figure 1 .",
"We identify a cause for this shortcoming in the classical Vari-ational Auto-encoder (VAE) objective, the evidence lower bound (ELBO) , that fails to control the behaviour of the encoder out of the support of the empirical data distribution.",
"We show this behaviour of the VAE can lead to extreme errors in the recovered representation by the encoder and is a key hurdle in the effective use of representations for data-efficient learning and transfer.",
"To address this problem, we propose to augment the data with properties that enforce insensitivity of the representation with respect to families of transformations.",
"To incorporate these specifications, we propose a regularization method that is based on a selection mechanism that creates a fictive data point by explicitly perturbing an observed true data point.",
"For certain choices of parameters, our formulation naturally leads to the minimization of the entropy regularized Wasserstein distance between representations.",
"We illustrate our approach on standard datasets and experimentally show that significant improvements in the downstream adversarial accuracy can be achieved by learning robust representations completely in an unsupervised manner, without a reference to a particular downstream task and without a costly supervised adversarial training procedure.",
"Figure 1: An illustration of the intrinsic fragility of VAE representations.",
"Outputs from a Variational Autoencoder with encoder f and decoder g parametrized by η and θ, respectively, trained on CelebA.",
"Conditioned on the encoder input X a = x a the decoder output X = g(f (x a )) = (g • f )(x a ) is shown on the top row.",
"When the original example is perturbed with a carefully selected vector d such that X b = X a + d with d ≤ , the output X turns out to be perceptually very different.",
"Such examples suggest that either the representations Z a and Z b are very different (the encoder is not smooth), or the decoder is very sensitive to small changes in the representation (the decoder is not smooth), or both.",
"We identify the source of the problem primarily as the encoder and propose a practical solution.",
"It is clear that if learned representations are overly sensitive to irrelevant changes in the input (for example, small changes in the pixels of an image or video, or inaudible frequencies added to an audio signal), models that rely on these representations are naturally susceptible to make incorrect predictions when inputs are changed.",
"We argue that such specifications about the robustness properties of learned representations can be one of the tractable guiding features in the search for good representations.",
"Based on these observations, we make the following contributions:",
"1. We introduce a method for learning robust latent representations by explicitly targeting a structured model that admits the original VAE model as a marginal.",
"We also show that in the case the target is chosen a pairwise conditional random field with attractive potentials, this choice leads naturally to the Wasserstein divergence between posterior distributions over the latent space.",
"This insight provides us a flexible class of robustness metrics for controlling representations learned by VAEs.",
"2. We develop a modification to training algorithms for VAEs to improve robustness of learned representations, using an external selection mechanism for obtaining transformed examples and by enforcing the corresponding representations to be close.",
"As a particular selection mechanism, we adopt attacks in adversarial supervised learning (Madry et al., 2017) to attacks to the latent representation.",
"Using this novel unsupervised training procedure we learn encoders with adjustable robustness properties and show that these are effective at learning representations that perform well across a variety of downstream tasks.",
"3. We show that alternative models proposed in the literature, in particular β-VAE model used for explicitly controlling the learned representations, or Wasserstein Generative Adversarial Networks (GANs) can also be interpreted in our framework as variational lower bound maximization.",
"4. We show empirically using simulation studies on MNIST, color MNIST and CelebA datasets, that models trained using our method learn representations that provide a higher degree of adversarial robustness even without supervised adversarial training.",
"In this paper, we have introduced a method for improving robustness of latent representations learned by a VAE.",
"It must be stressed that our goal is not building the most powerful adversarially robust supervised classifier, but obtaining a method for learning generic representations that can be used for several tasks; the tasks can be even unknown at the time of learning the representations.",
"While the nominal accuracy of an unsupervised approach is expected to be inferior to a supervised training method that is informed by extra label information, we observe that significant improvements in adversarial robustness can be achieved by our approach that forces smooth representations."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1249999925494194,
0.25,
0.09999999403953552,
0.2666666507720947,
0.05882352590560913,
0.20512820780277252,
0.0624999962747097,
0.25925925374031067,
0.1249999925494194,
0.1249999925494194,
0.09302324801683426,
0.12903225421905518,
0.05405404791235924,
0,
0.1249999925494194,
0.0416666641831398,
0.11428570747375488,
0.07692307233810425,
0.17777776718139648,
0.23255813121795654,
0.05882352590560913,
0.20512820780277252,
0.0624999962747097,
0.2641509473323822,
0.0833333283662796,
0.060606054961681366,
0.05405404791235924,
0.0476190447807312,
0.1428571343421936,
0.2142857164144516,
0.1071428507566452,
0.2222222238779068,
0,
0.3333333432674408,
0.13333332538604736,
0.19999998807907104,
0.2222222238779068,
0.11428570747375488,
0.13636362552642822,
0.11999999731779099,
0.17391303181648254,
0.25806450843811035,
0.2448979616165161,
0.23999999463558197
] | H1gfFaEYDS | true | [
"We propose a method for computing adversarially robust representations in an entirely unsupervised way."
] |
[
"We propose a novel score-based approach to learning a directed acyclic graph (DAG) from observational data.",
"We adapt a recently proposed continuous constrained optimization formulation to allow for nonlinear relationships between variables using neural networks.",
"This extension allows to model complex interactions while being more global in its search compared to other greedy approaches.",
"In addition to comparing our method to existing continuous optimization methods, we provide missing empirical comparisons to nonlinear greedy search methods.",
"On both synthetic and real-world data sets, this new method outperforms current continuous methods on most tasks while being competitive with existing greedy search methods on important metrics for causal inference.",
"Structure learning and causal inference have many important applications in different areas of science such as genetics [5, 12] , biology [13] and economics [7] .",
"Bayesian networks (BN), which encode conditional independencies using directed acyclic graphs (DAG), are powerful models which are both interpretable and computationally tractable.",
"Causal graphical models (CGM) [12] are BNs which support interventional queries like: What will happen if someone external to the system intervene on variable X?",
"Recent work suggests that causality could partially solve challenges faced by current machine learning systems such as robustness to out-of-distribution samples, adaptability and explainability [8, 6] .",
"However, structure and causal learning are daunting tasks due to both the combinatorial nature of the space of structures and the question of structure identifiability [12] .",
"Nevertheless, these graphical models known qualities and promises of improvement for machine intelligence renders the quest for structure/causal learning appealing.",
"The problem of structure learning can be seen as an inverse problem in which the learner tries to infer the causal structure which has generated the observation.",
"In this work, we propose a novel score-based method [5, 12] for structure learning named GraN-DAG which makes use of a recent reformulation of the original combinatorial problem of finding an optimal DAG into a continuous constrained optimization problem.",
"In the original method named NOTEARS [18] , the directed graph is encoded as a weighted adjacency matrix W which represents coefficients in a linear structural equation model (SEM) [7] .",
"To enforce acyclicity, the authors propose a constraint which is both efficiently computable and easily differentiable.",
"Most popular score-based methods for DAG learning usually tackle the combinatorial nature of the problem via greedy search procedures relying on multiple heuristics [3, 2, 11] .",
"Moving toward the continuous paradigm allows one to use gradient-based optimization algorithms instead of handdesigned greedy search algorithms.",
"Our first contribution is to extend the work of [18] to deal with nonlinear relationships between variables using neural networks (NN) [4] .",
"GraN-DAG is general enough to deal with a large variety of parametric families of conditional probability distributions.",
"To adapt the acyclicity constraint to our nonlinear model, we use an argument similar to what is used in [18] and apply it first at the level of neural network paths and then at the level of graph paths.",
"Our adapted constraint allows us to exploit the full flexibility of NNs.",
"On both synthetic and real-world tasks, we show GraN-DAG outperforms other approaches which leverage the continuous paradigm, including DAG-GNN [16] , a recent nonlinear extension of [18] independently developed which uses an evidence lower bound as score.",
"Our second contribution is to provide a missing empirical comparison to existing methods that support nonlinear relationships but tackle the optimization problem in its discrete form using greedy search procedures such as CAM [2] .",
"We show that GraN-DAG is competitive on the wide range of tasks we considered.",
"We suppose the natural phenomenon of interest can be described by a random vector X ∈ R d entailed by an underlying CGM (P X , G) where P X is a probability distribution over X and G = (V, E) is a DAG [12] .",
"Each node i ∈ V corresponds to exactly one variable in the system.",
"Let π G i denote the set of parents of node i in G and let X π G i denote the random vector containing the variables corresponding to the parents of i in G. We assume there are no hidden variables.",
"In a CGM, the distribution P X is said to be Markov to G which means we can write the probability density function (pdf) as p(",
". A CGM can be thought of as a BN in which directed edges are given a causal meaning, allowing it to answer queries regarding interventional distributions [5] ."
] | [
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.34285715222358704,
0.41025641560554504,
0.052631575614213943,
0.10256409645080566,
0.16326530277729034,
0.09090908616781235,
0.14999999105930328,
0.08888888359069824,
0.1304347813129425,
0.19999998807907104,
0.1538461446762085,
0.1428571343421936,
0.29629629850387573,
0.0416666604578495,
0.1111111044883728,
0.13333332538604736,
0.10810810327529907,
0.1463414579629898,
0.1111111044883728,
0.11764705181121826,
0.0624999962747097,
0.1428571343421936,
0.11320754140615463,
0.05882352590560913,
0.10344827175140381,
0.060606054961681366,
0.17391303181648254,
0.09090908616781235,
0.1304347813129425
] | ryl6nX398r | true | [
"We are proposing a new score-based approach to structure/causal learning leveraging neural networks and a recent continuous constrained formulation to this problem"
] |
[
"We study the problem of designing provably optimal adversarial noise algorithms that induce misclassification in settings where a learner aggregates decisions from multiple classifiers.",
"Given the demonstrated vulnerability of state-of-the-art models to adversarial examples, recent efforts within the field of robust machine learning have focused on the use of ensemble classifiers as a way of boosting the robustness of individual models.",
"In this paper, we design provably optimal attacks against a set of classifiers.",
"We demonstrate how this problem can be framed as finding strategies at equilibrium in a two player, zero sum game between a learner and an adversary and consequently illustrate the need for randomization in adversarial attacks.",
"The main technical challenge we consider is the design of best response oracles that can be implemented in a Multiplicative Weight Updates framework to find equilibrium strategies in the zero-sum game.",
"We develop a series of scalable noise generation algorithms for deep neural networks, and show that it outperforms state-of-the-art attacks on various image classification tasks.",
"Although there are generally no guarantees for deep learning, we show this is a well-principled approach in that it is provably optimal for linear classifiers.",
"The main insight is a geometric characterization of the decision space that reduces the problem of designing best response oracles to minimizing a quadratic function over a set of convex polytopes.",
"In this paper, we study adversarial attacks that induce misclassification when a learner has access to multiple classifiers.",
"One of the most pressing concerns within the field of AI has been the welldemonstrated sensitivity of machine learning algorithms to noise and their general instability.",
"Seminal work by has shown that adversarial attacks that produce small perturbations can cause data points to be misclassified by state-of-the-art models, including neural networks.",
"In order to evaluate classifiers' robustness and improve their training, adversarial attacks have become a central focus in machine learning and security BID21 BID17 BID23 .Adversarial",
"attacks induce misclassification by perturbing data points past the decision boundary of a particular class. In the case",
"of binary linear classifiers, for example, the optimal perturbation is to push points in the direction perpendicular to the separating hyperplane. For non-linear",
"models there is no general characterization of an optimal perturbation, though attacks designed for linear classifiers tend to generalize well to deep neural networks BID21 .Since a learner",
"may aggregate decisions using multiple classifiers, a recent line of work has focused on designing attacks on an ensemble of different classifiers BID31 BID0 BID13 . In particular,",
"this line of work shows that an entire set of state-of-the-art classifiers can be fooled by using an adversarial attack on an ensemble classifier that averages the decisions of the classifiers in that set. Given that attacking",
"an entire set of classifiers is possible, the natural question is then:What is the most effective approach to design attacks on a set of multiple classifiers?The main challenge when",
"considering attacks on multiple classifiers is that fooling a single model, or even the ensemble classifier (i.e. the model that classifies a data point by averaging individual predictions), provides no guarantees that the learner will fail to classify correctly. Models may have different",
"decision boundaries, and perturbations that affect one may be ineffective on another. Furthermore, a learner can",
"randomize over classifiers and avoid deterministic attacks (see Figure 1 ). c 2 c 1 Figure 1 : Illustration",
"of why randomization is necessary to compute optimal adversarial attacks. In this example using binary linear",
"classifiers, there is a single point that is initially classified correctly by two classifiers c1, c2, and a fixed noise budget α in the ℓ2 norm. A naive adversary who chooses a noise",
"perturbation deterministically will always fail to trick the learner since she can always select the remaining classifier. An optimal adversarial attack in this",
"scenario consists of randomizing with equal probability amongst both noise vectors.In this paper, we present a principled approach for attacking a set of classifiers which proves to be highly effective. We show that constructing optimal adversarial",
"attacks against multiple classifiers is equivalent to finding strategies at equilibrium in a zero sum game between a learner and an adversary. It is well known that strategies at equilibrium",
"in a zero sum game can be obtained by applying the celebrated Multiplicative Weights Update framework, given an oracle that computes a best response to a randomized strategy. The main technical challenge we address pertains",
"to the characterization and implementation of such oracles. Our main contributions can be summarized as follows:•",
"We describe the Noise Synthesis FrameWork (henceforth NSFW) for generating adversarial attacks. This framework reduces the problem of designing optimal",
"adversarial attacks for a general set of classifiers to constructing a best response oracle in a two player, zero sum game between a learner and an adversary; • We show that NSFW is an effective approach for designing adversarial noise that fools neural networks. In particular, applying projected gradient descent on an",
"appropriately chosen loss function as a proxy for a best response oracle achieves performance that significantly improves upon current state-of-the-art attacks (see results in Figure 2 ); • We show that applying projected gradient descent on an appropriately chosen loss function is a well-principled approach. We do so by proving that for linear classifiers such an",
"approach yields an optimal adversarial attack if the equivalent game has a pure Nash equilibrium. This result is shown via a geometric characterization of",
"the decision boundary space which reduces the problem of designing optimal attacks to a convex program; • If the game does not have a pure Nash equilibrium, there is an algorithm for finding an optimal adversarial attack for linear classifiers whose runtime is exponential in the number of classifiers. We show that finding an optimal strategy in this case is",
"NP-hard.Paper organization. Following a discussion on related work, in Section 2 we",
"formulate the problem of designing optimal adversarial noise and show how it can be modeled as finding strategies at equilibrium in a two player, zero sum game. Afterwards, we discuss our approach for finding such strategies",
"using MWU and proxies for best response oracles. In Section 2 .1, we justify our approach by proving guarantees",
"for linear classifiers. Lastly, in Section 3, we present our experiments.Additional related",
"work. The field of adversarial attacks on machine learning classifiers has",
"recently received widespread attention from a variety of perspectives BID1 BID9 BID25 BID3 . In particular, a significant amount of effort has been devoted to computing",
"adversarial examples that induce misclassification across multiple models BID22 BID21 . There has been compelling evidence which empirically demonstrates the effectiveness",
"of ensembles as way of both generating and defending against adversarial attacks. For example, BID31 establish the strengths of ensemble training as a defense against",
"adversarial attacks. Conversely, provide the first set of experiments showing that attacking an ensemble",
"classifier is an effective way of generating adversarial examples that transfer to the underlying models. Relative to their investigation, our work differs in certain key aspects. Rather than",
"analyzing adversarial noise from a security perspective and developing methods",
"for black-box attacks, we approach the problem from a theoretical point of view and introduce a formal characterization of the optimal attack against a set of classifiers. Furthermore, by analyzing noise in the linear setting, we design algorithms for this task",
"that have strong guarantees of performance. Through our experiments, we demonstrate how these algorithms motivate a natural extension",
"for noise in deep learning that achieves state-of-the-art results.",
"Designing adversarial attacks when a learner has access to multiple classifiers is a non-trivial problem.",
"In this paper we introduced NSFW which is a principled approach that is provably optimal on linear classifiers and empirically effective on neural networks.",
"The main technical crux is in designing best response oracles which we achieve through a geometrical characterization of the optimization landscape.",
"We believe NSFW can generalize to domains beyond those in this paper."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.4000000059604645,
0.2181818187236786,
0.25641024112701416,
0.20338982343673706,
0.1090909019112587,
0.31372547149658203,
0.2857142686843872,
0.19230768084526062,
0.22727271914482117,
0.2083333283662796,
0.16326530277729034,
0.15686273574829102,
0.1395348757505417,
0.260869562625885,
0.2641509473323822,
0.23529411852359772,
0.22641508281230927,
0.19999998807907104,
0.1538461446762085,
0.0952380895614624,
0.1463414579629898,
0.2380952388048172,
0.1818181723356247,
0.12765957415103912,
0.2295081913471222,
0.23529411852359772,
0.06779660284519196,
0.1428571343421936,
0.35555556416511536,
0.22857142984867096,
0.19718308746814728,
0.16326530277729034,
0.3380281627178192,
0.05128204822540283,
0.2666666507720947,
0.08888888359069824,
0.1538461446762085,
0.2702702581882477,
0.04081632196903229,
0.21739129722118378,
0.25531914830207825,
0.29999998211860657,
0.15094339847564697,
0.1111111044883728,
0.33898305892944336,
0.13636362552642822,
0.34285715222358704,
0.25,
0.25,
0.1702127605676651,
0
] | rkl4M3R5K7 | true | [
"Paper analyzes the problem of designing adversarial attacks against multiple classifiers, introducing algorithms that are optimal for linear classifiers and which provide state-of-the-art results for deep learning."
] |
[
"Multiagent systems where the agents interact among themselves and with an stochastic environment can be formalized as stochastic games.",
"We study a subclass of these games, named Markov potential games (MPGs), that appear often in economic and engineering applications when the agents share some common resource.",
"We consider MPGs with continuous state-action variables, coupled constraints and nonconvex rewards.",
"Previous analysis followed a variational approach that is only valid for very simple cases (convex rewards, invertible dynamics, and no coupled constraints); or considered deterministic dynamics and provided open-loop (OL) analysis, studying strategies that consist in predefined action sequences, which are not optimal for stochastic environments.",
"We present a closed-loop (CL) analysis for MPGs and consider parametric policies that depend on the current state and where agents adapt to stochastic transitions.",
"We provide easily verifiable, sufficient and necessary conditions for a stochastic game to be an MPG, even for complex parametric functions (e.g., deep neural networks); and show that a closed-loop Nash equilibrium (NE) can be found (or at least approximated) by solving a related optimal control problem (OCP).",
"This is useful since solving an OCP---which is a single-objective problem---is usually much simpler than solving the original set of coupled OCPs that form the game---which is a multiobjective control problem.",
"This is a considerable improvement over the previously standard approach for the CL analysis of MPGs, which gives no approximate solution if no NE belongs to the chosen parametric family, and which is practical only for simple parametric forms.",
"We illustrate the theoretical contributions with an example by applying our approach to a noncooperative communications engineering game.",
"We then solve the game with a deep reinforcement learning algorithm that learns policies that closely approximates an exact variational NE of the game.",
"In a noncooperative stochastic dynamic game, the agents compete in a time-varying environment, which is characterized by a discrete-time dynamical system equipped with a set of states and a state-transition probability distribution.",
"Each agent has an instantaneous reward function, which can be stochastic and depends on agents' actions and current system state.",
"We consider that both the state and action sets are subsets of real vector spaces and subject to coupled constraints, as usually required by engineering applications.A dynamic game starts at some initial state.",
"Then, the agents take some action and the game moves to another state and gives some reward values to the agents.",
"This process is repeated at every time step over a (possibly) infinite time horizon.",
"The aim of each agent is to find the policy that maximizes its expected long term return given other agents' policies.",
"Thus, a game can be represented as a set of coupled optimal-control-problems (OCPs), which are difficult to solve in general.OCPs are usually analyzed for two cases namely open-loop (OL) or closed-loop (CL), depending on the information that is available to the agents when making their decisions.",
"In the OL analysis, the action is a function of time, so that we find an optimal sequence of actions that will be executed in order, without feedback after any action.",
"In the CL setting, the action is a mapping from the state, usually referred as feedback policy or simply policy, so the agent can adapt its actions based on feedback from the environment (the state transition) at every time step.",
"For deterministic systems, both OL and CL solutions can be optimal and coincide in value.",
"But for stochastic system, an OL strategy consisting in a precomputed sequence of actions cannot adapt to the stochastic dynamics so that it is unlikely to be optimal.",
"Thus, CL are usually preferred over OL solutions.For dynamic games, the situation is more involved than for OCPs, see, e.g., BID1 .",
"In an OL dynamic game, agents' actions are functions of time, so that an OL equilibrium can be visualized as a set of state-action trajectories.",
"In a CL dynamic game, agents' actions depend on the current state variable, so that, at every time step, they have to consider how their opponents would react to deviations from the equilibrium trajectory that they have followed so far, i.e., a CL equilibrium might be visualized as a set of trees of state-action trajectories.",
"The sets of OL and CL equilibria are generally different even for deterministic dynamic games BID10 BID5 .The",
"CL analysis of dynamic games with continuous variables is challenging and has only be addressed for simple cases.The situation is even more complicated when we consider coupled constraints, since each agent's actions must belong to a set that depends on the other agents' actions. These",
"games, where the agents interact strategically not only with their rewards but also at the level of the feasible sets, are known as generalized Nash equilibrium problems BID3 .There",
"is a class of games, named Markov potential games (MPGs), for which the OL analysis shows that NE can be found by solving a single OCP; see BID6 BID25 for recent surveys on MPGs. Thus,",
"the benefit of MPGs is that solving a single OCP is generally simpler than solving a set of coupled OCPs. MPGs",
"appear often in economics and engineering applications, where multiple agents share a common resource (a raw material, a communication link, a transportation link, an electrical transmission line) or limitations (a common limit on the total pollution in some area). Nevertheless",
", to our knowledge, none previous study has provided a practical method for finding CL Nash equilibrium (CL-NE) for continuous MPGs.Indeed, to our knowledge, no previous work has proposed a practical method for finding or approximating CL-NE for any class of Markov games with continuous variables and coupled constraints. State-of-the-art",
"works on learning CL-NE for general-sum Markov games did not consider coupled constraints and assumed finite state-action sets BID18 BID16 .In this work, we",
"extend previous OL analysis due to BID26 BID23 and tackle the CL analysis of MPGs with coupled constraints. We assume that the",
"agents' policies lie in a parametric set. This assumption makes",
"derivations simpler, allowing us to prove that, under some potentiality conditions on the reward functions, a game is an MPG. We also show that, similar",
"to the OL case, the Nash equilibrium (NE) for the approximate game can be found as an optimal policy of a related OCP. This is a practical approach",
"for finding or at least approximating NE, since if the parametric family is expressive enough to represent the complexities of the problem under study, we can expect that the parametric solution will approximate an equilibrium of the original MPG well (under mild continuity assumptions, small deviations in the parametric policies should translate to small perturbations in the value functions). We remark that this parametric",
"policy assumption has been widely used for learning the solution of single-agent OCPs with continuous state-action sets; see, e.g., BID9 Melo and Lopes, 2008; BID17 BID24 BID20 . Here, we show that the same idea",
"can be extended to MPGs in a principled manner.Moreover, once we have formulated the related OCP, we can apply reinforcement learning techniques to find an optimal solution. Some recent works have applied deep",
"reinforcement learning (DRL) to cooperative Markov games BID4 BID22 , which are a particular case of MPGs. Our results show that similar approaches",
"can be used for more general MPGs.",
"We have extended previous results on MPGs with constrained continuous state-action spaces providing practical conditions and a detailed analysis of Nash equilibrium with parametric policies, showing that a PCL-NE can be found by solving a related OCP.",
"Having established a relationship between a MPG and an OCP is a significant step for finding an NE, since we can apply standard optimal control and reinforcement learning techniques.",
"We illustrated the theoretical results by applying TRPO (a well known DRL method) to an example engineering application, obtaining a PCL-NE that yields near optimal results, very close to an exact variational equilibrium.A EXAMPLE: THE \"GREAT FISH WAR\" GAME -STANDARD APPROACH Let us illustrate the standard approach described in Section 3 with a well known resource-sharing game named \"the great fish war\" due to BID11 .",
"We follow (González-Sánchez and Hernández-Lerma, 2013, Sec. 4.2).",
"Example 1.",
"Let x i be the stock of fish at time i, in some fishing area.",
"Suppose there are N countries obtaining reward from fish consumption, so that they aim to solve the following game: DISPLAYFORM0 where x 0 ≥ 0 and 0 < α < 1 are given.In order to solve G fish , let us express each agent's action as: DISPLAYFORM1 so that the rewards can be also expressed in reduced form, as required by the standard-approach: DISPLAYFORM2 Thus, the Euler equations for every agent k ∈ N and all t = 0, . . . , ∞ become: DISPLAYFORM3 Now, the standard method consists in guessing a family of parametric functions that replaces the policy, and checking whether such parametric policy satisfies (32) for some parameter vector.",
"Let us try with policies that are linear mappings of the state: DISPLAYFORM4 By replacing (33) in (32), we obtain the following set of equations: DISPLAYFORM5 Fortunately, it turns out that (34) has solution (which might not be the case for other policy parametrization), with parameters given by: DISPLAYFORM6 Since 0 < α < 1 and 0 ≤ γ < 1, it is apparent that w k > 0 and the constraint π k (x i ) ≥ 0 holds for all x i ≥ 0.",
"Moreover, since k∈N w k < 1, we have that x i+1 ≥ 0 for any x 0 ≥ 0.",
"In addition, since x i is a resource and the actions must be nonnegative, it follows that lim i→∞ x i = 0 (there is no reason to save some resource).",
"Therefore, the transversality condition holds.",
"Since the rewards are concave, the states are non-negative and the linear policies with these coefficients satisfy the Euler and transversality equations, we conclude that they constitute an equilibrium (González-Sánchez and Hernández-Lerma, 2013, Theorem 4.1).B",
"EXAMPLE: \"GREAT FISH WAR\" GAME -PROPOSED APPROACHIn this section, we illustrate how to apply the proposed approach with the same \"the great fish war\" example, obtaining the same results as with the standard approach.Example 2. Consider",
"\"the great fish war\" game described in Example 1. In order",
"to use our approach, we replace the generic policy with the specific policy mapping of our preference. We choose",
"the linear mapping, π k (x i ) = w k x i , to be able to compare the results with those obtained with the standard approach. Thus, we",
"have the following game: DISPLAYFORM7 Let us verify conditions FORMULA9 - FORMULA9 . For all",
"k, j ∈ N we have: DISPLAYFORM8 DISPLAYFORM9 Since conditions FORMULA9 - FORMULA9 hold, we conclude that FORMULA5 is an MPG. By applying",
"the line integral FORMULA2 , we obtain: DISPLAYFORM10 Now, we can solve OCP (16) with potential function (43). For this particular",
"problem, it is easy to solve the KKT system in closed form. Introduce a shorthand",
": DISPLAYFORM11 The Euler-Lagrange equation (62) for this problem becomes: DISPLAYFORM12 The optimality condition (64) with respect to the policy parameter becomes: DISPLAYFORM13 Let us solve for β i in (46): DISPLAYFORM14 Replacing FORMULA6 and the state-transition dynamics in FORMULA6 , we obtain the following set of equations: DISPLAYFORM15 Hence, the parameters can be obtained as: DISPLAYFORM16 This is exactly the same solution that we obtained in Example 1 with the standard approach. We remark that for the",
"standard approach, we were able to obtain the policy parameters since we put the correct parametric form of the policy in the Euler equation. If we had used another",
"parametric family without a linear term, the Euler equations (32) might have no solution and we would have got stuck. In contrast, with our",
"approach, we could freely choose any other form of the parametric policy, and always solve the KKT system of the approximate game. Broadly speaking, we",
"can say that the more expressive the parametric family, the more likely that the optimal policy of the original game will be accurately approximated by the optimal solution of the approximate game."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.19512194395065308,
0.23999999463558197,
0.11428570747375488,
0.12121211737394333,
0.2978723347187042,
0.3235293924808502,
0.04081632196903229,
0.145454540848732,
0.04878048226237297,
0.22727271914482117,
0.039215680211782455,
0.1428571343421936,
0.1090909019112587,
0.052631575614213943,
0,
0.045454539358615875,
0.1818181723356247,
0.07999999821186066,
0.035087715834379196,
0.1621621549129486,
0.12244897335767746,
0.04255318641662598,
0.17777776718139648,
0.08571428060531616,
0.14999999105930328,
0.17910447716712952,
0.07999999821186066,
0.2857142686843872,
0.05128204822540283,
0.035087715834379196,
0.19672130048274994,
0.21276594698429108,
0.1860465109348297,
0,
0.08510638028383255,
0.25,
0.16438356041908264,
0.20689654350280762,
0.18867923319339752,
0.260869562625885,
0.3333333432674408,
0.28070175647735596,
0.2083333283662796,
0.0731707289814949,
0.1249999925494194,
0.052631575614213943,
0.08771929144859314,
0.08888888359069824,
0.10256409645080566,
0.11764705181121826,
0,
0.1111111044883728,
0,
0,
0.05128204822540283,
0.04255318641662598,
0,
0.045454539358615875,
0.09302324801683426,
0.052631575614213943,
0.1428571343421936,
0.04347825422883034,
0.04347825422883034,
0.09090908616781235,
0.1818181723356247
] | rJm7VfZA- | true | [
"We present general closed loop analysis for Markov potential games and show that deep reinforcement learning can be used for learning approximate closed-loop Nash equilibrium."
] |
[
"We make the following striking observation: fully convolutional VAE models trained on 32x32 ImageNet can generalize well, not just to 64x64 but also to far larger photographs, with no changes to the model.",
"We use this property, applying fully convolutional models to lossless compression, demonstrating a method to scale the VAE-based 'Bits-Back with ANS' algorithm for lossless compression to large color photographs, and achieving state of the art for compression of full size ImageNet images.",
"We release Craystack, an open source library for convenient prototyping of lossless compression using probabilistic models, along with full implementations of all of our compression results.",
"Bits back coding (Wallace, 1990; Hinton & van Camp, 1993 ) is a method for performing lossless compression using a latent variable model.",
"In an ideal implementation, the method can achieve an expected message length equal to the variational free energy, often referred to as the negative evidence lower bound (ELBO) of the model.",
"Bits back was first introduced to form a theoretical argument for using the ELBO as an objective function for machine learning (Hinton & van Camp, 1993) .",
"The first implementation of bits back coding (Frey, 1997; Frey & Hinton, 1996) made use of first-infirst-out (FIFO) arithmetic coding (AC) (Witten et al., 1987) .",
"However, the implementation did not achieve optimal compression, due to an incompatibility between a FIFO coder and bits back coding, and its use was only demonstrated on a small dataset of 8×8 binary images.",
"Compression' (HiLLoC).",
"In our experiments (Section 4), we demonstrate that HiLLoC can be used to compress color images from the ImageNet test set at rates close to the ELBO, outperforming all of the other codecs which we benchmark.",
"We also demonstrate the speedup, of nearly three orders of magnitude, resulting from vectorization.",
"We release an open source implementation based on 'Craystack', a Python package which we have written for general prototyping of lossless compression with ANS.",
"Our experiments demonstrate HiLLoC as a bridge between large scale latent variable models and compression.",
"To do this we use simple variants of pre-existing VAE models.",
"Having shown that bits back coding is flexible enough to compress well with large, complex models, we see plenty of work still to be done in searching model structures (i.e. architecture search), optimizing with a trade-off between compression rate, encode/decode time and memory usage.",
"Particularly pertinent for HiLLoC is latent dimensionality, since compute time and memory usage both scale with this.",
"Since the model must be stored/transmitted to use HiLLoC, weight compression is also highly relevant.",
"This is a well-established research area in machine learning (Han et al., 2016; Ullrich et al., 2017) .",
"Our experiments also demonstrated that one can achieve good performance on a dataset of large images by training on smaller images.",
"This result is promising, but future work should be done to discover what the best training datasets are for coding generic images.",
"One question in particular is whether results could be improved by training on larger images and/or images of varying size.",
"We leave this to future work.",
"Another related direction for improvement is batch compression of images of different sizes using masking, analogous to how samples of different length may be processed in batches by recurrent neural nets.",
"Whilst this work has focused on latent variable models, there is also promise in applying state of the art fully observed auto-regressive models to lossless compression.",
"We look forward to future work investigating the performance of models such as WaveNet (van den Oord et al., 2016) for lossless audio compression as well as PixelCNN++ (Salimans et al., 2017) and the state of the art models in Menick & Kalchbrenner (2019) for images.",
"Sampling speed for these models, and thus decompression, scales with autoregressive sequence length, and can be very slow.",
"This could be a serious limitation, particularly in common applications where encoding is performed once but decoding is performed many times.",
"This effect can be mitigated by using dynamic programming (Le Paine et al., 2016; Ramachandran et al., 2017) , and altering model architecture (Reed et al., 2017) , but on parallel architectures sampling/decompression is still significantly slower than with VAE models.",
"On the other hand, fully observed models, as well as the flow based models of Hoogeboom et al. (2019) and , do not require bits back coding, and therefore do not have to pay the one-off cost of starting a chain.",
"Therefore they may be well suited to situations where one or a few i.i.d. samples are to be communicated.",
"Similar to the way that we use FLIF to code the first images for our experiments, one could initially code images using a fully observed model then switch to a faster latent variable model once a stack of bits has been built up.",
"We presented HiLLoC, an extension of BB-ANS to hierarchical latent variable models, and show that HiLLoC can perform well with large models.",
"We open-sourced our implementation, along with the Craystack package for prototyping lossless compression.",
"We have also explored generalization of large VAE models, and established that fully convolutional VAEs can generalize well to other datasets, including images of very different size to those they were trained on.",
"We have described how to compress images of arbitrary size with HiLLoC, achieving a compression rate superior to the best available codecs on ImageNet images.",
"We look forward to future work reuniting machine learning and lossless compression."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0
] | [
0.17777776718139648,
0.2800000011920929,
0.21052631735801697,
0.1621621549129486,
0,
0,
0,
0.08510638028383255,
0.08510638028383255,
0.0714285671710968,
0.25641024112701416,
0.19999998807907104,
0,
0.06896551698446274,
0.1875,
0.06666666269302368,
0,
0.11764705181121826,
0.05405404791235924,
0.11764705181121826,
0.0952380895614624,
0.09302324801683426,
0.19512194395065308,
0.15094339847564697,
0.0624999962747097,
0,
0.07999999821186066,
0,
0,
0.11999999731779099,
0.1621621549129486,
0.2857142686843872,
0.1304347813129425,
0.31578946113586426,
0.2222222238779068
] | r1lZgyBYwS | true | [
"We scale up lossless compression with latent variables, beating existing approaches on full-size ImageNet images."
] |
[
" State of the art computer vision models have been shown to be vulnerable to small adversarial perturbations of the input.",
"In other words, most images in the data distribution are both correctly classified by the model and are very close to a visually similar misclassified image.",
"Despite substantial research interest, the cause of the phenomenon is still poorly understood and remains unsolved.",
"We hypothesize that this counter intuitive behavior is a naturally occurring result of the high dimensional geometry of the data manifold.",
"As a first step towards exploring this hypothesis, we study a simple synthetic dataset of classifying between two concentric high dimensional spheres.",
"For this dataset we show a fundamental tradeoff between the amount of test error and the average distance to nearest error.",
"In particular, we prove that any model which misclassifies a small constant fraction of a sphere will be vulnerable to adversarial perturbations of size $O(1/\\sqrt{d})$.",
"Surprisingly, when we train several different architectures on this dataset, all of their error sets naturally approach this theoretical bound.",
"As a result of the theory, the vulnerability of neural networks to small adversarial perturbations is a logical consequence of the amount of test error observed.",
"We hope that our theoretical analysis of this very simple case will point the way forward to explore how the geometry of complex real-world data sets leads to adversarial examples.",
"There has been substantial work demonstrating that standard image models exhibit the following phenomenon: most randomly chosen images from the data distribution are correctly classified and yet are close to a visually similar nearby image which is incorrectly classified BID22 .",
"This is often referred to as the phenomenon of adversarial examples.",
"These adversarially found errors can be constructed to be surprisingly robust, invariant to viewpoint, orientation and scale BID3 .",
"Despite some theoretical work and many proposed defense strategies BID6 BID18 BID20 ) the cause of this phenomenon is still poorly understood.There have been several hypotheses proposed regarding the cause of adversarial examples.",
"We briefly survey some of them here.",
"One common hypothesis is that neural network classifiers are too linear in various regions of the input space, BID17 .",
"Another hypothesis is that adversarial examples are off the data manifold BID2 a; BID16 .",
"BID6 argue that large singular values of internal weight matrices may cause the classifier to be vulnerable to small perturbations of the input.Alongside works endeavoring to explain adversarial examples, others have proposed defenses in order to increase robustness.",
"Some works increase robustness to small perturbations by changing the non-linearities used BID14 , distilling a large network into a small network BID20 , or using regularization BID6 .",
"Other works explore detecting adversarial examples using a second statistical model BID7 BID0 BID11 BID19 .",
"However, many of these methods have been shown to fail BID4 BID7 .",
"Finally, adversarial training has been shown in many instances to increase robustness BID18 BID15 BID22 .",
"Despite some progress on increasing robustness to adversarial perturbations, local errors have still been shown to appear for distances just beyond what is adversarially trained for BID21 .",
"This phenomenon is quite intriguing given that these models are highly accurate on the test set.",
"We hypothesize that this behavior is a naturally occurring result of the high dimensional nature of the data manifold.",
"In order to begin to investigate this hypothesis, we define a simple synthetic task of classifying between two concentric high dimensional spheres.",
"This allows us to study adversarial examples in a setting where the data manifold is well defined mathematically and where we have an analytic characterization of the decision boundary learned by the model.",
"Even more importantly, we can naturally vary the dimension of the data manifold and study the effect of the input dimension on the geometry of the generalization error of neural networks.",
"Our experiments and theoretical analysis on this dataset demonstrate the following:• A similar behavior to that of image models occurs: most randomly chosen points from the data distribution are correctly classified and yet are \"close\" to an incorrectly classified input.",
"This behavior occurs even when the test error rate is less than 1 in 10 million.•",
"For this dataset, there is a fundamental tradeoff between the amount of generalization error and the average distance to the nearest error. In",
"particular, we show that any model which misclassifies a small constant fraction of the sphere will be vulnerable to adversarial perturbations of size O(1 DISPLAYFORM0 • Neural networks trained on this dataset naturally approach this theoretical optimal tradeoff between the measure of the error set and the average distance to nearest error. This",
"implies that in order to linearly increase the average distance to nearest error, the error rate of the model must decrease exponentially.• We",
"also show that models trained on this dataset may become extremely accurate even when ignoring a large fraction of the input.We conclude with a detailed discussion about the connection between adversarial examples for the sphere and those for image models.",
"In this work we attempted to gain insight into the existence of adversarial examples for image models by studying a simpler synthetic dataset.",
"After training different neural network architectures on this dataset we observe a similar phenomenon to that of image models -most random points in the data distribution are both correctly classified and are close to a misclassified point.",
"We then explained this phenomenon for this particular dataset by proving a theoretical tradeoff between the error rate of a model and the average distance to nearest error independently of the model.",
"We also observed that several different neural network architectures closely match this theoretical bound.Theorem 5.1 is significant because it reduces the question of why models are vulnerable to adversarial examples to the question of why is there a small amount of classification error.",
"It is unclear if anything like theorem 5.1 would hold for an image manifold, and future work should investigate if a similar principal applies.",
"Our work suggests that even a small amount of classification error may sometimes logically force the existence of many adversarial examples.",
"This could explain why fixing the adversarial example problem has been so difficult despite substantial research interest.",
"For example, one recent work uses adversarial training to increase robustness in the L ∞ metric BID18 .",
"Although this did increase the size, , of the perturbation needed to reliably produce an error, local errors still remain for larger than those adversarially trained for BID21 .Several",
"defenses against adversarial examples have been proposed recently which are motivated by the assumption that adversarial examples are off the data manifold BID2 a; BID16 . Our results",
"challenge whether or not this assumption holds in general. As shown in",
"section 3 there are local errors both on and off the data manifold. Our results",
"raise many questions as to whether or not it is possible to completely solve the adversarial example problem without reducing test error to 0. The test error",
"rate of state of the art image models is non-zero, this implies that a constant fraction of the data manifold is misclassified and is the unbiased estimate of µ(E). Perhaps this alone",
"is an indication that local adversarial errors exist.The concentric spheres dataset is an extremely simple problem which is unlikely to capture all of the complexities of the geometry of a natural image manifold. Thus we cannot reach",
"the same conclusions about the nature of adversarial examples for real-world datasets. However, we hope that",
"the insights gained from this very simple case will point the way forward to explore how complex real-world data sets leads to adversarial examples."
] | [
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.2448979616165161,
0.2142857164144516,
0.1702127605676651,
0.6274510025978088,
0.30188679695129395,
0.2745097875595093,
0.2545454502105713,
0.11764705181121826,
0.3461538553237915,
0.37288135290145874,
0.2647058665752411,
0.23255813121795654,
0.0833333283662796,
0.19354838132858276,
0.10256409645080566,
0.19607841968536377,
0.30434781312942505,
0.1818181723356247,
0.1428571343421936,
0.12765957415103912,
0.09090908616781235,
0.08510638028383255,
0.14035087823867798,
0.1666666567325592,
0.6122449040412903,
0.30188679695129395,
0.29032257199287415,
0.25925925374031067,
0.2985074520111084,
0.08163265138864517,
0.26923075318336487,
0.2857142686843872,
0.18867923319339752,
0.3529411852359772,
0.4000000059604645,
0.3333333134651184,
0.31578946113586426,
0.3142856955528259,
0.178571417927742,
0.23076923191547394,
0.08163265138864517,
0.12244897335767746,
0.20338982343673706,
0.1818181723356247,
0.04651162400841713,
0.1702127605676651,
0.145454540848732,
0.3928571343421936,
0.375,
0.21276594698429108,
0.25925925374031067
] | SyUkxxZ0b | true | [
"We hypothesize that the vulnerability of image models to small adversarial perturbation is a naturally occurring result of the high dimensional geometry of the data manifold. We explore and theoretically prove this hypothesis for a simple synthetic dataset."
] |
[
"It has been established that diverse behaviors spanning the controllable subspace of a Markov decision process can be trained by rewarding a policy for being distinguishable from other policies.",
"However, one limitation of this formulation is the difficulty to generalize beyond the finite set of behaviors being explicitly learned, as may be needed in subsequent tasks.",
"Successor features provide an appealing solution to this generalization problem, but require defining the reward function as linear in some grounded feature space.",
"In this paper, we show that these two techniques can be combined, and that each method solves the other's primary limitation.",
"To do so we introduce Variational Intrinsic Successor FeatuRes (VISR), a novel algorithm which learns controllable features that can be leveraged to provide enhanced generalization and fast task inference through the successor features framework.",
"We empirically validate VISR on the full Atari suite, in a novel setup wherein the rewards are only exposed briefly after a long unsupervised phase.",
"Achieving human-level performance on 12 games and beating all baselines, we believe VISR represents a step towards agents that rapidly learn from limited feedback.",
"Unsupervised learning has played a major role in the recent progress of deep learning.",
"Some of the earliest work of the present deep learning era posited unsupervised pre-training as a method for overcoming optimization difficulties inherent in contemporary supervised deep neural networks (Hinton et al., 2006; Bengio et al., 2007) .",
"Since then, modern deep neural networks have enabled a renaissance in generative models, with neural decoders allowing for the training of large scale, highly expressive families of directed models (Goodfellow et al., 2014; Van den Oord et al., 2016) as well as enabling powerful amortized variational inference over latent variables (Kingma and Welling, 2013) .",
"We have repeatedly seen how representations from unsupervised learning can be leveraged to dramatically improve sample efficiency in a variety of supervised learning domains (Rasmus et al., 2015; Salimans et al., 2016) .",
"In the reinforcement learning (RL) setting, the coupling between behavior, state visitation, and the algorithmic processes that give rise to behavior complicate the development of \"unsupervised\" methods.",
"The generation of behaviors by means other than seeking to maximize an extrinsic reward has long been studied under the psychological auspice of intrinsic motivation (Barto et al., 2004; Barto, 2013; Mohamed and Rezende, 2015) , often with the goal of improved exploration (Şimşek and Barto, 2006; Oudeyer and Kaplan, 2009; Bellemare et al., 2016) .",
"However, while exploration is classically concerned with the discovery of rewarding states, the acquisition of useful state representations and behavioral skills can also be cast as an unsupervised (i.e. extrinsically unrewarded) learning problem for agents interacting with an environment.",
"In the traditional supervised learning setting, popular classification benchmarks have been employed (with labels removed) as unsupervised representation learning benchmarks, wherein the acquired representations are evaluated based on their usefulness for some downstream task (most commonly the original classification task with only a fraction of the labels reinstated).",
"Analogously, we propose removing the rewards from an RL benchmark environment for unsupervised pre-training of an agent, with their subsequent reinstatement testing for dataefficient adaptation.",
"This setup emulates scenarios where unstructured interaction with the environment, or a closely related environment, is relatively inexpensive to acquire and the agent is expected to perform one or more tasks defined in this environment in the form of rewards.",
"The current state-of-the-art for RL with unsupervised pre-training comes from a class of algorithms which, independent of reward, maximize the mutual information between latent variable policies and their behavior in terms of state visitation, an objective which we refer to as behavioral mutual information (Mohamed and Rezende, 2015; Gregor et al., 2016; Eysenbach et al., 2018; Warde-Farley et al., 2018) .",
"These objectives yield policies which exhibit a great deal of diversity in behavior, with variational intrinsic control (Gregor et al., 2016, VIC) and diversity is all you need (Eysenbach et al., 2018, DIAYN) even providing a natural formalism for adapting to the downstream RL problem.",
"However, both methods suffer from poor generalization and a slow inference process when the reward signal is introduced.",
"The fundamental problem faced by these methods is the requirement to effectively interpolate between points in the latent behavior space, as the most task-appropriate latent skill likely lies \"between\" those learnt during the unsupervised period.",
"The construction of conditional policies which efficiently and effectively generalize to latent codes not encountered during training is an open problem for such methods.",
"Our main contribution is to address this generalization and slow inference problem by making use of another recent advance in RL, successor features (Barreto et al., 2017) .",
"Successor features (SF) enable fast transfer learning between tasks that differ only in their reward function, which is assumed to be linear in some features.",
"Prior to this work, the automatic construction of these reward function features was an open research problem .",
"We show that, despite being previously cast as learning a policy space, behavioral mutual information (BMI) maximization provides a compelling solution to this feature learning problem.",
"Specifically, we show that the BMI objective can be adapted to learn precisely the features required by SF.",
"Together, these methods give rise to an algorithm, Variational Intrinsic Successor FeatuRes (VISR), which significantly improves performance in the RL with unsupervised pre-training scenario.",
"In order to illustrate the efficacy of the proposed method, we augment the popular 57-game Atari suite with such an unsupervised phase.",
"The use of this well-understood collection of tasks allows us to position our contribution more clearly against the current literature.",
"VISR achieves human-level performance on 12 games and outperforms all baselines, which includes algorithms that operate in three regimes: strictly unsupervised, supervised with limited data, and both.",
"Our results suggest that VISR is the first algorithm to achieve notable performance on the full Atari task suite in a setting of few-step RL with unsupervised pre-training, outperforming all baselines and buying performance equivalent to hundreds of millions of interaction steps compared to DQN on some games ( Figure 2c ).",
"As a suggestion for future investigations, the somewhat underwhelming results for the fully unsupervised version of VISR suggest that there is much room for improvement.",
"While curiosity-based methods are transient (i.e., asymptotically their intrinsic reward vanishes) and lack a fast adaptation mechanism, they do seem to encourage exploratory behavior slightly more than VISR.",
"A possible direction for future work would be to use a curiosity-based intrinsic reward inside of VISR, to encourage it to better explore the space of controllable policies.",
"Another interesting avenue for future investigation would be to combine the approach recently proposed by Ozair et al. (2019) to enforce the policies computed by VISR to be not only distinguishable but also far apart in a given metric space.",
"By using SFs on features that maximize BMI, we proposed an approach, VISR, that solves two open questions in the literature: how to compute features for the former and how to infer tasks in the latter.",
"Beyond the concrete method proposed here, we believe bridging the gap between BMI and SFs is an insightful contribution that may inspire other useful methods.",
"For convenience, we can refer to maximizing F(θ) as minimizing the loss function for parameters θ = (θ π , θ q ),",
"where θ π and θ q refer to the parameters of the policy π and variational approximation q, respectively."
] | [
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.2181818187236786,
0.11538460850715637,
0.19999998807907104,
0.1702127605676651,
0.8666666746139526,
0.1599999964237213,
0.07843136787414551,
0.09999999403953552,
0.06779660284519196,
0.07792207598686218,
0.21052631735801697,
0.11764705181121826,
0.05405404791235924,
0.0952380895614624,
0.0882352888584137,
0.03999999538064003,
0.09999999403953552,
0.1012658178806305,
0.11594202369451523,
0.13333332538604736,
0.06896550953388214,
0.07843136787414551,
0.145454540848732,
0.2800000011920929,
0.13636362552642822,
0.11764705181121826,
0.27272728085517883,
0.31372547149658203,
0.08510638028383255,
0.08695651590824127,
0.07547169178724289,
0.1666666567325592,
0.12244897335767746,
0.10526315122842789,
0.19230768084526062,
0.12903225421905518,
0.1428571343421936,
0.07843136787414551,
0.12244897335767746,
0.0952380895614624
] | BJeAHkrYDS | true | [
"We introduce Variational Intrinsic Successor FeatuRes (VISR), a novel algorithm which learns controllable features that can be leveraged to provide fast task inference through the successor features framework."
] |
[
"Predicting structured outputs such as semantic segmentation relies on expensive per-pixel annotations to learn strong supervised models like convolutional neural networks.",
"However, these models trained on one data domain may not generalize well to other domains unequipped with annotations for model finetuning.",
"To avoid the labor-intensive process of annotation, we develop a domain adaptation method to adapt the source data to the unlabeled target domain.",
"To this end, we propose to learn discriminative feature representations of patches based on label histograms in the source domain, through the construction of a disentangled space.",
"With such representations as guidance, we then use an adversarial learning scheme to push the feature representations in target patches to the closer distributions in source ones.",
"In addition, we show that our framework can integrate a global alignment process with the proposed patch-level alignment and achieve state-of-the-art performance on semantic segmentation.",
"Extensive ablation studies and experiments are conducted on numerous benchmark datasets with various settings, such as synthetic-to-real and cross-city scenarios.",
"Recent deep learning-based methods have made significant progress on vision tasks, such as object recognition BID17 and semantic segmentation BID19 , relying on large-scale annotations to supervise the learning process.",
"However, for a test domain different from the annotated training data, learned models usually do not generalize well.",
"In such cases, domain adaptation methods have been developed to close the gap between a source domain with annotations and a target domain without labels.",
"Along this line of research, numerous methods have been developed for image classification BID29 BID8 , but despite recent works on domain adaptation for pixel-level prediction tasks such as semantic segmentation BID14 , there still remains significant room for improvement.",
"Yet domain adaptation is a crucial need for pixel-level predictions, as the cost to annotate ground truth is prohibitively expensive.",
"For instance, road-scene images in different cities may have various appearance distributions, while conditions even within the same city may vary significantly over time or weather.Existing state-of-the-art methods use feature-level BID14 or output space adaptation BID31 to align the distributions between the source and target domains using adversarial learning BID11 BID37 .",
"These approaches usually exploit the global distribution alignment, such as spatial layout, but such global statistics may already differ significantly between two domains due to differences in camera pose or field of view.",
"Figure 1 illustrates one example, where two images share a similar layout, but the corresponding grids do not match well.",
"Such misalignment may introduce an incorrect bias during adaptation.",
"Instead, we consider to match patches that are more likely to be shared across domains regardless of where they are located.One way to utilize patch-level information is to align their distributions through adversarial learning.",
"However, this is not straightforward since patches may have high variation among each other and there is no guidance for the model to know which patch distributions are close.",
"Motivated by recent advances in learning disentangled representations BID18 BID24 , we adopt a similar approach by considering label histograms of patches as a factor and learn discriminative Figure 1 : Illustration of the proposed patch-level alignment against the global alignment that considers the spatial relationship between grids.",
"We first learn discriminative representations for source patches (solid symbols) and push a target representation (unfilled symbol) close to the distribution of source ones, regardless of where these patches are located in the image.representations for patches to relax the high-variation problem among them.",
"Then, we use the learned representations as a bridge to better align patches between source and target domains.Specifically, we utilize two adversarial modules to align both the global and patch-level distributions between two domains, where the global one is based on the output space adaptation BID31 , and the patch-based one is achieved through the proposed alignment by learning discriminative representations.",
"To guide the learning process, we first use the pixel-level annotations provided in the source domain and extract the label histogram as a patch-level representation.",
"We then apply K-means clustering to group extracted patch representations into K clusters, whose cluster assignments are then used as the ground truth to train a classifier shared across two domains for transferring a learned discriminative representation of patches from the source to the target domain.",
"Ideally, given the patches in the target domain, they would be classified into one of K categories.",
"However, since there is a domain gap, we further use an adversarial loss to push the feature representations of target patches close to the distribution of the source patches in this clustered space (see Figure 1 ).",
"Note that our representation learning can be viewed as a kind of disentanglement guided by the label histogram, but is different from existing methods that use pre-defined factors such as object pose BID18 .In",
"experiments, we follow the domain adaptation setting in BID14 and perform pixellevel road-scene image segmentation. We",
"conduct experiments under various settings, including the synthetic-to-real, i.e., GTA5 BID27 )/SYNTHIA BID28 to Cityscapes BID5 ) and cross-city, i.e., Cityscapes to Oxford RobotCar BID23 scenarios. In",
"addition, we provide extensive ablation studies to validate each component in the proposed framework. By",
"combining global and patch-level alignments, we show that our approach performs favorably against state-of-the-art methods in terms of accuracy and visual quality. We",
"note that the proposed framework is general and could be applicable to other forms of structured outputs such as depth, which will be studied in our future work.The contributions of this work are as follows. First",
", we propose a domain adaptation framework for structured output prediction by utilizing global and patch-level adversarial learning modules. Second",
", we develop a method to learn discriminative representations guided by the label histogram of patches via clustering and show that these representations help the patch-level alignment. Third",
", we demonstrate that the proposed adaptation method performs favorably against various baselines and state-of-the-art methods on semantic segmentation.",
"In this paper, we present a domain adaptation method for structured output via a general framework that combines global and patch-level alignments.",
"The global alignment is achieved by the output space adaptation, while the patch-level one is performed via learning discriminative representations of patches across domains.",
"To learn such patch-level representations, we propose to construct a clustered space of the source patches and adopt an adversarial learning scheme to push the target patch distributions closer to the source ones.",
"We conduct extensive ablation study and experiments to validate the effectiveness of the proposed method under numerous challenges on semantic segmentation, including synthetic-to-real and cross-city scenarios, and show that our approach performs favorably against existing algorithms."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0
] | [
0.05882352590560913,
0.11764705181121826,
0.1875,
0.15789473056793213,
0.1666666567325592,
0.05405404791235924,
0,
0.0476190447807312,
0.12903225421905518,
0.11428570747375488,
0.11999999731779099,
0.1875,
0.09836065024137497,
0,
0,
0.09090908616781235,
0.09090908616781235,
0.04878048226237297,
0.145454540848732,
0.1249999925494194,
0.20338982343673706,
0.17142856121063232,
0.15094339847564697,
0,
0.13333332538604736,
0.04444444179534912,
0.13793103396892548,
0,
0,
0.05714285373687744,
0.043478257954120636,
0.42424240708351135,
0.25641024112701416,
0.1249999925494194,
0.47058823704719543,
0.34285715222358704,
0.09756097197532654,
0.043478257954120636
] | B1xFhiC9Y7 | true | [
"A domain adaptation method for structured output via learning patch-level discriminative feature representations"
] |
[
"Many machine learning algorithms are vulnerable to almost imperceptible perturbations of their inputs.",
"So far it was unclear how much risk adversarial perturbations carry for the safety of real-world machine learning applications because most methods used to generate such perturbations rely either on detailed model information (gradient-based attacks) or on confidence scores such as class probabilities (score-based attacks), neither of which are available in most real-world scenarios.",
"In many such cases one currently needs to retreat to transfer-based attacks which rely on cumbersome substitute models, need access to the training data and can be defended against.",
"Here we emphasise the importance of attacks which solely rely on the final model decision.",
"Such decision-based attacks are (1) applicable to real-world black-box models such as autonomous cars, (2) need less knowledge and are easier to apply than transfer-based attacks and (3) are more robust to simple defences than gradient- or score-based attacks.",
"Previous attacks in this category were limited to simple models or simple datasets.",
"Here we introduce the Boundary Attack, a decision-based attack that starts from a large adversarial perturbation and then seeks to reduce the perturbation while staying adversarial.",
"The attack is conceptually simple, requires close to no hyperparameter tuning, does not rely on substitute models and is competitive with the best gradient-based attacks in standard computer vision tasks like ImageNet.",
"We apply the attack on two black-box algorithms from Clarifai.com.",
"The Boundary Attack in particular and the class of decision-based attacks in general open new avenues to study the robustness of machine learning models and raise new questions regarding the safety of deployed machine learning systems.",
"An implementation of the attack is available as part of Foolbox (https://github.com/bethgelab/foolbox).",
"Many high-performance machine learning algorithms used in computer vision, speech recognition and other areas are susceptible to minimal changes of their inputs BID26 .",
"As a concrete example, a modern deep neural network like VGG-19 trained on object recognition might perfectly recognize the main object in an image as a tiger cat, but if the pixel values are only slightly perturbed in a specific way then the prediction of the very same network is drastically altered (e.g. to bus).",
"These so-called adversarial perturbations are ubiquitous in many machine learning models and are often imperceptible to humans.",
"Algorithms that seek to find such adversarial perturbations are generally denoted as adversarial attacks.Adversarial perturbations have drawn interest from two different sides.",
"On the one side, they are worrisome for the integrity and security of deployed machine learning algorithms such as autonomous cars or face recognition systems.",
"Minimal perturbations on street signs (e.g. turning a stop-sign into a 200 km/h speed limit) or street lights (e.g. turning a red into a green light) can have severe consequences.",
"On the other hand, adversarial perturbations provide an exciting spotlight on the gap between the sensory information processing in humans and machines and thus provide guidance towards more robust, human-like architectures.Adversarial attacks can be roughly divided into three categories: gradient-based, score-based and transfer-based attacks (cp. Figure 1 ).",
"Gradient-based and score-based attacks are often denoted as white-box and oracle attacks respectively, but we try to be as explicit as possible as to what information is being used in each category 1 .",
"A severe problem affecting attacks in all of these categories is that they are surprisingly straight-forward to defend against:• Gradient-based attacks.",
"Most existing attacks rely on detailed model information including the gradient of the loss w.r.t. the input.",
"Examples are the Fast-Gradient Sign Method (FGSM), the Basic Iterative Method (BIM) BID11 , DeepFool (MoosaviDezfooli et al., 2015) , the Jacobian-based Saliency Map Attack (JSMA) BID20 , Houdini BID5 and the Carlini & Wagner attack BID2 .",
"Defence: A simple way to defend against gradient-based attacks is to mask the gradients, for example by adding non-differentiable elements either implicitly through means like defensive distillation BID21 or saturated non-linearities BID18 , or explicitly through means like non-differentiable classifiers BID15 ).•",
"Score-based attacks. A",
"few attacks are more agnostic and only rely on the predicted scores (e.g. class probabilities or logits) of the model. On",
"a conceptual level these attacks use the predictions to numerically estimate the gradient. This",
"includes black-box variants of JSMA BID17 and of the Carlini & Wagner attack BID4 as well as generator networks that predict adversarials BID8 . Defence",
": It is straight-forward to severely impede the numerical gradient estimate by adding stochastic elements like dropout into the model. Also,",
"many robust training methods introduce a sharp-edged plateau around samples BID28 which not only masks gradients themselves but also their numerical estimate.• Transfer-based",
"attacks. Transfer-based attacks",
"do not rely on model information but need information about the training data. This data is used to train",
"a fully observable substitute model from which adversarial perturbations can be synthesized BID22 . They rely on the empirical",
"observation that adversarial examples often transfer between models. If adversarial examples are",
"created on an ensemble of substitute models the success rate on the attacked model can reach up to 100% in certain scenarios BID13 . Defence: A recent defence method",
"against transfer attacks BID28 , which is based on robust training on a dataset augmented by adversarial examples from an ensemble of substitute models, has proven highly successful against basically all attacks in the 2017 Kaggle Competition on Adversarial Attacks 2 .The fact that many attacks can be",
"easily averted makes it often extremely difficult to assess whether a model is truly robust or whether the attacks are just too weak, which has lead to premature claims of robustness for DNNs BID3 .This motivates us to focus on a category",
"of adversarial attacks that has so far received fairly little attention:• Decision-based attacks. Direct attacks that solely rely on the final",
"decision of the model (such as the top-1 class label or the transcribed sentence).The delineation of this category is justified",
"for the following reasons: First, compared to score-based attacks decision-based attacks are much more relevant in real-world machine learning applications where confidence scores or logits are rarely accessible. At the same time decision-based attacks have",
"the potential to be much more robust to standard defences like gradient masking, intrinsic stochasticity or robust training than attacks from the other categories. Finally, compared to transferbased attacks they",
"need much less information about the model (neither architecture nor training data) and are much simpler to apply.There currently exists no effective decision-based attack that scales to natural datasets such as ImageNet and is applicable to deep neural networks (DNNs). The most relevant prior work is a variant of transfer",
"attacks in which the training set needed to learn the substitute model is replaced by a synthetic dataset (Papernot et al., 2017b) . This synthetic dataset is generated by the adversary",
"alongside the training of the substitute; the labels for each synthetic sample are drawn from the black-box model. While this approach works well on datasets for which",
"the intra-class variability is low (such as MNIST) it has yet to be shown that it scales to more complex natural datasets such as CIFAR or ImageNet. Other decision-based attacks are specific to linear",
"or convex-inducing classifiers BID6 BID14 BID19 and are not applicable to other machine learning models. The work by BID0 basically stands between transfer",
"attacks and decision-based attacks in that the substitute model is trained on a dataset for which the labels have been observed from the black-box model. This attack still requires knowledge about the data",
"distribution on which the black-box models was trained on and so we don't consider it a pure decision-based attack. Finally, some naive attacks such as a line-search along",
"a random direction away from the original sample can qualify as decision-based attacks but they induce large and very visible perturbations that are orders of magnitude larger than typical gradient-based, score-based or transfer-based attacks.Throughout the paper we focus on the threat scenario in which the adversary aims to change the decision of a model (either targeted or untargeted) for a particular input sample by inducing a minimal perturbation to the sample. The adversary can observe the final decision of the model",
"for arbitrary inputs and it knows at least one perturbation, however large, for which the perturbed sample is adversarial.The contributions of this paper are as follows:• We emphasise decision-based attacks as an important category of adversarial attacks that are highly relevant for real-world applications and important to gauge model robustness.• We introduce the first effective decision-based attack that",
"scales to complex machine learning models and natural datasets. The Boundary Attack is (1) conceptually surprisingly simple,",
"(2) extremely flexible, (3) requires little hyperparameter tuning and FORMULA6 is competitive with the best gradient-based attacks in both targeted and untargeted computer vision scenarios.• We show that the Boundary Attack is able to break previously",
"suggested defence mechanisms like defensive distillation.• We demonstrate the practical applicability of the Boundary Attack",
"on two black-box machine learning models for brand and celebrity recognition available on Clarifai.com.",
"In this paper we emphasised the importance of a mostly neglected category of adversarial attacksdecision-based attacks-that can find adversarial examples in models for which only the final decision can be observed.",
"We argue that this category is important for three reasons: first, attacks in this class are highly relevant for many real-world deployed machine learning systems like autonomous cars for which the internal decision making process is unobservable.",
"Second, attacks in this class do not rely on substitute models that are trained on similar data as the model to be attacked, thus making real-world applications much more straight-forward.",
"Third, attacks in this class have the potential to be much more robust against common deceptions like gradient masking, intrinsic stochasticity or robust training.We also introduced the first effective attack in this category that is applicable to general machine learning algorithms and complex natural datasets: the Boundary Attack.",
"At its core the Boundary Attack follows the decision boundary between adversarial and non-adversarial samples using a very simple rejection sampling algorithm in conjunction with a simple proposal distribution and a dynamic step-size adjustment inspired by Trust Region methods.",
"Its basic operating principlestarting from a large perturbation and successively reducing it-inverts the logic of essentially all previous adversarial attacks.",
"Besides being surprisingly simple, the Boundary attack is also extremely flexible in terms of the possible adversarial criteria and performs on par with gradient-based attacks on standard computer vision tasks in terms of the size of minimal perturbations.The mere fact that a simple constrained iid Gaussian distribution can serve as an effective proposal perturbation for each step of the Boundary attack is surprising and sheds light on the brittle information processing of current computer vision architectures.",
"Nonetheless, there are many ways in which the Boundary attack can be made even more effective, in particular by learning a suitable proposal distribution for a given model or by conditioning the proposal distribution on the recent history of successful and unsuccessful proposals.Decision-based attacks will be highly relevant to assess the robustness of machine learning models and to highlight the security risks of closed-source machine learning systems like autonomous cars.",
"We hope that the Boundary attack will inspire future work in this area."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.14814814925193787,
0.12903225421905518,
0.04878048226237297,
0,
0.13333332538604736,
0.07692307233810425,
0.1666666567325592,
0.08888888359069824,
0.1599999964237213,
0.1463414579629898,
0.07692307233810425,
0.10810810327529907,
0,
0.2666666507720947,
0.11428570747375488,
0.10526315122842789,
0.052631575614213943,
0.07017543166875839,
0,
0.11764705181121826,
0,
0.04444444179534912,
0.039215683937072754,
0.11764705926179886,
0,
0,
0.1666666567325592,
0,
0,
0,
0,
0.1249999925494194,
0.3333333432674408,
0.14999999105930328,
0.14035087823867798,
0,
0.1249999925494194,
0,
0.13636362552642822,
0,
0.09999999403953552,
0,
0.05405404791235924,
0.045454539358615875,
0.21621620655059814,
0.1428571343421936,
0.14999999105930328,
0.053333330899477005,
0.13114753365516663,
0.19354838132858276,
0.0416666641831398,
0,
0.2857142686843872,
0.1463414579629898,
0.1702127605676651,
0.1395348757505417,
0.14035087823867798,
0.0416666641831398,
0.05882352590560913,
0.10958904027938843,
0.14705881476402283,
0.14814814925193787
] | SyZI0GWCZ | true | [
"A novel adversarial attack that can directly attack real-world black-box machine learning models without transfer."
] |
[
"We demonstrate that it is possible to train large recurrent language models with user-level differential privacy guarantees with only a negligible cost in predictive accuracy. ",
"Our work builds on recent advances in the training of deep networks on user-partitioned data and privacy accounting for stochastic gradient descent.",
"In particular, we add user-level privacy protection to the federated averaging algorithm, which makes large step updates from user-level data.",
"Our work demonstrates that given a dataset with a sufficiently large number of users (a requirement easily met by even small internet-scale datasets), achieving differential privacy comes at the cost of increased computation, rather than in decreased utility as in most prior work.",
"We find that our private LSTM language models are quantitatively and qualitatively similar to un-noised models when trained on a large dataset.",
"Deep recurrent models like long short-term memory (LSTM) recurrent neural networks (RNNs) have become a standard building block in modern approaches to language modeling, with applications in speech recognition, input decoding for mobile keyboards, and language translation.",
"Because language usage varies widely by problem domain and dataset, training a language model on data from the right distribution is critical.",
"For example, a model to aid typing on a mobile keyboard is better served by training data typed in mobile apps rather than from scanned books or transcribed utterances.",
"However, language data can be uniquely privacy sensitive.",
"In the case of text typed on a mobile phone, this sensitive information might include passwords, text messages, and search queries.",
"In general, language data may identify a speaker-explicitly by name or implicitly, for example via a rare or unique phrase-and link that speaker to secret or sensitive information.Ideally, a language model's parameters would encode patterns of language use common to many users without memorizing any individual user's unique input sequences.",
"However, we know convolutional NNs can memorize arbitrary labelings of the training data BID22 and recurrent language models are also capable of memorizing unique patterns in the training data BID5 .",
"Recent attacks on neural networks such as those of BID19 underscore the implicit risk.",
"The main goal of our work is to provide a strong guarantee that the trained model protects the privacy of individuals' data without undue sacrifice in model quality.We are motivated by the problem of training models for next-word prediction in a mobile keyboard, and use this as a running example.",
"This problem is well suited to the techniques we introduce, as differential privacy may allow for training on data from the true distribution (actual mobile usage) rather than on proxy data from some other source that would produce inferior models.",
"However, to facilitate reproducibility and comparison to non-private models, our experiments are conducted on a public dataset as is standard in differential privacy research.",
"The remainder of this paper is structured around the following contributions:1.",
"We apply differential privacy to model training using the notion of user-adjacent datasets, leading to formal guarantees of user-level privacy, rather than privacy for single examples.4.",
"In extensive experiments in §3, we offer guidelines for parameter tuning when training complex models with differential privacy guarantees.",
"We show that a small number of experiments can narrow the parameter space into a regime where we pay for privacy not in terms of a loss in utility but in terms of an increased computational cost.We now introduce a few preliminaries.",
"Differential privacy (DP) BID10 BID8 BID9 ) provides a well-tested formalization for the release of information derived from private data.",
"Applied to machine learning, a differentially private training mechanism allows the public release of model parameters with a strong guarantee: adversaries are severely limited in what they can learn about the original training data based on analyzing the parameters, even when they have access to arbitrary side information.",
"Formally, it says: Definition",
"1. Differential Privacy: A randomized mechanism M : D → R with a domain D (e.g., possible training datasets) and range R (e.g., all possible trained models) satisfies ( , δ)-differential privacy if for any two adjacent datasets d, d ∈ D and for any subset of outputs S ⊆ R it holds that DISPLAYFORM0 The definition above leaves open the definition of adjacent datasets which will depend on the application.",
"Most prior work on differentially private machine learning (e.g. BID7 BID4 ; BID0 BID21 BID16 ) deals with example-level privacy: two datasets d and d are defined to be adjacent if d can be formed by adding or removing a single training example from d.",
"We remark that while the recent PATE approach of BID16 can be adapted to give user-level privacy, it is not suited for a language model where the number of classes (possible output words) is large.For problems like language modeling, protecting individual examples is insufficient-each typed word makes its own contribution to the RNN's training objective, so one user may contribute many thousands of examples to the training data.",
"A sensitive word or phrase may be typed several times by an individual user, but it should still be protected.2",
"In this work, we therefore apply the definition of differential privacy to protect whole user histories in the training set.",
"This user-level privacy is ensured by using an appropriate adjacency relation:Definition",
"2. User-adjacent datasets: Let d and d be two datasets of training examples, where each example is associated with a user.",
"Then, d and d are adjacent if d can be formed by adding or removing all of the examples associated with a single user from d.Model training that satisfies differential privacy with respect to datasets that are user-adjacent satisfies the intuitive notion of privacy we aim to protect for language modeling: the presence or absence of any specific user's data in the training set has an imperceptible impact on the (distribution over) the parameters of the learned model.",
"It follows that an adversary looking at the trained model cannot infer whether any specific user's data was used in the training, irrespective of what auxiliary information they may have.",
"In particular, differential privacy rules out memorization of sensitive information in a strong information theoretic sense.",
"In this work, we introduced an algorithm for user-level differentially private training of large neural networks, in particular a complex sequence model for next-word prediction.",
"We empirically evaluated the algorithm on a realistic dataset and demonstrated that such training is possible at a negligible loss in utility, instead paying a cost in additional computation.",
"Such private training, combined with federated learning (which leaves the sensitive training data on device rather than centralizing it), shows the possibility of training models with significant privacy guarantees for important real world applications.",
"Much future work remains, for example designing private algorithms that automate and make adaptive the tuning of the clipping/noise tradeoff, and the application to a wider range of model families and architectures, for example GRUs and character-level models.",
"Our work also highlights the open direction of reducing the computational overhead of differentially private training of non-convex models."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.4878048598766327,
0.10810810327529907,
0.11428570747375488,
0.2545454502105713,
0.2702702581882477,
0.2800000011920929,
0.1621621549129486,
0.09302324801683426,
0.1666666567325592,
0.0555555522441864,
0.10169491171836853,
0.1428571343421936,
0.06666666269302368,
0.16949151456356049,
0.19230768084526062,
0.25641024112701416,
0.07407406717538834,
0.14999999105930328,
0.2857142686843872,
0.11999999731779099,
0.1666666567325592,
0.06896550953388214,
0,
0.13513512909412384,
0.06896550953388214,
0.13698630034923553,
0,
0.11428570747375488,
0.14814814925193787,
0.1666666567325592,
0.1599999964237213,
0,
0.19354838132858276,
0.19999998807907104,
0.1904761791229248,
0.1702127605676651,
0.1304347813129425,
0.0624999962747097
] | BJ0hF1Z0b | true | [
"User-level differential privacy for recurrent neural network language models is possible with a sufficiently large dataset."
] |
[
"Convolutional neural networks (CNNs) are commonly trained using a fixed spatial image size predetermined for a given model.",
"Although trained on images of a specific size, it is well established that CNNs can be used to evaluate a wide range of image sizes at test time, by adjusting the size of intermediate feature maps. \n",
"In this work, we describe and evaluate a novel mixed-size training regime that mixes several image sizes at training time.",
"We demonstrate that models trained using our method are more resilient to image size changes and generalize well even on small images.",
"This allows faster inference by using smaller images at test time.",
"For instance, we receive a 76.43% top-1 accuracy using ResNet50 with an image size of 160, which matches the accuracy of the baseline model with 2x fewer computations.\n",
"Furthermore, for a given image size used at test time, we show this method can be exploited either to accelerate training or the final test accuracy.",
"For example, we are able to reach a 79.27% accuracy with a model evaluated at a 288 spatial size for a relative improvement of 14% over the baseline.",
"Figure 1: Test accuracy per image size, models trained on specific sizes (ResNet50, ImageNet).",
"Convolutional neural networks are successfully used to solve various tasks across multiple domains such as visual (Krizhevsky et al., 2012; Ren et al., 2015) , audio (van den Oord et al., 2016) , language (Gehring et al., 2017) and speech (Abdel-Hamid et al., 2014) .",
"While scale-invariance is considered important for visual representations (Lowe, 1999) , convolutional networks are not scale invariant with respect to the spatial resolution of the image input, as a change in image dimension may lead to a non-linear change of their output.",
"Even though CNNs are able to achieve state-of-theart results in many tasks and domains, their sensitivity to the image size is an inherent deficiency that limits practical use cases and requires evaluation inputs to match training image size.",
"For example, Touvron et al. (2019) demonstrated that networks trained on specific image size, perform poorly on other image sizes at evaluation, as shown in Figure 1 .",
"Several works attempted to achieve scale invariance by modifying the network structure (Xu et al., 2014; Takahashi et al., 2017) .",
"However, the most common method is to artificially enlarge the dataset using a set of label-preserving transformations also known as \"data augmentation\" (Krizhevsky et al., 2012; Howard, 2013) .",
"Several of these transformations scale and crop objects appearing within the data, thus increasing the network's robustness to inputs of different scale.",
"Although not explicitly trained to handle varying image sizes, CNNs are commonly evaluated on multiple scales post training, such as in the case of detection (Lin et al., 2017; Redmon & Farhadi, 2018) and segmentation (He et al., 2017) tasks.",
"In these tasks, a network that was pretrained with fixed image size for classification is used as the backbone of a larger model that is expected to adapt to a wide variety of image sizes.",
"In this work, we will introduce a novel training regime, \"MixSize\" for convolutional networks that uses stochastic image and batch sizes.",
"The main contributions of the MixSize regime are:",
"• Reducing image size sensitivity.",
"We show that the MixSize training regime can improve model performance on a wide range of sizes used at evaluation.",
"• Faster inference.",
"As our mixed-size models can be evaluated at smaller image sizes, we show up to 2× reduction in computations required at inference to reach the same accuracy as the baseline model.",
"• Faster training vs. high accuracy.",
"We show that reducing the average image size at training leads to a trade-off between the time required to train the model and its final accuracy.",
"2 RELATED WORK"
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
0.12903225421905518,
0.2083333283662796,
0.1818181723356247,
0.1111111044883728,
0.07999999821186066,
0.14999999105930328,
0.20512820780277252,
0.14999999105930328,
0.1428571343421936,
0.08163265138864517,
0.07999999821186066,
0.1702127605676651,
0.1538461446762085,
0,
0,
0,
0.07547169178724289,
0.1904761791229248,
0.11428570747375488,
0,
0.21052631735801697,
0.29411762952804565,
0,
0.1428571343421936,
0,
0.1621621549129486,
0
] | HylUPnVKvH | true | [
"Training convnets with mixed image size can improve results across multiple sizes at evaluation"
] |
[
"We propose a simple technique for encouraging generative RNNs to plan ahead.",
"We train a ``backward'' recurrent network to generate a given sequence in reverse order, and we encourage states of the forward model to predict cotemporal states of the backward model.",
"The backward network is used only during training, and plays no role during sampling or inference.",
"We hypothesize that our approach eases modeling of long-term dependencies by implicitly forcing the forward states to hold information about the longer-term future (as contained in the backward states).",
"We show empirically that our approach achieves 9% relative improvement for a speech recognition task, and achieves significant improvement on a COCO caption generation task.",
"Recurrent Neural Networks (RNNs) are the basis of state-of-art models for generating sequential data such as text and speech.",
"RNNs are trained to generate sequences by predicting one output at a time given all previous ones, and excel at the task through their capacity to remember past information well beyond classical n-gram models BID6 BID27 .",
"More recently, RNNs have also found success when applied to conditional generation tasks such as speech-to-text BID9 , image captioning BID61 and machine translation .RNNs",
"are usually trained by teacher forcing: at each point in a given sequence, the RNN is optimized to predict the next token given all preceding tokens. This",
"corresponds to optimizing one-stepahead prediction. As there",
"is no explicit bias toward planning in the training objective, the model may prefer to focus on the most recent tokens instead of capturing subtle long-term dependencies that could contribute to global coherence. Local correlations",
"are usually stronger than long-term dependencies and thus end up dominating the learning signal. The consequence is",
"that samples from RNNs tend to exhibit local coherence but lack meaningful global structure. This difficulty in",
"capturing long-term dependencies has been noted and discussed in several seminal works (Hochreiter, 1991; BID6 BID27 BID45 .Recent efforts to address",
"this problem have involved augmenting RNNs with external memory BID14 BID18 BID22 , with unitary or hierarchical architectures BID0 BID51 , or with explicit planning mechanisms BID23 . Parallel efforts aim to prevent",
"overfitting on strong local correlations by regularizing the states of the network, by applying dropout or penalizing various statistics BID41 BID64 BID15 BID32 BID39 . Figure 1: The forward and the backward",
"networks predict the sequence s = {x 1 , ..., x 4 } independently. The penalty matches the forward (or a",
"parametric function of the forward) and the backward hidden states. The forward network receives the gradient",
"signal from the log-likelihood objective as well as L t between states that predict the same token. The backward network is trained only by maximizing",
"the data log-likelihood. During the evaluation part of the network colored",
"with orange is discarded. The cost L t is either a Euclidean distance or a",
"learned metric ||g(h DISPLAYFORM0 with an affine transformation g. Best viewed in color.In this paper, we propose TwinNet",
", 1 a simple method for regularizing a recurrent neural network that encourages modeling those aspects of the past that are predictive of the long-term future. Succinctly, this is achieved as follows: in parallel",
"to the standard forward RNN, we run a \"twin\" backward RNN (with no parameter sharing) that predicts the sequence in reverse, and we encourage the hidden state of the forward network to be close to that of the backward network used to predict the same token. Intuitively, this forces the forward network to focus",
"on the past information that is useful to predicting a specific token and that is also present in and useful to the backward network, coming from the future (Fig. 1) .In practice, our model introduces a regularization term",
"to the training loss. This is distinct from other regularization methods that",
"act on the hidden states either by injecting noise BID32 or by penalizing their norm BID31 BID39 , because we formulate explicit auxiliary targets for the forward hidden states: namely, the backward hidden states. The activation regularizer (AR) proposed by BID39 , which",
"penalizes the norm of the hidden states, is equivalent to the TwinNet approach with the backward states set to zero. Overall, our model is driven by the intuition (a) that the",
"backward hidden states contain a summary of the",
"future of the sequence, and (b) that in order to predict the future more accurately, the",
"model will have to form a better representation of the past. We demonstrate the effectiveness of the TwinNet approach experimentally",
", through several conditional and unconditional generation tasks that include speech recognition, image captioning, language modelling, and sequential image generation. To summarize, the contributions of this work are as follows:• We introduce",
"a simple method for training generative recurrent networks that regularizes the hidden states of the network to anticipate future states (see Section 2);• The paper provides extensive evaluation of the proposed model on multiple tasks and concludes that it helps training and regularization for conditioned generation (speech recognition, image captioning) and for the unconditioned case (sequential MNIST, language modelling, see Section 4);• For deeper analysis we visualize the introduced cost and observe that it negatively correlates with the word frequency (more surprising words have higher cost).",
"In this paper, we presented a simple recurrent neural network model that has two separate networks running in opposite directions during training.",
"Our model is motivated by the fact that states of the forward model should be predictive of the entire future sequence.",
"This may be hard to obtain by optimizing one-step ahead predictions.",
"The backward path is discarded during the sampling and evaluation process, which makes the sampling process efficient.",
"Empirical results show that the proposed method performs well on conditional generation for several tasks.",
"The analysis reveals an interpretable behaviour of the proposed loss.One of the shortcomings of the proposed approach is that the training process doubles the computation needed for the baseline (due to the backward network training).",
"However, since the backward network is discarded during sampling, the sampling or inference process has the exact same computation steps as the baseline.",
"This makes our approach applicable to models that requires expensive sampling steps, such as PixelRNNs BID44 and WaveNet (Oord et al., 2016a) .",
"One of future work directions is to test whether it could help in conditional speech synthesis using WaveNet.We observed that the proposed approach yield minor improvements when applied to language modelling with PennTree bank.",
"We hypothesize that this may be linked to the amount of entropy of the target distribution.",
"In these high-entropy cases, at any time-step in the sequence, the distribution of backward states may be highly multi-modal (many possible futures may be equally likely for the same past).",
"One way of overcoming this problem would be to replace the proposed L2 loss (which implicitly assumes a unimodal distribution of the backward states) by a more expressive loss obtained by either employing an inference network BID30 or distribution matching techniques BID17 .",
"We leave that for future investigation."
] | [
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.2790697515010834,
0.4363636374473572,
0.1304347813129425,
0.27586206793785095,
0.15094339847564697,
0.07999999821186066,
0.0923076868057251,
0.072727270424366,
0.1428571343421936,
0.052631575614213943,
0.158730149269104,
0.0833333283662796,
0.1249999925494194,
0.11538460850715637,
0.03389830142259598,
0.20338982343673706,
0.1599999964237213,
0.2666666507720947,
0.18518517911434174,
0.04999999701976776,
0.09090908616781235,
0.03999999538064003,
0.19672130048274994,
0.2985074520111084,
0.2295081913471222,
0.1395348757505417,
0.1230769157409668,
0.1818181723356247,
0.20512820780277252,
0.22727271914482117,
0.1666666567325592,
0.13333332538604736,
0.2800000011920929,
0.22641508281230927,
0.1666666567325592,
0.0952380895614624,
0.1304347813129425,
0.08695651590824127,
0.20689654350280762,
0.039215680211782455,
0.1111111044883728,
0.1538461446762085,
0.17777776718139648,
0.14035087823867798,
0.11940298229455948,
0.10810810327529907
] | BydLzGb0Z | true | [
"The paper introduces a method of training generative recurrent networks that helps to plan ahead. We run a second RNN in a reverse direction and make a soft constraint between cotemporal forward and backward states."
] |
[
"Deep generative models seek to recover the process with which the observed data was generated.",
"They may be used to synthesize new samples or to subsequently extract representations.",
"Successful approaches in the domain of images are driven by several core inductive biases.",
"However, a bias to account for the compositional way in which humans structure a visual scene in terms of objects has frequently been overlooked.",
"In this work we propose to structure the generator of a GAN to consider objects and their relations explicitly, and generate images by means of composition.",
"This provides a way to efficiently learn a more accurate generative model of real-world images, and serves as an initial step towards learning corresponding object representations.",
"We evaluate our approach on several multi-object image datasets, and find that the generator learns to identify and disentangle information corresponding to different objects at a representational level.",
"A human study reveals that the resulting generative model is better at generating images that are more faithful to the reference distribution.",
"Generative modelling approaches to representation learning seek to recover the process with which the observed data was generated.",
"It is postulated that knowledge about the generative process exposes important factors of variation in the environment (captured in terms of latent variables) that may subsequently be obtained using an appropriate posterior inference procedure.",
"Therefore, the structure of the generative model is critical in learning corresponding representations.Deep generative models of images rely on the expressiveness of neural networks to learn the generative process directly from data BID11 BID24 BID38 .",
"Their structure is determined by the inductive bias of the neural network, which steers it to organize its computation in a way that allows salient features to be recovered and ultimately captured in a representation BID6 BID7 BID24 .",
"Recently, it has been shown that independent factors of variation, such as pose and lighting of human faces may be recovered in this way BID5 .A",
"promising but under-explored inductive bias in deep generative models of images is compositionality at the representational level of objects, which accounts for the compositional nature of the visual world and our perception thereof BID3 BID37 . It",
"allows a generative model to describe a scene as a composition of objects (entities), thereby disentangling visual information in the scene that can be processed largely independent of one another. It",
"provides a means to efficiently learn a more accurate generative model of real-world images, and by explicitly Figure 1: A scene (right) is generated as a composition of objects and background. considering",
"objects at a representational level, it serves as an important first step in recovering corresponding object representations.In this work we investigate object compositionality for Generative Adversarial Networks (GANs; BID11 ), and present a general mechanism that allows one to incorporate corresponding structure in the generator. Starting from",
"strong independence assumptions about the objects in images, we propose two extensions that provide a means to incorporate dependencies among objects and background. In order to efficiently",
"represent and process multiple objects with neural networks, we must account for the binding problem that arises when superimposing multiple distributed representations BID18 . Following prior work, we",
"consider different representational slots for each object BID13 BID34 , and a relational mechanism that preserves this separation accordingly .We evaluate our approach",
"1 on several multi-object image datasets, including three variations of Multi-MNIST, a multi-object variation of CIFAR10, and CLEVR. In particular the latter",
"two mark a significant improvement in terms of complexity, compared to datasets that have been considered in prior work on unconditional multi-object image generation and multi-object representation learning.In our experiments we find that our generative model learns about the individual objects and the background of a scene, without prior access to this information. By disentangling this information",
"at a representational level, it generates novel scenes efficiently through composing individual objects and background, as can be seen in Figure 1 . As a quantitative experiment we compare",
"to a strong baseline of popular GANs (Wasserstein and Non-saturating) with recent state-of-the-art techniques (Spectral Normalization, Gradient Penalty) optimized over multiple runs. A human study reveals that the proposed",
"generative model outperforms this baseline in generating better images that are more faithful to the reference distribution.",
"The experimental results confirm that the proposed structure is beneficial in generating images of multiple objects, and is utilized according to our own intuitions.",
"In order to benefit maximally from this structure it is desirable to be able to accurately estimate the (minimum) number of objects in the environment in advance.",
"This task is ill-posed as it relies on a precise definition of \"object\" that is generally not available.",
"In our experiments on CLEVR we encounter a similar situation in which the number of components does not suffice the potentially large number of objects in the environment.Here we find that it does not render the proposed structure useless, but instead each component considers \"primitives\" that correspond to multiple objects.One concern is in being able to accurately determine foreground, and background when combining the outputs of the object generators using alpha compositing.",
"On CLEVR we observe cases in which objects appear to be flying, which is the result of being unable to route the information content of a \"foreground\" object to the corresponding \"foreground\" generator as induced by the fixed order in which images are composed.",
"Although in principle the relational mechanism may account for this distinction, a more explicit mechanism may be preferred BID31 .We",
"found that the pre-trained Inception embedding is not conclusive in reasoning about the validity of multi-object datasets. Similarly",
", the discriminator may have difficulties in accurately judging images from real / fake without additional structure. Ideally",
"we would have a discriminator evaluate the correctness of each object individually, as well as the image as a whole. The use",
"of a patch discriminator BID20 , together with the alpha channel of each object generator to provide a segmentation, may serve a starting point in pursuing this direction.",
"We have argued for the importance of compositionality at the representational level of objects in deep generative models of images, and demonstrated how corresponding structure may be incorporated in the generator of a GAN.",
"On a benchmark of multi-object datasets we have shown that the proposed generative model learns about individual objects and background in the process of synthesizing samples.",
"A human study revealed that this leads to a better generative model of images.",
"We are hopeful that in disentangling information corresponding to different objects at a representational level these may ultimately be recovered.",
"Hence, we believe that this work is an important contribution towards learning object representations of complex real-world images without any supervision.A EXPERIMENT RESULTS The generator and discriminator neural network architectures in all our experiments are based on DCGAN BID35 ."
] | [
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.11764705181121826,
0.0624999962747097,
0.23529411852359772,
0.2857142686843872,
0.8837209343910217,
0.17777776718139648,
0.30434781312942505,
0.14999999105930328,
0.1111111044883728,
0.07999999821186066,
0.20408162474632263,
0.25925925374031067,
0.08888888359069824,
0.1538461446762085,
0.25531914830207825,
0.3333333432674408,
0.21875,
0.31111109256744385,
0.13333332538604736,
0.1860465109348297,
0.19999998807907104,
0.1818181723356247,
0.12765957415103912,
0.20408162474632263,
0.1621621549129486,
0.2790697515010834,
0.23255813121795654,
0.10810810327529907,
0.1794871687889099,
0.29629629850387573,
0.15789473056793213,
0.10810810327529907,
0.15789473056793213,
0.15789473056793213,
0.2222222238779068,
0.375,
0.22727271914482117,
0.23529411852359772,
0.19999998807907104,
0.13333332538604736
] | BJgEjiRqYX | true | [
"We propose to structure the generator of a GAN to consider objects and their relations explicitly, and generate images by means of composition"
] |
[
"Current literature in machine learning holds that unaligned, self-interested agents do not learn to use an emergent communication channel.",
"We introduce a new sender-receiver game to study emergent communication for this spectrum of partially-competitive scenarios and put special care into evaluation.",
"We find that communication can indeed emerge in partially-competitive scenarios, and we discover three things that are tied to improving it.",
"First, that selfish communication is proportional to cooperation, and it naturally occurs for situations that are more cooperative than competitive.",
"Second, that stability and performance are improved by using LOLA (Foerster et al, 2018), especially in more competitive scenarios.",
"And third, that discrete protocols lend themselves better to learning cooperative communication than continuous ones.",
"First and foremost, we show evidence against the current notion that selfish agents do not learn to communicate, and we hope our findings encourage more research into communication under The comparison between discrete and continuous communication for both the REINFORCE-deterministic setup as well as 1-step LOLA agents is shown in Figure 4a .",
"We see that though overall continuous communication can achieve highest information transfer, the gains in performance seem to mostly from manipulation of the sender by the receiver.",
"Two examples are shown for REINFORCE agents in Figures 4b,4c .",
"To find a trend, we plot all 100 hyperparameter runs for b ∈ [3, 6, 9, 12] between continuous and discrete communication using 1-step LOLA agents in Figures 4d,4e ,4f,4g.",
"We find that manipulation is the common result in continuous communication though individual cooperative points can sometimes be found.",
"In general, continuous communication does not lend itself to cooperative communication competition.",
"We have shown three important properties of communication.",
"First, a game being more cooperative than competitive is sufficient to naturally emerge communication.",
"Second, we've clarified the distinction between information transfer, communication, and manipulation, providing motivation for a better quantitative metric to measure emergent communication in competitive environments.",
"Next, we've found that LOLA improves effective selfish communication and, using our metric, we find it does so by improving both agents' performance and stability.",
"Finally, we've shown that using a discrete communication channel encourages the learning of cooperative commu-nication in contrast to the continuous communication channel setting, where we find little evidence of cooperation.",
"In fully-cooperative emergent communication, both agents fully trust each other, so cooperatively learning a protocol is mutually beneficial.",
"In competitive MARL, the task is using an existing protocol (or action space) to compete with each other.",
"However, selfish emergent communication combines these two since the inherent competitiveness of using the protocol to win is tempered by the inherent cooperativeness of learning it; without somewhat agreeing to meanings, agents cannot use those meanings to compete (Searcy & Nowicki, 2005; Skyrms & Barrett, 2018) .",
"Thus, the agents must both learn a protocol and use that protocol simultaneously.",
"In this way, even while competing, selfish agents emerging a communication protocol must learn to cooperate."
] | [
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1818181723356247,
0.1666666567325592,
0.29411762952804565,
0.1818181723356247,
0.060606054961681366,
0.13793103396892548,
0.20338982343673706,
0.25641024112701416,
0.0833333283662796,
0.09090908616781235,
0.24242423474788666,
0.1599999964237213,
0.1818181723356247,
0.2142857164144516,
0.20512820780277252,
0.10256409645080566,
0.19999998807907104,
0,
0.1875,
0.15094339847564697,
0.07692307233810425,
0.19999998807907104
] | B1liIlBKvS | true | [
"We manage to emerge communication with selfish agents, contrary to the current view in ML"
] |
[
"Deep neural networks (DNNs) typically have enough capacity to fit random data by brute force even when conventional data-dependent regularizations focusing on the geometry of the features are imposed.",
"We find out that the reason for this is the inconsistency between the enforced geometry and the standard softmax cross entropy loss.",
"To resolve this, we propose a new framework for data-dependent DNN regularization, the Geometrically-Regularized-Self-Validating neural Networks (GRSVNet).",
"During training, the geometry enforced on one batch of features is simultaneously validated on a separate batch using a validation loss consistent with the geometry.",
"We study a particular case of GRSVNet, the Orthogonal-Low-rank Embedding (OLE)-GRSVNet, which is capable of producing highly discriminative features residing in orthogonal low-rank subspaces.",
"Numerical experiments show that OLE-GRSVNet outperforms DNNs with conventional regularization when trained on real data.",
"More importantly, unlike conventional DNNs, OLE-GRSVNet refuses to memorize random data or random labels, suggesting it only learns intrinsic patterns by reducing the memorizing capacity of the baseline DNN.",
"It remains an open question why DNNs, typically with far more model parameters than training samples, can achieve such small generalization error.",
"Previous work used various complexity measures from statistical learning theory, such as VC dimension (Vapnik, 1998) , Radamacher complexity BID1 , and uniform stability BID2 BID10 , to provide an upper bound for the generalization error, suggesting that the effective capacity of DNNs, possibly with some regularization techniques, is usually limited.",
"However, the experiments by Zhang et al. (2017) showed that, even with data-independent regularization, DNNs can perfectly fit the training data when the true labels are replaced by random labels, or when the training data are replaced by Gaussian noise.",
"This suggests that DNNs with data-independent regularization have enough capacity to \"memorize\" the training data.",
"This poses an interesting question for network regularization design: is there a way for DNNs to refuse to (over)fit training samples with random labels, while exhibiting better generalization power than conventional DNNs when trained with true labels?",
"Such networks are very important because they will extract only intrinsic patterns from the training data instead of memorizing miscellaneous details.One would expect that data-dependent regularizations should be a better choice for reducing the memorizing capacity of DNNs.",
"Such regularizations are typically enforced by penalizing the standard softmax cross entropy loss with an extra geometric loss which regularizes the feature geometry BID8 Zhu et al., 2018; Wen et al., 2016) .",
"However, regularizing DNNs with an extra geometric loss has two disadvantages: First, the output of the softmax layer, usually viewed as a probability distribution, is typically inconsistent with the feature geometry enforced by the geometric loss.",
"Therefore, the geometric loss typically has a small weight to avoid jeopardizing the minimization of the softmax loss.",
"Second, we find that DNNs with such regularization can still perfectly (over)fit random training samples or random labels.",
"The reason is that the geometric loss (because of its small weight) is ignored and only the softmax loss is minimized.This suggests that simply penalizing the softmax loss with a geometric loss is not sufficient to regularize DNNs.",
"Instead, the softmax loss should be replaced by a validation loss that is consistent with the enforced geometry.",
"More specifically, every training batch B is split into two sub-batches, the geometry batch B g and the validation batch B v .",
"The geometric loss l g is imposed on the features of B g for them to exhibit a desired geometric structure.",
"A semi-supervised learning algorithm based on the proposed feature geometry is then used to generate a predicted label distribution for the validation batch, which combined with the true labels defines a validation loss on B v .",
"The total loss on the training batch B is then defined as the weighted sum l = l g + λl v .",
"Because the predicted label distribution on B v is based on the enforced geometry, the geometric loss l g can no longer be neglected.",
"Therefore, l g and l v will be minimized simultaneously, i.e., the geometry is correctly enforced (small l g ) and it can be used to predict validation samples (small l v ).",
"We call such DNNs Geometrically-Regularized-Self-Validating neural Networks (GRSVNets).",
"See FIG0 for a visual illustration of the network architecture.GRSVNet is a general architecture because every consistent geometry/validation pair can fit into this framework as long as the loss functions are differentiable.",
"In this paper, we focus on a particular type of GRSVNet, the Orthogonal-Low-rank-Embedding-GRSVNet (OLE-GRSVNet).",
"More specifically, we impose the OLE loss (Qiu & Sapiro, 2015) on the geometry batch to produce features residing in orthogonal subspaces, and we use the principal angles between the validation features and those subspaces to define a predicted label distribution on the validation batch.",
"We prove that the loss function obtains its minimum if and only if the subspaces of different classes spanned by the features in the geometry batch are orthogonal, and the features in the validation batch reside perfectly in the subspaces corresponding to their labels (see FIG0 ).",
"We show in our experiments that OLE-GRSVNet has better generalization performance when trained on real data, but it refuses to memorize the training samples when given random training data or random labels, which suggests that OLE-GRSVNet effectively learns intrinsic patterns.Our contributions can be summarized as follows:• We proposed a general framework, GRSVNet, to effectively impose data-dependent DNN regularization.",
"The core idea is the self-validation of the enforced geometry with a consistent validation loss on a separate batch of features.•",
"We study a particular case of GRSVNet, OLE-GRSVNet, that can produce highly discriminative features: samples from the same class belong to a low-rank subspace, and the subspaces for different classes are orthogonal.•",
"OLE-GRSVNet achieves better generalization performance when compared to DNNs with conventional regularizers. And",
"more importantly, unlike conventional DNNs, OLEGRSVNet refuses to fit the training data (i.e., with a training error close to random guess) when the training data or the training labels are randomly generated. This",
"implies that OLE-GRSVNet never memorizes the training samples, only learns intrinsic patterns.",
"We proposed a general framework, GRSVNet, for data-dependent DNN regularization.",
"The core idea is the self-validation of the enforced geometry on a separate batch using a validation loss consistent with the geometric loss, so that the predicted label distribution has a meaningful geometric interpretation.",
"In particular, we study a special case of GRSVNet, OLE-GRSVNet, which is capable of producing highly discriminative features: samples from the same class belong to a low-rank subspace, and the subspaces for different classes are orthogonal.",
"When trained on benchmark datasets with real labels, OLE-GRSVNet achieves better test accuracy when compared to DNNs with different regularizations sharing the same baseline architecture.",
"More importantly, unlike conventional DNNs, OLE-GRSVNet refuses to memorize and overfit the training data when trained on random labels or random data.",
"This suggests that OLE-GRSVNet effectively reduces the memorizing capacity of DNNs, and it only extracts intrinsically learnable patterns from the data.Although we provided some intuitive explanation as to why GRSVNet generalizes well on real data and refuses overfitting random data, there are still open questions to be answered.",
"For example, what is the minimum representational capacity of the baseline DNN (i.e., number of layers and number of units) to make even GRSVNet trainable on random data?",
"Or is it because of the learning algorithm (SGD) that prevents GRSVNet from learning a decision boundary that is too complicated for random samples?",
"Moreover, we still have not answered why conventional DNNs, while fully capable of memorizing random data by brute force, typically find generalizable solutions on real data.",
"These questions will be the focus of our future work.",
"It suffices to prove the case when K = 2, as the case for larger K can be proved by induction.",
"In order to simplify the notation, we restate the original theorem for K = 2:Theorem.",
"Let A ∈ R N ×m and B ∈ R N ×n be matrices of the same row dimensions, and [A, B] ∈ R N ×(m+n) be the concatenation of A and B. We have DISPLAYFORM0 Moreover, the equality holds if and only if A * B = 0, i.e., the column spaces of A and B are orthogonal.Proof.",
"The inequality (8) and the sufficient condition for the equality to hold is easy to prove.",
"More specifically, DISPLAYFORM1 Moreover, if A * B = 0, then DISPLAYFORM2 where |A| = (A * A) 1 2 .",
"Therefore, DISPLAYFORM3 Next, we show the necessary condition for the equality to hold, i.e., DISPLAYFORM4 DISPLAYFORM5 | be a symmetric positive semidefinite matrix.",
"We DISPLAYFORM6 Let DISPLAYFORM7 be the orthonormal eigenvectors of |A|, |B|, respectively.",
"Then DISPLAYFORM8 Similarly, DISPLAYFORM9 Suppose that [A, B] * = A * + B * , then DISPLAYFORM10 Therefore, both of the inequalities in this chain must be equalities, and the first one being equality only if G = 0. This combined with the last equation in FORMULA2 implies DISPLAYFORM11 APPENDIX B PROOF OF THEOREM 2Proof.",
"First, l is defined in equation FORMULA8 as DISPLAYFORM12 The nonnegativity of l g (Z g ) is guaranteed by Theorem",
"1. The validation loss l v (Y v ,Ŷ v ) is also nonnegative since it is the average (over the validation batch) of the cross entropy losses: DISPLAYFORM13 Therefore l = l g + λl v is also nonnegative.Next, for a given λ > 0, l(X, Y) obtains its minimum value zero if and only if both l g (Z g ) and l v (Y v ,Ŷ v ) are zeros.•",
"By Theorem 1, l g (Z g ) = 0 if and only if span(Z g c )⊥ span(Z g c ), ∀c = c .•",
"According to (19), l v (Y v ,Ŷ v ) = 0 if and only ifŷ(x) = δ y , ∀x ∈ X v , i.e., for every x ∈ X v c , its feature z = Φ(x; θ) belongs to span(Z g c ).At",
"last, we want to prove that if λ > 0, and X v contains at least one sample for each class, then rank(span(Z g c )) ≥ 1 for any c ∈ {1, . . . , K}. If",
"not, then there exists c ∈ {1, . . . , K} such that rank(span(Z g c )) =",
"0. Let x ∈ X v be a validation datum belonging to class y = c. The",
"predicted probability of x belonging to class c is defined in (3): DISPLAYFORM14 Thus we have DISPLAYFORM15"
] | [
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.12765957415103912,
0.10526315122842789,
0.4444444477558136,
0.051282044500112534,
0.0476190410554409,
0.23529411852359772,
0.17391303181648254,
0.04878048226237297,
0.12121211737394333,
0.2448979616165161,
0.23529411852359772,
0.19230768084526062,
0.2545454502105713,
0,
0.08163265138864517,
0.05882352590560913,
0.4444444477558136,
0.12765957415103912,
0.11428570747375488,
0,
0.10526315122842789,
0.11999999731779099,
0,
0.04999999329447746,
0.04444443807005882,
0.07407406717538834,
0.1666666567325592,
0.12121211737394333,
0.07547169178724289,
0.07547169178724289,
0.2571428418159485,
0.052631575614213943,
0.19999998807907104,
0.0624999962747097,
0.21276594698429108,
0.06451612710952759,
0.3448275923728943,
0.08510638028383255,
0.1538461446762085,
0.04651162400841713,
0.20512819290161133,
0.1875,
0.08888888359069824,
0.25,
0.13636362552642822,
0,
0.10810810327529907,
0.12121211737394333,
0,
0.060606054961681366,
0,
0.1395348757505417,
0,
0.029411761090159416,
0,
0.057971011847257614,
0,
0.03703703358769417,
0.1111111044883728,
0.05714285373687744,
0.0555555522441864,
0.0555555522441864
] | B1GSBsRcFX | true | [
"we propose a new framework for data-dependent DNN regularization that can prevent DNNs from overfitting random data or random labels."
] |
[
"End-to-end automatic speech recognition (ASR) commonly transcribes audio signals into sequences of characters while its performance is evaluated by measuring the word-error rate (WER).",
"This suggests that predicting sequences of words directly may be helpful instead.",
"However, training with word-level supervision can be more difficult due to the sparsity of examples per label class.",
"In this paper we analyze an end-to-end ASR model that combines a word-and-character representation in a multi-task learning (MTL) framework.",
"We show that it improves on the WER and study how the word-level model can benefit from character-level supervision by analyzing the learned inductive preference bias of each model component empirically.",
"We find that by adding character-level supervision, the MTL model interpolates between recognizing more frequent words (preferred by the word-level model) and shorter words (preferred by the character-level model).",
"End-to-end automatic speech recognition (ASR) allows for learning a direct mapping from audio signals to character outputs.",
"Usually, a language model re-scores the predicted transcripts during inference to correct spelling mistakes BID16 .",
"If we map the audio input directly to words, we can use a simpler decoding mechanism and reduce the prediction time.",
"Unfortunately, word-level models can only be trained on known words.",
"Out-of-vocabulary (OOV) words have to be mapped to an unknown token.",
"Furthermore, decomposing transcripts into sequences of words decreases the available number of examples per label class.",
"These shortcomings make it difficult to train on the word-level BID2 .Recent",
"works have shown that multi-task learning (MTL) BID8 on the word-and character-level can improve the word-error rate (WER) of common end-to-end speech recognition architectures BID2 BID3 BID18 BID21 BID22 BID24 BID29 . MTL can",
"be interpreted as learning an inductive bias with favorable generalization properties BID6 . In this",
"work we aim at characterizing the nature of this inductive bias in word-character-level MTL models by analyzing the distribution of words that they recognize. Thereby",
", we seek to shed light on the learning process and possibly inform the design of better models. We will",
"focus on connectionist temporal classification (CTC) BID15 . However",
", the analysis can also prove beneficial to other modeling paradigms, such as RNN Transducers BID14 or Encoder-Decoder models, e.g., BID5 BID9 .Contributions",
". We show that",
", contrary to earlier negative results BID2 BID27 , it is in fact possible to train a word-level model from scratch on a relatively small dataset and that its performance can be further improved by adding character-level supervision. Through an",
"empirical analysis we show that the resulting MTL model combines the preference biases of word-and character-level models. We hypothesize",
"that this can partially explain why word-character MTL improves on only using a single decomposition, such as phonemes, characters or words.Several works have explored using words instead of characters or phonemes as outputs of the end-toend ASR model BID2 BID27 . Soltau et al.",
"BID27 found that in order to solve the problem of observing only few labels per word, they needed to use a large dataset of 120, 000 hours to train a word-level model directly. Accordingly,",
"Audhkhasi et al. BID2 reported difficulty to train a model on words from scratch and instead fine-tuned a pre-trained character-level model after replacing the last dense layer with a word embedding.MTL enables a straightforward joint training procedure to integrate transcript information on multiple levels of granularity. Treating word-and",
"character-level transcription as two distinct tasks allows for combining their losses in a parallel BID21 BID22 BID28 BID29 or hierarchical structure BID13 BID20 BID24 . Augmenting the commonly-used",
"CTC loss with an attention mechanism can help with aligning the predictions on both character-and word-level BID3 BID12 BID22 . All these MTL methods improve",
"a standard CTC baseline.Finding the right granularity of the word decomposition is in itself a difficult problem. While Li et al. BID22 used different",
"fixed decompositions of words, sub-words and characters, it is also possible to optimize over alignments and decompositions jointly BID23 . Orthogonal to these works different",
"authors have explored how to minimize WER directly by computing approximate gradients BID25 BID32 .When and why does MTL work? Earlier",
"theoretical work argued that",
"the auxiliary task provides a favorable inductive bias to the main task BID6 . Within natural language processing on",
"text several works verified empirically that this inductive bias is favorable if there is a certain notion of relatedness between the tasks BID4 BID7 BID26 . Here, we investigate how to characterize",
"the inductive bias learned via MTL for speech recognition.",
"In contrast to earlier studies in the literature, we found that, even on a relatively small dataset, training on a word-level can be feasible.",
"Furthermore, we found that combining a word-level model with character-level supervision in MTL can improve results noticeably.",
"To gain a better understanding of this, we characterized the inductive bias of word-character MTL in ASR by comparing the distributions of recognized words at the beginning of training.",
"We found that adding character-level supervision to a word-level interpolates between recognizing more frequent words (preferred by the word-level model) and shorter words (preferred by the character-level model).",
"This effect could be even more pronounced on harder datasets than WSJ, such as medical communication data where many long words are infrequent, but very important.",
"Further analysis of word distributions in terms of pitch, noise and acoustic variability could provide additional insight."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.2857142686843872,
0.06666666269302368,
0.1111111044883728,
0.05405404791235924,
0.260869562625885,
0.1538461446762085,
0.17142856121063232,
0.060606054961681366,
0.10810810327529907,
0,
0,
0.12121211737394333,
0.06666666269302368,
0.20408162474632263,
0.0624999962747097,
0.1463414579629898,
0.21621620655059814,
0,
0.0476190410554409,
0,
0.1090909019112587,
0.2222222238779068,
0.1071428507566452,
0.0833333283662796,
0.13114753365516663,
0.04444443807005882,
0.04878048226237297,
0.1463414579629898,
0.10256409645080566,
0.10256409645080566,
0,
0.05882352590560913,
0.0833333283662796,
0.2222222238779068,
0.04999999329447746,
0,
0.1428571343421936,
0.1538461446762085,
0,
0.1764705777168274
] | B1GySqOojm | true | [
"Multi-task learning improves word-and-character-level speech recognition by interpolating the preference biases of its components: frequency- and word length-preference."
] |
[
"Discretizing floating-point vectors is a fundamental step of modern indexing methods.",
"State-of-the-art techniques learn parameters of the quantizers on training data for optimal performance, thus adapting quantizers to the data.",
"In this work, we propose to reverse this paradigm and adapt the data to the quantizer: we train a neural net whose last layers form a fixed parameter-free quantizer, such as pre-defined points of a sphere.",
"As a proxy objective, we design and train a neural network that favors uniformity in the spherical latent space, while preserving the neighborhood structure after the mapping. ",
"For this purpose, we propose a new regularizer derived from the Kozachenko-Leonenko differential entropy estimator and combine it with a locality-aware triplet loss. \n",
"Experiments show that our end-to-end approach outperforms most learned quantization methods, and is competitive with the state of the art on widely adopted benchmarks.",
"Further more, we show that training without the quantization step results in almost no difference in accuracy, but yields a generic catalyser that can be applied with any subsequent quantization technique.\n",
"Recent work BID27 proposed to leverage the pattern-matching ability of machine learning algorithms to improve traditional index structures such as B-trees or Bloom filters, with encouraging results.",
"In their one-dimensional case, an optimal B-Tree can be constructed if the cumulative density function (CDF) of the indexed value is known, and thus they approximate this CDF using a neural network.",
"We emphasize that the CDF itself is a mapping between the indexed value and a uniform distribution in [0, 1] .",
"In this work, we wish to generalize such an approach to multi-dimensional spaces.",
"More precisely, as illustrated by FIG0 , we aim at learning a function that maps real-valued vectors to a uniform distribution over a d-dimensional sphere, such that a fixed discretizing structure, for example a fixed binary encoding (sign of components) or a regular lattice quantizer, offers competitive coding performance.Our approach is evaluated in the context of similarity search, where methods often rely on various forms of learning machinery BID12 BID45 ; in particular there is a substantial body of literature on methods producing compact codes BID20 ).",
"Yet the problem of jointly optimizing a coding stage and a neural network remains essentially unsolved, partly because .",
"It is learned end-to-end, yet the part of the network in charge of the discretization operation is fixed in advance, thereby avoiding optimization problems.",
"The learnable function f , namely the \"catalyzer\", is optimized to increase the quality of the subsequent coding stage.",
"input λ = 0 λ = 0.01 λ = 0.1 λ → ∞ FIG1 : Illustration of our method, which takes as input a set of samples from an unknown distribution.",
"We learn a neural network that aims at preserving the neighborhood structure in the input space while best covering the output space (uniformly).",
"This trade-off is controlled by a parameter λ.",
"The case λ = 0 keeps the locality of the neighbors but does not cover the output space.",
"On the opposite, when the loss degenerates to the differential entropic regularizer (λ → ∞), the neighbors are not maintained by the mapping.",
"Intermediate values offer different trade-offs between neighbor fidelity and uniformity, which is proper input for an efficient lattice quantizer (depicted here by the hexagonal lattice A 2 ).it",
"is difficult to optimize through a discretization function. For",
"this reason, most efforts have been devoted to networks producing binary codes, for which optimization tricks exist, such as soft binarization or stochastic relaxation, which are used in conjunction with neural networks BID28 BID18 . However",
"it is difficult to improve over more powerful codes such as those produced by product quantization BID20 , and recent solutions addressing product quantization require complex optimization procedures BID24 BID34 .In order",
"to circumvent this problem, we propose a drastic simplification of learning algorithms for indexing. We learn",
"a mapping such that the output follows the distribution under which the subsequent discretization method, either binary or a more general quantizer, performs better. In other",
"terms, instead of trying to adapt an indexing structure to the data, we adapt the data to the index.Our technique requires to jointly optimize two antithetical criteria. First, we",
"need to ensure that neighbors are preserved by the mapping, using a vanilla ranking loss BID40 BID6 BID44 . Second, the",
"training must favor a uniform output. This suggests",
"a regularization similar to maximum entropy BID36 , except that in our case we consider a continuous output space. We therefore",
"propose to cast an existing differential entropy estimator into a regularization term, which plays the same \"distribution-matching\" role as the Kullback-Leiber term of variational auto-encoders BID9 .As a side note",
", many similarity search methods are implicitly designed for the range search problem (or near neighbor, as opposed to nearest neighbor BID15 BID0 ), that aims at finding all vectors whose distance to the query vector is below a fixed threshold. For real-world",
"high-dimensional data, range search usually returns either no neighbors or too many. The discrepancy",
"between near-and nearest-neighbors is significantly reduced by our technique, see Section 3.3 and Appendix C for details.Our method is illustrated by FIG1 . We summarize our",
"contributions as follows:• We introduce an approach for multi-dimensional indexing that maps the input data to an output space in which indexing is easier. It learns a neural",
"network that plays the role of an adapter for subsequent similarity search methods.• For this purpose",
"we introduce a loss derived from the Kozachenko-Leonenko differential entropy estimator to favor uniformity in the spherical output space.• Our learned mapping",
"makes it possible to leverage spherical lattice quantizers with competitive quantization properties and efficient algebraic encoding.• Our ablation study shows",
"that our network can be trained without the quantization layer and used as a plug-in for processing features before using standard quantizers. We show quantitatively that",
"our catalyzer improves performance by a significant margin for quantization-based (OPQ BID10 ) and binary (LSH BID5 ) method.This paper is organized as follows. Section 2 discusses related",
"works. Section 3 introduces our neural",
"network model and the optimization scheme. Section 4 details how we combine",
"this strategy with lattice assignment to produce compact codes. The experimental section 5 evaluates",
"our approach.",
"Choice of λ.",
"The marginal distributions for these two views are much more uniform with our KoLeo regularizer, which is a consequence of the higher uniformity in the high-dimensional latent space.Qualitative evaluation of the uniformity.",
"Figure 3 shows the histogram of the distance to the nearest (resp. 100 th nearest) neighbor, before applying the catalyzer (left) and after (right).",
"The overlap between the two distributions is significantly reduced by the catalyzer.",
"We evaluate this quantitatively by measuring the probability that the distance between a point and its nearest neighbor is larger than the distance between another point and its 100 th nearest neighbor.",
"In a very imbalanced space, this value is 50%, whereas in a uniform space it should approach 0%.",
"In the input space, this probability is 20.8%, and it goes down to 5.0% in the output space thanks to our catalyzer.Visualization of the output distribution.",
"While FIG1 illustrates our method with the 2D disk as an output space, we are interested in mapping input samples to a higher dimensional hyper-sphere.",
"FIG2 proposes a visualization of the high-dimensional density from a different viewpoint, with the Deep1M dataset mapped in 8 dimensions.",
"We sample 2 planes randomly in R dout and project the dataset points (f (x 1 ), ..., f (x n )) on them.",
"For each column, the 2 figures are the angular histograms of the points with a polar parametrization of this plane.",
"The area inside the curve is constant and proportional to the number of samples n.",
"A uniform angular distribution produces a centered disk, and less uniform distributions look like unbalanced potatoes.The densities we represent are marginalized, so if the distribution looks non-uniform then it is non-uniform in d out -dimensional space, but the reverse is not true.",
"Yet one can compare the results obtained for different regularization coefficients, which shows that our regularizer has a strong uniformizing effect on the mapping, ultimately resembling that of a uniform distribution for λ = 1."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.13333332538604736,
0.17142856121063232,
0.16326530277729034,
0.27272728085517883,
0.0952380895614624,
0.1428571343421936,
0.1666666567325592,
0.08888888359069824,
0.1599999964237213,
0.2702702581882477,
0.06451612710952759,
0.1538461446762085,
0.2222222238779068,
0.15789473056793213,
0.1111111044883728,
0.1395348757505417,
0.5128204822540283,
0.07407406717538834,
0.11428570747375488,
0.10526315122842789,
0.12765957415103912,
0.1428571343421936,
0.1538461446762085,
0.04081632196903229,
0.2857142686843872,
0.1904761791229248,
0.1428571343421936,
0.21052631735801697,
0.07407406717538834,
0.307692289352417,
0.1702127605676651,
0.1355932205915451,
0.060606054961681366,
0.0476190410554409,
0.4888888895511627,
0.1666666567325592,
0.24390242993831635,
0.09999999403953552,
0.22727271914482117,
0.08510638028383255,
0.07999999821186066,
0.12903225421905518,
0.060606054961681366,
0,
0.25,
0.09999999403953552,
0.06666666269302368,
0.1904761791229248,
0.1666666567325592,
0.22727271914482117,
0.22727271914482117,
0.21621620655059814,
0.1428571343421936,
0.1111111044883728,
0.12121211737394333,
0.10526315122842789,
0.1599999964237213
] | SkGuG2R5tm | true | [
"We learn a neural network that uniformizes the input distribution, which leads to competitive indexing performance in high-dimensional space"
] |
[
"Temporal difference (TD) learning is a popular algorithm for policy evaluation in reinforcement learning, but the vanilla TD can substantially suffer from the inherent optimization variance.",
"A variance reduced TD (VRTD) algorithm was proposed by Korda and La (2015), which applies the variance reduction technique directly to the online TD learning with Markovian samples.",
"In this work, we first point out the technical errors in the analysis of VRTD in Korda and La (2015), and then provide a mathematically solid analysis of the non-asymptotic convergence of VRTD and its variance reduction performance.",
"We show that VRTD is guaranteed to converge to a neighborhood of the fixed-point solution of TD at a linear convergence rate.",
"Furthermore, the variance error (for both i.i.d. and Markovian sampling) and the bias error (for Markovian sampling) of VRTD are significantly reduced by the batch size of variance reduction in comparison to those of vanilla TD.",
"In reinforcement learning (RL), policy evaluation aims to obtain the expected long-term reward of a given policy and plays an important role in identifying the optimal policy that achieves the maximal cumulative reward over time Bertsekas and Tsitsiklis (1995) ; Dayan and Watkins (1992) ; Rummery and Niranjan (1994) .",
"The temporal difference (TD) learning algorithm, originally proposed by Sutton (1988) , is one of the most widely used policy evaluation methods, which uses the Bellman equation to iteratively bootstrap the estimation process and continually update the value function in an incremental way.",
"In practice, if the state space is large or infinite, function approximation is often used to find an approximate value function efficiently.",
"Theoretically, TD with linear function approximation has been shown to converge to the fixed point solution with i.i.d. samples and Markovian samples in Sutton (1988) ; Tsitsiklis and Van Roy (1997) .",
"The finite sample analysis of TD has also been studied in Bhandari et al. (2018) ; Srikant and Ying (2019) ; Dalal et al. (2018a); Cai et al. (2019) .",
"Since each iteration of TD uses one or a mini-batch of samples to estimate the mean of the gradient 1 , TD learning usually suffers from the inherent variance, which substantially degrades the convergence accuracy.",
"Although a diminishing stepsize or very small constant stepsize can reduce the variance Bhandari et al. (2018) ; Srikant and Ying (2019) ; Dalal et al. (2018a) , they also slow down the convergence significantly.",
"Two approaches have been proposed to reduce the variance.",
"The first approach is the so-called batch TD, which takes a fixed sample set and transforms the empirical mean square projected Bellman error (MSPBE) into an equivalent convex-concave saddle-point problem Du et al. (2017) .",
"Due to the finite-sample nature of such a problem, stochastic variance reduction techniques for conventional optimization can be directly applied here to reduce the variance.",
"In particular, Du et al. (2017) showed that SVRG Johnson and Zhang (2013) and SAGA Defazio et al. (2014) can be applied to improve the performance of batch TD algorithms, and Peng et al. (2019) proposed two variants of SVRG to further save the computation cost.",
"However, the analysis of batch TD does not take into account the statistical nature of the training samples, which are generated by a MDP.",
"Hence, there is no guarantee of such obtained solutions to be close to the fixed point of TD learning.",
"The second approach is the so-called TD with centering (CTD) algorithm proposed in Korda and La (2015) , which introduces the variance reduction idea to the original TD learning algorithm.",
"For the sake of better reflecting its major feature, we refer to CTD as Variance Reduced TD (VRTD) throughout this paper.",
"Similarly to the SVRG in Johnson and Zhang (2013) , VRTD has outer and inner loops.",
"The beginning of each inner-loop (i.e. each epoch) computes a batch of sample gradients so that each subsequent inner loop iteration modifies only one sample gradient in the batch gradient to reduce the variance.",
"The main difference between VRTD and batch TD is that VRTD applies the variance reduction directly to TD learning rather than to a transformed optimization problem in batch TD.",
"Though Korda and La (2015) empirically verified that VRTD has better convergence accuracy than vanilla TD learning, some technical errors in the analysis in Korda and La (2015) have been pointed out in follow up studies Dalal et al. (2018a) ; Narayanan and Szepesvári (2017) .",
"Furthermore, as we discuss in Section 3, the technical proof in Korda and La (2015) regarding the convergence of VRTD also has technical errors so that their results do not correctly characterize the impact of variance reduction on TD learning.",
"Given the recent surge of interest in the finite time analysis of the vanilla TD Bhandari et al. (2018) ; Srikant and Ying (2019) ; Dalal et al. (2018a) , it becomes imperative to reanalyze the VRTD and accurately understand whether and how variance reduction can help to improve the convergence accuracy over vanilla TD.",
"Towards this end, this paper specifically addresses the following central questions.",
"• For i.i.d. sampling, it has been shown in Bhandari et al. (2018) that vanilla TD converges only to a neighborhood of the fixed point for a constant stepsize and suffers from a constant error term caused by the variance of the stochastic gradient at each iteration.",
"For VRTD, does the variance reduction help to reduce such an error and improve the accuracy of convergence?",
"How does the error depend on the variance reduction parameter, i.e., the batch size for variance reduction?",
"• For Markovian sampling, it has been shown in Bhandari et al. (2018) ; Srikant and Ying (2019) that the convergence of vanilla TD further suffers from a bias error due to the correlation among samples in addition to the variance error as in i.i.d. sampling.",
"Does VRTD, which was designed to have reduced variance, also enjoy reduced bias error?",
"If so, how does the bias error depend on the batch size for variance reduction?"
] | [
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.2790697515010834,
0.2790697515010834,
0.25531914830207825,
0.21621620655059814,
0.31111109256744385,
0.20689654350280762,
0.13793103396892548,
0.052631575614213943,
0.1304347813129425,
0.1463414579629898,
0.21276594698429108,
0.1666666567325592,
0.14814814925193787,
0.11764705181121826,
0.19999998807907104,
0.14814814925193787,
0.20512819290161133,
0.22857142984867096,
0.22727271914482117,
0.25641024112701416,
0.12121211737394333,
0.17391303181648254,
0.2857142686843872,
0.1428571343421936,
0.22641508281230927,
0.23333333432674408,
0.1428571343421936,
0.23333333432674408,
0.22857142984867096,
0.11764705181121826,
0.23728813230991364,
0.06451612710952759,
0.1249999925494194
] | S1ly10EKDS | true | [
"This paper provides a rigorous study of the variance reduced TD learning and characterizes its advantage over vanilla TD learning"
] |
[
"We tackle unsupervised domain adaptation by accounting for the fact that different domains may need to be processed differently to arrive to a common feature representation effective for recognition.",
"To this end, we introduce a deep learning framework where each domain undergoes a different sequence of operations, allowing some, possibly more complex, domains to go through more computations than others.\n",
"This contrasts with state-of-the-art domain adaptation techniques that force all domains to be processed with the same series of operations, even when using multi-stream architectures whose parameters are not shared.\n",
"As evidenced by our experiments, the greater flexibility of our method translates to higher accuracy.",
"Furthermore, it allows us to handle any number of domains simultaneously.",
"While deep learning has ushered in great advances in automated image understanding, it still suffers from the same weaknesses as all other machine learning techniques: when trained with images obtained under specific conditions, deep networks typically perform poorly on images acquired under different ones.",
"This is known as the domain shift problem: the changing conditions cause the statistical properties of the test, or target, data, to be different from those of the training, or source, data, and the network's performance degrades accordingly.",
"Domain adaptation aims to address this problem, especially when annotating images from the target domain is difficult, expensive, or downright infeasible.",
"The dominant trend is to map images to features that are immune to the domain shift, so that the classifier works equally well on the source and target domains (Fernando et al., 2013; Ganin & Lempitsky, 2015; .",
"In the context of deep learning, the standard approach is to find those features using a single architecture for both domains (Tzeng et al., 2014; Ganin & Lempitsky, 2015; Yan et al., 2017; Zhang et al., 2018) .",
"Intuitively, however, as the domains have different properties, it is not easy to find one network that does this effectively for both.",
"A better approach is to allow domains to undergo different transformations to arrive at domain-invariant features.",
"This has been the focus of recent work (Tzeng et al., 2017; Bermúdez-Chacón et al., 2018; Rozantsev et al., 2018; , where source and target data pass through two different networks with the same architecture but different weights, nonetheless related to each other.",
"In this paper, we introduce a novel, even more flexible paradigm for domain adaptation, that allows the different domains to undergo different computations, not only in terms of layer weights but also in terms of number of operations, while selectively sharing subsets of these computations.",
"This enables the network to automatically adapt to situations where, for example, one domain depicts simpler images, such as synthetic ones, which may not need as much processing power as those coming from more complex domains, such as images taken in-the-wild.",
"Our formulation reflects the intuition that source and target domain networks should be similar because they solve closely related problems, but should also perform domain-specific computations to offset the domain shift.",
"To turn this intuition into a working algorithm, we develop a multibranch architecture that sends the data through multiple network branches in parallel.",
"What gives it the necessary flexibility are trainable gates that are tuned to modulate and combine the outputs of these branches, as shown in , each of which processes the data in parallel branches, whose outputs are then aggregated in a weighted manner by a gate to obtain a single response.",
"To allow for domain-adaptive computations, each domain has its own set of gates, one for each computational unit, which combine the branches in different ways.",
"As a result, some computations are shared across domains while others are domain-specific.",
"computations should be carried out for each one.",
"As an additional benefit, in contrast to previous strategies for untying the source and target streams (Rozantsev et al., 2018; , our formulation naturally extends to more than two domains.",
"In other words, our contribution is a learning strategy that adaptively adjusts the specific computation to be performed for each domain.",
"To demonstrate that it constitutes an effective approach to extracting domain-invariant features, we implement it in conjunction with the popular domain classifier-based method of Ganin & Lempitsky (2015) .",
"Our experiments demonstrate that our Domain Adaptive Multibranch Networks, which we will refer to as DAMNets, not only outperform the original technique of Ganin & Lempitsky (2015) , but also the state-of-the-art strategy for untying the source and target weights of Rozantsev et al. (2019) , which relies on the same domain classifier.",
"We will make our code publicly available upon acceptance of the paper.",
"We have introduced a domain adaptation approach that allows for adaptive, separate computations for different domains.",
"Our framework relies on computational units that aggregate the outputs of multiple parallel operations, and on a set of trainable domain-specific gates that adapt the aggregation process to each domain.",
"Our experiments have demonstrated the benefits of this approach over the state-of-the-art weight untying strategy; the greater flexibility of our method translates into a consistently better accuracy.",
"Although we only experimented with using the same branch architectures within a computational unit, our framework generalizes to arbitrary branch architectures, the only constraint being that their outputs are of commensurate shapes.",
"An interesting avenue for future research would therefore be to automatically determine the best operation to perform for each domain, for example by combining our approach with neural architecture search strategies.",
"Figure 1 : Multibranch LeNet.",
"This architecture is a multibranch extension to the LeNet used by DANN (Ganin & Lempitsky, 2015 Figure 2 : Multibranch SVHNet.",
"This architecture is a multibranch extension to the SVHNet used by DANN (Ganin & Lempitsky, 2015 (He et al., 2016) .",
"We preserve the groupings described in the original paper (He et al., 2016) .",
"N denotes the number of classes in the dataset."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.33898305892944336,
0.158730149269104,
0.158730149269104,
0.04255318641662598,
0.045454543083906174,
0.0833333283662796,
0.1904761791229248,
0.14814814925193787,
0.1818181723356247,
0.1515151411294937,
0.2181818187236786,
0.1702127605676651,
0.11594202369451523,
0.1944444328546524,
0.11594202369451523,
0.13114753365516663,
0.145454540848732,
0.138888880610466,
0.178571417927742,
0.04444444179534912,
0.09756097197532654,
0.09677419066429138,
0.29629629850387573,
0.13333332538604736,
0.1249999925494194,
0,
0.25,
0.16949151456356049,
0.035087715834379196,
0.12903225421905518,
0.16393442451953888,
0,
0.14814814925193787,
0.14814814925193787,
0.043478257954120636,
0.04878048598766327
] | rJxycxHKDS | true | [
"A Multiflow Network is a dynamic architecture for domain adaptation that learns potentially different computational graphs per domain, so as to map them to a common representation where inference can be performed in a domain-agnostic fashion."
] |
[
"The practical usage of reinforcement learning agents is often bottlenecked by the duration of training time.",
"To accelerate training, practitioners often turn to distributed reinforcement learning architectures to parallelize and accelerate the training process.",
"However, modern methods for scalable reinforcement learning (RL) often tradeoff between the throughput of samples that an RL agent can learn from (sample throughput) and the quality of learning from each sample (sample efficiency).",
"In these scalable RL architectures, as one increases sample throughput (i.e. increasing parallelization in IMPALA (Espeholt et al., 2018)), sample efficiency drops significantly.",
"To address this, we propose a new distributed reinforcement learning algorithm, IMPACT.",
"IMPACT extends PPO with three changes: a target network for stabilizing the surrogate objective, a circular buffer, and truncated importance sampling.",
"In discrete action-space environments, we show that IMPACT attains higher reward and, simultaneously, achieves up to 30% decrease in training wall-time than that of IMPALA.",
"For continuous control environments, IMPACT trains faster than existing scalable agents while preserving the sample efficiency of synchronous PPO.",
"Proximal Policy Optimization (Schulman et al., 2017 ) is one of the most sample-efficient on-policy algorithms.",
"However, it relies on a synchronous architecture for collecting experiences, which is closely tied to its trust region optimization objective.",
"Other architectures such as IMPALA can achieve much higher throughputs due to the asynchronous collection of samples from workers.",
"Yet, IMPALA suffers from reduced sample efficiency since it cannot safely take multiple SGD steps per batch as PPO can.",
"The new agent, Importance Weighted Asynchronous Architectures with Clipped Target Networks (IMPACT), mitigates this inherent mismatch.",
"Not only is the algorithm highly sample efficient, it can learn quickly, training 30 percent faster than IMPALA.",
"At the same time, we propose a novel method to stabilize agents in distributed asynchronous setups and, through our ablation studies, show how the agent can learn in both a time and sample efficient manner.",
"In our paper, we show that the algorithm IMPACT realizes greater gains by striking the balance between high sample throughput and sample efficiency.",
"In our experiments, we demonstrate in the experiments that IMPACT exceeds state-of-the-art agents in training time (with same hardware) while maintaining similar sample efficiency with PPO's.",
"The contributions of this paper are as follows:",
"1. We show that when collecting experiences asynchronously, introducing a target network allows for a stabilized surrogate objective and multiple SGD steps per batch (Section 3.1).",
"2. We show that using a circular buffer for storing asynchronously collected experiences allows for smooth trade-off between real-time performance and sample efficiency (Section 3.2).",
"3. We show that IMPACT, when evaluated using identical hardware and neural network models, improves both in real-time and timestep efficiency over both synchronous PPO and IMPALA (Section 4).",
"into a large training batch and the learner performs minibatch SGD.",
"IMPALA workers asynchronously generate data.",
"IMPACT consists of a batch buffer that takes in worker experience and a target's evaluation on the experience.",
"The learner samples from the buffer.",
"In conclusion, we introduce IMPACT, which extends PPO with a stabilized surrogate objective for asynchronous optimization, enabling greater real-time performance without sacrificing timestep efficiency.",
"We show the importance of the IMPACT objective to stable training, and show it can outperform tuned PPO and IMPALA baselines in both real-time and timestep metrics.",
"Time ( In Figure 9 , we gradually add components to IMPALA until the agent is equivalent to IMPACT's.",
"Starting from IMPALA, we gradually add PPO's objective function, circular replay buffer, and target-worker clipping.",
"In particular, IMPALA with PPO's objective function and circular replay buffer is equivalent to an asynchronous-variant of PPO (APPO).",
"APPO fails to perform as well as synchronous distributed PPO, since PPO is an on-policy algorithm."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.25806450843811035,
0.1249999925494194,
0.13333332538604736,
0.19999998807907104,
0.0714285671710968,
0.1111111044883728,
0.09999999403953552,
0.2857142686843872,
0,
0,
0,
0.1111111044883728,
0,
0.1764705777168274,
0.1666666567325592,
0.2702702581882477,
0.2926829159259796,
0,
0.0476190410554409,
0.1463414579629898,
0.0952380895614624,
0.14814814925193787,
0,
0.1249999925494194,
0,
0.04999999701976776,
0.10256409645080566,
0,
0.06451612710952759,
0.05714285373687744,
0
] | BJeGlJStPr | true | [
"IMPACT helps RL agents train faster by decreasing training wall-clock time and increasing sample efficiency simultaneously."
] |
[
"In this paper, we show that a simple coloring scheme can improve, both theoretically and empirically, the expressive power of Message Passing Neural Networks (MPNNs).",
"More specifically, we introduce a graph neural network called Colored Local Iterative Procedure (CLIP) that uses colors to disambiguate identical node attributes, and show that this representation is a universal approximator of continuous functions on graphs with node attributes.",
"Our method relies on separability, a key topological characteristic that allows to extend well-chosen neural networks into universal representations.",
"Finally, we show experimentally that CLIP is capable of capturing structural characteristics that traditional MPNNs fail to distinguish, while being state-of-the-art on benchmark graph classification datasets.",
"Learning good representations is seen by many machine learning researchers as the main reason behind the tremendous successes of the field in recent years (Bengio et al., 2013) .",
"In image analysis (Krizhevsky et al., 2012) , natural language processing (Vaswani et al., 2017) or reinforcement learning (Mnih et al., 2015) , groundbreaking results rely on efficient and flexible deep learning architectures that are capable of transforming a complex input into a simple vector while retaining most of its valuable features.",
"The universal approximation theorem (Cybenko, 1989; Hornik et al., 1989; Hornik, 1991; Pinkus, 1999) provides a theoretical framework to analyze the expressive power of such architectures by proving that, under mild hypotheses, multi-layer perceptrons (MLPs) can uniformly approximate any continuous function on a compact set.",
"This result provided a first theoretical justification of the strong approximation capabilities of neural networks, and was the starting point of more refined analyses providing valuable insights into the generalization capabilities of these architectures (Baum and Haussler, 1989; Geman et al., 1992; Saxe et al., 2014; Bartlett et al., 2018) .",
"Despite a large literature and state-of-the-art performance on benchmark graph classification datasets, graph neural networks yet lack a similar theoretical foundation (Xu et al., 2019) .",
"Universality for these architectures is either hinted at via equivalence with approximate graph isomorphism tests (k-WL tests in Xu et al. 2019; Maron et al. 2019a ), or proved under restrictive assumptions (finite node attribute space in Murphy et al. 2019) .",
"In this paper, we introduce Colored Local Iterative Procedure 1 (CLIP), which tackles the limitations of current Message Passing Neural Networks (MPNNs) by showing, both theoretically and experimentally, that adding a simple coloring scheme can improve the flexibility and power of these graph representations.",
"More specifically, our contributions are:",
"1) we provide a precise mathematical definition for universal graph representations,",
"2) we present a general mechanism to design universal neural networks using separability,",
"3) we propose a novel node coloring scheme leading to CLIP, the first provably universal extension of MPNNs,",
"4) we show that CLIP achieves state of the art results on benchmark datasets while significantly outperforming traditional MPNNs as well as recent methods on graph property testing.",
"The rest of the paper is organized as follows: Section 2 gives an overview of the graph representation literature and related works.",
"Section 3 provides a precise definition for universal representations, as well as a generic method to design them using separable neural networks.",
"In Section 4, we show that most state-of-the-art representations are not sufficiently expressive to be universal.",
"Then, using the analysis of Section 3, Section 5 provides CLIP, a provably universal extension of MPNNs.",
"Finally, Section 6 shows that CLIP achieves state-of-the-art accuracies on benchmark graph classification taks, as well as outperforming its competitors on graph property testing problems.",
"In this paper, we showed that a simple coloring scheme can improve the expressive power of MPNNs.",
"Using such a coloring scheme, we extended MPNNs to create CLIP, the first universal graph representation.",
"Universality was proven using the novel concept of separable neural networks, and our experiments showed that CLIP is state-of-the-art on both graph classification datasets and property testing tasks.",
"The coloring scheme is especially well suited to hard classification tasks that require complex structural information to learn.",
"The framework is general and simple enough to extend to other data structures such as directed, weighted or labeled graphs.",
"Future work includes more detailed and quantitative approximation results depending on the parameters of the architecture such as the number of colors k, or number of hops of the iterative neighborhood aggregation."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.12765957415103912,
0.24137930572032928,
0.3414634168148041,
0.12765957415103912,
0.04081632196903229,
0.060606054961681366,
0.12121211737394333,
0.09677419066429138,
0.21739129722118378,
0.14035087823867798,
0.1269841194152832,
0,
0.24242423474788666,
0.34285715222358704,
0.3499999940395355,
0.0833333283662796,
0.0952380895614624,
0.2857142686843872,
0.15789473056793213,
0.1621621549129486,
0.09090908616781235,
0.1538461446762085,
0.2631579041481018,
0.16326530277729034,
0.1538461446762085,
0.04878048226237297,
0.04255318641662598
] | rJxt0JHKvS | true | [
"This paper introduces a coloring scheme for node disambiguation in graph neural networks based on separability, proven to be a universal MPNN extension."
] |
[
"In this paper we present a method for algorithmic melody generation using a generative adversarial network without recurrent components.",
"Music generation has been successfully done using recurrent neural networks, where the model learns sequence information that can help create authentic sounding melodies. ",
"Here, we use DCGAN architecture with dilated convolutions and towers to capture sequential information as spatial image information, and learn long-range dependencies in fixed-length melody forms such as Irish traditional reel.",
"Algorithmic music composition is almost as old as computers themselves, dating back to the 1957 \"Illiac suite\" (Hiller Jr & Isaacson, 1958) .",
"Since then, automated music composition evolved with technology, progressing from the first rule-and-randomness based methods to the sophisticated tools made possible by modern-day machine learning (see Fernández & Vico (2013) and Briot et al. (2017) for detailed surveys on history and state of the art of algorithmic music composition).",
"One of the first machine learning (ML) approaches to music generation was Conklin & Witten (1995) , who used the common notion of entropy as a measurement to build what they termed a multiple viewpoint system.",
"Standard feedforward neural networks have difficulties with sequence based information such as music.",
"Predicting the next note of a piece, when only based on the current note, does not account for long-range context or structure (such as key and musical sections) which help give coherence to compositions.",
"As music is traditionally represented as sequences of notes, recurrent neural networks are a natural tool for music (especially melody) generation, and multiple groups used RNNs fairly successfully for a variety of types of music.",
"Todd (1989) used a sequential model for composition in 1989, and Eck & Schmidhuber (2002) used the adapted LSTM structure to successfully generate music that had both short-term musical structure and contained the higher-level context and structure needed.",
"Subsequently, there have been a number of RNN-based melody generators (Simon & Oore, 2017; Lee et al., 2017; Eck & Lapalme, 2008; Sturm et al., 2016; Chen & Miikkulainen, 2001; Boulanger-Lewandowski et al., 2012; .",
"Other approaches such as MidiNet by Yang et al. (2017) , though not RNNs, also leveraged the sequential representation of music.",
"Using an RNN architecture provides a lot of flexibility when generating music, as an RNN has the ability to generate pieces of varying length.",
"However, in some styles of music this is not as desired.",
"This is true of traditional Irish music -and especially their jigs and reels.",
"These pieces have a more rigid format where the varying length can prevent capturing the interplay between the phrases of the piece.Finding jigs and reels to train on was made easy by an excellent database of Irish traditional melodies in ABC notation (a text based format), publicly available at TheSessionKeith.",
"Several RNN-based generators were trained on the melodies from TheSession, most notably Sturm et al. (Sturm et al., 2016; Sturm & Ben-Tal, 2018) , as well as Eck & Lapalme (2008) .",
"It is natural to view music, and in particular melodies, as sequential data.",
"However, to better represent long-term dependencies it can be useful to present music as a two-dimensional form, where related parts and occurrences of long patterns end up aligned.",
"This benefit is especially apparent in forms of music where a piece consists of a well-defined, fixed-length components, such as reels in Irish music.",
"These components are often variations on the same theme, with specific rules on where repeats vs. changes should be introduced.",
"Aligning them allows us to use vertical spatial proximity to capture these dependencies, while still representing the local structure in the sequence by horizontal proximity.",
"In this project, we leverage such two-dimensional representation of melodies for non-sequential melody generation.",
"We focus on melody generation using deep convolutional generative adversarial networks (DCGANs) without recurrent components for fixed-format music such as reels.",
"This approach is intended to capture higher-level structures in the pieces (like sections), and better mimic interplay between smaller parts (musical motifs).",
"More specifically, we use dilations of several semantically meaningful lengths (a bar or a phrase) to further capture the dependencies.",
"Dilated convolutions, introduced by Yu & Koltun (2015) , have been used in a number of applications over the last several years to capture long-range dependencies, notably in WaveNet (Oord et al., 2016) .",
"However, they are usually combined with some recurrent component even when used for a GAN-based generation such as in Zhang et al. (2019) or Liu & Yang (2019) .",
"Not all techniques applicable to images can be used for music, however: pooling isn't effective, as the average of two pitches can create notes which fall outside of the 12-semitone scale (which is the basis the major and minor scale as well as various modes).",
"This is reflected in the architecture of our discriminator, with dilations and towers as the main ingredients.",
"Converting sequential data into a format which implicitly encodes temporal information as spatial information is an effective way of generating samples of such data as whole pieces.",
"Here, we explored this approach for melody generation of fixed-length music forms, such as an Irish reel, using non-recurrent architecture for the discriminator CNN with towers and dilations, as well as a CNN for the GAN itself.",
"One advantage of this approach is that the model learns global and contextual information simultaneously, even with a small model.",
"LSTMs and other approaches need a much larger model to be able to learn both the contextual neighboring note sequences and global melody structure.",
"In future work, we would like to introduce boosting in order to capture the structure of the distribution more faithfully, and increase the range of pieces our model can generate.",
"Natural extensions of the model would be to introduce multiple channels to capture durations better (for example, as in Colombo et al. (2016) ), and add polyphony (ie, using some form of piano roll representation).",
"Another direction could be to experiment with higher-dimensional representation of the sequence data, to better capture several types of dependencies simultaneously.",
"Additionally, it would be interesting to apply it to other kinds of fixed-length sequential data with long-range patterns."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.3243243098258972,
0.1860465109348297,
0.1666666567325592,
0.04999999329447746,
0.0317460261285305,
0.07843136787414551,
0.1249999925494194,
0.07692307233810425,
0.1249999925494194,
0.07843136787414551,
0.04255318641662598,
0.04999999329447746,
0.14999999105930328,
0.06666666269302368,
0,
0.09090908616781235,
0.08888888359069824,
0.0624999962747097,
0.17391303181648254,
0.10256409645080566,
0.10526315122842789,
0.04878048226237297,
0.12121211737394333,
0.25,
0,
0.10256409645080566,
0.039215680211782455,
0.17391303181648254,
0.1071428507566452,
0.11428570747375488,
0.0952380895614624,
0.19999998807907104,
0.10526315122842789,
0.04878048226237297,
0.13333332538604736,
0.07692307233810425,
0.052631575614213943,
0.05714285373687744
] | HkePOCNtPH | true | [
"Representing melodies as images with semantic units aligned we can generate them using a DCGAN without any recurrent components."
] |
[
"Neural machine translation (NMT) systems have reached state of the art performance in translating text and widely deployed. ",
"Yet little is understood about how these systems function or break. ",
"Here we show that NMT systems are susceptible to producing highly pathological translations that are completely untethered from the source material, which we term hallucinations. ",
"Such pathological translations are problematic because they are are deeply disturbing of user trust and easy to find. ",
"We describe a method t generate hallucinations and show that many common variations of the NMT architecture are susceptible to them.",
"We study a variety of approaches to reduce the frequency of hallucinations, including data augmentation, dynamical systems and regularization techniques and show that data augmentation significantly reduces hallucination frequency.",
"Finally, we analyze networks that produce hallucinations and show signatures of hallucinations in the attention matrix and in the stability measures of the decoder.",
"Neural machine translation (NMT) systems are language translation systems based on deep learning architectures BID10 BID1 BID31 .",
"In the past few years, NMT has vastly improved and has been deployed in production systems, for example at Google BID33 , Facebook BID15 , Microsoft BID17 , and many others.",
"As NMT systems are built on deep learning methodology, they exhibit both the strengths and weaknesses of the approach.",
"For example, NMT systems are competitive with state of the art performance BID6 and scale well to very large datasets BID23 but like most large deep learning systems, NMT systems are poorly understood.",
"For example, in many commercial translation systems, entering repeated words many times occasionally results in strange translations, a phenomenon which has been highly publicized BID12 .",
"More broadly, recent work shows that NMT systems are highly sensitive to noise in the input tokens BID3 and also susceptible to adversarial inputs BID9 .",
"When there is an error in translation, it can be challenging to either understand why the mistake occurred or engineer a fix.Here we continue the study of noise in the input sequence and describe a type of phenomenon that is particularly pernicious, whereby inserting a single additional input token into the source sequence can completely divorce the translation from the input sentence.",
"For example, here is a German input sentence translated to English (reference) by a small NMT system: Source: Caldiero sprach mit E!",
"Nachrichten nach dem hart erkämpften Sieg, noch immer unter dem Schocküber den Gewinn des Großen Preises von 1 Million $.",
"Reference: Caldiero spoke with E!",
"News after the hard-fought victory, still in shock about winning the $1 million grand prize.",
"NMT Translation: Caldiero spoke with E, after the hard won victory, still under the shock of the winning of the Grand Prix of 1 million $.",
"In this paper we uncovered and studied a hallucination-like phenomenon whereby adding a single additional token into the input sequence causes complete mistranslation.",
"We showed that hallucinations are common in the NMT architecture we examined, as well as in its variants.",
"We note that hallucinations appear to be model specific.",
"We showed that the attention matrices associated with hallucinations were statistically different on average than those associated with input sentences that could not be perturbed.",
"Finally we proposed a few methods to reduce the occurrence of hallucinations.Our model has two differences from production systems.",
"For practical reasons we studied a small model and used a limited amount of training data.",
"Given these differences it is likely that our model shows more hallucinations than a quality production model.",
"However, given news reports of strange translations in popular public translation systems BID12 , the dynamical nature of the phenomenon, the fact that input datasets are noisy and finite, and that our most effective technique for preventing hallucinations is a data augmentation technique that requires knowledge of hallucinations, it would be surprising to discover that hallucinations did not occur in production systems.While it is not entirely clear what should happen when a perturbing input token is added to an input source sequence, it seems clear that having an utterly incorrect translation is not desirable.",
"This phenomenon appeared to us like a dynamical problem.",
"Here are two speculative hypotheses: perhaps a small problem in the decoder is amplified via iteration into a much larger problem.",
"Alternatively, perhaps the perturbing token places the decoder state in a poorly trained part of state space, the dynamics jump around wildly for while until an essentially random well-trodden stable trajectory is found, producing the remaining intelligible sentence fragment.Many of our results can be interpreted from the vantage of dynamical systems as well.",
"For example, we note that the NMT networks using CFN recurrent modules were highly susceptible to perturbations in our experiments.",
"This result highlights the difficulty of understanding or fixing problems in recurrent networks.",
"Because the CFN is embedded in a larger graph that contains an auto-regressive loop, there is no guarantee that the chaos-free property of the CFN will transfer to the larger graph.",
"The techniques we used to reduce hallucinations can also be interpreted as dynamical regularization.",
"For example, L2 weight decay is often discussed in the context of generalization.",
"However, for RNNs L2 regularization can also be thought of as dynamically conditioning a network to be more stable.",
"L2 regularization of input embeddings likely means that rare tokens will have optimization pressure to reduce the norm of those embeddings.",
"Thus, when rare tokens are inserted into an input token sequence, the effects may be reduced.",
"Even the data augmentation technique appears to have stability effects, as Appendix 10 shows the overall stability exponents are reduced when data augmentation is used.Given our experimental results, do we have any recommendations for those that engineer and maintain production NMT systems?",
"Production models should be tested for hallucinations, and when possible, the attention matrices and hidden states of the decoder should be monitored.",
"Our results on reducing hallucinations suggest that standard regularization techniques such as Dropout and L2 weight decay on the embeddings are important.",
"Further, data augmentation seems critical and we recommend inserting randomly chosen perturbative tokens in the input sentence as a part of the standard training regime (while monitoring that the BLEU score does not fall).",
"We note a downside of data augmentation is that, to some extent, it requires knowing the types of the pathological phenomenon one desires to train against.",
"Figure 7 : Schematic of the NMT decoder.",
"The input sequence, x 1:S , is encoded by a bidirectional encoder (not shown) into a sequence of encodings, z 1:S .",
"The attention network, f att , computes a weighted sum of these encodings (computed weights not shown), based on conditioning information from h and provides the weighted encoding to the 2-layer decoder, f dec , as indicated by the arrows.",
"The decoder proceeds forward in time producing the translation one step at a time.",
"As the decoder proceeds forward, it interacts with both the attention network and also receives as input the decoded output symbol from the previous time step."
] | [
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.19512194395065308,
0.05882352590560913,
0.17777776718139648,
0.20512819290161133,
0.23255813121795654,
0.2978723347187042,
0.25,
0,
0.12244897335767746,
0.14999999105930328,
0.15686273574829102,
0.08888888359069824,
0.17391303181648254,
0.22535210847854614,
0.04651162400841713,
0,
0,
0.1111111044883728,
0.09302324801683426,
0.13636362552642822,
0.21052631735801697,
0.12903225421905518,
0.09090908616781235,
0.2380952388048172,
0.10810810327529907,
0,
0.15555554628372192,
0.12903225421905518,
0.09756097197532654,
0.08695651590824127,
0.1428571343421936,
0.22857142984867096,
0.17777776718139648,
0.1111111044883728,
0.17142856121063232,
0.09999999403953552,
0.19512194395065308,
0.052631575614213943,
0.09999999403953552,
0.14999999105930328,
0.09302324801683426,
0.14814814925193787,
0.2222222238779068,
0.13333332538604736,
0.04878048226237297,
0.14035087823867798,
0.11428570747375488,
0.08888888359069824
] | SJxTk3vB3m | true | [
"We introduce and analyze the phenomenon of \"hallucinations\" in NMT, or spurious translations unrelated to source text, and propose methods to reduce its frequency."
] |
[
"For AI systems to garner widespread public acceptance, we must develop methods capable of explaining the decisions of black-box models such as neural networks.",
"In this work, we identify two issues of current explanatory methods.",
"First, we show that two prevalent perspectives on explanations—feature-additivity and feature-selection—lead to fundamentally different instance-wise explanations.",
"In the literature, explainers from different perspectives are currently being directly compared, despite their distinct explanation goals.",
"The second issue is that current post-hoc explainers have only been thoroughly validated on simple models, such as linear regression, and, when applied to real-world neural networks, explainers are commonly evaluated under the assumption that the learned models behave reasonably.",
"However, neural networks often rely on unreasonable correlations, even when producing correct decisions.",
"We introduce a verification framework for explanatory methods under the feature-selection perspective.",
"Our framework is based on a non-trivial neural network architecture trained on a real-world task, and for which we are able to provide guarantees on its inner workings.",
"We validate the efficacy of our evaluation by showing the failure modes of current explainers.",
"We aim for this framework to provide a publicly available,1 off-the-shelf evaluation when the feature-selection perspective on explanations is needed.",
"A large number of post-hoc explanatory methods have recently been developed with the goal of shedding light on highly accurate, yet black-box machine learning models (Ribeiro et al., 2016a; Lundberg & Lee, 2017; Arras et al., 2017; Shrikumar et al., 2017; Ribeiro et al., 2016b; Plumb et al., 2018; Chen et al., 2018) .",
"Among these methods, there are currently at least two widely used perspectives on explanations: feature-additivity (Ribeiro et al., 2016a; Lundberg & Lee, 2017; Shrikumar et al., 2017; Arras et al., 2017) and feature-selection (Chen et al., 2018; Ribeiro et al., 2018; Carter et al., 2018) , which we describe in detail in the sections below.",
"While both shed light on the overall behavior of a model, we show that, when it comes to explaining the prediction on a single input in isolation, i.e., instance-wise explanations, the two perspectives lead to fundamentally different explanations.",
"In practice, explanatory methods adhering to different perspectives are being directly compared.",
"For example, Chen et al. (2018) and Yoon et al. (2019) compare L2X, a feature-selection explainer, with LIME (Ribeiro et al., 2016a) and SHAP (Lundberg & Lee, 2017) , two feature-additivity explainers.",
"We draw attention to the fact that these comparisons may not be coherent, given the fundamentally different explanation targets, and we discuss the strengths and limitations of the two perspectives.",
"Secondly, while current explanatory methods are successful in pointing out catastrophic biases, such as relying on headers to discriminate between pieces of text about Christianity and atheism (Ribeiro et al., 2016a) , it is an open question to what extent they are reliable when the model that they aim to explain (which we call the target model) has a less dramatic bias.",
"This is a difficult task, precisely because the ground-truth decision-making process of neural networks is not known.",
"Consequently, when applied to complex neural networks trained on real-world datasets, a prevalent way to evaluate the explainers is to assume that the target models behave reasonably, i.e., that they did not rely on irrelevant correlations.",
"For example, in their morphosyntactic agreement paradigm, Pörner et al. (2018) assume that a model that predicts if a verb should be singular or plural given the tokens before the verb, must be doing so by focusing on a noun that the model had identified as the subject.",
"Such assumptions may be poor, since recent works show a series of surprising spurious correlations in human-annotated datasets, on which neural networks learn to heavily rely (Gururangan et al., 2018; Glockner et al., 2018; Carmona et al., 2018) .",
"Therefore, it is not reliable to penalize an explainer for pointing to tokens that just do not appear significant to us.",
"We address the above issue by proposing a framework capable of generating evaluation tests for the explanatory methods under the feature-selection perspective.",
"Our tests consist of pairs of (target model, dataset).",
"Given a pair, for each instance in the dataset, the specific architecture of our model allows us to identify a subset of tokens that have zero contribution to the model's prediction on the instance.",
"We further identify a subset of tokens clearly relevant to the prediction.",
"Hence, we test if explainers rank zero-contribution tokens higher than relevant tokens.",
"We instantiated our framework on three pairs of (target model, dataset) on the task of multi-aspect sentiment analysis.",
"Each pair corresponds to an aspect and the three models (of same architecture) have been trained independently.",
"We highlight that our test is not a sufficient test for concluding the power of explainers in full generality, since we do not know the whole ground-truth behaviour of the target models.",
"Indeed, we do not introduce an explanation generation framework but a framework for generating evaluation tests for which we provide certain guarantees on the behaviour of the target model.",
"Under these guarantees we are able to test the explainers for critical failures.",
"Our framework therefore generates necessary evaluation tests, and our metrics penalize explainers only when we are able to guarantee that they produced an error.",
"To our knowledge, we are the first to introduce an automatic and non-trivial evaluation test that does not rely on speculations on the behavior of the target model.",
"Finally, we evaluate L2X (Chen et al., 2018) , a feature-selection explainer, under our test.",
"Even though our test is specifically designed for feature-selection explanatory methods, since, in practice, the two types of explainers are being compared, and, since LIME (Ribeiro et al., 2016a) and SHAP (Lundberg & Lee, 2017) are two very popular explainers, we were interested in how the latter perform on our test, even though they adhere to the feature-additivity perspective.",
"Interestingly, we find that, most of the time, LIME and SHAP perform better than L2X.",
"We will detail in Section 5 the reasons why we believe this is the case.",
"We provide the error rates of these explanatory methods to raise awareness of their possible modes of failure under the feature-selection perspective of explanations.",
"For example, our findings show that, in certain cases, the explainers predict the most relevant token to be among the tokens with zero contribution.",
"We will release our test, which can be used off-the-shelf, and encourage the community to use it for testing future work on explanatory methods under the feature-selection perspective.",
"We also note that our methodology for creating this evaluation is generic and can be instantiated on other tasks or areas of research.",
"In this work, we instantiate our framework on the RCNN model trained on the BeerAdvocate corpus, 3 on which the RCNN was initially evaluated (Lei et al., 2016) .",
"BeerAdvocate consists of a total of ≈ .100K human-generated multi-aspect beer reviews, where the three considered aspects are appearance, aroma, and palate.",
"The reviews are accompanied with fractional ratings originally between 0 and 5 for each aspect independently.",
"The RCNN is a regression model with the goal to predict the rating, rescaled between 0 and 1 for simplicity.",
"Three separate RCNNs are trained, one for each aspect independently, with the same default settings.",
"4 With the above procedure, we gathered three datasets D a , one for each aspect a.",
"For each dataset, we know that for each instance x ∈ D a , the set of non-selected tokens N x has zero contribution to the prediction of the model.",
"For obtaining the clearly relevant tokens, we chose a threshold of τ = 0.1, since the scores are in [0, 1], and the ground-truth ratings correspond to {0, 0.1, 0.2, . . . , 1}.",
"Therefore, a change in prediction of 0.1 is to be considered clearly significant for this task.",
"We provide several statistics of our datasets in Appendix A. For example, we provide the average lengths of the reviews, of the selected tokens per review, of the clearly relevant tokens among the selected, and of the non-selected tokens.",
"We note that we usually obtained 1 or 2 clearly relevant tokens per datapoints, showing that our threshold of 0.1 is likely very strict.",
"However, we prefer to be more conservative in order to ensure high guarantees on our evaluation test.",
"We also provide the percentages of datapoints eliminated in order to ensure the no-handshake condition (Equation 7).",
"Evaluating explainers.",
"We test three popular explainers: LIME (Ribeiro et al., 2016a), SHAP (Lundberg & Lee, 2017) , and L2X (Chen et al., 2018) .",
"We used the code of the explainers as provided in the original repositories, 5 with their default settings for text explanations, with the exception that, for L2X, we set the dimension of the word embeddings to 200 (the same as in the RCNN), and we also allowed training for a maximum of 30 epochs instead of 5.",
"As mentioned in Section 3, LIME and SHAP adhere to the feature-additivity perspective, hence our evaluation is not directly targeting these explainers.",
"However, we see in Table 1 that, in practice, LIME and SHAP outperformed L2X on the majority of the metrics, even though L2X is a featureselection explainer.",
"We hypothesize that a major limitation of L2X is the requirement to know the number of important features per instance.",
"Indeed, L2X learns a distribution over the set of features by maximizing the mutual information between subsets of K features and the response variable, where K is assumed to be known.",
"In practice, one usually does not know how many features per instance a model relied on.",
"To test L2X under real-world circumstances, we used as K the average number of tokens highlighted by human annotators on the subset manually annotated by McAuley et al. (2012) .",
"We obtained an average K of 23, 18, and 13 for the three aspects, respectively.",
"In Table 1 , we see that, on metric (A), all explainers are prone to stating that the most relevant feature is a token with zero contribution, as much as 14.79% of the time for LIME and 12.95% of the time for L2X in the aroma aspect.",
"We consider this the most dramatic form of failure.",
"Metric (B) shows that both explainers can rank at least one zero-contribution token higher than a clearly relevant feature, i.e., there is at least one mistake in the predicted ranking.",
"Finally, metric (C) shows that, in average, SHAP only places one zero-contribution token ahead of a clearly relevant token for the first two aspects and around 9 tokens for the third aspect, while L2X places around 3-4 zero-contribution tokens ahead of a clearly relevant one for all three aspects.",
"Figure 4 : Explainers' rankings (with top 5 features on the right-hand side) on an instance from the palate aspect in our evaluation dataset.",
"Qualitative Analysis.",
"In Figure 6 , we present an example from our dataset of the palate aspect.",
"More examples in Appendix C. The heatmap corresponds to the ranking determined by each explainer, and the intensity of the color decreases linearly with the ranking of the tokens.",
"6 We only show in the heatmap the first K = 10 ranked tokens, for visibility reasons.",
"Tokens in S x are in bold, and the clearly relevant tokens from SR x are additionally underlined.",
"The first selected by the explainer is marked wth a rectangular.",
"Additionally the 5 ranks tokens by each explainer are on the right-hand side.",
"Firstly, we notice that both explainers are prone to attributing importance to nonselected tokens, with LIME and SHAP even ranking the tokens \"mouthfeel\" and \"lacing\" belonging to N x as first two (most important).",
"Further, \"gorgeous\", the only relevant word used by the model, did not even make it in top 13 tokens for L2X.",
"Instead, L2X gives \"taste\", \"great\", \"mouthfeel\" and \"lacing\" as most important tokens.",
"We note that if the explainer was evaluated by humans assuming that the RCNN behaves reasonably, then this choice could have well been considered correct.",
"In this work, we first shed light on an important distinction between two widely used perspectives of explanations.",
"Secondly, we introduced an off-the-shelf evaluation test for post-hoc explanatory methods under the feature-selection perspective.",
"To our knowledge, this is the first automatic verification framework offering guarantees on the behaviour of a non-trivial real-world neural network.",
"We presented the error rates on different metrics for three popular explanatory methods to raise awareness of the types of failures that these explainers can produce, such as incorrectly predicting even the most relevant token.",
"While instantiated on a natural language processing task, our methodology is generic and can be adapted to other tasks and other areas.",
"For example, in computer vision, one could train a neural network that first makes a hard selection of super-pixels to retain, and subsequently makes a prediction based on the image where the non-selected super-pixels have been blurred.",
"The same procedure of checking for zero contribution of non-selected super-pixels would then apply.",
"We also point out that the core algorithm in the majority of the current post-hoc explainers are also domain-agnostic.",
"Therefore, we expect our evaluation to provide a representative view of the fundamental limitations of the explainers."
] | [
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1111111044883728,
0.1666666567325592,
0.06896550953388214,
0,
0.1599999964237213,
0.1538461446762085,
0.4000000059604645,
0.42105263471603394,
0.07692307233810425,
0.3030303120613098,
0.14814814925193787,
0.0363636314868927,
0.0833333283662796,
0.1599999964237213,
0.04999999701976776,
0,
0.11428571492433548,
0.13793103396892548,
0.17391304671764374,
0.07692307233810425,
0.12765957415103912,
0.06451612710952759,
0.3636363446712494,
0,
0.14999999105930328,
0.07999999821186066,
0,
0.13793103396892548,
0,
0.09999999403953552,
0.2631579041481018,
0.07692307233810425,
0.10810810327529907,
0.10526315122842789,
0.0714285671710968,
0.09090908616781235,
0,
0,
0.12121211737394333,
0,
0.20000000298023224,
0.1666666567325592,
0.10810810327529907,
0.05882352590560913,
0.06896550953388214,
0.1249999925494194,
0.0714285671710968,
0.13793103396892548,
0.10526315122842789,
0.045454543083906174,
0.13333332538604736,
0,
0,
0.13793103396892548,
0,
0,
0.07407406717538834,
0.05714285373687744,
0.10810810327529907,
0.06451612710952759,
0.05128204822540283,
0.13793103396892548,
0.09999999403953552,
0.0714285671710968,
0.1090909019112587,
0,
0.0476190447807312,
0.08510638028383255,
0.11428570747375488,
0,
0,
0.06896550953388214,
0,
0.0833333283662796,
0.07999999821186066,
0,
0.060606054961681366,
0,
0,
0.06451612710952759,
0.3571428656578064,
0.3636363446712494,
0.17777776718139648,
0.12121211737394333,
0.2222222238779068,
0.07692307233810425,
0.06896550953388214,
0.1428571343421936
] | S1e-0kBYPB | true | [
"An evaluation framework based on a real-world neural network for post-hoc explanatory methods"
] |
[
"Planning in high-dimensional space remains a challenging problem, even with recent advances in algorithms and computational power.",
"We are inspired by efference copy and sensory reafference theory from neuroscience. ",
"Our aim is to allow agents to form mental models of their environments for planning. ",
"The cerebellum is emulated with a two-stream, fully connected, predictor network.",
"The network receives as inputs the efference as well as the features of the current state.",
"Building on insights gained from knowledge distillation methods, we choose as our features the outputs of a pre-trained network, yielding a compressed representation of the current state. ",
"The representation is chosen such that it allows for fast search using classical graph search algorithms.",
"We display the effectiveness of our approach on a viewpoint-matching task using a modified best-first search algorithm.",
"As we manipulate an object in our hands, we can accurately predict how it looks after some action is performed.",
"Through our visual sensory system, we receive high-dimensional information about the object.",
"However, we do not hallucinate its full-dimensional representation as we estimate how it would look and feel after we act.",
"But we feel that we understood what happened if there is an agreement between the experience of the event and our predicted experience.",
"There has been much recent work on methods that take advantage of compact representations of states for search and exploration.",
"One of the advantages of this approach is that finding a good representation allows for faster and more efficient planning.",
"This holds in particular when the latent space is of a much lower dimensionality than the one where the states originally live in.",
"Our central nervous system (CNS) sends a command (efferent) to our motor system, as well as sending a copy of the efferent to our cerebellum, which is our key organ for predicting the sensory outcome of actions when we initiate a movement and is responsible for fine motor control.",
"The cerebellum then compares the result of the action (sensory reafference) with the intended consequences.",
"If they differ, then the cerebellum makes changes to its internal structure such that it does a better job next time -i.e., in no uncertain terms, it learns.",
"The cerebellum receives 40 times more information than it outputs, by a count of the number of axons.",
"This gives us a sense of the scale of the compression ratio between the high dimensional input and low dimensional output.",
"Thus, we constrain our attention to planning in a low-dimensional space, without necessarily reconstructing the high-dimensional one.",
"We apply this insight for reducing the complexity of tasks such that planning in high dimensionality space can be done by classical AI methods in low dimensionality space .",
"Our contributions are thus twofold: provide a link between efference theory and classical planning with a simple model and introduce a search method for applying the model to reduced state-space search.",
"We validate our approach experimentally on visual data associated with categorical actions that connect the images, for example taking an object and rotating it.",
"We create a simple manipulation task using the NORB dataset (LeCun et al., 2004) , where the agent is presented with a starting viewpoint of an object and the task is to produce a sequence of actions such that the agent ends up with the target viewpoint of the object.",
"As the NORB data set can be embedded on a cylinder (Schüler et al., 2018) (Hadsell et al., 2006) or a sphere (Wang et al., 2018) , we can visualize the actions as traversing the embedded manifold.",
"Pairing the EfferenceNet with a good but generic feature map allows us to perform an accurate search in the latent space of manipulating unseen objects.",
"This remarkably simple method, inspired by the neurology of the cerebellum, reveals a promising line of future work.",
"We validate our method by on a viewpoint-matching task derived from the NORB dataset.",
"In the case of deterministic environments, EfferenceNets calculate features of the current state and action, which in turn define the next state.",
"This opens up a future direction of research by combining EfferenceNets with successor features (Barreto et al., 2017) .",
"Furthermore, the study of effective feature maps strikes us as an important factor in this line of work to consider.",
"We utilize here Laplacian Eigenmaps and pre-trained deep networks.",
"It is probably possible to improve the performance of the system by end-to-end training but we believe that it is more promising to work on generic multi-purpose representations.",
"Possible further methods include Slow Feature Analysis (SFA) (Wiskott & Sejnowski, 2002) (Schüler et al., 2018) .",
"SFA has been previously shown (Sprekeler, 2011) to solve a special case of LEMs while it allows for natural out-of-sample embeddings."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0
] | [
0.13793103396892548,
0.07692307233810425,
0.0714285671710968,
0.0833333283662796,
0,
0.10526315122842789,
0.1428571343421936,
0.27586206793785095,
0,
0,
0,
0,
0.1875,
0.1249999925494194,
0.1818181723356247,
0.07843136787414551,
0,
0.0476190447807312,
0.06666666269302368,
0.06666666269302368,
0.06666666269302368,
0.15789473056793213,
0.20512820780277252,
0.1621621549129486,
0.0833333283662796,
0.09999999403953552,
0.21621620655059814,
0.06896550953388214,
0.29629629850387573,
0,
0.0624999962747097,
0,
0.1818181723356247,
0.052631575614213943,
0,
0.11764705181121826
] | rkxloaEYwB | true | [
"We present a neuroscience-inspired method based on neural networks for latent space search"
] |
[
"Experimental evidence indicates that simple models outperform complex deep networks on many unsupervised similarity tasks.",
"Introducing the concept of an optimal representation space, we provide a simple theoretical resolution to this apparent paradox.",
"In addition, we present a straightforward procedure that, without any retraining or architectural modifications, allows deep recurrent models to perform equally well (and sometimes better) when compared to shallow models.",
"To validate our analysis, we conduct a set of consistent empirical evaluations and introduce several new sentence embedding models in the process.",
"Even though this work is presented within the context of natural language processing, the insights are readily applicable to other domains that rely on distributed representations for transfer tasks.",
"Distributed representations have played a pivotal role in the current success of machine learning.",
"In contrast with the symbolic representations of classical AI, distributed representation spaces can encode rich notions of semantic similarity in their distance measures, allowing systems to generalise to novel inputs.",
"Methods to learn these representations have gained significant traction, in particular for modelling words BID30 .",
"They have since been successfully applied to many other domains, including images BID15 BID39 and graphs BID25 BID17 BID33 .Using",
"unlabelled data to learn effective representations is at the forefront of modern machine learning research. The Natural",
"Language Processing (NLP) community in particular, has invested significant efforts in the construction BID30 BID37 BID10 BID21 , evaluation and theoretical analysis BID28 of distributed representations for words.Recently, attention has shifted towards the unsupervised learning of representations for larger pieces of text, such as phrases BID50 BID51 , sentences BID22 BID43 BID19 BID7 , and entire paragraphs BID27 . Some of this",
"work simply sums or averages constituent word vectors to obtain a sentence representation BID32 BID31 BID48 BID7 , which is surprisingly effective but naturally cannot leverage any contextual information.Another line of research has relied on a sentence-level distributional hypothesis BID38 , originally applied to words BID18 , which is an assumption that sentences which occur in similar contexts have a similar meaning. Such models",
"often use an encoder-decoder architecture BID12 to predict the adjacent sentences of any given sentence. Examples of",
"such models include SkipThought , which uses Recurrent Neural Networks (RNNs) for its encoder and decoders, and FastSent BID19 , which replaces the RNNs with simpler bagof-words (BOW) versions.Models trained in an unsupervised manner on large text corpora are usually applied to supervised transfer tasks, where the representation for a sentence forms the input to a supervised classification problem, or to unsupervised similarity tasks, where the similarity (typically taken to be the cosine similarity) of two inputs is compared with corresponding human judgements of semantic similarity in order to inform some downstream process, such as information retrieval.Interestingly, some researchers have observed that deep complex models like SkipThought tend to do well on supervised transfer tasks but relatively poorly on unsupervised similarity tasks, whereas for shallow log-linear models like FastSent the opposite is true BID19 BID13 . It has been",
"highlighted that this should be addressed by analysing the geometry of the representation space BID6 BID42 BID19 , however, to the best of our knowledge it has not been systematically attempted 1 .In this work",
"we attempt to address the observed performance gap on unsupervised similarity tasks between representations produced by simple models and those produced by deep complex models. Our main contributions",
"are as follows:• We introduce the concept of an optimal representation space, in which the space has a similarity measure that is optimal with respect to the objective function.• We show that models with",
"log-linear decoders are usually evaluated in their optimal space, while recurrent models are not. This effectively explains",
"the performance gap on unsupervised similarity tasks.• We show that, when evaluated",
"in their optimal space, recurrent models close that gap. We also provide a procedure for",
"extracting this optimal space using the decoder hidden states.• We validate our findings with",
"a series of consistent empirical evaluations utilising a single publicly available codebase.",
"In this work, we introduced the concept of an optimal representation space, where semantic similarity directly corresponds to distance in that space, in order to shed light on the performance gap between simple and complex architectures on downstream tasks.",
"In particular, we studied the space of initial hidden states to BOW and RNN decoders (typically the outputs of some encoder) and how that space relates to the training objective of the model.For BOW decoders, the optimal representation space is precisely the initial hidden state of the decoder equipped with dot product, whereas for RNN decoders it is not.",
"Noting that it is precisely these spaces that have been used for BOW and RNN decoders has led us to a simple explanation for the observed performance gap between these architectures, namely that the former has been evaluated in its optimal representation space, whereas the latter has not.Furthermore, we showed that any neural network that outputs a probability distribution has an optimal representation space.",
"Since a RNN does produce a probability distribution, we analysed its objective function which motivated a procedure of unrolling the decoder.",
"This simple method allowed us to extract representations that are provably optimal under dot product, without needing to retrain the model.We then validated our claims by comparing the empirical performance of different architectures across transfer tasks.",
"In general, we observed that unrolling even a single state of the decoder always outperforms the raw encoder output with RNN decoder, and almost always outperforms the raw encoder output with BOW decoder for some number of unrolls.",
"This indicates different vector embeddings can be used for different downstream tasks depending on what type of representation space is most suitable, potentially yielding high performance on a variety of tasks from a single trained model.Although our analysis of decoder architectures was restricted to BOW and RNN, others such as convolutional BID49 and graph BID25 decoders are more appropriate for many tasks.",
"Similarly, although we focus on Euclidean vector spaces, hyperbolic vector spaces BID34 , complex-valued vector spaces BID44 and spinor spaces BID23 all have beneficial modelling properties.",
"In each case, although an optimal representation space should exist, it is not clear if the intuitive space and similarity measure is the optimal one.",
"However, there should at least exist a mapping from the intuitive choice of space to the optimal space using a transformation provided by the network itself, as we showed with the RNN decoder.",
"Evaluating in this space should further improve performance of these models.",
"We leave this for future work.Ultimately, a good representation is one that makes a subsequent learning task easier.",
"For unsupervised similarity tasks, this essentially reduces to how well the model separates objects in the chosen representation space, and how appropriately the similarity measure compares objects in that space.",
"Our findings lead us to the following practical advice:",
"i) Use a simple model architecture where the optimal representation space is clear by construction, or",
"ii) use an arbitrarily complex model architecture and analyse the objective function to reveal, for a chosen vector representation, an appropriate similarity metric.We hope that future work will utilise a careful understanding of what similarity means and how it is linked to the objective function, and that our analysis can be applied to help boost the performance of other complex models.",
"1 2 3 4 5 6 7 8 9 10 1 2 3 4 5 6 7 8 9 10 1 2 3 4 5 6 7 8 9 10 1 2 3 4 5 6 7 8 9 10Number of unroll steps Spearman correlation coefficient Figure 3 : Performance on the STS tasks depending on the number of unrolled hidden states of the decoders, using cosine similarity as the similarity measure.",
"The top row presents results for the RNN encoder and the bottom row for the BOW encoder.",
"Red: Raw encoder output with BOW decoder.",
"Green: Raw encoder output with RNN decoder.",
"Blue: Unrolled RNN decoder output.",
"For both RNN and BOW encoders, unrolling the decoder strictly outperforms *-RNN for almost every number of unroll steps, and perform nearly as well as or better than *-BOW.A COMPARISON WITH SKIPTHOUGHT Table 3 : Performance of the SkipThought model, with and without layer normalisation BID8 , compared against the RNN-RNN model used in our experimental setup.",
"On each task, the highest performing model is highlighted in bold.",
"For SICK-R, we report the Pearson correlation, and for STS14 we report the Pearson/Spearman correlation with human-provided scores.",
"For all other tasks, reported values indicate test accuracy.",
"† indicates results taken from BID13 .",
"‡ indicates our results from running SentEval on the model downloaded from BID8 's publicly available codebase (https://github.com/ryankiros/layer-norm).",
"We attribute the discrepancies in performance to differences in experimental setup or implementation.",
"However, we expect our unrolling procedure to also boost SkipThought's performance on unsupervised similarity tasks, as we show for RNN-RNN in our fair singlecodebase comparisons in the main text.",
"As discussed in Section 3, the objective function is maximising the dot product between the BOW decoder/unrolled RNN-decoder and the context.",
"However, as other researchers in the field and the STS tasks specifically use cosine similarity by default, we present the results using cosine similarity in TAB4 and the results for different numbers of unrolled hidden decoder states in Figure 3 .Although",
"the results in TAB4 are consistent with the dot product results in Table 1 , the overall performance across STS tasks is noticeably lower when dot product is used instead of cosine similarity to determine semantic similarity. Switching",
"from using cosine similarity to dot product transitions from considering only angle between two vectors, to also considering their length. Empirical",
"studies have indicated that the length of a word vector corresponds to how sure of its context the model that produces it is. This is related",
"to how often the model has seen the word, and how many different contexts it appears in (for example, the word vectors for \"January\" and \"February\" have similar norms, however, the word vector for \"May\" is noticeably smaller) BID41 . Using the raw encoder",
"output (RNN-RNN) achieves the lowest performance across all tasks. Unrolling the RNN decoders",
"dramatically improves the performance across all tasks compared to using the raw encoder RNN output, validating the theoretical justification presented in Section 3.3. BOW encoder: We do not observe",
"the same uplift in performance from unrolling the RNN encoder compared to the encoder output. This is consistent with our findings",
"when using dot product (see Table 1 ). A corollary is that longer sentences",
"on average have shorter norms, since they contain more words which, in turn, have appeared in more contexts BID0 . During training, the corpus can induce",
"differences in norms in a way that strongly penalises sentences potentially containing multiple contexts, and consequently will disfavour these sentences as similar to other sentences under the dot product. This induces a noise that potentially",
"renders the dot product a less useful metric to choose for STS tasks than cosine similarity, which is unaffected by this issue.using dot product as the similarity measure. On each task, the highest performing",
"setup for each encoder type is highlighted in bold and the highest performing setup overall is underlined."
] | [
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.21739129722118378,
0.40816324949264526,
0.1355932205915451,
0.18867923319339752,
0.23728813230991364,
0.13333332538604736,
0.16949151456356049,
0.043478257954120636,
0.039215680211782455,
0.0833333283662796,
0.17073170840740204,
0.16091953217983246,
0.1702127605676651,
0.23076923191547394,
0.12903225421905518,
0.25,
0.29999998211860657,
0.08510638028383255,
0.22727271914482117,
0.260869562625885,
0.08695651590824127,
0.0952380895614624,
0.3692307770252228,
0.25,
0.2469135820865631,
0.1599999964237213,
0.21212120354175568,
0.23728813230991364,
0.21176470816135406,
0.11320754140615463,
0.23076923191547394,
0.16949151456356049,
0.0476190447807312,
0.16326530277729034,
0.3272727131843567,
0.04999999701976776,
0.21276594698429108,
0.25,
0.1515151411294937,
0.1395348757505417,
0,
0,
0,
0.21686746180057526,
0.0952380895614624,
0.17391303181648254,
0,
0,
0.12244897335767746,
0.09302325546741486,
0.21052631735801697,
0.08163265138864517,
0.2222222238779068,
0.13114753365516663,
0.04081632196903229,
0.18867923319339752,
0.12121211737394333,
0.09302325546741486,
0.10344827175140381,
0.04081632196903229,
0.08888888359069824,
0.11320754140615463,
0.1666666567325592,
0.16129031777381897,
0.1304347813129425
] | Byd-EfWCb | true | [
"By introducing the notion of an optimal representation space, we provide a theoretical argument and experimental validation that an unsupervised model for sentences can perform well on both supervised similarity and unsupervised transfer tasks."
] |
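The preceding record argues that the dot product, unlike cosine similarity, is sensitive to the norms of sentence vectors, which can make it a noisier measure for STS-style evaluation. The short numpy sketch below only illustrates that distinction on synthetic vectors; it is not code from the paper, and the vectors and dimensions are arbitrary.

```python
# Illustrative sketch (synthetic vectors, not the paper's code): cosine
# similarity ignores vector norms, while the dot product does not.
import numpy as np

def cosine(u, v):
    # Depends only on the angle between u and v.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(0)
base = rng.normal(size=128)
a = base + 0.05 * rng.normal(size=128)          # one sentence vector
b = 0.3 * (base + 0.05 * rng.normal(size=128))  # same direction, smaller norm

print(cosine(a, b))        # close to 1: the norm difference is ignored
print(float(a @ b))        # dot product shrinks with the norm of b
print(float(a @ a))        # compare against the self dot product
```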
[
"Cloze test is widely adopted in language exams to evaluate students' language proficiency.",
"In this paper, we propose the first large-scale human-designed cloze test dataset CLOTH in which the questions were used in middle-school and high-school language exams.",
"With the missing blanks carefully created by teachers and candidate choices purposely designed to be confusing, CLOTH requires a deeper language understanding and a wider attention span than previous automatically generated cloze datasets.",
"We show humans outperform dedicated designed baseline models by a significant margin, even when the model is trained on sufficiently large external data.",
"We investigate the source of the performance gap, trace model deficiencies to some distinct properties of CLOTH, and identify the limited ability of comprehending a long-term context to be the key bottleneck.",
"In addition, we find that human-designed data leads to a larger gap between the model's performance and human performance when compared to automatically generated data.",
"Being a classic language exercise, the cloze test BID26 is an accurate assessment of language proficiency BID7 BID11 BID27 and has been widely employed in language examinations.",
"Under standard setting, a cloze test requires examinees to fill in the missing word (or sentence) that best fits the surrounding context.",
"To facilitate natural language understanding, automatically generated cloze datasets were introduced to measure the ability of machines in reading comprehension BID8 BID9 BID17 .",
"In these datasets, each cloze question typically consists of a context paragraph and a question sentence.",
"By randomly replacing a particular word in the question sentence with a blank symbol, a single test case is created.",
"For instance, the CNN/Daily Mail BID8 take news articles as the context and the summary bullet points as the question sentence.",
"Only named entities are considered when creating the blanks.",
"Similarly, in Children's Books test (CBT) BID9 , the cloze question is obtained by removing a word in the last sentence of every consecutive 21 sentences, with the first 20 sentences being the context.",
"Different from the CNN/Daily Mail datasets, CBT also provides each question with a candidate answer set, consisting of randomly sampled words with the same part-of-speech tag from the context as that of the ground truth.Thanks to the automatic generation process, these datasets can be very large in size, leading to significant research progress.",
"However, compared to how humans would create cloze questions, the automatic generation process bears some inevitable issues.",
"Firstly, the blanks are chosen uniformly without considering which aspect of the language phenomenon the question will test.",
"Hence, quite a portion of automatically generated questions can be purposeless or even trivial to answer.",
"Another issue involves the ambiguity of the answer.",
"Given a context and a blanked sentence, there can be multiple words that fit almost equally well into the blank.",
"A possible solution is to include a candidate option set, as done by CBT, to get rid of the ambiguity.",
"However, automatically generating the candidate option set can be problematic since it cannot guarantee the ambiguity is removed.",
"More importantly, automatically generated candidates can be totally irrelevant or simply grammatically unsuitable for the blank, resulting in again trivial questions.",
"Probably due to these unsatisfactory issues, it has been shown neural models have achieved comparable performance with human within very short time BID3 BID6 BID23 .",
"While there has been work trying to incorporate human design into cloze question generation BID30 , the MSR Sentence Completion Challenge created by this effort is quite small in size, limiting the possibility of developing powerful neural models on it.Motivated by the aforementioned drawbacks, we propose CLOTH, a large-scale cloze test dataset collected from English exams.",
"Questions in the dataset are designed by middle-school and highschool teachers to prepare Chinese students for entrance exams.",
"To design a cloze test, teachers firstly determine the words that can test students' knowledge of vocabulary, reasoning or grammar; then replace those words with blanks and provide three candidate options for each blank.",
"If a question does not specifically test grammar usage, all of the candidate options would complete the sentence with correct grammar, leading to highly confusing questions.",
"As a result, human-designed questions are usually harder and are a better assessment of language proficiency.",
"Note that, different from the reading comprehension task, a general cloze test does not focus on testing reasoning abilities but evaluates several aspects of language proficiency including vocabulary, reasoning and grammar.To verify if human-designed cloze questions are difficult for current models, we train dedicated models as well as the state-of-the-art language model and evaluate their performance on this dataset.",
"We find that the state-of-the-art model lags behind human performance even if the model is trained on a large external corpus.",
"We analyze where the model fails compared to human.",
"After conducting error analysis, we assume the performance gap results from the model's inability to use long-term context.",
"To verify this assumption, we evaluate humans' performance when they are only allowed to see one sentence as the context.",
"Our assumption is confirmed by the matched performances of the model and human when given only one sentence.",
"In addition, we demonstrate that human-designed data is more informative and more difficult than automatically generated data.",
"Specifically, when the same amount of training data is given, human-designed training data leads to better performance.",
"Additionally, it is much easier for the same model to perform well on automatically generated data.",
"In this paper, we propose a large-scale cloze test dataset CLOTH that is designed by teachers.",
"With the missing blanks and candidate options carefully created by teachers to test different aspects of language phenomenon, CLOTH requires a deep language understanding and better captures the complexity of human language.",
"We find that human outperforms state-of-the-art models by a significant margin, even if the model is trained on a large corpus.",
"After detailed analysis, we find that the performance gap is due to model's inability to understanding a long context.",
"We also show that, compared to automatically-generated questions, human-designed questions are more difficult and leads to a larger margin between human performance and the model's performance."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0
] | [
0.3478260934352875,
0.23529411852359772,
0.2857142686843872,
0.11764705181121826,
0.054054051637649536,
0.060606054961681366,
0.2222222238779068,
0.1875,
0.1764705777168274,
0.07999999821186066,
0.06896550953388214,
0,
0,
0.1463414579629898,
0.035087715834379196,
0.1428571343421936,
0.14814814925193787,
0.07407406717538834,
0,
0,
0.19999998807907104,
0,
0,
0.0555555522441864,
0.15625,
0.3448275923728943,
0.13636362552642822,
0.1111111044883728,
0.1599999964237213,
0.15625,
0,
0.09999999403953552,
0.0714285671710968,
0.06451612710952759,
0.0714285671710968,
0,
0.07692307233810425,
0.07407406717538834,
0.4444444477558136,
0.2631579041481018,
0.06451612710952759,
0.06896550953388214,
0.05882352590560913
] | rJJzTyWCZ | true | [
"A cloze test dataset designed by teachers to assess language proficiency"
] |
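The preceding record contrasts teacher-designed cloze questions with automatically generated ones in the style of CBT, where a word is blanked and distractors sharing its part-of-speech tag are sampled from the context. The sketch below only illustrates that automatic procedure under strong simplifying assumptions: whitespace tokenisation and a small hand-written POS map stand in for a real tagger, and all sentences and names are hypothetical.

```python
# Illustrative sketch of CBT-style automatic cloze generation (toy data and a
# hand-made POS map; not the CLOTH or CBT pipeline).
import random

context = "the cat sat on the mat . the dog chased the cat ."
question = "the dog saw the mat"
pos = {"cat": "NOUN", "mat": "NOUN", "dog": "NOUN", "sat": "VERB",
       "chased": "VERB", "saw": "VERB", "the": "DET", "on": "ADP", ".": "PUNCT"}

random.seed(0)
q_tokens = question.split()
blank_idx = random.choice([i for i, w in enumerate(q_tokens)
                           if pos[w] in ("NOUN", "VERB")])
answer = q_tokens[blank_idx]
q_tokens[blank_idx] = "_____"

# Candidate options: the answer plus context words sharing its POS tag.
distractors = sorted({w for w in context.split()
                      if pos[w] == pos[answer] and w != answer})
options = [answer] + random.sample(distractors, k=min(3, len(distractors)))
random.shuffle(options)

print(" ".join(q_tokens))
print("options:", options, "| answer:", answer)
```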
[
"Recent work suggests goal-driven training of neural networks can be used to model neural activity in the brain.",
"While response properties of neurons in artificial neural networks bear similarities to those in the brain, the network architectures are often constrained to be different.",
"Here we ask if a neural network can recover both neural representations and, if the architecture is unconstrained and optimized, also the anatomical properties of neural circuits.",
"We demonstrate this in a system where the connectivity and the functional organization have been characterized, namely, the head direction circuit of the rodent and fruit fly.",
"We trained recurrent neural networks (RNNs) to estimate head direction through integration of angular velocity.",
"We found that the two distinct classes of neurons observed in the head direction system, the Ring neurons and the Shifter neurons, emerged naturally in artificial neural networks as a result of training.",
"Furthermore, connectivity analysis and in-silico neurophysiology revealed structural and mechanistic similarities between artificial networks and the head direction system.",
"Overall, our results show that optimization of RNNs in a goal-driven task can recapitulate the structure and function of biological circuits, suggesting that artificial neural networks can be used to study the brain at the level of both neural activity and anatomical organization.",
"Artificial neural networks have been increasingly used to study biological neural circuits.",
"In particular, recent work in vision demonstrated that convolutional neural networks (CNNs) trained to perform visual object classification provide state-of-the-art models that match neural responses along various stages of visual processing Khaligh-Razavi & Kriegeskorte, 2014; Yamins & DiCarlo, 2016; Güçlü & van Gerven, 2015; Kriegeskorte, 2015) .",
"Recurrent neural networks (RNNs) trained on cognitive tasks have also been used to account for neural response characteristics in various domains (Mante et al., 2013; Sussillo et al., 2015; Song et al., 2016; Cueva & Wei, 2018; Banino et al., 2018; Remington et al., 2018; Wang et al., 2018; Orhan & Ma, 2019; Yang et al., 2019) .",
"While these results provide important insights on how information is processed in neural circuits, it is unclear whether artificial neural networks have converged upon similar architectures as the brain to perform either visual or cognitive tasks.",
"Answering this question requires understanding the functional, structural, and mechanistic properties of artificial neural networks and of relevant neural circuits.",
"We address these challenges using the brain's internal compass -the head direction system, a system that has accumulated substantial amounts of functional and structural data over the past few decades in rodents and fruit flies (Taube et al., 1990a; Turner-Evans et al., 2017; Green et al., 2017; Seelig & Jayaraman, 2015; Stone et al., 2017; Lin et al., 2013; Finkelstein et al., 2015; Wolff et al., 2015; Green & Maimon, 2018) .",
"We trained RNNs to perform a simple angular velocity (AV) integration task (Etienne & Jeffery, 2004) and asked whether the anatomical and functional features that have emerged as a result of stochastic gradient descent bear similarities to biological networks sculpted by long evolutionary time.",
"By leveraging existing knowledge of the biological head direction (HD) systems, we demonstrate that RNNs exhibit striking similarities in both structure and function.",
"Our results suggest that goal-driven training of artificial neural networks provide a framework to study neural systems at the level of both neural activity and anatomical organization.",
"(2017)).",
"e) The brain structures in the fly central complex that are crucial for maintaining and updating heading direction, including the protocerebral bridge (PB) and the ellipsoid body (EB).",
"f) The RNN model.",
"All connections within the RNN are randomly initialized.",
"g) After training, the output of the RNN accurately tracks the current head direction.",
"Previous work in the sensory systems have mainly focused on obtaining an optimal representation (Barlow, 1961; Laughlin, 1981; Linsker, 1988; Olshausen & Field, 1996; Simoncelli & Olshausen, 2001; Khaligh-Razavi & Kriegeskorte, 2014) with feedforward models.",
"Several recent studies have probed the importance of recurrent connections in understanding neural computation by training RNNs to perform tasks (e.g., Mante et al. (2013); Sussillo et al. (2015) ; Cueva & Wei (2018)), but the relation of these trained networks to the anatomy and function of brain circuits are not mapped.",
"Using the head direction system, we demonstrate that goal-driven optimization of recurrent neural networks can be used to understand the functional, structural and mechanistic properties of neural circuits.",
"While we have mainly used perturbation analysis to reveal the dynamics of the trained RNN, other methods could also be applied to analyze the network.",
"For example, in Appendix Fig. 10 , using fixed point analysis (Sussillo & Barak, 2013; Maheswaranathan et al., 2019) , we found evidence consistent with attractor dynamics.",
"Due to the limited amount of experimental data available, comparisons regarding tuning properties and connectivity are largely qualitative.",
"In the future, studies of the relevant brain areas using Neuropixel probes (Jun et al., 2017) and calcium imaging (Denk et al., 1990) will provide a more in-depth characterization of the properties of HD circuits, and will facilitate a more quantitative comparison between model and experiment.",
"In the current work, we did not impose any additional structural constraint on the RNNs during training.",
"We have chosen to do so in order to see what structural properties would emerge as a consequence of optimizing the network to solve the task.",
"It is interesting to consider how additional structural constraints affect the representation and computation in the trained RNNs.",
"One possibility would to be to have the input or output units only connect to a subset of the RNN units.",
"Another possibility would be to freeze a subset of connections during training.",
"Future work should systematically explore these issues.",
"Recent work suggests it is possible to obtain tuning properties in RNNs with random connections (Sederberg & Nemenman, 2019) .",
"We found that training was necessary for the joint HD*AV tuning (see Appendix Fig. 9 ) to emerge.",
"While Sederberg & Nemenman (2019) consider a simple binary classification task, our integration task is computationally more complicated.",
"Stable HD tuning requires the system to keep track of HD by accurate integration of AV, and to stably store these values over time.",
"This computation might be difficult for a random network to perform (Cueva et al., 2019) .",
"Our approach contrasts with previous network models for the HD system, which are based on hand-crafted connectivity (Zhang, 1996; Skaggs et al., 1995; Xie et al., 2002; Green et al., 2017; Kim et al., 2017; Knierim & Zhang, 2012; Song & Wang, 2005; Kakaria & de Bivort, 2017; Stone et al., 2017) .",
"Our modeling approach optimizes for task performance through stochastic gradient descent.",
"We found that different input statistics lead to different heading representations in an RNN, suggesting that the optimal architecture of a neural network varies depending on the task demandan insight that would be difficult to obtain using the traditional approach of hand-crafting network solutions.",
"Although we have focused on a simple integration task, this framework should be of general relevance to other neural systems as well, providing a new approach to understand neural computation at multiple levels.",
"Our model may be used as a building block for AI systems to perform general navigation (Pei et al., 2019) .",
"In order to effectively navigate in complex environments, the agent would need to construct a cognitive map of the surrounding environment and update its own position during motion.",
"A circuit that performs heading integration will likely be combined with another circuit to integrate the magnitude of motion (speed) to perform dead reckoning.",
"Training RNNs to perform more challenging navigation tasks such as these, along with multiple sources of inputs, i.e., vestibular, visual, auditory, will be useful for building robust navigational systems and for improving our understanding of the computational mechanisms of navigation in the brain (Cueva & Wei, 2018; Banino et al., 2018) .",
"Figure 9: Joint HD × AV tuning of the initial, randomly connected network and the final trained network.",
"a) Before training, the 100 units in the network do not have pronounced joint HD × AV tuning.",
"The color scale is different for each unit (blue = minimum activity, yellow = maximum activity) to maximally highlight any potential variation in the untrained network.",
"b) After training, the units are tuned to HD × AV, with the exception of 12 units (shown at the bottom) which are not active and do not influence the network."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.2631579041481018,
0.23255813121795654,
0.3181818127632141,
0.27272728085517883,
0.2222222238779068,
0.25,
0.15789473056793213,
0.3928571343421936,
0.25,
0.13114753365516663,
0.09677419066429138,
0.1090909019112587,
0.2631579041481018,
0.10958904027938843,
0.32258063554763794,
0.22727271914482117,
0.4444444477558136,
0.1304347813129425,
0,
0.13793103396892548,
0.12121211737394333,
0.07407406717538834,
0.20588235557079315,
0.21739129722118378,
0.1395348757505417,
0.04255318641662598,
0.20512819290161133,
0.14035087823867798,
0.05405404791235924,
0.13636362552642822,
0.15789473056793213,
0.15789473056793213,
0.12121211737394333,
0,
0.04999999329447746,
0.051282044500112534,
0.051282044500112534,
0.1428571343421936,
0.05405404791235924,
0.09677419066429138,
0.1249999925494194,
0.14035087823867798,
0.11764705181121826,
0.0476190410554409,
0.1702127605676651,
0.1860465109348297,
0.11594202369451523,
0.21621620655059814,
0.052631575614213943,
0.04347825422883034,
0.21739129722118378
] | HklSeREtPB | true | [
"Artificial neural networks trained with gradient descent are capable of recapitulating both realistic neural activity and the anatomical organization of a biological circuit."
] |
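The preceding record trains RNNs to integrate angular velocity into a head-direction estimate. The sketch below shows one minimal way such a setup could look, assuming PyTorch, a vanilla nn.RNN, toy sequence lengths, and a (sin, cos) readout of the heading; the hyper-parameters are arbitrary and this is not the authors' training configuration.

```python
# Illustrative sketch (PyTorch, toy settings): an RNN trained to integrate
# angular velocity (AV) into a heading represented as (sin theta, cos theta).
import torch
import torch.nn as nn

torch.manual_seed(0)
T, batch, hidden = 50, 64, 100

class Integrator(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.RNN(input_size=1, hidden_size=hidden, batch_first=True)
        self.readout = nn.Linear(hidden, 2)  # predicts (sin, cos) of the heading

    def forward(self, av):
        h, _ = self.rnn(av)
        return self.readout(h)

def make_batch():
    av = 0.3 * torch.randn(batch, T, 1)           # angular velocity inputs
    theta = torch.cumsum(av, dim=1).squeeze(-1)   # integrated head direction
    return av, torch.stack([torch.sin(theta), torch.cos(theta)], dim=-1)

model = Integrator()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(200):                           # a few steps, for illustration
    av, target = make_batch()
    loss = ((model(av) - target) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
print(float(loss))
```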
[
"Convolutional Neural Networks (CNNs) are computationally intensive, which limits their application on mobile devices.",
"Their energy is dominated by the number of multiplies needed to perform the convolutions.",
"Winograd’s minimal filtering algorithm (Lavin, 2015) and network pruning (Han et al., 2015) can reduce the operation count, but these two methods cannot be straightforwardly combined — applying the Winograd transform fills in the sparsity in both the weights and the activations.",
"We propose two modifications to Winograd-based CNNs to enable these methods to exploit sparsity.",
"First, we move the ReLU operation into the Winograd domain to increase the sparsity of the transformed activations.",
"Second, we prune the weights in the Winograd domain to exploit static weight sparsity.",
"For models on CIFAR-10, CIFAR-100 and ImageNet datasets, our method reduces the number of multiplications by 10.4x, 6.8x and 10.8x respectively with loss of accuracy less than 0.1%, outperforming previous baselines by 2.0x-3.0x.",
"We also show that moving ReLU to the Winograd domain allows more aggressive pruning.",
"Deep Convolutional Neural Networks (CNNs) have shown significant improvement in many machine learning applications.",
"However, CNNs are compute-limited.",
"Their performance is dominated by the number of multiplies needed to perform the convolutions.",
"Moreover, the computational workload of CNNs continues to grow over time.",
"BID16 proposed a CNN model with less than 2.3 × 10 7 multiplies for handwritten digit classification.",
"Later, BID13 developed AlexNet, an ImageNet-winning CNN with more than 1.1 × 10 9 multiplies.",
"In 2014, ImageNetwinning and runner up CNNs increased the number of multiplies to 1.4 × 10 9 BID24 ) and 1.6 × 10 10 BID22 respectively.",
"Despite the powerful representational ability of large scale CNNs, their computational workload prohibits deployment on mobile devices.",
"Two research directions have been explored to address the problem.",
"BID14 proposed using Winograd's minimal filtering algorithm BID25 to reduce the number of multiplies needed to perform 3 × 3 kernel convolutions.",
"On the other end, pruning the model BID5 and exploiting the dynamic sparsity of activations due to ReLU also reduces the required multiplies.",
"Unfortunately, the above two directions are not compatible: the Winograd transformation fills in the zeros in both the weights and the activations FIG0 ) -eliminating the gain from exploiting sparsity.",
"Thus, for a pruned network, Winograd's algorithm actually increases the number of multiplies; the loss of sparsity more than offsets the reduced operation count.In this paper, we introduce two modifications to the original Winograd-based convolution algorithm to eliminate this problem.",
"First, we move the ReLU operation to be after the Winograd transform to also make the activations sparse at the point where the multiplies are performed.",
"Second, we prune the weights after (rather than before) they are transformed.",
"Thus, the weights are sparse when the elementwise multiply is performed -reducing the operation count.",
"Together, these two modifications enable the gains of Winograd's algorithm and of exploiting sparsity to be combined.",
"We open-source our code and models at https://github.com/xingyul/Sparse-Winograd-CNN.",
"In this section, we summarize the experiment results and compare the three models in terms of",
"a) weight and activation dimensions and",
"b) the dynamic density of activations.",
"We then visualize the kernels to illustrate the pattern of the proposed Winograd-ReLU model kernel.",
"DISPLAYFORM0",
"We have shown that we can combine the computational savings of sparse weights and activations with the savings of the Winograd transform by making two modifcations to conventional CNNs.",
"To make the weights sparse at the point of multiplication, we train and prune the weights in the transform domain.",
"This simple approach does not reduce the workload with respect to spatial pruning, though, so we move the ReLU non-linear operation after the Winograd transform to make the activations sparse at the point of multiplication.",
"Moving ReLU to the Winograd domain also allows the weights to be more aggressively pruned without losing accuracy.",
"With a 2 × 2 output patch (p = 4), the net result is a reduction of 10.4×, 6.8× and 10.8× in computation on three datasets: CIFAR-10, CIFAR-100 and ImageNet.We plan to extend this work in the following directions.",
"First, we expect that even greater savings on computation can be realized by using larger patch sizes (e.g., p = 6), and there may be benefit in exploring different Winograd transformation matrices (B,G and A).",
"Second, we expect that using different pruning rates r i for each network layer will help maintain accuracy and improve overall workload reduction.",
"Finally, we expect that combining our Winograd-ReLU network with other network simplification techniques, e.g. quantization of weights and/or activations BID4 BID18 BID20 , will reduce the energy of computation even further."
] | [
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0,
0,
0.1702127605676651,
0,
0.23076923191547394,
0.25,
0.04444444179534912,
0.23999999463558197,
0.07999999821186066,
0,
0,
0,
0.06896550953388214,
0,
0.05882352590560913,
0,
0,
0,
0.12903225421905518,
0.17142856121063232,
0.04444444179534912,
0.1249999925494194,
0,
0,
0.07407406717538834,
0.09999999403953552,
0.1538461446762085,
0.1249999925494194,
0,
0,
0.1111111044883728,
0.2222222238779068,
0.09756097197532654,
0.2222222238779068,
0.08510638028383255,
0.1304347813129425,
0.1764705777168274,
0.04878048226237297
] | HJzgZ3JCW | true | [
"Prune and ReLU in Winograd domain for efficient convolutional neural network"
] |
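The preceding record moves ReLU into the Winograd domain and prunes the transformed weights, so that the element-wise multiplies operate on sparse operands. The sketch below only illustrates that idea on a single-channel F(2x2, 3x3) tile with the standard Winograd transform matrices; the pruning threshold and the random tile are arbitrary, and this is not the authors' implementation.

```python
# Illustrative sketch: ReLU and pruning applied in the Winograd domain for one
# single-channel F(2x2, 3x3) tile (numpy only, arbitrary threshold).
import numpy as np

# Standard Winograd F(2x2, 3x3) transform matrices (Lavin, 2015).
B_T = np.array([[1, 0, -1, 0], [0, 1, 1, 0], [0, -1, 1, 0], [0, 1, 0, -1]], float)
G = np.array([[1, 0, 0], [0.5, 0.5, 0.5], [0.5, -0.5, 0.5], [0, 0, 1]], float)
A_T = np.array([[1, 1, 1, 0], [0, 1, -1, -1]], float)

rng = np.random.default_rng(0)
d = rng.normal(size=(4, 4))      # 4x4 input tile
g = rng.normal(size=(3, 3))      # 3x3 kernel

U = G @ g @ G.T                  # transformed kernel (4x4)
U = U * (np.abs(U) > 0.5)        # prune transformed weights: static sparsity
V = B_T @ d @ B_T.T              # transformed input tile (4x4)
V = np.maximum(V, 0.0)           # ReLU in the Winograd domain: dynamic sparsity

Y = A_T @ (U * V) @ A_T.T        # 2x2 output tile from 16 element-wise multiplies
print(Y)
```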
[
"In this paper we present a novel optimization algorithm called Advanced Neuroevolution.",
"The aim for this algorithm is to train deep neural networks, and eventually act as an alternative to Stochastic Gradient Descent (SGD) and its variants as needed.We evaluated our algorithm on the MNIST dataset, as well as on several global optimization problems such as the Ackley function.",
"We find the algorithm performing relatively well for both cases, overtaking other global optimization algorithms such as Particle Swarm Optimization (PSO) and Evolution Strategies (ES).\n",
"Gradient Descent (GD) and its variations like stochastic gradient descent BID2 are the de facto standard for training deep neural networks (DNNs) for tasks in various domains like Object Detection BID10 , Robotic Grasping BID9 and Machine Translation BID1 .",
"Most of the field of Deep Learning is centered around algorithms similar to variants of Gradient Descent to find the optimal weights given desired input/output pairs BID7 , BID4 , BID14 .",
"However, there are also some limitations to using gradient-based optimization.",
"For example, the neural network and the loss function have to be differentiable end-to-end.",
"As a consequence, there are a number of problems that can not be directly modeled or solved without some alterations such as Formal Logic and Hard Attention BID11 .",
"Note that throughout this paper, we will refer to gradient-based methods collectively as SGD.",
"Similarly, we will refer to Advanced Neuroevolution with the acronym AdvN.For those reasons, we developed a new algorithm which we call Advanced Neuroevolution.",
"It is not a single algorithm, in truth.",
"It is an ensemble of low-level algorithms, layered on top of each other.",
"Those low-level algorithms have different scopes of operations addressing different levels of abstraction in the search process.",
"For example, the perturbation mechanism addresses the introduction of noise into the models, the most basic operation.",
"In contrast, the minimum distance mechanism addresses the global scale properties, i.e. the search regions.",
"The goal is to traverse the search space as efficiently as possible without use of gradients.",
"In the case of neural networks the search space is the weight space, including biases.Indeed, while this algorithm was developed primarily for training of deep neural networks, it can be used for other optimization tasks.",
"In essence, we present the algorithm as an evolutionary optimization algorithm, with a focus on DNNs.There are many global optimization algorithms such as Evolution Strategies BID13 , Particle Swarm Optimization BID8 and Simulated Annealing BID23 .",
"Each has its merits and limitations.",
"Our aim is not to compete directly with those algorithms but rather to complement them and offer another option with its own merits and limitations.",
"To evaluate the performance of such algorithms we can use well-known benchmark functions such as the Rastrigin or Ackley function.",
"We recognize those functions and test Advanced Neuroevolution against them to assess its performance.In addition, there have been other approaches to using evolutionary optimization techniques to train DNNs, see and BID19 as recent examples.",
"It reflects the awareness within the broader research community about the potential of such algorithms, and the need for alternatives to SGD.",
"We don't see our algorithm replacing SGD, especially in fields where it is already quite successful such as Computer Vision.",
"Our aim is to complement it, by offering another option.",
"Furthermore, there is no reason why both can not be used in tandem as part of a grander learning strategy.",
"We presented the Advanced Neuroevolution algorithm as an alternative optimization step to SGD to train neural networks.",
"The work is motivated by some limitations we perceived in gradient-based methods, such as differentiability and sample-inefficiency.",
"The algorithm is benchmarked against other optimization algorithms on typical optimization problems.",
"It performed satisfactorily well, and improved upon all of them.",
"For fairness, we noted that the implementation of the other algorithms may not be optimized, and they can arguably perform better.Next our algorithm is tested on the MNIST digit classification task.",
"It achieved 90% accuracy on the entire validation set using only 2000 images from the training set.",
"In all our experiments, halfprecision floats are used in order to decrease the time of the computations.",
"The computations are done only on 4 Titan V GPUs instead of thousands of CPU cores as in other evolutionary algorithms papers.",
"This makes training of neural networks with evolutionary algorithms more tractable in terms of resource requirements.Finally, while not presented in this work, preliminary tests of our algorithm on RL tasks have been promising.",
"It solves the assigned problems, though it takes longer than other approaches.",
"We aim to improve upon the algorithm and the strategies employed in order to achieve competitive results on RL and Robotics tasks."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1538461446762085,
0.3396226465702057,
0.14999999105930328,
0.1599999964237213,
0.04999999701976776,
0.1666666567325592,
0.2222222238779068,
0.04878048226237297,
0.0714285671710968,
0.1764705777168274,
0,
0.07692307233810425,
0,
0,
0,
0.06896550953388214,
0.2222222238779068,
0.1666666567325592,
0.09999999403953552,
0.1111111044883728,
0.0624999962747097,
0.21739129722118378,
0.12121211737394333,
0.05882352590560913,
0.0833333283662796,
0,
0.4000000059604645,
0.06451612710952759,
0.23999999463558197,
0.0833333283662796,
0.1818181723356247,
0.06896550953388214,
0.06666666269302368,
0.05714285373687744,
0.17777776718139648,
0,
0.24242423474788666
] | r1g5Gh05KQ | true | [
"A new algorithm to train deep neural networks. Tested on optimization functions and MNIST."
] |
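The preceding record benchmarks the method on global optimization problems such as the Ackley function. As a point of reference, the sketch below implements the standard Ackley function and a generic (1+lambda) perturb-and-select loop; it is not the Advanced Neuroevolution algorithm, whose layered mechanisms are only described at a high level above, and all settings are arbitrary.

```python
# Illustrative sketch: the Ackley benchmark and a generic (1+lambda)
# perturbation loop (not the Advanced Neuroevolution algorithm).
import numpy as np

def ackley(x):
    # Standard d-dimensional Ackley function; global minimum 0 at x = 0.
    a, b, c = 20.0, 0.2, 2.0 * np.pi
    d = x.size
    return (-a * np.exp(-b * np.sqrt(np.sum(x ** 2) / d))
            - np.exp(np.sum(np.cos(c * x)) / d) + a + np.e)

rng = np.random.default_rng(0)
best = rng.uniform(-5, 5, size=10)
best_f = ackley(best)
sigma, offspring = 0.5, 16

for gen in range(500):
    pop = best + sigma * rng.normal(size=(offspring, best.size))
    fs = np.array([ackley(p) for p in pop])
    if fs.min() < best_f:            # keep the elite
        best, best_f = pop[fs.argmin()], fs.min()
print(best_f)
```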
[
"Stochastic neural net weights are used in a variety of contexts, including regularization, Bayesian neural nets, exploration in reinforcement learning, and evolution strategies.",
"Unfortunately, due to the large number of weights, all the examples in a mini-batch typically share the same weight perturbation, thereby limiting the variance reduction effect of large mini-batches.",
"We introduce flipout, an efficient method for decorrelating the gradients within a mini-batch by implicitly sampling pseudo-independent weight perturbations for each example.",
"Empirically, flipout achieves the ideal linear variance reduction for fully connected networks, convolutional networks, and RNNs.",
"We find significant speedups in training neural networks with multiplicative Gaussian perturbations.",
"We show that flipout is effective at regularizing LSTMs, and outperforms previous methods.",
"Flipout also enables us to vectorize evolution strategies: in our experiments, a single GPU with flipout can handle the same throughput as at least 40 CPU cores using existing methods, equivalent to a factor-of-4 cost reduction on Amazon Web Services.",
"Stochasticity is a key component of many modern neural net architectures and training algorithms.",
"The most widely used regularization methods are based on randomly perturbing a network's computations BID29 BID7 .",
"Bayesian neural nets can be trained with variational inference by perturbing the weights BID4 BID0 .",
"Weight noise was found to aid exploration in reinforcement learning BID20 BID2 .",
"Evolution strategies (ES) minimizes a black-box objective by evaluating many weight perturbations in parallel, with impressive performance on robotic control tasks BID25 .",
"Some methods perturb a network's activations BID29 BID7 , while others perturb its weights BID4 BID0 BID20 BID2 BID25 .",
"Stochastic weights are appealing in the context of regularization or exploration because they can be viewed as a form of posterior uncertainty about the parameters.",
"However, compared with stochastic activations, they have a serious drawback: because a network typically has many more weights than units, it is very expensive to compute and store separate weight perturbations for every example in a mini-batch.",
"Therefore, stochastic weight methods are typically done with a single sample per mini-batch.",
"In contrast, activations are easy to sample independently for different training examples within a mini-batch.",
"This allows the training algorithm to see orders of magnitude more perturbations in a given amount of time, and the variance of the stochastic gradients decays as 1/N , where N is the mini-batch size.",
"We believe this is the main reason stochastic activations are far more prevalent than stochastic weights for neural net regularization.",
"In other settings such as Bayesian neural nets and evolution strategies, one is forced to use weight perturbations and live with the resulting inefficiency.In order to achieve the ideal 1/N variance reduction, the gradients within a mini-batch need not be independent, but merely uncorrelated.",
"In this paper, we present flipout, an efficient method for decorrelating the gradients between different examples without biasing the gradient estimates.",
"Flipout applies to any perturbation distribution that factorizes by weight and is symmetric around 0-including DropConnect, multiplicative Gaussian perturbations, evolution strategies, and variational Bayesian neural nets-and to many architectures, including fully connected nets, convolutional nets, and RNNs.In Section 3, we show that flipout gives unbiased stochastic gradients, and discuss its efficient vectorized implementation which incurs only a factor-of-2 computational overhead compared with shared perturbations.",
"We then analyze the asymptotics of gradient variance with and without flipout, demonstrating strictly reduced variance.",
"In Section 4, we measure the variance reduction effects on a variety of architectures.",
"Empirically, flipout gives the ideal 1/N variance reduction in all architectures we have investigated, just as if the perturbations were done fully independently for each training example.",
"We demonstrate speedups in training time in a large batch regime.",
"We also use flipout to regularize the recurrent connections in LSTMs, and show that it outperforms methods based on dropout.",
"Finally, we use flipout to vectorize evolution strategies BID25 , allowing a single GPU to handle the same throughput as 40 CPU cores using existing approaches; this corresponds to a factor-of-4 cost reduction on Amazon Web Services.",
"We have introduced flipout, an efficient method for decorrelating the weight gradients between different examples in a mini-batch.",
"We showed that flipout is guaranteed to reduce the variance compared with shared perturbations.",
"Empirically, we demonstrated significant variance reduction in the large batch setting for a variety of network architectures, as well as significant speedups in training time.",
"We showed that flipout outperforms dropout-based methods for regularizing LSTMs.",
"Flipout also makes it practical to apply GPUs to evolution strategies, resulting in substantially increased throughput for a given computational cost.",
"We believe flipout will make weight perturbations practical in the large batch setting favored by modern accelerators such as Tensor Processing Units (Jouppi et al., 2017) .",
"DISPLAYFORM0 In this section, we provide the proof of Theorem 2 (Variance Decomposition Theorem).Proof",
". We use",
"the notations from Section 3.2. Let x,",
"x denote two training examples from the mini-batch B, and ∆W, ∆W denote the weight perturbations they received. We begin",
"with the decomposition into data and estimation terms (Eqn. 6), which we repeat here for convenience: DISPLAYFORM1 The data term from Eqn. 13 can be",
"simplified: DISPLAYFORM2 We break the estimation term from Eqn. 13 into variance",
"and covariance terms: DISPLAYFORM3 We now separately analyze the cases of fully independent perturbations, shared perturbations, and flipout.Fully independent perturbations. If the perturbations",
"are fully independent, the second term in Eqn. 15 disappears. Hence",
", combining Eqns",
". 13, 14, and 15, we are",
"left with DISPLAYFORM4 which is just α/N ."
] | [
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1702127605676651,
0.1599999964237213,
0.8936170339584351,
0.09756097197532654,
0.15789473056793213,
0.05128204822540283,
0.0624999962747097,
0.14999999105930328,
0.0476190410554409,
0.19512194395065308,
0,
0.1666666567325592,
0.09090908616781235,
0.12244897335767746,
0.26229506731033325,
0.20512820780277252,
0.19512194395065308,
0.2142857164144516,
0.31111109256744385,
0.24242423474788666,
0.3478260934352875,
0.16470587253570557,
0.1463414579629898,
0.09999999403953552,
0.19230768084526062,
0.1111111044883728,
0.08695651590824127,
0.06666666269302368,
0.5454545617103577,
0.14999999105930328,
0.1249999925494194,
0.1111111044883728,
0.08695651590824127,
0.18867923319339752,
0.04878048226237297,
0.0714285671710968,
0.05882352590560913,
0.22727271914482117,
0.07999999821186066,
0.10526315122842789,
0.13333332538604736,
0.054054051637649536,
0,
0,
0
] | rJNpifWAb | true | [
"We introduce flipout, an efficient method for decorrelating the gradients computed by stochastic neural net weights within a mini-batch by implicitly sampling pseudo-independent weight perturbations for each example."
] |
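The preceding record describes flipout: one shared weight perturbation is decorrelated across the examples in a mini-batch by multiplying it with per-example random sign vectors, which can be vectorised as two extra matrix products. The numpy sketch below only illustrates that identity for a single x @ W layer; the shapes and perturbation scale are arbitrary, and this is not the authors' code.

```python
# Illustrative sketch of the flipout identity for one fully connected layer:
# y_n = x_n W + ((x_n * s_n) dW) * r_n, with s_n, r_n random sign vectors.
import numpy as np

rng = np.random.default_rng(0)
N, d_in, d_out = 8, 5, 3
x = rng.normal(size=(N, d_in))
W = rng.normal(size=(d_in, d_out))
dW = 0.1 * rng.normal(size=(d_in, d_out))       # one shared perturbation sample

S = rng.choice([-1.0, 1.0], size=(N, d_in))     # per-example input-side signs
R = rng.choice([-1.0, 1.0], size=(N, d_out))    # per-example output-side signs

# Vectorised flipout forward pass.
y_flip = x @ W + ((x * S) @ dW) * R

# Equivalent to giving each example its own perturbation dW * outer(s_n, r_n).
y_ref = np.stack([x[n] @ (W + dW * np.outer(S[n], R[n])) for n in range(N)])
print(np.allclose(y_flip, y_ref))               # True
```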
[
"Deep generative models such as Variational AutoEncoder (VAE) and Generative Adversarial Network (GAN) play an increasingly important role in machine learning and computer vision.",
"However, there are two fundamental issues hindering their real-world applications: the difficulty of conducting variational inference in VAE and the functional absence of encoding real-world samples in GAN.",
"In this paper, we propose a novel algorithm named Latently Invertible Autoencoder (LIA) to address the above two issues in one framework.",
"An invertible network and its inverse mapping are symmetrically embedded in the latent space of VAE.",
"Thus the partial encoder first transforms the input into feature vectors and then the distribution of these feature vectors is reshaped to fit a prior by the invertible network.",
"The decoder proceeds in the reverse order of the encoder's composite mappings.",
"A two-stage stochasticity-free training scheme is designed to train LIA via adversarial learning, in the sense that the decoder of LIA is first trained as a standard GAN with the invertible network and then the partial encoder is learned from an autoencoder by detaching the invertible network from LIA. ",
"Experiments conducted on the FFHQ face dataset and three LSUN datasets validate the effectiveness of LIA for inference and generation.",
"Deep generative models play a more and more important role in cracking challenges in computer vision as well as in other disciplines, such as high-quality image generation Karras et al., 2018a; Brock et al., 2018 ), text-to-speech transformation (van den Oord et al., 2016a; , information retrieval (Wang et al., 2017) , 3D rendering (Wu et al., 2016; Eslami et al., 2018) , and signal-to-image acquisition (Zhu et al., 2018) .",
"Overall, the generative models fall into four categories: autoencoder and its most important variant of Variational AutoEncoder (VAE) (Kingma & Welling, 2013) , auto-regressive models (van den Oord et al., 2016b; a) , Generative Adversarial Network (GAN) (Goodfellow et al., 2014) , and normalizing flows (NF) (Tabak & Vanden-Eijnden, 2010; Tabak & Turner, 2013; Rezende & Mohamed, 2015) .",
"Here we compare these models through the perspective of data dimensionality reduction and reconstruction.",
"To be formal, let x be a data point in the d x -dimensional observable space R dx and y be its corresponding low-dimensional representation in the feature space R dy .",
"The general formulation of dimensionality reduction is",
"where f (·) is the mapping function and d y d x .",
"The manifold learning aims at requiring f under various constraints on y (Tenenbaum1 et al., 2000; Roweis & Saul, 2000) .",
"However, the sparsity of data points in high-dimensional space often leads to model overfitting, thus necessitating research on opposite mapping from y to x, i.e.",
"where g(·) is the opposite mapping function with respect to f (·), to reconstruct the data.",
"In general, the role of g(·) is a regularizer to f (·) or a generator to produce more data.",
"The autoencoder is of mapping x f → y g →x.",
"A common assumption in autoencoder is that the variables in lowdimensional space are usually sampled from a prior distribution P(z; θ) such as uniform or Gaussian.",
"To differentiate from y, we let z represent the low-dimensional vector following the prior distribution.",
"Thus we can write g : R dz → R dx , z → x = g(z), z ∼ P(z; θ).",
"It is crucial to establish such dual maps z = f (x) and x = g(z).",
"In the parlance of probability, the process of x → z = f (x) is called inference, and the other procedure of z → x = g(z) is called sampling or generation.",
"VAE is capable of carrying out inference and generation in one framework by two collaborative functional modules.",
"However, it is known that in many cases VAEs are only able to generate blurry images due to the imprecise variational inference.",
"To see this, we write the approximation of the marginal log-likelihood",
"where KL[q(z|x)||p(z)] is the Kullback-Leibler divergence with respect to posterior probability q(z|x) and prior p(z).",
"This lower-bound log-likelihood usually produces imprecise inference.",
"Furthermore, the posterior collapse frequently occurs when using more sophisticated decoder models (Bowman et al., 2015; Kingma et al., 2016 ).",
"These two issues greatly limit the generation capability of the VAE.",
"On the other hand, GAN is able to achieve photo-realistic generation results (Karras et al., 2018a; .",
"However, its critical limitation is the absence of the encoder f (x) for carrying inference on real images.",
"Effort has been made on learning an encoder for GAN under the framework of VAE, however the previous two issues of learning VAE still exist.",
"Normalizing flows can perform the exact inference and generation with one architecture by virtue of invertible networks (Kingma & Dhariwal, 2018) .",
"But it requires the dimension d x of the data space to be identical to the dimension d z of the latent space, thus posing computational issues due to high complexity of learning deep flows and computing the Jacobian matrices.",
"Inspired by recent success of GANs (Karras et al., 2018a; and normalizing flows (Kingma et al., 2016; Kingma & Dhariwal, 2018) , we develop a new model called Latently Invertible Autoencoder (LIA).",
"LIA utilizes an invertible network to bridge the encoder and the decoder of VAE in a symmetric manner.",
"We summarize its key advantages as follows:",
"• The symmetric design of the invertible network brings two benefits.",
"The prior distribution can be exactly fitted from an unfolded feature space, thus significantly easing the inference problem.",
"Besides, since the latent space is detached, the autoencoder can be trained without variational optimization thus there is no approximation here.",
"• The two-stage adversarial learning decomposes the LIA framework into a Wasserstein GAN (only a prior needed) and a standard autoencoder without stochastic variables.",
"Therefore the training is deterministic 2 , implying that the model will be not affected by the posterior collapse when the decoder is more complex or followed by additional losses such as the adversarial loss and the perceptual loss.",
"• We compare LIA with state-of-the-art generative models on inference and generation/reconstruction.",
"The experimental results on FFHQ and LSUN datasets show the LIA achieves superior performance on inference and generation.",
"A new generative model, named Latently Invertible Autoencoder (LIA), has been proposed for generating image sample from a probability prior and simultaneously inferring accurate latent code for a given sample.",
"The core idea of LIA is to symmetrically embed an invertible network in an autoencoder.",
"Then the neural architecture is trained with adversarial learning as two decomposed modules.",
"With the design of two-stage training, the decoder can be replaced with any GAN generator for high-resolution image generation.",
"The role of the invertible network is to remove any probability optimization and bridge the prior with unfolded feature vectors.",
"The effectiveness of LIA is validated with experiments of reconstruction (inference and generation) on FFHQ and LSUN datasets.",
"It is still challenging to faithfully recover all the image content especially when the objects or scenes have unusual parts.",
"For example, LIA fails to recover the hand appeared at the top of the little girl (the second row in Figure 3) .",
"Besides, the Bombay cat's necklace (the second row in Figure 5 ) is missed in the reconstructed image.",
"These features belong to multiple unique parts of the objects or scenes, which are difficult for the generative model to capture.",
"One possible solution is to raise the dimension of latent variables (e.g. using multiple latent vectors) or employ the attention mechanism to highlight such unusual structures in the decoder, which is left for future work."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.08510638028383255,
0.2916666567325592,
0.260869562625885,
0.3499999940395355,
0.2916666567325592,
0.17142856121063232,
0.380952388048172,
0.1904761791229248,
0.054794516414403915,
0.08219177275896072,
0.15789473056793213,
0.1249999925494194,
0.12903225421905518,
0.17142856121063232,
0,
0.20408162474632263,
0.15789473056793213,
0.19512194395065308,
0.11428570747375488,
0.16326530277729034,
0.052631575614213943,
0,
0.1538461446762085,
0.17391303181648254,
0.2926829159259796,
0.2666666507720947,
0.11764705181121826,
0.20512820780277252,
0.06451612710952759,
0.09090908616781235,
0.1764705777168274,
0.1463414579629898,
0.19512194395065308,
0.1304347813129425,
0.2222222238779068,
0.14814814925193787,
0.25925925374031067,
0.39024388790130615,
0,
0.22857142984867096,
0.1428571343421936,
0.1395348757505417,
0.17391303181648254,
0.2181818187236786,
0.1111111044883728,
0.14999999105930328,
0.2745097875595093,
0.31578946113586426,
0.1621621549129486,
0.1428571343421936,
0.3255814015865326,
0.14999999105930328,
0.1395348757505417,
0.1818181723356247,
0.14999999105930328,
0.1860465109348297,
0.2181818187236786
] | ryefE1SYDr | true | [
"A new model Latently Invertible Autoencoder is proposed to solve the problem of variational inference in VAE using the invertible network and two-stage adversarial training."
] |
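The preceding record embeds an invertible network between the encoder's feature space and the latent prior so that inference needs no variational approximation. The sketch below only illustrates the underlying property with a single additive coupling layer in numpy, which can be inverted exactly; the fixed random parameters are hypothetical and this is not the LIA architecture.

```python
# Illustrative sketch: one additive coupling layer is exactly invertible, the
# property an invertible latent mapping relies on (fixed random parameters).
import numpy as np

rng = np.random.default_rng(0)
d = 8
W1 = rng.normal(size=(d // 2, d // 2))
b1 = rng.normal(size=d // 2)

def forward(y):
    y1, y2 = y[: d // 2], y[d // 2:]
    return np.concatenate([y1, y2 + np.tanh(y1 @ W1 + b1)])

def inverse(z):
    z1, z2 = z[: d // 2], z[d // 2:]
    return np.concatenate([z1, z2 - np.tanh(z1 @ W1 + b1)])

y = rng.normal(size=d)                    # a feature vector from the encoder
z = forward(y)                            # mapped toward the prior space
print(np.allclose(inverse(z), y))         # True: no inference approximation
```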
[
"Neural programs are highly accurate and structured policies that perform algorithmic tasks by controlling the behavior of a computation mechanism.",
"Despite the potential to increase the interpretability and the compositionality of the behavior of artificial agents, it remains difficult to learn from demonstrations neural networks that represent computer programs.",
"The main challenges that set algorithmic domains apart from other imitation learning domains are the need for high accuracy, the involvement of specific structures of data, and the extremely limited observability.",
"To address these challenges, we propose to model programs as Parametrized Hierarchical Procedures (PHPs).",
"A PHP is a sequence of conditional operations, using a program counter along with the observation to select between taking an elementary action, invoking another PHP as a sub-procedure, and returning to the caller.",
"We develop an algorithm for training PHPs from a set of supervisor demonstrations, only some of which are annotated with the internal call structure, and apply it to efficient level-wise training of multi-level PHPs.",
"We show in two benchmarks, NanoCraft and long-hand addition, that PHPs can learn neural programs more accurately from smaller amounts of both annotated and unannotated demonstrations.",
"Representing the logic of a computer program with a parametrized model, such as a neural network, is a central challenge in AI with applications including reinforcement learning, robotics, natural language processing, and programming by example.",
"A salient feature of recently-proposed approaches for learning programs BID32 BID6 is their ability to leverage the hierarchical structure of procedure invocations present in well-designed programs.Explicitly exposing this hierarchical structure enables learning neural programs with empirically superior generalization, compared to baseline methods that learn only from elementary computer operations, but requires training data that does not consists only of low-level computer operations but is annotated with the higher-level procedure calls BID32 BID6 .",
"tackled the problem of learning hierarchical neural programs from a mixture of annotated training data (hereafter called strong supervision) and unannotated training data where only the elementary operations are given without their call-stack annotations (called weak supervision).",
"In this paper, we propose to learn hierarchical neural programs from a mixture of strongly supervised and weakly supervised data via the Expectation-Gradient method and an explicit program counter, in lieu of a high-dimensional real-valued state of a recurrent neural network.Our approach is inspired by recent work in robot learning and control.",
"In Imitation Learning (IL), an agent learns to behave in its environment using supervisor demonstrations of the intended behavior.",
"However, existing approaches to IL are largely insufficient for addressing algorithmic domains, in which the target policy is program-like in its accurate and structured manipulation of inputs and data structures.",
"An example of such a domain is long-hand addition, where the computer loops over the digits to be added, from least to most significant, calculating the sum and carry.",
"In more complicated examples, the agent must correctly manipulate data structures to compute the right output.Three main challenges set algorithmic domains apart from other IL domains.",
"First, the agent's policy must be highly accurate.",
"Algorithmic behavior is characterized by a hard constraint of output correctness, where any suboptimal actions are simply wrong and considered failures.",
"In contrast, many tasks in physical and simulated domains tolerate errors in the agent's actions, as long as some goal region in state-space is eventually reached, or some safety constraints are satisfied.",
"A second challenge is that algorithms often use specific data structures, which may require the algorithmic policies to have a particular structure.",
"A third challenge is that the environment in algorithmic domains, which consists of the program input and the data structures, is almost completely unobservable directly by the agent.",
"They can only be scanned using some limited reading apparatus, such as the read/write heads in a Turing Machine or the registers in a register machine.Recently proposed methods can infer from demonstration data hierarchical control policies, where high-level behaviors are composed of low-level manipulation primitives BID8 .",
"In this paper, we take a similar approach to address the challenges of algorithmic domains, by introducing Parametrized Hierarchical Procedures (PHPs), a structured model of algorithmic policies inspired by the options framework BID38 , as well as the procedural programming paradigm.",
"A PHP is a sequence of statements, such that each statement branches conditionally on the observation, to either (1) perform an elementary operation, (2) invoke another PHP as a sub-procedure, or (3) terminate and return control to the caller PHP.",
"The index of each statement in the sequence serves as a program counter to accurately remember which statement was last executed and which one is next.",
"The conditional branching in each statement is implemented by a neural network mapping the program counter and the agent's observation into the elementary operation, sub-procedure, or termination to be executed.",
"The PHP model is detailed in Section 4.1.PHPs have the potential to address the challenges of algorithmic domains by strictly maintaining two internal structures: a call stack containing the current branch of caller PHPs, and the current program counter of each PHP in the stack.",
"When a statement invokes a PHP as a sub-procedure, this PHP is pushed into the call stack.",
"When a statement terminates the current PHP, it is popped from the stack, returning control to the calling PHP to execute its next statement (or, in the case of the root PHP, ending the entire episode).",
"The stack also keeps the program counter of each PHP, which starts at 0, and is incremented each time a non-terminating statement is executed.PHPs impose a constraining structure on the learned policies.",
"The call stack arranges the policy into a hierarchical structure, where a higher-level PHP can solve a task by invoking lower-level PHPs that solve sub-tasks.",
"Since call stacks and program counters are widely useful in computer programs, they provide a strong inductive bias towards policy correctness in domains that conform to these constraints, while also being computationally tractable to learn.",
"To support a larger variety of algorithmic domains, PHPs should be extended in future work to more expressive structures, for example allowing procedures to take arguments.We experiment with PHPs in two benchmarks, the NanoCraft domain introduced in , and long-hand addition.",
"We find that our algorithm is able to learn PHPs from a mixture of strongly and weakly supervised demonstrations with better sample complexity than previous algorithms: it achieves better test performance with fewer demonstrations.In this paper we make three main contributions:• We introduce the PHP model and show that it is easier to learn than the NPI model BID32 ).•",
"We propose an Expectation-Gradient algorithm for efficiently training PHPs from a mixture of annotated and unannotated demonstrations (strong and weak supervision).•",
"We demonstrate efficient training of multi-level PHPs on NanoCraft and long-hand addition BID32 , and achieve improved success rate.2 RELATED",
"WORK BID32 Recursive NPI BID6 (recursive) NPL Mixed PHP (this work) Mixed BID18 , the Neural GPU BID19 , and End-to-End Memory Networks BID37 , have been proposed for learning neural programs from input-output examples, with components such as variable-sized memory and novel addressing mechanisms facilitating the training process.In contrast, our work considers the setting where, along with the input-output examples, execution traces are available which describe the steps necessary to solve a given problem. The Neural",
"Programmer-Interpreter (NPI, BID32 ) learns hierarchical policies from execution traces which not only indicate the low-level actions to perform, but also a structure over them specified by higher-level abstractions. BID6 showed",
"that learning from an execution trace with recursive structure enables perfect generalization. Neural Program",
"Lattices work within the same setting as the NPI, but can learn from a dataset of execution traces where only a small fraction contains information about the higher-level hierarchy.In demonstrations where the hierarchical structure along the trace is missing, this latent space grows exponentially in the trace length. address this challenge",
"via an approximation method that selectively averages latent variables on different computation paths to reduce the complexity of enumerating all paths. In contrast, we compute",
"exact gradients using dynamic programming, by considering a hierarchical structure that has small discrete latent variables in each time step.Other works use neural networks as a tool for outputting programs written in a discrete programming language, rather than having the neural network itself represent a program. BID3 learned to generate",
"programs for solving competition-style problems. BID9 and BID31 generate",
"programs in a domain-specific language for manipulating strings in spreadsheets.",
"In this paper we introduced the Parametrized Hierarchical Procedures (PHP) model for hierarchical representation of neural programs.",
"We proposed an Expectation-Gradient algorithm for training PHPs from a mixture of strongly and weakly supervised demonstrations of an algorithmic behavior, showed how to perform level-wise training of multi-level PHPs, and demonstrated the benefits of our approach on two benchmarks.PHPs alleviate the sample complexity required to train policies with unstructured memory architectures, such as LSTMs, by imposing the structure of a call stack augmented with program counters.",
"This structure may be limiting in that it requires the agent to also rely on observable information that could otherwise be memorized, such as the building specifications in the NanoCraft domain.",
"The benchmarks used so far in the field of neural programming are simple enough and observable enough to be solvable by PHPs, however we note that more complicated and less observable domains may require more expressive memory structures, such as passing arguments to sub-procedures.",
"Future work will explore such structures, as well as new benchmarks to further challenge the community.Our results suggest that adding weakly supervised demonstrations to the training set can improve performance at the task, but only when the strongly supervised demonstrations already get decent performance.",
"Weak supervision could attract the optimization process to a different hierarchical structure than intended by the supervisor, and in such cases we found it necessary to limit the number of weakly supervised demonstrations, or weight them less than demonstrations annotated with the intended hierarchy.An open question is whether the attractors strengthened by weak supervision are alternative but usable hierarchical structures, that are as accurate and interpretable as the supervisor's.",
"Future work will explore the quality of solutions obtained by training from only weakly supervised demonstrations."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.1904761791229248,
0.21739129722118378,
0.2448979616165161,
0.0555555522441864,
0.23529411852359772,
0.38461539149284363,
0.25531914830207825,
0.18867923319339752,
0.1818181723356247,
0.40740740299224854,
0.3030303120613098,
0.1463414579629898,
0.1599999964237213,
0.2083333283662796,
0.08510638028383255,
0.06666666269302368,
0.1395348757505417,
0.07999999821186066,
0.09090908616781235,
0.1304347813129425,
0.1538461446762085,
0.1428571343421936,
0.21052631735801697,
0.17391303181648254,
0.1599999964237213,
0.23728813230991364,
0.1666666567325592,
0.19999998807907104,
0.19607841968536377,
0.22727271914482117,
0.145454540848732,
0.23333333432674408,
0.3380281627178192,
0.5116279125213623,
0.1904761791229248,
0.1818181723356247,
0.15094339847564697,
0.1666666567325592,
0.1538461446762085,
0.1304347813129425,
0.1492537260055542,
0.12903225421905518,
0.12903225421905518,
0.3589743673801422,
0.2857142686843872,
0.0416666604578495,
0.13114753365516663,
0.03389830142259598,
0.1794871687889099,
0.15789473056793213
] | rJl63fZRb | true | [
"We introduce the PHP model for hierarchical representation of neural programs, and an algorithm for learning PHPs from a mixture of strong and weak supervision."
] |
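The row above describes how a Parametrized Hierarchical Procedure executes: a call stack holds (procedure, program counter) pairs, each statement branches on the observation into an elementary operation, a sub-procedure call, or termination, and the program counter is incremented for every non-terminating statement. The Python sketch below is only illustrative of that control flow; `php.statement`, `env.observe`, and `env.step` are hypothetical interfaces, not the authors' implementation.

```python
# Minimal sketch of PHP execution with an explicit call stack of
# (procedure, program counter) pairs; all interfaces are hypothetical.
def run_php(root_php, env, max_steps=100):
    stack = [(root_php, 0)]              # root PHP starts at program counter 0
    for _ in range(max_steps):
        if not stack:
            break                        # root PHP terminated: episode ends
        php, pc = stack.pop()
        obs = env.observe()
        kind, arg = php.statement(pc, obs)   # branch on (program counter, observation)
        if kind == "terminate":
            continue                     # pop: control returns to the caller's next statement
        stack.append((php, pc + 1))      # non-terminating statement: increment the counter
        if kind == "call":
            stack.append((arg, 0))       # push the sub-procedure with a fresh counter
        else:                            # elementary operation
            env.step(arg)
```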
[
"Deep Reinforcement Learning has managed to achieve state-of-the-art results in learning control policies directly from raw pixels.",
"However, despite its remarkable success, it fails to generalize, a fundamental component required in a stable Artificial Intelligence system.",
"Using the Atari game Breakout, we demonstrate the difficulty of a trained agent in adjusting to simple modifications in the raw image, ones that a human could adapt to trivially.",
"In transfer learning, the goal is to use the knowledge gained from the source task to make the training of the target task faster and better.",
"We show that using various forms of fine-tuning, a common method for transfer learning, is not effective for adapting to such small visual changes.",
"In fact, it is often easier to re-train the agent from scratch than to fine-tune a trained agent.",
"We suggest that in some cases transfer learning can be improved by adding a dedicated component whose goal is to learn to visually map between the known domain and the new one.",
"Concretely, we use Unaligned Generative Adversarial Networks (GANs) to create a mapping function to translate images in the target task to corresponding images in the source task.",
"These mapping functions allow us to transform between various variations of the Breakout game, as well as between different levels of a Nintendo game, Road Fighter.",
"We show that learning this mapping is substantially more efficient than re-training.",
"A visualization of a trained agent playing Breakout and Road Fighter, with and without the GAN transfer, can be seen in \\url{https://streamable.com/msgtm} and \\url{https://streamable.com/5e2ka}.",
"Transferring knowledge from previous occurrences to new circumstances is a fundamental human capability and is a major challenge for deep learning applications.",
"A plausible requirement for artificial general intelligence is that a network trained on one task can reuse existing knowledge instead of learning from scratch for another task.",
"For instance, consider the task of navigation during different hours of the day.",
"A human that knows how to get from one point to another on daylight will quickly adjust itself to do the same task during night time, while for a machine learning system making a decision based on an input image it might be a harder task.",
"That is because it is easier for us to make analogies between similar situations, especially in the things we see, as opposed to a robot that does not have this ability and its knowledge is based mainly on what it already saw.Deep reinforcement learning has caught the attention of researchers in the past years for its remarkable success in achieving human-level performance in a wide variety of tasks.",
"One of the field's famous achievements was on the Atari 2600 games where an agent was trained to play video games directly from the screen pixels and information received from the game BID20 .",
"However, this approach depends on interacting with the environment a substantial number of times during training.",
"Moreover, it struggles to generalize beyond its experience, the training process of a new task has to be performed from scratch even for a related one.",
"Recent works have tried to overcome this inefficiency with different approaches such as, learning universal policies that can generalize between related tasks BID25 , as well as other transfer approaches BID7 BID24 .",
"In this work, we first focus on the Atari game Breakout, in which the main concept is moving the paddle towards the ball in order to maximize the score of the game.",
"We modify the game by introducing visual changes such as adding a rectangle in the middle of the image or diagonals in the background.",
"From a human perspective, it appears that making visual changes that are not significant to the game's dynamics should not influence the score of the game, a player who mastered the original game should be able to trivially adapt to such visual variants.",
"We show that the agent fails to transfer.",
"Furthermore, fine-tuning, the main transfer learning method used today in neural networks, also fails to adapt to the small visual change: the information learned in the source task does not benefit the learning process of the very related target task, and can even decelerate it.",
"The algorithm behaves as if these are entirely new tasks.Our second focus is attempting to transfer agent behavior across different levels of a video game: can an agent trained on the first level of a game use this knowledge and perform adequately on subsequent levels?",
"We explore the Nintendo game Road Fighter, a car racing game where the goal is to finish the track before the time runs out without crashing.",
"The levels all share the same dynamics, but differ from each other visually and in aspects such as road width.",
"Similar to the Breakout results, an agent trained to play the first level fails to correctly adapt its past experience, causing the learned policy to completely fail on the new levels.To address the generalization problem, we propose a zero-shot generalization approach, in which the agent learns to transfer between related tasks by learning to visually map images from the target task back to familiar corresponding images from the source task.",
"Such mapping is naturally achieved using Generative Adversarial Networks (GANs) BID9 , one of the most popular methods for the image-to-image translation that is being used in computer vision tasks such as style transfer BID15 , object transfiguration BID31 , photo enhancement BID17 and more recently, video game level generation BID27 .",
"In our setup, it is not realistic to assume paired images in both domains, calling for the use of Unaligned GANs BID19 BID15 .",
"Using this approach we manage to transfer between similar tasks with no additional learning.Contributions This work presents three main contributions.",
"First, in Section 2, we demonstrate how an agent trained with deep reinforcement learning algorithms fails to adapt to small visual changes, and that the common transfer method of fine-tuning fails as well.",
"Second, in Section 3, we propose to separate the visual mapping from the game dynamics, resulting in a new transfer learning approach for related tasks based on visual input mapping.",
"We evaluate this approach on Breakout and Road Fighter, and present the results comparing to different baselines.",
"We show that our visual transfer approach is much more sample efficient then the alternatives.",
"Third, in section 5, we suggest an evaluation setup for unaligned GAN architectures, based on their achieved performance on concrete down-stream tasks.",
"We demonstrated the lack of generalization by looking at artificially constructed visual variants of a game (Breakout), and different levels of a game (Road Fighter).",
"We further show that transfer learning by fine-tuning fails.",
"The policies learned using model-free RL algorithms on the original game are not directly transferred to the modified games even when the changes are irrelevant to the game's dynamics.We present a new approach for transfer learning between related RL environments using GANs without the need for any additional training of the RL agent, and while requiring orders of magnitude less interactions with the environment.",
"We further suggest this setup as a way to evaluate GAN architectures by observing their behavior on concrete tasks, revealing differences between the Cycle-GAN and UNIT-GAN architectures.",
"We believe our approach is applicable to cases involving both direct and less direct mapping between environments, as long as an image-to-image translation exist.",
"While we report a success in analogy transfer using Unaligned GANs, we also encountered limitations in the generation process that made it difficult for the agent to maximize the results on the Road Fighter's tasks.",
"In future work, we plan to explore a tighter integration between the analogy transfer method and the RL training process, to facilitate better performance where dynamic adjustments are needed in addition to the visual mapping."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0
] | [
0,
0.08163265138864517,
0.2142857164144516,
0.15686273574829102,
0.2222222238779068,
0.08510638028383255,
0.16393442451953888,
0.07692307233810425,
0.3396226465702057,
0.04651162400841713,
0.25925925374031067,
0.11764705181121826,
0.1428571343421936,
0.1428571343421936,
0.0845070406794548,
0.20689654350280762,
0.20689654350280762,
0.1702127605676651,
0.1818181723356247,
0.13114753365516663,
0.178571417927742,
0.23529411852359772,
0.1875,
0.10256409645080566,
0.1764705777168274,
0.27397260069847107,
0.30188679695129395,
0.11764705181121826,
0.2380952388048172,
0.1538461446762085,
0.07407406717538834,
0.07692307233810425,
0.19354838132858276,
0.28070175647735596,
0.3404255211353302,
0.1304347813129425,
0.07692307233810425,
0.38461539149284363,
0.04999999701976776,
0.2650602459907532,
0.21052631735801697,
0.11320754140615463,
0.19672130048274994,
0.22580644488334656
] | rkxjnjA5KQ | true | [
"We propose a method of transferring knowledge between related RL tasks using visual mappings, and demonstrate its effectiveness on visual variants of the Atari Breakout game and different levels of Road Fighter, a Nintendo car driving game."
] |
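The row above proposes reusing a policy trained on a source task by translating target-task frames back into the source domain with an unaligned GAN generator. A rough sketch of that evaluation loop follows; `env_target`, `generator_t2s`, and `source_policy` are assumed interfaces (a gym-style environment, an image-to-image generator, and a frozen trained agent), not the paper's code.

```python
import numpy as np

# Zero-shot transfer sketch: map each target-domain frame to the source domain
# with a learned generator, then act with the unchanged source-task policy.
def evaluate_with_visual_mapping(env_target, generator_t2s, source_policy, episodes=5):
    returns = []
    for _ in range(episodes):
        obs, done, total = env_target.reset(), False, 0.0
        while not done:
            mapped = generator_t2s(obs)          # e.g., a Cycle-GAN / UNIT generator
            action = source_policy(mapped)       # frozen agent trained on the source task
            obs, reward, done, _ = env_target.step(action)
            total += reward
        returns.append(total)
    return float(np.mean(returns))
```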
[
"A commonplace belief in the machine learning community is that using adaptive gradient methods hurts generalization.",
"We re-examine this belief both theoretically and experimentally, in light of insights and trends from recent years.\n",
"We revisit some previous oft-cited experiments and theoretical accounts in more depth, and provide a new set of experiments in larger-scale, state-of-the-art settings.",
"We conclude that with proper tuning, the improved training performance of adaptive optimizers does not in general carry an overfitting penalty, especially in contemporary deep learning.",
"Finally, we synthesize a ``user's guide'' to adaptive optimizers, including some proposed modifications to AdaGrad to mitigate some of its empirical shortcomings.",
"Adaptive gradient methods have remained a cornerstone of optimization for deep learning.",
"They revolve around a simple idea: scale the step sizes according to the observed gradients along the execution.",
"It is generally believed that these methods enjoy accelerated optimization, and are more robust to hyperparameter choices.",
"For these reasons, adaptive optimizers have been applied across diverse architectures and domains.",
"However, in recent years, there has been renewed scrutiny on the distinction between adaptive methods and \"vanilla\" stochastic gradient descent (SGD).",
"Namely, several lines of work have purported that SGD, while often slower to converge, finds solutions that generalize better: for the same optimization error (training error), adaptive gradient methods will produce models with a higher statistical error (holdout validation error).",
"This claim, which can be shown to be true in a convex overparameterized examples, has perhaps muddled the consensus between academic research and practitioners pushing the empirical state of the art.",
"For the latter group, adaptive gradient methods have largely endured this criticism, and remain an invaluable instrument in the deep learning toolbox.",
"In this work, we revisit the generalization performance of adaptive gradient methods from an empirical perspective, and examine several often-overlooked factors which can have a significant effect on the optimization trajectory.",
"Addressing these factors, which does not require trying yet another new optimizer, can often account for what appear to be performance gaps between adaptive methods and SGD.",
"Our experiments suggest that adaptive gradient methods do not necessarily incur a generalization penalty: if an experiment indicates as such, there are a number of potential confounding factors and simple fixes.",
"We complete the paper with a discussion of inconsistent evidence for the generalization penalty of adaptive methods, from both experimental and theoretical viewpoints.",
"In this section we provide two simple examples of stochastic convex problems where it can be seen that when it comes to generalization both AdaGrad and SGD can be significantly better than the other depending on the instance.",
"Our purpose to provide both the examples is to stress our point that the issue of understanding the generalization performance of SGD vs. adaptive methods is more nuanced than what simple examples might suggest and hence such examples should be treated as qualitative indicators more for the purpose of providing intuition.",
"Indeed which algorithm will perform better on a given problem, depends on various properties of the precise instance."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0
] | [
0.13793103396892548,
0.06666666269302368,
0.060606054961681366,
0.052631575614213943,
0.0624999962747097,
0.23999999463558197,
0.06896550953388214,
0,
0,
0.05882352590560913,
0.07843136787414551,
0.04878048226237297,
0.05882352590560913,
0.1395348757505417,
0.04999999701976776,
0.2790697515010834,
0.23529411852359772,
0.08510638028383255,
0.037735845893621445,
0.06666666269302368
] | BJl6t64tvr | true | [
"Adaptive gradient methods, when done right, do not incur a generalization penalty. "
] |
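For concreteness, the "scale the step sizes according to the observed gradients" idea in the row above is the diagonal AdaGrad rule; the snippet below is a textbook version of that update, not the modifications the paper proposes.

```python
import numpy as np

# Diagonal AdaGrad: per-coordinate learning rates shrink with accumulated squared gradients.
def adagrad_step(w, grad, accum, lr=0.1, eps=1e-8):
    accum = accum + grad ** 2                    # running sum of squared gradients
    w = w - lr * grad / (np.sqrt(accum) + eps)   # larger history -> smaller step
    return w, accum
```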
[
"The ability to generalize quickly from few observations is crucial for intelligent systems.",
"In this paper we introduce APL, an algorithm that approximates probability distributions by remembering the most surprising observations it has encountered.",
"These past observations are recalled from an external memory module and processed by a decoder network that can combine information from different memory slots to generalize beyond direct recall.",
"We show this algorithm can perform as well as state of the art baselines on few-shot classification benchmarks with a smaller memory footprint. ",
"In addition, its memory compression allows it to scale to thousands of unknown labels. ",
"Finally, we introduce a meta-learning reasoning task which is more challenging than direct classification.",
"In this setting, APL is able to generalize with fewer than one example per class via deductive reasoning.",
"Consider the following sequential decision problem: at every iteration of an episode we are provided with an image of a digit (e.g. MNIST) and an unknown symbol.",
"Our goal is to output a digit Y = X + S where X is the value of the MNIST digit, and S is a numerical value that is randomly assigned to the unknown symbol at the beginning of each episode.",
"After seeing only a single instance of a symbol an intelligent system should not only be able to infer the value S of the symbol but also to correctly generalize the operation associated with the symbol to any other digit in the remaining iterations of that episode.Despite its simplicity, this task emphasizes three cognitive abilities that a generic learning algorithm should display:",
"1. the algorithm can learn a behaviour and then flexibly apply it to a range of different tasks using only a few context observations at test time;",
"2. the algorithm can memorize and quickly recall previous experiences for quick adaptation; and",
"3. the algorithm can process these recalled memories in a non-trivial manner to carry out tasks that require reasoning.The first point is commonly described as \"learning to learn\" or meta-learning, and represents a new way of looking at statistical inference BID22 BID2 BID1 .",
"Traditional neural networks are trained to approximate arbitrary probability distributions with great accuracy by parametric adaptation via gradient descent BID13 BID23 .",
"After training that probability distribution is fixed and neural networks can only generalize well when the testing distribution matches the training distribution BID16 .",
"In contrast, meta-learning systems are trained to learn an algorithm that infers a function directly from the observations it receives at test time.",
"This setup is more flexible than the traditional approach and generalizes better to unseen distributions as it incorporates new information even after the training phase is over.",
"It also allows these models to improve their accuracy as they observe more data, unlike models which learn a fixed distribution.The second requirement -being able to memorize and efficiently recall previous experience -is another active area of research.",
"Storing information in a model proves especially challenging as we move beyond small toy-examples to tasks with higher dimensional data or real-world problems.Current methods often work around this by summarizing past experiences in one lower-dimensional representation BID7 BID10 or using memory modules BID6 .",
"While the former approach can produce good results, the representation and therefore the amount of information we can ultimately encode with such models will be of a fixed and thus limited size.",
"Working with neural memory modules, on the other hand, presents its own challenges as learning to store and keep the right experiences is not trivial.",
"In order to successfully carry out the task defined at the beginning of this paper a model should learn to capture information about a flexible and unbounded number of symbols observed in an episode without storing redundant information.Finally, reasoning requires processing recalled experiences in order to apply the information they contain to the current data point being processed.",
"In simple cases such as classification, it is enough to simply recall memories of similar data points and directly infer the current class by combining them using a weighted average or a simple kernel BID26 BID24 , which limits the models to performing interpolation.",
"In the example mentioned above, more complex reasoning is necessary for human-level generalisation.In this paper we introduce Approximate Posterior Learning (APL, pronounced like the fruit), a self-contained model and training procedure that address these challenges.",
"APL learns to carry out few-shot approximation of new probability distributions and to store only as few context points as possible in order to carry out the current task.",
"In addition it learns how to process recalled experiences to carry out tasks of varying degrees of complexity.",
"This sequential algorithm was inspired by Bayesian posterior updating BID8 in the sense that the output probability distribution is updated as more data is observed.We demonstrate that APL can deliver accuracy comparable to other state-of-the-art algorithms in standard few-shot classification benchmarks while being more data efficient.",
"We also show it can scale to a significantly larger number of classes while retaining good performance.",
"Finally, we apply APL to the reasoning task introduced as motivation and verify that it can perform the strong generalization we desire.The main contributions of this paper are:• A simple memory controller design which uses a surprise-based signal to write the most predictive items to memory.",
"By not needing to learn what to write, we avoid costly backpropagation through memory which makes the setup easier and faster to train.",
"This design also minimizes how much data is stored, making our method more memory efficient.•",
"An integrated external and working memory architecture which can take advantage of the best of both worlds: scalability and sparse access provided by the working memory; and all-to-all attention and reasoning provided by a relational reasoning module.•",
"A training setup which steers the system towards learning an algorithm which approximates the posterior without backpropagating through the whole sequence of data in an episode.",
"We introduced a self-contained system which can learn to approximate a probability distribution with as little data and as quickly as it can.",
"This is achieved by putting together the training setup which encourages adaptation; an external memory which allows the system to recall past events; a writing system to adapt the memory to uncertain situations; and a working memory architecture which can efficiently compare items retrieved from memory to produce new predictions.We showed that the model can:• Reach state of the art accuracy with a smaller memory footprint than other meta-learning models by efficiently choosing which data points to remember.•",
"Scale to very large problem sizes thanks to the use of an external memory module with sparse access.•",
"Perform fewer than 1-shot generalization thanks to relational reasoning across neighbors."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0
] | [
0.21052631735801697,
0.260869562625885,
0.23076923191547394,
0.1249999925494194,
0,
0.1538461446762085,
0,
0.1599999964237213,
0.18518517911434174,
0.054794516414403915,
0.23999999463558197,
0.15789473056793213,
0.1492537260055542,
0.04347825422883034,
0.09090908616781235,
0.25,
0.19999998807907104,
0.09677419066429138,
0.1492537260055542,
0.1538461446762085,
0.08163265138864517,
0.25,
0.1846153736114502,
0.16949151456356049,
0.12244897335767746,
0,
0.12121211737394333,
0.0952380895614624,
0.1515151411294937,
0.1304347813129425,
0.04878048226237297,
0.18518517911434174,
0.12765957415103912,
0.27272728085517883,
0.21176470816135406,
0.04651162400841713,
0
] | ByeSdsC9Km | true | [
"We introduce a model which generalizes quickly from few observations by storing surprising information and attending over the most relevant data at each time point."
] |
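The APL row above combines a sparse external memory of "surprising" observations with a decoder that reasons over recalled slots. The sketch below illustrates one read/write step under those assumptions; `decoder_loss`, the memory layout, and the surprise threshold are hypothetical placeholders, not the authors' implementation.

```python
import numpy as np

# One APL-style step: recall the k nearest stored memories for the current query,
# then write the query back only if it was surprising (poorly predicted).
def apl_step(memory_keys, memory_vals, query, label, decoder_loss, k=5, threshold=1.0):
    recalled = []
    if memory_keys:
        dists = np.linalg.norm(np.stack(memory_keys) - query, axis=1)
        for i in np.argsort(dists)[:k]:              # sparse read of the closest slots
            recalled.append((memory_keys[i], memory_vals[i]))
    surprise = decoder_loss(query, recalled, label)  # e.g., negative log-likelihood
    if surprise > threshold:                         # surprise-based write rule
        memory_keys.append(query)
        memory_vals.append(label)
    return recalled, surprise
```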
[
"Recent advances in recurrent neural nets (RNNs) have shown much promise in many applications in natural language processing.",
"For most of these tasks, such as sentiment analysis of customer reviews, a recurrent neural net model parses the entire review before forming a decision.",
"We argue that reading the entire input is not always necessary in practice, since a lot of reviews are often easy to classify, i.e., a decision can be formed after reading some crucial sentences or words in the provided text.",
"In this paper, we present an approach of fast reading for text classification.",
"Inspired by several well-known human reading techniques, our approach implements an intelligent recurrent agent which evaluates the importance of the current snippet in order to decide whether to make a prediction, or to skip some texts, or to re-read part of the sentence.",
"Our agent uses an RNN module to encode information from the past and the current tokens, and applies a policy module to form decisions.",
"With an end-to-end training algorithm based on policy gradient, we train and test our agent on several text classification datasets and achieve both higher efficiency and better accuracy compared to previous approaches. \n",
"Recurrent neural nets (RNNs), including GRU nets BID6 and LSTM nets BID12 , have been increasingly applied to many problems in natural language processing.",
"Most of the problems can be divided into two categories: sequence to sequence (seq2seq) tasks BID29 ) (e.g., language modeling BID2 BID20 , machine translation BID13 , conversational/dialogue modeling BID26 , question answering BID11 BID17 , and document summarization BID21 ); and the classification tasks (e.g., part-of-speech tagging BID23 , chunking, named entity recognition BID7 , sentimental analysis BID28 , and document classification BID14 BID25 ).",
"To solve these problems, models often need to read every token or word of the text from beginning to the end, which is necessary for most seq2seq problems.",
"However, for classification problems, we do not have to treat each individual word equally, since certain words or chunks are more relevant to the classification task at hand.",
"For instance, for sentiment analysis it is sufficient to read the first half of a review like \"this movie is amazing\" or \"it is the best I have ever seen,\" to provide an answer even without reading the rest of the review.",
"In other cases, we may want to skip or skim some text without carefully checking it.",
"For example, sentences such as \"it's worth to try\" are usually more important than irrelevant text such as \"we got here while it's still raining outside\" or \"I visited on Saturday.\"",
"On the other hand, sometimes, we want to re-read some sentences to figure out the actual hidden message of the text.",
"All of these techniques enable us to achieve fast and accurate reading.",
"Similarly, we expect RNN models to intelligently determine the importance or the relevance of the current sentence in order to decide whether to make a prediction, whether to skip some texts, or whether to re-read the current sentence.In this paper, we aim to augment existing RNN models by introducing efficient partial reading for classification, while maintaining a higher or comparable accuracy compared to reading the full text.To do so, we introduce a recurrent agent which uses an RNN module to encode information from the past and the current tokens, and applies a policy module to decide what token to read next (e.g., rereading the current token, reading the next one, or skipping the next few tokens) or whether the model should stop reading and form a decision.",
"To encourage fast and accurate reading, we incorporate both classification accuracy and the computational cost as a reward function to score classification or other actions made by the agent during reading.",
"We expect that our agent will be able to achieve fast reading for classification with both high computational efficiency and good classification performance.",
"To train this model, we develop an end-to-end approach based on the policy gradient method which backpropagates the reward signal into both the policy module (also including the classification policy) and the recurrent encoder.We evaluate our approach on four different sentiment analysis and document topic classification datasets.",
"By comparing to the standard RNN models and a recent LSTM-skip model which implements a skip action BID33 , we find that our approach achieves both higher efficiency and better accuracy.",
"We develop an end-to-end trainable approach for skimming, rereading and early stopping applicable to classification tasks.",
"By mimicking human fast reading, we introduce a policy module to decide what token to read next (e.g., rereading the current token, reading the next one, or skipping the next few tokens) or whether the model should stop reading and form a decision.",
"To encourage fast and accurate reading, we incorporate both classification accuracy and the computational cost as a reward function to score classification or other actions made by the agent during reading.",
"An endto-end training algorithm based on the policy gradient method backpropagates the reward signal into both the policy module (also including the classification policy) and the recurrent encoder.",
"We demonstrate the efficacy of the proposed approach on four different datasets and demonstrate improvements for both accuracy and computational performance."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0
] | [
0,
0,
0.072727270424366,
0.2666666507720947,
0.11320754140615463,
0.1621621549129486,
0.25531914830207825,
0.10256409645080566,
0.11764705181121826,
0.09302324801683426,
0.1395348757505417,
0.11764705181121826,
0.060606054961681366,
0.04255318641662598,
0.05714285373687744,
0.13793103396892548,
0.09999999403953552,
0.13333332538604736,
0.25641024112701416,
0.25,
0.1304347813129425,
0.9696969389915466,
0.11320754140615463,
0.13333332538604736,
0.09999999403953552,
0.22857142984867096
] | ryZ8sz-Ab | true | [
"We develop an end-to-end trainable approach for skimming, rereading and early stopping applicable to classification tasks. "
] |
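The reading agent in the row above interleaves a recurrent encoder with a policy that chooses to stop, reread, or skip ahead. A schematic version of that loop follows; `encoder`, `policy`, and `classifier` are assumed callables standing in for the trained modules, and the action encoding is illustrative only.

```python
# Skim / reread / skip / stop loop: the policy decides, per step, whether to classify
# now, re-encode the current token, or jump ahead; interfaces are hypothetical.
def fast_read(tokens, encoder, policy, classifier, max_steps=200):
    state, i = None, 0
    for _ in range(max_steps):
        if i >= len(tokens):
            break
        state = encoder(tokens[i], state)    # encode past context plus the current token
        action, jump = policy(state)         # action in {"stop", "reread", "skip"}
        if action == "stop":
            return classifier(state)         # early decision without reading the rest
        i += 0 if action == "reread" else jump
    return classifier(state)                 # reached the end (or budget): classify anyway
```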
[
"Reinforcement learning agents need to explore their unknown environments to solve the tasks given to them.",
"The Bayes optimal solution to exploration is intractable for complex environments, and while several exploration methods have been proposed as approximations, it remains unclear what underlying objective is being optimized by existing exploration methods, or how they can be altered to incorporate prior knowledge about the task.",
"Moreover, it is unclear how to acquire a single exploration strategy that will be useful for solving multiple downstream tasks.",
"We address these shortcomings by learning a single exploration policy that can quickly solve a suite of downstream tasks in a multi-task setting, amortizing the cost of learning to explore.",
"We recast exploration as a problem of State Marginal Matching (SMM), where we aim to learn a policy for which the state marginal distribution matches a given target state distribution, which can incorporate prior knowledge about the task.",
"We optimize the objective by reducing it to a two-player, zero-sum game between a state density model and a parametric policy.",
"Our theoretical analysis of this approach suggests that prior exploration methods do not learn a policy that does distribution matching, but acquire a replay buffer that performs distribution matching, an observation that potentially explains these prior methods' success in single-task settings.",
"On both simulated and real-world tasks, we demonstrate that our algorithm explores faster and adapts more quickly than prior methods.",
"Reinforcement learning (RL) algorithms must be equipped with exploration mechanisms to effectively solve tasks with limited reward signals.",
"These tasks arise in many real-world applications where providing human supervision is expensive.",
"The inability of current RL algorithms to adequately explore limits their applicability to long-horizon control tasks.",
"A wealth of prior work has studied exploration for RL.",
"While, in theory, the Bayes-optimal exploration strategy is optimal, it is intractable to compute exactly, motivating work on tractable heuristics for exploration.",
"Exploration methods based on random actions have limited ability to cover a wide range of states.",
"More sophisticated techniques, such as intrinsic motivation, accelerate learning in the single-task setting.",
"However, these methods have two limitations.",
"First, they do not explicitly define an objective to quantify \"good exploration,\" but rather argue that exploration arises implicitly through some iterative procedure.",
"Lacking a well-defined optimization objective, it remains challenging to understand what these methods are doing and why they work.",
"Similarly, the lack of a metric to quantify exploration, even if only for evaluation, makes it challenging to compare exploration methods and assess progress in this area.",
"The second limitation is that these methods target the single-task setting.",
"Because these methods aim to converge to the optimal policy for a particular task, it is challenging to repurpose these methods to solve multiple tasks.",
"We address these shortcomings by considering a multi-task setting, where many different reward functions can be provided for the same set of states and dynamics.",
"Rather than exploring from scratch for each task, we aim to learn a single, task-agnostic exploration policy that can be adapted to many possible downstream reward functions, amortizing the cost of learning to explore.",
"This exploration policy can be viewed as a prior on the policy for solving downstream tasks.",
"Learning will consist of two phases: during training, we acquire this task-agnostic exploration policy; during testing, we use this exploration policy to quickly explore and maximize the task reward.",
"Learning a single exploration policy is considerably more difficult than doing exploration throughout the course of learning a single task.",
"The latter is done by intrinsic motivation (Pathak et al., 2017; Tang et al., 2017; Oudeyer et al., 2007) and count-based exploration methods (Bellemare et al., 2016) , which can effectively explore to find states with high reward, at which point the agent can decrease exploration and increase exploitation of those high-reward states.",
"While these methods perform efficient exploration for learning a single task, the policy at any particular iteration is not a good exploration policy.",
"For example, the final policy at convergence would only visit the high-reward states discovered for the current task.",
"What objective should be optimized to obtain a good exploration policy?",
"We recast exploration as a problem of State Marginal Matching: given a desired state distribution, we learn a mixture of policies for which the state marginal distribution matches this desired distribution.",
"Without any prior information, this objective reduces to maximizing the marginal state entropy H [s] , which encourages the policy to visit as many states as possible.",
"The distribution matching objective also provides a convenient mechanism to incorporate prior knowledge about the task, whether in the form of safety constraints that the agent should obey; preferences for some states over other states; reward shaping; or the relative importance of each state dimension for a particular task.",
"We also propose an algorithm to optimize the State Marginal Matching (SMM) objective.",
"First, we reduce the problem of SMM to a two-player, zero-sum game between a policy player and a density player.",
"We find a Nash Equilibrium for this game using fictitious play (Brown, 1951) , a classic procedure from game theory.",
"Our resulting algorithm iteratively fits a state density model and then updates the policy to visit states with low density under this model.",
"Our analysis of this approach sheds light on prior work on exploration.",
"In particular, while the policy learned by existing exploration algorithms does not perform distribution matching, the replay buffer does, an observation that potentially explains the success of prior methods.",
"On both simulated and real-world tasks, we demonstrate that our algorithm explores more effectively and adapts more quickly to new tasks than state-of-the-art baselines.",
"In this paper, we introduced a formal objective for exploration.",
"While it is often unclear what existing exploration algorithms will converge to, our State Marginal Matching objective has a clear solution:",
"at convergence, the policy should visit states in proportion to their density under a target distribution.",
"Not only does this objective encourage exploration, it also provides human users with a flexible mechanism to bias exploration towards states they prefer and away from dangerous states.",
"Upon convergence, the resulting policy can thereafter be used as a prior in a multi-task setting, amortizing exploration and enabling faster adaptation to new, potentially sparse, reward functions.",
"The algorithm we proposed looks quite similar to previous exploration methods based on prediction error, suggesting that those methods are also performing some form of distribution matching.",
"However, by deriving our method from first principles, we note that these prior methods omit a crucial historical averaging step, without which the algorithm is not guaranteed to converge.",
"Experiments on both simulated and real-world tasks demonstrated how SMM learns to explore, enabling an agent to efficiently explore in new tasks provided at test time.",
"In future work, we aim to study connections between inverse RL, MaxEnt RL and state marginal matching, all of which perform some form of distribution matching.",
"Empirically, we aim to scale to more complex tasks by parallelizing the training of all mixture components simultaneously.",
"Broadly, we expect the state distribution matching problem formulation to enable the development of more effective and principled RL methods that reason about distributions rather than individual states."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0,
0.07017543166875839,
0.11764705181121826,
0.25,
0.3404255211353302,
0.12121211737394333,
0.2083333283662796,
0,
0.06451612710952759,
0.07407406717538834,
0.13793103396892548,
0.25,
0.11764705181121826,
0.19999998807907104,
0.14814814925193787,
0,
0.05405404791235924,
0.060606054961681366,
0.19999998807907104,
0,
0.05882352590560913,
0.20512820780277252,
0.1304347813129425,
0.20689654350280762,
0.10256409645080566,
0.19354838132858276,
0.1090909019112587,
0.11764705181121826,
0.06666666269302368,
0.1599999964237213,
0.41025641560554504,
0.15789473056793213,
0.24561403691768646,
0.07407406717538834,
0.19354838132858276,
0.1249999925494194,
0.11428570747375488,
0.1599999964237213,
0.1463414579629898,
0,
0.1666666567325592,
0.11428570747375488,
0.2666666507720947,
0.1463414579629898,
0.19512194395065308,
0.19999998807907104,
0.04651162400841713,
0.052631575614213943,
0.25641024112701416,
0.06451612710952759,
0.2926829159259796
] | Hkla1eHFvS | true | [
"We view exploration in RL as a problem of matching a marginal distribution over states."
] |
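The State Marginal Matching row above reduces exploration to a two-player game: a density player fits q(s) to visited states, and the policy player is rewarded with log p*(s) − log q(s). The sketch below shows one round of that loop; `density_model`, `log_p_star`, and `rl_update` are assumed components, and the historical averaging over past policies and densities is omitted for brevity.

```python
import numpy as np

# One fictitious-play round of State Marginal Matching (simplified; no historical averaging).
def smm_round(policy, density_model, replay_states, log_p_star, rl_update):
    density_model.fit(np.stack(replay_states))        # density player: fit q(s) to visited states

    def reward_fn(state):
        # r(s) = log p*(s) - log q(s); maximizing it pushes the state marginal toward p*.
        return log_p_star(state) - density_model.score(state)

    return rl_update(policy, reward_fn)               # policy player: any RL improvement step
```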
[
"The effectiveness of Convolutional Neural Networks stems in large part from their ability to exploit the translation invariance that is inherent in many learning problems.",
"Recently, it was shown that CNNs can exploit other invariances, such as rotation invariance, by using group convolutions instead of planar convolutions.",
"However, for reasons of performance and ease of implementation, it has been necessary to limit the group convolution to transformations that can be applied to the filters without interpolation.",
"Thus, for images with square pixels, only integer translations, rotations by multiples of 90 degrees, and reflections are admissible.\n\n",
"Whereas the square tiling provides a 4-fold rotational symmetry, a hexagonal tiling of the plane has a 6-fold rotational symmetry.",
"In this paper we show how one can efficiently implement planar convolution and group convolution over hexagonal lattices, by re-using existing highly optimized convolution routines.",
"We find that, due to the reduced anisotropy of hexagonal filters, planar HexaConv provides better accuracy than planar convolution with square filters, given a fixed parameter budget.",
"Furthermore, we find that the increased degree of symmetry of the hexagonal grid increases the effectiveness of group convolutions, by allowing for more parameter sharing.",
"We show that our method significantly outperforms conventional CNNs on the AID aerial scene classification dataset, even outperforming ImageNet pre-trained models.",
"For sensory perception tasks, neural networks have mostly replaced handcrafted features.",
"Instead of defining features by hand using domain knowledge, it is now possible to learn them, resulting in improved accuracy and saving a considerable amount of work.",
"However, successful generalization is still critically dependent on the inductive bias encoded in the network architecture, whether this bias is understood by the network architect or not.The canonical example of a successful network architecture is the Convolutional Neural Network (CNN, ConvNet).",
"Through convolutional weight sharing, these networks exploit the fact that a given visual pattern may appear in different locations in the image with approximately equal likelihood.",
"Furthermore, this translation symmetry is preserved throughout the network, because a translation of the input image leads to a translation of the feature maps at each layer: convolution is translation equivariant.Very often, the true label function (the mapping from image to label that we wish to learn) is invariant to more transformations than just translations.",
"Rotations are an obvious example, but standard translational convolutions cannot exploit this symmetry, because they are not rotation equivariant.",
"As it turns out, a convolution operation can be defined for almost any group of transformation -not just translations.",
"By simply replacing convolutions with group convolutions (wherein filters are not Figure 1 : Hexagonal G-CNN.",
"A p6 group convolution is applied to a single-channel hexagonal image f and filter ψ 1 , producing a single p6 output feature map f g ψ 1 with 6 orientation channels.",
"This feature map is then group-convolved again with a p6 filter ψ 2 .",
"The group convolution is implemented as a Filter Transformation (FT) step, followed by a planar hexagonal convolution.",
"As shown here, the filter transform of a planar filter involves only a rotation, whereas the filter transform for a filter on the group p6 involves a rotation and orientation channel cycling.",
"Note that in general, the orientation channels of p6 feature maps will not be rotated copies of each other, as happens to be the case in this figure.",
"just shifted but transformed by a larger group; see Figure 1 ), convolutional networks can be made equivariant to and exploit richer groups of symmetries BID0 .",
"Furthermore, this technique was shown to be more effective than data augmentation.Although the general theory of such group equivariant convolutional networks (G-CNNs) is applicable to any reasonably well-behaved group of symmetries (including at least all finite, infinite discrete, and continuous compact groups), the group convolution is easiest to implement when all the transformations in the group of interest are also symmetries of the grid of pixels.",
"For this reason, G-CNNs were initially implemented only for the discrete groups p4 and p4m which include integer translations, rotations by multiples of 90 degrees, and, in the case of p4m, mirror reflections -the symmetries of a square lattice.The main hurdle that stands in the way of a practical implementation of group convolution for a continuous group, such as the roto-translation group SE(2), is the fact that it requires interpolation in order to rotate the filters.",
"Although it is possible to use bilinear interpolation in a neural network BID10 , it is somewhat more difficult to implement, computationally expensive, and most importantly, may lead to numerical approximation errors that can accumulate with network depth.",
"This has led us to consider the hexagonal grid, wherein it is possible to rotate a filter by any multiple of 60 degrees, without interpolation.",
"This allows use to define group convolutions for the groups p6 and p6m, which contain integer translations, rotations with multiples of 60 degrees, and mirroring for p6m.To our surprise, we found that even for translational convolution, a hexagonal pixelation appears to have significant advantages over a square pixelation.",
"Specifically, hexagonal pixelation is more efficient for signals that are band limited to a circular area in the Fourier plane BID17 , and hexagonal pixelation exhibits improved isotropic properties such as twelve-fold symmetry and six-connectivity, compared to eight-fold symmetry and four-connectivity of square pixels BID15 BID2 .",
"Furthermore, we found that using small, approximately round hexagonal filters with 7 parameters works better than square 3 × 3 filters when the number of parameters is kept the same.As hypothesized, group convolution is also more effective on a hexagonal lattice, due to the increase in weight sharing afforded by the higher degree of rotational symmetry.",
"Indeed, the general pattern we find is that the larger the group of symmetries being exploited, the better the accuracy: p6-convolution outperforms p4-convolution, which in turn outperforms ordinary translational convolution.In order to use hexagonal pixelations in convolutional networks, a number of challenges must be addressed.",
"Firstly, images sampled on a square lattice need to be resampled on a hexagonal lattice.",
"This is easily achieved using bilinear interpolation.",
"Secondly, the hexagonal images must be stored in a way that is both memory efficient and allows for a fast implementation of hexagonal convolution.",
"To this end, we review various indexing schemes for the hexagonal lattice, and show that for some of them, we can leverage highly optimized square convolution routines to perform the hexagonal convolution.",
"Finally, we show how to efficiently implement the filter transformation step of the group convolution on a hexagonal lattice.We evaluate our method on the CIFAR-10 benchmark and on the Aerial Image Dataset (AID) BID21 .",
"Aerial images are one of the many image types where the label function is invariant to rotations: One expects that rotating an aerial image does not change the label.",
"In situations where the number of examples is limited, data efficient learning is important.",
"Our experiments demonstrate that group convolutions systematically improve performance.",
"The method outperforms the baseline model pretrained on ImageNet, as well as comparable architectures with the same number of parameters.",
"Source code of G-HexaConvs is available on Github: https://github.com/ehoogeboom/hexaconv.The remainder of this paper is organized as follows: In Section 2 we summarize the theory of group equivariant networks.",
"Section 3 provides an overview of different coordinate systems on the hexagonal grid, Section 4 discusses the implementation details of the hexagonal G-convolutions, in Section 5 we introduce the experiments and present results, Section 6 gives an overview of other related work after which we discuss our findings and conclude."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0,
0.060606054961681366,
0.05405404791235924,
0,
0.14814814925193787,
0.11428570747375488,
0.1621621549129486,
0.12121211737394333,
0.12121211737394333,
0.08695651590824127,
0.052631575614213943,
0.13333332538604736,
0.1111111044883728,
0.07547169178724289,
0.06666666269302368,
0.12903225421905518,
0.07407406717538834,
0.1538461446762085,
0.07999999821186066,
0.2222222238779068,
0.1764705777168274,
0,
0.15789473056793213,
0.0952380895614624,
0.054794516414403915,
0.13333332538604736,
0.1111111044883728,
0.1090909093618393,
0.07692307233810425,
0.13333332538604736,
0.15686273574829102,
0.25,
0,
0.11764705181121826,
0.05128204822540283,
0.2380952388048172,
0,
0,
0.0952380895614624,
0.06666666269302368,
0.1538461446762085,
0.12244897335767746
] | r1vuQG-CW | true | [
"We introduce G-HexaConv, a group equivariant convolutional neural network on hexagonal lattices."
] |
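The HexaConv row above notes that, with a suitable indexing scheme, hexagonal convolution can reuse highly optimized square-convolution routines. One common way to do this (an illustrative assumption, not necessarily the paper's exact indexing) is to store the image in axial coordinates and zero out the two kernel corners that are not hexagonal neighbours:

```python
import numpy as np
from scipy.signal import convolve2d

# 7-point hexagonal filter via an ordinary 3x3 square convolution on an image stored
# in axial coordinates; which two corners are masked depends on the chosen convention.
def hex_conv(image_axial, kernel_3x3):
    mask = np.array([[0, 1, 1],
                     [1, 1, 1],
                     [1, 1, 0]], dtype=kernel_3x3.dtype)
    return convolve2d(image_axial, kernel_3x3 * mask, mode="same")
```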
[
"Deep Convolutional Neural Networks (CNNs) have been repeatedly shown to perform well on image classification tasks, successfully recognizing a broad array of objects when given sufficient training data.",
"Methods for object localization, however, are still in need of substantial improvement.",
"Common approaches to this problem involve the use of a sliding window, sometimes at multiple scales, providing input to a deep CNN trained to classify the contents of the window.",
"In general, these approaches are time consuming, requiring many classification calculations.",
"In this paper, we offer a fundamentally different approach to the localization of recognized objects in images.",
"Our method is predicated on the idea that a deep CNN capable of recognizing an object must implicitly contain knowledge about object location in its connection weights.",
"We provide a simple method to interpret classifier weights in the context of individual classified images.",
"This method involves the calculation of the derivative of network generated activation patterns, such as the activation of output class label units, with regard to each in- put pixel, performing a sensitivity analysis that identifies the pixels that, in a local sense, have the greatest influence on internal representations and object recognition.",
"These derivatives can be efficiently computed using a single backward pass through the deep CNN classifier, producing a sensitivity map of the image.",
"We demonstrate that a simple linear mapping can be learned from sensitivity maps to bounding box coordinates, localizing the recognized object.",
"Our experimental results, using real-world data sets for which ground truth localization information is known, reveal competitive accuracy from our fast technique.",
"Deep Convolutional Neural Networks (CNNs) have been shown to be effective at image classification, accurately performing object recognition even with thousands of object classes when trained on a sufficiently rich data set of labeled images BID14 .",
"One advantage of CNNs is their ability to learn complete functional mappings from image pixels to object categories, without any need for the extraction of hand-engineered image features BID21 .",
"To facilitate learning through stochastic gradient descent, CNNs are (at least approximately) differentiable with regard to connection weight parameters.Image classification, however, is only one of the problems of computer vision.",
"In the task of image classification, each image has a single label, associated with the class identity of the main object in the image, and the goal is to assign correct labels in a manner that generalizes to novel images.",
"This can be accomplished by training a machine learning classifier, such as a CNN, on a large data set of labeled images BID5 .",
"In the object localization task, in comparison, the output for a given image is not a class label but the locations of a specified number of objects in the image, usually encoded as bounding boxes.",
"Evaluation of an object localization system generally requires ground truth bounding boxes to compare to the system's output.",
"The detection task is more difficult than the localization task, as the number of objects are not predetermined BID21 .In",
"this paper, we focus on object localization, identifying the position in the image of a recognized object. As",
"is common in the localization literature, position information is output in the form of a bounding box. Previously",
"developed techniques for accomplishing this task generally involve searching the image for the object, considering many candidate bounding boxes with different sizes and locations, sometimes guided by an auxilliary algorithm for heuristically identifying regions of interest BID21 ; BID10 ; BID13 . For each candidate",
"location, the sub-image captured by the bounding box is classified for object category, with the final output Figure 1 : Examples of sensitivity maps, displaying the sensitivity of network internal representations to individual pixels, providing information about the locations of the main objects in the source images.bounding box either being the specific candidate region classified as the target object with the highest level of certainty or some heuristic combination of neighboring or overlapping candidate regions with high classification certainty. These approaches tend",
"to be time consuming, often requiring deep CNN classification calculations of many candidate regions at multiple scales. Efforts to speed these",
"methods mostly focus on reducing the number of regions considered, typically by using some adjunct heuristic region proposal algorithm BID10 ; BID17 ; BID13 . Still, the number of considered",
"regions is often reported to be roughly 2,000 per image. While these approaches can be fairly",
"accurate, their slowness limits their usefulness, particularly for online applications.A noteworthy alternative approach is to directly train a deep CNN to produce outputs that match ground truth localization bounding boxes, using a large image data set that provides both category and localization information for each image. It appears as if some form of this method",
"was used with AlexNet BID14 , though details concerning localization, rather than image classification, are difficult to discern from the published literature. A natural approach would be to cast the learning",
"of bounding boxes as a simple regression problem, with targets being the four coordinates that specify a bounding box (e.g., coordinates of upper-left and lower-right corners, or region center coordinates along with region width and height). It is reasonable to consider sharing early layers",
"of a deep CNN, such as those performing convolution and max pooling, between both an image classification network and an object localization network. Indeed, taking such a multitask learning approach",
"BID2 can allow for both object category and object location training data to shape connection weights throughout the network. Thus, the deep CNN would have \"two heads\", one for",
"image classification, using a classification cross-entropy loss function, and one for object localization, reducing the 2 norm between ground truth and predicted bounding box coordinates BID14 . While this approach can produce a network that quickly",
"outputs location information, extensive training on large data sets containing ground truth bounding box information is necessary to produce good generalization.In this paper, we introduce an approach to object localization that is both very fast and robust in the face of limited ground truth bounding box training data. This approach is rooted in the assertion that any deep",
"CNN for image classification must contain, implicit in its connection weights, knowledge about the location of recognized objects BID20 . The goal, then, is to interpret the flow of activation",
"in an object recognition network when it is performing image classification so as to extract information about object location. Furthermore, the goal is to do this quickly. Thus, this",
"approach aims to leverage location knowledge",
"that is already latent in extensively trained and tuned image classification networks, without requiring a separate learning process for localization.Our method makes use of the notion of a sensitivity analysis BID26 . We propose estimating the sensitivity of the category outputs",
", or activation patterns at internal network layers, of an image classification CNN to variance in each input pixel, given a specific input image. The result is a numeric value for each pixel in the input image",
"that captures the degree to which small changes in that pixel (locally, around its current value) give rise to large changes in the output category. Together, these numeric values form a sensitivity map of the image",
", encoding image regions that are important for the current classification. Our proposed measure of sensitivity is the partial derivative of",
"activity with regard to each pixel value, evaluated for the current image. For a deep CNN that formally embodies a differentiable mapping (",
"at least approximately) from image pixels to output categories, this partial derivative can be quickly calculated. While many tools currently exist for efficiently calculating such",
"derivatives, we provide a simple algorithm that computes these values through a single backward pass through the image classification network, similar to that used to calculate unit error (delta) values in the backpropagation of error learning algorithm BID18 . Thus, we can generate a sensitivity map for an image in about the",
"same amount of time as it takes the employed image classification network to produce an output. Some example sensitivity maps are shown in Figure 1 .The idea of",
"using sensitivity information, like that in our sensitivity",
"maps, for a variety of tasks, including localization, has previously appeared in the literature BID24 ; BID28 BID20 . Indeed, some of these past efforts have used more sophisticated measures",
"of sensitivity. In this paper, we show that even our very simple sensitivity measure can",
"produce strong localization performance, and it can do so quickly, without any modifications to the classification network, and even for object categories on which the classification network was not trained. The relationship of the results reported here to previously reported work",
"is discussed further in Section 4.As previously mentioned, object localization methods typically encode object location as a bounding box. Since our sensitivity maps encode location differently, in terms of pixels",
", we propose learning a simple linear mapping from sensitivity maps to bounding box coordinates, allowing our method to output a bounding box for each classified image. We suggest that this linear mapping can be robustly learned from a relatively",
"small training set of images with ground truth bounding boxes, since the sensitivity maps form a much more simple input than the original images.The primary contributions of this paper may be summarized as follows:• We propose a new general approach to performing object localization, interpreting a previously trained image classification network by performing a sensitivity analysis, identifying pixels to which the category output, or a more general internal representation, is particularly sensitive.• We demonstrate how a linear function from the resulting sensitivity maps to",
"object location bounding box coordinates may be learned from training images containing ground truth location information.• We provide a preliminary assessment of our approach, measuring object localization",
"performance on the ImageNet and PASCAL VOC data sets using the VGG16 image classification CNN, showing strong accuracy while maintaining short computation times.",
"We have presented an approach to object localization based on performing a sensitivity analysis of a previously trained image classification deep CNN.",
"Our method is fast enough to be used in online applications, and it demonstrates accuracy that is superior to some methods that are much slower.",
"It is likely that even better accuracy could be had by incorporating sensitivity analysis information into a more sophisticated bounding box estimator.As previously noted, the idea of using sensitivity information has appeared in previously published work.",
"There are ways in which the results reported in this paper are distinct, however.",
"We have moved beyond visualization of network function using sensitivity (or saliency) BID24 to performing direct comparisons between different methods on the localization task.",
"We have shown that using a fast and simple measure of sensitivity can produce comparable performance to that of much slower methods.",
"Our approach produces good generalization without modifying the classification network, as is done in Class Activation Mapping (CAM) BID28 .",
"With our PASCAL VOC 2007 results, we have shown that our approach can successfully be applied to attention maps, even when the image contains objects belonging to a class on which the classification network was not trained, distinguishing it from Grad-CAM Selvaraju et al. (2016) .",
"In short, we have demonstrated the power of a simple sensitivity measure for performing localization.Note that our approach may be used with image classifiers other than CNNs.",
"The proposed sensitivity analysis can be conducted on any differentiable classifier, though performance will likely depend on classifer specifics.",
"Indeed, at a substantial time cost, even a black box classifier could be approximately analyzed by making small changes to pixels and observing the effects on activation patterns.The proposed approach is quite general.",
"Indeed, we are currently working on applying sensitivity analysis to deep networks trained on other tasks, with the goal of interpreting network performance on the current input in a useful way.",
"Thus, we see a potentially large range of uses for sensitivity analysis in neural network applications."
] | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.08695651590824127,
0.06666666269302368,
0.1904761791229248,
0,
0.17142856121063232,
0.27272728085517883,
0.11764705181121826,
0.19354838132858276,
0.25641024112701416,
0.1538461446762085,
0.04999999329447746,
0.11538460850715637,
0.09090908616781235,
0.0416666604578495,
0.20408162474632263,
0.10256409645080566,
0.1304347813129425,
0.11428570747375488,
0.05405404791235924,
0.23529411852359772,
0.12121211737394333,
0.07017543166875839,
0.07692307233810425,
0.10526315122842789,
0.1395348757505417,
0,
0.17910447716712952,
0.08510638028383255,
0.1071428507566452,
0.23255813121795654,
0.22727271914482117,
0.23529411852359772,
0.1875,
0.08888888359069824,
0.09302324801683426,
0.0833333283662796,
0.11320754140615463,
0.1666666567325592,
0.0833333283662796,
0.05405404791235924,
0.19999998807907104,
0,
0.0714285671710968,
0.04444443807005882,
0.07999999821186066,
0.08695651590824127,
0,
0.15094339847564697,
0.08888888359069824,
0.03999999538064003,
0.14117646217346191,
0.09090908616781235,
0.19999998807907104,
0.3589743673801422,
0.04999999329447746,
0.11538460850715637,
0.06666666269302368,
0.1428571343421936,
0.15789473056793213,
0.10810810327529907,
0.13333332538604736,
0.1304347813129425,
0.0555555522441864,
0.19607841968536377,
0.21739129722118378,
0.05882352590560913
] | rkzUYjCcFm | true | [
"Proposing a novel object localization(detection) approach based on interpreting the deep CNN using internal representation and network's thoughts"
] |
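The record above (rkzUYjCcFm) describes computing a sensitivity map as the derivative of a class score with respect to each input pixel via a single backward pass, then learning a linear mapping from sensitivity maps to bounding-box coordinates. Below is a minimal PyTorch sketch of that pipeline under stated assumptions: the pretrained classifier `model`, the channel reduction (max absolute gradient), and the least-squares fit are illustrative choices, not details confirmed by the record.

```python
import torch

def sensitivity_map(model, image):
    """One backward pass: gradient of the top class score w.r.t. each input pixel."""
    model.eval()
    x = image.clone().unsqueeze(0).requires_grad_(True)   # (1, C, H, W)
    top_score = model(x).max()                            # score of the predicted class
    top_score.backward()
    # Reduce the colour channels with the max absolute gradient (an assumed choice).
    return x.grad.abs().max(dim=1).values.squeeze(0)      # (H, W) sensitivity map

def fit_bbox_regressor(maps, boxes):
    """Least-squares linear map from flattened sensitivity maps to box coordinates.
    `boxes` is an (N, 4) float tensor of ground-truth bounding-box coordinates."""
    X = torch.stack([m.flatten() for m in maps])          # (N, H*W)
    X = torch.cat([X, torch.ones(X.size(0), 1)], dim=1)   # append a bias column
    return torch.linalg.lstsq(X, boxes).solution          # (H*W + 1, 4) weights

def predict_bbox(weights, smap):
    x = torch.cat([smap.flatten(), torch.ones(1)])
    return x @ weights                                    # (4,) predicted box coordinates
```

A closed-form least squares fit is enough here only because the record claims a simple linear mapping suffices; a small regularized regressor would be a natural variant.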
[
"We present trellis networks, a new architecture for sequence modeling.",
"On the one hand, a trellis network is a temporal convolutional network with special structure, characterized by weight tying across depth and direct injection of the input into deep layers.",
"On the other hand, we show that truncated recurrent networks are equivalent to trellis networks with special sparsity structure in their weight matrices.",
"Thus trellis networks with general weight matrices generalize truncated recurrent networks.",
"We leverage these connections to design high-performing trellis networks that absorb structural and algorithmic elements from both recurrent and convolutional models.",
"Experiments demonstrate that trellis networks outperform the current state of the art methods on a variety of challenging benchmarks, including word-level language modeling and character-level language modeling tasks, and stress tests designed to evaluate long-term memory retention.",
"The code is available at https://github.com/locuslab/trellisnet ."
] | [
0,
0,
0,
0,
0,
1,
0
] | [
0.3030303120613098,
0.19999998807907104,
0.2222222238779068,
0.12121211737394333,
0.2790697515010834,
0.4363636374473572,
0
] | HyeVtoRqtQ | false | [
"Trellis networks are a new sequence modeling architecture that bridges recurrent and convolutional models and sets a new state of the art on word- and character-level language modeling."
] |
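The trellis-network record above names two structural ideas: weight tying across depth and direct injection of the input into every deep layer. The sketch below illustrates only those two ideas on top of a causal 1-D convolution; the real architecture also uses gated, LSTM-style activations and other details omitted here, so this is an assumption-laden simplification, not the published model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TrellisSketch(nn.Module):
    """Weight tying across depth plus input injection at every level,
    built on a causal 1-D convolution (gated activations of the real model omitted)."""
    def __init__(self, in_ch, hidden_ch, depth, kernel_size=2):
        super().__init__()
        self.depth = depth
        self.pad = kernel_size - 1                  # left padding keeps the conv causal
        self.hidden_ch = hidden_ch
        # A single convolution reused at every level: its input is the previous
        # level's hidden state concatenated with the raw input sequence.
        self.conv = nn.Conv1d(hidden_ch + in_ch, hidden_ch, kernel_size)

    def forward(self, x):                           # x: (batch, in_ch, time)
        h = x.new_zeros(x.size(0), self.hidden_ch, x.size(2))
        for _ in range(self.depth):
            z = torch.cat([h, x], dim=1)            # inject the input at this level
            z = F.pad(z, (self.pad, 0))             # pad on the left only (causal)
            h = torch.tanh(self.conv(z))            # same weights at every level
        return h                                    # (batch, hidden_ch, time)
```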
[
"We propose an end-to-end framework for training domain specific models (DSMs) to obtain both high accuracy and computational efficiency for object detection tasks.",
"DSMs are trained with distillation and focus on achieving high accuracy at a limited domain (e.g. fixed view of an intersection).",
"We argue that DSMs can capture essential features well even with a small model size, enabling higher accuracy and efficiency than traditional techniques. ",
"In addition, we improve the training efficiency by reducing the dataset size by culling easy to classify images from the training set.",
"For the limited domain, we observed that compact DSMs significantly surpass the accuracy of COCO trained models of the same size.",
"By training on a compact dataset, we show that with an accuracy drop of only 3.6%, the training time can be reduced by 93%."
] | [
0,
0,
0,
0,
0,
1
] | [
0.31578946113586426,
0.15789473056793213,
0.14999999105930328,
0.1764705777168274,
0.23529411852359772,
0.3499999940395355
] | HyzWeVX_jQ | false | [
"High object-detection accuracy can be obtained by training domain specific compact models and the training can be very short."
] |
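The DSM record above trains compact domain-specific models with distillation and culls easy-to-classify images from the training set. The snippet below sketches a generic soft-target distillation loss and a confidence-based culling rule; the temperature, the 0.95 threshold, and the use of classification logits rather than detection outputs are all assumptions, not details taken from the record.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Soft-target distillation: KL divergence between temperature-softened
    teacher and student distributions (a generic recipe, not the paper's exact loss)."""
    t = temperature
    log_student = F.log_softmax(student_logits / t, dim=-1)
    soft_teacher = F.softmax(teacher_logits / t, dim=-1)
    # The t*t factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * (t * t)

def cull_easy_images(teacher_confidences, threshold=0.95):
    """Keep only images the teacher is not already confident about;
    the threshold value is an assumed hyperparameter."""
    return [i for i, c in enumerate(teacher_confidences) if c < threshold]
```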
[
"We compare the model-free reinforcement learning with the model-based approaches through the lens of the expressive power of neural networks for policies, $Q$-functions, and dynamics. ",
"We show, theoretically and empirically, that even for one-dimensional continuous state space, there are many MDPs whose optimal $Q$-functions and policies are much more complex than the dynamics.",
"We hypothesize many real-world MDPs also have a similar property.",
"For these MDPs, model-based planning is a favorable algorithm, because the resulting policies can approximate the optimal policy significantly better than a neural network parameterization can, and model-free or model-based policy optimization rely on policy parameterization.",
"Motivated by the theory, we apply a simple multi-step model-based bootstrapping planner (BOOTS) to bootstrap a weak $Q$-function into a stronger policy.",
"Empirical results show that applying BOOTS on top of model-based or model-free policy optimization algorithms at the test time improves the performance on MuJoCo benchmark tasks.",
"Model-based deep reinforcement learning (RL) algorithms offer a lot of potentials in achieving significantly better sample efficiency than the model-free algorithms for continuous control tasks.",
"We can largely categorize the model-based deep RL algorithms into two types:",
"1. model-based policy optimization algorithms which learn policies or Q-functions, parameterized by neural networks, on the estimated dynamics, using off-the-shelf model-free algorithms or their variants (Luo et al., 2019; Janner et al., 2019; Kaiser et al., 2019; Kurutach et al., 2018; Feinberg et al., 2018; Buckman et al., 2018) , and",
"2. model-based planning algorithms, which plan with the estimated dynamics Nagabandi et al. (2018) ; Chua et al. (2018) ; Wang & Ba (2019) .",
"A deeper theoretical understanding of the pros and cons of model-based and the model-free algorithms in the continuous state space case will provide guiding principles for designing and applying new sample-efficient methods.",
"The prior work on the comparisons of model-based and model-free algorithms mostly focuses on their sample efficiency gap, in the case of tabular MDPs (Zanette & Brunskill, 2019; Jin et al., 2018) , linear quadratic regulator (Tu & Recht, 2018) , and contextual decision process with sparse reward (Sun et al., 2019) .",
"In this paper, we theoretically compare model-based RL and model-free RL in the continuous state space through the lens of approximability by neural networks, and then use the insight to design practical algorithms.",
"What is the representation power of neural networks for expressing the Qfunction, the policy, and the dynamics?",
"How do the model-based and model-free algorithms utilize the expressivity of neural networks?",
"Our main finding is that even for the case of one-dimensional continuous state space, there can be a massive gap between the approximability of Q-function and the policy and that of the dynamics:",
"The optimal Q-function and policy can be significantly more complex than the dynamics.",
"We construct environments where the dynamics are simply piecewise linear functions with constant pieces, but the optimal Q-functions and the optimal policy require an exponential (in the horizon) number of linear pieces, or exponentially wide neural networks, to approximate.",
"1 The approximability gap can also be observed empirically on (semi-) randomly generated piecewise linear dynamics with a decent chance.",
"(See Figure 1 for two examples.)",
"When the approximability gap occurs, any deep RL algorithms with policies parameterized by neural networks will suffer from a sub-optimal performance.",
"These algorithms include both model-free algorithms such as DQN (Mnih et al., 2015) and SAC (Haarnoja et al., 2018) , and model-based policy optimization algorithms such as SLBO (Luo et al., 2019) and MBPO (Janner et al., 2019) .",
"To validate the intuition, we empirically apply these algorithms to the constructed or the randomly generated MDPs.",
"Indeed, they fail to converge to the optimal rewards even with sufficient samples, which suggests that they suffer from the lack of expressivity.",
"However, in such cases, model-based planning algorithms should not suffer from the lack of expressivity, because they only use the learned, parameterized dynamics, which are easy to express.",
"The policy obtained from the planning is the maximizer of the total future reward on the learned dynamics, and can have an exponential (in the horizon) number of pieces even if the dynamics has only a constant number of pieces.",
"In fact, even a partial planner can help improve the expressivity of the policy.",
"If we plan for k steps and then resort to some Q-function for estimating the total reward of the remaining steps, we can obtain a policy with 2 k more pieces than what Q-function has.",
"We hypothesize that the real-world continuous control tasks also have a more complex optimal Qfunction and a policy than the dynamics.",
"The theoretical analysis of the synthetic dynamics suggests that a model-based few-steps planner on top of a parameterized Q-function will outperform the original Q-function because of the addtional expressivity introduced by the planning.",
"We empirically verify the intuition on MuJoCo benchmark tasks.",
"We show that applying a model-based planner on top of Q-functions learned from model-based or model-free policy optimization algorithms in the test time leads to significant gains over the original Q-function or policy.",
"In summary, our contributions are:",
"1. We construct continuous state space MDPs whose Q-functions and policies are proved to be more complex than the dynamics (Sections 4.1 and 4.2.)",
"2. We empirically show that with a decent chance, (semi-)randomly generated piecewise linear MDPs also have complex Q-functions (Section 4.3.)",
"3. We show theoretically and empirically that the model-free RL or model-based policy optimization algorithms suffer from the lack of expressivity for the constructed MDPs (Sections 4.3), whereas model-based planning solve the problem efficiently (Section 5.2.)",
"4. Inspired by the theory, we propose a simple model-based bootstrapping planner (BOOTS), which can be applied on top of any model-free or model-based Q-learning algorithms at the test time.",
"Empirical results show that BOOTS improves the performance on MuJoCo benchmark tasks, and outperforms previous state-of-the-art on MuJoCo humanoid environment.",
"Our study suggests that there exists a significant representation power gap of neural networks between for expressing Q-function, the policy, and the dynamics in both constructed examples and empirical benchmarking environments.",
"We show that our model-based bootstrapping planner BOOTS helps to overcome the approximation issue and improves the performance in synthetic settings and in the difficult MuJoCo environments.",
"We raise some interesting open questions.",
"• Can we theoretically generalize our results to high-dimensional state space, or continuous actions space?",
"Can we theoretically analyze the number of pieces of the optimal Q-function of a stochastic dynamics?",
"• In this paper, we measure the complexity by the size of the neural networks.",
"It's conceivable that for real-life problems, the complexity of a neural network can be better measured by its weights norm.",
"Could we build a more realistic theory with another measure of complexity?",
"• The BOOTS planner comes with a cost of longer test time.",
"How do we efficiently plan in high-dimensional dynamics with a long planning horizon?",
"• The dynamics can also be more complex (perhaps in another sense) than the Q-function in certain cases.",
"How do we efficiently identify the complexity of the optimal Q-function, policy, and the dynamics, and how do we deploy the best algorithms for problems with different characteristics?",
"(Luo et al., 2019) , the stochasticity in the dynamics can play a similar role as the model ensemble.",
"Our algorithm is a few times faster than MBPO in wall-clock time.",
"It performs similarlty to MBPO on Humanoid, but a bit worse than MBPO in other environments.",
"In MBSAC, we use SAC to optimize the policy π β and the Q-function Q ϕ .",
"We choose SAC due to its sample-efficiency, simplicity and off-policy nature.",
"We mix the real data from the environment and the virtual data which are always fresh and are generated by our learned dynamics modelf θ ."
] | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.6341463327407837,
0.17777776718139648,
0.06896550953388214,
0.20408162474632263,
0.1538461446762085,
0.23255813121795654,
0.23255813121795654,
0.3870967626571655,
0.2545454502105713,
0.1538461446762085,
0.260869562625885,
0.19354838132858276,
0.4583333432674408,
0.3030303120613098,
0.4516128897666931,
0.17777776718139648,
0.1875,
0.23076923191547394,
0.10256409645080566,
0,
0.4000000059604645,
0.1818181723356247,
0.11764705181121826,
0.10256409645080566,
0.17391303181648254,
0.1599999964237213,
0.1249999925494194,
0.12244897335767746,
0.21052631735801697,
0.2222222238779068,
0.1428571343421936,
0.25,
0,
0.1860465109348297,
0.04878048226237297,
0.29629629850387573,
0.25531914830207825,
0.10810810327529907,
0.25,
0.1904761791229248,
0.07999999821186066,
0,
0.1249999925494194,
0.3125,
0.20512819290161133,
0.06451612710952759,
0.06451612710952759,
0.0624999962747097,
0.1111111044883728,
0.19512194395065308,
0.1111111044883728,
0,
0,
0.11764705181121826,
0.13333332538604736,
0.25641024112701416
] | Hye4WaVYwr | true | [
"We compare deep model-based and model-free RL algorithms by studying the approximability of $Q$-functions, policies, and dynamics by neural networks. "
] |
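The BOOTS record above applies a few steps of planning in a learned dynamics model at test time and bootstraps the remaining return with a learned Q-function. The sketch below uses random shooting as the planner and a generic value_fn (e.g. Q evaluated at the parameterized policy's action) for the bootstrap; the optimizer, the callables dynamics, reward_fn, and value_fn, and every hyperparameter are assumptions rather than the paper's implementation.

```python
import numpy as np

def boots_action(state, dynamics, reward_fn, value_fn, k=4,
                 num_candidates=256, action_dim=6, action_scale=1.0, seed=0):
    """Test-time bootstrapped planning sketch (random shooting): score candidate
    k-step action sequences under a learned dynamics model, bootstrapping the
    tail return with a learned value estimate such as value_fn(s) = Q(s, pi(s))."""
    rng = np.random.default_rng(seed)
    candidates = rng.uniform(-action_scale, action_scale,
                             size=(num_candidates, k, action_dim))
    best_return, best_first_action = -np.inf, None
    for seq in candidates:
        s, total = np.asarray(state, dtype=float), 0.0
        for a in seq:
            total += reward_fn(s, a)      # reward model or known reward function
            s = dynamics(s, a)            # learned dynamics, not the true environment
        total += value_fn(s)              # bootstrap the remaining horizon
        if total > best_return:
            best_return, best_first_action = total, seq[0]
    return best_first_action              # execute only the first planned action
```

Because only the first action of the best sequence is executed and planning is repeated at every step, this kind of planner trades extra test-time computation for the added expressivity the record argues a parameterized policy alone cannot provide.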