_id (string, 40 chars) | text (string, 0–10k chars) |
---|---|
2f991be8d35e4c1a45bfb0d646673b1ef5239a1f | Understanding why machine learning models behave the way they do empowers both system designers and end-users in many ways: in model selection, in feature engineering, in trusting and acting upon predictions, and in designing more intuitive user interfaces. Thus, interpretability has become a vital concern in machine learning, and work in the area of interpretable models has found renewed interest. In some applications, such models are as accurate as non-interpretable ones, and thus are preferred for their transparency. Even when they are not accurate, they may still be preferred when interpretability is of paramount importance. However, restricting machine learning to interpretable models is often a severe limitation. In this paper we argue for explaining machine learning predictions using model-agnostic approaches. By treating the machine learning models as black-box functions, these approaches provide crucial flexibility in the choice of models, explanations, and representations, improving debugging, comparison, and interfaces for a variety of users and models. We also outline the main challenges for such methods, and review a recently introduced model-agnostic explanation approach (LIME) that addresses these challenges. |
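To make the black-box framing above concrete, here is a minimal sketch of a locally weighted linear surrogate in the spirit of LIME, assuming only a generic `black_box` prediction function. The perturbation scale, proximity kernel, and the toy model at the bottom are illustrative choices, not the paper's actual algorithm.

```python
import numpy as np

def explain_instance(black_box, x, n_samples=1000, kernel_width=0.75):
    """Fit a locally weighted linear surrogate around the instance x.

    black_box: any callable mapping an (n, d) array to scores; the point
    of the exercise is that nothing else about the model is known.
    """
    d = x.shape[0]
    Z = x + np.random.normal(scale=0.5, size=(n_samples, d))  # perturb x
    y = black_box(Z)                                          # query the model
    dist = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(dist ** 2) / kernel_width ** 2)              # proximity weights
    Zb = np.hstack([Z, np.ones((n_samples, 1))])              # add intercept
    sw = np.sqrt(w)
    # Weighted least squares: scale rows by sqrt(w) and solve ordinary LS.
    coef, *_ = np.linalg.lstsq(Zb * sw[:, None], y * sw, rcond=None)
    return coef[:-1]  # per-feature weights of the local surrogate

# Toy stand-in for a trained model (any prediction function works here).
black_box = lambda Z: 1.0 / (1.0 + np.exp(-(Z[:, 0] * Z[:, 1] - Z[:, 2])))
print(explain_instance(black_box, np.array([1.0, 2.0, 0.5])))
```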
546add32740ac350dda44bab06f56d4e206622ab | Deep neural networks have achieved impressive experimental results in image classification, but can surprisingly be unstable with respect to adversarial perturbations, that is, minimal changes to the input image that cause the network to misclassify it. With potential applications including perception modules and end-to-end controllers for self-driving cars, this raises concerns about their safety. We develop a novel automated verification framework for feed-forward multi-layer neural networks based on Satisfiability Modulo Theory (SMT). We focus on image manipulations, such as scratches or changes to camera angle or lighting conditions, and define safety for an image classification decision in terms of invariance of the classification with respect to manipulations of the original image within a region of images that are close to it. We enable exhaustive search of the region by employing discretisation, and propagate the analysis layer by layer. Our method works directly with the network code and, in contrast to existing methods, can guarantee that adversarial examples, if they exist, are found for the given region and family of manipulations. If found, adversarial examples can be shown to human testers and/or used to fine-tune the network. We implement the techniques using Z3 and evaluate them on state-of-the-art networks, including regularised and deep learning networks. We also compare against existing techniques to search for adversarial examples and estimate network robustness. |
8db9df2eadea654f128c1887722c677c708e8a47 | Reinforcement learning is considered to be a strong AI paradigm which can be used to teach machines through interaction with the environment and learning from their mistakes. Despite its perceived utility, it has not yet been successfully applied in automotive applications. Motivated by the successful demonstrations of learning of Atari games and Go by Google DeepMind, we propose a framework for autonomous driving using deep reinforcement learning. This is of particular relevance as it is difficult to pose autonomous driving as a supervised learning problem due to strong interactions with the environment including other vehicles, pedestrians and roadworks. As it is a relatively new area of research for autonomous driving, we provide a short overview of deep reinforcement learning and then describe our proposed framework. It incorporates Recurrent Neural Networks for information integration, enabling the car to handle partially observable scenarios. It also integrates the recent work on attention models to focus on relevant information, thereby reducing the computational complexity for deployment on embedded hardware. The framework was tested in an open source 3D car racing simulator called TORCS. Our simulation results demonstrate learning of autonomous maneuvering in a scenario of complex road curvatures and simple interaction with other vehicles. INTRODUCTION A robot car that drives autonomously is a long-standing goal of Artificial Intelligence. Driving a vehicle is a task that requires a high level of skill, attention and experience from a human driver. Although computers are more capable of sustained attention and focus than humans, fully autonomous driving requires a level of intelligence that surpasses that achieved so far by AI agents. The tasks involved in creating an autonomous driving agent can be divided into 3 categories, as shown in Figure 1: 1) Recognition: Identifying components of the surrounding environment. Examples of this are pedestrian detection, traffic sign recognition, etc. Although far from trivial, recognition is a relatively easy task nowadays thanks to advances in Deep Learning (DL) algorithms, which have reached human-level recognition or above on several object detection and classification problems [8] [2]. Deep learning models are able to learn complex feature representations from raw input data, obviating the need for handcrafted features [15] [2] [7]. In this regard, Convolutional Neural Networks (CNNs) are probably the most successful deep learning model, and have formed the basis of every winning entry in the ImageNet challenge since AlexNet [8]. This success has been replicated in lane & vehicle detection for autonomous driving [6]. 2) Prediction: It is not enough for an autonomous driving agent to recognize its environment; it must also be able to build internal models that predict the future states of the environment. Examples of this class of problem include building a map of the environment or tracking an object. To be able to predict the future, it is important to integrate past information. As such, Recurrent Neural Networks (RNNs) are essential to this class of problem. Long Short-Term Memory (LSTM) networks [5] are one such category of RNN that have been used in end-to-end scene labeling systems [14]. More recently, RNNs have also been used to improve object tracking performance in the DeepTracking model [13]. 
3) Planning: The generation of an efficient model that incorporates recognition and prediction to plan the future sequence of driving actions that will enable the vehicle to navigate successfully. Planning is the hardest task of the three. The difficulty lies in integrating the ability of the model to understand the environment (recognition) and its dynamics (prediction) in a way that enables it to plan the future actions such that it avoids unwanted situations (penalties) and drives safely to its destination (rewards). Figure 1: High level autonomous driving tasks The Reinforcement Learning (RL) framework [17] [20] has been used for a long time in control tasks. The mixture of RL with DL was pointed out to be one of the most promising approaches to achieve human-level control in [9]. In [12] and [11] this human-level control was demonstrated on Atari games using the Deep Q Networks (DQN) model, in which RL is responsible for the planning part while DL is responsible for the representation learning part. Later, RNNs were integrated in the mixture to account for partially observable scenarios [4]. Autonomous driving requires the integration of information from multiple sensors. Some of them are low dimensional, like LIDAR, while others are high dimensional, like cameras. It is noteworthy in this particular example, however, that although raw camera images are high dimensional, the useful information needed to achieve the autonomous driving task is of much lower dimension. For example, the important parts of the scene that affect driving decisions are limited to the moving vehicle, free space on the road ahead, the position of kerbs, etc. Even the fine details of vehicles are not important, as only their spatial location is truly necessary for the problem. Hence the memory bandwidth for relevant information is much lower. If this relevant information can be extracted, while the other non-relevant parts are filtered out, it would improve both the accuracy and efficiency of autonomous driving systems. Moreover, this would reduce the computation and memory requirements of the system, which are critical constraints on embedded systems that will contain the autonomous driving control unit. Attention models are a natural fit for such an information filtering process. Recently, these models were successfully deployed for image recognition in [23] and [10], wherein RL was mixed with RNNs to obtain the parts of the image to attend to. Such models are easily extended and integrated into the DQN [11] and Deep Recurrent Q Networks (DRQN) [4] models. This integration was performed in [16]. The success of attention models drives us to propose them for the extraction of low level information from the raw sensory information to perform autonomous driving. In this paper, we propose a framework for an end-to-end autonomous driving model that takes in raw sensor inputs and outputs driving actions. The model is able to handle partially observable scenarios. Moreover, we propose to integrate the recent advances in attention models in order to extract only relevant information from the received sensor data, thereby making it suitable for real-time embedded systems. The main contributions of this paper are: 1) presenting a survey of the recent advances of deep reinforcement learning and 2) introducing a framework for end-to-end autonomous driving using deep reinforcement learning to the automotive community. The rest of the paper is divided into two parts. 
The first part provides a survey of deep reinforcement learning algorithms, starting with the traditional MDP framework and Q-learning, followed by the DQN, DRQN and Deep Attention Recurrent Q Networks (DARQN). The second part of the paper describes the proposed framework that integrates the recent advances in deep reinforcement learning. Finally, we conclude and suggest directions for future work. REVIEW OF REINFORCEMENT LEARNING For a comprehensive overview of reinforcement learning, please refer to the second edition of Rich Sutton’s textbook [18]. We provide a short review of important topics in this section. The Reinforcement Learning framework was formulated in [17] as a model to provide the best policy an agent can follow (best action to take in a given state), such that the total accumulated rewards are maximized when the agent follows that policy from the current state until a terminal state is reached. Motivation for RL Paradigm Driving is a multi-agent interaction problem. As a human driver, it is much easier to keep within a lane without any interaction with other cars than to change lanes in heavy traffic. The latter is more difficult because of the inherent uncertainty in the behavior of other drivers. The number of interacting vehicles, their geometric configuration and the behavior of the drivers could have large variability, and it is challenging to design a supervised learning dataset with exhaustive coverage of all scenarios. Human drivers employ some sort of online reinforcement learning to understand the behavior of other drivers, such as whether they are defensive or aggressive, experienced or inexperienced, etc. This is particularly useful in scenarios which need negotiation, namely entering a roundabout, navigating junctions without traffic lights, lane changes during heavy traffic, etc. The main challenge in autonomous driving is to deal with corner cases which are unexpected even for a human driver, like recovering from being lost in an unknown area without GPS or dealing with disaster situations like flooding or the appearance of a sinkhole in the ground. The RL paradigm models uncharted territory and learns from its own experience by taking actions. Additionally, RL may be able to handle non-differentiable cost functions, which can create challenges for supervised learning problems. Currently, the standard approach for autonomous driving is to decouple the system into isolated sub-problems, typically supervised learning problems like object detection, visual odometry, etc., and then to have a post-processing layer to combine all the results of the previous steps. There are two main issues with this approach: Firstly, the sub-problems which are solved may be more difficult than autonomous driving. For example, one might be solving object detection by semantic segmentation, which is both challenging and unnecessary. Human drivers don’t detect and classify all visible objects while driving, only the most relevant ones. Secondly, the isolated sub-problems may not combine coherently to achieve |
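The Q-learning that the survey above builds on can be stated in a few lines. A minimal tabular sketch follows, with a made-up chain environment standing in for the driving simulator; the DQN/DRQN variants the abstract cites replace the table with a neural network.

```python
import numpy as np

n_states, n_actions = 16, 4
Q = np.zeros((n_states, n_actions))     # tabular action-value estimates
alpha, gamma, eps = 0.1, 0.99, 0.1
rng = np.random.default_rng(0)

def step(s, a):
    """Toy deterministic chain environment (placeholder, not TORCS)."""
    s2 = min(s + 1, n_states - 1) if a == 0 else max(s - 1, 0)
    r = 1.0 if s2 == n_states - 1 else 0.0
    return s2, r, s2 == n_states - 1

for episode in range(500):
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[s]))
        s2, r, done = step(s, a)
        # Q-learning update: move Q(s,a) toward the bootstrapped target.
        Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) * (not done) - Q[s, a])
        s = s2

print(Q[0])  # learned action values in the start state
```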
a4d513cfc9d4902ef1a80198582f29b8ba46ac28 | This report surveys the landscape of potential security threats from malicious uses of AI, and proposes ways to better forecast, prevent, and mitigate these threats. After analyzing the ways in which AI may influence the threat landscape in the digital, physical, and political domains, we make four high-level recommendations for AI researchers and other stakeholders. We also suggest several promising areas for further research that could expand the portfolio of defenses, or make attacks less effective or harder to execute. Finally, we discuss, but do not conclusively resolve, the long-term equilibrium of attackers and defenders. |
b5a047dffc3d70dce19de61257605dfc8c69535c | Deep neural networks have emerged as a widely used and effective means for tackling complex, real-world problems. However, a major obstacle in applying them to safety-critical systems is the great difficulty in providing formal guarantees about their behavior. We present a novel, scalable, and efficient technique for verifying properties of deep neural networks (or providing counter-examples). The technique is based on the simplex method, extended to handle the non-convex Rectified Linear Unit (ReLU ) activation function, which is a crucial ingredient in many modern neural networks. The verification procedure tackles neural networks as a whole, without making any simplifying assumptions. We evaluated our technique on a prototype deep neural network implementation of the next-generation airborne collision avoidance system for unmanned aircraft (ACAS Xu). Results show that our technique can successfully prove properties of networks that are an order of magnitude larger than the largest networks verified using existing methods. |
b4bd9fab8439da4939a980a950838d1299a9b030 | |
5288d14f6a3937df5e10109d4e23d79b7ddf080f | |
c9946fedf333df0c6404765ba6ccbf8006779753 | Autonomous driving has shown the capability of providing driver convenience and enhancing safety. While introducing autonomous driving into our current traffic system, one significant issue is to enable the autonomous vehicle to react in the same way as real human drivers. In order to ensure that an autonomous vehicle of the future will perform like human drivers, this paper proposes a vehicle motion planning model which can represent how drivers control vehicles based on the assessment of traffic environments at a real signalized intersection. The proposed motion planning model comprises functions of pedestrian intention detection, gap detection and vehicle dynamic control. The three functions are constructed based on the analysis of actual data collected from real traffic environments. Finally, this paper demonstrates the performance of the proposed method by comparing the behaviors of our model with the behaviors of real pedestrians and human drivers. The experimental results show that our proposed model can achieve an 85% recognition rate for pedestrian crossing intention. Moreover, the vehicle controlled by the proposed motion planning model and the actual human-driven vehicle are highly similar with respect to gap acceptance at intersections. |
061356704ec86334dbbc073985375fe13cd39088 | In this work we investigate the effect of the convolutional n etwork depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth, which shows that a significant improvement on the prior-art configurations can be achi eved by pushing the depth to 16–19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first a nd he second places in the localisation and classification tracks respec tively. We also show that our representations generalise well to other datasets, whe re t y achieve the stateof-the-art results. Importantly, we have made our two bestp rforming ConvNet models publicly available to facilitate further research o n the use of deep visual representations in computer vision. |
14318685b5959b51d0f1e3db34643eb2855dc6d9 | We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection. |
1827de6fa9c9c1b3d647a9d707042e89cf94abf0 | Training Deep Neural Networks is complicated by the fact that the distribution of each layer’s inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization, and in some cases eliminates the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.82% top-5 test error, exceeding the accuracy of human raters. |
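The normalization step described above amounts to standardizing each feature over the mini-batch and then applying a learned scale and shift. A minimal numpy sketch of the training-time forward pass; the running statistics used at inference time are omitted.

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """Batch Normalization, training-time forward pass.

    x: (batch, features) activations; gamma, beta: learned per-feature
    scale and shift. Normalizes each feature over the mini-batch.
    """
    mu = x.mean(axis=0)                  # per-feature batch mean
    var = x.var(axis=0)                  # per-feature batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

x = np.random.randn(64, 8) * 3.0 + 5.0   # poorly scaled activations
out = batch_norm_forward(x, gamma=np.ones(8), beta=np.zeros(8))
print(out.mean(axis=0).round(3), out.std(axis=0).round(3))  # ~0 and ~1
```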
6e80768219b2ab5a3247444cfb280e8d33d369f0 | An ultra-wideband (UWB) power divider is designed in this paper. The UWB performance of this power divider is obtained by using a tapered microstrip line that consists of exponential and elliptic sections. The coarse-grained parallel micro-genetic algorithm (PMGA) and CST Microwave Studio are combined to achieve an automated parallel design process. The method is applied to optimize the UWB power divider. The optimized power divider is fabricated and measured. The measured results show relatively low insertion loss, good return loss, and high isolation between the output ports across the whole UWB (3.1–10.6 GHz). |
2532d0567c8334e4cadf282a73ffe399c1c32476 | This talk proposes a very simple “baseline architecture” for a learning agent that can handle stochastic, partially observable environments. The architecture uses reinforcement learning together with a method for representing temporal processes as graphical models. I will discuss methods for learning the parameters and structure of such representations from sensory inputs, and for computing posterior probabilities. Some open problems remain before we can try out the complete agent; more arise when we consider scaling up. A second theme of the talk will be whether reinforcement learning can provide a good model of animal and human learning. To answer this question, we must do inverse reinforcement learning: given the observed behaviour, what reward signal, if any, is being optimized? This seems to be a very interesting problem for the COLT, UAI, and ML communities, and has been addressed in econometrics under the heading of structural estimation of Markov decision processes. 1 Learning in uncertain environments AI is about the construction of intelligent agents, i.e., systems that perceive and act effectively (according to some performance measure) in an environment. I have argued elsewhere (Russell and Norvig, 1995) that most AI research has focused on environments that are static, deterministic, discrete, and fully observable. What is to be done when, as in the real world, the environment is dynamic, stochastic, continuous, and partially observable? In recent years, reinforcement learning (also called neuro-dynamic programming) has made rapid progress as an approach for building agents automatically (Sutton, 1988; Kaelbling et al., 1996; Bertsekas & Tsitsiklis, 1996). The basic idea is that the performance measure is made available to the agent in the form of a reward function specifying the reward for each state that the agent passes through. The performance measure is then the sum of the rewards obtained. For example, when a bumble bee forages, the reward function at each time step might be some combination of the distance flown (weighted negatively) and the nectar ingested. Reinforcement learning (RL) methods are essentially online algorithms for solving Markov decision processes (MDPs). An MDP is defined by the reward function and a model, that is, the state transition probabilities conditioned on each possible action. RL algorithms can be model-based, where the agent learns a model, or model-free, e.g., Q-learning (Watkins, 1989), which learns just a function Q(s, a) specifying the long-term value of taking action a in state s and acting optimally thereafter. Despite their successes, RL methods have been restricted largely to fully observable MDPs, in which the sensory input at each state is sufficient to identify the state. Obviously, in the real world, we must often deal with partially observable MDPs (POMDPs). 
Astrom (1965) proved that optimal decisions in POMDPs depend on the belief state b at each point in time, i.e., the posterior probability distribution over all possible actual states, given all evidence to date. The functions V and Q then become functions of b instead of s. Parr and Russell (1995) describe a very simple POMDP RL algorithm using an explicit representation of b as a vector of probabilities, and McCallum (1993) shows a way to approximate the belief state using recent percept sequences. Neither approach is likely to scale up to situations with large numbers of state variables and long-term temporal dependencies. What is needed is a way of representing the model compactly and updating the belief state efficiently given the model and each new observation. Dynamic Bayesian networks (Dean & Kanazawa, 1989) seem to have some of the required properties; in particular, they have significant advantages over other approaches such as Kalman filters and hidden Markov models. Our baseline architecture, shown in Figure 1, uses DBNs to represent and update the belief state as new sensor information arrives. Given a representation for b, the reward signal is used to learn a Q-function represented by some “black-box” function approximator such as a neural network. Provided we can handle hybrid (dis- |
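The belief-state update at the heart of the POMDP formulation above is a discrete Bayes filter: propagate the belief through the transition model, reweight by the observation likelihood, renormalize. A minimal sketch with invented transition and observation matrices:

```python
import numpy as np

def belief_update(b, T, O, obs):
    """One POMDP belief-state update.

    b: current belief over states; T[s, s2]: transition probabilities for
    the action just taken; O[s2, o]: observation probabilities. Returns
    the posterior over states after observing `obs`.
    """
    predicted = b @ T                 # prediction step: propagate dynamics
    updated = predicted * O[:, obs]   # correction step: weight by evidence
    return updated / updated.sum()    # renormalize to a distribution

# Toy 3-state example (all probabilities are illustrative).
T = np.array([[0.8, 0.2, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])
O = np.array([[0.9, 0.1],
              [0.5, 0.5],
              [0.1, 0.9]])
b = np.array([1/3, 1/3, 1/3])         # start with a uniform belief
for obs in [1, 1, 0]:
    b = belief_update(b, T, O, obs)
print(b)
```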
6f20506ce955b7f82f587a14301213c08e79463b | |
d14ddc01cff72066c6655aa39f3e207e34fb8591 | This paper deals with a relatively new area of radio-frequency (RF) technology based on microelectromechanical systems (MEMS). RF MEMS provides a class of new devices and components which display superior high-frequency performance relative to conventional (usually semiconductor) devices, and which enable new system capabilities. In addition, MEMS devices are designed and fabricated by techniques similar to those of very large-scale integration, and can be manufactured by traditional batch-processing methods. In this paper, the only device addressed is the electrostatic microswitch—perhaps the paradigm RF-MEMS device. Through its superior performance characteristics, the microswitch is being developed in a number of existing circuits and systems, including radio front-ends, capacitor banks, and time-delay networks. The superior performance combined with ultra-low-power dissipation and large-scale integration should enable new system functionality as well. Two possibilities addressed here are quasi-optical beam steering and electrically reconfigurable antennas. |
9d5f36b92ac155fccdae6730660ab44d46ad501a | Risk parity is an allocation method used to build diversified portfolios that does not rely on any assumptions of expected returns, thus placing risk management at the heart of the strategy. This explains why risk parity became a popular investment model after the global financial crisis in 2008. However, risk parity has also been criticized because it focuses on managing risk concentration rather than portfolio performance, and is therefore seen as being closer to passive management than active management. In this article, we show how to introduce assumptions of expected returns into risk parity portfolios. To do this, we consider a generalized risk measure that takes into account both the portfolio return and volatility. However, the trade-off between performance and volatility contributions creates some difficulty, while the risk budgeting problem must be clearly defined. After deriving the theoretical properties of such risk budgeting portfolios, we apply this new model to asset allocation. First, we consider long-term investment policy and the determination of strategic asset allocation. We then consider dynamic allocation and show how to build risk parity funds that depend on expected returns. |
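The quantity that risk parity equalizes is each asset's contribution to portfolio volatility, RC_i = w_i (Σw)_i / sqrt(wᵀΣw). A minimal numpy illustration with an invented covariance matrix; a full risk parity solver would iterate on w until the contributions match the chosen risk budgets.

```python
import numpy as np

def risk_contributions(w, cov):
    """Per-asset contributions to portfolio volatility.

    RC_i = w_i * (cov @ w)_i / sigma; the contributions sum to sigma.
    """
    sigma = np.sqrt(w @ cov @ w)      # total portfolio volatility
    return w * (cov @ w) / sigma

# Illustrative 3-asset covariance matrix (not real market data).
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])
w = np.array([0.5, 0.3, 0.2])          # candidate portfolio weights
rc = risk_contributions(w, cov)
print(rc, rc.sum())                    # contributions and total volatility
```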
006df3db364f2a6d7cc23f46d22cc63081dd70db | An ad hoc network is a collection of wireless mobile hosts forming a temporary network without the aid of any established infrastructure or centralized administration. In such an environment, it may be necessary for one mobile host to enlist the aid of other hosts in forwarding a packet to its destination, due to the limited range of each mobile host’s wireless transmissions. This paper presents a protocol for routing in ad hoc networks that uses dynamic source routing. The protocol adapts quickly to routing changes when host movement is frequent, yet requires little or no overhead during periods in which hosts move less frequently. Based on results from a packet-level simulation of mobile hosts operating in an ad hoc network, the protocol performs well over a variety of environmental conditions such as host density and movement rates. For all but the highest rates of host movement simulated, the overhead of the protocol is quite low, falling to just 1% of total data packets transmitted for moderate movement rates in a network of 24 mobile hosts. In all cases, the difference in length between the routes used and the optimal route lengths is negligible, and in most cases, route lengths are on average within a factor of 1.01 of optimal. |
25a26b86f4a2ebca2b154effbaf894aef690c03c | Recently there has been significant interest in supervised learning algorithms that combine labeled and unlabeled data for text learning tasks. The co-training setting [1] applies to datasets that have a natural separation of their features into two disjoint sets. We demonstrate that when learning from labeled and unlabeled data, algorithms explicitly leveraging a natural independent split of the features outperform algorithms that do not. When a natural split does not exist, co-training algorithms that manufacture a feature split may outperform algorithms not using a split. These results help explain why co-training algorithms are both discriminative in nature and robust to the assumptions of their embedded classifiers. |
78beead3a05f7e8f2dc812298f813c5bacdc3061 | |
1d6889c44e11141cc82ef28bba1afe07f3c0a2b4 | In the last few years the Internet of Things (IoT) has seen widespread application and can be found in each field. Authentication and access control are important and critical functionalities in the context of IoT to enable secure communication between devices. Mobility, dynamic network topology and weak physical security of low power devices in IoT networks are possible sources for security vulnerabilities. It is promising to make an authentication and access control attack resistant and lightweight in a resource constrained and distributed IoT environment. This paper presents the Identity Authentication and Capability based Access Control (IACAC) model with protocol evaluation and performance analysis. To protect IoT from man-in-the-middle, replay and denial of service (DoS) attacks, the concept of capability for access control is introduced. The novelty of this model is that it presents an integrated approach of authentication and access control for IoT devices. The results of other related studies have also been analyzed to validate and support our findings. Finally, the proposed protocol is evaluated by using a security protocol verification tool, and the verification results show that IACAC is secure against the aforementioned attacks. This paper also discusses performance analysis of the protocol in terms of computational time compared to other existing solutions. Furthermore, this paper addresses challenges in IoT, and security attacks are modelled with use cases to give an actual view of IoT networks. |
310b72fbc3d384ca88ca994b33476b8a2be2e27f | We present Sentiment Analyzer (SA) that extracts sentiment (or opinion) about a subject from online text documents. Instead of classifying the sentiment of an entire document about a subject, SA detects all references to the given subject, and determines sentiment in each of the references using natural language processing (NLP) techniques. Our sentiment analysis consists of 1) a topic specific feature term extraction, 2) sentiment extraction, and 3) (subject, sentiment) association by relationship analysis. SA utilizes two linguistic resources for the analysis: the sentiment lexicon and the sentiment pattern database. The performance of the algorithms was verified on online product review articles (“digital camera” and “music” reviews), and more general documents including general webpages and news articles. |
59d9160780bf3eac8c621983a36ff332a3497219 | Many approaches to automatic sentiment analysis begin with a large lexicon of words marked with their prior polarity (also called semantic orientation). However, the contextual polarity of the phrase in which a particular instance of a word appears may be quite different from the word's prior polarity. Positive words are used in phrases expressing negative sentiments, or vice versa. Also, quite often words that are positive or negative out of context are neutral in context, meaning they are not even being used to express a sentiment. The goal of this work is to automatically distinguish between prior and contextual polarity, with a focus on understanding which features are important for this task. Because an important aspect of the problem is identifying when polar terms are being used in neutral contexts, features for distinguishing between neutral and polar instances are evaluated, as well as features for distinguishing between positive and negative contextual polarity. The evaluation includes assessing the performance of features across multiple machine learning algorithms. For all learning algorithms except one, the combination of all features together gives the best performance. Another facet of the evaluation considers how the presence of neutral instances affects the performance of features for distinguishing between positive and negative polarity. These experiments show that the presence of neutral instances greatly degrades the performance of these features, and that perhaps the best way to improve performance across all polarity classes is to improve the system's ability to identify when an instance is neutral. |
7c89cbf5d860819c9b5e5217d079dc8aafcba336 | In this paper, we describe a case study of a sentence-level categorization in which tagging instructions are developed and used by four judges to classify clauses from the Wall Street Journal as either subjective or objective. Agreement among the four judges is analyzed, and, based on that analysis, each clause is given a final classification. To provide empirical support for the classifications, correlations are assessed in the data between the subjective category and a basic semantic class posited by Quirk et al. (1985). |
9141d85998eadb1bca5cca027ae07670cfafb015 | Identifying sentiments (the affective parts of opinions) is a challenging problem. We present a system that, given a topic, automatically finds the people who hold opinions about that topic and the sentiment of each opinion. The system contains a module for determining word sentiment and another for combining sentiments within a sentence. We experiment with various models of classifying and combining sentiment at word and sentence levels, with promising results. |
c2ac213982e189e4ad4c7f60608914a489ec9051 | From our three year experience of developing a large-scale corpus of annotated Arabic text, our paper will address the following: (a) review pertinent Arabic language issues as they relate to methodology choices, (b) explain our choice to use the Penn English Treebank style of guidelines, (requiring the Arabic-speaking annotators to deal with a new grammatical system) rather than doing the annotation in a more traditional Arabic grammar style (requiring NLP researchers to deal with a new system); (c) show several ways in which human annotation is important and automatic analysis difficult, including the handling of orthographic ambiguity by both the morphological analyzer and human annotators; (d) give an illustrative example of the Arabic Treebank methodology, focusing on a particular construction in both morphological analysis and tagging and syntactic analysis and following it in detail through the entire annotation process, and finally, (e) conclude with what has been achieved so far and what remains to be done. |
e33a3487f9b656631159186db4b2aebaed230b36 | As digital platforms are transforming almost every industry today, they are slowly finding their way into the mainstream information systems (IS) literature. Digital platforms are a challenging research object because of their distributed nature and intertwinement with institutions, markets and technologies. New research challenges arise as a result of the exponentially growing scale of platform innovation, the increasing complexity of platform architectures, and the spread of digital platforms to many different industries. This paper develops a research agenda for digital platforms research in IS. We recommend researchers seek to (1) advance conceptual clarity by providing clear definitions that specify the unit of analysis, degree of digitality and the sociotechnical nature of digital platforms; (2) define the proper scoping of digital platform concepts by studying platforms on different architectural levels and in different industry settings; and (3) advance methodological rigour by employing embedded case studies, longitudinal studies, design research, data-driven modelling and visualization techniques. Considering current developments in the business domain, we suggest six questions for further research: (1) Are platforms here to stay?; (2) How should platforms be designed?; (3) How do digital platforms transform industries?; (4) How can data-driven approaches inform digital platforms research?; (5) How should researchers develop theory for digital platforms?; and (6) How do digital platforms affect everyday life? |
1be8cab8701586e751d6ed6d186ca0b6f58a54e7 | The usefulness of a system specification depends in part on the completeness of the requirements. However, enumerating all necessary requirements is difficult, especially when requirements interact with an unpredictable environment. A specification built with an idealized environmental view is incomplete if it does not include requirements to handle non-idealized behavior. Often incomplete requirements are not detected until implementation, testing, or worse, after deployment. Even when performed during requirements analysis, detecting incomplete requirements is typically an error prone, tedious, and manual task. This paper introduces Ares, a design-time approach for detecting incomplete requirements decomposition using symbolic analysis of hierarchical requirements models. We illustrate our approach by applying Ares to a requirements model of an industry-based automotive adaptive cruise control system. Ares is able to automatically detect specific instances of incomplete requirements decompositions at design-time, many of which are subtle and would be difficult to detect, either manually or with testing. |
155ed7834a8a44a195b80719985a8b4ca11e6fdc | Multiple-input multiple-output (MIMO) radar can achieve superior performance through waveform diversity over conventional phased-array radar systems. When a MIMO radar transmits orthogonal waveforms, the reflected signals from scatterers are linearly independent of each other. Therefore, adaptive receive filters, such as Capon and amplitude and phase estimation (APES) filters, can be directly employed in MIMO radar applications. High levels of noise and strong clutter, however, significantly worsen detection performance of the data-dependent beamformers due to a shortage of snapshots. The iterative adaptive approach (IAA), a nonparametric and user parameter-free weighted least-squares algorithm, was recently shown to offer improved resolution and interference rejection performance in several passive and active sensing applications. In this paper, we show how IAA can be extended to MIMO radar imaging, in both the negligible and nonnegligible intrapulse Doppler cases, and we also establish some theoretical convergence properties of IAA. In addition, we propose a regularized IAA algorithm, referred to as IAA-R, which can perform better than IAA by accounting for unrepresented additive noise terms in the signal model. Numerical examples are presented to demonstrate the superior performance of MIMO radar over single-input multiple-output (SIMO) radar, and further highlight the improved performance achieved with the proposed IAA-R method for target imaging. |
0cfe588996f1bc319f87c6f75160d1cf1542d9a9 | |
20efcba63a0d9f12251a5e5dda745ac75a6a84a9 | |
ccaab0cee02fe1e5ffde33b79274b66aedeccc65 | As an envisaged future of transportation, self-driving cars are being discussed from various perspectives, including social, economic, engineering, computer science, design, and ethics. On the one hand, self-driving cars present new engineering problems that are gradually being solved. On the other hand, social and ethical problems are typically being presented in the form of an idealized unsolvable decision-making problem, the so-called trolley problem, which is grossly misleading. We argue that an applied engineering ethical approach for the development of new technology is what is needed; the approach should be applied, meaning that it should focus on the analysis of complex real-world engineering problems. Software plays a crucial role in the control of self-driving cars; therefore, software engineering solutions should seriously handle ethical and social considerations. In this paper we take a closer look at the regulative instruments, standards, design, and implementations of components, systems, and services, and we present practical social and ethical challenges that have to be met, as well as novel expectations for software engineering. |
288c67457f09c0c30cadd7439040114e9c377bc3 | Association rules, introduced by Agrawal, Imielinski, and Swami, are rules of the form “for 90% of the rows of the relation, if the row has value 1 in the columns in set W, then it has 1 also in column B”. Efficient methods exist for discovering association rules from large collections of data. The number of discovered rules can, however, be so large that browsing the rule set and finding interesting rules from it can be quite difficult for the user. We show how a simple formalism of rule templates makes it possible to easily describe the structure of interesting rules. We also give examples of visualization of rules, and show how a visualization tool interfaces with rule templates. |
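The rule form quoted above is a support/confidence statement over a 0/1 relation, which a few lines make concrete. A minimal sketch on invented data; the rule-template matching the abstract describes would then filter candidate rules by the shape of their antecedent W and consequent B.

```python
import numpy as np

def rule_stats(data, W, B):
    """Support and confidence of the rule W -> B over a 0/1 matrix.

    data: (rows, columns) binary matrix; W: antecedent column indices;
    B: consequent column index.
    """
    has_W = data[:, W].all(axis=1)               # rows with 1 in all of W
    has_WB = has_W & (data[:, B] == 1)           # ...that also have 1 in B
    support = has_WB.mean()                      # fraction of all rows
    confidence = has_WB.sum() / max(has_W.sum(), 1)
    return support, confidence

rng = np.random.default_rng(1)
data = (rng.random((1000, 5)) < 0.4).astype(int)  # toy transaction table
print(rule_stats(data, W=[0, 1], B=2))
```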
384bb3944abe9441dcd2cede5e7cd7353e9ee5f7 | |
47f0f6a2fd518932734cc90936292775cc95aa5d | |
b336f946d34cb427452517f503ada4bbe0181d3c | Despite the recent progress in video understanding and the continuous rate of improvement in temporal action localization throughout the years, it is still unclear how far (or close?) we are to solving the problem. To this end, we introduce a new diagnostic tool to analyze the performance of temporal action detectors in videos and compare different methods beyond a single scalar metric. We exemplify the use of our tool by analyzing the performance of the top rewarded entries in the latest ActivityNet action localization challenge. Our analysis shows that the most impactful areas to work on are: strategies to better handle temporal context around the instances, improving the robustness w.r.t. the instance absolute and relative size, and strategies to reduce the localization errors. Moreover, our experimental analysis finds the lack of agreement among annotators is not a major roadblock to attain progress in the field. Our diagnostic tool is publicly available to keep fueling the minds of other researchers with additional insights about their algorithms. |
160404fb0d05a1a2efa593c448fcb8796c24b873 | The emulation theory of representation is developed and explored as a framework that can revealingly synthesize a wide variety of representational functions of the brain. The framework is based on constructs from control theory (forward models) and signal processing (Kalman filters). The idea is that in addition to simply engaging with the body and environment, the brain constructs neural circuits that act as models of the body and environment. During overt sensorimotor engagement, these models are driven by efference copies in parallel with the body and environment, in order to provide expectations of the sensory feedback, and to enhance and process sensory information. These models can also be run off-line in order to produce imagery, estimate outcomes of different actions, and evaluate and develop motor plans. The framework is initially developed within the context of motor control, where it has been shown that inner models running in parallel with the body can reduce the effects of feedback delay problems. The same mechanisms can account for motor imagery as the off-line driving of the emulator via efference copies. The framework is extended to account for visual imagery as the off-line driving of an emulator of the motor-visual loop. I also show how such systems can provide for amodal spatial imagery. Perception, including visual perception, results from such models being used to form expectations of, and to interpret, sensory input. I close by briefly outlining other cognitive functions that might also be synthesized within this framework, including reasoning, theory of mind phenomena, and language. |
65c85498be307ee940976db668dae4546943a4c8 | |
761f2288b1b0cea385b0b9a89bb068593d94d6bd | 3D face recognition has become a trending research direction in both industry and academia. It inherits advantages from traditional 2D face recognition, such as the natural recognition process and a wide range of applications. Moreover, 3D face recognition systems can accurately recognize human faces even under dim light and with variant facial positions and expressions, conditions under which 2D face recognition systems would have immense difficulty operating. This paper summarizes the history and the most recent progress in the 3D face recognition research domain. The frontier research results are introduced in three categories: pose-invariant recognition, expression-invariant recognition, and occlusion-invariant recognition. To promote future research, this paper collects information about publicly available 3D face databases. This paper also lists important open problems. |
2d2b1f9446e9b4cdb46327cda32a8d9621944e29 | Participation in social networking sites has dramatically increased in recent years. Services such as Friendster, Tribe, or the Facebook allow millions of individuals to create online profiles and share personal information with vast networks of friends - and, often, unknown numbers of strangers. In this paper we study patterns of information revelation in online social networks and their privacy implications. We analyze the online behavior of more than 4,000 Carnegie Mellon University students who have joined a popular social networking site catered to colleges. We evaluate the amount of information they disclose and study their usage of the site's privacy settings. We highlight potential attacks on various aspects of their privacy, and we show that only a minimal percentage of users changes the highly permeable privacy preferences. |
e9c525679fed4dad85699d09b5ce1ccaffe8f11d | |
192687300b76bca25d06744b6586f2826c722645 | In this paper we introduce deep Gaussian process (GP) models. Deep GPs are a deep belief network based on Gaussian process mappings. The data is modeled as the output of a multivariate GP. The inputs to that Gaussian process are then governed by another GP. A single layer model is equivalent to a standard GP or the GP latent variable model (GP-LVM). We perform inference in the model by approximate variational marginalization. This results in a strict lower bound on the marginal likelihood of the model which we use for model selection (number of layers and nodes per layer). Deep belief networks are typically applied to relatively large data sets using stochastic gradient descent for optimization. Our fully Bayesian treatment allows for the application of deep models even when data is scarce. Model selection by our variational bound shows that a five layer hierarchy is justified even when modelling a digit data set containing only 150 examples. |
2cac0942a692c3dbb46bcf826d71d202ab0f2e02 | We develop a scalable deep non-parametric generative model by augmenting deep Gaussian processes with a recognition model. Inference is performed in a novel scalable variational framework where the variational posterior distributions are reparametrized through a multilayer perceptron. The key aspect of this reformulation is that it prevents the proliferation of variational parameters which otherwise grow linearly in proportion to the sample size. We derive a new formulation of the variational lower bound that allows us to distribute most of the computation in a way that enables to handle datasets of the size of mainstream deep learning tasks. We show the efficacy of the method on a variety of challenges including deep unsupervised learning and deep Bayesian optimization. |
722fcc35def20cfcca3ada76c8dd7a585d6de386 | Caffe provides multimedia scientists and practitioners with a clean and modifiable framework for state-of-the-art deep learning algorithms and a collection of reference models. The framework is a BSD-licensed C++ library with Python and MATLAB bindings for training and deploying general-purpose convolutional neural networks and other deep models efficiently on commodity architectures. Caffe fits industry and internet-scale media needs by CUDA GPU computation, processing over 40 million images a day on a single K40 or Titan GPU (approx 2 ms per image). By separating model representation from actual implementation, Caffe allows experimentation and seamless switching among platforms for ease of development and deployment from prototyping machines to cloud environments.
Caffe is maintained and developed by the Berkeley Vision and Learning Center (BVLC) with the help of an active community of contributors on GitHub. It powers ongoing research projects, large-scale industrial applications, and startup prototypes in vision, speech, and multimedia. |
fd50fa6954e1f6f78ca66f43346e7e86b196b137 | With the ever-increasing urbanization process, systematically modeling people’s activities in the urban space is being recognized as a crucial socioeconomic task. This task was nearly impossible years ago due to the lack of reliable data sources, yet the emergence of geo-tagged social media (GTSM) data sheds new light on it. Recently, there have been fruitful studies on discovering geographical topics from GTSM data. However, their high computational costs and strong distributional assumptions about the latent topics hinder them from fully unleashing the power of GTSM. To bridge the gap, we present CrossMap, a novel cross-modal representation learning method that uncovers urban dynamics with massive GTSM data. CrossMap first employs an accelerated mode seeking procedure to detect spatiotemporal hotspots underlying people’s activities. Those detected hotspots not only address spatiotemporal variations, but also largely alleviate the sparsity of the GTSM data. With the detected hotspots, CrossMap then jointly embeds all spatial, temporal, and textual units into the same space using two different strategies: one is reconstruction-based and the other is graph-based. Both strategies capture the correlations among the units by encoding their co-occurrence and neighborhood relationships, and learn low-dimensional representations to preserve such correlations. Our experiments demonstrate that CrossMap not only significantly outperforms state-of-the-art methods for activity recovery and classification, but also achieves much better efficiency. |
ce8d99e5b270d15dc09422c08c500c5d86ed3703 | Analysis of human gait helps to find an intrinsic gait signature through which ubiquitous human identification and medical disorder problems can be investigated in a broad spectrum. The gait biometric provides an unobtrusive feature by which video gait data can be captured at a larger distance without prior awareness of the subject. In this paper, a new technique has been addressed to study human gait analysis with the Kinect Xbox device. It enables us to minimize segmentation errors with an automated background subtraction technique. A closely similar human skeleton model can be generated from background-subtracted gait images, altered by covariate conditions such as change in walking speed and variations in clothing type. The gait signatures are captured from the joint angle trajectories of the left hip, left knee, right hip and right knee of the subject's skeleton model. The experimental verification on Kinect gait data has been compared with our in-house development of a sensor-based biometric suit, the Intelligent Gait Oscillation Detector (IGOD). An endeavor has been taken to investigate whether this sensor-based biometric suit can be replaced with a Kinect device for the proliferation of a robust gait identification system. The Fisher discriminant analysis has been applied on the training gait signatures to look into the discriminatory power of the feature vector. The Naïve Bayesian classifier demonstrates an encouraging classification result with estimation of errors on a limited dataset captured by the Kinect sensor. |
582ea307db25c5764e7d2ed82c4846757f4e95d7 | Function approximation is viewed from the perspective of numerical optimization in function space, rather than parameter space. A connection is made between stagewise additive expansions and steepest-descent minimization. A general gradient-descent “boosting” paradigm is developed for additive expansions based on any fitting criterion. Specific algorithms are presented for least-squares, least-absolute-deviation, and Huber-M loss functions for regression, and multi-class logistic likelihood for classification. Special enhancements are derived for the particular case where the individual additive components are decision trees, and tools for interpreting such “TreeBoost” models are presented. Gradient boosting of decision trees produces competitive, highly robust, interpretable procedures for regression and classification, especially appropriate for mining less than clean data. Connections between this approach and the boosting methods of Freund and Schapire 1996, and Friedman, Hastie, and Tibshirani 1998 are discussed. 1 Function estimation In the function estimation problem one has a system consisting of a random “output” or “response” variable $y$ and a set of random “input” or “explanatory” variables $\mathbf{x} = \{x_1, \ldots, x_n\}$. Given a “training” sample $\{y_i, \mathbf{x}_i\}_1^N$ of known $(y, \mathbf{x})$-values, the goal is to find a function $F^*(\mathbf{x})$ that maps $\mathbf{x}$ to $y$, such that over the joint distribution of all $(y, \mathbf{x})$-values, the expected value of some specified loss function $\Psi(y, F(\mathbf{x}))$ is minimized: $F^*(\mathbf{x}) = \arg\min_{F(\mathbf{x})} E_{y,\mathbf{x}}\, \Psi(y, F(\mathbf{x})) = \arg\min_{F(\mathbf{x})} E_{\mathbf{x}} \left[ E_y \big( \Psi(y, F(\mathbf{x})) \big) \mid \mathbf{x} \right]$ (1). Frequently employed loss functions $\Psi(y, F)$ include squared-error $(y - F)^2$ and absolute error $|y - F|$ for $y \in \mathbb{R}$ (regression), and negative binomial log-likelihood, $\log(1 + e^{-2yF})$, when $y \in \{-1, 1\}$ (classification). A common procedure is to take $F(\mathbf{x})$ to be a member of a parameterized class of functions $F(\mathbf{x}; \mathbf{P})$, where $\mathbf{P} = \{P_1, P_2, \ldots\}$ is a set of parameters. In this paper we focus on “additive” expansions of the form |
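The loss-minimization view above translates directly into code: each boosting round fits a base learner to the negative gradient of the loss, which for squared error is simply the residual. A minimal sketch with depth-1 regression stumps standing in for the paper's trees; the learning rate and the brute-force stump search are simplifications, not the paper's exact procedure.

```python
import numpy as np

def fit_stump(x, r):
    """Best single-split regression stump on a 1-D feature."""
    best = None
    for t in np.unique(x):
        left, right = r[x <= t], r[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        sse = ((left - left.mean())**2).sum() + ((right - right.mean())**2).sum()
        if best is None or sse < best[0]:
            best = (sse, t, left.mean(), right.mean())
    _, t, vl, vr = best
    return lambda z: np.where(z <= t, vl, vr)

def gradient_boost(x, y, n_rounds=50, lr=0.1):
    F = np.full_like(y, y.mean())       # F_0: constant model
    stumps = []
    for _ in range(n_rounds):
        r = y - F                       # negative gradient of squared error
        h = fit_stump(x, r)             # base learner fit to the residuals
        F = F + lr * h(x)               # stagewise additive update
        stumps.append(h)
    return lambda z: y.mean() + lr * sum(h(z) for h in stumps)

x = np.linspace(0, 6, 200)
y = np.sin(x) + 0.1 * np.random.default_rng(0).normal(size=x.size)
model = gradient_boost(x, y)
print(np.mean((model(x) - y) ** 2))     # training MSE
```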
6a7c63a73724c0ca68b1675e256bb8b9a35c94f4 | |
8eca169f19425c76fa72078824e6a91a5b37f470 | For the successful design of low-cost and high-performance radar systems accurate and efficient system simulation is a key requirement. In this paper we present a new versatile simulation environment for frequency-modulated continuous-wave radar systems. Besides common hardware simulation it covers integrated system simulation and concept analysis from signal synthesis to baseband. It includes a flexible scenario generator, accurate noise modeling, and efficiently delivers simulation data for development and testing of signal processing algorithms. A comparison of simulations and measurement results for an integrated 77-GHz radar prototype shows the capabilities of the simulator on two different scenarios. |
71337276460b50a2cb37959a2d843e593dc4fdcc | A novel non-isolated three-port converter (NI-TPC) is proposed interfacing one PV port, one bidirectional battery port and one load port. Single stage power conversion between any two of the three ports is achieved. The topology is derived by decoupling the bidirectional power flow path of the conventional structure into two unidirectional ones. Two of the three ports can be tightly regulated to achieve maximum power harvesting for PV or charge control for battery, and maintain the load voltage constant at the same time, while the third port is left flexible to compensate the power imbalance of the converter. Operation states are analyzed. The multi-regulator competition control strategy is presented to achieve autonomous and smooth state switching when the PV input power fluctuates. The analysis is verified by the experimental results. |
ac8877b0e87625e26f52ab75e84c534a576b1e77 | In the digital world, business executives have a heightened awareness of the strategic importance of information and information management to their companies’ value creation. This presents both leadership opportunities and challenges for CIOs. To prevent the CIO position from being marginalized and to enhance CIOs’ contribution to business value creation, they must move beyond being competent IT utility managers and play an active role in helping their companies build a strong information usage culture. The purpose of this article is to provide a better understanding of the leadership approaches that CIOs and business executives can adopt to improve their companies’ information orientation. Based on our findings from four case studies, we have created a four-quadrant leadership-positioning framework. This framework is constructed from the CIO’s perspective and indicates that a CIO may act as a leader, a follower or a nonplayer in developing the company’s information orientation to achieve its strategic focus. The article concludes with guidelines that CIOs can use to help position their leadership challenges in introducing or sustaining their companies’ information orientation initiatives and recommends specific leadership approaches depending on CIOs’ particular situations. |
5c6b51bb44c9b2297733b58daaf26af01c98fe09 | The paper systematically compares two feature extraction algorithms to mine product features commented on in customer reviews. The first approach [17] identifies candidate features by applying a set of POS patterns and pruning the candidate set based on the log likelihood ratio test. The second approach [11] applies association rule mining for identifying frequent features and a heuristic based on the presence of sentiment terms for identifying infrequent features. We evaluate the performance of the algorithms on five product specific document collections regarding consumer electronic devices. We perform an analysis of errors and discuss advantages and limitations of the algorithms. |
623fd6adaa5585707d8d7339b5125185af6e3bf1 | The present study is a quasi-experimental, prospective study of interventions for internet gaming disorder (IGD). One hundred four parents and their adolescent children were enrolled and allocated to one of the four treatment groups; 7-day Siriraj Therapeutic Residential Camp (S-TRC) alone, 8-week Parent Management Training for Game Addiction (PMT-G) alone, combined S-TRC and PMT-G, and basic psychoeducation (control). The severity of IGD was measured by the Game Addiction Screening Test (GAST). The mean difference among groups in GAST scores was statistically significant, with P values of < 0.001, 0.002, and 0.005 at 1, 3, and 6 months post-intervention, respectively. All groups showed improvement over the control group. The percentage of adolescents who remained in the addicted or probably addicted groups was less than 50% in the S-TRC, PMT-G, and combined groups. In conclusion, both S-TRC and PMT-G were effective psychosocial interventions for IGD and were superior to basic psychoeducation alone. |
aca437e9e2a453c84a38d716ca9a7a7683ae58b6 | This paper presents a new perspective for 3D scene understanding by reasoning object stability and safety using intuitive mechanics. Our approach utilizes a simple observation that, by human design, objects in static scenes should be stable in the gravity field and be safe with respect to various physical disturbances such as human activities. This assumption is applicable to all scene categories and poses useful constraints for the plausible interpretations (parses) in scene understanding. Given a 3D point cloud captured for a static scene by depth cameras, our method consists of three steps: (i) recovering solid 3D volumetric primitives from voxels; (ii) reasoning stability by grouping the unstable primitives to physically stable objects by optimizing the stability and the scene prior; and (iii) reasoning safety by evaluating the physical risks for objects under physical disturbances, such as human activity, wind or earthquakes. We adopt a novel intuitive physics model and represent the energy landscape of each primitive and object in the scene by a disconnectivity graph (DG). We construct a contact graph with nodes being 3D volumetric primitives and edges representing the supporting relations. Then we adopt a Swendson–Wang Cuts algorithm to partition the contact graph into groups, each of which is a stable object. In order to detect unsafe objects in a static scene, our method further infers hidden and situated causes (disturbances) in the scene, and then introduces intuitive physical mechanics to predict possible effects (e.g., falls) as consequences of the disturbances. In experiments, we demonstrate that the algorithm achieves a substantially better performance for (i) object segmentation, (ii) 3D volumetric recovery, and (iii) scene understanding with respect to other state-of-the-art methods. We also compare the safety prediction from the intuitive mechanics model with human judgement. |
7e9507924ceebd784503fd25128218a7119ff722 | This paper presents a visual analytics approach to analyzing a full picture of relevant topics discussed in multiple sources, such as news, blogs, or micro-blogs. The full picture consists of a number of common topics covered by multiple sources, as well as distinctive topics from each source. Our approach models each textual corpus as a topic graph. These graphs are then matched using a consistent graph matching method. Next, we develop a level-of-detail (LOD) visualization that balances both readability and stability. Accordingly, the resulting visualization enhances the ability of users to understand and analyze the matched graph from multiple perspectives. By incorporating metric learning and feature selection into the graph matching algorithm, we allow users to interactively modify the graph matching result based on their information needs. We have applied our approach to various types of data, including news articles, tweets, and blog data. Quantitative evaluation and real-world case studies demonstrate the promise of our approach, especially in support of examining a topic-graph-based full picture at different levels of detail. |
b04a503487bc6505aa8972fd690da573f771badb | Deep neural perception and control networks are likely to be a key component of self-driving vehicles. These models need to be explainable - they should provide easy-to-interpret rationales for their behavior - so that passengers, insurance companies, law enforcement, developers etc., can understand what triggered a particular behavior. Here we explore the use of visual explanations. These explanations take the form of real-time highlighted regions of an image that causally influence the network's output (steering control). Our approach is two-stage. In the first stage, we use a visual attention model to train a convolutional network end-to-end from images to steering angle. The attention model highlights image regions that potentially influence the network's output. Some of these are true influences, but some are spurious. We then apply a causal filtering step to determine which input regions actually influence the output. This produces more succinct visual explanations and more accurately exposes the network's behavior. We demonstrate the effectiveness of our model on three datasets totaling 16 hours of driving. We first show that training with attention does not degrade the performance of the end-to-end network. Then we show that the network causally cues on a variety of features that are used by humans while driving. |
4954bb26107d69eb79bb32ffa247c8731cf20fcf | Attribute based encryption (ABE) [13] determines decryption ability based on a user's attributes. In a multi-authority ABE scheme, multiple attribute-authorities monitor different sets of attributes and issue corresponding decryption keys to users, and encryptors can require that a user obtain keys for appropriate attributes from each authority before decrypting a message. Chase [5] gave a multi-authority ABE scheme using the concepts of a trusted central authority (CA) and global identifiers (GID). However, the CA in that construction has the power to decrypt every ciphertext, which seems somehow contradictory to the original goal of distributing control over many potentially untrusted authorities. Moreover, in that construction, the use of a consistent GID allowed the authorities to combine their information to build a full profile with all of a user's attributes, which unnecessarily compromises the privacy of the user. In this paper, we propose a solution which removes the trusted central authority, and protects the users' privacy by preventing the authorities from pooling their information on particular users, thus making ABE more usable in practice. |
25098861749fe9eab62fbe90c1ebeaed58c211bb | In this paper we study boosting methods from a new perspective. We build on recent work by Efron et al. to show that boosting approximately (and in some cases exactly) minimizes its loss criterion with an l1 constraint on the coefficient vector. This helps understand the success of boosting with early stopping as regularized fitting of the loss criterion. For the two most commonly used criteria (exponential and binomial log-likelihood), we further show that as the constraint is relaxed—or equivalently as the boosting iterations proceed—the solution converges (in the separable case) to an “l1-optimal” separating hyper-plane. We prove that this l1-optimal separating hyper-plane has the property of maximizing the minimal l1-margin of the training data, as defined in the boosting literature. An interesting fundamental similarity between boosting and kernel support vector machines emerges, as both can be described as methods for regularized optimization in high-dimensional predictor space, using a computational trick to make the calculation practical, and converging to margin-maximizing solutions. While this statement describes SVMs exactly, it applies to boosting only approximately. |
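A small empirical illustration of this view, assuming AdaBoost-style stumps on synthetic data: as the boosting iterations proceed, the ℓ1 norm of the coefficient vector grows, so stopping early acts like tightening an ℓ1 constraint on the fit:

```python
# Track the l1 norm of the stump coefficients across AdaBoost iterations.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.standard_normal((300, 5))
y = np.sign(X[:, 0] + 0.5 * X[:, 1])       # labels in {-1, +1}

w = np.ones(len(y)) / len(y)               # example weights
l1_norm = 0.0
for m in range(50):
    stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
    pred = stump.predict(X)
    err = float(w @ (pred != y))
    alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
    w *= np.exp(-alpha * y * pred)          # re-weight: up-weight mistakes
    w /= w.sum()
    l1_norm += abs(alpha)                   # coefficient path length so far
    if m % 10 == 0:
        print(f"iter {m:2d}  l1 norm of alphas = {l1_norm:.2f}")
```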
0825788b9b5a18e3dfea5b0af123b5e939a4f564 | Recent methods for learning vector space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic, but the origin of these regularities has remained opaque. We analyze and make explicit the model properties needed for such regularities to emerge in word vectors. The result is a new global logbilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods. Our model efficiently leverages statistical information by training only on the nonzero elements in a word-word cooccurrence matrix, rather than on the entire sparse matrix or on individual context windows in a large corpus. The model produces a vector space with meaningful substructure, as evidenced by its performance of 75% on a recent word analogy task. It also outperforms related models on similarity tasks and named entity recognition. |
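A toy sketch of the global log-bilinear objective this describes, trained only on the non-zero co-occurrence entries; the co-occurrence matrix, dimensions, weighting cap, and learning rate are all made up for illustration:

```python
# Weighted least squares on log co-occurrence counts:
# minimize sum_ij f(X_ij) * (w_i . w~_j + b_i + b~_j - log X_ij)^2
import numpy as np

rng = np.random.default_rng(0)
V, d = 50, 10                              # vocabulary size, vector dim
X = rng.poisson(1.0, size=(V, V))          # stand-in co-occurrence counts
rows, cols = np.nonzero(X)                 # train on non-zero entries only

W = rng.normal(0, 0.1, (V, d))             # word vectors
Wt = rng.normal(0, 0.1, (V, d))            # context vectors
b, bt = np.zeros(V), np.zeros(V)
f = np.minimum((X / 100.0) ** 0.75, 1.0)   # weighting function f(X_ij)

lr = 0.05
for epoch in range(20):
    for i, j in zip(rows, cols):
        err = W[i] @ Wt[j] + b[i] + bt[j] - np.log(X[i, j])
        g = f[i, j] * err
        W[i], Wt[j] = W[i] - lr * g * Wt[j], Wt[j] - lr * g * W[i]
        b[i] -= lr * g
        bt[j] -= lr * g
```

The weighting function caps the influence of very frequent pairs, and training over only the non-zero entries is what lets the model skip the bulk of the sparse matrix.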
326cfa1ffff97bd923bb6ff58d9cb6a3f60edbe5 | We investigate the properties of a metric between two distributions, the Earth Mover's Distance (EMD), for content-based image retrieval. The EMD is based on the minimal cost that must be paid to transform one distribution into the other, in a precise sense, and was first proposed for certain vision problems by Peleg, Werman, and Rom. For image retrieval, we combine this idea with a representation scheme for distributions that is based on vector quantization. This combination leads to an image comparison framework that often accounts for perceptual similarity better than other previously proposed methods. The EMD is based on a solution to the transportation problem from linear optimization, for which efficient algorithms are available, and also allows naturally for partial matching. It is more robust than histogram matching techniques, in that it can operate on variable-length representations of the distributions that avoid quantization and other binning problems typical of histograms. When used to compare distributions with the same overall mass, the EMD is a true metric. In this paper we focus on applications to color and texture, and we compare the retrieval performance of the EMD with that of other distances. |
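A minimal computation of the EMD as the transportation linear program referred to above, assuming two small equal-mass histograms and a made-up ground distance; scipy's generic LP solver stands in for the specialized transportation algorithms:

```python
# EMD as a transportation LP: minimize sum_ij D_ij f_ij over flows f >= 0
# whose row sums match the source weights and column sums the target ones.
import numpy as np
from scipy.optimize import linprog

p = np.array([0.4, 0.2, 0.4])              # source signature weights
q = np.array([0.1, 0.6, 0.3])              # target signature weights
D = np.abs(np.subtract.outer(np.arange(3), np.arange(3)))  # ground distance

n, m = len(p), len(q)
A_eq = np.zeros((n + m, n * m))
for i in range(n):
    A_eq[i, i * m:(i + 1) * m] = 1.0       # row i ships exactly p_i
for j in range(m):
    A_eq[n + j, j::m] = 1.0                # column j receives exactly q_j
res = linprog(D.ravel(), A_eq=A_eq, b_eq=np.concatenate([p, q]),
              bounds=(0, None))
print("EMD =", res.fun)                    # total mass is 1, so cost == EMD
```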
508d8c1dbc250732bd2067689565a8225013292f | A novel dual photoplethysmograph (PPG) probe and measurement system for local pulse wave velocity (PWV) is proposed and demonstrated. The developed probe design employs reflectance PPG transducers for non-invasive detection of blood pulse propagation waveforms from two adjacent measurement points (28 mm apart). Transit time delay between the continuously acquired dual pulse waveform was used for beat-to-beat local PWV measurement. An in-vivo experimental validation study was conducted on 10 healthy volunteers (8 male and 2 female, 21 to 33 years of age) to validate the PPG probe design and developed local PWV measurement system. The proposed system was able to measure carotid local PWV from multiple subjects. Beat-to-beat variation of baseline carotid PWV was less than 7.5% for 7 out of 10 subjects, a maximum beat-to-beat variation of 16% was observed during the study. Variation in beat-to-beat carotid local PWV and brachial blood pressure (BP) values during post-exercise recovery period was also examined. A statistically significant correlation between intra-subject local PWV variation and brachial BP parameters was observed (r > 0.85, p < 0.001). The results demonstrated the feasibility of proposed PPG probe for continuous beat-to-beat local PWV measurement from the carotid artery. Such a non-invasive local PWV measurement unit can be potentially used for continuous ambulatory BP measurements. |
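A back-of-the-envelope sketch of the core computation, assuming synthetic waveforms and a made-up sampling rate: estimate the transit time from the cross-correlation peak of the two PPG channels and divide the 28 mm probe spacing by it:

```python
# Local PWV = electrode spacing / transit time between the two channels.
import numpy as np

fs = 1000.0                                 # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)
pulse = np.exp(-((t - 0.3) / 0.02) ** 2)    # proximal pulse waveform
delay_true = 0.004                          # 4 ms true transit time
pulse_distal = np.exp(-((t - 0.3 - delay_true) / 0.02) ** 2)

# transit time from the cross-correlation peak
lag = np.argmax(np.correlate(pulse_distal, pulse, mode="full")) - (len(t) - 1)
transit_time = lag / fs
pwv = 0.028 / transit_time                  # 28 mm sensor spacing
print(f"transit time = {transit_time*1e3:.1f} ms, PWV = {pwv:.1f} m/s")
```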
79465f3bac4fb9f8cc66dcbe676022ddcd9c05c6 | This paper presents a method to recognize human actions from sequences of depth maps. Specifically, we employ an action graph to model explicitly the dynamics of the actions and a bag of 3D points to characterize a set of salient postures that correspond to the nodes in the action graph. In addition, we propose a simple, but effective projection-based sampling scheme to sample the bag of 3D points from the depth maps. Experimental results have shown that over 90% recognition accuracy was achieved by sampling only about 1% of the 3D points from the depth maps. Compared to the 2D silhouette based recognition, the recognition errors were halved. In addition, we demonstrate the potential of the bag of points posture model to deal with occlusions through simulation. |
46fd85775cab39ecb32cf2e41642ed2d0984c760 | The paper examines today’s debate on the legal status of AI robots, and how often scholars and policy makers confuse the legal agenthood of these artificial agents with the status of legal personhood. By taking into account current trends in the field, the paper suggests a twofold stance. First, policy makers shall seriously mull over the possibility of establishing novel forms of accountability and liability for the activities of AI robots in contracts and business law, e.g., new forms of legal agenthood in cases of complex distributed responsibility. Second, any hypothesis of granting AI robots full legal personhood has to be discarded in the foreseeable future. However, how should we deal with Sophia, which became the first AI application to receive citizenship of any country, namely, Saudi Arabia, in October 2017? Admittedly, granting someone, or something, legal personhood is—as always has been—a highly sensitive political issue that does not simply hinge on rational choices and empirical evidence. Discretion, arbitrariness, and even bizarre decisions play a role in this context. However, the normative reasons why legal systems grant human and artificial entities, such as corporations, their status, help us taking sides in today’s quest for the legal personhood of AI robots. Is citizen Sophia really conscious, or capable of suffering the slings and arrows of outrageous scholars? |
0943ed739c909d17f8686280d43d50769fe2c2f8 | We propose Action-Reaction Learning as an approach for analyzing and synthesizing human behaviour. This paradigm uncovers causal mappings between past and future events or between an action and its reaction by observing time sequences. We apply this method to analyze human interaction and to subsequently synthesize human behaviour. Using a time series of perceptual measurements, a system automatically uncovers a mapping between gestures from one human participant (an action) and a subsequent gesture (a reaction) from another participant. A probabilistic model is trained from data of the human interaction using a novel estimation technique, Conditional Expectation Maximization (CEM). The system drives a graphical interactive character which probabilistically predicts the most likely response to the user's behaviour and performs it interactively. Thus, after analyzing human interaction in a pair of participants, the system is able to replace one of them and interact with a single remaining user. |
272216c1f097706721096669d85b2843c23fa77d | We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. The method is straightforward to implement, is computationally efficient, has little memory requirements, is invariant to diagonal rescaling of the gradients, and is well suited for problems that are large in terms of data and/or parameters. The method is also appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms, on which Adam was inspired, are discussed. We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. Empirical results demonstrate that Adam works well in practice and compares favorably to other stochastic optimization methods. Finally, we discuss AdaMax, a variant of Adam based on the infinity norm. |
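A one-parameter numpy sketch of the update rule summarized above, using the commonly cited default hyper-parameters:

```python
# Adam: exponential moving averages of the gradient and its square,
# bias-corrected, then a per-step rescaled gradient update.
import numpy as np

def adam_step(theta, grad, m, v, t, alpha=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * grad           # biased first-moment estimate
    v = b2 * v + (1 - b2) * grad ** 2      # biased second-moment estimate
    m_hat = m / (1 - b1 ** t)              # bias correction
    v_hat = v / (1 - b2 ** t)
    theta = theta - alpha * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# usage: minimize f(x) = x^2, whose gradient is 2x
theta, m, v = 5.0, 0.0, 0.0
for t in range(1, 2001):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t, alpha=0.01)
print(theta)                               # approaches 0
```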
05aba481e8a221df5d8775a3bb749001e7f2525e | We present a new family of subgradient methods that dynamically incorporate knowledge of the geometry of the data observed in earlier iterations to perform more informative gradient-based learning. Metaphorically, the adaptation allows us to find needles in haystacks in the form of very predictive but rarely seen features. Our paradigm stems from recent advances in stochastic optimization and online learning which employ proximal functions to control the gradient steps of the algorithm. We describe and analyze an apparatus for adaptively modifying the proximal function, which significantly simplifies setting a learning rate and results in regret guarantees that are provably as good as the best proximal function that can be chosen in hindsight. We give several efficient algorithms for empirical risk minimization problems with common and important regularization functions and domain constraints. We experimentally study our theoretical analysis and show that adaptive subgradient methods outperform state-of-the-art, yet non-adaptive, subgradient algorithms. |
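The corresponding per-coordinate sketch for the diagonal adaptive subgradient (AdaGrad-style) update, on a made-up quadratic: coordinates with consistently large gradients get their step sizes shrunk faster, while rare but informative coordinates keep larger steps:

```python
# AdaGrad: divide each coordinate's step by the root of its accumulated
# squared gradients.
import numpy as np

def adagrad_step(theta, grad, G, eta=0.5, eps=1e-8):
    G = G + grad ** 2                      # running sum of squared gradients
    theta = theta - eta * grad / (np.sqrt(G) + eps)
    return theta, G

theta = np.array([5.0, 5.0])
G = np.zeros(2)
for _ in range(500):
    grad = np.array([2 * theta[0], 20 * theta[1]])  # f = x^2 + 10 y^2
    theta, G = adagrad_step(theta, grad, G)
print(theta)   # both coordinates approach 0 despite very different curvature
```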
f2bc77fdcea85738d1062da83d84dfa3371d378d | This paper describes a 6.25-Gb/s 14-mW transceiver in 90-nm CMOS for chip-to-chip applications. The transceiver employs a number of features for reducing power consumption, including a shared LC-PLL clock multiplier, an inductor-loaded resonant clock distribution network, a low- and programmable-swing voltage-mode transmitter, software-controlled clock and data recovery (CDR) and adaptive equalization within the receiver, and a novel PLL-based phase rotator for the CDR. The design can operate with channel attenuation of -15 dB or greater at a bit-error rate of 10-15 or less, while consuming less than 2.25 mW/Gb/s per transceiver. |
9da870dbbc32c23013ef92dd9b30db60a3cd7628 | Non-rigid registration of 3D shapes is an essential task of increasing importance as commodity depth sensors become more widely available for scanning dynamic scenes. Non-rigid registration is much more challenging than rigid registration as it estimates a set of local transformations instead of a single global transformation, and hence is prone to the overfitting issue due to underdetermination. The common wisdom in previous methods is to impose an ℓ2-norm regularization on the local transformation differences. However, the ℓ2-norm regularization tends to bias the solution towards outliers and noise with heavy-tailed distribution, which is verified by the poor goodness-of-fit of the Gaussian distribution over transformation differences. On the contrary, Laplacian distribution fits well with the transformation differences, suggesting the use of a sparsity prior. We propose a sparse non-rigid registration (SNR) method with an ℓ1-norm regularized model for transformation estimation, which is effectively solved by an alternate direction method (ADM) under the augmented Lagrangian framework. We also devise a multi-resolution scheme for robust and progressive registration. Results on both public datasets and our scanned datasets show the superiority of our method, particularly in handling large-scale deformations as well as outliers and noise. |
e36ecd4250fac29cc990330e01c9abee4c67a9d6 | A novel Ka-band dual-band dual-circularly-polarized antenna array is presented in this letter. A dual-band antenna with left-hand circular polarization for the Ka-band downlink frequencies and right-hand circular polarization for the Ka-band uplink frequencies is realized with compact annular ring slots. By applying the sequential rotation technique, a 2 × 2 subarray with good performance is obtained. This letter describes the design process and presents simulation and measurement results. |
0bb71e91b29cf9739c0e1334f905baad01b663e6 | In this paper the scheduling and transmit power control are investigated to minimize the energy consumption for battery-driven devices deployed in LTE networks. To enable efficient scheduling for a massive number of machine-type subscribers, a novel distributed scheme is proposed to let machine nodes form local clusters and communicate with the base-station through the cluster-heads. Then, uplink scheduling and power control in LTE networks are introduced and lifetime-aware solutions are investigated to be used for the communication between cluster-heads and the base-station. Besides the exact solutions, low-complexity suboptimal solutions are presented in this work which can achieve near optimal performance with much lower computational complexity. The performance evaluation shows that the network lifetime is significantly extended using the proposed protocols. |
6dc4be33a07c277ee68d42c151b4ee866108281f | The estimation of covariance matrices from compressive measurements has recently attracted considerable research efforts in various fields of science and engineering. Owing to the small number of observations, the estimation of the covariance matrices is a severely ill-posed problem. This can be overcome by exploiting prior information about the structure of the covariance matrix. This paper presents a class of convex formulations and respective solutions to the high-dimensional covariance matrix estimation problem under compressive measurements, imposing either Toeplitz, sparseness, null-pattern, low rank, or low permuted rank structure on the solution, in addition to positive semi-definiteness. To solve the optimization problems, we introduce the Co-Variance by Augmented Lagrangian Shrinkage Algorithm (CoVALSA), which is an instance of the Split Augmented Lagrangian Shrinkage Algorithm (SALSA). We illustrate the effectiveness of our approach in comparison with state-of-the-art algorithms. |
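Not the paper's CoVALSA/SALSA solver: just a naive alternating-projection sketch, on synthetic data, of why imposing Toeplitz plus positive semi-definite structure regularizes a covariance estimated from too few observations:

```python
# Alternate projections onto the Toeplitz set and the PSD cone.
import numpy as np

def project_toeplitz(S):
    # nearest Toeplitz matrix in Frobenius norm: average each diagonal
    n = S.shape[0]
    T = np.zeros_like(S)
    for k in range(-n + 1, n):
        idx = np.arange(max(0, -k), min(n, n - k))
        T[idx, idx + k] = S[idx, idx + k].mean()
    return T

def project_psd(S):
    # nearest PSD matrix: clip negative eigenvalues
    w, V = np.linalg.eigh((S + S.T) / 2)
    return (V * np.clip(w, 0, None)) @ V.T

rng = np.random.default_rng(0)
n = 8
true_cov = 0.7 ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
X = rng.multivariate_normal(np.zeros(n), true_cov, size=6)  # 6 observations
S_raw = np.cov(X, rowvar=False)        # severely ill-posed sample covariance
S = S_raw.copy()
for _ in range(50):
    S = project_psd(project_toeplitz(S))
print("error, raw       :", np.linalg.norm(S_raw - true_cov))
print("error, structured:", np.linalg.norm(S - true_cov))
```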
05357314fe2da7c2248b03d89b7ab9e358cbf01e |  |
06d0a9697a0f0242dbdeeff08ec5266b74bfe457 | We present a novel generative model for natural language tree structures in which semantic (lexical dependency) and syntactic structures are scored with separate models. This factorization provides conceptual simplicity, straightforward opportunities for separately improving the component models, and a level of performance already close to that of similar, non-factored models. Most importantly, unlike other modern parsing models, the factored model admits an extremely effective A* parsing algorithm, which makes efficient, exact inference feasible. |
8f76334bd276a2b92bd79203774f292318f42dc6 | This paper deals with a circular horn antenna fed by an L-shaped probe. The design process for broadband matching to a 50 Ω coaxial cable, and the antenna performance in axial ratio and gain, are presented. The simulation results of this paper were obtained using Ansoft HFSS 9.2. |
41c987b8a7e916d56fed2ea7311397e0f2286f3b | Unlike traditional approaches that focus on the quantization at the network level, in this work we propose to minimize the quantization effect at the tensor level. We analyze the trade-off between quantization noise and clipping distortion in low precision networks. We identify the statistics of various tensors, and derive exact expressions for the mean-square-error degradation due to clipping. By optimizing these expressions, we show marked improvements over standard quantization schemes that normally avoid clipping. For example, just by choosing the accurate clipping values, more than 40% accuracy improvement is obtained for the quantization of VGG16-BN to 4-bits of precision. Our results have many applications for the quantization of neural networks at both training and inference time. One immediate application is for a rapid deployment of neural networks to low-precision accelerators without time-consuming fine tuning or the availability of the full datasets. |
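A small numpy sweep illustrating the trade-off being optimized, assuming a Gaussian-like tensor and a 4-bit uniform quantizer: aggressive clipping is dominated by clipping distortion, loose clipping by quantization noise, and the minimum MSE sits in between:

```python
# Clip-then-quantize and measure total mean-square error per clipping value.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)           # stand-in for a layer's tensor

def quantize(x, clip, bits=4):
    levels = 2 ** bits - 1
    xc = np.clip(x, -clip, clip)           # clipping distortion happens here
    step = 2 * clip / levels
    return np.round(xc / step) * step      # quantization noise happens here

for clip in [0.5, 1.0, 2.0, 3.0, 4.0, 6.0]:
    mse = np.mean((x - quantize(x, clip)) ** 2)
    print(f"clip = {clip:3.1f}  MSE = {mse:.5f}")
# small clips: clipping distortion dominates; large clips: quantization
# noise dominates; the best MSE is at an intermediate clipping value.
```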
1bde4205a9f1395390c451a37f9014c8bea32a8a | Recognizing and localizing queried objects in range images plays an important role for robotic manipulation and navigation. Even though it has been steadily studied, it is still a challenging task for scenes with occlusion and clutter. |
242caa8e04b73f56a8d4adae36028cc176364540 | We propose a voting-based pose estimation algorithm applicable to 3D sensors, which are fast replacing their 2D counterparts in many robotics, computer vision, and gaming applications. It was recently shown that a pair of oriented 3D points, which are points on the object surface with normals, in a voting framework enables fast and robust pose estimation. Although oriented surface points are discriminative for objects with sufficient curvature changes, they are not compact and discriminative enough for many industrial and real-world objects that are mostly planar. As edges play the key role in 2D registration, depth discontinuities are crucial in 3D. In this paper, we investigate and develop a family of pose estimation algorithms that better exploit this boundary information. In addition to oriented surface points, we use two other primitives: boundary points with directions and boundary line segments. Our experiments show that these carefully chosen primitives encode more information compactly and thereby provide higher accuracy for a wide class of industrial parts and enable faster computation. We demonstrate a practical robotic bin-picking system using the proposed algorithm and a 3D sensor. |
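A sketch of the oriented point-pair primitive such voting frameworks build on, with made-up points and normals; real pipelines quantize this feature for hashing and voting, and the paper's contribution is to extend it with boundary-based primitives:

```python
# Point-pair feature of two oriented surface points: a distance and
# three angles relating the normals and the connecting segment.
import numpy as np

def point_pair_feature(p1, n1, p2, n2):
    d = p2 - p1
    dist = np.linalg.norm(d)
    du = d / dist
    angle = lambda a, b: np.arccos(np.clip(a @ b, -1.0, 1.0))
    return dist, angle(n1, du), angle(n2, du), angle(n1, n2)

p1, n1 = np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])
p2, n2 = np.array([0.1, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
print(point_pair_feature(p1, n1, p2, n2))
```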
5df318e4aac5313124571ecc7e186cba9e84a264 | The increasing number of repeated malware penetrations into official mobile app markets poses a high security threat to the confidentiality and privacy of end users' personal and sensitive information. Protecting end user devices from falling victim to adversarial apps presents a technical and research challenge for security researchers/engineers in academia and industry. Despite the security practices and analysis checks deployed at app markets, malware sneaks through the defenses and infects user devices. The evolution of malware has seen it become sophisticated and dynamically changing software usually disguised as legitimate apps. Use of highly advanced evasive techniques, such as encrypted code, obfuscation and dynamic code updates, is common practice in novel malware. With evasive usage of dynamic code updates, malware pretending to be a benign app bypasses analysis checks and reveals its malicious functionality only when installed on a user's device. This dissertation provides a thorough study of the use and usage manner of dynamic code updates in Android apps. Moreover, we propose a hybrid analysis approach, StaDART, that interleaves static and dynamic analysis to cover the inherent shortcomings of static analysis techniques when analyzing apps in the presence of dynamic code updates. Our evaluation results on real world apps demonstrate the effectiveness of StaDART. However, dynamic analysis, and hybrid analysis too for that matter, typically brings the problem of stimulating the app's behavior, which is a non-trivial challenge for automated analysis tools. To this end, we propose a backward-slicing-based targeted inter-component code paths execution technique, TeICC. TeICC leverages a backward slicing mechanism to extract code paths starting from a target point in the app. It makes use of a system dependency graph to extract code paths that involve inter-component communication. The extracted code paths are then instrumented and executed inside the app context to capture sensitive dynamic behavior, resolve dynamic code updates and obfuscation. Our evaluation of TeICC shows that it can be effectively used for targeted execution of inter-component code paths in obfuscated Android apps. Also, still not ruling out the possibility of adversaries reaching user devices, we propose an on-phone API hooking |
5ed4b57999d2a6c28c66341179e2888c9ca96a25 | In this article, we work towards the goal of developing agents that can learn to act in complex worlds. We develop a probabilistic, relational planning rule representation that compactly models noisy, nondeterministic action effects, and show how such rules can be effectively learned. Through experiments in simple planning domains and a 3D simulated blocks world with realistic physics, we demonstrate that this learning algorithm allows agents to effectively model world dynamics. |
55c769b5829ca88ba940e0050497f4956c233445 | Visual odometry can be augmented by depth information such as provided by RGB-D cameras, or from lidars associated with cameras. However, such depth information can be limited by the sensors, leaving large areas in the visual images where depth is unavailable. Here, we propose a method to utilize the depth, even if sparsely available, in recovery of camera motion. In addition, the method utilizes depth by structure from motion using the previously estimated motion, and salient visual features for which depth is unavailable. Therefore, the method is able to extend RGBD visual odometry to large scale, open environments where depth often cannot be sufficiently acquired. The core of our method is a bundle adjustment step that refines the motion estimates in parallel by processing a sequence of images, in a batch optimization. We have evaluated our method in three sensor setups, one using an RGB-D camera, and two using combinations of a camera and a 3D lidar. Our method is rated #4 on the KITTI odometry benchmark irrespective of sensing modality—compared to stereo visual odometry methods which retrieve depth by triangulation. The resulting average position error is 1.14% of the distance traveled. |
0c668ee24d58ecca165f788d40765e79ed615471 | |
9cccd211c9208f790d71fa5b3499d8f827744aa0 | Various education-oriented problems are resolved through Educational Data Mining (EDM), one of the most prevalent applications of data mining. One of the crucial goals of this paper is to study the most recent works carried out on EDM and analyze their merits and drawbacks. This paper also highlights the cumulative results of the various data mining practices and techniques applied in the surveyed articles, thereby suggesting future directions for EDM research. In addition, an experiment was conducted to evaluate certain classification and clustering algorithms and identify the most reliable ones for future research. |
197a7fc2f8d57d93727b348851b59b34ce990afd | SRILM is a collection of C++ libraries, executable programs, and helper scripts designed to allow both production of and experimentation with statistical language models for speech recognition and other applications. SRILM is freely available for noncommercial purposes. The toolkit supports creation and evaluation of a variety of language model types based on N-gram statistics, as well as several related tasks, such as statistical tagging and manipulation of N-best lists and word lattices. This paper summarizes the functionality of the toolkit and discusses its design and implementation, highlighting ease of rapid prototyping, reusability, and combinability of tools. |
12f661171799cbd899e1ff4ae0a7e2170c3d547b | Statistical language models estimate the distribution of various natural language phenomena for the purpose of speech recognition and other language technologies. Since the first significant model was proposed in 1980, many attempts have been made to improve the state of the art. We review them, point to a few promising directions, and argue for a Bayesian approach to integration of linguistic theories with data. |
395f4b41578c3ff5139ddcf9e90eb60801b50394 | The CMU Statistical Language Modeling toolkit was first released in order to facilitate the construction and testing of bigram and trigram language models. It is currently in use in many academic, government, and industrial laboratories in many countries. This paper presents a new version of the toolkit. We outline the conventional language modeling technology as implemented in the toolkit and describe the extra efficiency and functionality that the new toolkit provides as compared to previous software for this task. Finally, we give an example of the use of the toolkit in constructing and testing a simple language model. |
0b8f4edf1a7b4d19d47d419f41cde432b9708ab7 | We present a technology for the manufacturing of silicon-filled integrated waveguides enabling the realization of low-loss high-performance millimeter-wave passive components and high gain array antennas, thus facilitating the realization of highly integrated millimeter-wave systems. The proposed technology employs deep reactive-ion-etching (DRIE) techniques with aluminum metallization steps to integrate rectangular waveguides with high geometrical accuracy and continuous metallic side walls. Measurement results of integrated rectangular waveguides are reported exhibiting losses of 0.15 dB/λg at 105 GHz. Moreover, ultra-wideband coplanar to waveguide transitions with 0.6 dB insertion loss at 105 GHz and return loss better than 15 dB from 80 to 110 GHz are described and characterized. The design, integration and measured performance of a frequency scanning slotted-waveguide array antenna is reported, achieving a measured beam steering capability of 82° within a band of 23 GHz and a half-power beam-width (HPBW) of 8.5° at 96 GHz. Finally, to showcase the capability of this technology to facilitate low-cost mm-wave system level integration, a frequency modulated continuous wave (FMCW) transmit-receive IC for imaging radar applications is flip-chip mounted directly on the integrated array and experimentally characterized. |
31864e13a9b3473ebb07b4f991f0ae3363517244 | This paper describes a computational approach to edge detection. The success of the approach depends on the definition of a comprehensive set of goals for the computation of edge points. These goals must be precise enough to delimit the desired behavior of the detector while making minimal assumptions about the form of the solution. We define detection and localization criteria for a class of edges, and present mathematical forms for these criteria as functionals on the operator impulse response. A third criterion is then added to ensure that the detector has only one response to a single edge. We use the criteria in numerical optimization to derive detectors for several common image features, including step edges. On specializing the analysis to step edges, we find that there is a natural uncertainty principle between detection and localization performance, which are the two main goals. With this principle we derive a single operator shape which is optimal at any scale. The optimal detector has a simple approximate implementation in which edges are marked at maxima in gradient magnitude of a Gaussian-smoothed image. We extend this simple detector using operators of several widths to cope with different signal-to-noise ratios in the image. We present a general method, called feature synthesis, for the fine-to-coarse integration of information from operators at different scales. Finally we show that step edge detector performance improves considerably as the operator point spread function is extended along the edge. |
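A sketch of the detector's core as described: Gaussian smoothing followed by gradient magnitude, with a crude axis-aligned maxima test and a single threshold standing in for full non-maximum suppression and hysteresis; the image and parameters are illustrative:

```python
# Mark edges at (approximate) maxima in gradient magnitude of a
# Gaussian-smoothed image.
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

rng = np.random.default_rng(0)
img = np.zeros((64, 64))
img[:, 32:] = 1.0                            # a vertical step edge
img += 0.1 * rng.standard_normal(img.shape)  # noise

smooth = gaussian_filter(img, sigma=2.0)     # scale of the operator
gx, gy = sobel(smooth, axis=1), sobel(smooth, axis=0)
mag = np.hypot(gx, gy)                       # gradient magnitude

# crude stand-in for non-maximum suppression: keep pixels that beat both
# horizontal neighbours (the true detector tests along the gradient direction)
edges = np.zeros_like(mag, dtype=bool)
edges[:, 1:-1] = (mag[:, 1:-1] >= mag[:, :-2]) & (mag[:, 1:-1] >= mag[:, 2:])
edges &= mag > 0.5 * mag.max()               # single threshold for brevity
print("detected edge columns:", np.unique(np.nonzero(edges)[1]))
```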
b41c45b2ca0c38a4514f0779395ebdf3d34cecc0 | |
7e19f7a82528fa79349f1fc61c7f0d35a9ad3a5e | Faces represent complex, multidimensional, meaningful visual stimuli and developing a computational model for face recognition is difficult [42]. We present a hybrid neural network solution which compares favorably with other methods. The system combines local image sampling, a self-organizing map neural network, and a convolutional neural network. The self-organizing map provides a quantization of the image samples into a topological space where inputs that are nearby in the original space are also nearby in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image sample, and the convolutional neural network provides for partial invariance to translation, rotation, scale, and deformation. The convolutional network extracts successively larger features in a hierarchical set of layers. We present results using the Karhunen-Loève transform in place of the self-organizing map, and a multi-layer perceptron in place of the convolutional network. The Karhunen-Loève transform performs almost as well (5.3% error versus 3.8%). The multi-layer perceptron performs very poorly (40% error versus 3.8%). The method is capable of rapid classification, requires only fast, approximate normalization and preprocessing, and consistently exhibits better classification performance than the eigenfaces approach [42] on the database considered as the number of images per person in the training database is varied from 1 to 5. With 5 images per person the proposed method and eigenfaces result in 3.8% and 10.5% error respectively. The recognizer provides a measure of confidence in its output and classification error approaches zero when rejecting as few as 10% of the examples. We use a database of 400 images of 40 individuals which contains quite a high degree of variability in expression, pose, and facial details. We analyze computational complexity and discuss how new classes could be added to the trained recognizer. |
5dd9dc47c4acc9ea3e597751194db52119398ac6 | The shift register is a type of sequential logic circuit which is mostly used for storing digital data or transferring data in the form of binary numbers in radio frequency identification (RFID) applications to improve the security of the system. A power-efficient shift register utilizing a new flip-flop with an implicit pulse-triggered structure is presented in this article. The proposed flip-flop has features of high performance and low power. It is composed of a sampling circuit implemented by five transistors, a C-element for rise and fall paths, and a keeper stage. The speed is enhanced by executing four clocked transistors together with a transition condition technique. The simulation result confirms that the proposed topology consumes the lowest amounts of power, 30.1997 and 22.7071 nW for parallel-in/parallel-out (PIPO) and serial-in/serial-out (SISO) shift registers respectively, covering a 22 μm² chip area. The overall design consists of only 16 transistors and is simulated in 130 nm complementary metal-oxide-semiconductor (CMOS) technology with a 1.2 V power supply. |
d76beb59a23c01c9bec1940c4cec1ca26e00480a | The Air Force Research Laboratory has implemented and evaluated two brain-computer interfaces (BCI's) that translate the steady-state visual evoked response into a control signal for operating a physical device or computer program. In one approach, operators self-regulate the brain response; the other approach uses multiple evoked responses. |
8a65dc637d39c14323dccd5cbcc08eed2553880e | This article describes the initial period (1994–2001) of an ongoing action research project to develop health information systems to support district management in South Africa. The reconstruction of the health sector in post-apartheid South Africa strives for equity in health service delivery and the building of a decentralized structure based on health districts. In terms of information systems (IS) development, this reform process translates into standardization of health data in ways that inscribe the goals of the new South Africa by enhancing local control and integration of information handling. We describe our approach to action research and use concepts from actor-network and structuration theories in analyzing the case material. In the detailed description and analysis of the process of IS development provided, we focus on the need to balance standardization and local flexibility (localization); standardization is thus seen as bottom-up alignment of an array of heterogeneous actors. Building on a social system model of information systems, we conceptualize the IS design strategy developed and used as the cultivation of processes whereby these actors are translating and aligning their interests. We develop a modular hierarchy of global and local datasets as a framework within which the tensions between standardization and localization may be understood and addressed. Finally, we discuss the possible relevance of the results of the research in other countries. |
600434c6255c160b53ad26912c1c0b96f0d48ce6 | Random Forest is a computationally efficient technique that can operate quickly over large datasets. It has been used in many recent research projects and real-world applications in diverse domains. However, the associated literature provides almost no directions about how many trees should be used to compose a Random Forest. The research reported here analyzes whether there is an optimal number of trees within a Random Forest, i.e., a threshold from which increasing the number of trees would bring no significant performance gain, and would only increase the computational cost. Our main conclusions are: as the number of trees grows, it does not always mean the performance of the forest is significantly better than previous forests (fewer trees), and doubling the number of trees is worthless. It is also possible to state there is a threshold beyond which there is no significant gain, unless a huge computational environment is available. In addition, it was found an experimental relationship for the AUC gain when doubling the number of trees in any forest. Furthermore, as the number of trees grows, the full set of attributes tend to be used within a Random Forest, which may not be interesting in the biomedical domain. Additionally, datasets’ density-based metrics proposed here probably capture some aspects of the VC dimension on decision trees and low-density datasets may require large capacity machines whilst the opposite also seems to be true. |
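The paper's question is easy to reproduce in miniature with scikit-learn on a synthetic dataset: double the number of trees repeatedly and watch the AUC gain flatten out; the dataset and grid below are illustrative:

```python
# Sweep the number of trees in a Random Forest and report test AUC.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=30, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for n_trees in [2, 4, 8, 16, 32, 64, 128, 256]:   # doubling each time
    rf = RandomForestClassifier(n_estimators=n_trees, random_state=0,
                                n_jobs=-1).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1])
    print(f"{n_trees:4d} trees  AUC = {auc:.4f}")
```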
4cbadc5f4afe9ac178fd14a6875ef1956a528313 | During the last several years the advancements in technology made it possible for small sensor nodes to communicate wirelessly with the rest of the Internet. With this achievement the question of securing such IP-enabled Wireless Sensor Networks (IP-WSNs) emerged and has been an important research topic since. In this thesis we discuss our implementation of TLS and DTLS protocols using a pre-shared key cipher suite (TLS_PSK_WITH_AES_128_CCM_8) for the Contiki operating system. Apart from simply adding a new protocol to the set of protocols supported by the Contiki OS, this project allows us to evaluate how suitable the transport-layer security and pre-shared key management schemes are for IP-WSNs. |
0ab99aa04e3a8340a7552355fb547374a5604b24 | Deep learning is a growing trend in general data analysis and has been termed one of the 10 breakthrough technologies of 2013 [1]. Deep learning is an improvement of artificial neural networks, consisting of more layers that permit higher levels of abstraction and improved predictions from data [2]. To date, it is emerging as the leading machine-learning tool in the general imaging and computer vision domains. In particular, convolutional neural networks (CNNs) have proven to be powerful tools for a broad range of computer vision tasks. Deep CNNs automatically learn mid-level and high-level abstractions obtained from raw data (e.g., images). Recent results indicate that the generic descriptors extracted from CNNs are extremely effective in object recognition and localization in natural images. Medical image analysis groups across the world are quickly entering the field and applying CNNs and other deep learning methodologies to a wide variety of applications. Promising results are emerging. In medical imaging, the accurate diagnosis and/or assessment of a disease depends on both image acquisition and image interpretation. Image acquisition has improved substantially over recent years, with devices acquiring data at faster rates and increased resolution. The image interpretation process, however, has only recently begun to benefit from computer technology. Most interpretations of medical images are performed by physicians; however, image interpretation by humans is limited due to its subjectivity, large variations across interpreters, and fatigue. Many diagnostic tasks require an initial search process to detect abnormalities, and to quantify measurements and changes over time. Computerized tools, specifically image analysis and machine learning, are the key enablers to improve diagnosis, by facilitating identification of the findings that require treatment and to support the expert's workflow. Among these tools, deep learning is rapidly proving to be the state-of-the-art foundation, leading to improved accuracy. It has also opened up new frontiers in data analysis with rates of progress not before experienced. |
5343b6d5c9f3a2c4d9648991162a6cc13c1c5e70 | Unsupervised image translation, which aims at translating two independent sets of images, is challenging in discovering the correct correspondences without paired data. Existing works build upon Generative Adversarial Networks (GANs) such that the distribution of the translated images is indistinguishable from the distribution of the target set. However, such set-level constraints cannot learn the instance-level correspondences (e.g. aligned semantic parts in object transfiguration tasks). This limitation often results in false positives (e.g. geometric or semantic artifacts), and further leads to the mode collapse problem. To address the above issues, we propose a novel framework for instance-level image translation by Deep Attention GAN (DA-GAN). Such a design enables DA-GAN to decompose the task of translating samples from two sets into translating instances in a highly-structured latent space. Specifically, we jointly learn a deep attention encoder, and the instance-level correspondences could be consequently discovered through attending on the learned instances. Therefore, the constraints could be exploited on both the set level and the instance level. Comparisons against several state-of-the-art methods demonstrate the superiority of our approach, and the broad application capability, e.g., pose morphing, data augmentation, etc., pushes the margin of the domain translation problem. |
f1526054914997591ffdb8cd523bea219ce7a26e | In March this year, the American Statistical Association (ASA) posted a statement on the correct use of P-values, in response to a growing concern that the P-value is commonly misused and misinterpreted. We aim to translate these warnings given by the ASA into a language more easily understood by clinicians and researchers without a deep background in statistics. Moreover, we intend to illustrate the limitations of P-values, even when used and interpreted correctly, and bring more attention to the clinical relevance of study findings using two recently reported studies as examples. We argue that P-values are often misinterpreted. A common mistake is saying that P < 0.05 means that the null hypothesis is false, and P ≥ 0.05 means that the null hypothesis is true. The correct interpretation of a P-value of 0.05 is that if the null hypothesis were indeed true, a similar or more extreme result would occur 5% of the time upon repeating the study in a similar sample. In other words, the P-value informs about the likelihood of the data given the null hypothesis, and not the other way around. A possible alternative related to the P-value is the confidence interval (CI). It provides more information on the magnitude of an effect and the imprecision with which that effect was estimated. However, there is no magic bullet to replace P-values and stop erroneous interpretation of scientific results. Scientists and readers alike should make themselves familiar with the correct, nuanced interpretation of statistical tests, P-values and CIs. |
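A quick simulation of the correct interpretation stated above, assuming two groups drawn from identical distributions: when the null hypothesis is true, roughly 5% of repeated studies produce P < 0.05:

```python
# Under the null, p-values are uniform, so about 5% fall below 0.05.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
pvals = []
for _ in range(10_000):                 # 10,000 "studies" under the null
    a = rng.normal(0, 1, 30)            # two groups with identical means
    b = rng.normal(0, 1, 30)
    pvals.append(ttest_ind(a, b).pvalue)
print("fraction with P < 0.05:", np.mean(np.array(pvals) < 0.05))  # ~0.05
```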
50ca90bc847694a7a2d9a291f0d903a15e408481 | We propose a generalized approach to human gesture recognition based on multiple data modalities such as depth video, articulated pose and speech. In our system, each gesture is decomposed into large-scale body motion and local subtle movements such as hand articulation. The idea of learning at multiple scales is also applied to the temporal dimension, such that a gesture is considered as a set of characteristic motion impulses, or dynamic poses. Each modality is first processed separately in short spatio-temporal blocks, where discriminative data-specific features are either manually extracted or learned. Finally, we employ a Recurrent Neural Network for modeling large-scale temporal dependencies, data fusion and ultimately gesture classification. Our experiments on the 2013 Challenge on Multimodal Gesture Recognition dataset have demonstrated that using multiple modalities at several spatial and temporal scales leads to a significant increase in performance allowing the model to compensate for errors of individual classifiers as well as noise in the separate channels. |
586d7b215d1174f01a1dc2f6abf6b2eb0f740ab6 | We present an unsupervised method for learning a hierarchy of sparse feature detectors that are invariant to small shifts and distortions. The resulting feature extractor consists of multiple convolution filters, followed by a feature-pooling layer that computes the max of each filter output within adjacent windows, and a point-wise sigmoid non-linearity. A second level of larger and more invariant features is obtained by training the same algorithm on patches of features from the first level. Training a supervised classifier on these features yields 0.64% error on MNIST, and 54% average recognition rate on Caltech 101 with 30 training samples per category. While the resulting architecture is similar to convolutional networks, the layer-wise unsupervised training procedure alleviates the over-parameterization problems that plague purely supervised learning procedures, and yields good performance with very few labeled training samples. |
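A forward-pass sketch of one stage as described (convolution filters, the max of each filter output within adjacent windows, then a point-wise sigmoid), with random filters standing in for the learned sparse feature detectors:

```python
# One convolution + max-pooling + sigmoid stage on a made-up image.
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
image = rng.standard_normal((28, 28))
filters = rng.standard_normal((4, 5, 5))      # 4 (stand-in) 5x5 filters

def stage(image, filters, pool=2):
    maps = []
    for f in filters:
        c = convolve2d(image, f, mode="valid")        # filter response map
        h = c.shape[0] // pool * pool                 # crop to pool multiple
        w = c.shape[1] // pool * pool
        c = c[:h, :w].reshape(h // pool, pool, w // pool, pool).max(axis=(1, 3))
        maps.append(1.0 / (1.0 + np.exp(-c)))         # point-wise sigmoid
    return np.stack(maps)

features = stage(image, filters)
print(features.shape)   # (4, 12, 12) for a 28x28 input with 5x5 filters
```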
80bcfbb1a30149e636ff1a08aeb715dad6dd9285 | The design and performance of two high efficiency Ka-band power amplifier MMICs utilizing a 0.15 μm GaN HEMT process technology is presented. Measured in-fixture continuous wave (CW) results for the 3-stage balanced amplifier demonstrate up to 11 W of output power and 30% power added efficiency (PAE) at 30 GHz. The 3-stage single-ended design produced over 6 W of output power and up to 34% PAE. The die sizes for the balanced and single-ended MMICs are 3.24×3.60 mm² and 1.74×3.24 mm², respectively. |
284de726e700a6c52f9f8fb9f3de4d4b0ff778bb | Recurrent neural networks (RNNs) are naturally suitable for speech recognition because of their ability of utilizing dynamically changing temporal information. Deep RNNs have been argued to be able to model temporal relationships at different time granularities, but suffer vanishing gradient problems. In this paper, we extend stacked long short-term memory (LSTM) RNNs by using grid LSTM blocks that formulate computation along not only the temporal dimension, but also the depth dimension, in order to alleviate this issue. Moreover, we prioritize the depth dimension over the temporal one to provide the depth dimension more updated information, since the output from it will be used for classification. We call this model the prioritized Grid LSTM (pGLSTM). Extensive experiments on four large datasets (AMI, HKUST, GALE, and MGB) indicate that the pGLSTM outperforms alternative deep LSTM models, beating stacked LSTMs with 4% to 7% relative improvement, and achieve new benchmarks among uni-directional models on all datasets. |