_id: string (length 40)
text: string (length 0 to 10k)
1b2a0e8af5c1f18e47e71244973ce4ace4ac6034
Hierarchical Pitman-Yor Process (HPYP) priors are compelling methods for learning language models, outperforming point-estimate based methods. However, these models remain unpopular due to computational and statistical inference issues, such as memory and time usage, as well as poor mixing of the sampler. In this work we propose a novel framework which represents the HPYP model compactly using compressed suffix trees. We then develop an efficient approximate inference scheme in this framework that has a much lower memory footprint than the full HPYP and is fast at inference time. The experimental results illustrate that our model can be built on significantly larger datasets than previous HPYP models, while being several orders of magnitude smaller, fast to train and query, and outperforming the perplexity of state-of-the-art Modified Kneser-Ney count-based LM smoothing by up to 15%.
6c9bd4bd7e30470e069f8600dadb4fd6d2de6bc1
This paper describes a new language resource of events and semantic roles that characterize real-world situations. Narrative schemas contain sets of related events (edit and publish), a temporal ordering of the events (edit before publish), and the semantic roles of the participants (authors publish books). This type of world knowledge was central to early research in natural language understanding. Scripts were one of the main formalisms, representing common sequences of events that occur in the world. Unfortunately, most of this knowledge was hand-coded and time consuming to create. Current machine learning techniques, as well as a new approach to learning through coreference chains, have allowed us to automatically extract rich event structure from open domain text in the form of narrative schemas. The narrative schema resource described in this paper contains approximately 5000 unique events combined into schemas of varying sizes. We describe the resource, how it is learned, and a new evaluation of the coverage of these schemas over unseen documents.
8e508720cdb495b7821bf6e43c740eeb5f3a444a
Many applications in speech, robotics, finance, and biology deal with sequential data, where ordering matters and recurrent structures are common. However, this structure cannot be easily captured by standard kernel functions. To model such structure, we propose expressive closed-form kernel functions for Gaussian processes. The resulting model, GP-LSTM, fully encapsulates the inductive biases of long short-term memory (LSTM) recurrent networks, while retaining the non-parametric probabilistic advantages of Gaussian processes. We learn the properties of the proposed kernels by optimizing the Gaussian process marginal likelihood using a new provably convergent semi-stochastic gradient procedure, and exploit the structure of these kernels for scalable training and prediction. This approach provides a practical representation for Bayesian LSTMs. We demonstrate state-of-the-art performance on several benchmarks, and thoroughly investigate a consequential autonomous driving application, where the predictive uncertainties provided by GP-LSTM are uniquely valuable.
033b62167e7358c429738092109311af696e9137
This paper presents a simple unsupervised learning algorithm for classifying reviews as recommended (thumbs up) or not recommended (thumbs down). The classification of a review is predicted by the average semantic orientation of the phrases in the review that contain adjectives or adverbs. A phrase has a positive semantic orientation when it has good associations (e.g., “subtle nuances”) and a negative semantic orientation when it has bad associations (e.g., “very cavalier”). In this paper, the semantic orientation of a phrase is calculated as the mutual information between the given phrase and the word “excellent” minus the mutual information between the given phrase and the word “poor”. A review is classified as recommended if the average semantic orientation of its phrases is positive. The algorithm achieves an average accuracy of 74% when evaluated on 410 reviews from Epinions, sampled from four different domains (reviews of automobiles, banks, movies, and travel destinations). The accuracy ranges from 84% for automobile reviews to 66% for movie reviews.
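A minimal sketch of the scoring rule just described; the `hits` and `near` callables are hypothetical stand-ins for the paper's AltaVista hit counts and NEAR queries, and the 0.01 smoothing constant follows the paper:

```python
import math

def semantic_orientation(phrase, hits, near):
    """SO(phrase) = PMI(phrase, 'excellent') - PMI(phrase, 'poor'),
    computed from co-occurrence counts. `hits(term)` and `near(a, b)`
    are assumed count functions over a large corpus."""
    eps = 0.01  # smoothing to avoid division by zero
    return math.log2(
        (near(phrase, "excellent") * hits("poor") + eps) /
        (near(phrase, "poor") * hits("excellent") + eps)
    )

def classify_review(phrases, hits, near):
    # A review is recommended if the average semantic orientation
    # of its adjective/adverb phrases is positive.
    avg = sum(semantic_orientation(p, hits, near) for p in phrases) / len(phrases)
    return "thumbs up" if avg > 0 else "thumbs down"
```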
0eaa75861d9e17f2c95bd3f80f48db95bf68a50c
Electromigration (EM) is one of the key concerns going forward for interconnect reliability in integrated circuit (IC) design. Although analog designers have been aware of the EM problem for some time, digital circuits are also being affected now. This talk addresses basic design issues and their effects on electromigration during interconnect physical design. The intention is to increase current density limits in the interconnect by adopting electromigration-inhibiting measures, such as short-length and reservoir effects. Exploitation of these effects at the layout stage can provide partial relief of EM concerns in future IC design flows.
45e2e2a327ea696411b212492b053fd328963cc3
BACKGROUND Mobile apps hold promise for serving as a lifestyle intervention in public health to promote wellness and attenuate chronic conditions, yet little is known about how individuals with chronic illness use or perceive mobile apps. OBJECTIVE The objective of this study was to explore behaviors and perceptions about mobile phone-based apps for health among individuals with chronic conditions. METHODS Data were collected from a national cross-sectional survey of 1604 mobile phone users in the United States that assessed mHealth use, beliefs, and preferences. This study examined health app use, reason for download, and perceived efficacy by chronic condition. RESULTS Among participants, having between 1 and 5 apps was reported by 38.9% (314/807) of respondents without a condition and by 6.6% (24/364) of respondents with hypertension. Use of health apps was reported 2 times or more per day by 21.3% (172/807) of respondents without a condition, 2.7% (10/364) with hypertension, 13.1% (26/198) with obesity, 12.3% (20/163) with diabetes, 12.0% (32/267) with depression, and 16.6% (53/319) with high cholesterol. Results of the logistic regression did not indicate a significant difference in health app download between individuals with and without chronic conditions (P>.05). Compared with individuals with poor health, health app download was more likely among those with self-reported very good health (odds ratio [OR] 3.80, 95% CI 2.38-6.09, P<.001) and excellent health (OR 4.77, 95% CI 2.70-8.42, P<.001). Similarly, compared with individuals who report never or rarely engaging in physical activity, health app download was more likely among those who report exercise 1 day per week (OR 2.47, 95% CI 1.6-3.83, P<.001), 2 days per week (OR 4.77, 95% CI 3.27-6.94, P<.001), 3 to 4 days per week (OR 5.00, 95% CI 3.52-7.10, P<.001), and 5 to 7 days per week (OR 4.64, 95% CI 3.11-6.92, P<.001). All logistic regression results controlled for age, sex, and race or ethnicity. CONCLUSIONS Results from this study suggest that individuals with poor self-reported health and low rates of physical activity, arguably those who stand to benefit most from health apps, were least likely to report downloading and using these health tools.
1935e0986939ea6ef2afa01eeef94dbfea6fb6da
Mean-variance portfolio analysis provided the first quantitative treatment of the tradeoff between profit and risk. We describe in detail the interplay between objective and constraints in a number of single-period variants, including semivariance models. Particular emphasis is laid on avoiding the penalization of overperformance. The results are then used as building blocks in the development and theoretical analysis of multiperiod models based on scenario trees. A key property is the possibility of removing surplus money in future decisions, yielding approximate downside risk minimization.
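For reference, the single-period mean-variance tradeoff discussed above is conventionally written as follows; this is the textbook risk-aversion form with weight vector x, mean return vector mu and covariance Sigma, not necessarily the exact variant analyzed in the paper:

```latex
% Markowitz mean-variance portfolio selection (single period):
% maximize expected return minus a risk penalty over weights x.
\max_{x \in \mathbb{R}^n} \; \mu^{\top} x \;-\; \lambda\, x^{\top} \Sigma x
\qquad \text{s.t.} \qquad \mathbf{1}^{\top} x = 1, \quad x \ge 0.
% A semivariance variant penalizes only shortfall below a target t,
% avoiding the penalization of overperformance mentioned above:
% \min_x \; \mathbb{E}\!\left[ \max(0,\, t - r^{\top} x)^2 \right].
```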
0e1431fa42d76c44911b07078610d4b9254bd4ce
A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16×16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
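A minimal numpy sketch of the idea: diagonalize the centered kernel matrix instead of the covariance in feature space. The polynomial kernel below stands in for the pixel-product feature space; this is an illustrative helper, not the authors' code:

```python
import numpy as np

def kernel_pca(X, k, kernel=lambda a, b: (a @ b + 1.0) ** 5):
    """Project n samples (rows of X) onto the top-k nonlinear principal
    components defined by a kernel, here a degree-5 polynomial kernel."""
    n = X.shape[0]
    K = np.array([[kernel(x, y) for y in X] for x in X])
    # Center the kernel matrix, i.e. center the data in feature space.
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J
    vals, vecs = np.linalg.eigh(Kc)                    # ascending order
    vals, vecs = vals[::-1][:k], vecs[:, ::-1][:, :k]  # keep top-k
    # Normalize expansion coefficients so each component has unit norm.
    alphas = vecs / np.sqrt(np.maximum(vals, 1e-12))
    return Kc @ alphas   # projections of the training points
```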
292eee24017356768f1f50b72701ea636dba7982
We present a method for automatic object localization and recognition in 3D point clouds representing outdoor urban scenes. The method is based on the implicit shape models (ISM) framework, which recognizes objects by voting for their center locations. It requires only a few training examples per class, which is an important property for practical use. We also introduce and evaluate an improved version of the spin image descriptor, more robust to point density variation and uncertainty in normal direction estimation. Our experiments reveal a significant impact of these modifications on the recognition performance. We compare our results against the state-of-the-art method and get significant improvement in both precision and recall on the Ohio dataset, consisting of combined aerial and terrestrial LiDAR scans of 150,000 m² of urban area in total.
922b5eaa5ca03b12d9842b7b84e0e420ccd2feee
AN IMPORTANT class of theoretical and practical problems in communication and control is of a statistical nature. Such problems are: (i) Prediction of random signals; (ii) separation of random signals from random noise; (iii) detection of signals of known form (pulses, sinusoids) in the presence of random noise. In his pioneering work, Wiener [1]3 showed that problems (i) and (ii) lead to the so-called Wiener-Hopf integral equation; he also gave a method (spectral factorization) for the solution of this integral equation in the practically important special case of stationary statistics and rational spectra. Many extensions and generalizations followed Wiener's basic work. Zadeh and Ragazzini solved the finite-memory case [2]. Concurrently and independently of Bode and Shannon [3], they also gave a simplified method [2] of solution. Booton discussed the nonstationary Wiener-Hopf equation [4]. These results are now in standard texts [5-6]. A somewhat different approach along these main lines has been given recently by Darlington [7]. For extensions to sampled signals, see, e.g., Franklin [8], Lees [9]. Another approach based on the eigenfunctions of the Wiener-Hopf equation (which applies also to nonstationary problems whereas the preceding methods in general don't), has been pioneered by Davis [10] and applied by many others, e.g., Shinbrot [11], Blum [12], Pugachev [13], Solodovnikov [14]. In all these works, the objective is to obtain the specification of a linear dynamic system (Wiener filter) which accomplishes the prediction, separation, or detection of a random signal.4
1 This research was supported in part by the U.S. Air Force Office of Scientific Research under Contract AF 49(638)-382.
2 7212 Bellona Ave.
3 Numbers in brackets designate References at end of paper.
4 Of course, in general these tasks may be done better by nonlinear filters. At present, however, little or nothing is known about how to obtain (both theoretically and practically) these nonlinear filters.
Contributed by the Instruments and Regulators Division and presented at the Instruments and Regulators Conference, March 29-April 2, 1959, of THE AMERICAN SOCIETY OF MECHANICAL ENGINEERS. NOTE: Statements and opinions advanced in papers are to be understood as individual expressions of their authors and not those of the Society. Manuscript received at ASME Headquarters, February 24, 1959. Paper No. 59-IRD-11. A New Approach to Linear Filtering and Prediction Problems
e50a316f97c9a405aa000d883a633bd5707f1a34
The experimental evidence accumulated over the past 20 years indicates that text indexing systems based on the assignment of appropriately weighted single terms produce retrieval results that are superior to those obtainable with other more elaborate text representations. These results depend crucially on the choice of effective term-weighting systems. This article summarizes the insights gained in automatic term weighting, and provides baseline single-term-indexing models with which other more elaborate content analysis procedures can be compared. 1. AUTOMATIC TEXT ANALYSIS In the late 1950s, Luhn [1] first suggested that automatic text retrieval systems could be designed based on a comparison of content identifiers attached both to the stored texts and to the users' information queries. Typically, certain words extracted from the texts of documents and queries would be used for content identification; alternatively, the content representations could be chosen manually by trained indexers familiar with the subject areas under consideration and with the contents of the document collections. In either case, the documents would be represented by term vectors of the form D = (t_i, t_j, ..., t_p) (1) where each t_k identifies a content term assigned to some sample document D. Analogously, the information requests, or queries, would be represented either in vector form, or in the form of Boolean statements. Thus, a typical query Q might be formulated as Q = (q_a, q_b, ..., q_r) (2)
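As a concrete instance of the single-term weighting the article surveys, here is a hedged sketch of tf-idf document vectors; the article compares many variants that differ in term-frequency, idf, and normalization components, so this is only the canonical combination:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Weight each term t in document D by tf(t, D) * log(N / df(t)),
    where N is the number of documents and df the document frequency.
    `docs` is a list of token lists; returns one dict per document."""
    N = len(docs)
    df = Counter(t for doc in docs for t in set(doc))
    return [{t: tf * math.log(N / df[t]) for t, tf in Counter(doc).items()}
            for doc in docs]

# Queries can be weighted the same way and compared to document
# vectors with a cosine similarity measure.
```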
6ac15e819701cd0d077d8157711c4c402106722c
This technical report describes Team MIT's approach to the DARPA Urban Challenge. We have developed a novel strategy for using many inexpensive sensors, mounted on the vehicle periphery, and calibrated with a new cross-modal calibration technique. Lidar, camera, and radar data streams are processed using an innovative, locally smooth state representation that provides robust perception for real-time autonomous control. A resilient planning and control architecture has been developed for driving in traffic, comprised of an innovative combination of well-proven algorithms for mission planning, situational planning, situational interpretation, and trajectory control. These innovations are being incorporated in two new robotic vehicles equipped for autonomous driving in urban environments, with extensive testing on a DARPA site visit course. Experimental results demonstrate all basic navigation and some basic traffic behaviors, including unoccupied autonomous driving, lane following using pure-pursuit control and our local frame perception strategy, obstacle avoidance using kino-dynamic RRT path planning, U-turns, and precedence evaluation amongst other cars at intersections using our situational interpreter. We are working to extend these approaches to advanced navigation and traffic scenarios. DISCLAIMER: The information contained in this paper does not represent the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency (DARPA) or the Department of Defense. DARPA does not guarantee the accuracy or reliability of the information in this paper. Additional support …
e275f643c97ca1f4c7715635bb72cf02df928d06
1e55bb7c095d3ea15bccb3df920c546ec54c86b5
8acaebdf9569adafb03793b23e77bf4ac8c09f83
We present the analysis and design of fixed physical length, spoof Surface Plasmon Polariton based waveguides with adjustable delay at terahertz frequencies. The adjustable delay is obtained using Corrugated Planar Goubau Lines (CPGL) by changing the corrugation depth without changing the total physical length of the waveguide. Our simulation results show that electrical lengths of 237.9°, 220.6°, and 310.6° can be achieved with physical lengths of 250 μm and 200 μm at 0.25, 0.275, and 0.3 THz, respectively, for demonstration purposes. These simulation results are also consistent with our analytical calculations using the physical parameters and material properties. When we combine pairs of same-length delay lines as if they were two branches of a terahertz phase shifter, we achieve a relative phase shift estimation error rate better than 5.8%. To the best of our knowledge, this is the first demonstration of adjustable spoof Surface Plasmon Polariton based CPGL delay lines. The idea can be used for obtaining tunable delay lines with fixed lengths and phase shifters for terahertz band circuitry.
325d145af5f38943e469da6369ab26883a3fd69e
Given a grayscale photograph as input, this paper attacks the problem of hallucinating a plausible color version of the photograph. This problem is clearly underconstrained, so previous approaches have either relied on significant user interaction or resulted in desaturated colorizations. We propose a fully automatic approach that produces vibrant and realistic colorizations. We embrace the underlying uncertainty of the problem by posing it as a classification task and use class-rebalancing at training time to increase the diversity of colors in the result. The system is implemented as a feed-forward pass in a CNN at test time and is trained on over a million color images. We evaluate our algorithm using a “colorization Turing test,” asking human participants to choose between a generated and ground truth color image. Our method successfully fools humans on 32% of the trials, significantly higher than previous methods. Moreover, we show that colorization can be a powerful pretext task for self-supervised feature learning, acting as a cross-channel encoder. This approach results in state-of-the-art performance on several feature learning benchmarks.
57bbbfea63019a57ef658a27622c357978400a50
7ffdf4d92b4bc5690249ed98e51e1699f39d0e71
For the first time, a fully integrated phased array antenna with radio frequency microelectromechanical systems (RF MEMS) switches on a flexible, organic substrate is demonstrated above 10 GHz. A low noise amplifier (LNA), MEMS phase shifter, and 2×2 patch antenna array are integrated into a system-on-package (SOP) on a liquid crystal polymer substrate. Two antenna arrays are compared; one implemented using a single-layer SOP and the second with a multilayer SOP. Both implementations are low-loss and capable of 12° of beam steering. The design frequency is 14 GHz and the measured return loss is greater than 12 dB for both implementations. The use of an LNA allows for a much higher radiated power level. These antennas can be customized to meet almost any size, frequency, and performance needed. This research furthers the state-of-the-art for organic SOP devices.
d00ef607a10e5be00a9e05504ab9771c0b05d4ea
High-voltage-rated solid-state switches such as insulated-gate bipolar transistors (IGBTs) are commercially available up to 6.5 kV. Such voltage ratings are attractive for pulsed power and high-voltage switch-mode converter applications. However, as the IGBT voltage ratings increase, the rate of current rise and fall are generally reduced. This tradeoff is difficult to avoid as IGBTs must maintain a low resistance in the epitaxial or drift region layer. For high-voltage-rated IGBTs with thick drift regions to support the reverse voltage, the required high carrier concentrations are injected at turn on and removed at turn off, which slows the switching speed. An option for faster switching is to connect multiple lower-voltage-rated IGBTs in series. An IGBT-stack prototype with six, 1200 V rated IGBTs in series has been experimentally tested. The six-series IGBT stack consists of individual, optically isolated, gate drivers and aluminum cooling plates for forced air cooling which results in a compact package. Each IGBT is overvoltage protected by transient voltage suppressors. The turn-on current rise time of the six-series IGBT stack and a single 6.5 kV rated IGBT has been experimentally measured in a pulsed resistive-load, capacitor discharge circuit. The IGBT stack has also been compared to two IGBT modules in series, each rated at 3.3 kV, in a boost circuit application switching at 9 kHz and producing an output of 5 kV. The six-series IGBT stack results in improved turn-on switching speed, and significantly higher power boost converter efficiency due to a reduced current tail during turn off. The experimental test parameters and the results of the comparison tests are discussed in this paper.
20f5b475effb8fd0bf26bc72b4490b033ac25129
We present a robust and real time approach to lane marker detection in urban streets. It is based on generating a top view of the road, filtering using selective oriented Gaussian filters, using RANSAC line fitting to give initial guesses to a new and fast RANSAC algorithm for fitting Bezier Splines, which is then followed by a post-processing step. Our algorithm can detect all lanes in still images of the street in various conditions, while operating at a rate of 50 Hz and achieving comparable results to previous techniques.
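A toy sketch of the RANSAC line-fitting stage that seeds the spline fit; the inlier threshold, iteration count, and least-squares refit below are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def ransac_line(points, iters=200, tol=2.0, rng=np.random.default_rng(0)):
    """Fit a 2D line to lane-marker candidate points: repeatedly hypothesize
    a line through two random points and keep the hypothesis with the most
    inliers within `tol` pixels. `points` is an (n, 2) array."""
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        p, q = points[i], points[j]
        d = q - p
        n = np.array([-d[1], d[0]])            # line normal
        norm = np.linalg.norm(n)
        if norm == 0:
            continue
        n = n / norm
        dist = np.abs((points - p) @ n)        # point-to-line distances
        inliers = dist < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on the inliers by least squares: return point + direction.
    pts = points[best_inliers]
    mean = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - mean)
    return mean, vt[0], best_inliers
```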
e6bef595cb78bcad4880aea6a3a73ecd32fbfe06
The exponential increase in the availability of online reviews and recommendations makes sentiment classification an interesting topic in academic and industrial research. Reviews can span so many different domains that it is difficult to gather annotated training data for all of them. Hence, this paper studies the problem of domain adaptation for sentiment classifiers, whereby a system is trained on labeled reviews from one source domain but is meant to be deployed on another. We propose a deep learning approach which learns to extract a meaningful representation for each review in an unsupervised fashion. Sentiment classifiers trained with this high-level feature representation clearly outperform state-of-the-art methods on a benchmark composed of reviews of 4 types of Amazon products. Furthermore, this method scales well and allowed us to successfully perform domain adaptation on a larger industrial-strength dataset of 22 domains.
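A minimal sketch of the kind of unsupervised representation learner the abstract refers to, here a single denoising autoencoder with tied weights; the paper stacks several such layers, and all hyperparameters below are placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_denoising_autoencoder(X, hidden=100, noise=0.3, lr=0.1, epochs=10):
    """Corrupt the input, reconstruct the original, and keep the hidden
    activations as the review representation. X is assumed to be a binary
    bag-of-words matrix (n_reviews, n_words)."""
    n, d = X.shape
    W = rng.normal(0, 0.01, (d, hidden))
    b, c = np.zeros(hidden), np.zeros(d)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        for x in X:
            x_tilde = x * (rng.random(d) > noise)  # mask-out corruption
            h = sigmoid(x_tilde @ W + b)           # encode
            y = sigmoid(h @ W.T + c)               # decode (tied weights)
            g_y = y - x                            # cross-entropy gradient
            g_h = (g_y @ W) * h * (1 - h)
            W -= lr * (np.outer(x_tilde, g_h) + np.outer(g_y, h))
            b -= lr * g_h
            c -= lr * g_y
    return lambda x: sigmoid(x @ W + b)  # feature map fed to a sentiment classifier
```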
7cbbe0025b71a265c6bee195b5595cfad397a734
People interact with chairs frequently, making them a potential location to perform implicit health sensing that requires no additional effort by users. We surveyed 550 participants to understand how people sit in chairs and inform the design of a chair that detects heart and respiratory rate from the armrests and backrests of the chair respectively. In a laboratory study with 18 participants, we evaluated a range of common sitting positions to determine when heart rate and respiratory rate detection was possible (32% of the time for heart rate, 52% for respiratory rate) and evaluate the accuracy of the detected rate (83% for heart rate, 73% for respiratory rate). We discuss the challenges of moving this sensing to the wild by evaluating an in-situ study totaling 40 hours with 11 participants. We show that, as an implicit sensor, the chair can collect vital signs data from its occupant through natural interaction with the chair.
bf003bb2d52304fea114d824bc0bf7bfbc7c3106
9a59a3719bf08105d4632898ee178bd982da2204
The autonomous vehicle is a mobile robot integrating multi-sensor navigation and positioning, intelligent decision making and control technology. This paper presents the control system architecture of the autonomous vehicle, called "Intelligent Pioneer", and discusses the path tracking and motion stability needed to navigate effectively in unknown environments. In this approach, a two degree-of-freedom dynamic model is developed to formulate the path-tracking problem in state space format. For controlling the instantaneous path error, traditional controllers have difficulty in guaranteeing performance and stability over a wide range of parameter changes and disturbances. Therefore, a newly developed adaptive PID controller is used. This approach increases the flexibility of the vehicle control system and yields considerable advantages. Throughout, we provide examples and results from Intelligent Pioneer, which used this approach to compete in the 2010 and 2011 Future Challenge of China. Intelligent Pioneer finished all of the competition programmes and won first position in 2010 and third position in 2011.
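A rough sketch of an adaptive PID steering law of the kind described; the gain-adaptation rule shown (softening the proportional gain with speed) is an illustrative assumption, since the abstract does not specify the paper's adaptation law:

```python
class AdaptivePID:
    """PID path-tracking controller whose gains adapt to vehicle speed."""

    def __init__(self, kp=1.0, ki=0.05, kd=0.3):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral, self.prev_err = 0.0, 0.0

    def steer(self, path_error, speed, dt):
        # Illustrative adaptation: softer proportional action at high speed
        # to avoid oscillation, stronger at low speed for tight tracking.
        kp = self.kp / (1.0 + 0.1 * speed)
        self.integral += path_error * dt
        deriv = (path_error - self.prev_err) / dt
        self.prev_err = path_error
        return kp * path_error + self.ki * self.integral + self.kd * deriv
```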
7592f8a1d4fa2703b75cad6833775da2ff72fe7b
The competitive MNIST handwritten digit recognition benchmark has a long history of broken records since 1998. The most recent advancement by others dates back 8 years (error rate 0.4%). Good old on-line back-propagation for plain multi-layer perceptrons yields a very low 0.35% error rate on the MNIST handwritten digits benchmark with a single MLP and 0.31% with a committee of seven MLPs. All we need to achieve this best result (as of 2011) are many hidden layers, many neurons per layer, numerous deformed training images to avoid overfitting, and graphics cards to greatly speed up learning.
cbcd9f32b526397f88d18163875d04255e72137f
14829636fee5a1cf8dee9737849a8e2bdaf9a91f
Bitcoin is a distributed digital currency which has attracted a substantial number of users. We perform an in-depth investigation to understand what made Bitcoin so successful, while decades of research on cryptographic e-cash has not led to a large-scale deployment. We also ask how Bitcoin could become a good candidate for a long-lived stable currency. In doing so, we identify several issues and attacks of Bitcoin, and propose suitable techniques to address them.
3d16ed355757fc13b7c6d7d6d04e6e9c5c9c0b78
d19f938c790f0ffd8fa7fccc9fd7c40758a29f94
cd5b7d8fb4f8dc3872e773ec24460c9020da91ed
This paper presents a new design concept for a beam steerable high gain phased array antenna based on a WR28 waveguide at 29 GHz for fifth generation (5G) full dimension multiple input multiple output (FD-MIMO) systems. The 8×8 planar phased array is fed by a three dimensional beamformer to obtain volumetric beam scanning ranging from −60 to +60 degrees in both the azimuth and elevation directions. The beamforming network (BFN) is designed using 16 sets of 8×8 Butler matrix beamformers to obtain 64 beam states, which control the horizontal and vertical angles. This is a new concept for designing a waveguide based high power three-dimensional beamformer for volumetric multibeam operation in the Ka band for 5G applications. The maximum gain of the phased array is 28.5 dBi, covering the 28.9 GHz to 29.4 GHz frequency band.
34feeafb5ff7757b67cf5c46da0869ffb9655310
Environmental energy is an attractive power source for low power wireless sensor networks. We present Prometheus, a system that intelligently manages energy transfer for perpetual operation without human intervention or servicing. Combining positive attributes of different energy storage elements and leveraging the intelligence of the microprocessor, we introduce an efficient multi-stage energy transfer system that reduces the common limitations of single energy storage systems to achieve near perpetual operation. We present our design choices, tradeoffs, circuit evaluations, performance analysis, and models. We discuss the relationships between system components and identify optimal hardware choices to meet an application's needs. Finally we present our implementation of a real system that uses solar energy to power Berkeley's Telos Mote. Our analysis predicts the system will operate for 43 years under 1% load, 4 years under 10% load, and 1 year under 100% load. Our implementation uses a two stage storage system consisting of supercapacitors (primary buffer) and a lithium rechargeable battery (secondary buffer). The mote has full knowledge of power levels and intelligently manages energy transfer to maximize lifetime.
3689220c58f89e9e19cc0df51c0a573884486708
AmbiMax is an energy harvesting circuit and a supercapacitor based energy storage system for wireless sensor nodes (WSN). Previous WSNs attempt to harvest energy from various sources, and some also use supercapacitors instead of batteries to address the battery aging problem. However, they either waste much available energy due to impedance mismatch, or they require active digital control that incurs overhead, or they work with only one specific type of source. AmbiMax addresses these problems by first performing maximum power point tracking (MPPT) autonomously, and then charges supercapacitors at maximum efficiency. Furthermore, AmbiMax is modular and enables composition of multiple energy harvesting sources including solar, wind, thermal, and vibration, each with a different optimal size. Experimental results on a real WSN platform, Eco, show that AmbiMax successfully manages multiple power sources simultaneously and autonomously at several times the efficiency of the current state-of-the-art for WSNs.
4833d690f7e0a4020ef48c1a537dbb5b8b9b04c6
A low-power low-cost highly efficient maximum power point tracker (MPPT) to be integrated into a photovoltaic (PV) panel is proposed. This can result in a 25% energy enhancement compared to a standard photovoltaic panel, while performing functions like battery voltage regulation and matching of the PV array with the load. Instead of using an externally connected MPPT, it is proposed to use an integrated MPPT converter as part of the PV panel. It is proposed that this integrated MPPT uses a simple controller in order to be cost effective. Furthermore, the converter has to be very efficient, in order to transfer more energy to the load than a directly coupled system. This is achieved by using a simple soft-switched topology. A much higher conversion efficiency at lower cost will then result, making the MPPT an affordable solution for small PV energy systems.
61c1d66defb225eda47462d1bc393906772c9196
The enormous potential for wireless sensor networks to make a positive impact on our society has spawned a great deal of research on the topic, and this research is now producing environment-ready systems. Current technology limits coupled with widely-varying application requirements lead to a diversity of hardware platforms for different portions of the design space. In addition, the unique energy and reliability constraints of a system that must function for months at a time without human intervention mean that demands on sensor network hardware are different from the demands on standard integrated circuits. This paper describes our experiences designing sensor nodes and low level software to control them. In the ZebraNet system we use GPS technology to record fine-grained position data in order to track long term animal migrations [14]. The ZebraNet hardware is composed of a 16-bit TI microcontroller, 4 Mbits of off-chip flash memory, a 900 MHz radio, and a low-power GPS chip. In this paper, we discuss our techniques for devising efficient power supplies for sensor networks, methods of managing the energy consumption of the nodes, and methods of managing the peripheral devices including the radio, flash, and sensors. We conclude by evaluating the design of the ZebraNet nodes and discussing how it can be improved. Our lessons learned in developing this hardware can be useful both in designing future sensor nodes and in using them in real systems.
146da74cd886acbd4a593a55f0caacefa99714a6
The evolution of Artificial Intelligence has served as a catalyst in the field of technology. We can now develop things which were once just an imagination. One such creation is the self-driving car. The day has come when one can do their work or even sleep in the car and, without even touching the steering wheel or accelerator, still reach their target destination safely. This paper proposes a working model of a self-driving car which is capable of driving from one location to another on different types of tracks, such as curved tracks, straight tracks, and straight followed by curved tracks. A camera module mounted on top of the car, along with a Raspberry Pi, sends images from the real world to a Convolutional Neural Network, which then predicts one of the following directions: right, left, forward, or stop. A signal is then sent from the Arduino to the controller of the remote-controlled car, and as a result the car moves in the desired direction without any human intervention.
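An illustrative four-way direction classifier of the kind the paper describes; the architecture below is an assumption, as the abstract gives no layer details:

```python
import torch
import torch.nn as nn

class DirectionNet(nn.Module):
    """Tiny CNN mapping a camera frame to one of four driving commands
    (left / right / forward / stop). Layer sizes are placeholders."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.classifier = nn.Linear(32 * 4 * 4, 4)

    def forward(self, x):  # x: (batch, 3, H, W) camera frames
        return self.classifier(self.features(x).flatten(1))

# The predicted class index would then be mapped to a motor command sent
# to the Arduino (e.g. over serial), closing the control loop.
```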
bb17e8858b0d3a5eba2bb91f45f4443d3e10b7cd
090a6772a1d69f07bfe7e89f99934294a0dac1b9
f07fd927971c40261dd7cef1ad6d2360b23fe294
We consider the problem of sparse canonical correlation analysis (CCA), i.e., the search for two linear combinations, one for each multivariate, that yield maximum correlation using a specified number of variables. We propose an efficient numerical approximation based on a direct greedy approach which bounds the correlation at each stage. The method is specifically designed to cope with large data sets and its computational complexity depends only on the sparsity levels. We analyze the algorithm's performance through the tradeoff between correlation and parsimony. The results of numerical simulation suggest that a significant portion of the correlation may be captured using a relatively small number of variables. In addition, we examine the use of sparse CCA as a regularization method when the number of available samples is small compared to the dimensions of the multivariates.
(The first two authors contributed equally to this manuscript. This work was supported in part by an AFOSR MURI under Grant FA9550-06-1-0324.)
I. INTRODUCTION
Canonical correlation analysis (CCA), introduced by Harold Hotelling [1], is a standard technique in multivariate data analysis for extracting common features from a pair of data sources [2], [3]. Each of these data sources generates a random vector that we call a multivariate. Unlike classical dimensionality reduction methods which address one multivariate, CCA takes into account the statistical relations between samples from two spaces of possibly different dimensions and structure. In particular, it searches for two linear combinations, one for each multivariate, in order to maximize their correlation. It is used in different disciplines as a stand-alone tool or as a preprocessing step for other statistical methods. Furthermore, CCA is a generalized framework which includes numerous classical methods in statistics, e.g., Principal Component Analysis (PCA), Partial Least Squares (PLS) and Multiple Linear Regression (MLR) [4]. CCA has recently regained attention with the advent of kernel CCA and its application to independent component analysis [5], [6]. The last decade has witnessed a growing interest in the search for sparse representations of signals and sparse numerical methods. Thus, we consider the problem of sparse CCA, i.e., the search for linear combinations with maximal correlation using a small number of variables. The quest for sparsity can be motivated through various reasonings. First is the ability to interpret and visualize the results. A small number of variables allows us to get the "big picture", while sacrificing some of the small details. Moreover, sparse representations enable the use of computationally efficient numerical methods, compression techniques, as well as noise reduction algorithms. The second motivation for sparsity is regularization and stability. One of the main vulnerabilities of CCA is its sensitivity to a small number of observations. Thus, regularized methods such as ridge CCA [7] must be used. In this context, sparse CCA is a subset selection scheme which allows us to reduce the dimensions of the vectors and obtain a stable solution. To the best of our knowledge the first reference to sparse CCA appeared in [2] where backward and stepwise subset selection were proposed. This discussion was of qualitative nature and no specific numerical algorithm was proposed. Recently, increasing demands for multidimensional data processing and decreasing computational cost have caused the topic to rise to prominence once again [8]-[13]. The main disadvantage of these current solutions is that there is no direct control over the sparsity and it is difficult (and nonintuitive) to select their optimal hyperparameters. In addition, the computational complexity of most of these methods is too high for practical applications with high dimensional data sets. Sparse CCA has also been implicitly addressed in [9], [14] and is intimately related to the recent results on sparse PCA [9], [15]-[17]. Indeed, our proposed solution is an extension of the results in [17] to CCA. The main contribution of this work is twofold. First, we derive CCA algorithms with direct control over the sparsity in each of the multivariates and examine their performance. Our computationally efficient methods are specifically aimed at understanding the relations between two data sets of large dimensions. We adopt a forward (or backward) greedy approach which is based on sequentially picking (or dropping) variables. At each stage, we bound the optimal CCA solution and bypass the need to resolve the full problem. Moreover, the computational complexity of the forward greedy method does not depend on the dimensions of the data but only on the sparsity parameters. Numerical simulation results show that a significant portion of the correlation can be efficiently captured using a relatively low number of non-zero coefficients. Our second contribution is investigation of sparse CCA as a regularization method. Using empirical simulations we examine the use of the different algorithms when the dimensions of the multivariates are larger than (or of the same order of) the number of samples and demonstrate the advantage of sparse CCA. In this context, one of the advantages of the greedy approach is that it generates the full sparsity path in a single run and allows for efficient parameter tuning using
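A simplified sketch of the forward-greedy scheme described above; it re-evaluates the canonical correlation exactly at each step rather than using the paper's cheaper correlation bounds, trading their speed for clarity:

```python
import numpy as np

def top_canonical_corr(X, Y):
    """Largest canonical correlation between the columns of X and Y,
    via QR orthogonalization of the centered data (Bjorck-Golub)."""
    qx, _ = np.linalg.qr(X - X.mean(0))
    qy, _ = np.linalg.qr(Y - Y.mean(0))
    return np.linalg.svd(qx.T @ qy, compute_uv=False)[0]

def greedy_sparse_cca(X, Y, kx, ky):
    """Forward-greedy sparse CCA sketch: seed with the most correlated
    single pair of variables, then alternately grow each support set with
    the variable whose inclusion maximizes the canonical correlation."""
    n, dx = X.shape
    dy = Y.shape[1]
    corr = np.abs(np.corrcoef(X.T, Y.T)[:dx, dx:])  # dx-by-dy cross-correlations
    i, j = np.unravel_index(corr.argmax(), corr.shape)
    Sx, Sy = [i], [j]
    while len(Sx) < kx or len(Sy) < ky:
        if len(Sx) < kx:
            Sx.append(max(set(range(dx)) - set(Sx),
                          key=lambda c: top_canonical_corr(X[:, Sx + [c]], Y[:, Sy])))
        if len(Sy) < ky:
            Sy.append(max(set(range(dy)) - set(Sy),
                          key=lambda c: top_canonical_corr(X[:, Sx], Y[:, Sy + [c]])))
    return Sx, Sy  # selected variable indices for each multivariate
```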
49afbe880b8bd419605beb84d3382647bf8e50ea
19b7e0786d9e093fdd8c8751dac0c4eb0aea0b74
0b3cfbf79d50dae4a16584533227bb728e3522aa
Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, backpropagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms.
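A single LSTM step in numpy, as a sketch of the mechanism described above; note this is the now-standard variant with a forget gate, which postdates the original formulation:

```python
import numpy as np

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM step. The cell state c is the 'constant error carousel';
    multiplicative input, forget, and output gates (i, f, o) learn to open
    and close access to it. W (4h, d), U (4h, h), b (4h,) stack the
    parameters of the four gates."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    z = W @ x + U @ h_prev + b
    i, f, o, g = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
    c = f * c_prev + i * np.tanh(g)  # additive update keeps error flow constant
    h = o * np.tanh(c)
    return h, c
```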
9eb67ca57fecc691853636507e2b852de3f56fac
Previous studies have shown that semantically meaningful representations of words and text can be acquired through neural embedding models. In particular, paragraph vector (PV) models have shown impressive performance in some natural language processing tasks by estimating a document (topic) level language model. Integrating the PV models with traditional language model approaches to retrieval, however, produces unstable performance and limited improvements. In this paper, we formally discuss three intrinsic problems of the original PV model that restrict its performance in retrieval tasks. We also describe modifications to the model that make it more suitable for the IR task, and show their impact through experiments and case studies. The three issues we address are (1) the unregulated training process of PV is vulnerable to short document over-fitting that produces length bias in the final retrieval model; (2) the corpus-based negative sampling of PV leads to a weighting scheme for words that overly suppresses the importance of frequent words; and (3) the lack of word-context information makes PV unable to capture word substitution relationships.
4df321947a2ac4365584a01d78a780913b171cf5
Aspect Based Sentiment Analysis (ABSA) is the task of mining and summarizing opinions from text about specific entities and their aspects. This article describes two datasets for the development and testing of ABSA systems for French which comprise user reviews annotated with relevant entities, aspects and polarity values. The first dataset contains 457 restaurant reviews (2365 sentences) for training and testing ABSA systems, while the second contains 162 museum reviews (655 sentences) dedicated to out-of-domain evaluation. Both datasets were built as part of SemEval-2016 Task 5 “Aspect-Based Sentiment Analysis” where seven different languages were represented, and are publicly available for research purposes. This article provides examples and statistics by annotation type, summarizes the annotation guidelines and discusses their cross-lingual applicability. It also explains how the data was used for evaluation in the SemEval ABSA task and briefly presents the results obtained for French.
2445089d4277ccbec3727fecfe73eaa4cc57e414
This paper evaluates the translation quality of machine translation systems for 8 language pairs: translating French, German, Spanish, and Czech to English and back. We carried out an extensive human evaluation which allowed us not only to rank the different MT systems, but also to perform higher-level analysis of the evaluation process. We measured timing and intra- and inter-annotator agreement for three types of subjective evaluation. We measured the correlation of automatic evaluation metrics with human judgments. This meta-evaluation reveals surprising facts about the most commonly used methodologies.
1965a7d9a3eb0727c054fb235b1758c8ffbb8e22
A circularly polarized single-layer U-slot microstrip patch antenna has been proposed. The suggested asymmetrical U-slot can generate the two orthogonal modes for circular polarization without chamfering any corner of the probe-fed square patch microstrip antenna. A parametric study has been carried out to investigate the effects caused by different arm lengths of the U-slot. The thickness of the foam substrate is about 8.5% of the wavelength at the operating frequency. The 3 dB axial ratio bandwidth of the antenna is 4%. Both experimental and theoretical results of the antenna have been presented and discussed. Index terms: circular polarization, printed antennas, U-slot.
9462cd1ec2e404b22f76c88b6149d1e84683acb7
In this letter, a wideband compact circularly polarized (CP) patch antenna is proposed. This patch antenna consists of a printed meandering probe (M-probe) and truncated patches that excite orthogonal resonant modes to generate a wideband CP operation. The stacked patch is employed to further improve the axial-ratio (AR) bandwidth to fit the 5G Wi-Fi application. The proposed antenna achieves 42.3% impedance bandwidth and 16.8% AR bandwidth, respectively. The average gain within the AR bandwidth is 6.6 dBic with less than 0.5 dB variation. This work demonstrates a bandwidth broadening technique for an M-probe fed CP patch antenna. It is the first study to investigate and show that the M-probe can also provide wideband characteristics in a dielectric-loaded patch antenna. The potential applications of the antenna are 5G Wi-Fi and satellite communication systems.
d6002a6cc8b5fc2218754aed970aac91c8d8e7e9
In this paper we propose a new method for detecting multiple specific 3D objects in real time. We start from the template-based approach based on the LINE2D/LINEMOD representation introduced recently by Hinterstoisser et al., yet extend it in two ways. First, we propose to learn the templates in a discriminative fashion. We show that this can be done online during the collection of the example images, in just a few milliseconds, and has a big impact on the accuracy of the detector. Second, we propose a scheme based on cascades that speeds up detection. Since detection of an object is fast, new objects can be added with very low cost, making our approach scale well. In our experiments, we easily handle 10-30 3D objects at frame rates above 10fps using a single CPU core. We outperform the state-of-the-art both in terms of speed as well as in terms of accuracy, as validated on 3 different datasets. This holds both when using monocular color images (with LINE2D) and when using RGBD images (with LINEMOD). Moreover, we propose a challenging new dataset made of 12 objects, for future competing methods on monocular color images.
41d103f751d47f0c140d21c5baa4981b3d4c9a76
The personal stories that people write in their Internet weblogs include a substantial amount of information about the causal relationships between everyday events. In this paper we describe our efforts to use millions of these stories for automated commonsense causal reasoning. Casting the commonsense causal reasoning problem as a Choice of Plausible Alternatives, we describe four experiments that compare various statistical and information retrieval approaches to exploit causal information in story corpora. The top performing system in these experiments uses a simple co-occurrence statistic between words in the causal antecedent and consequent, calculated as the Pointwise Mutual Information between words in a corpus of millions of personal stories.
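A hedged sketch of the co-occurrence statistic described above; the window size, the ordered-pair counting, and the averaging over word pairs are illustrative choices rather than the paper's exact settings:

```python
import math
from collections import Counter
from itertools import product

def build_counts(stories, window=25):
    """Count word occurrences and ordered co-occurrences within a window
    over a corpus of tokenized personal stories."""
    uni, pair = Counter(), Counter()
    for story in stories:
        uni.update(story)
        for k, w in enumerate(story):
            for v in story[k + 1: k + 1 + window]:
                pair[(w, v)] += 1
    return uni, pair

def causal_pmi(antecedent, consequent, uni, pair, total):
    """Score a candidate cause-effect pair as the average PMI between words
    of the antecedent and consequent; `total` is the corpus token count."""
    scores = []
    for w, v in product(antecedent, consequent):
        joint = pair[(w, v)]
        if joint and uni[w] and uni[v]:
            scores.append(math.log((joint * total) / (uni[w] * uni[v])))
    return sum(scores) / len(scores) if scores else float("-inf")

# A COPA question would be answered by scoring both alternatives against
# the premise and choosing the one with the higher average PMI.
```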
c9d1bcdb95aa748940b85508fd7277622f74c0a4
Case research has commanded respect in the information systems (IS) discipline for at least a decade. Notwithstanding the relevance and potential value of case studies, this methodological approach was once considered to be one of the least systematic. Toward the end of the 1980s, the issue of whether IS case research was rigorously conducted was first raised. Researchers from our field (e.g., Benbasat et al. 1987; Lee 1989) and from other disciplines (e.g., Eisenhardt 1989; Yin 1994) called for more rigor in case research and, through their recommendations, contributed to the advancement of the case study methodology. Considering these contributions, the present study seeks to determine the extent to which the field of IS has advanced in its operational use of case study method. Precisely, it investigates the level of methodological rigor in positivist IS case research conducted over the past decade. To fulfill this objective, we identified and coded 183 case articles from seven major IS journals. Evaluation attributes or criteria considered in the present review focus on three main areas, namely, design issues, data collection, and data analysis. While the level of methodological rigor has experienced modest progress with respect to some specific attributes, the overall assessed rigor is somewhat equivocal and there are still significant areas for improvement. One of the keys is to include better documentation particularly regarding issues related to the data collection and
025cdba37d191dc73859c51503e91b0dcf466741
Fingerprint image enhancement is an essential preprocessing step in fingerprint recognition applications. In this paper we introduce an approach that simultaneously extracts the orientation and frequency of local ridges in the fingerprint image using a Gabor wavelet filter bank and uses them for Gabor filtering of the image. Furthermore, we describe a robust approach to fingerprint image enhancement based on the integration of Gabor filters and a Directional Median Filter (DMF). Gaussian-distributed noise is reduced effectively by the Gabor filters and impulse noise by the DMF. The proposed DMF not only accomplishes its original task; it can also join broken fingerprint ridges, fill holes in fingerprint images, smooth irregular ridges, and remove small annoying artifacts between ridges. Experimental results show our method to be superior to those described in the literature.
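A rough sketch of the Gabor filter-bank stage using OpenCV; the fixed filter parameters below stand in for the per-block orientation and frequency estimation the paper performs, and the directional median filter is omitted:

```python
import cv2
import numpy as np

def enhance_fingerprint(img, ksize=21):
    """Filter a grayscale fingerprint image at several orientations with a
    Gabor kernel bank and keep the strongest response per pixel."""
    img = img.astype(np.float32) / 255.0
    responses = []
    for theta in np.arange(0, np.pi, np.pi / 8):  # 8 orientations
        kern = cv2.getGaborKernel((ksize, ksize), sigma=4.0, theta=theta,
                                  lambd=10.0, gamma=0.5, psi=0)
        responses.append(cv2.filter2D(img, cv2.CV_32F, kern))
    return np.max(responses, axis=0)
```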
3dfce4601c3f413605399267b3314b90dc4b3362
Today's globally networked society places great demand on the dissemination and sharing of information. While in the past released information was mostly in tabular and statistical form, many situations today call for the release of specific data (microdata). In order to protect the anonymity of the entities (called respondents) to which information refers, data holders often remove or encrypt explicit identifiers such as names, addresses, and phone numbers. De-identifying data, however, provides no guarantee of anonymity. Released information often contains other data, such as race, birth date, sex, and ZIP code, that can be linked to publicly available information to reidentify respondents and infer information that was not intended for disclosure. In this paper we address the problem of releasing microdata while safeguarding the anonymity of the respondents to which the data refer. The approach is based on the definition of k-anonymity. A table provides k-anonymity if attempts to link explicitly identifying information to its content map the information to at least k entities. We illustrate how k-anonymity can be provided without compromising the integrity (or truthfulness) of the information released by using generalization and suppression techniques. We introduce the concept of minimal generalization, which captures the property that the release process does not distort the data more than needed to achieve k-anonymity, and present an algorithm for the computation of such a generalization. We also discuss possible preference policies to choose among different minimal generalizations.
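A small sketch of the k-anonymity property itself, checking that every quasi-identifier combination is shared by at least k rows; the paper's contribution, searching for a minimal generalization, is not reproduced here:

```python
from collections import Counter

def is_k_anonymous(table, quasi_identifiers, k):
    """A table (list of dicts) is k-anonymous over the quasi-identifiers if
    every combination of their values occurs in at least k rows, so a
    linking attack maps any identity to at least k respondents."""
    groups = Counter(tuple(row[a] for a in quasi_identifiers) for row in table)
    return all(count >= k for count in groups.values())

# Generalizing ZIP '94110' -> '941**' and birth date -> birth year are the
# kind of generalization steps the paper's algorithm searches over.
rows = [
    {"zip": "941**", "birth": "1965", "sex": "F", "diagnosis": "flu"},
    {"zip": "941**", "birth": "1965", "sex": "F", "diagnosis": "cold"},
]
assert is_k_anonymous(rows, ["zip", "birth", "sex"], k=2)
```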
cd866d4510e397dbc18156f8d840d7745943cc1a
74c24d7454a2408f766e4d9e507a0e9c3d80312f
A smart-card-based user authentication scheme for wireless sensor networks (in short, a SUA-WSN scheme) is designed to restrict access to the sensor data only to users who are in possession of both a smart card and the corresponding password. While a significant number of SUA-WSN schemes have been suggested in recent years, their intended security properties lack formal definitions and proofs in a widely-accepted model. One consequence is that SUA-WSN schemes insecure against various attacks have proliferated. In this paper, we devise a security model for the analysis of SUA-WSN schemes by extending the widely-accepted model of Bellare, Pointcheval and Rogaway (2000). Our model provides formal definitions of authenticated key exchange and user anonymity while capturing side-channel attacks, as well as other common attacks. We also propose a new SUA-WSN scheme based on elliptic curve cryptography (ECC), and prove its security properties in our extended model. To the best of our knowledge, our proposed scheme is the first SUA-WSN scheme that provably achieves both authenticated key exchange and user anonymity. Our scheme is also computationally competitive with other ECC-based (non-provably secure) schemes.
3973e14770350ed54ba1272aa3e19b4d21f5dad3
This paper describes the obstacle detection and tracking algorithms developed for Boss, which is Carnegie Mellon University's winning entry in the 2007 DARPA Urban Challenge. We describe the tracking subsystem and show how it functions in the context of the larger perception system. The tracking subsystem gives the robot the ability to understand complex scenarios of urban driving to safely operate in the proximity of other vehicles. The tracking system fuses sensor data from more than a dozen sensors with additional information about the environment to generate a coherent situational model. A novel multiple-model approach is used to track the objects based on the quality of the sensor data. Finally, the architecture of the tracking subsystem explicitly abstracts each of the levels of processing. The subsystem can easily be extended by adding new sensors and validation algorithms.
6a694487451957937adddbd682d3851fabd45626
State-of-the-art question answering (QA) systems employ term-density ranking to retrieve answer passages. Such methods often retrieve incorrect passages as relationships among question terms are not considered. Previous studies attempted to address this problem by matching dependency relations between questions and answers. They used strict matching, which fails when semantically equivalent relationships are phrased differently. We propose fuzzy relation matching based on statistical models. We present two methods for learning relation mapping scores from past QA pairs: one based on mutual information and the other on expectation maximization. Experimental results show that our method significantly outperforms state-of-the-art density-based passage retrieval methods by up to 78% in mean reciprocal rank. Relation matching also brings about a 50% improvement in a system enhanced by query expansion.
2538e3eb24d26f31482c479d95d2e26c0e79b990
We propose a unified neural network architecture and learning algorithm that can be applied to various natural language processing tasks including: part-of-speech tagging, chunking, named entity recognition, and semantic role labeling. This versatility is achieved by trying to avoid task-specific engineering and therefore disregarding a lot of prior knowledge. Instead of exploiting man-made input features carefully optimized for each task, our system learns internal representations on the basis of vast amounts of mostly unlabeled training data. This work is then used as a basis for building a freely available tagging system with good performance and minimal computational requirements.
317deb87586baa4ee7c7b5dfc603ebed94d1da07
We propose a new fast purely discriminative algorithm for natural language parsing, based on a "deep" recurrent convolutional graph transformer network (GTN). Assuming a decomposition of a parse tree into a stack of "levels", the network predicts a level of the tree taking into account predictions of previous levels. Using only a few basic text features which leverage word representations from Collobert and Weston (2008), we show similar performance (in F1 score) to existing pure discriminative parsers and existing "benchmark" parsers (like Collins parser, probabilistic context-free grammars based), with a huge speed advantage.
04cc04457e09e17897f9256c86b45b92d70a401f
Many data such as social networks, movie preferences or knowledge bases are multi-relational, in that they describe multiple relations between entities. While there is a large body of work focused on modeling these data, modeling these multiple types of relations jointly remains challenging. Further, existing approaches tend to break down when the number of these types grows. In this paper, we propose a method for modeling large multi-relational datasets, with possibly thousands of relations. Our model is based on a bilinear structure, which captures various orders of interaction of the data, and also shares sparse latent factors across different relations. We illustrate the performance of our approach on standard tensor-factorization datasets where we attain, or outperform, state-of-the-art results. Finally, a NLP application demonstrates our scalability and the ability of our model to learn efficient and semantically meaningful verb representations.
052b1d8ce63b07fec3de9dbb583772d860b7c769
We describe a new learning procedure, back-propagation, for networks of neurone-like units. The procedure repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector. As a result of the weight adjustments, internal 'hidden' units which are not part of the input or output come to represent important features of the task domain, and the regularities in the task are captured by the interactions of these units. The ability to create useful new features distinguishes back-propagation from earlier, simpler methods such as the perceptron-convergence procedure.
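A minimal sketch of the procedure on one hidden layer of sigmoid units, trained on XOR, a task the perceptron-convergence procedure cannot solve; learning rate and layer size are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_backprop(X, T, hidden=8, lr=0.5, epochs=2000):
    """Repeatedly adjust weights down the gradient of the squared difference
    between the actual output vector and the desired output vector T."""
    W1 = rng.normal(0, 0.5, (X.shape[1], hidden))
    W2 = rng.normal(0, 0.5, (hidden, T.shape[1]))
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        H = sigmoid(X @ W1)             # hidden 'feature detector' units
        Y = sigmoid(H @ W2)
        dY = (Y - T) * Y * (1 - Y)      # output-layer delta
        dH = (dY @ W2.T) * H * (1 - H)  # error propagated backwards
        W2 -= lr * H.T @ dY
        W1 -= lr * X.T @ dH
    return W1, W2

# XOR: not linearly separable, so the hidden units must invent features.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)
W1, W2 = train_backprop(X, T)
```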
07f3f736d90125cb2b04e7408782af411c67dd5a
Semantic matching is of central importance to many natural language tasks [2, 28]. A successful matching algorithm needs to adequately model the internal structures of language objects and the interaction between them. As a step toward this goal, we propose convolutional neural network models for matching two sentences, by adapting the convolutional strategy in vision and speech. The proposed models not only nicely represent the hierarchical structures of sentences with their layer-by-layer composition and pooling, but also capture the rich matching patterns at different levels. Our models are rather generic, requiring no prior knowledge of language, and can hence be applied to matching tasks of different nature and in different languages. The empirical study on a variety of matching tasks demonstrates the efficacy of the proposed model and its superiority to competitor models.
0af737eae02032e66e035dfed7f853ccb095d6f5
How to model a pair of sentences is a critical issue in many NLP tasks such as answer selection (AS), paraphrase identification (PI) and textual entailment (TE). Most prior work (i) deals with one individual task by fine-tuning a specific system; (ii) models each sentence’s representation separately, rarely considering the impact of the other sentence; or (iii) relies fully on manually designed, task-specific linguistic features. This work presents a general Attention Based Convolutional Neural Network (ABCNN) for modeling a pair of sentences. We make three contributions. (i) The ABCNN can be applied to a wide variety of tasks that require modeling of sentence pairs. (ii) We propose three attention schemes that integrate mutual influence between sentences into CNNs; thus, the representation of each sentence takes into consideration its counterpart. These interdependent sentence pair representations are more powerful than isolated sentence representations. (iii) ABCNNs achieve state-of-the-art performance on AS, PI and TE tasks. We release code at: https://github.com/yinwenpeng/Answer_Selection.
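A minimal sketch of the kind of attention matrix the ABCNN schemes build between two sentences is shown below, scoring every pair of positions with a match score of the form 1/(1 + Euclidean distance). The embedding dimension and sentence lengths are made up, and this omits the surrounding convolution and pooling layers.

```python
import numpy as np

def attention_matrix(s0, s1):
    """A[i, j] = 1 / (1 + ||s0[:, i] - s1[:, j]||), one mutual-influence
    score per pair of word positions."""
    diff = s0[:, :, None] - s1[:, None, :]      # d x len0 x len1
    return 1.0 / (1.0 + np.linalg.norm(diff, axis=0))

rng = np.random.default_rng(0)
s0 = rng.normal(size=(50, 7))   # d=50 embeddings, 7-word sentence
s1 = rng.normal(size=(50, 5))
A = attention_matrix(s0, s1)    # 7 x 5 attention scores
```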
1c059493904b2244d2280b8b4c0c7d3ca115be73
Prediction tasks over nodes and edges in networks require careful effort in engineering features used by learning algorithms. Recent research in the broader field of representation learning has led to significant progress in automating prediction by learning the features themselves. However, present feature learning approaches are not expressive enough to capture the diversity of connectivity patterns observed in networks. Here we propose node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks. In node2vec, we learn a mapping of nodes to a low-dimensional space of features that maximizes the likelihood of preserving network neighborhoods of nodes. We define a flexible notion of a node's network neighborhood and design a biased random walk procedure, which efficiently explores diverse neighborhoods. Our algorithm generalizes prior work which is based on rigid notions of network neighborhoods, and we argue that the added flexibility in exploring neighborhoods is the key to learning richer representations. We demonstrate the efficacy of node2vec over existing state-of-the-art techniques on multi-label classification and link prediction in several real-world networks from diverse domains. Taken together, our work represents a new way for efficiently learning state-of-the-art task-independent representations in complex networks.
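The core of node2vec is the second-order biased random walk controlled by a return parameter p and an in-out parameter q; a compact sketch on an unweighted graph (networkx is used only to supply a toy graph) might look as follows.

```python
import random
import networkx as nx

def biased_walk(G, start, length, p=1.0, q=1.0):
    """One node2vec-style walk with return parameter p and in-out parameter q."""
    walk = [start]
    while len(walk) < length:
        cur = walk[-1]
        nbrs = list(G.neighbors(cur))
        if not nbrs:
            break
        if len(walk) == 1:                      # first step: uniform choice
            walk.append(random.choice(nbrs))
            continue
        prev = walk[-2]
        weights = []
        for x in nbrs:
            if x == prev:                       # returning to previous node
                weights.append(1.0 / p)
            elif G.has_edge(x, prev):           # staying close (BFS-like)
                weights.append(1.0)
            else:                               # moving outward (DFS-like)
                weights.append(1.0 / q)
        walk.append(random.choices(nbrs, weights=weights)[0])
    return walk

G = nx.karate_club_graph()
print(biased_walk(G, start=0, length=10, p=0.25, q=4.0))
```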
de93c4f886bdf55bfc1bcaefad648d5996ed3302
This chapter examines the state of modern intrusion detection, with a particular emphasis on the emerging approach of data mining. The discussion parallels two important aspects of intrusion detection: general detection strategy (misuse detection versus anomaly detection) and data source (individual hosts versus network traffic). Misuse detection attempts to match known patterns of intrusion, while anomaly detection searches for deviations from normal behavior. Between the two approaches, only anomaly detection has the ability to detect unknown attacks. A particularly promising approach to anomaly detection combines association mining with other forms of machine learning such as classification. Moreover, the data source that an intrusion detection system employs significantly impacts the types of attacks it can detect. There is a tradeoff in the level of detailed information available.
9e00005045a23f3f6b2c9fca094930f8ce42f9f6
2ec2f8cd6cf1a393acbc7881b8c81a78269cf5f7
We construct multi-modal concept representations by concatenating a skip-gram linguistic representation vector with a visual concept representation vector computed using the feature extraction layers of a deep convolutional neural network (CNN) trained on a large labeled object recognition dataset. This transfer learning approach brings a clear performance gain over features based on the traditional bag-of-visual-word approach. Experimental results are reported on the WordSim353 and MEN semantic relatedness evaluation tasks. We use visual features computed using either ImageNet or ESP Game images.
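A sketch of the construction: concatenate the linguistic and visual vectors for a concept. The vector sizes below are assumptions, and the per-modality L2 normalization is one common choice rather than necessarily the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
linguistic = rng.normal(size=300)   # skip-gram word vector (size assumed)
visual = rng.normal(size=4096)      # CNN feature-layer activations (size assumed)
multimodal = np.concatenate([linguistic / np.linalg.norm(linguistic),
                             visual / np.linalg.norm(visual)])
```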
a65e815895bed510c0549957ce6baa129c909813
We propose an unsupervised approach to learning non-concatenative morphology, which we apply to induce a lexicon of Arabic roots and pattern templates. The approach is based on the idea that roots and patterns may be revealed through mutually recursive scoring based on hypothesized pattern and root frequencies. After a further iterative refinement stage, morphological analysis with the induced lexicon achieves a root identification accuracy of over 94%. Our approach differs from previous work on unsupervised learning of Arabic morphology in that it is applicable to naturally-written, unvowelled text.
3f4e71d715fce70c89e4503d747aad11fcac8a43
This case study examines three different digital innovation projects within Auto Inc -- a large European automaker. By using the competing values framework as a theoretical lens, we explore how dynamic capabilities occur in a firm trying to meet increasing demands in originating and innovating from digitalization. In this digitalization process, our study indicates that established socio-technical congruences are being challenged. Moreover, we pinpoint the need for organizations to find ways to embrace new experimental learning processes in the era of digitalization. While such a change requires long-term commitment and vision, this study presents three informal enablers for such experimental processes: timing, persistence, and contacts.
c22366074e3b243f2caaeb2f78a2c8d56072905e
A longitudinally slotted ridge waveguide antenna array with a compact transverse dimension is presented. To broaden the bandwidth of the array, it is separated into two subarrays fed by a novel compact convex waveguide divider. A 16-element uniform linear array at X-band was fabricated and measured to verify the validity of the design. The measured bandwidth for S11 ≤ -15 dB is 14.9%, and the measured cross-polarization level is less than -36 dB over the entire bandwidth. This array can be combined with the edge-slotted waveguide array to build a two-dimensional dual-polarization antenna array for synthetic aperture radar (SAR) applications.
0d57ba12a6d958e178d83be4c84513f7e42b24e5
Deep learning thrives with large neural networks and large datasets. However, larger networks and larger datasets result in longer training times that impede research and development progress. Distributed synchronous SGD offers a potential solution to this problem by dividing SGD minibatches over a pool of parallel workers. Yet to make this scheme efficient, the per-worker workload must be large, which implies nontrivial growth in the SGD minibatch size. In this paper, we empirically show that on the ImageNet dataset large minibatches cause optimization difficulties, but when these are addressed the trained networks exhibit good generalization. Specifically, we show no loss of accuracy when training with large minibatch sizes up to 8192 images. To achieve this result, we adopt a linear scaling rule for adjusting learning rates as a function of minibatch size and develop a new warmup scheme that overcomes optimization challenges early in training. With these simple techniques, our Caffe2-based system trains ResNet50 with a minibatch size of 8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using commodity hardware, our implementation achieves ∼90% scaling efficiency when moving from 8 to 256 GPUs. This system enables us to train visual recognition models on internet-scale data with high efficiency.
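A sketch of the resulting learning-rate schedule: scale the base rate linearly with the minibatch size, and ramp up to the scaled rate during a warmup phase. The constants below, including the warmup length, are placeholders rather than the paper's exact settings.

```python
def learning_rate(step, base_lr=0.1, base_batch=256, batch=8192,
                  warmup_steps=500):
    """Linear scaling rule with gradual warmup (schedule sketch only)."""
    target_lr = base_lr * batch / base_batch      # linear scaling rule
    if step < warmup_steps:
        # ramp linearly from base_lr up to the scaled target
        return base_lr + (target_lr - base_lr) * step / warmup_steps
    return target_lr

print(learning_rate(0), learning_rate(250), learning_rate(10_000))
```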
2bbe9735b81e0978125dad005656503fca567902
Kernel rootkits are formidable threats to computer systems. They are stealthy and can have unrestricted access to system resources. This paper presents NumChecker, a new virtual machine (VM) monitor based framework to detect and identify control-flow modifying kernel rootkits in a guest VM. NumChecker detects and identifies malicious modifications to a system call in the guest VM by measuring the number of certain hardware events that occur during the system call's execution. To automatically count these events, NumChecker leverages the hardware performance counters (HPCs), which exist in modern processors. By using HPCs, the checking cost is significantly reduced and the tamper-resistance is enhanced. We implement a prototype of NumChecker on Linux with the kernel-based VM. An HPC-based two-phase kernel rootkit detection and identification technique is presented and evaluated on a number of real-world kernel rootkits. The results demonstrate its practicality and effectiveness.
a3d638ab304d3ef3862d37987c3a258a24339e05
CycleGAN [Zhu et al., 2017] is one recent successful approach to learn a transformation between two image distributions. In a series of experiments, we demonstrate an intriguing property of the model: CycleGAN learns to “hide” information about a source image into the images it generates in a nearly imperceptible, high-frequency signal. This trick ensures that the generator can recover the original sample and thus satisfy the cyclic consistency requirement, while the generated image remains realistic. We connect this phenomenon with adversarial attacks by viewing CycleGAN’s training procedure as training a generator of adversarial examples and demonstrate that the cyclic consistency loss causes CycleGAN to be especially vulnerable to adversarial attacks.
c171faac12e0cf24e615a902e584a3444fcd8857
5a14949bcc06c0ae9eecd29b381ffce22e1e75b2
The articles in this issue of DATA BASE were chosen by Anthony G. Hopwood, who is a professor of accounting and financial reporting at the London Graduate School of Business Studies. The articles contain important ideas, Professor Hopwood wrote, of significance to all interested in information systems, be they practitioners or academics. The authors, with their professional affiliations at the time, were Chris Argyris, Graduate School of Education, Harvard University; Bo Hedberg and Sten Jonsson, Department of Business Administration, University of Gothenburg; J. Frisco den Hertog, N.V. Philips' Gloeilampenfabrieken, The Netherlands; and Michael J. Earl, Oxford Centre for Management Studies. The articles appeared originally in Accounting, Organizations and Society, a publication of which Professor Hopwood is editor-in-chief. AOS exists to monitor emerging developments and to actively encourage new approaches and perspectives.
02227c94dd41fe0b439e050d377b0beb5d427cda
Detecting and reading text from natural images is a hard computer vision task that is central to a variety of emerging applications. Related problems like document character recognition have been widely studied by computer vision and machine learning researchers and are virtually solved for practical applications like reading handwritten digits. Reliably recognizing characters in more complex scenes like photographs, however, is far more difficult: the best existing methods lag well behind human performance on the same tasks. In this paper we attack the problem of recognizing digits in a real application using unsupervised feature learning methods: reading house numbers from street level photos. To this end, we introduce a new benchmark dataset for research use containing over 600,000 labeled digits cropped from Street View images. We then demonstrate the difficulty of recognizing these digits when the problem is approached with hand-designed features. Finally, we employ variants of two recently proposed unsupervised feature learning methods and find that they are convincingly superior on our benchmarks.
081651b38ff7533550a3adfc1c00da333a8fe86c
Many deep neural networks trained on natural images exhibit a curious phenomenon in common: on the first layer they learn features similar to Gabor filters and color blobs. Such first-layer features appear not to be specific to a particular dataset or task, but general in that they are applicable to many datasets and tasks. Features must eventually transition from general to specific by the last layer of the network, but this transition has not been studied extensively. In this paper we experimentally quantify the generality versus specificity of neurons in each layer of a deep convolutional neural network and report a few surprising results. Transferability is negatively affected by two distinct issues: (1) the specialization of higher layer neurons to their original task at the expense of performance on the target task, which was expected, and (2) optimization difficulties related to splitting networks between co-adapted neurons, which was not expected. In an example network trained on ImageNet, we demonstrate that either of these two issues may dominate, depending on whether features are transferred from the bottom, middle, or top of the network. We also document that the transferability of features decreases as the distance between the base task and target task increases, but that transferring features even from distant tasks can be better than using random features. A final surprising result is that initializing a network with transferred features from almost any number of layers can produce a boost to generalization that lingers even after fine-tuning to the target dataset.
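The transfer setup the paper quantifies can be sketched as: copy the first n layers of a network trained on the base task, freeze or fine-tune them, and train freshly initialized layers on top for the target task. Below is a minimal PyTorch sketch under that reading; the module structure is hypothetical and any deep CNN would do.

```python
import torch.nn as nn

def transfer(pretrained: nn.Sequential, fresh: nn.Sequential, n: int,
             fine_tune: bool = False) -> nn.Sequential:
    """Keep the first n pretrained layers (frozen unless fine_tune is True)
    and use freshly initialized layers for the rest."""
    layers = []
    for i, (p_layer, f_layer) in enumerate(zip(pretrained, fresh)):
        if i < n:
            for param in p_layer.parameters():
                param.requires_grad = fine_tune   # transferred features
            layers.append(p_layer)
        else:
            layers.append(f_layer)                # random initialization
    return nn.Sequential(*layers)
```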
6c8d5d5eee5967958a2e03a84bcc00f1f81f4d9e
High-throughput sequencing has made it theoretically possible to obtain high-quality de novo assembled genome sequences but in practice DNA extracts are often contaminated with sequences from other organisms. Currently, there are few existing methods for rigorously decontaminating eukaryotic assemblies. Those that do exist filter sequences based on nucleotide similarity to contaminants and risk eliminating sequences from the target organism. We introduce a novel application of an established machine learning method, a decision tree, that can rigorously classify sequences. The major strength of the decision tree is that it can take any measured feature as input and does not require a priori identification of significant descriptors. We use the decision tree to classify de novo assembled sequences and compare the method to published protocols. A decision tree performs better than existing methods when classifying sequences in eukaryotic de novo assemblies. It is efficient, readily implemented, and accurately identifies target and contaminant sequences. Importantly, a decision tree can be used to classify sequences according to measured descriptors and has potentially many uses in distilling biological datasets.
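The classifier itself is the textbook decision tree; a sketch with scikit-learn follows, using made-up per-contig descriptors (GC content, mean coverage, length) as the measured features. The feature choices and values are illustrative only.

```python
from sklearn.tree import DecisionTreeClassifier

# Hypothetical per-contig features: [GC content, mean coverage, length in kb]
X_train = [[0.38, 55.0, 12.4],    # target organism
           [0.61, 300.0, 3.1],    # bacterial contaminant
           [0.40, 48.0, 20.9],
           [0.64, 280.0, 2.2]]
y_train = ["target", "contaminant", "target", "contaminant"]

clf = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)
print(clf.predict([[0.39, 60.0, 15.0]]))   # -> ['target']
```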
26433d86b9c215b5a6871c70197ff4081d63054a
Multimodal biometrics has recently attracted substantial interest for its high performance in biometric recognition systems. In this paper we introduce multimodal biometrics for face and palmprint images using fusion techniques at the feature level. Gabor-based image processing is utilized to extract discriminant features, while principal component analysis (PCA) and linear discriminant analysis (LDA) are used to reduce the dimension of each modality. The output features of LDA are serially combined and classified by a Euclidean distance classifier. The experimental results based on the ORL face and Poly-U palmprint databases show that this fusion technique is able to increase biometric recognition rates compared to those produced by single-modal biometrics.
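A rough sketch of the feature-level fusion pipeline with scikit-learn, run on synthetic stand-ins for the Gabor features; all shapes and component counts here are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
y = np.repeat(np.arange(10), 8)         # 10 subjects, 8 samples each (toy)
face = rng.normal(size=(80, 200))       # Gabor features, one row per sample
palm = rng.normal(size=(80, 200))

def reduce(X, labels):
    """PCA then LDA dimension reduction for one modality."""
    X = PCA(n_components=30).fit_transform(X)
    return LinearDiscriminantAnalysis(n_components=9).fit_transform(X, labels)

# Serial combination: concatenate the reduced features of both modalities.
fused = np.hstack([reduce(face, y), reduce(palm, y)])
# Classification is then nearest-neighbour under Euclidean distance.
```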
1c01e44df70d6fde616de1ef90e485b23a3ea549
We introduce a new class of upper bounds on the log partition function of a Markov random field (MRF). This quantity plays an important role in various contexts, including approximating marginal distributions, parameter estimation, combinatorial enumeration, statistical decision theory, and large-deviations bounds. Our derivation is based on concepts from convex duality and information geometry: in particular, it exploits mixtures of distributions in the exponential domain, and the Legendre mapping between exponential and mean parameters. In the special case of convex combinations of tree-structured distributions, we obtain a family of variational problems, similar to the Bethe variational problem, but distinguished by the following desirable properties: i) they are convex, and have a unique global optimum; and ii) the optimum gives an upper bound on the log partition function. This optimum is defined by stationary conditions very similar to those defining fixed points of the sum-product algorithm, or more generally, any local optimum of the Bethe variational problem. As with sum-product fixed points, the elements of the optimizing argument can be used as approximations to the marginals of the original model. The analysis extends naturally to convex combinations of hypertree-structured distributions, thereby establishing links to Kikuchi approximations and variants.
39a6cc80b1590bcb2927a9d4c6c8f22d7480fbdd
In this paper we introduce a 3-dimensional (3D) SIFT descriptor for video or 3D imagery such as MRI data. We also show how this new descriptor is able to better represent the 3D nature of video data in the application of action recognition. This paper will show how 3D SIFT is able to outperform previously used description methods in an elegant and efficient manner. We use a bag of words approach to represent videos, and present a method to discover relationships between spatio-temporal words in order to better describe the video data.
0a10d64beb0931efdc24a28edaa91d539194b2e2
We propose two novel model architectures for computing continuous vector representations of words from very large data sets. The quality of these representations is measured in a word similarity task, and the results are compared to the previously best performing techniques based on different types of neural networks. We observe large improvements in accuracy at much lower computational cost, i.e. it takes less than a day to learn high quality word vectors from a 1.6 billion words data set. Furthermore, we show that these vectors provide state-of-the-art performance on our test set for measuring syntactic and semantic word similarities.
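The two architectures correspond to what the gensim library exposes as CBOW (sg=0) and skip-gram (sg=1); a toy run with a tiny made-up corpus and arbitrary hyperparameters might look like this.

```python
from gensim.models import Word2Vec

corpus = [["king", "rules", "kingdom"], ["queen", "rules", "kingdom"],
          ["man", "walks"], ["woman", "walks"]]

cbow = Word2Vec(corpus, vector_size=32, window=2, sg=0,
                min_count=1, epochs=200)
skipgram = Word2Vec(corpus, vector_size=32, window=2, sg=1,
                    min_count=1, epochs=200)
print(skipgram.wv.most_similar("king", topn=2))
```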
b07bfdebdf11b7ab3ea3d5f0087891c464c5e34d
A 64-element 29–30 GHz active phased array for 5G millimeter-wave applications is presented in this paper. The proposed phased array comprises 64 antenna elements, 64-channel T/R modules, 4 frequency-conversion links, beam-controlling circuitry, power-management circuits, and cooling fans, all integrated in a very compact size (135 mm × 77 mm × 56 mm). Hybrid integration of GaAs and Si circuits is employed to achieve better RF performance. The architecture of the proposed phased array and the detailed design of the T/R modules and antennas are analyzed. In over-the-air (OTA) measurements, the proposed phased array achieves a bandwidth of 1 GHz at a center frequency of 29.5 GHz, and the azimuth beamwidth is 12° with a scanning range of ±45°. With the excitation of 800 MHz 64-QAM signals, the transmitter beam achieves an EVM of 5.5% and an ACLR of -30.5 dBc with the PA working at -10 dB back-off, and the measured saturated EIRP is 63 dBm.
5f507abd8d07d3bee56820fd3a5dc2234d1c38ee
6424b69f3ff4d35249c0bb7ef912fbc2c86f4ff4
Predicting face attributes in the wild is challenging due to complex face variations. We propose a novel deep learning framework for attribute prediction in the wild. It cascades two CNNs, LNet and ANet, which are fine-tuned jointly with attribute tags, but pre-trained differently. LNet is pre-trained by massive general object categories for face localization, while ANet is pre-trained by massive face identities for attribute prediction. This framework not only outperforms the state-of-the-art with a large margin, but also reveals valuable facts on learning face representation. (1) It shows how the performances of face localization (LNet) and attribute prediction (ANet) can be improved by different pre-training strategies. (2) It reveals that although the filters of LNet are fine-tuned only with image-level attribute tags, their response maps over entire images have strong indication of face locations. This fact enables training LNet for face localization with only image-level annotations, but without face bounding boxes or landmarks, which are required by all attribute recognition works. (3) It also demonstrates that the high-level hidden neurons of ANet automatically discover semantic concepts after pre-training with massive face identities, and such concepts are significantly enriched after fine-tuning with attribute tags. Each attribute can be well explained with a sparse linear combination of these concepts.
d2938415204bb6f99a069152cb954e4baa441bba
This letter presents a compact antenna suitable for the reception of GPS signals on artillery projectiles over 1.57-1.60 GHz. Four inverted-F-type elements are excited by a series feed network in equal magnitude and successive 90° phase difference. The shape and form factor of the antenna is tailored so that the antenna can be easily installed inside an artillery fuze. Measurements show that the proposed antenna has a gain of 2.90-3.77 dBic, an axial ratio of 1.9-2.86 dB, and a reflection coefficient of less than -10 dB over 1.57-1.62 GHz.
0e52fbadb7af607b4135189e722e550a0bd6e7cc
BACKGROUND Self-cutting using a razor blade is a type of self-mutilating behavior that leaves permanent and socially unacceptable scars with unique patterns, particularly on the upper extremities and anterior chest wall. These scars are easily recognized in the community and become a source of lifelong guilt, shame, and regret for the self-mutilators. In the presented clinical study, we aimed to investigate the effectiveness of carbon dioxide laser resurfacing and thin skin grafting in camouflaging self-inflicted razor blade incision scars. METHODS A total of 26 anatomical sites (11 upper arm, 11 forearm, and four anterior chest) of 16 white male patients, whose ages ranged from 20 to 41 years (mean, 23.8 years), were treated between February of 2001 and August of 2003. Detailed psychiatric evaluation preoperatively; informing the patient that the procedure is a "camouflage" operation; trimming hypertrophic scars down to intact skin level; intralesional corticosteroid injection to hypertrophic scars; carbon dioxide laser resurfacing as a single unit; thin (0.2 to 0.3 mm) skin grafting; compressive dressing for 15 days; use of tubular bandage; and protection from sunlight for at least 6 months constituted the key points of the procedure. RESULTS The scars were successfully camouflaged and converted to a socially acceptable appearance similar to a burn scar. Partial graft loss in one case and hyperpigmentation in another case were the complications. No new hypertrophic scar developed. CONCLUSIONS The carbon dioxide laser resurfacing and thin skin grafting method is effective in camouflaging self-inflicted razor blade incision scars.
2b0750d16db1ecf66a3c753264f207c2cb480bde
We are given a large database of customer transactions, where each transaction consists of customer-id, transaction time, and the items bought in the transaction. We introduce the problem of mining sequential patterns over such databases. We present three algorithms to solve this problem, and empirically evaluate their performance using synthetic data. Two of the proposed algorithms, AprioriSome and AprioriAll, have comparable performance, albeit AprioriSome performs a little better when the minimum number of customers that must support a sequential pattern is low. Scale-up experiments show that both AprioriSome and AprioriAll scale linearly with the number of customer transactions. They also have excellent scale-up properties with respect to the number of transactions per customer and the number of items in a transaction.
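The primitive all three algorithms repeatedly evaluate is whether a candidate sequential pattern is supported by a customer's transaction sequence, i.e., whether its itemsets appear in order as subsets of the transactions. A minimal sketch of that support test follows (toy data; this is not the Apriori-style candidate generation itself).

```python
def is_subsequence(pattern, sequence):
    """True if `pattern` (a list of itemsets) occurs, in order, within
    `sequence` (a customer's ordered list of transactions)."""
    it = iter(sequence)
    return all(any(p <= t for t in it) for p in pattern)

# Toy customer sequences; each transaction is a set of items.
sequences = [
    [{"a"}, {"b", "c"}, {"d"}],
    [{"a"}, {"c"}, {"d"}],
    [{"b"}, {"d"}],
]
pattern = [{"a"}, {"d"}]
support = sum(is_subsequence(pattern, s) for s in sequences)
print(support / len(sequences))  # fraction of customers supporting <(a)(d)>
```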
3f4558f0526a7491e2597941f99c14fea536288d
f6c265af493c74cb7ef64b8ffe238e3f2487d133
In this research article, a compact dual-band asymmetric coplanar strip (ACS)-fed printed antenna is designed and presented for Bluetooth, WLAN/WiMAX, and public-safety applications. The dual operating bands (2.45 GHz and 5.25 GHz) have been achieved by attaching two simple meander-shaped radiating strips to the ACS feed line. The proposed antenna geometry is printed on a low-cost FR4 substrate having a thickness of 1.6 mm, with overall dimensions of 13 × 21.3 mm² including the uniplanar ground plane. The -10 dB impedance bandwidth of the meandered ACS-fed dual-band monopole antenna is about 140 MHz from 2.36-2.5 GHz and 2500 MHz from 4.5-7.0 GHz, which can cover the 2.4 GHz Bluetooth/WLAN, 5.2/5.8 GHz WLAN, 5.5 GHz WiMAX, and 4.9 GHz US public-safety bands. In addition to its simple geometry and wide impedance bandwidth, the proposed structure exhibits bidirectional and omnidirectional radiation patterns in the E- and H-planes, respectively.
04f39720b9b20f8ab990228ae3fe4f473e750fe3
17fac85921a6538161b30665f55991f7c7e0f940
We continue a line of research initiated in [10, 11] on privacy-preserving statistical databases. Consider a trusted server that holds a database of sensitive information. Given a query function f mapping databases to reals, the so-called true answer is the result of applying f to the database. To protect privacy, the true answer is perturbed by the addition of random noise generated according to a carefully chosen distribution, and this response, the true answer plus noise, is returned to the user. Previous work focused on the case of noisy sums, in which f = Σᵢ g(xᵢ), where xᵢ denotes the i-th row of the database and g maps database rows to [0, 1]. We extend the study to general functions f, proving that privacy can be preserved by calibrating the standard deviation of the noise according to the sensitivity of the function f. Roughly speaking, this is the amount that any single argument to f can change its output. The new analysis shows that for several particular applications substantially less noise is needed than was previously understood to be the case. The first step is a very clean characterization of privacy in terms of indistinguishability of transcripts. Additionally, we obtain separation results showing the increased value of interactive sanitization mechanisms over non-interactive.
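The calibration reduces to adding Laplace noise whose scale is the function's sensitivity divided by the privacy parameter; a minimal sketch follows (epsilon and the toy counting query are illustrative).

```python
import numpy as np

def laplace_mechanism(true_answer, sensitivity, epsilon, rng=None):
    """Return the true answer perturbed by Laplace noise with scale
    sensitivity / epsilon (the calibration described above)."""
    rng = rng or np.random.default_rng()
    return true_answer + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Toy counting query: "how many rows have attribute 1?" has sensitivity 1,
# since adding or changing one row can change the count by at most 1.
db = [0, 1, 1, 0, 1]
print(laplace_mechanism(sum(db), sensitivity=1.0, epsilon=0.5))
```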
2a622720d4021259a6f6d3c6298559d1b56e7e62
Recent web search techniques augment traditional text matching with a global notion of "importance" based on the linkage structure of the web, such as in Google's PageRank algorithm. For more refined searches, this global notion of importance can be specialized to create personalized views of importance--for example, importance scores can be biased according to a user-specified set of initially-interesting pages. Computing and storing all possible personalized views in advance is impractical, as is computing personalized views at query time, since the computation of each view requires an iterative computation over the web graph. We present new graph-theoretical results, and a new technique based on these results, that encode personalized views as partial vectors. Partial vectors are shared across multiple personalized views, and their computation and storage costs scale well with the number of views. Our approach enables incremental computation, so that the construction of personalized views from partial vectors is practical at query time. We present efficient dynamic programming algorithms for computing partial vectors, an algorithm for constructing personalized views from partial vectors, and experimental results demonstrating the effectiveness and scalability of our techniques.
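For contrast, the naive query-time computation that the partial-vector technique avoids is the full power iteration below, where the restart distribution is biased toward the user's initially interesting pages (toy graph, standard damping factor).

```python
import numpy as np

def personalized_pagerank(A, prefs, alpha=0.85, iters=100):
    """Naive power iteration; `prefs` biases the restart distribution
    toward the user-specified set of interesting pages."""
    n = A.shape[0]
    out_deg = A.sum(axis=1, keepdims=True)
    P = np.divide(A, out_deg, out=np.zeros_like(A), where=out_deg > 0)
    v = prefs / prefs.sum()              # personalized restart vector
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = alpha * (r @ P) + (1 - alpha) * v
    return r

A = np.array([[0, 1, 1],
              [1, 0, 0],
              [0, 1, 0]], dtype=float)   # toy web graph adjacency
print(personalized_pagerank(A, prefs=np.array([1.0, 0.0, 0.0])))
```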
37c3303d173c055592ef923235837e1cbc6bd986
We propose a learning algorithm for fair classification that achieves both group fairness (the proportion of members in a protected group receiving positive classification is identical to the proportion in the population as a whole), and individual fairness (similar individuals should be treated similarly). We formulate fairness as an optimization problem of finding a good representation of the data with two competing goals: to encode the data as well as possible, while simultaneously obfuscating any information about membership in the protected group. We show positive results of our algorithm relative to other known techniques, on three datasets. Moreover, we demonstrate several advantages of our approach. First, our intermediate representation can be used for other classification tasks (i.e., transfer learning is possible); second, we take a step toward learning a distance metric which can find important dimensions of the data for classification.
4556f3f9463166aa3e27b2bec798c0ca7316bd65
In this paper, we investigate how to modify the naive Bayes classifier in order to perform classification that is restricted to be independent with respect to a given sensitive attribute. Such independency restrictions occur naturally when the decision process leading to the labels in the data-set was biased; e.g., due to gender or racial discrimination. This setting is motivated by many cases in which there exist laws that disallow a decision that is partly based on discrimination. Naive application of machine learning techniques would result in huge fines for companies. We present three approaches for making the naive Bayes classifier discrimination-free: (i) modifying the probability of the decision being positive, (ii) training one model for every sensitive attribute value and balancing them, and (iii) adding a latent variable to the Bayesian model that represents the unbiased label and optimizing the model parameters for likelihood using expectation maximization. We present experiments for the three approaches on both artificial and real-life data.
f5de0751d6d73f0496ac5842cc6ca84b2d0c2063
Recent advances in microelectronics and integrated circuits, system-on-chip design, wireless communication and intelligent low-power sensors have allowed the realization of a Wireless Body Area Network (WBAN). A WBAN is a collection of low-power, miniaturized, invasive/non-invasive lightweight wireless sensor nodes that monitor the human body functions and the surrounding environment. In addition, it supports a number of innovative and interesting applications such as ubiquitous healthcare, entertainment, interactive gaming, and military applications. In this paper, the fundamental mechanisms of WBAN including architecture and topology, wireless implant communication, low-power Medium Access Control (MAC) and routing protocols are reviewed. A comprehensive study of the proposed technologies for WBAN at Physical (PHY), MAC, and Network layers is presented and many useful solutions are discussed for each layer. Finally, numerous WBAN applications are highlighted.
bebdd553058ab50d0cb19a1f65d7f4daeb7cda37
The protection of information technology (IT) has become, and is predicted to remain, a key economic challenge for organizations. While research on IT security investment is fast growing, it lacks a theoretical basis for structuring research, explaining economic-technological phenomena, and guiding future research. We address this shortcoming by suggesting a new theoretical model emerging from a multi-theoretical perspective adopting the Resource-Based View and the Organizational Learning Theory. The joint application of these theories allows us to conceptualize in one theoretical model the organizational learning effects that occur when the protection of organizational resources through IT security countermeasures develops over time. We use this model of IT security investments to synthesize findings of a large body of literature and to derive research gaps. We also discuss managerial implications of (closing) these gaps by providing practical examples.
1407b3363d9bd817b00e95190a95372d3cb3694a
In natural-language discourse, related events tend to appear near each other to describe a larger scenario. Such structures can be formalized by the notion of a frame (a.k.a. template), which comprises a set of related events and prototypical participants and event transitions. Identifying frames is a prerequisite for information extraction and natural language generation, and is usually done manually. Methods for inducing frames have been proposed recently, but they typically use ad hoc procedures and are difficult to diagnose or extend. In this paper, we propose the first probabilistic approach to frame induction, which incorporates frames, events, and participants as latent topics and learns those frame and event transitions that best explain the text. The number of frames is inferred by a novel application of a split-merge method from syntactic parsing. In end-to-end evaluations from text to induced frames and extracted facts, our method produced state-of-the-art results while substantially reducing engineering effort.
1bf9a76c9d9838afc51983894b58790b14c2e3d3
Ambient assisted living (AAL) delivers IT solutions that aim to facilitate and improve the lives of disabled, elderly, and chronically ill people. Mobility is a key issue for elderly people because physical activity, in general, improves their quality of life and maintains their health status. This paper therefore presents an AAL framework for caregivers and elderly people that allows them to maintain an active lifestyle without limiting their mobility. This framework includes four AAL tools for mobility environments: i) a fall-detection mobile application; ii) a biofeedback monitoring system through wearable sensors; iii) an outdoor location service through a shoe equipped with a Global Positioning System (GPS) receiver; and iv) a mobile application for caregivers who take care of several elders confined to a home environment. The proposal is evaluated and demonstrated, and it is ready for use.
2375f6d71ce85a9ff457825e192c36045e994bdd
91c7fc5b47c6767632ba030167bb59d9d080fbed
We introduce a method for following high-level navigation instructions by mapping directly from images, instructions and pose estimates to continuous low-level velocity commands for real-time control. The Grounded Semantic Mapping Network (GSMN) is a fully-differentiable neural network architecture that builds an explicit semantic map in the world reference frame by incorporating a pinhole camera projection model within the network. The information stored in the map is learned from experience, while the local-to-world transformation is computed explicitly. We train the model using DAGGERFM, a modified variant of DAGGER that trades tabular convergence guarantees for improved training speed and memory use. We test GSMN in virtual environments on a realistic quadcopter simulator and show that incorporating an explicit mapping and grounding modules allows GSMN to outperform strong neural baselines and almost reach an expert policy performance. Finally, we analyze the learned map representations and show that using an explicit map leads to an interpretable instruction-following model.
cc98157b70d7cf464b880668d7694edd12188157
Nowadays it is very important to maintain a high level of security to ensure safe and trusted communication of information between various organizations. But secure data communication over the internet and any other network is always under threat of intrusions and misuse. Intrusion detection systems have therefore become a necessary component of computer and network security. There are various approaches being utilized in intrusion detection, but unfortunately none of the systems so far is completely flawless, so the quest for improvement continues. In this progression, we present an intrusion detection system (IDS) that applies a genetic algorithm (GA) to efficiently detect various types of network intrusions. Parameters and evolution processes for the GA are discussed in detail and implemented. This approach applies evolutionary theory to information evolution in order to filter the traffic data and thus reduce the complexity. To implement and measure the performance of our system we used the KDD99 benchmark dataset and obtained a reasonable detection rate.
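The evolutionary loop itself is standard; the sketch below evolves a bit-string rule against a stand-in fitness function. The chromosome encoding and the fitness used on real traffic data would come from the KDD99 features, which are not modeled here.

```python
import random

random.seed(0)
GENES, POP, GENERATIONS = 16, 30, 50
target = [random.randint(0, 1) for _ in range(GENES)]  # stand-in "ideal rule"

def fitness(chrom):
    """Toy fitness: agreement with the stand-in rule (placeholder for a
    detection-rate score computed over labeled traffic records)."""
    return sum(c == t for c, t in zip(chrom, target))

pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[: POP // 2]                    # selection
    children = []
    while len(children) < POP - len(survivors):
        a, b = random.sample(survivors, 2)
        cut = random.randrange(1, GENES)           # one-point crossover
        child = a[:cut] + b[cut:]
        if random.random() < 0.1:                  # mutation
            i = random.randrange(GENES)
            child[i] ^= 1
        children.append(child)
    pop = survivors + children
print(fitness(max(pop, key=fitness)), "of", GENES)
```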