Accepted Papers

Conference Track

  • Learning Convolutional Neural Networks using Hybrid Orthogonal Projection and Estimation - Hengyue Pan*, York University; Hui Jiang, York University [abstract]
    Convolutional neural networks (CNNs) have yielded excellent performance in a variety of computer vision tasks, where CNNs typically adopt a similar structure consisting of convolution layers, pooling layers and fully connected layers. In this paper, we propose to apply a novel method, namely Hybrid Orthogonal Projection and Estimation (HOPE), to CNNs in order to introduce orthogonality into the CNN structure. The HOPE model can be viewed as a hybrid model that combines feature extraction using orthogonal linear projection with mixture models. It is an effective model for extracting useful information from the original high-dimensional feature vectors while filtering out irrelevant noise. In this work, we present three different ways to apply the HOPE models to CNNs, i.e., HOPE-Input, single-HOPE-Block and multi-HOPE-Blocks. For HOPE-Input CNNs, a HOPE layer is used directly after the input to de-correlate the high-dimensional input feature vectors. Alternatively, in single-HOPE-Block and multi-HOPE-Blocks CNNs, we use HOPE layers to replace one or more blocks in the CNNs, where one block may include several convolutional layers and one pooling layer. Experimental results on the CIFAR-10, CIFAR-100 and ImageNet databases show that the orthogonal constraints imposed by the HOPE layers can significantly improve the performance of CNNs on these image classification tasks (we achieve one of the best performances when image augmentation is not applied, and top-5 performance with image augmentation).
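    As a rough illustration of the kind of orthogonality constraint a HOPE layer imposes, here is a minimal NumPy sketch of a row-orthonormality penalty on a linear projection; the function name and the exact penalty form are our illustration, not the authors' formulation.
```python
import numpy as np

def orthogonality_penalty(W):
    # Deviation of the rows of a projection matrix W (M x D, M < D)
    # from orthonormality, measured as ||W W^T - I||_F^2.
    M = W.shape[0]
    return np.sum((W @ W.T - np.eye(M)) ** 2)

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 256)) / np.sqrt(256)
print(orthogonality_penalty(W))        # a random projection: penalty > 0

Q, _ = np.linalg.qr(W.T)               # orthonormalize the rows of W
print(orthogonality_penalty(Q.T))      # ~0 for an orthonormal projection
```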
  • Limits of End-to-End Learning - Tobias Glasmachers*, Ruhr-University Bochum [abstract]
    End-to-end learning refers to training a possibly complex learning system by applying gradient-based learning to the system as a whole. End-to-end learning systems are specifically designed so that all modules are differentiable. In effect, not only a central learning machine, but also all "peripheral" modules like representation learning and memory formation are covered by a holistic learning process. The power of end-to-end learning has been demonstrated on many tasks, like playing a whole array of Atari video games with a single architecture. While pushing for solutions to more challenging tasks, network architectures keep growing more and more complex. In this paper we ask whether, and to what extent, end-to-end learning is a future-proof technique in the sense of scaling to complex and diverse data processing architectures. We point out potential inefficiencies, and we argue in particular that end-to-end learning does not make optimal use of the modular design of present neural networks. Our surprisingly simple experiments demonstrate these inefficiencies, up to the complete breakdown of learning.
  • A Study on Trust Region Update Rules in Newton Methods for Large-scale Linear Classification - Chih-Yang Hsia*, National Taiwan University; Ya Zhu; Chih-Jen Lin, National Taiwan University [abstract]
    The main task in training a linear classifier is to solve an unconstrained minimization problem. In applying an optimization method to obtain a model, typically we iteratively find a good direction and then decide a suitable step size. Past developments in extending optimization methods for large-scale linear classification focus on finding the direction, but little attention has been paid to adjusting the step size. In this work, we explain that inappropriate step-size adjustment may lead to seriously slow convergence. Of the two major methods for step-size selection, line search and trust region, we focus on investigating trust region methods. After presenting some detailed analysis, we develop novel and effective techniques to adjust the trust-region size. Experiments indicate that our new settings significantly outperform existing implementations for large-scale linear classification.
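    For context, the classical trust-region radius update that such work refines looks as follows; the thresholds and scaling factors below are textbook defaults, not the new rules proposed in the paper.
```python
def update_trust_region(delta, rho, eta1=0.25, eta2=0.75,
                        shrink=0.25, grow=2.0, delta_max=1e3):
    # rho is the ratio of actual to predicted objective reduction
    # for the trial step (model agreement).
    if rho < eta1:                       # poor agreement: shrink
        return shrink * delta
    if rho > eta2:                       # very good agreement: grow
        return min(grow * delta, delta_max)
    return delta                         # otherwise keep the radius
```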
  • Mini-batch Block-coordinate based Stochastic Average Adjusted Gradient Methods to Solve Big Data Problems - Vinod Chauhan, Panjab University Chandigarh; Kalpana Dahiya, Panjab University Chandigarh; Anuj Sharma*, Panjab University Chandigarh [abstract]
    Big Data problems in machine learning involve a large number of data points or a large number of features, or both, which makes training models difficult because of the high computational complexity of a single iteration of learning algorithms. To solve such learning problems, stochastic approximation offers an optimization approach that makes the complexity of each iteration independent of the number of data points by taking only one data point or a mini-batch of data points during each iteration, thereby helping to solve problems with a large number of data points. Similarly, coordinate descent offers another optimization approach that makes the iteration complexity independent of the number of features/coordinates/variables by taking only one feature or block of features, instead of all of them, during an iteration, thereby helping to solve problems with a large number of features. In this paper, an optimization framework, namely the Batch Block Optimization Framework, is developed to solve big data problems using the best of stochastic approximation as well as the best of coordinate descent, independently of any solver. This framework is used to solve the strongly convex and smooth empirical risk minimization problem with gradient descent (as a solver), and two novel Stochastic Average Adjusted Gradient methods are proposed to reduce variance in the mini-batch and block-coordinate setting of the developed framework. Theoretical analysis proves linear convergence of the proposed methods, and empirical results on benchmark datasets demonstrate their superiority over existing methods.
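    A minimal sketch of the batch-block idea for least-squares ERM: each update touches only a mini-batch of data points and one block of coordinates. This illustrates the framework only, not the variance-reduced SAAG updates proposed in the paper.
```python
import numpy as np

def batch_block_gd(X, y, lr=0.1, batch=32, block=16, epochs=50, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for start in range(0, d, block):          # one block of features
            idx = rng.choice(n, size=min(batch, n), replace=False)
            cols = slice(start, min(start + block, d))
            resid = X[idx] @ w - y[idx]           # uses the full iterate
            w[cols] -= lr * (X[idx][:, cols].T @ resid) / len(idx)
    return w

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 64))
w_true = np.zeros(64); w_true[:4] = 1.0
y = X @ w_true + 0.01 * rng.standard_normal(500)
print(np.round(batch_block_gd(X, y)[:6], 2))      # roughly [1 1 1 1 0 0]
```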
  • Instance Specific Discriminative Modal Pursuit: A Serialized Approach - Yang Yang*, Nanjing University; De-Chuan Zhan, NJU; Ying Fan; Yuan Jiang [abstract]
    With the fast development of data collection techniques, a huge amount of complex multi-modal data is generated, shared and stored on the Internet. The burden of extracting multi-modal features for test instances in data analysis becomes the main factor that hurts prediction efficiency. In this paper, in order to reduce the modal extraction cost in a serialized classification system, we propose a novel end-to-end serialized adaptive decision approach named Discriminative Modal Pursuit (DMP), which can automatically extract an instance-specific discriminative modal sequence to reduce the cost of feature extraction in the test phase. Rather than jointly optimizing a highly non-convex empirical risk minimization problem, DMP is inspired by LSTMs and instead learns decision policies which predict the label information and decide the modalities to be extracted simultaneously, within a limited modal acquisition budget. Consequently, the DMP approach can balance classification performance and modal feature extraction cost by utilizing different modalities for different test instances. Empirical studies show that DMP is more efficient and effective than existing modal/feature extraction methods.
  • A Quantum-Inspired Ensemble Method and Quantum-Inspired Forest Regressors - Zeke Xie*, The University of Tokyo; Issei Sato, The University of Tokyo [abstract]
    We propose a Quantum-Inspired Subspace (QIS) ensemble method for generating feature ensembles based on feature selections. We assign each principal component a Fraction Transition Probability as its probability weight, based on Principal Component Analysis and quantum interpretations. In order to generate the feature subset for each base regressor, we select a feature subset from the principal components based on the Fraction Transition Probabilities. The idea, originating from quantum mechanics, can encourage ensemble diversity and accuracy simultaneously. We incorporate the Quantum-Inspired Subspace method into Random Forest and propose the Quantum-Inspired Forest. We theoretically prove that the quantum interpretation corresponds to the first-order approximation of ensemble regression. We also evaluate the empirical performance of Quantum-Inspired Forest and Random Forest in multiple hyperparameter settings. Quantum-Inspired Forest shows significant robustness to the default hyperparameters on most data sets. The contribution of this work is two-fold: a novel ensemble regression algorithm inspired by quantum mechanics, and a theoretical connection between quantum interpretations and machine learning algorithms.
  • Distributionally Robust Groupwise Regularization Estimator - Yang Kang*, Columbia University; Jose Blanchet, Columbia University [abstract]
    Regularized estimators in the context of group variables have been applied successfully in model and feature selection in order to preserve interpretability. We formulate a Distributionally Robust Optimization (DRO) problem which recovers popular estimators, such as Group Square Root Lasso (GSRL). Our DRO formulation allows us to interpret GSRL as a game, in which we learn a regression parameter while an adversary chooses a perturbation of the data. We wish to pick the parameter to minimize the expected loss under any plausible model chosen by the adversary - who, on the other hand, wishes to increase the expected loss. The regularization parameter turns out to be precisely determined by the amount of perturbation on the training data allowed by the adversary. In this paper, we introduce a data-driven (statistical) criterion for the optimal choice of regularization, which we evaluate asymptotically, in closed form, as the size of the training set increases. Our easy-to-evaluate regularization formula is compared against cross-validation, showing good (sometimes superior) performance.
  • Multi-view Clustering with Adaptively Learned Graph - Hong Tao, NUDT; Chenping Hou*; Jubo Zhu; Dongyun Yi [abstract]
    Multi-view clustering, which aims to improve the clustering performance by exploring the data's multiple representations, has become an important research direction. Graph-based methods have been widely studied and achieve promising performance for multi-view clustering. However, most existing multi-view graph-based methods perform clustering on fixed input graphs, and the results depend on the quality of those input graphs. In this paper, instead of fixing the input graphs, we propose Multi-view clustering with Adaptively Learned Graph (MALG), which learns a new common similarity matrix. In our model, we not only consider the importance of the multiple graphs at the view level, but also focus on the quality of the similarities within a view at the sample-pair level. Sample-pair-specific weights are introduced to exploit the connection across views in more depth. In addition, the obtained optimal graph can be partitioned into specific clusters directly, according to its connected components. Experimental results on toy and real-world datasets demonstrate the efficacy of the proposed algorithm.
  • Select-and-Evaluate: A Learning Framework for Large-Scale Knowledge Graph Search - F A Rezaur Rahman Chowdhury, Washington State University; Chao Ma, Oregon State University; Md Rakibul Islam, Washington State University; Mohammad Omar Faruk, Washington State University; Janardhan Rao Doppa*, Washington State University [abstract]
    Querying graph structured data is a fundamental operation that enables important applications including knowledge graph search, social network analysis, and cyber-network security. However, the growing size of real-world data graphs poses severe challenges for graph search to meet the response-time requirements of the applications. To address these scalability challenges, we develop a learning framework for graph search called Select-and-Evaluate (SCALE). The key insight is to select a small part of the data graph that is sufficient to answer a given query in order to satisfy the specified constraints on time or accuracy. We formulate the problem of generating the candidate subgraph as a computational search process and induce search control knowledge from training queries using imitation learning. First, we define a search space over candidate selection plans, and identify target selection plans corresponding to the training queries by performing an expensive search. Subsequently, we learn greedy search control knowledge to imitate the search behavior of the target selection plans. Our experiments on large-scale knowledge graphs including DBPedia, YAGO, and Freebase show that using the selection plans generated by the learned search control knowledge, we can significantly improve the computational-efficiency of graph search to achieve high accuracy.
  • Probability Calibration Trees - Tim Leathart*, University of Waikato; Eibe Frank, University of Waikato; Bernhard Pfahringer, University of Waikato, New Zealand; Geoffrey Holmes, University of Waikato [abstract]
    Obtaining accurate and well calibrated probability estimates from classifiers is useful in many applications, for example, when minimising the expected cost of classifications. Existing methods of calibrating probability estimates are applied globally, ignoring the potential for improvements by applying a more fine-grained model. We propose probability calibration trees, a modification of logistic model trees that can identify regions of the input space in which different probability calibration models should be learned. We compare probability calibration trees to two widely used calibration methods -- isotonic regression and Platt scaling -- and show that our method results in lower root mean squared error on average than both methods, for estimates produced by a variety of base learners.
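    The two global baselines mentioned above can be reproduced in a few lines with scikit-learn; in a calibration tree these models would be fitted per leaf of a logistic-model-tree-style partition rather than globally (the synthetic scores below are only for illustration).
```python
import numpy as np
from sklearn.isotonic import IsotonicRegression
from sklearn.linear_model import LogisticRegression

# Toy held-out calibration data: raw scores from some base classifier.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=1000)
scores = np.clip(labels * 0.3 + rng.normal(0.35, 0.25, size=1000), 0, 1)

# Platt scaling: a sigmoid fitted to the raw scores.
platt = LogisticRegression().fit(scores.reshape(-1, 1), labels)
p_platt = platt.predict_proba(scores.reshape(-1, 1))[:, 1]

# Isotonic regression: a monotone, piecewise-constant mapping.
iso = IsotonicRegression(out_of_bounds="clip").fit(scores, labels)
p_iso = iso.predict(scores)
```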
  • Learning Predictive Leading Indicators for Forecasting Time Series Systems with Unknown Clusters of Forecast - Magda Gregorova*, HES-SO (HEG) Geneve; Alexandros Kalousis; Stephan Marchand-Maillet [abstract]
    We present a new method for forecasting systems of multiple interrelated time series. The method learns the forecast models together with discovering, from within the system, leading indicators that serve as good predictors improving the forecast accuracy, as well as a cluster structure of the predictive tasks around these. The method is based on the classical linear vector autoregressive model (VAR) and links the discovery of the leading indicators to inferring sparse graphs of Granger causality. We formulate a new constrained optimisation problem to promote the desired sparse structures across the models and the sharing of information amongst the learning tasks in a multi-task manner. We propose an algorithm for solving the problem and demonstrate, on a battery of synthetic and real-data experiments, the advantages of our new method over baseline VAR models as well as state-of-the-art sparse VAR learning methods.
  • Locally Smoothed Neural Networks - Liang Pang*, ICT; Yanyan Lan, Chinese Academy of Sciences; Jun Xu, Institute of Computing Technology, CAS, China; Jiafeng Guo, Chinese Academy of Sciences (CAS); Cheng Xueqi, ICT, UCAS [abstract]
    Convolutional Neural Networks (CNNs) and locally connected layers are limited in capturing the importance of, and relations among, different local receptive fields, which are often crucial for tasks such as face verification, visual question answering, and word sequence prediction. To tackle this issue, we propose a novel locally smoothed neural network (LSNN) in this paper. The main idea is to represent the weight matrix of the locally connected layer as the product of a kernel and a smoother, where the kernel is shared over different local receptive fields, and the smoother determines the importance of, and relations among, the different local receptive fields. Specifically, a multivariate Gaussian function is utilized to generate the smoother, modeling the location relations among different local receptive fields. Furthermore, content information can also be leveraged by setting the mean and precision of the Gaussian function according to the content. Experiments on some variants of MNIST clearly show our advantages over CNNs and locally connected layers.
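    A one-dimensional sketch of the kernel-times-smoother factorization may help; the paper's smoother is a multivariate Gaussian whose mean and precision can depend on content, whereas the toy below uses a fixed 1-D Gaussian over receptive-field positions.
```python
import numpy as np

def gaussian_smoother(num_fields, mean, precision):
    # Importance weights over receptive-field positions: 'mean' picks
    # the most important location, 'precision' controls the decay.
    pos = np.arange(num_fields, dtype=float)
    w = np.exp(-0.5 * precision * (pos - mean) ** 2)
    return w / w.sum()

# Locally connected weights as (shared kernel) x (smoother):
kernel = np.random.default_rng(0).standard_normal(5)    # shared kernel
smoother = gaussian_smoother(num_fields=9, mean=4.0, precision=0.5)
W = smoother[:, None] * kernel[None, :]   # one row per receptive field
```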
  • Data sparse nonparametric regression with epsilon-insensitive losses - Maxime Sangnier*, UPMC - France; Olivier Fercoq; Florence d'Alché Buc [abstract]
    Leveraging the celebrated support vector regression (SVR) method, we propose a unifying framework for delivering regression machines in reproducing kernel Hilbert spaces (RKHSs) with data sparsity. The central point is a new definition of epsilon-insensitivity, valid for many regression losses (including quantile and expectile regression) and their multivariate extensions. We show that the dual optimization problem for empirical risk minimization with epsilon-insensitivity involves a data-sparse regularization. We also provide an analysis of the excess risk as well as a randomized coordinate descent algorithm for solving the dual. Numerical experiments validate our approach.
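    For readers unfamiliar with the classical notion being generalized, the standard SVR epsilon-insensitive loss is shown below; the paper's contribution is a new definition of epsilon-insensitivity valid for quantile, expectile and multivariate losses, which this snippet does not cover.
```python
import numpy as np

def eps_insensitive(y, f, eps=0.1):
    # Residuals inside the eps-tube cost nothing; this is what makes
    # the dual solution data sparse (in-tube points get zero weight).
    return np.maximum(0.0, np.abs(y - f) - eps)

print(eps_insensitive(np.array([1.0, 1.05, 2.0]), np.ones(3)))
# -> [0.  0.  0.9]  (only the third point leaves the tube)
```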
  • Rate Optimal Estimation for High Dimensional Spatial Covariance Matrices - Aidong Adam Ding*, Northeastern University; Yi Li; Jennifer Dy, Northeastern University [abstract]
    Spatial covariance matrix estimation is of great significance in many applications in climatology, econometrics and many other fields with complex data structures involving spatial dependencies. High dimensionality brings new challenges to this problem, and no theoretically optimal estimator has been proved for the spatial high-dimensional covariance matrix. Over the past decade, regularization methods have been introduced to high-dimensional covariance estimation for various structured matrices, in order to achieve rate-optimal estimators. In this paper, we aim to bridge the gap between these two research areas. We use a block-bandable covariance matrix structure to incorporate spatial dependence information, and study rate-optimal estimation of this type of structured high-dimensional covariance matrix. A double tapering estimator is proposed and shown to achieve the asymptotic minimax error bound. Numerical studies on both synthetic and real data show the improvement of the double tapering estimator over the sample covariance matrix estimator.
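    To give a feel for tapering, here is classical single tapering of a sample covariance with a linear taper; the paper's double tapering acts along two index sets to exploit the block-bandable structure, which this sketch does not attempt.
```python
import numpy as np

def linear_taper(p, bandwidth):
    # Taper weights w[i, j] = max(0, 1 - |i - j| / bandwidth).
    idx = np.arange(p)
    return np.maximum(0.0, 1.0 - np.abs(idx[:, None] - idx[None, :]) / bandwidth)

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 30))        # n = 200 samples, p = 30
S = np.cov(X, rowvar=False)               # p x p sample covariance
S_tapered = S * linear_taper(30, bandwidth=5)   # elementwise shrinkage
```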
  • PHD: A Probabilistic Model of Hybrid Deep Collaborative Filtering for Recommender Systems - Jie Liu*, Shanghai Jiao Tong University; Dong Wang [abstract]
    Collaborative Filtering (CF), a well-known approach to producing recommender systems, has achieved wide use and excellent performance not only in research but also in industry. However, problems related to cold start and data sparsity have caused CF to attract an increasing amount of attention in efforts to solve these problems. Traditional approaches adopt side information to extract effective latent factors but still leave room for improvement. Owing to the strength of deep learning in feature extraction, many researchers have combined it with CF to extract effective representations and to enhance its performance in rating prediction. Building on this previous work, we propose a probabilistic model that combines a stacked denoising autoencoder and a convolutional neural network together with auxiliary side information (i.e., from both users and items) to extract users' and items' latent factors, respectively. Extensive experiments on four datasets demonstrate that our proposed model outperforms other traditional approaches and deep learning models, achieving state-of-the-art results.
  • Adaptive Sampling Scheme for Learning in Severely Imbalanced Large Scale Data - Wei Zhang*, Adobe; Scott Tomko, Adobe [abstract]
    Imbalanced data poses a serious challenge for many machine learning and data mining applications. It may significantly affect the performance of learning algorithms. In digital marketing applications, events of interest (positive instances for building predictive models) such as clicks and purchases are rare. A retail website can easily receive a million visits every day, yet only a small percentage of visits lead to purchases. The large amount of raw data and the small percentage of positive instances make it challenging to build decent predictive models in a timely fashion. In this paper, we propose an adaptive sampling strategy to deal with this problem. It efficiently returns high-quality training data, ensures system responsiveness and improves predictive performance.
  • ST-GAN: Unsupervised Image Semantic Transformation Using Generative Adversarial Networks - JiChao Zhang*, Shandong University; Fan Zhong, Shandong University; Gongze Cao, Zhejiang University [abstract]
    Image semantic transformation aims to convert one image into another image with different semantic features (e.g., face pose, hairstyle). The previous methods, which learn the mapping function from one image domain to the other, require supervised information directly or indirectly. In this paper, we propose an unsupervised image semantic transformation method called semantic transformation generative adversarial networks (ST-GAN). We further improve ST-GAN with the Wasserstein distance to generate more realistic images and propose a method called local mutual information maximization to obtain a more explicit semantic transformation. ST-GAN has the ability to map the image semantic features into the latent vector and then perform transformation by controlling the latent vector. After the proposed framework is trained on a benchmark, the original face images can be reconstructed and then translated into various images with different semantic features.
  • Deep Competitive Pathway Networks - Jia-Ren Chang, National Chiao Tung University; Yong-Sheng Chen*, National Chiao Tung University [abstract]
    In the design of deep neural architectures, recent studies have demonstrated the benefits of grouping subnetworks into a larger network. For example, the Inception architecture integrates multi-scale subnetworks, and a residual unit in a residual network can be regarded as combining a residual subnetwork with an identity shortcut. In this work, we embrace this observation and propose the Competitive Pathway Network (CoPaNet). The CoPaNet comprises a stack of competitive pathway units, and each unit contains multiple parallel residual-type subnetworks followed by a max operation for feature competition. This mechanism enhances the model capability by learning a variety of features in the subnetworks. The proposed strategy explicitly shows that features propagate through pathways in various routing patterns, which we refer to as pathway encoding of category information. Moreover, a cross-block shortcut can be added to the CoPaNet to encourage feature reuse. We evaluated the proposed CoPaNet on four object recognition benchmarks: CIFAR-10, CIFAR-100, SVHN, and ImageNet. CoPaNet obtained state-of-the-art or comparable results using similar numbers of parameters.
  • Regret For Expected Improvement Over The Best-Observed Value and Stopping Condition - Vu Nguyen*, Deakin University; Sunil Gupta, Deakin University; Santu Rana, Deakin University; Cheng Li [abstract]
    Bayesian optimization (BO) is a sample-efficient method for the global optimization of expensive, noisy, black-box functions using probabilistic methods. The performance of a Bayesian optimization method depends on its selection strategy through the acquisition function. Expected improvement (EI) is one of the most widely used acquisition functions for BO; it computes the expectation of the improvement function over the incumbent. The incumbent is usually selected as the best-observed value so far, termed y^max (for a maximization problem). Recent work has studied the convergence rate for EI under mild assumptions or noise-free observations. In particular, the work of (Wang and de Freitas, 2014) derived a sublinear regret for EI under stochastic noise. However, due to the difficulty of the stochastic-noise setting and to make the convergence proof feasible, they use an alternative choice for the incumbent: the maximum of the Gaussian process predictive mean, µ^max. This modification makes the algorithm computationally inefficient, because it requires an additional global optimization step to estimate µ^max, which is costly and may be inaccurate. To address this issue, we derive a sublinear convergence rate for EI using the commonly used y^max. Moreover, our analysis is the first to study a stopping criterion for EI to prevent unnecessary evaluations. Our analysis complements the results of (Wang and de Freitas, 2014) to theoretically cover the two incumbent settings for EI. Finally, we demonstrate empirically that EI using y^max is both more computationally efficient and more accurate than EI using µ^max.
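    The quantity under discussion has a well-known closed form for a Gaussian process posterior; the snippet below evaluates EI with the incumbent set to the best observed value y^max, the choice the paper analyzes.
```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, incumbent):
    # Closed-form EI for a Gaussian predictive N(mu, sigma^2),
    # maximization convention, incumbent = best observed value.
    sigma = np.maximum(sigma, 1e-12)      # guard against zero variance
    z = (mu - incumbent) / sigma
    return (mu - incumbent) * norm.cdf(z) + sigma * norm.pdf(z)

print(expected_improvement(mu=1.2, sigma=0.5, incumbent=1.0))
```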
  • Scale-Invariant Recognition by Weight-Shared CNNs in Parallel - Ryo Takahashi*, Kobe University; Takashi Matsubara, Kobe University; Kuniaki Uehara, Kobe University [abstract]
    Deep convolutional neural networks (CNNs) have become one of the most successful methods for image processing tasks in the past few years. Recent studies on modern residual architectures, enabling CNNs to be much deeper, have achieved much better results thanks to the high expressive ability afforded by their numerous parameters. In general, CNNs are known to be robust to small parallel shifts of objects in images, owing to their local receptive fields, weight parameters shared by each unit, and the pooling layers sandwiching them. However, CNNs have limited robustness to other geometric transformations such as scaling and rotation, and this limitation remains an obstacle to performance improvement. This paper proposes a novel network architecture, the weight-shared multi-stage network (WSMS-Net), and focuses on acquiring scale invariance by constructing multiple stages of CNNs. The WSMS-Net is easily combined with existing deep CNNs and enables them to acquire robustness to scaling, and therefore achieves higher classification accuracy on the CIFAR-10, CIFAR-100 and ImageNet datasets.
  • Using Deep Neural Networks to Automate Large Scale Statistical Analysis for Big Data Applications - Rongrong Zhang*; Wei Deng, Purdue University; Michael Zhu, Purdue University [abstract]
    Statistical analysis (SA) is a complex process for deducing population properties from the analysis of data. It usually takes a well-trained analyst to perform SA successfully, and it becomes extremely challenging to apply SA to big data applications. We propose to use deep neural networks to automate the SA process. In particular, we propose to construct convolutional neural networks (CNNs) to perform automatic model selection and parameter estimation, two of the most important SA tasks. We refer to the resulting CNNs as the neural model selector and the neural model estimator, respectively, which can be properly trained using labeled data systematically generated from candidate models. A simulation study shows that both the selector and the estimator demonstrate excellent performance. The idea and the proposed framework can be further extended to automate the entire SA process, and have the potential to revolutionize how SA is performed in big data analytics.
  • Recognizing Art Style Automatically in painting with deep learning - Adrian Lecoutre, INSA Rouen; Florian Yger*, Université Paris-Dauphine; Benjamin Negrevergne, Université Paris-Dauphine [abstract]
    An art style (or movement) refers to a tendency in art with a specific goal or philosophy, sometimes setting an explicit canon, followed by a group of artists during a restricted period of time. Hence, art style is probably the most general descriptor that can be used to classify a painting, as it merges visual and historical information. In the context of large visual art databases, style is therefore a crucial piece of metadata for the description of paintings. In this article, in an attempt at automatically detecting the art style of a painting from an image, we propose to apply a deep learning approach and illustrate our study with results on WikiArt data.
  • One Class Splitting Criteria for Random Forests - Nicolas Goix, Télécom ParisTech; Nicolas Drougard, ISAE; Romain Brault*, Télécom ParisTech; Maël Chiapino, Télécom ParisTech [abstract]
    Random Forests (RFs) are strong machine learning tools for classification and regression. However, they remain supervised algorithms, and no extension of RFs to the one-class setting has been proposed, except for techniques based on second-class sampling. This work fills this gap by proposing a natural methodology to extend standard splitting criteria to the one-class setting, structurally generalizing RFs to one-class classification. An extensive benchmark of seven state-of-the-art anomaly detection algorithms is also presented. This empirically demonstrates the relevance of our approach.
  • Computer Assisted Composition with Recurrent Neural Networks - Christian Walder*, DATA61; Dongwoo Kim, ANU [abstract]
    "Sequence modeling with neural networks has lead to powerful models of symbolic music data. We address the problem of exploiting these models to reach creative musical goals. To this end we generalise previous work, which sampled Markovian sequence models under the constraint that the sequence belong to the language of a given finite state machine. We consider more expressive non-Markov models, thereby requiring approximate sampling which we provide in the form of an efficient sequential Monte Carlo method. In addition we provide and compare with a beam search strategy for conditional probability maximisation. Our algorithms are capable of convincingly re-harmonising famous musical works. To demonstrate this we provide visualisations, quantitative experiments, a human listening test and illustrative audio examples. We find both the sampling and optimisation procedures to be effective, yet complementary in character. For the case of highly permissive constraint sets, we find that sampling is to be preferred due to the overly regular nature of the optimisation based results.
  • Whitening-Free Least-Squares Non-Gaussian Component Analysis - Hiroaki Shiino*, Yahoo Japan Corporation; Hiroaki Sasaki, Nara Institute of Science and Technology; Gang Niu, The University of Tokyo/ RIKEN; Masashi Sugiyama, RIKEN / The University of Tokyo [abstract]
    Non-Gaussian component analysis (NGCA) is an unsupervised linear dimension reduction method that extracts low-dimensional non-Gaussian “signals” from high-dimensional data contaminated with Gaussian noise. NGCA can be regarded as a generalization of projection pursuit (PP) and independent component analysis (ICA) to multi-dimensional and dependent non-Gaussian components. Indeed, seminal approaches to NGCA are based on PP and ICA. Recently, a novel NGCA approach called least-squares NGCA (LSNGCA) has been developed, which gives a solution analytically through least-squares estimation of log-density gradients and eigendecomposition. However, since pre-whitening of data is involved in LSNGCA, it performs unreliably when the data covariance matrix is ill-conditioned, which is often the case in high-dimensional data analysis. In this paper, we propose a whitening-free variant of LSNGCA and experimentally demonstrate its superiority.
  • Semi-supervised Convolutional Neural Networks for Identifying Wi-Fi Interference Sources - Krista Longi*, University of Helsinki; Teemu Pulkkinen; Arto Klami, University of Helsinki [abstract]
    We present a convolutional neural network for identifying radio frequency devices from signal data, in order to detect possible interference sources for wireless local area networks. Collecting training data for this problem is particularly challenging due to the high number of possible interfering devices, the difficulty of obtaining precise timings, and the need to measure the devices in varying conditions. To overcome this challenge we focus on semi-supervised learning, aiming to minimize the need for reliable training samples while utilizing larger amounts of unlabeled data to improve accuracy. In particular, we propose a novel structured extension of the pseudo-label technique to take advantage of temporal continuity in the data, and show that a few seconds of training data for each device is already sufficient for highly accurate recognition.
  • Magnitude-Preserving Ranking for Structured Outputs - Celine Brouard*, Aalto university; Eric Bach, Aalto university; Sebastian Böcker, FSU Jena; Juho Rousu, Aalto university [abstract]
    In this paper, we present a novel method for solving structured prediction problems, based on combining Input Output Kernel Regression (IOKR) with an extension of magnitude-preserving ranking to structured output spaces. In particular, we concentrate on the case where a set of candidate outputs has been given, and the associated pre-image problem calls for ranking the set of candidate outputs. Our method, called magnitude-preserving IOKR, both aims to produce a good approximation of the output feature vectors, and to preserve the magnitude differences of the output features in the candidate sets. For the case where the candidate set does not contain corresponding 'correct' inputs, we propose a method for approximating the inputs through application of IOKR in the reverse direction. We apply our method to two learning problems: cross-lingual document retrieval and metabolite identification. Experiments show that the proposed approach improves performance over IOKR, and in the latter application obtains the current state-of-the-art accuracy.
  • A Word Embeddings Informed Focused Topic Model - He Zhao*, Monash University; Lan Du, Monash University; Wray Buntine, Monash University, Australia [abstract]
    In natural language processing and related fields, it has been shown that word embeddings can successfully capture both the semantic and syntactic features of words. They can serve as complementary information to topic models, especially in cases where word co-occurrence data is insufficient, such as with short texts. In this paper, we propose a focused topic model in which how a topic focuses on words is informed by word embeddings. Our model is able to discover more informed focused topics with more representative words, leading to better modelling accuracy and topic quality. With a data augmentation technique, we derive an efficient Gibbs sampling algorithm which benefits from the full local conjugacy of the model. Extensive experiments on several real-world datasets demonstrate that our model achieves comparable or improved performance in terms of both perplexity and topic coherence, particularly in handling sparse text data.
  • Accumulated Gradient Normalization - Joeri Hermans*, Maastricht University; Gerasimos Spanakis, Maastricht University [abstract]
    This work addresses the instability in asynchronous data parallel optimization. It does so by introducing a novel distributed optimizer with state-of-the-art performance that is able to efficiently optimize a central model under communication constraints. The optimizer achieves this by pushing a normalized sequence of first-order gradients to a parameter server. This implies that the magnitudes of the worker deltas are smaller, and that they provide a better direction towards a minimum compared to first-order gradients. As a result, our approach mitigates the parameter staleness problem more effectively.
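    Our reading of the accumulation-and-normalization step, as a hedged sketch (the worker/server protocol details are simplified away): a worker takes several local first-order steps and pushes the normalized displacement instead of raw gradients.
```python
import numpy as np

def worker_delta(grad_fn, w, lr=0.01, accumulation_steps=8):
    # Run a few local SGD steps, then return the accumulated
    # displacement normalized by the number of steps.
    w_local = w.copy()
    for _ in range(accumulation_steps):
        w_local -= lr * grad_fn(w_local)
    return (w_local - w) / accumulation_steps

# Toy usage on f(w) = 0.5 * ||w||^2, whose gradient is w itself:
delta = worker_delta(lambda w: w, np.ones(3))
print(delta)    # a small, well-scaled step toward the minimum at 0
```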
  • A Mutually-Dependent Hadamard Kernel for Modelling Latent Variable Couplings - Sami Remes*, Aalto University; Markus Heinonen, Aalto University; Samuel Kaski, Aalto University [abstract]
    We introduce a novel kernel that models input-dependent couplings across multiple latent processes. The pairwise kernel measures covariance both along inputs and across different latent signals in a mutually-dependent fashion. The latent correlation Gaussian process (LCGP) model combines these non-stationary latent components into multiple outputs by an input-dependent mixing matrix. Probit classification and support for multiple observation sets are derived by Variational Bayesian inference. Results on several datasets indicate that the LCGP model can recover the correlations between latent signals while simultaneously achieving state-of-the-art performance. We highlight the latent covariances with an EEG classification dataset where latent brain processes and their couplings simultaneously emerge from the model.
  • Learning Deep Semantic Embeddings for Cross-Modal Retrieval - Cuicui Kang*, University of Chinese Academy of Sciences; Shengcai Liao; Xiong Gang; Jianlong Tan [abstract]
    Deep learning methods have been actively researched for cross-modal retrieval, with the softmax cross-entropy loss commonly applied for supervised learning. However, the softmax cross-entropy loss is known to result in large intra-class variances, which is not well suited for cross-modal matching. In this paper, a deep architecture called Deep Semantic Embedding (DSE) is proposed, which is trained in an end-to-end manner for image-text cross-modal retrieval. With images and texts mapped to a feature embedding space, class labels are used to guide the embedding learning, so that the embedding space has a semantic meaning common to both images and texts. In this way, the difference between the modalities is eliminated. Under this framework, the center loss is introduced beyond the commonly used softmax cross-entropy loss to achieve both inter-class separation and intra-class compactness. Besides, a distance-based softmax cross-entropy loss is proposed to jointly consider the softmax cross-entropy and center losses in fully gradient-based learning. Experiments have been done on three popular image-text cross-modal retrieval databases, showing that the proposed algorithms achieve the best overall performance.
  • Pyramid Person Matching Network for Person Re-identification - Chaojie Mao*, Zhejiang University; Yingming Li, Zhejiang University; Zhongfei Zhang, Zhejiang University; Yaqing Zhang, Zhejiang University; Xi Li, Zhejiang University [abstract]
    In this work, we present a deep convolutional pyramid person matching network (PPMN) with a specially designed Pyramid Matching Module to address the problem of person re-identification. The architecture takes a pair of RGB images as input and outputs a similarity value indicating whether the two input images represent the same person or not. Based on deep convolutional neural networks, our approach first learns a discriminative semantic representation with semantic-component-aware features for persons, and then employs the Pyramid Matching Module to match the common semantic components of persons, which is robust to the variation of spatial scales and the misalignment of locations posed by viewpoint changes. The above two processes are jointly optimized via a unified end-to-end deep learning scheme. Extensive experiments on several benchmark datasets demonstrate the effectiveness of our approach against the state-of-the-art approaches, especially on the rank-1 recognition rate.
  • Learning RBM with a DC programming Approach - Vidyadhar Upadhya*, Indian Institute of Science, Bangalore; P S Sastry, Indian Institute of Science, Bangalore [abstract]
    By exploiting the property that the RBM log-likelihood function is the difference of convex functions, we formulate a stochastic variant of difference of convex functions (DC) programming to minimize the negative log-likelihood. Interestingly, the traditional contrastive divergence algorithm is a special case of this formulation, and the hyperparameters of the two algorithms can be chosen so that the amount of computation per mini-batch is identical. We show that for a given computational budget the proposed algorithm almost always reaches a higher log-likelihood more rapidly, compared to the standard contrastive divergence algorithm. Further, we modify this algorithm to use centered gradients and show that it is more efficient and effective than the standard centered gradient algorithm on benchmark datasets.
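    Since contrastive divergence appears here as a special case of the DC formulation, a minimal CD-1 gradient estimate for a Bernoulli RBM is sketched below (single sample, NumPy only); the paper's stochastic DC updates differ in how the two convex terms are handled.
```python
import numpy as np

def cd1_gradients(v0, W, b, c, rng):
    # One CD-1 estimate for a Bernoulli RBM with visible bias b,
    # hidden bias c and weights W (visible x hidden).
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    h0 = sigmoid(v0 @ W + c)                      # positive statistics
    h_sample = (rng.random(h0.shape) < h0) * 1.0
    v1 = sigmoid(h_sample @ W.T + b)              # one Gibbs step
    h1 = sigmoid(v1 @ W + c)                      # negative statistics
    dW = np.outer(v0, h0) - np.outer(v1, h1)
    return dW, v0 - v1, h0 - h1                   # grads for W, b, c

rng = np.random.default_rng(0)
nv, nh = 6, 4
dW, db, dc = cd1_gradients(rng.integers(0, 2, nv).astype(float),
                           0.01 * rng.standard_normal((nv, nh)),
                           np.zeros(nv), np.zeros(nh), rng)
```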
  • Multi-Task Structured Prediction for Entity Analysis: Search-Based Learning Algorithms - Chao Ma*, Oregon State University; Janardhan Rao Doppa, Washington State University; Prasad Tadepalli, Oregon State University; Hamed Shahbazi, Oregon State University; Xiaoli Fern, Oregon State University [abstract]
    We study several search-based approaches to the problem of multi-task structured prediction (MTSP) in the context of multiple entity analysis tasks in natural language processing. In search-based structured prediction, we learn a scoring function from supervised training data such that the best solutions correspond to the highest scores. We study three different search architectures for multi-task structured prediction that make different tradeoffs between speed and accuracy. In the fastest, "pipeline" architecture, we solve the different tasks one after another in a pipelined fashion. In the "joint" architecture, which is the slowest, we formulate MTSP as a single-task structured prediction problem and search the joint space. We introduce two different ways of pruning the search space to make search more efficient. In the intermediate "cyclic" architecture, we cycle through the tasks in sequence multiple times until there is no performance improvement. Our results on two benchmark domains show that the joint architecture improves over the pipeline architecture as well as the previous state-of-the-art approach based on graphical modeling. Learned pruning gives twice the speedup over the joint architecture with negligible loss in accuracy. The cyclic architecture is faster than the joint approach and performs competitively with it.
  • Nested LSTMs - Joel Ruben Antony Moniz*, Carnegie Mellon University; David Krueger, Montreal Institute for Learning Algorithms [abstract]
    We propose Nested LSTMs (NLSTM), a novel RNN architecture with multiple levels of memory. Nested LSTMs add depth to LSTMs via nesting as opposed to stacking. The value of a memory cell in an NLSTM is computed by an LSTM cell, which has its own inner memory cell. Specifically, instead of computing the value of the (outer) memory cell as $c^{outer}_t = f_t \odot c_{t-1} + i_t \odot g_t$, NLSTM memory cells use the concatenation $(f_t \odot c_{t-1}, i_t \odot g_t)$ as input to an inner LSTM (or NLSTM) memory cell, and set $c^{outer}_t = h^{inner}_t$. Nested LSTMs outperform both stacked and single-layer LSTMs with similar numbers of parameters in our experiments on various character-level language modeling tasks, and the inner memories of an LSTM learn longer term dependencies compared with the higher-level units of a stacked LSTM.
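    The update in the abstract translates almost directly into code. Below is a minimal NumPy sketch (biases omitted); feeding the previous outer memory in as the inner hidden state follows from $c^{outer}_t = h^{inner}_t$, and the weight shapes are our own choices.
```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_cell(x, h, c, P):
    # Standard LSTM cell (biases omitted for brevity).
    z = np.concatenate([x, h])
    f, i, o = (sigmoid(P[k] @ z) for k in ("Wf", "Wi", "Wo"))
    g = np.tanh(P["Wg"] @ z)
    c_new = f * c + i * g
    return o * np.tanh(c_new), c_new

def nlstm_cell(x, h, c_outer, c_inner, P_out, P_in):
    # Outer gates as usual, but (f * c_outer, i * g) is fed to an
    # inner LSTM and the new outer memory is its hidden state.
    z = np.concatenate([x, h])
    f, i, o = (sigmoid(P_out[k] @ z) for k in ("Wf", "Wi", "Wo"))
    g = np.tanh(P_out["Wg"] @ z)
    h_inner, c_inner = lstm_cell(np.concatenate([f * c_outer, i * g]),
                                 c_outer, c_inner, P_in)
    c_outer = h_inner
    return o * np.tanh(c_outer), c_outer, c_inner

def init_params(in_dim, hid, rng):
    return {k: 0.1 * rng.standard_normal((hid, in_dim + hid))
            for k in ("Wf", "Wi", "Wo", "Wg")}

rng = np.random.default_rng(0)
n, d = 3, 4                                 # input and hidden sizes
P_out, P_in = init_params(n, d, rng), init_params(2 * d, d, rng)
h = c_out = c_in = np.zeros(d)
h, c_out, c_in = nlstm_cell(rng.standard_normal(n), h, c_out, c_in,
                            P_out, P_in)
```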
  • On the Flatness of Loss Surface for Two-layered ReLU Networks - Jiezhang Cao, SCUT; Qingyao Wu*, ; Yuguang Yan, SCUT; Li Wang, University of Texas at Arlington; Mingkui Tan [abstract]
    Deep learning has achieved unprecedented practical success in many applications. Despite this empirical success, however, the theoretical understanding of deep neural networks remains a major open problem. In this paper, we explore properties of two-layered ReLU networks. For simplicity, we assume that the optimal model parameters (also called ground-truth parameters) are known. We then assume that the network receives Gaussian input and is trained by minimizing the expected squared loss between the learned parameters and the known ground-truth parameters. Theoretically, we propose a normal equation for critical points, and study the invariances under three kinds of transformations. We prove that these transformations keep the loss of a critical point invariant and can thus form flat regions. Therefore, how to escape from flat regions is vital for training networks.
  • Radical-level Ideograph Encoder for RNN-based Sentiment Analysis of Chinese and Japanese - Yuanzhi Ke*, Keio University; Masafumi Hagiwara, Keio University [abstract]
    The character vocabulary can be very large in non-alphabetic languages such as Chinese and Japanese, which makes neural network models huge when processing such languages. We explored a model for sentiment classification that takes the embeddings of the radicals of the Chinese characters, i.e., hanzi in Chinese and kanji in Japanese. Our model is composed of a CNN word feature encoder and a bi-directional RNN document feature encoder. The results achieved are on par with character embedding-based models, and close to the state-of-the-art word embedding-based models, with a 90% smaller vocabulary, and at least 13% and 80% fewer parameters than character embedding-based models and word embedding-based models, respectively. The results suggest that the radical embedding-based approach is cost-effective for machine learning on Chinese and Japanese.
  • Recovering Probability Distributions from Missing Data - Jin Tian*, Iowa State University [abstract]
    A probabilistic query may not be estimable from observed data corrupted by missing values if the data are not missing at random (MAR). It is therefore of theoretical interest and practical importance to determine, in principle, whether a probabilistic query is estimable from missing data when the data are not MAR. We present algorithms that systematically determine whether the joint probability distribution or a target marginal distribution is estimable from observed data with missing values, assuming that the data-generation model is represented as a Bayesian network, known as an m-graph, that not only encodes the dependencies among the variables but also explicitly portrays the mechanisms responsible for the missingness process. The results significantly advance the existing work.
  • Attentive Path Combination for Knowledge Graph Completion - Xiaotian Jiang*, IIE, CAS; Quan Wang, Institute of Information Engineering, CAS; Bin Wang, Institute of Information Engineering, CAS [abstract]
    Knowledge Graphs (KGs) are often greatly incomplete, necessitating KG completion. Path-based relation inference is one of the most important approaches to this task. Traditional methods treat each path between entity pairs as an atomic feature, thus inducing sparsity. Recently, neural network models solved this problem by decomposing a path into a sequence of all relations in the path, so as to model the path representation using Recurrent Neural Network (RNN) architectures. In cases where there are multiple paths between an entity pair, state-of-the-art neural models either select just the one path with the largest score, or make use of simple score pooling methods like Top-K, Average, or LogSumExp. Unfortunately, none of these methods is well suited to relation inference that relies on collecting evidence from multiple informative paths. In this paper, we propose a novel path-based relation inference model that learns effective entity pair representations with an attentive path combination method. Given an entity pair and a set of paths connecting the pair in a KG, our model allows for integrating information from each of the paths, and forms a dynamic entity pair representation with respect to each candidate relation to query. We empirically evaluate the proposed method on a real-world dataset. Experimental results show that the proposed model achieves better performance than state-of-the-art path-based relation inference methods.
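    The core pooling step can be illustrated generically: score each path representation against a query vector for the candidate relation and take a softmax-weighted combination. This is a generic attention pooling, not the paper's exact parameterization.
```python
import numpy as np

def attentive_pooling(path_vecs, relation_vec):
    # Softmax attention over paths, then a weighted average forms the
    # relation-specific entity-pair representation.
    scores = path_vecs @ relation_vec          # one score per path
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ path_vecs

rng = np.random.default_rng(0)
paths = rng.standard_normal((5, 8))            # 5 paths, dimension 8
pair_repr = attentive_pooling(paths, rng.standard_normal(8))
```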
  • A Covariance Matrix Adaptation Evolution Strategy for Direct Policy Search in Reproducing Kernel Hilbert Space - Vien Ngo*, University of Stuttgart; Viet-Hung Dang, Duy Tan University; Chung TaeChoong, Kyung Hee University [abstract]
    The covariance matrix adaptation evolution strategy (CMA-ES) is an efficient derivative-free optimization algorithm. It optimizes a black-box objective function over a well-defined parameter space. In some problems, such parameter spaces are defined using function approximation, in which feature functions are manually defined. The performance of those techniques therefore strongly depends on the quality of the chosen features. Hence, enabling CMA-ES to optimize over a more complex and general function class has long been desired. Specifically, we consider modeling the input space for black-box optimization in reproducing kernel Hilbert spaces (RKHS). This modeling leads to a functional optimization problem whose domain is a function space, which enables us to optimize over a very rich function class. In addition, we propose CMA-ES-RKHS, a generalized CMA-ES framework that performs black-box functional optimization in an RKHS. A search distribution, represented as a Gaussian process, is adapted by updating both its mean function and its covariance operator. Adaptive representation of the function and the covariance operator is achieved with sparsification techniques. We evaluate CMA-ES-RKHS on a simple functional optimization problem and on benchmark reinforcement learning (RL) domains. For an application in RL, we model policies for MDPs in an RKHS and transform a cumulative return objective into a functional of RKHS policies, which can be optimized via CMA-ES-RKHS. This formulation results in a black-box functional policy search framework.
  • Neural-Power: Predict and Deploy Energy-Efficient Convolutional Neural Networks - Ermao Cai*, Carnegie Mellon University; Da-Cheng Juan, Google Research; Dimitrios Stamoulis, Carnegie Mellon University; Diana Marculescu, Carnegie Mellon University [abstract]
    “How much energy is consumed for an inference made by a convolutional neural network (CNN)?” With the increased popularity of CNN models deployed on a wide spectrum of platforms (from mobile devices to workstations), the energy consumption of making an inference with a CNN model has drawn significantly more attention. From lengthening the battery life of mobile devices to reducing the energy bill of a datacenter, it is important to understand the energy efficiency of CNN models during serving, without actually training the model. In this work, we propose Neural-Power: a layer-wise predictive framework based on sparse polynomial regression for predicting the serving energy consumption of a CNN model deployed on any GPU platform. Given the architecture (or hyper-parameters) of a CNN model, the proposed framework provides an accurate prediction and breakdown of both power and runtime across all layers in the whole network, aiming at helping machine learners quickly identify the bottleneck(s) in execution time or power consumption. We also propose a metric, the “energy-precision ratio” (EPR), to guide machine learners in selecting an energy-efficient CNN architecture that better trades off energy consumption and prediction accuracy. The experimental results show that the prediction accuracy of the proposed Neural-Power outperforms the best published model to date, yielding an improvement in accuracy of up to 68.5%. We also assess the accuracy of predictions at the network level by predicting the runtime, power, and energy of state-of-the-art CNN configurations, achieving an average accuracy of 88.24% in runtime, 88.34% in power, and 97.21% in energy. We comprehensively corroborate the effectiveness of Neural-Power as a powerful framework for machine learning practitioners by testing it on different GPU platforms and deep learning software tools.
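    The modeling backbone, sparse polynomial regression, is easy to sketch with scikit-learn; the layer descriptors and runtime law below are entirely synthetic stand-ins for the paper's measured per-layer features.
```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import PolynomialFeatures

# Hypothetical layer descriptors (e.g., input size, channels, kernel
# size) and runtimes following an unknown low-order polynomial law.
rng = np.random.default_rng(0)
X = rng.uniform(1, 64, size=(200, 3))
t = 0.5 * X[:, 0] * X[:, 1] + 0.01 * X[:, 2] ** 2 + rng.normal(0, 0.1, 200)

poly = PolynomialFeatures(degree=2, include_bias=False)
model = Lasso(alpha=1.0, max_iter=100000).fit(poly.fit_transform(X), t)
print((model.coef_ != 0).sum(), "of", len(model.coef_), "terms kept")
```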

Journal Track

  • Efficient Preconditioning for Noisy Separable NMFs by Successive Projection Based Low-Rank Approximations - Tomohiko Mizutani*, Tokyo Institute of Technology; Mirai Tanaka, The Institute of Statistical Mathematics [abstract]
    The successive projection algorithm (SPA) can quickly solve a nonnegative matrix factorization problem under a separability assumption. Even if noise is added to the problem, SPA is robust as long as the perturbations caused by the noise are small. In particular, robustness against noise should be high when handling the problems arising from real applications. The preconditioner proposed by Gillis and Vavasis (2015) makes it possible to enhance the noise robustness of SPA. Meanwhile, an additional computational cost is required. The construction of the preconditioner contains a step to compute the top-$k$ truncated singular value decomposition of an input matrix. It is known that the decomposition provides the best rank-$k$ approximation to the input matrix; in other words, a matrix with the smallest approximation error among all matrices of rank less than $k$. This step is an obstacle to an efficient implementation of the preconditioned SPA. To address the cost issue, we propose a modification of the algorithm for constructing the preconditioner. Although the original algorithm uses the best rank-$k$ approximation, instead of it, our modification uses an alternative. Ideally, this alternative should have high approximation accuracy and low computational cost. To ensure this, our modification employs a rank-$k$ approximation produced by an SPA based algorithm. We analyze the accuracy of the approximation and evaluate the computational cost of the algorithm. We then present an empirical study revealing the actual performance of the SPA.
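    For reference, the SPA core that both the original and the modified preconditioner build on fits in a few lines; the preconditioning itself is not shown here.
```python
import numpy as np

def spa(M, k):
    # Greedily pick the column with the largest residual norm, then
    # project all columns onto the orthogonal complement of the pick.
    R = M.astype(float).copy()
    picked = []
    for _ in range(k):
        j = int(np.argmax(np.sum(R ** 2, axis=0)))
        picked.append(j)
        u = R[:, j] / np.linalg.norm(R[:, j])
        R -= np.outer(u, u @ R)
    return picked

# Separable toy instance M = W [I, H']: SPA recovers the identity block.
rng = np.random.default_rng(0)
W = rng.random((6, 3))
H = np.hstack([np.eye(3), rng.dirichlet(np.ones(3), size=5).T])
print(spa(W @ H, 3))    # indices {0, 1, 2} in some order
```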
  • Distributed Multi-task Classification: A Decentralized Online Learning Approach - Chi Zhang*, Nanyang Technological University; Peilin Zhao, Ant Financial; Shuji Hao, A*STAR; Yeng Chai Soh, Nanyang Technological University; Bu Sung Lee, Nanyang Technological University; Steven C.H. Hoi, Singapore Management University [abstract]
    Although dispersing a single task (STL) to distributed learning nodes has been intensively studied in previous research, multi-task learning (MTL) on distributed networks is still an area that has not been fully exploited, especially under decentralized settings. The challenge lies in the fact that different tasks may have different optimal learning weights, while communication through the distributed network forces all tasks to converge to a unique classifier. In this paper, we present a novel algorithm to overcome this challenge and enable learning multiple tasks simultaneously on a decentralized distributed network. Specifically, the learning framework can be separated into two phases: (i) multi-task information is shared within each node in the first phase; (ii) communication between nodes then leads the whole network to converge to a common minimizer. Theoretical analysis indicates that our algorithm achieves an $O(\sqrt{T})$ regret bound when compared with the best classifier in hindsight, which is further validated by experiments on both synthetic and real-world datasets.
  • Learning Safe Multi-Label Prediction for Weakly Labeled Data - Tong Wei, Nanjing University; Lan-Zhe Guo, Nanjing University; Yu-Feng Li*, Nanjing University; Wei Gao, Nanjing University [abstract]
    In this paper we study multi-label learning with weakly labeled data, i.e., labels of training examples are incomplete, which commonly occurs in real applications, e.g., image classification and document categorization. This setting includes, e.g., (i) semi-supervised multi-label learning, where completely labeled examples are partially known; (ii) weak label learning, where relevant labels of examples are partially known; and (iii) extended weak label learning, where relevant and irrelevant labels of examples are partially known. Previous studies often expect that learning methods using weakly labeled data will improve performance, as more data are employed. This, however, is not always the case in reality, i.e., weakly labeled data may sometimes degenerate the learning performance. It is desirable to learn safe multi-label prediction that will not hurt performance when weakly labeled data is involved in the learning procedure. In this work we optimize multi-label evaluation metrics (F$_1$ score and Top-$k$ precision) given that the ground-truth label assignment is realized by a convex combination of base multi-label learners. To cope with the infinite number of possible ground-truth label assignments, a cutting-plane strategy is adopted to iteratively generate the most helpful label assignments. The whole optimization is cast as a series of simple linear programs in an efficient manner. Extensive experiments on three weakly labeled learning tasks, namely (i) semi-supervised multi-label learning, (ii) weak label learning and (iii) extended weak label learning, clearly show that our proposal improves the safeness of using weakly labeled data compared with many state-of-the-art methods.
  • Crowdsourcing with Unsure Option - Yao-Xiang Ding, Nanjing University; Zhi-Hua Zhou*, Nanjing University [abstract]
    One of the fundamental problems in crowdsourcing is the trade-off between the number of workers needed for high-accuracy aggregation and the budget to pay them. To save budget, it is important to ensure high quality of the crowdsourced labels, so that the total cost of label collection is reduced. Since the self-confidence of the workers often has a close relationship with their abilities, a possible way to control quality is to request that workers return labels only when they feel confident, by providing an unsure option to them. On the other hand, allowing workers to choose the unsure option also leads to the potential danger of budget waste. In this work, we propose an analysis towards understanding when providing the unsure option indeed leads to significant cost reduction, as well as how the confidence threshold should be set. We also propose an online mechanism, which is an alternative for threshold selection when estimating the crowd ability distribution is difficult.
  • Semi-Supervised AUC Optimization based on Positive-Unlabeled Learning - Tomoya Sakai*, The University of Tokyo / RIKEN; Gang Niu, The University of Tokyo / RIKEN; Masashi Sugiyama, RIKEN / The University of Tokyo [abstract]
    Maximizing the area under the receiver operating characteristic curve (AUC) is a standard approach to imbalanced classification. So far, various supervised AUC optimization methods have been developed, and they have also been extended to semi-supervised scenarios to cope with small-sample problems. However, existing semi-supervised AUC optimization methods rely on strong distributional assumptions, which are rarely satisfied in real-world problems. In this paper, we propose a novel semi-supervised AUC optimization method that does not require such restrictive assumptions. We first develop an AUC optimization method based only on positive and unlabeled data (PU-AUC) and then extend it to semi-supervised learning by combining it with a supervised AUC optimization method. We theoretically prove that, without the restrictive distributional assumptions, unlabeled data contribute to improving the generalization performance in the PU and semi-supervised AUC optimization methods. Finally, we demonstrate the practical usefulness of the proposed methods through experiments.
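    A back-of-the-envelope view of why positive vs. unlabeled ranking is informative: with class prior theta_P, AUC(P vs U) = theta_P/2 + (1 - theta_P) * AUC(P vs N), since positives rank against the hidden positives at chance level. The correction below is our algebraic reading of that identity, not the paper's estimator.
```python
import numpy as np

def empirical_auc(pos, neg):
    # Fraction of (positive, negative) pairs ranked correctly.
    diff = pos[:, None] - neg[None, :]
    return float(((diff > 0) + 0.5 * (diff == 0)).mean())

def pu_auc(pos, unlabeled, prior):
    # Invert AUC(P vs U) = prior/2 + (1 - prior) * AUC(P vs N).
    return (empirical_auc(pos, unlabeled) - prior / 2.0) / (1.0 - prior)

rng = np.random.default_rng(0)
pos = rng.normal(1.0, 1.0, 300)
neg = rng.normal(0.0, 1.0, 700)
unl = np.concatenate([rng.normal(1.0, 1.0, 150), neg])   # prior ~ 0.18
print(empirical_auc(pos, neg), pu_auc(pos, unl, 150 / 850))  # close values
```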
  • Robust Plackett-Luce Model for k-ary Crowdsourced Preferences - Bo Han*, University of Technology Sydney; Yuangang Pan, University of Technology Sydney; Ivor W. Tsang, University of Technology Sydney [abstract]
    The aggregation of $k$-ary preferences is an emerging ranking problem, which plays an important role in several aspects of our daily life, such as ordinal peer grading and online product recommendation. At the same time, crowdsourcing has become a trendy way to provide a plethora of $k$-ary preferences for this ranking problem, due to convenient platforms and low costs. However, $k$-ary preferences from crowdsourced workers are often noisy, which inevitably degenerates the performance of traditional aggregation models. To address this challenge, in this paper, we present a RObust PlAckett-Luce (ROPAL) model. Specifically, to ensure robustness, ROPAL integrates the Plackett-Luce model with a denoising vector. Based on the Kendall-tau distance, this vector corrects $k$-ary crowdsourced preferences with a certain probability. In addition, we propose an online Bayesian inference scheme to make ROPAL scalable to large-scale preferences. We conduct comprehensive experiments on simulated and real-world datasets. Empirical results on "massive synthetic" and "real-world" datasets show that ROPAL with online Bayesian inference achieves substantial improvements in robustness and noisy-worker detection over current approaches.