Evaluation of a Multilayer Weighted Gaussian Process Latent Variable Model for Pattern Recognition
We present the first general-purpose, scalable, and robust method for inferring the structure of a deep neural network from only a small number of observations. Our method first partitions the network's input across three layers. This partition is then analyzed with a feature fusion technique guided by a novel representation of the network structure. Finally, we propose an unsupervised learning scheme that infers the network structure from local representations of network features. Our approach leverages large, unsupervised feature datasets to form a model, and we present a fast learning algorithm that outperforms state-of-the-art unsupervised methods on a range of datasets.
Despite the recent success of learning structured classifiers, the main challenge is finding the right balance between classification performance and training-data quality, which in turn requires large amounts of manual annotation. Many previous machine learning efforts to ease the labeling of training examples have focused on a single task. However, learning a complex feature representation of a dataset can be time consuming, so feature vectors are often pre-trained on the same task. In this work, we address these issues by leveraging deep semantic learning to extract richer features from a dataset for classification tasks. In particular, we design a novel framework that extracts semantic feature predictions with the goal of reducing the computational cost of feature extraction. We demonstrate that this approach can speed up classification by up to an order of magnitude.
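The abstract above alludes to the common "extract features once, classify cheaply" pattern but gives no implementation details. The sketch below is purely illustrative: the fixed random ReLU projection stands in for a hypothetical pre-trained semantic feature extractor, and the nearest-centroid head stands in for a lightweight classifier; none of these names or choices come from the paper. The point it shows is that features are computed a single time and then reused, which is where the claimed speedup would come from.

```python
import numpy as np

def make_extractor(in_dim, feat_dim, seed=0):
    """Return a fixed feature map (a stand-in for a pre-trained extractor)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(in_dim, feat_dim)) / np.sqrt(in_dim)
    return lambda X: np.maximum(X @ W, 0.0)  # ReLU random features

def fit_centroids(F, y):
    """Per-class means of the extracted features (a cheap classifier head)."""
    classes = np.unique(y)
    return classes, np.stack([F[y == c].mean(axis=0) for c in classes])

def predict(classes, centroids, F):
    """Assign each row of F to the nearest class centroid."""
    dists = ((F[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return classes[dists.argmin(axis=1)]

# Toy two-class data: Gaussian blobs centered at +2 and -2.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(+2.0, 1.0, size=(100, 4)),
               rng.normal(-2.0, 1.0, size=(100, 4))])
y = np.array([0] * 100 + [1] * 100)

extract = make_extractor(in_dim=4, feat_dim=32)
F = extract(X)  # features computed once, then reused by the classifier
classes, centroids = fit_centroids(F, y)
accuracy = np.mean(predict(classes, centroids, F) == y)
```

In a real pipeline the extracted features `F` would be cached to disk, so retraining or swapping the classifier head never re-runs the expensive extractor.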
Efficient Stochastic Dual Coordinate Ascent
Cortical activations and novelty-promoting effects in reward-based learning
Deep Multi-view Feature Learning for Text Recognition
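One of the titles above, Efficient Stochastic Dual Coordinate Ascent, names a concrete, well-known algorithm. Since no abstract is given, the following is only a minimal sketch of standard SDCA for an L2-regularized linear SVM (dual variables in [0, 1], closed-form coordinate updates); the data and parameter names are illustrative assumptions, not details from the paper.

```python
import numpy as np

def sdca_svm(X, y, lam=0.01, epochs=50, seed=0):
    """Stochastic Dual Coordinate Ascent for an L2-regularized linear SVM.

    Solves  min_w  lam/2 ||w||^2 + (1/n) sum_i max(0, 1 - y_i <w, x_i>)
    via its dual: each alpha_i lives in [0, 1], and the primal iterate is
    recovered as  w = (1 / (lam * n)) * sum_i alpha_i y_i x_i.
    """
    n, d = X.shape
    rng = np.random.default_rng(seed)
    alpha = np.zeros(n)
    w = np.zeros(d)
    sq_norms = np.einsum("ij,ij->i", X, X)  # per-example squared norms
    for _ in range(epochs):
        for i in rng.permutation(n):  # sweep coordinates in random order
            if sq_norms[i] == 0.0:
                continue
            # Hinge-loss subgradient term for example i.
            g = y[i] * X[i].dot(w) - 1.0
            # Closed-form coordinate maximization, projected onto [0, 1].
            new_alpha = np.clip(alpha[i] - lam * n * g / sq_norms[i], 0.0, 1.0)
            # Keep w consistent with the updated dual variable.
            w += (new_alpha - alpha[i]) * y[i] * X[i] / (lam * n)
            alpha[i] = new_alpha
    return w

# Usage on a tiny linearly separable problem (labels in {-1, +1}).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(+2.0, 1.0, size=(50, 2)),
               rng.normal(-2.0, 1.0, size=(50, 2))])
y = np.array([1.0] * 50 + [-1.0] * 50)
w = sdca_svm(X, y)
accuracy = np.mean(np.sign(X.dot(w)) == y)
```

The attraction of SDCA over plain SGD is that each coordinate step has a closed-form exact maximizer and the duality gap gives a natural stopping criterion.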
Learning a Modular Deep Learning Model with Online Correction