Boosting the Interpretability of Online Diagnostic Statistics by Learning to Map the Spatial Path

We study the problem of learning an online model to predict a patient's health status. We propose a supervised learning algorithm that uses a deep convolutional neural network (CNN) to predict which patients are most likely to be diagnosed with a disease, based on the most probable (patient-dependent) results produced by an online method. Because the prediction is generated by a fully convolutional network, the predictions are learned directly from the input, allowing us to avoid a traditional CNN built around a single hand-picked disease outcome such as blood pressure. We demonstrate the approach and its effectiveness on a set of simulated clinical trials.
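The abstract does not specify the network architecture, so the following is only a minimal illustrative sketch of the general idea: a 1-D convolution over a hypothetical patient diagnostic time series, global max pooling, and a sigmoid read-out producing a risk score. All names (`conv1d`, `predict_risk`) and parameters are assumptions for illustration, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(signal, kernel):
    """Valid-mode 1-D convolution (cross-correlation) over a diagnostic series."""
    n = len(signal) - len(kernel) + 1
    return np.array([signal[i:i + len(kernel)] @ kernel for i in range(n)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict_risk(series, kernel, w, b):
    """Convolve the series, pool the feature map, and map it to a score in (0, 1)."""
    features = conv1d(series, kernel)
    pooled = features.max()  # global max pooling over the feature map
    return sigmoid(w * pooled + b)

# Hypothetical diagnostic time series for one patient (e.g. a vitals trace).
series = rng.normal(size=32)
kernel = rng.normal(size=5)
risk = predict_risk(series, kernel, w=0.8, b=-0.1)
print(risk)  # a probability-like risk score in (0, 1)
```

In a real system the convolution kernel and read-out weights would be learned from labeled patient data; here they are fixed random values purely to show the data flow.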
Self-Organizing Sensor Networks for Prediction with Multi-view Learning
Multi-Channel Multi-Resolution RGB-D Light Field Video with Convolutional Neural Networks
Deep Reinforcement Learning for Goal-Directed Exploration in Sequential Decision Making Scenarios

We propose Neural-SteerNet, a novel deep reinforcement learning system that can be regarded as a general reinforcement learning framework. The system has been tested on a dataset of real-world tasks as well as on a set of sparse-reward tasks. We show that Neural-SteerNet learns to navigate successfully starting from a relatively low-level problem formulation. Moreover, the network learns to locate the target objects of a task, and it can navigate and act effectively within the visual environment. Experiments on both real and simulated data show that Neural-SteerNet outperforms other reinforcement learning systems on these tasks and reaches higher accuracies.