Deep Multi-view Feature Learning for Text Recognition – We present a novel approach to joint feature extraction and segmentation that uses learned models to produce high-quality, multi-view representations for multiple tasks. Our approach, a multi-view network (MI-N2i), extracts multiple views of the same feature maps and segments them with a fusion step built on a shared framework. Specifically, we develop a joint framework that exploits a shared framework together with a shared classifier, so that MI-N2i learns feature extraction and segmentation jointly within a single model. We evaluate MI-N2i on the UCB Text2Image dataset and show that it outperforms state-of-the-art approaches in recognition accuracy, image quality, and segmentation quality.
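The shared-framework idea in the abstract above can be sketched as follows: several views of one input pass through a single shared feature extractor, the per-view features are fused, and the fused features feed two heads (recognition and segmentation). Every function name, the toy linear weights, and the fusion-by-averaging choice here are illustrative assumptions, not the paper's actual architecture.

```python
# Toy sketch, assuming: one shared linear extractor, average fusion,
# an argmax recognition head, and a threshold segmentation head.

def shared_extractor(view, weights):
    """Shared feature map: the same toy linear projection for every view."""
    return [sum(w * x for w, x in zip(row, view)) for row in weights]

def fuse(features_per_view):
    """Fusion by element-wise averaging across views."""
    n = len(features_per_view)
    return [sum(f[i] for f in features_per_view) / n
            for i in range(len(features_per_view[0]))]

def recognition_head(fused):
    """Toy classifier head: index of the largest fused feature."""
    return max(range(len(fused)), key=lambda i: fused[i])

def segmentation_head(fused, threshold=0.5):
    """Toy segmentation head: threshold fused features into a binary mask."""
    return [1 if f > threshold else 0 for f in fused]

# Two views of the same input share one set of extractor weights.
weights = [[0.5, 0.5], [1.0, -1.0]]
views = [[1.0, 0.2], [0.8, 0.4]]
feats = [shared_extractor(v, weights) for v in views]
fused = fuse(feats)          # one fused representation for both tasks
label = recognition_head(fused)
mask = segmentation_head(fused)
```

The point of the sketch is only the weight sharing: both views and both heads consume the output of the same extractor, which is what makes the feature learning joint.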
This paper addresses the problem of texture classification based on the visual concept of a texture and its relation to a context. The main idea is a framework that classifies textures into semantic categories. In this framework, each texture is first assigned to one of several visual categories; the context category, itself defined in terms of a visual category, then determines the final semantic label. Classification therefore proceeds in two stages: a visual category is used to classify the texture, and the resulting category is refined according to the context category. We compare the results with existing texture classification algorithms that use only visual categories and ignore context. In an extensive experiment, we trained and tested on two texture recognition datasets and achieve state-of-the-art performance.
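The two-stage scheme described above can be sketched concretely: stage one assigns a visual category from low-level descriptors, and stage two picks the final semantic category conditioned on the context category. The category names, descriptor sets, and lookup tables below are invented for illustration; the paper does not specify them.

```python
# Hypothetical visual categories, each with a set of low-level descriptors.
VISUAL_CATEGORIES = {
    "striped": {"lines", "bands"},
    "dotted": {"spots", "speckles"},
}

# Final semantic label depends on (visual category, context category).
SEMANTIC_BY_CONTEXT = {
    ("striped", "clothing"): "fabric-stripe",
    ("striped", "nature"): "zebra-pattern",
    ("dotted", "clothing"): "polka-dot",
    ("dotted", "nature"): "animal-spots",
}

def visual_category(descriptors):
    """Stage 1: pick the visual category sharing the most descriptors."""
    return max(VISUAL_CATEGORIES,
               key=lambda c: len(VISUAL_CATEGORIES[c] & descriptors))

def classify(descriptors, context):
    """Stage 2: refine the visual category using the context category."""
    v = visual_category(descriptors)
    return SEMANTIC_BY_CONTEXT[(v, context)]
```

For example, `classify({"lines"}, "nature")` yields a different semantic label than `classify({"lines"}, "clothing")`, which is exactly the context dependence the abstract argues purely visual classifiers miss.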
Machine learning and networked sensing
A Novel Approach to Texture based Texture Classification using Texture Classification