Unsupervised feature learning for self-tuning neural networks

2021 ◽  
Vol 133 ◽  
pp. 103-111 ◽
Author(s):  
Jongbin Ryu ◽  
Ming-Hsuan Yang ◽  
Jongwoo Lim

2019 ◽
Vol 119 ◽  
pp. 332-340 ◽  
Author(s):  
Daniel J. Saunders ◽  
Devdhar Patel ◽  
Hananel Hazan ◽  
Hava T. Siegelmann ◽  
Robert Kozma

2016 ◽  
Vol 38 (9) ◽  
pp. 1734-1747 ◽  
Author(s):  
Alexey Dosovitskiy ◽  
Philipp Fischer ◽  
Jost Tobias Springenberg ◽  
Martin Riedmiller ◽  
Thomas Brox

2019 ◽  
Vol 8 (2) ◽  
pp. 5525-5528

Recognizing text in images has received considerable attention recently. Traditional systems in this area have relied on elaborate models incorporating carefully hand-engineered features or large amounts of prior knowledge. This paper takes a different route, combining the representational power of large, multilayer neural networks with recent developments in unsupervised feature learning, which allows a common framework to be used to train highly accurate character recognizer and text detector modules. The full recognition pipeline of scanning, segmenting, and recognizing text is examined and described in detail.
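
As a rough illustration of the unsupervised feature-learning approach the abstract describes, the sketch below learns a filter bank from unlabeled image patches with k-means, then pools the rectified filter responses into features for a plain character classifier. The patch sizes, helper names, and library choices (NumPy, scikit-learn) are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch: k-means on random patches as an unsupervised filter bank,
# followed by convolution-like encoding and a linear character classifier.
import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.linear_model import LogisticRegression

def learn_filters(images, patch=8, n_filters=96, n_patches=50_000, seed=0):
    """Sample random patches from (N, H, W) images, normalize, cluster."""
    rng = np.random.default_rng(seed)
    h, w = images.shape[1:3]
    idx = rng.integers(0, len(images), n_patches)
    ys = rng.integers(0, h - patch, n_patches)
    xs = rng.integers(0, w - patch, n_patches)
    patches = np.stack([images[i, y:y + patch, x:x + patch].ravel()
                        for i, y, x in zip(idx, ys, xs)])
    # Per-patch contrast normalization before clustering.
    patches = (patches - patches.mean(1, keepdims=True)) / \
              (patches.std(1, keepdims=True) + 1e-8)
    km = MiniBatchKMeans(n_clusters=n_filters, random_state=seed).fit(patches)
    return km.cluster_centers_                     # (n_filters, patch*patch)

def encode(image, filters, patch=8, stride=4):
    """Slide the filter bank over one image and sum-pool rectified responses."""
    h, w = image.shape
    feats = []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            p = image[y:y + patch, x:x + patch].ravel()
            p = (p - p.mean()) / (p.std() + 1e-8)
            feats.append(np.maximum(filters @ p, 0))   # rectified responses
    return np.sum(feats, axis=0)                       # global sum pooling

# Usage sketch: learn filters on unlabeled crops, then train a classifier
# on pooled features of labeled character images.
# filters = learn_filters(unlabeled_images)
# X = np.stack([encode(im, filters) for im in train_images])
# clf = LogisticRegression(max_iter=1000).fit(X, train_labels)
```

The same encoder can back both modules the abstract mentions: a text detector (character vs. background over sliding windows) and a character recognizer (multi-class over segmented glyphs).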


2021 ◽  
Author(s):  
Mingyuan Meng ◽  
Xingyu Yang ◽  
Lei Bi ◽  
Jinman Kim ◽  
Shanlin Xiao ◽  
...  

2021 ◽  
Vol 13 (4) ◽  
pp. 742 ◽
Author(s):  
Jian Peng ◽  
Xiaoming Mei ◽  
Wenbo Li ◽  
Liang Hong ◽  
Bingyu Sun ◽  
...  

Scene understanding of remote sensing images is of great significance in various applications. Its fundamental problem is how to construct representative features. Various convolutional neural network architectures have been proposed for automatically learning features from images. However, is the current practice of configuring one architecture to learn from all the data, while ignoring the differences between images, the right one? It seems contrary to our intuition: some images are clearly easier to recognize, and some are harder. This reflects a gap between the characteristics of the images and the features learned by specific network structures; unfortunately, the literature so far lacks an analysis of this relationship. In this paper, we explore the problem from three aspects: first, we build a visual-based evaluation pipeline of scene complexity to characterize the intrinsic differences between images; second, we analyze the relationship between semantic concepts and feature representations, i.e., the scale and hierarchy of features, which are the essential elements of CNNs of different architectures, for remote sensing scenes of different complexity; third, we introduce class activation mapping (CAM), a visualization method that explains feature learning within neural networks, to analyze the relationship between scenes of different complexity and their semantic feature representations. The experimental results show that a complex scene needs deeper, multi-scale features, whereas a simpler scene needs shallower, single-scale features. Moreover, complex scene concepts depend more on the joint semantic representation of multiple objects. Furthermore, we propose a framework that predicts the scene complexity of an image and use it to design a depth- and scale-adaptive model. The adaptive model achieves higher performance with fewer parameters than the original model, demonstrating the potential significance of scene complexity.
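
The CAM technique the abstract refers to is class activation mapping: in a network with global average pooling before the classifier, a class's heatmap is the last convolutional feature maps weighted by that class's final-layer weights. Below is a minimal sketch using a stock torchvision ResNet-18 as a stand-in; the model choice, preprocessing, and function name are assumptions for illustration, not the paper's setup.

```python
# Hedged CAM sketch: weight the last conv feature maps by the fc weights
# of a target class, ReLU, and normalize to a [0, 1] heatmap.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights="IMAGENET1K_V1").eval()

def class_activation_map(image, class_idx=None):
    """Return an (h, w) CAM heatmap for a normalized (3, H, W) input tensor."""
    x = image.unsqueeze(0)                          # add batch dimension
    with torch.no_grad():
        # Run the backbone up to, but not including, pooling and the classifier.
        for name, layer in model.named_children():
            if name in ("avgpool", "fc"):
                break
            x = layer(x)                            # feature maps (1, K, h, w)
        logits = model.fc(x.mean(dim=(2, 3)))       # global average pool + fc
    if class_idx is None:
        class_idx = logits.argmax(1).item()         # default: predicted class
    weights = model.fc.weight[class_idx]            # (K,) per-channel weights
    cam = torch.einsum("k,khw->hw", weights, x[0])  # weighted sum of maps
    cam = F.relu(cam)                               # keep positive evidence only
    return cam / (cam.max() + 1e-8)                 # normalize to [0, 1]
```

Read against the abstract's finding: a simple scene tends to produce one compact high-activation region, while a complex scene spreads activation across several object regions, which is what motivates the depth- and scale-adaptive model.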

