NASIL: Neural Network Architecture Searching for Incremental Learning in Image Classification

Author(s): Xianya Fu, Wenrui Li, Qiurui Chen, Lianyi Zhang, Kai Yang, ...

Author(s): A. Ferreyra-Ramirez, C. Aviles-Cruz, E. Rodriguez-Martinez, J. Villegas-Cortez, A. Zuñiga-Lopez

Author(s): Taras Iakymchuk, Alfredo Rosado-Muñoz, Juan F Guerrero-Martínez, Manuel Bataller-Mompeán, Jose V Francés-Víllora

Author(s): Cheng-An Hung, Sheng-Fuu Lin

A neural network architecture that incorporates a supervised mechanism into a fuzzy adaptive Hamming net (FAHN) is presented. The FAHN constructs hyper-rectangles that represent template weights in an unsupervised learning paradigm; learning in the FAHN consists of creating and adjusting hyper-rectangles in feature space. By aggregating multiple hyper-rectangles into a single class, we can build a classifier, henceforth termed a supervised fuzzy adaptive Hamming net (SFAHN), that discriminates between nonconvex and even discontinuous classes. The SFAHN can operate at a fast learning rate in online (incremental) or offline (batch) applications without becoming unstable. The performance of the SFAHN is tested on the Fisher iris data and on an online character recognition problem.
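
The abstract's core idea, learning hyper-rectangles incrementally and aggregating them per class, can be illustrated with a minimal sketch. The sketch below is not the paper's SFAHN; the expansion limit `theta` and the membership function are assumptions chosen only to show the general hyperbox-classifier pattern.

```python
# Minimal illustrative hyper-rectangle (hyperbox) classifier sketch.
# Assumptions (not from the paper): `theta` expansion limit, membership function.
import numpy as np

class HyperboxClassifier:
    def __init__(self, theta=0.3):
        self.theta = theta                       # assumed max hyperbox size per dimension
        self.mins, self.maxs, self.labels = [], [], []

    def _membership(self, x, lo, hi):
        # Degree to which x lies inside the box [lo, hi]; 1.0 means fully inside.
        outside = np.maximum(lo - x, 0) + np.maximum(x - hi, 0)
        return 1.0 - np.mean(np.minimum(outside, 1.0))

    def partial_fit(self, x, y):
        # Online (incremental) learning: expand a same-class box if the expansion
        # stays within the size limit, otherwise create a new box for this sample.
        x = np.asarray(x, dtype=float)
        for i, label in enumerate(self.labels):
            if label != y:
                continue
            new_lo = np.minimum(self.mins[i], x)
            new_hi = np.maximum(self.maxs[i], x)
            if np.all(new_hi - new_lo <= self.theta):
                self.mins[i], self.maxs[i] = new_lo, new_hi
                return
        self.mins.append(x.copy()); self.maxs.append(x.copy()); self.labels.append(y)

    def predict(self, x):
        # Aggregate all boxes; predict the class of the best-matching box.
        x = np.asarray(x, dtype=float)
        scores = [self._membership(x, lo, hi) for lo, hi in zip(self.mins, self.maxs)]
        return self.labels[int(np.argmax(scores))]

# Tiny usage example on two synthetic clusters.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0.3, 0.05, (20, 2)), rng.normal(0.7, 0.05, (20, 2))])
    y = [0] * 20 + [1] * 20
    clf = HyperboxClassifier(theta=0.25)
    for xi, yi in zip(X, y):
        clf.partial_fit(xi, yi)
    print(clf.predict([0.32, 0.28]), clf.predict([0.68, 0.72]))   # expected: 0 1
```

Because each sample either grows an existing same-class box or spawns a new one, the learner can be trained one example at a time, which is the sense in which such classifiers support fast online (incremental) learning.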


2021
Author(s): Amogh Palasamudram

This research aims to introduce and evaluate a new neural network architecture intended to improve the speed and effectiveness of forward propagation: the Neural Layer Bypassing Network (NLBN). The theory and workings of this architecture are explained in this paper, along with comparisons to other methods for increasing the efficacy of deep learning models. The research also includes code examples with three image classification models trained on different datasets and analyses the impact of the NLBN architecture on forward propagation. It was found that this architecture increases the speed of forward propagation but tends to slightly decrease model accuracy; it also takes longer to train and requires more memory. All in all, this architecture is a potential foundation for using deep learning to teach deep learning models to be more efficient, including skipping and re-propagating through layers to improve the overall performance of a model.
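
The abstract does not specify how the NLBN decides when to bypass a layer, so the following is only a sketch of one plausible layer-skipping mechanism: a small learned "router" per block that lets confident samples pass through unchanged. The router, threshold, and blending rule are assumptions for illustration, not the paper's actual design.

```python
# Illustrative layer-bypassing sketch (assumed mechanism, not the NLBN itself).
import torch
import torch.nn as nn

class BypassableBlock(nn.Module):
    def __init__(self, dim, threshold=0.5):
        super().__init__()
        self.layer = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())
        self.router = nn.Linear(dim, 1)     # assumed gate: predicts whether to skip
        self.threshold = threshold

    def forward(self, x):
        skip_prob = torch.sigmoid(self.router(x))          # per-sample skip probability
        out = self.layer(x)
        if self.training:
            # Soft blend keeps the router differentiable during training.
            return skip_prob * x + (1 - skip_prob) * out
        # At inference, hard-skip the layer for samples the router is confident about.
        return torch.where(skip_prob > self.threshold, x, out)

class BypassStyleClassifier(nn.Module):
    def __init__(self, in_dim=784, hidden=256, num_classes=10, depth=3):
        super().__init__()
        self.embed = nn.Linear(in_dim, hidden)
        self.blocks = nn.ModuleList(BypassableBlock(hidden) for _ in range(depth))
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x):
        h = torch.relu(self.embed(x))
        for block in self.blocks:
            h = block(h)                    # each block may pass input through unchanged
        return self.head(h)

# Usage example on random data shaped like flattened 28x28 images.
model = BypassStyleClassifier().eval()
with torch.no_grad():
    logits = model(torch.randn(4, 784))
print(logits.shape)   # torch.Size([4, 10])
```

A gate of this kind makes the speed/accuracy trade-off described in the abstract concrete: skipped layers save computation at inference, while the extra router parameters add training time and memory.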

