Image Classification Based on Automatic Neural Architecture Search Using Binary Crow Search Algorithm

IEEE Access, 2020, Vol 8, pp. 189891-189912
Author(s): Mobeen Ahmad, Muhammad Abdullah, Hyeonjoon Moon, Seong Joon Yoo, Dongil Han
Sensors, 2021, Vol 21 (20), pp. 6927
Author(s): Xiaojuan Wang, Xinlei Wang, Tianqi Lv, Lei Jin, Mingshu He

Human activity recognition (HAR) based on wearable sensors is a promising research direction. The limited resources of handheld terminals and wearable devices constrain recognition performance and call for lightweight architectures. With the development of deep learning, neural architecture search (NAS) has emerged in an attempt to minimize human intervention. We propose HARNAS, an approach that uses NAS to find models suited to HAR tasks. The multi-objective search algorithm NSGA-II serves as the search strategy of HARNAS. To trade off a model's performance against its computation speed, the F1 score and the number of floating-point operations (FLOPs) are selected as objectives, yielding a bi-objective problem. However, a model's computation speed depends not only on its complexity but also on its memory access cost (MAC). We therefore expand the bi-objective search into a tri-objective strategy. We use the Opportunity dataset as the basis for most experiments and also evaluate the portability of the model on the UniMiB-SHAR dataset. The experimental results show that HARNAS, designed without manual adjustment, can outperform the best human-tweaked model: it obtains an F1 score of 92.16% with only 0.32 MB of parameters on the Opportunity dataset.
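As a rough illustration of the tri-objective selection described above, the sketch below implements the Pareto-dominance test at the core of NSGA-II for the three objectives (F1 maximized; FLOPs and MAC minimized). It is not the authors' code, and the candidate scores are hypothetical.

```python
# Minimal sketch of NSGA-II's non-dominated comparison for the three
# HARNAS objectives. Candidate tuples are (f1, flops, mac); all values
# below are hypothetical stand-ins, not results from the paper.

def dominates(a, b):
    """True if architecture a Pareto-dominates b.
    F1 (index 0) is maximized; FLOPs and MAC (indices 1, 2) are minimized."""
    no_worse = a[0] >= b[0] and a[1] <= b[1] and a[2] <= b[2]
    strictly_better = a[0] > b[0] or a[1] < b[1] or a[2] < b[2]
    return no_worse and strictly_better

def pareto_front(candidates):
    """Return the non-dominated set, i.e. NSGA-II's first front."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o is not c)]

# (F1, MFLOPs, MAC in MB) for three hypothetical architectures
archs = [(0.921, 12.4, 0.32), (0.905, 6.1, 0.21), (0.900, 14.0, 0.40)]
print(pareto_front(archs))  # the third entry is dominated by the first
```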


2021, Vol 16 (3), pp. 67-78
Author(s): Yu Xue, Yankang Wang, Jiayu Liang, Adam Slowik

2020, Vol 34 (10), pp. 13783-13784
Author(s): Deanna Flynn, P. Michael Furlong, Brian Coltin

Our neural architecture search algorithm progressively searches a tree of neural network architectures. Child nodes are created by inserting new layers, chosen from a transition graph, into a parent network up to a maximum depth; a child is pruned when its performance is worse than its parent's. Pruning increases efficiency but makes the algorithm greedy. Simpler networks are found before more complex ones, and the resulting architectures achieve benchmark performance similar to other top-performing networks.
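The sketch below illustrates the greedy tree expansion described above. The transition graph, layer vocabulary, and the evaluate() proxy score are hypothetical stand-ins; the real system would train and score actual networks at each node.

```python
# Hypothetical sketch of greedy tree-structured NAS: children are built
# by appending layers allowed by a transition graph, and a branch is
# pruned as soon as a child scores worse than its parent.

import random

TRANSITIONS = {            # which layer types may follow each layer type
    "input": ["conv", "pool"],
    "conv":  ["conv", "pool", "dense"],
    "pool":  ["conv", "dense"],
    "dense": ["dense"],
}
MAX_DEPTH = 4

def evaluate(arch):
    """Hypothetical proxy score; a real search trains the network here."""
    random.seed(hash(tuple(arch)) % (2 ** 32))
    return random.random()

def expand(arch, parent_score, best):
    """Depth-first expansion with parent-relative pruning (the greedy step)."""
    if len(arch) >= MAX_DEPTH:
        return best
    for layer in TRANSITIONS[arch[-1]]:
        child = arch + [layer]
        score = evaluate(child)
        if score < parent_score:   # prune: child is worse than its parent
            continue
        if score > best[1]:
            best = (child, score)
        best = expand(child, score, best)
    return best

root = ["input"]
root_score = evaluate(root)
best_arch, best_score = expand(root, root_score, (root, root_score))
print(best_arch, round(best_score, 3))
```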


Author(s): Séamus Lankford, Diarmuid Grimes

The training and optimization of neural networks using pre-trained, super-learner, and ensemble approaches is explored. Neural networks, and in particular Convolutional Neural Networks (CNNs), are often optimized using default parameters. Neural Architecture Search (NAS) enables multiple architectures to be evaluated before the optimal architecture is selected. Our contribution is to develop, and make available to the community, a system that integrates open-source tools for the neural architecture search (OpenNAS) of image classification models. OpenNAS takes any dataset of grayscale or RGB images and generates an optimized CNN architecture. Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), and pre-trained models serve as base learners for ensembles. Meta-learner algorithms are subsequently applied to these base learners, and the ensemble performance on image classification problems is evaluated. Our results show that a stacked generalization ensemble of heterogeneous models is the most effective approach to image classification within OpenNAS.
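As a rough illustration of the stacked-generalization idea described above, the sketch below uses scikit-learn rather than the OpenNAS codebase: two heterogeneous base learners stand in for the PSO/ACO-searched CNNs and pre-trained models, and a logistic-regression meta learner combines their out-of-fold predictions.

```python
# Minimal stacking sketch with scikit-learn; the base estimators are
# generic stand-ins for OpenNAS's searched CNNs and pre-trained models.

from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[                      # heterogeneous base learners
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),  # meta learner
    cv=5,   # out-of-fold predictions prevent meta-learner overfitting
)
stack.fit(X_tr, y_tr)
print("test accuracy:", stack.score(X_te, y_te))
```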

