Neural Architecture
Recently Published Documents

TOTAL DOCUMENTS: 1073 (five years: 763)
H-INDEX: 45 (five years: 18)

2022 ◽ Vol 27 (4) ◽ pp. 692-708
Author(s): Babatounde Moctard Oloulade, Jianliang Gao, Jiamin Chen, Tengfei Lyu, Raeed Al-Sabri

2022 ◽ Vol 54 (9) ◽ pp. 1-37
Author(s): Lingxi Xie, Xin Chen, Kaifeng Bi, Longhui Wei, Yuhui Xu, ...

Neural architecture search (NAS) has attracted increasing attention. In recent years, individual search methods have been replaced by weight-sharing search methods for higher search efficiency, but the latter often suffer from lower stability. This article provides a literature review of these methods and attributes this issue to the optimization gap. From this perspective, we summarize existing approaches into several categories according to their efforts in bridging the gap, and we analyze the advantages and disadvantages of each. Finally, we share our opinions on the future directions of NAS and AutoML. Owing to the expertise of the authors, this article mainly focuses on the application of NAS to computer vision problems.
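The idea of weight sharing that the abstract contrasts with individual search can be illustrated with a deliberately tiny toy (a sketch of the general technique, not the survey's formulation): all candidate operations keep a single copy of their weights in one supernet, so every sampled sub-architecture is evaluated against the same weights instead of being trained from scratch.

```python
import random

# Hypothetical toy supernet: one shared weight table for all candidate
# operations. Evaluating a sampled sub-architecture reuses these weights,
# which is the source of weight-sharing efficiency -- and of the
# "optimization gap", since shared weights are not optimal for any
# single sub-network.

OPS = ["identity", "double", "negate"]            # candidate operations
shared_weights = {"double": 2.0, "negate": -1.0}  # one copy, shared by all sub-nets

def apply_op(op, x):
    """Apply one candidate operation using the shared weight table."""
    if op == "identity":
        return x
    return shared_weights[op] * x

def sample_architecture(n_edges, rng):
    """Pick one op per edge -- a sub-network of the supernet."""
    return [rng.choice(OPS) for _ in range(n_edges)]

def evaluate(arch, x):
    """Score a sampled architecture without any per-architecture training."""
    for op in arch:
        x = apply_op(op, x)
    return x

rng = random.Random(0)
archs = [sample_architecture(3, rng) for _ in range(4)]
scores = [evaluate(a, 1.0) for a in archs]
```

Individual search would instead train a fresh set of weights for each of the four candidates; here all four are ranked against the single shared table.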


Sensors ◽ 2022 ◽ Vol 22 (2) ◽ pp. 473
Author(s): Christoforos Nalmpantis, Nikolaos Virtsionis Gkalinikis, Dimitris Vrakas

Deploying energy disaggregation models in the real world is a challenging task. These models are usually deep neural networks, which can be costly to run on a server and prohibitive when the target device has limited resources. Deep learning models are typically computationally expensive and have large storage requirements. Reducing the computational cost and size of a neural network without trading off performance is not a trivial task. This paper proposes a novel neural architecture with fewer learnable parameters, a smaller size, and faster inference time, without sacrificing performance. The proposed architecture performs on par with two popular strong baseline models. Its key characteristic is the Fourier transformation, which has no learnable parameters and can be computed efficiently.
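The appeal of a Fourier front end can be sketched in a few lines (an assumption-laden illustration, not the paper's actual layer): mapping a window of the aggregate power signal to its magnitude spectrum costs O(n log n) via the FFT and involves no weights at all, so it adds nothing to the model's parameter count or storage footprint.

```python
import numpy as np

# Hedged sketch: a parameter-free "Fourier layer" for a signal window.
# np.fft.rfft computes the transform in O(n log n) with no learnable
# weights; the magnitudes can then feed a small downstream network.

def fourier_features(window: np.ndarray) -> np.ndarray:
    """Magnitude spectrum of a real-valued window; no learnable parameters."""
    spectrum = np.fft.rfft(window)   # real FFT: n samples -> n//2 + 1 bins
    return np.abs(spectrum)

# Synthetic aggregate signal: a pure tone completing 5 cycles in 256 samples.
signal = np.sin(2 * np.pi * 5 * np.arange(256) / 256)
feats = fourier_features(signal)
# The dominant magnitude lands in bin 5, the tone's frequency.
```

Because the transform is fixed, shrinking or quantizing the rest of the network never touches this stage, which is one plausible reading of why the architecture stays small without losing accuracy.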


Author(s): Xun Yang, Shanshan Wang, Jian Dong, Jianfeng Dong, Meng Wang, ...

Author(s): Zhenhou Hong, Jianzong Wang, Xiaoyang Qu, Chendong Zhao, Jie Liu, ...

Author(s): Ariel Keller Rorabaugh, Silvina Caino-Lores, Travis Johnston, Michela Taufer
