Author response for "Deep learning and reinforcement learning approach on microgrid"

Author(s): Kumar Chandrasekaran, Prabaakaran Kandasamy, Srividhya Ramanathan

Author(s): Seyed Saeed Changiz Rezaei, Fred X. Han, Di Niu, Mohammad Salameh, Keith Mills, ...

Despite the empirical success of neural architecture search (NAS) in deep learning applications, the optimality, reproducibility, and cost of NAS schemes remain hard to assess. In this paper, we propose Generative Adversarial NAS (GA-NAS), which has theoretically provable convergence guarantees and promotes stability and reproducibility in neural architecture search. Inspired by importance sampling, GA-NAS iteratively fits a generator to the top architectures discovered so far, thus increasingly focusing on important parts of a large search space. Furthermore, we propose an efficient adversarial learning approach in which the generator is trained by reinforcement learning using rewards provided by a discriminator, so that the search space can be explored without evaluating a large number of architectures. Extensive experiments show that GA-NAS outperforms the best published results in several cases on three public NAS benchmarks. Moreover, GA-NAS can handle ad hoc search constraints and search spaces. We show that GA-NAS can improve already optimized baselines found by other NAS methods, including EfficientNet and ProxylessNAS, in terms of ImageNet accuracy or parameter count, within their original search spaces.
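To make the described loop concrete, below is a minimal, self-contained sketch of the kind of generator/discriminator interplay the abstract outlines: the generator samples architectures, a running top-k set plays the role of the "true" distribution, the discriminator learns to tell the two apart, and the generator is updated by REINFORCE with the discriminator's score as the reward. Everything here is an illustrative assumption (the bit-string search space, `toy_eval`, network sizes, and hyperparameters), not the authors' implementation.

```python
import torch
import torch.nn as nn

D = 16  # architectures encoded as D-bit strings (toy assumption)

def toy_eval(arch):
    # Stand-in for the true, expensive architecture evaluator.
    return arch.float().mean().item()

generator = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, D))
discriminator = nn.Sequential(nn.Linear(D, 64), nn.ReLU(), nn.Linear(64, 1))
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

top_archs = []  # best (score, architecture) pairs found so far

for step in range(50):
    # 1. Sample a batch of candidate architectures from the generator.
    logits = generator(torch.ones(64, 1))
    probs = torch.sigmoid(logits)
    archs = torch.bernoulli(probs)  # non-differentiable sample

    # 2. Evaluate a few candidates and keep a running top-k set; fitting
    #    the generator to this set is the importance-sampling flavour of
    #    the method (it concentrates on good regions of the space).
    scored = sorted(((toy_eval(a), a) for a in archs), key=lambda t: -t[0])
    top_archs = sorted(top_archs + scored[:4], key=lambda t: -t[0])[:16]
    real = torch.stack([a for _, a in top_archs])

    # 3. Discriminator step: separate top architectures (label 1) from
    #    fresh generator samples (label 0).
    d_loss = (bce(discriminator(real), torch.ones(len(real), 1)) +
              bce(discriminator(archs.detach()), torch.zeros(len(archs), 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 4. Generator step: REINFORCE with the discriminator's score as the
    #    reward, so no additional architecture evaluations are needed here.
    with torch.no_grad():
        reward = torch.sigmoid(discriminator(archs))
    log_p = (archs * torch.log(probs + 1e-8) +
             (1 - archs) * torch.log(1 - probs + 1e-8)).sum(dim=1, keepdim=True)
    g_loss = -(reward * log_p).mean()
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

In this sketch the only calls to the expensive evaluator happen in step 2; the generator update in step 4 is driven entirely by the cheap discriminator reward, which is the efficiency argument the abstract makes.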

