Generative Adversarial Neural Architecture Search

Author(s):  
Seyed Saeed Changiz Rezaei ◽  
Fred X. Han ◽  
Di Niu ◽  
Mohammad Salameh ◽  
Keith Mills ◽  
...  

Despite the empirical success of neural architecture search (NAS) in deep learning applications, the optimality, reproducibility and cost of NAS schemes remain hard to assess. In this paper, we propose Generative Adversarial NAS (GA-NAS), with theoretically provable convergence guarantees, to promote stability and reproducibility in neural architecture search. Inspired by importance sampling, GA-NAS iteratively fits a generator to the previously discovered top architectures, thereby increasingly focusing on the important parts of a large search space. Furthermore, we propose an efficient adversarial learning approach in which the generator is trained by reinforcement learning from rewards provided by a discriminator, and can therefore explore the search space without evaluating a large number of architectures. Extensive experiments show that GA-NAS beats the best published results in several cases on three public NAS benchmarks. Moreover, GA-NAS can handle ad-hoc search constraints and search spaces. We show that GA-NAS can improve already optimized baselines found by other NAS methods, including EfficientNet and ProxylessNAS, in terms of ImageNet accuracy or the number of parameters, within their original search spaces.
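The iterative fit-to-the-top-k idea described above can be illustrated with a toy sketch. Everything below is an assumption made for illustration: architectures are reduced to bit vectors, the expensive evaluator is replaced by a trivial bit-count reward, and a cross-entropy-style distribution update stands in for the paper's RL-trained generator and discriminator.

```python
import random

def evaluate(arch):
    # Toy stand-in for an expensive architecture evaluation; the reward
    # is simply the number of enabled components (not the paper's benchmarks).
    return sum(arch)

def sample(probs):
    # Draw one architecture from the generator (independent Bernoulli bits).
    return [1 if random.random() < p else 0 for p in probs]

def ga_nas_toy(n_bits=12, pop=50, top_k=10, iters=20, lr=0.5):
    """Iteratively refit a simple generator to the top-k architectures
    found so far, concentrating sampling on promising regions."""
    probs = [0.5] * n_bits              # generator parameters
    history = []                        # all (reward, architecture) pairs seen
    for _ in range(iters):
        batch = [sample(probs) for _ in range(pop)]
        history.extend((evaluate(a), a) for a in batch)
        history.sort(key=lambda t: t[0], reverse=True)
        top = [a for _, a in history[:top_k]]
        # Refit the generator to the current top set (a cross-entropy-style
        # update standing in for the paper's adversarial RL training).
        for i in range(n_bits):
            freq = sum(a[i] for a in top) / len(top)
            probs[i] = (1 - lr) * probs[i] + lr * freq
    return history[0][0], probs

best, probs = ga_nas_toy()
```

The sampler quickly concentrates probability mass on high-reward bit patterns, which mirrors the paper's point that fitting the generator to discovered top architectures focuses the search without exhaustive evaluation.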

Author(s):  
Kalev Kask ◽  
Bobak Pezeshki ◽  
Filjor Broka ◽  
Alexander Ihler ◽  
Rina Dechter

Abstraction Sampling (AS) is a recently introduced enhancement of Importance Sampling that exploits stratification through abstractions: groupings of similar nodes into abstract states. AS was previously shown to perform particularly well when sampling over an AND/OR search space; however, existing schemes were limited to "proper" abstractions in order to ensure unbiasedness, severely hindering scalability. In this paper, we introduce AOAS, a new Abstraction Sampling scheme on AND/OR search spaces that allows more flexible use of abstractions by circumventing the properness requirement. We analyze the properties of this new algorithm and, in an extensive empirical evaluation on five benchmarks comprising over 480 problems, comparing against other state-of-the-art algorithms, illustrate AOAS's properties and show that it provides a far more powerful and competitive Abstraction Sampling framework.
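The underlying stratified importance-sampling idea can be sketched in miniature. The item set, the grouping key, and the uniform within-stratum proposal below are illustrative assumptions, not the paper's AND/OR abstractions:

```python
import random

def stratified_is_estimate(items, f, key, seed=None):
    """Estimate sum(f(x) for x in items) by grouping items into
    'abstract states' (strata) via key, sampling one representative
    per stratum, and reweighting by stratum size to stay unbiased."""
    rng = random.Random(seed)
    strata = {}
    for x in items:
        strata.setdefault(key(x), []).append(x)
    estimate = 0.0
    for group in strata.values():
        rep = rng.choice(group)            # uniform proposal within the stratum
        estimate += len(group) * f(rep)    # importance weight = |stratum|
    return estimate

# One estimate touches only 20 representatives instead of all 1000 items.
items = list(range(1000))
exact = sum(x * x for x in items)
est = stratified_is_estimate(items, lambda x: x * x, key=lambda x: x // 50, seed=0)
```

Because each stratum's contribution has expectation equal to its exact sum, the estimator is unbiased regardless of how the strata are chosen; a good grouping of similar nodes only reduces variance, which is the lever AS exploits.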


Author(s):  
Kumar Chandrasekaran ◽  
Prabaakaran Kandasamy ◽  
Srividhya Ramanathan

2021 ◽  
Vol 11 (23) ◽  
pp. 11436
Author(s):  
Ha Yoon Song

The current evolution of deep learning requires further optimization in terms of accuracy and time, and AutoML is an area that could provide solutions to these requirements. Neural architecture search (NAS) is one such AutoML subfield. DARTS is a widely used gradient-descent-based approach to NAS; however, it has some drawbacks. In this study, we attempt to overcome some of the drawbacks of DARTS by improving accuracy and decreasing the search cost. The DARTS algorithm uses a mixed operation that combines all operations in the search space. The architecture parameter of each operation comprising a mixed operation is trained by gradient descent, and the operation with the largest architecture parameter is selected. The use of a mixed operation causes a problem called vote dispersion: similar operations split architecture-parameter weight among themselves during gradient descent, so there are cases where the most important operation is disregarded. Vote dispersion in this selection process degrades DARTS performance. To cope with this problem, we propose a new DARTS-based algorithm called DG-DARTS, which introduces two search stages and applies clustering of operations. In summary, DG-DARTS achieves an error rate of 2.51% on the CIFAR-10 dataset, and its search cost is 0.2 GPU days because the search space of the second stage is reduced by half. The speed-up factor of DG-DARTS over DARTS is 6.82, which indicates that the search cost of DG-DARTS is only 13% that of DARTS.
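Vote dispersion is easy to see numerically. The operation names and architecture-parameter values below are invented for illustration and are not taken from the paper:

```python
import math

def softmax(alphas):
    # Standard numerically stable softmax over architecture parameters.
    m = max(alphas)
    exps = [math.exp(a - m) for a in alphas]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical candidate operations on one edge, with made-up alphas.
ops    = ["sep_conv_3x3", "sep_conv_5x5", "dil_conv_3x3", "skip_connect"]
alphas = [1.0, 0.9, 0.8, 1.2]

w = softmax(alphas)
# DARTS-style selection: pick the single operation with the largest weight.
winner = ops[max(range(len(ops)), key=lambda i: w[i])]

# Vote dispersion: the three similar convolutions jointly outweigh the
# winner, yet each alone loses the argmax to skip_connect. Clustering
# similar operations first (as in DG-DARTS) would flip this decision.
conv_mass = w[0] + w[1] + w[2]
```

Here `winner` is `skip_connect` even though the convolution family as a group carries the majority of the softmax mass, which is exactly the failure mode the abstract attributes to the mixed operation.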


Author(s):  
V. A. Knyaz ◽  
V. V. Kniaz ◽  
M. M. Novikov ◽  
R. M. Galeev

Abstract. The problem of reconstructing facial appearance (facial approximation) from a skull is important both for anthropology and archaeology and for forensics. Recent progress in optical 3D measurement has allowed manual facial reconstruction techniques to be replaced with computer-aided ones based on digital 3D skull models. The growing amount of data and the development of data-processing methods provide the background for a fully automated facial approximation technique. The present study addresses the problem of facial approximation from a digital 3D skull model using deep learning techniques. The 3D skull models used for appearance reconstruction are generated automatically by the original photogrammetric system and then serve as input to the face-appearance reconstruction algorithm. The paper presents a deep learning approach to skull-based facial approximation that exploits generative adversarial learning to translate data from one modality (skull) to another (face) using digital 3D skull models and 3D face models. A special dataset containing 3D skull models and 3D face models has been collected and adapted for convolutional neural network training and testing. Evaluation results on the test part of the dataset demonstrate the high potential of the developed approach to facial approximation.
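The abstract does not state the exact training objective; as a hedged sketch, a standard conditional adversarial objective for translating a skull model s into a face model f (in the style of conditional GANs) would be:

```latex
\min_G \max_D \;
  \mathbb{E}_{(s,f)}\!\left[\log D(s,f)\right]
  + \mathbb{E}_{s}\!\left[\log\!\left(1 - D\!\left(s, G(s)\right)\right)\right]
```

where G generates a candidate 3D face model from a 3D skull model and D scores whether a (skull, face) pair is real or generated; in practice such translation setups often add a reconstruction term (e.g., an L1 loss on face geometry), though the paper does not specify one.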

