Design of optical meta-structures with applications to beam engineering using deep learning

2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Robin Singh ◽  
Anu Agarwal ◽  
Brian W. Anthony

Nanophotonics is a rapidly emerging field in which complex on-chip components are required to manipulate light waves. The design space of on-chip nanophotonic components, such as an optical metasurface that uses sub-wavelength meta-atoms, is often high dimensional. As such, conventional optimization methods fail to capture the global optimum within the feasible search space. In this manuscript, we explore a Machine Learning (ML)-based method for the inverse design of meta-optical structures. We present a data-driven approach for modeling a grating meta-structure that performs photonic beam engineering. On-chip planar photonic waveguide-based beam engineering offers the potential to efficiently manipulate photons to create excitation beams (Gaussian, focused and collimated) for lab-on-chip applications of infrared, Raman and fluorescence spectroscopic analysis. Inverse modeling predicts metasurface design parameters based on a desired electromagnetic field outcome. Starting with the desired diffraction beam profile, we apply an inverse model to evaluate the optimal design parameters of the metasurface. Parameters such as the repetition period (along both in-plane axes) and the height and size of scatterers are calculated using feedforward deep neural network (DNN) and convolutional neural network (CNN) architectures. A qualitative analysis of the trained neural network, working in tandem with the forward model, predicts the diffraction profile with a correlation coefficient as high as 0.996. The developed model allows us to rapidly estimate the desired design parameters, in contrast to conventional, time-intensive optimization approaches (gradient-descent-based or genetic optimization).
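A minimal sketch of the feedforward (DNN) variant of such an inverse model, written in PyTorch; the profile length, number of design parameters, layer widths, and synthetic training data below are illustrative assumptions, not the authors' architecture or dataset.

```python
# Illustrative inverse model: maps a sampled diffraction profile to
# metasurface design parameters (period, scatterer height, scatterer size).
# Dimensions and training data are placeholders, not the paper's setup.
import torch
import torch.nn as nn

PROFILE_SAMPLES = 128   # assumed length of the target diffraction profile
N_PARAMS = 3            # period, scatterer height, scatterer size

inverse_model = nn.Sequential(
    nn.Linear(PROFILE_SAMPLES, 256), nn.ReLU(),
    nn.Linear(256, 64), nn.ReLU(),
    nn.Linear(64, N_PARAMS),        # predicted (normalized) design parameters
)

optimizer = torch.optim.Adam(inverse_model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Synthetic stand-in for (profile, parameter) pairs; in practice these would
# come from a forward electromagnetic solver run over candidate designs.
profiles = torch.rand(1024, PROFILE_SAMPLES)
params = torch.rand(1024, N_PARAMS)

for epoch in range(200):
    optimizer.zero_grad()
    pred = inverse_model(profiles)
    loss = loss_fn(pred, params)    # supervised regression on design parameters
    loss.backward()
    optimizer.step()
```

Once trained, the network evaluates candidate design parameters for a new target profile in a single forward pass, which is what makes the approach fast relative to iterative optimization.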

Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-18
Author(s):  
Feng Qian ◽  
Mohammad Reza Mahmoudi ◽  
Hamïd Parvïn ◽  
Kim-Hung Pho ◽  
Bui Anh Tuan

Conventional optimization methods are not efficient enough to solve many naturally complicated optimization problems. Thus, nature-inspired metaheuristic algorithms can be utilized as a new kind of problem solver for these types of optimization problems. In this paper, an optimization algorithm is proposed that is capable of learning the expected quality of different locations and of tuning its exploration-exploitation trade-off to the location of each individual. A novel particle swarm optimization algorithm is presented that implements classical conditioning, so that the particles learn a conditioned behavior towards an unconditioned stimulus. In the problem space, particles are classified into several categories: if a particle lies within a low-diversity category, it tends to move towards its best personal experience, but if its category has high diversity, it tends to move towards the global best of that category. The idea of birds' sensitivity to their flying space is also utilized to increase the particles' speed in undesired regions, so that they leave those regions as soon as possible; in desirable regions, the particles' velocity is reduced to give them more time to explore their environment. In the proposed algorithm, the birds' instinctive behavior is used to construct the initial population randomly or chaotically. Experiments comparing the proposed algorithm with state-of-the-art methods show that it is one of the most efficient and appropriate algorithms for solving static optimization problems.
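A minimal NumPy sketch of the category-dependent velocity update described above; the diversity measure (mean distance to the category centroid), the threshold, and the speed-scaling factors are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def update_velocity(x, v, pbest, category_best, category_members,
                    w=0.7, c1=1.5, c2=1.5, diversity_threshold=0.1,
                    undesired=False, speed_up=1.5, slow_down=0.6):
    """One particle's velocity update in the conditioned-learning PSO sketch.

    x, v, pbest, category_best : 1-D arrays (position, velocity, bests)
    category_members           : 2-D array of positions in the particle's category
    """
    r1, r2 = np.random.rand(), np.random.rand()
    # Diversity of the particle's category: mean distance to the category centroid.
    centroid = category_members.mean(axis=0)
    diversity = np.mean(np.linalg.norm(category_members - centroid, axis=1))

    if diversity < diversity_threshold:
        # Low-diversity category: pull mainly towards the personal best.
        v = w * v + c1 * r1 * (pbest - x)
    else:
        # High-diversity category: pull mainly towards the category's best.
        v = w * v + c2 * r2 * (category_best - x)

    # Sensitivity to the flying space: speed up in undesired regions,
    # slow down in promising ones to allow more local exploration.
    v *= speed_up if undesired else slow_down
    return v
```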


2015 ◽  
Vol 733 ◽  
pp. 898-901 ◽  
Author(s):  
Hong Li ◽  
Xue Ding

Optimization problems are frequently encountered in industrial design, and the key to optimization is finding the global optimum with a high convergence speed. This paper proposes a PSO algorithm based on a BP neural network: the network is trained to select each individual's personal best, so that the particle follows the optimal particle while searching the solution space and the global best is obtained. Simulation experiments on image segmentation show that the algorithm is suitable for handling multiple function types and constraints, converges quickly, and combines easily with traditional optimization methods, thereby mitigating its own limitations and solving problems more efficiently.
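The proposal builds on the canonical PSO update; a minimal baseline sketch is given below. The BP-network-driven selection of the personal best is not reproduced here and would replace the plain pbest bookkeeping, so this is the standard loop the paper extends, not the paper's method itself.

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=30, n_iter=100, w=0.7, c1=1.5, c2=1.5):
    """Canonical PSO loop (illustrative baseline only)."""
    lo, hi = bounds
    dim = lo.size
    x = lo + np.random.rand(n_particles, dim) * (hi - lo)
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()

    for _ in range(n_iter):
        r1 = np.random.rand(n_particles, dim)
        r2 = np.random.rand(n_particles, dim)
        # Velocity pulls towards each particle's personal best and the global best.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()
```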


2012 ◽  
Vol 2012 ◽  
pp. 1-7 ◽  
Author(s):  
Alireza Rowhanimanesh ◽  
Sohrab Efati

Evolutionary methods are well-known techniques for solving nonlinear constrained optimization problems. Due to the exploration power of evolution-based optimizers, the population usually converges to a region around the global optimum after several generations. Although this convergence can be efficiently used to reduce the search space, in most existing optimization methods the search still continues over the original space, and considerable time is wasted searching ineffective regions. This paper proposes a simple and general approach based on search space reduction to improve the exploitation power of existing evolutionary methods without adding any significant computational complexity. After a number of generations, when enough exploration has been performed, the search space is reduced to a small subspace around the best individual, and the search is then continued over this reduced space. If the space reduction parameters (red_gen and red_factor) are adjusted properly, the reduced space will include the global optimum. The proposed scheme can help existing evolutionary methods find better near-optimal solutions in a shorter time. To demonstrate the power of the new approach, it is applied to a set of benchmark constrained optimization problems and the results are compared with a previous work in the literature.
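A minimal sketch of the reduction step, reusing the parameter names red_gen and red_factor from the abstract; the surrounding evolutionary loop and the exact interpretation of red_factor (here, the fraction of each dimension's original range kept around the best individual) are assumptions.

```python
import numpy as np

def reduce_search_space(lower, upper, best, red_factor):
    """Shrink the box constraints to a subspace centered on the best individual.

    red_factor is interpreted here as the fraction of the original range
    retained in each dimension (an assumption, not the paper's exact rule).
    """
    half_width = red_factor * (upper - lower) / 2.0
    new_lower = np.maximum(lower, best - half_width)
    new_upper = np.minimum(upper, best + half_width)
    return new_lower, new_upper

# Usage inside a generic evolutionary loop (sketch):
# for gen in range(max_gen):
#     population = evolve(population, lower, upper)
#     if gen == red_gen:                      # after enough exploration
#         best = population[fitness(population).argmin()]
#         lower, upper = reduce_search_space(lower, upper, best, red_factor)
#         population = np.clip(population, lower, upper)
```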


Author(s):  
Liqun Wang ◽  
Songqing Shan ◽  
G. Gary Wang

The presence of black-box functions in engineering design, which are usually computation-intensive, demands efficient global optimization methods. This work proposes a new global optimization method for black-box functions. The method is based on a novel mode-pursuing sampling (MPS) scheme that systematically generates more sample points in the neighborhood of the function mode while statistically covering the entire search space. Quadratic regression is performed to detect the region containing the global optimum. The sampling and detection process iterates until the global optimum is obtained. Through intensive testing, this method is found to be effective, efficient, robust, and applicable to both continuous and discontinuous functions. It supports simultaneous computation and applies to both unconstrained and constrained optimization problems. Because it does not call any existing global optimization tool, it can also be used as a standalone global optimization method for inexpensive problems. Limitations of the method are also identified and discussed.
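A heavily simplified 1-D sketch of the two ingredients named above: sampling biased towards the function mode, and a local quadratic fit to locate the region of the optimum. The weighting scheme, sample counts, and convergence handling are assumptions; the actual MPS procedure is more elaborate.

```python
import numpy as np

def mps_step(f, lo, hi, n_candidates=200, n_keep=20, rng=np.random.default_rng()):
    """One simplified mode-pursuing iteration for a 1-D black-box function f."""
    # Candidate points over the whole space (statistical coverage).
    cand = rng.uniform(lo, hi, n_candidates)
    vals = np.array([f(c) for c in cand])

    # Bias further sampling towards the function mode: better (lower) values
    # receive larger sampling weights.
    weights = vals.max() - vals
    weights = weights / weights.sum() if weights.sum() > 0 else None
    samples = rng.choice(cand, size=n_keep, p=weights)
    sample_vals = np.array([f(s) for s in samples])

    # Quadratic regression over the sampled points to detect the optimum region.
    a, b, c = np.polyfit(samples, sample_vals, 2)
    x_star = -b / (2 * a) if a > 0 else samples[sample_vals.argmin()]
    return float(np.clip(x_star, lo, hi))

# est = mps_step(lambda x: (x - 1.3) ** 2 + 0.1 * np.sin(5 * x), -5.0, 5.0)
```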


2020 ◽  
Vol 34 (07) ◽  
pp. 10526-10533 ◽  
Author(s):  
Hanlin Chen ◽  
Li'an Zhuo ◽  
Baochang Zhang ◽  
Xiawu Zheng ◽  
Jianzhuang Liu ◽  
...  

Neural architecture search (NAS) can have a significant impact on computer vision by automatically designing optimal neural network architectures for various tasks. A variant, binarized neural architecture search (BNAS), with a search space of binarized convolutions, can produce extremely compressed models; unfortunately, this area remains largely unexplored. BNAS is more challenging than NAS due to the learning inefficiency caused by optimization requirements and the huge architecture space. To address these issues, we introduce channel sampling and operation space reduction into differentiable NAS to significantly reduce the cost of searching. This is accomplished through a performance-based strategy that abandons less promising operations. Two optimization methods for binarized neural networks are used to validate the effectiveness of our BNAS. Extensive experiments demonstrate that the proposed BNAS achieves performance comparable to NAS on both the CIFAR and ImageNet databases. An accuracy of 96.53% vs. 97.22% is achieved on the CIFAR-10 dataset, but with a significantly compressed model, and the search is 40% faster than the state-of-the-art PC-DARTS.
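An illustrative sketch of what a performance-based operation-abandoning step can look like in a differentiable search space; the operation names and the pruning criterion used here (dropping the operation with the smallest softmax architecture weight) are assumptions for illustration, not the paper's exact rule.

```python
import numpy as np

# Candidate operations on one edge of the search cell (illustrative names).
operations = ["conv_3x3_bin", "conv_5x5_bin", "max_pool_3x3", "skip_connect", "zero"]
# Architecture parameters (alphas) learned during the differentiable search.
alphas = np.array([1.2, -0.3, 0.4, 0.9, -1.1])

def abandon_weakest(operations, alphas, n_drop=1):
    """Drop the operations with the smallest softmax weights (performance proxy)."""
    weights = np.exp(alphas) / np.exp(alphas).sum()
    order = np.argsort(weights)            # ascending: weakest first
    keep = sorted(order[n_drop:])
    return [operations[i] for i in keep], alphas[keep]

operations, alphas = abandon_weakest(operations, alphas)
print(operations)  # the least promising operation has been removed from the space
```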


Author(s):  
Shenghao Jiang ◽  
Saeed Mashdoor ◽  
Hamid Parvin ◽  
Bui Anh Tuan ◽  
Kim-Hung Pho

Optimization is an important and decisive task in science. Many optimization problems in science are too complicated to be modeled and solved by conventional optimization methods such as mathematical programming solvers. Meta-heuristic algorithms inspired by nature have started a new era in computing theory for solving optimization problems. This paper seeks an optimization algorithm that gradually learns the expected quality of different places and adapts its exploration-exploitation trade-off to the location of each individual. Using birds' classical conditioning learning behavior, a new particle swarm optimization algorithm is introduced in which particles learn to perform a conditioned behavior towards an unconditioned stimulus. Particles are divided into multiple categories in the problem space; if a particle finds the diversity of its category to be low, it tries to move towards its best personal experience, but if the diversity among the particles of its category is high, it is inclined towards the global best of its category. We also use the idea of birds' sensitivity to the space in which they fly: particles move more quickly in unpromising regions so that they depart them as fast as possible, whereas in valuable regions the particles' speed is reduced to let them explore those places more. For the initial population, the algorithm uses the instinctive behavior of birds to provide a population based on the particles' merits. The proposed method has been implemented in MATLAB, with the population divided into several subpopulations, and has been compared to state-of-the-art methods. It is shown that the proposed method is a consistent algorithm for solving static optimization problems.
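A small sketch of one plausible reading of the merit-based initial population mentioned above: draw an oversampled candidate pool and keep only the fittest particles. The oversampling factor and selection rule are assumptions; the abstract does not specify the authors' initialization scheme.

```python
import numpy as np

def merit_based_population(f, lower, upper, n_particles, oversample=5,
                           rng=np.random.default_rng()):
    """Draw an oversampled candidate pool and keep the fittest particles."""
    dim = lower.size
    pool = lower + rng.random((n_particles * oversample, dim)) * (upper - lower)
    fitness = np.array([f(p) for p in pool])          # lower is better
    best_idx = np.argsort(fitness)[:n_particles]      # keep the most meritorious
    return pool[best_idx]

# pop = merit_based_population(lambda x: np.sum(x**2), np.full(5, -5.0),
#                              np.full(5, 5.0), n_particles=30)
```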


2014 ◽  
Vol 980 ◽  
pp. 198-202
Author(s):  
Pavel Raska ◽  
Ulrych Zdenek

The paper deals with testing optimization methods and the settings of their parameters used to search for the global optimum of specified objective functions. The objective functions were specified considering the objectives of discrete event simulation models. We defined the evaluation methods considering the success of finding the global optimum (or the best found objective function value) in the defined search space. We tested Random Search, Hill Climbing, Tabu Search, Local Search, Downhill Simplex, Simulated Annealing, Differential Evolution and Evolution Strategy. After the testing we proposed some slight modifications of the Downhill Simplex and Differential Evolution optimization methods.
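A minimal sketch of the kind of evaluation harness implied above: each optimizer is run repeatedly on each objective function and scored by how often it reaches the known global optimum within a tolerance. The tolerance, run count, bounds, and the Random Search baseline shown are illustrative choices, not the paper's exact protocol.

```python
import numpy as np

def success_rate(optimizer, objective, global_opt_value, runs=30, tol=1e-3):
    """Fraction of independent runs that reach the known global optimum value."""
    hits = 0
    for seed in range(runs):
        rng = np.random.default_rng(seed)
        best_value = optimizer(objective, rng)       # optimizer returns best f found
        if abs(best_value - global_opt_value) <= tol:
            hits += 1
    return hits / runs

def random_search(objective, rng, n_evals=2000, lo=-5.0, hi=5.0, dim=2):
    """Simplest baseline from the tested set: pure random search."""
    pts = lo + rng.random((n_evals, dim)) * (hi - lo)
    return min(objective(p) for p in pts)

sphere = lambda x: float(np.sum(x ** 2))             # global optimum value: 0.0
print(success_rate(random_search, sphere, global_opt_value=0.0))
```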


2020 ◽  
Vol 96 (3s) ◽  
pp. 585-588
Author(s):  
С.Е. Фролова ◽  
Е.С. Янакова

Methods are proposed for building prototyping platforms for high-performance systems-on-chip (SoC) for artificial intelligence tasks. The requirements for platforms of this class and the principles for modifying the SoC design for implementation in the prototype are described, as well as methods of debugging designs on the prototyping platform. Results are presented for computer vision algorithms using neural network technologies running on an FPGA prototype of the ELcore semantic cores.


2021 ◽  
pp. 2004101
Author(s):  
Marco Giacometti ◽  
Francesca Milesi ◽  
Pietro Lorenzo Coppadoro ◽  
Alberto Rizzo ◽  
Federico Fagiani ◽  
...  