The inverse design of structural color using machine learning

Nanoscale ◽  
2019 ◽  
Vol 11 (45) ◽  
pp. 21748-21758 ◽  
Author(s):  
Zhao Huang ◽  
Xin Liu ◽  
Jianfeng Zang

Using machine learning, the inverse design of color printing is achieved efficiently: for a desired color, a suitable geometry is found through reinforcement learning.

Author(s):  
Ivan Herreros

This chapter discusses basic concepts from control theory and machine learning to facilitate a formal understanding of animal learning and motor control. It first distinguishes between feedback and feed-forward control strategies, and then introduces the classification of machine learning applications into supervised, unsupervised, and reinforcement learning problems. Next, it links these concepts with their counterparts in the psychology of animal learning, highlighting the analogies between supervised learning and classical conditioning, between reinforcement learning and operant conditioning, and between unsupervised learning and perceptual learning. Additionally, it interprets innate and acquired actions from the standpoint of feedback versus anticipatory and adaptive control. Finally, it argues that this framework of translating knowledge between formal and biological disciplines can serve not only to structure and advance our understanding of brain function but also to enrich engineering solutions at the level of robot learning and control with insights from biology.


Photonics ◽  
2021 ◽  
Vol 8 (2) ◽  
pp. 33
Author(s):  
Lucas Lamata

Quantum machine learning has emerged as a promising paradigm that could accelerate machine learning calculations. Within this field, quantum reinforcement learning aims at designing and building quantum agents that can exchange information with their environment and adapt to it in pursuit of a goal. Different quantum platforms have been considered for quantum machine learning, and specifically for quantum reinforcement learning. Here, we review the field of quantum reinforcement learning and its implementation with quantum photonics. This quantum technology may enhance quantum computation and communication, as well as machine learning, via the fruitful marriage between these previously unrelated fields.


2021 ◽  
pp. 027836492098785
Author(s):  
Julian Ibarz ◽  
Jie Tan ◽  
Chelsea Finn ◽  
Mrinal Kalakrishnan ◽  
Peter Pastor ◽  
...  

Deep reinforcement learning (RL) has emerged as a promising approach for autonomously acquiring complex behaviors from low-level sensor observations. Although a large portion of deep RL research has focused on applications in video games and simulated control, which does not connect with the constraints of learning in real environments, deep RL has also demonstrated promise in enabling physical robots to learn complex skills in the real world. At the same time, real-world robotics provides an appealing domain for evaluating such algorithms, as it connects directly to how humans learn: as an embodied agent in the real world. Learning to perceive and move in the real world presents numerous challenges, some of which are easier to address than others, and some of which are often not considered in RL research that focuses only on simulated domains. In this review article, we present a number of case studies involving robotic deep RL. Building on these case studies, we discuss commonly perceived challenges in deep RL and how they have been addressed in these works. We also provide an overview of other outstanding challenges, many of which are unique to the real-world robotics setting and are not often the focus of mainstream RL research. Our goal is to provide a resource for both roboticists and machine learning researchers who are interested in furthering the progress of deep RL in the real world.


2021 ◽  
pp. 2002923
Author(s):  
Zhaocheng Liu ◽  
Dayu Zhu ◽  
Lakshmi Raju ◽  
Wenshan Cai

2010 ◽  
Author(s):  
Hyoki Kim ◽  
Jianping Ge ◽  
Junhoi Kim ◽  
Sung-Eun Choi ◽  
Hosuk Lee ◽  
...  

Nanophotonics ◽  
2020 ◽  
Vol 10 (1) ◽  
pp. 385-392
Author(s):  
Joeri Lenaerts ◽  
Hannah Pinson ◽  
Vincent Ginis

Abstract Machine learning offers the potential to revolutionize the inverse design of complex nanophotonic components. Here, we propose a novel variant of this formalism specifically suited for the design of resonant nanophotonic components. Typically, the first step of an inverse design process based on machine learning is training a neural network to approximate the non-linear mapping from a set of input parameters to a given optical system’s features. The second step starts from the desired features, e.g. a transmission spectrum, and propagates back through the trained network to find the optimal input parameters. For resonant systems, this second step corresponds to a gradient descent in a highly oscillatory loss landscape. As a result, the algorithm often converges into a local minimum. We significantly improve this method’s efficiency by adding the Fourier transform of the desired spectrum to the optimization procedure. We demonstrate our method by retrieving the optimal design parameters for desired transmission and reflection spectra of Fabry–Pérot resonators and Bragg reflectors, two canonical optical components whose functionality is based on wave interference. Our results can be extended to the optimization of more complex nanophotonic components interacting with structured incident fields.
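The two-step procedure this abstract describes — a differentiable forward model, then gradient descent from the desired spectrum back to the design parameters, with a Fourier term added to the loss — can be sketched in miniature. Everything below is illustrative, not the authors' implementation: an analytic Airy transmission formula stands in for the trained neural network, a finite-difference gradient stands in for backpropagation, and the cavity parameters are invented.

```python
import numpy as np

def transmission(length, wavelengths, n=1.5, finesse=10.0):
    # Airy transmission of an idealized lossless Fabry-Perot cavity,
    # standing in for a trained forward neural network.
    delta = 2 * np.pi * n * length / wavelengths
    return 1.0 / (1.0 + finesse * np.sin(delta) ** 2)

def loss(length, target, wavelengths, alpha=1.0):
    # Combined objective: direct spectral mismatch plus mismatch of the
    # (normalized) Fourier magnitudes of the spectra, as the abstract suggests.
    spec = transmission(length, wavelengths)
    direct = np.mean((spec - target) ** 2)
    fourier = np.mean(
        (np.abs(np.fft.rfft(spec)) / spec.size
         - np.abs(np.fft.rfft(target)) / target.size) ** 2
    )
    return direct + alpha * fourier

def optimize(target, wavelengths, init, lr=5e-4, steps=2000, eps=1e-6):
    # Plain gradient descent on the cavity length; central finite
    # differences replace backpropagation in this toy setting.
    length = init
    for _ in range(steps):
        grad = (loss(length + eps, target, wavelengths)
                - loss(length - eps, target, wavelengths)) / (2 * eps)
        length -= lr * grad
    return length
```

Starting close to the true design keeps the descent inside the correct basin of the oscillatory landscape; the Fourier term sharpens the objective there rather than removing other minima entirely.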


2018 ◽  
Author(s):  
Jatin Kumar ◽  
Qianxiao Li ◽  
Karen Y.T. Tang ◽  
Tonio Buonassisi ◽  
Anibal L. Gonzalez-Oyarce ◽  
...  

Inverse design is an outstanding challenge in disordered systems with multiple length scales, such as polymers, particularly when designing polymers with desired phase behavior. We demonstrate high-accuracy tuning of poly(2-oxazoline) cloud point via machine learning. With a design space of four repeating units and a range of molecular masses, we achieve an accuracy of 4°C root mean squared error (RMSE) in a temperature range of 24–90°C, employing gradient boosting with decision trees. The RMSE is >3× better than linear and polynomial regression. We perform inverse design via particle-swarm optimization, predicting and synthesizing 17 polymers with constrained design at 4 target cloud points from 37 to 80°C. Our approach challenges the status quo in polymer design with a machine learning algorithm capable of fast and systematic discovery of new polymers.
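The inverse-design step here — search the composition space so that a trained surrogate's predicted cloud point hits a target temperature — can be sketched with a minimal particle-swarm optimizer. The linear-mixture surrogate below is a hypothetical stand-in for the paper's trained gradient-boosting model; its coefficients, and all swarm hyperparameters, are invented for illustration.

```python
import numpy as np

def surrogate_cloud_point(x):
    # Hypothetical stand-in for a trained gradient-boosting model: maps a
    # 4-component composition vector to a predicted cloud point in deg C
    # as a composition-weighted mean of per-monomer contributions.
    w = np.array([30.0, 55.0, 70.0, 85.0])
    return float(np.dot(x, w) / np.sum(x))

def pso_inverse(target, n_particles=30, dims=4, iters=200, seed=0):
    # Particle-swarm search for a composition whose predicted cloud point
    # matches the target; the objective is the squared temperature error.
    rng = np.random.default_rng(seed)
    pos = rng.random((n_particles, dims))
    vel = np.zeros_like(pos)

    def cost(p):
        return (surrogate_cloud_point(p) - target) ** 2

    pbest = pos.copy()
    pbest_cost = np.array([cost(p) for p in pos])
    gbest = pbest[np.argmin(pbest_cost)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dims))
        # Standard PSO velocity update: inertia + cognitive + social terms.
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 1e-6, 1.0)  # keep fractions positive
        costs = np.array([cost(p) for p in pos])
        improved = costs < pbest_cost
        pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
        gbest = pbest[np.argmin(pbest_cost)].copy()
    return gbest
```

Because PSO only queries the surrogate, the same loop works unchanged whether the forward model is this toy formula or an actual gradient-boosted ensemble.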


2020 ◽  
Vol 28 (15) ◽  
pp. 21668
Author(s):  
Zhiqin He ◽  
Jiangbing Du ◽  
Xinyi Chen ◽  
Weihong Shen ◽  
Yuting Huang ◽  
...  

Nanophotonics ◽  
2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Sean Hooten ◽  
Raymond G. Beausoleil ◽  
Thomas Van Vaerenbergh

Abstract We present a proof-of-concept technique for the inverse design of electromagnetic devices motivated by the policy gradient method in reinforcement learning, named PHORCED (PHotonic Optimization using REINFORCE Criteria for Enhanced Design). This technique uses a probabilistic generative neural network interfaced with an electromagnetic solver to assist in the design of photonic devices, such as grating couplers. We show that PHORCED obtains better performing grating coupler designs than local gradient-based inverse design via the adjoint method, while potentially providing faster convergence over competing state-of-the-art generative methods. As a further example of the benefits of this method, we implement transfer learning with PHORCED, demonstrating that a neural network trained to optimize 8° grating couplers can then be re-trained on grating couplers with alternate scattering angles while requiring >10× fewer simulations than control cases.
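The REINFORCE criterion that PHORCED builds on can be illustrated with a one-parameter toy: a Gaussian policy proposes designs, a black-box "solver" scores them, and the policy mean ascends the score-function gradient estimate. This is a sketch of the generic policy-gradient idea only — the Gaussian reward, its peak location, and all hyperparameters are invented, and the real method uses a generative neural network coupled to an electromagnetic solver.

```python
import numpy as np

def simulator_efficiency(width):
    # Toy stand-in for an electromagnetic solver: coupling efficiency of a
    # hypothetical grating coupler, peaking at width 0.65 (arbitrary units).
    return np.exp(-((width - 0.65) / 0.3) ** 2)

def reinforce_design(iters=300, lr=0.02, sigma=0.05, batch=32, seed=0):
    # Gaussian policy over a single design parameter; REINFORCE weights the
    # score function by baseline-subtracted rewards and ascends the mean.
    rng = np.random.default_rng(seed)
    mu = 0.2  # deliberately poor initial design
    for _ in range(iters):
        samples = mu + sigma * rng.standard_normal(batch)
        rewards = simulator_efficiency(samples)
        baseline = rewards.mean()  # simple variance-reduction baseline
        # d/d_mu log N(x; mu, sigma^2) = (x - mu) / sigma^2
        grad = np.mean((rewards - baseline) * (samples - mu)) / sigma ** 2
        mu += lr * grad  # gradient ascent on expected efficiency
    return mu
```

Note that the solver is treated as a black box: only sampled rewards are needed, no adjoint gradients, which is what lets this style of optimization wrap around non-differentiable simulators.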


Author(s):  
Ali Fakhry

The applications of Deep Q-Networks are seen throughout the field of reinforcement learning, a large subset of machine learning. Using CarRacing-v0, a classic 2D car-racing environment from OpenAI, alongside a custom modification of that environment, a Deep Q-Network (DQN) was created to solve both the classic and custom environments. The environments are tested using custom CNN architectures and by applying transfer learning from ResNet18. While DQNs were state of the art years ago, using one for CarRacing-v0 appears somewhat unappealing and less effective than other reinforcement learning techniques. Overall, while the model did train and the agent learned various parts of the environment, reaching the reward threshold of the environment with this technique proved problematic and difficult, and other techniques would be more useful.
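Independent of architecture, every DQN variant shares the same core update: regress the online network toward one-step Bellman targets computed from a target network, with epsilon-greedy exploration. A minimal sketch of those two ingredients, assuming nothing about this paper's CNN or environment setup:

```python
import numpy as np

def dqn_td_targets(rewards, next_q, dones, gamma=0.99):
    # One-step Bellman targets used in the DQN regression loss:
    # y = r + gamma * max_a' Q_target(s', a'), truncated at episode ends.
    return rewards + gamma * (1.0 - dones) * next_q.max(axis=1)

def epsilon_greedy(q_values, epsilon, rng):
    # Explore with probability epsilon, otherwise act greedily on Q-values.
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))
```

In a full agent these targets are computed over minibatches drawn from a replay buffer, and the online network minimizes the squared (or Huber) error against them.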

