Locating the local minima in lens design with machine learning

Author(s):  
Anna V. Kononova ◽  
Ofer M. Shir ◽  
Teus Tukker ◽  
Pierluigi Frisco ◽  
Shutong Zeng ◽  
...  
1991 ◽  
Author(s):  
Michael J. Kidger ◽  
Paul T. Leamy

2009 ◽  
Vol 17 (8) ◽  
pp. 6436 ◽  
Author(s):  
Maarten van Turnhout ◽  
Florian Bociort

1981 ◽  
Vol 20 (3) ◽  
pp. 384 ◽  
Author(s):  
Berlyn Brixner

Entropy ◽  
2019 ◽  
Vol 21 (9) ◽  
pp. 881 ◽  
Author(s):  
Pierre Baudot

Previous works established that entropy is characterized uniquely as the first cohomology class in a topos and described some of its applications to the unsupervised classification of gene expression modules or cell types. These studies raised important questions regarding the statistical meaning of the resulting cohomology of information and its interpretation or consequences with respect to usual data analysis and statistical physics. This paper aims to present the computational methods of information cohomology and to propose its interpretations in terms of statistical physics and machine learning. In order to further underline the cohomological nature of information functions and chain rules, the computation of the cohomology in low degrees is detailed to show more directly that the k-multivariate mutual informations (I_k) are (k-1)-coboundaries. The (k-1)-cocycle condition corresponds to I_k = 0, which generalizes statistical independence to arbitrary degree k. Hence, the cohomology can be interpreted as quantifying the statistical dependences and the obstruction to factorization. I develop the computationally tractable subcase of simplicial information cohomology, represented by entropy H_k and information I_k landscapes and their respective paths, allowing investigation of Shannon's information in the multivariate case without the assumptions of independence or of identically distributed variables. I give an interpretation of this cohomology in terms of phase transitions in a model of k-body interactions, holding both for statistical physics without mean-field approximations and for data points. The I_1 components define a self-internal energy functional U_k, and the (-1)^k I_k, k >= 2, components define the contribution of the k-body interactions to a free energy functional G_k (the total correlation).
A basic mean field model is developed and computed on genetic data reproducing usual free energy landscapes with phase transition, sustaining the analogy of clustering with condensation. The set of information paths in simplicial structures is in bijection with the symmetric group and random processes, providing a trivial topological expression of the second law of thermodynamics. The local minima of free energy, related to conditional information negativity and conditional independence, characterize a minimum free energy complex. This complex formalizes the minimum free-energy principle in topology, provides a definition of a complex system and characterizes a multiplicity of local minima that quantifies the diversity observed in biology. I give an interpretation of this complex in terms of unsupervised deep learning where the neural network architecture is given by the chain complex and conclude by discussing future supervised applications.
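As a concrete illustration (not taken from the paper), the k-multivariate mutual information discussed above can be computed directly from marginal entropies via the alternating inclusion-exclusion sum I_k = sum over non-empty subsets S of (-1)^(|S|+1) H(X_S). A minimal Python sketch, assuming the joint distribution is given as a dict mapping outcome tuples to probabilities:

```python
from itertools import combinations
from math import log2

def entropy(joint, axes):
    """Shannon entropy H(X_S) of the marginal over the variable subset `axes`."""
    marg = {}
    for outcome, p in joint.items():
        key = tuple(outcome[i] for i in axes)
        marg[key] = marg.get(key, 0.0) + p
    return -sum(p * log2(p) for p in marg.values() if p > 0)

def mutual_information_k(joint, n):
    """I_k over all n variables: alternating sum of subset entropies."""
    total = 0.0
    for size in range(1, n + 1):
        for S in combinations(range(n), size):
            total += (-1) ** (size + 1) * entropy(joint, S)
    return total

# Two perfectly correlated bits: I_2 = 1 bit.
joint2 = {(0, 0): 0.5, (1, 1): 0.5}
print(mutual_information_k(joint2, 2))  # 1.0

# Two independent fair bits: I_2 = 0, the cocycle condition above.
indep = {(a, b): 0.25 for a in (0, 1) for b in (0, 1)}
print(mutual_information_k(indep, 2))  # 0.0
```

For k = 2 this reduces to the familiar H(X) + H(Y) - H(X, Y), and the vanishing of I_k for independent variables is exactly the generalized-independence condition stated in the abstract.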


Entropy ◽  
2021 ◽  
Vol 23 (10) ◽  
pp. 1272
Author(s):  
Konstantin Barkalov ◽  
Ilya Lebedev ◽  
Evgeny Kozinov

This paper studies global optimization problems and numerical methods for their solution. Such problems are computationally expensive since the objective function can be multi-extremal, nondifferentiable, and, as a rule, given in the form of a "black box". This study used a deterministic algorithm for finding the global extremum, based neither on the multistart concept nor on nature-inspired heuristics. The article provides the computational rules of the one-dimensional algorithm and the nested optimization scheme that can be applied to multidimensional problems. Note that the complexity of solving global optimization problems depends essentially on the presence of multiple local extrema. In this paper, we apply machine learning methods to identify the regions of attraction of local minima. Using local optimization algorithms in the selected regions can significantly accelerate the convergence of the global search, since it reduces the number of search trials in the vicinity of local minima. The results of computational experiments on several hundred global optimization problems of different dimensionalities, presented in the paper, confirm the effect of accelerated convergence (in terms of the number of search trials required to solve a problem with a given accuracy).
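The idea of learning regions of attraction can be sketched as follows. Everything here (the test function, the crude descent routine, the 1-nearest-neighbour classifier) is an illustrative stand-in, not the authors' actual algorithm: sampled points are labelled by the basin their local descent converges to, and a classifier then predicts the basin of a new trial point so that redundant local trials can be skipped.

```python
import random

def f(x):
    # Multi-extremal 1-D test function with two local minima (illustrative).
    return x**4 - 3 * x**2 + 0.5 * x

def local_descent(x, step=1e-3, iters=20000):
    """Crude gradient descent; returns the local minimum reached from x."""
    for _ in range(iters):
        g = (f(x + 1e-6) - f(x - 1e-6)) / 2e-6  # central-difference gradient
        x -= step * g
    return x

random.seed(0)
# Label a small training sample by the basin its descent converges to:
# basin 0 for the minimum at negative x, basin 1 for the one at positive x.
train = [random.uniform(-2.5, 2.5) for _ in range(40)]
labels = [0 if local_descent(x) < 0 else 1 for x in train]

def predict_basin(x):
    """1-nearest-neighbour stand-in for the paper's ML classifier."""
    return labels[min(range(len(train)), key=lambda i: abs(train[i] - x))]
```

A global search driver could then consult `predict_basin` and launch a full local optimization only once per predicted basin, which is the source of the reported reduction in search trials.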


2019 ◽  
Vol 73 (12) ◽  
pp. 990-996 ◽  
Author(s):  
Shungo Koichi ◽  
Hans P. Lüthi

In the context of predicting the (in)stability of chemical compounds using machine learning tools, we are often confronted with a basic issue: whereas much information is available on stable (existing) compounds, little is known about compounds that might well exist but have not yet been successfully synthesized, or compounds that are inherently unstable (kinetically and thermodynamically). In the search for Togni-type reagents, many of which are kinetically unstable, the stability of the prospects can be assessed from the transition state for the conversion to their non-hypervalent, inactive isomer. In earlier work, we determined the barriers of conversion for over one hundred reagents, still not enough information to train a tool such as a support vector machine. Here, instead, we focus on the early intermediate structures expressed along the isomerization pathway, i.e., transition-state searches are replaced by finding (local) minima. Based on an array of 382 Togni-type reagents whose behaviour was known in advance, we show that it is possible to have the machine predict the intermediate form expressed. The approach introduced here can be used to make predictions on the stability, and possibly also the reactivity, of Togni-type reagents in general.
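The kind of classification task described, predicting which of two intermediate forms a reagent expresses from its descriptors, can be sketched with a simple linear classifier (a perceptron stand-in for the support vector machine mentioned above). The 2-D descriptor vectors and labels below are invented for illustration; none of them come from the paper:

```python
def train_perceptron(X, y, epochs=100, lr=0.1):
    """Train a linear classifier; y values are -1/+1 (intermediate form A or B)."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            s = sum(wj * xj for wj, xj in zip(w, xi)) + b
            if yi * s <= 0:  # misclassified: nudge the separating plane
                w = [wj + lr * yi * xj for wj, xj in zip(w, xi)]
                b += lr * yi
    return w, b

# Hypothetical 2-D descriptors (e.g., a steric and an electronic proxy).
X = [[0.2, 1.1], [0.4, 0.9], [1.3, 0.2], [1.1, 0.4]]
y = [-1, -1, 1, 1]
w, b = train_perceptron(X, y)
predict = lambda x: 1 if sum(wj * xj for wj, xj in zip(w, x)) + b > 0 else -1
```

With 382 labelled reagents, as in the paper, the same pipeline would simply use richer molecular descriptors and a kernelized classifier in place of this toy linear one.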


2015 ◽  
Vol 23 (5) ◽  
pp. 6679 ◽  
Author(s):  
Maarten van Turnhout ◽  
Pascal van Grol ◽  
Florian Bociort ◽  
H. Paul Urbach

2020 ◽  
Vol 43 ◽  
Author(s):  
Myrthe Faber

Abstract Gilead et al. state that abstraction supports mental travel, and that mental travel critically relies on abstraction. I propose an important addition to this theoretical framework, namely that mental travel might also support abstraction. Specifically, I argue that spontaneous mental travel (mind wandering), much like data augmentation in machine learning, provides variability in mental content and context necessary for abstraction.


Author(s):  
W.J. de Ruijter ◽  
P. Rez ◽  
David J. Smith

There is growing interest in the on-line use of computers in high-resolution electron microscopy, which should reduce the demands on highly skilled operators and thereby extend the range of the technique. An on-line computer could perform routine procedures now done by hand, or else facilitate the automation of various restoration, reconstruction and enhancement techniques. These techniques are slow and cumbersome at present because of the need for recording micrographs and for off-line processing. In low-resolution microscopy (most biological applications), the primary incentive for automation and computer image analysis is to create a routine instrument with standard programmed procedures. In HREM (materials research), computer image analysis should lead to better utilization of the microscope. Instrumental developments (improved lens design and higher accelerating voltages) have improved the interpretable resolution to the level of atomic dimensions (approximately 1.6 Å), and instrumental resolution at this level should become feasible in the near future.

