Consistent numerical methods for state and control constrained trajectory optimisation with parameter dependency

Author(s):  
Claire Walton ◽  
Isaac Kaminer ◽  
Qi Gong
Author(s):  
Philipp Hennig ◽  
Michael A. Osborne ◽  
Mark Girolami

We deliver a call to arms for probabilistic numerical methods: algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations. Such uncertainties, arising from the loss of precision induced by numerical calculation with limited time or hardware, are important for much contemporary science and industry. Within applications such as climate science and astrophysics, the need to make decisions on the basis of computations with large and complex data has led to a renewed focus on the management of numerical uncertainty. We describe how several seminal classic numerical methods can be interpreted naturally as probabilistic inference. We then show that the probabilistic view suggests new algorithms that can flexibly be adapted to suit application specifics, while delivering improved empirical performance. We provide concrete illustrations of the benefits of probabilistic numerical algorithms on real scientific problems from astrometry and astronomical imaging, while highlighting open problems with these new algorithms. Finally, we describe how probabilistic numerical methods provide a coherent framework for identifying the uncertainty in calculations performed with a combination of numerical algorithms (e.g. both numerical optimizers and differential equation solvers), potentially allowing the diagnosis (and control) of error sources in computations.
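One concrete member of this family is Bayesian quadrature, sketched below: placing a zero-mean Gaussian-process prior with an RBF kernel on the integrand induces a Gaussian posterior over the integral, so the routine returns an uncertainty alongside its estimate. The kernel, lengthscale and test function here are illustrative assumptions, not choices taken from the paper.

```python
import numpy as np
from scipy.special import erf

def rbf(a, b, ell):
    """RBF kernel k(x, x') = exp(-(x - x')^2 / (2 ell^2))."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

def kernel_mean(x, ell):
    """z_i = integral of k(x, x_i) over [0, 1] (closed form for the RBF kernel)."""
    c = np.sqrt(np.pi / 2) * ell
    return c * (erf((1 - x) / (np.sqrt(2) * ell)) + erf(x / (np.sqrt(2) * ell)))

def kernel_double_integral(ell):
    """C = double integral of k(x, x') over [0, 1]^2."""
    return 2 * (np.sqrt(np.pi / 2) * ell * erf(1 / (np.sqrt(2) * ell))
                - ell**2 * (1 - np.exp(-0.5 / ell**2)))

def bayesian_quadrature(f, x, ell=0.2, jitter=1e-10):
    """Posterior mean and standard deviation of Z = integral of f over [0, 1]."""
    y = f(x)
    K = rbf(x, x, ell) + jitter * np.eye(len(x))   # Gram matrix at the nodes
    z = kernel_mean(x, ell)
    mean = z @ np.linalg.solve(K, y)               # z^T K^{-1} y
    var = kernel_double_integral(ell) - z @ np.linalg.solve(K, z)
    return mean, np.sqrt(max(var, 0.0))

f = lambda t: np.sin(3 * t) + t**2                 # true integral: (1 - cos 3)/3 + 1/3
mu, sigma = bayesian_quadrature(f, np.linspace(0, 1, 8))
print(f"estimate {mu:.6f} +/- {sigma:.1e}, truth {(1 - np.cos(3)) / 3 + 1/3:.6f}")
```

The returned standard deviation is exactly the kind of numerical uncertainty the abstract argues should be tracked and propagated through downstream computations.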


2004 ◽  
Vol 37 (9) ◽  
pp. 895-900 ◽  
Author(s):  
John Bagterp Jørgensen ◽  
James B. Rawlings ◽  
Sten Bay Jørgensen

Author(s):  
Samuel E. Otto ◽  
Clarence W. Rowley

A common way to represent a system's dynamics is to specify how the state evolves in time. An alternative viewpoint is to specify how functions of the state evolve in time. This evolution of functions is governed by a linear operator called the Koopman operator, whose spectral properties reveal intrinsic features of a system. For instance, its eigenfunctions determine coordinates in which the dynamics evolve linearly. This review discusses the theoretical foundations of Koopman operator methods, as well as numerical methods developed over the past two decades to approximate the Koopman operator from data, for systems both with and without actuation. We pay special attention to ergodic systems, for which especially effective numerical methods are available. For nonlinear systems with an affine control input, the Koopman formalism leads naturally to systems that are bilinear in the state and the input, and this structure can be leveraged for the design of controllers and estimators.
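As a minimal sketch of the data-driven approximation surveyed here, extended dynamic mode decomposition (EDMD) lifts snapshot pairs through a dictionary of observables and fits a finite-dimensional Koopman matrix by least squares; left eigenvectors of that matrix give approximate Koopman eigenfunctions. The monomial dictionary and the toy map below are assumed choices for illustration only.

```python
import numpy as np

def psi(x):
    """Dictionary of observables: monomials up to degree 2 in a scalar state."""
    return np.stack([np.ones_like(x), x, x**2])

# toy nonlinear map x_{k+1} = 0.9 x_k (1 - 0.1 x_k); generate one trajectory
x = np.empty(60)
x[0] = 0.5
for k in range(59):
    x[k + 1] = 0.9 * x[k] * (1 - 0.1 * x[k])

# EDMD: fit K so that psi(x_{k+1}) ~= K psi(x_k) in the least-squares sense
Psi_x, Psi_y = psi(x[:-1]), psi(x[1:])
K = Psi_y @ np.linalg.pinv(Psi_x)

# left eigenvectors xi of K (i.e. eigenvectors of K^T) give approximate
# eigenfunctions phi(x) = xi^T psi(x); the eigenvalues govern their evolution
evals, evecs = np.linalg.eig(K.T)
print("approximate Koopman eigenvalues:", np.round(np.sort(evals.real)[::-1], 3))
```

In the coordinates phi the dynamics evolve (approximately) linearly, which is the property the review exploits for estimator and controller design.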


2018 ◽  
Author(s):  
Shuilian Xie ◽  
Ulisses M. Braga-Neto

Abstract

Motivation: Precision and recall have become very popular classification accuracy metrics in the statistical learning literature. These metrics are ordinarily defined under the assumption that the data are sampled randomly from the mixture of the populations. However, observational case-control studies for biomarker discovery often collect data that are sampled separately from the case and control populations, particularly in the case of rare diseases. This discrepancy may introduce severe bias in classifier accuracy estimation.

Results: We demonstrate, using both analytical and numerical methods, that classifier precision estimates can display strong bias under separate sampling, with the bias magnitude depending on the difference between the case prevalences in the data and in the actual population. We show that this bias is systematic in the sense that it cannot be reduced by increasing sample size. If information about the true case prevalence is available from public health records, we propose a modified precision estimator that displays smaller bias, which can in fact be reduced to zero as sample size increases under regularity conditions on the classification algorithm. The accuracy of the theoretical analysis and the performance of the proposed precision estimator under separate sampling are investigated using synthetic and real data from observational case-control studies. The results confirm that the proposed precision estimator indeed becomes unbiased as sample size increases, while the ordinary precision estimator may display large bias, particularly in the case of rare diseases.

Availability: Extra plots are available as Supplementary Materials.

Author summary: Biomedical data are often sampled separately from the case and control populations, particularly in the case of rare diseases. Precision is a popular classification accuracy metric in the statistical learning literature, which implicitly assumes that the data are sampled randomly from the mixture of the populations. In this paper we study the bias of precision under separate sampling using theoretical and numerical methods. We also propose a precision estimator for separate sampling for the case when the prevalence is known from public health records. The results confirm that the proposed precision estimator becomes unbiased as sample size increases, while the ordinary precision estimator may display large bias, particularly in the case of rare diseases. In the absence of any knowledge about disease prevalence, precision estimates should be avoided under separate sampling.
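A minimal sketch of the prevalence-correction idea described above (the toy data and variable names are hypothetical, and the paper's estimator may differ in detail): under separate sampling, sensitivity and the false-positive rate can still be estimated from the case and control samples, and plugging in a known population prevalence gives a precision estimate that does not inherit the artificial 50/50 case fraction of the data.

```python
import numpy as np

def naive_precision(y_true, y_pred):
    """Ordinary precision TP / (TP + FP); implicitly trusts the data's case fraction."""
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    return tp / (tp + fp)

def corrected_precision(y_true, y_pred, prevalence):
    """Precision reweighted by the true population prevalence."""
    tpr = np.mean(y_pred[y_true == 1] == 1)   # sensitivity, from the case sample
    fpr = np.mean(y_pred[y_true == 0] == 1)   # false-positive rate, from controls
    return prevalence * tpr / (prevalence * tpr + (1 - prevalence) * fpr)

rng = np.random.default_rng(0)
n = 500                                        # separate sampling: 500 cases, 500 controls
y_true = np.r_[np.ones(n, int), np.zeros(n, int)]
scores = np.r_[rng.normal(1.5, 1, n), rng.normal(0.0, 1, n)]
y_pred = (scores > 0.75).astype(int)

print("naive precision:         ", round(naive_precision(y_true, y_pred), 3))
print("corrected, prevalence 1%:", round(corrected_precision(y_true, y_pred, 0.01), 3))
```

For a rare disease (prevalence 1%), the corrected estimate falls far below the naive one, which is precisely the bias the paper quantifies.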


10.5772/21152 ◽  
2011 ◽  
Author(s):  
Javier de ◽  
Daniel Rodriguez ◽  
Leo Gonzalez ◽  
Vassilis Theofilis
