Algorithmic Differentiation
Recently Published Documents


TOTAL DOCUMENTS: 175 (FIVE YEARS: 43)
H-INDEX: 15 (FIVE YEARS: 3)

Mathematics, 2022, Vol. 10 (1), pp. 126
Author(s): Andrey Tsyganov, Julia Tsyganova

The paper considers the problem of algorithmic differentiation of information matrix difference equations for calculating the information matrix derivatives in the information Kalman filter. The equations are presented in the form of a matrix MWGS (modified weighted Gram–Schmidt) transformation. The solution is based on special methods for the algorithmic differentiation of matrix MWGS transformations of two types: forward (MWGS-LD) and backward (MWGS-UD). The main result of the work is a new MWGS-based array algorithm for computing the information matrix sensitivity equations. The algorithm is robust to machine round-off errors thanks to the application of the MWGS orthogonalization procedure at each step. The results are applied to the problem of parameter identification for state-space models of discrete-time linear stochastic systems. Numerical experiments confirm the efficiency of the proposed solution.
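To make the sensitivity-propagation idea concrete, here is a minimal sketch of forward-mode algorithmic differentiation applied to an information-matrix difference equation, carrying (value, derivative) pairs through each matrix operation. It is not the paper's MWGS-based array algorithm; the recursion used here, the transition matrix F, the noise term Q, and the scalar parameter theta are illustrative assumptions.

```python
# Forward-mode AD of the recursion Y_{k+1} = (F Y_k^{-1} F^T + Q)^{-1},
# propagating dY/dtheta alongside Y (illustrative, not the MWGS algorithm).
import numpy as np

def dual_inv(A, dA):
    """Value and derivative of A^{-1}: d(A^{-1}) = -A^{-1} dA A^{-1}."""
    Ainv = np.linalg.inv(A)
    return Ainv, -Ainv @ dA @ Ainv

def dual_matmul(A, dA, B, dB):
    """Product rule for a matrix product."""
    return A @ B, dA @ B + A @ dB

def propagate_sensitivity(Y, dY, F, dF, Q):
    """One step of the recursion together with dY/dtheta (Q assumed independent of theta)."""
    Yinv, dYinv = dual_inv(Y, dY)
    P, dP = dual_matmul(*dual_matmul(F, dF, Yinv, dYinv), F.T, dF.T)
    return dual_inv(P + Q, dP)

# Toy example: a scalar parameter theta entering the transition matrix.
theta = 0.9
F = np.array([[theta, 1.0], [0.0, 1.0]])
dF = np.array([[1.0, 0.0], [0.0, 0.0]])  # dF/dtheta
Q = 0.1 * np.eye(2)
Y, dY = np.eye(2), np.zeros((2, 2))
for _ in range(10):
    Y, dY = propagate_sensitivity(Y, dY, F, dF, Q)
```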


Author(s): Ole Burghardt, Pedro Gomes, Tobias Kattmann, Thomas D. Economon, Nicolas R. Gauger, ...

Abstract. This article presents a methodology whereby adjoint solutions for partitioned multiphysics problems can be computed efficiently, in a way that is completely independent of the underlying physical sub-problems, the associated numerical solution methods, and the number and type of couplings between them. By applying the reverse mode of algorithmic differentiation to each discipline, and by using a specialized recording strategy, diagonal and cross terms can be evaluated individually, thereby allowing different solution methods for the generic coupled problem (for example, block-Jacobi or block-Gauss-Seidel). Based on an implementation in the open-source multiphysics simulation and design software SU2, we demonstrate how the same algorithm can be applied for shape sensitivity analysis on a heat exchanger (conjugate heat transfer), a deforming wing (fluid–structure interaction), and a cooled turbine blade where both effects are simultaneously taken into account.
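As an illustration of the coupled-adjoint idea described above, the following is a hedged sketch of a block-Gauss-Seidel fixed-point iteration in which every diagonal and cross term is a vector-Jacobian product that would, in practice, be supplied by reverse-mode AD of the recorded disciplines. The vjp_G1/vjp_G2 callables, the two-discipline layout, and the convergence settings are assumptions made for illustration, not the SU2 implementation.

```python
# Block-Gauss-Seidel iteration for a two-discipline coupled adjoint
# (illustrative sketch; the vjp callables stand in for reverse-mode AD).
import numpy as np

def coupled_adjoint_gauss_seidel(vjp_G1, vjp_G2, dJ_du1, dJ_du2,
                                 n_iter=100, tol=1e-10):
    """Solve lam_i = sum_j (dG_j/du_i)^T lam_j + dJ/du_i for i = 1, 2.

    vjp_Gj(i, lam) must return (dG_j/du_i)^T @ lam; in practice each such
    vector-Jacobian product comes from reverse-mode AD of discipline j.
    """
    lam1 = np.zeros_like(dJ_du1)
    lam2 = np.zeros_like(dJ_du2)
    for _ in range(n_iter):
        # Block-Gauss-Seidel sweep: the lam2 update reuses the fresh lam1.
        new1 = vjp_G1(1, lam1) + vjp_G2(1, lam2) + dJ_du1  # diagonal + cross term
        new2 = vjp_G1(2, new1) + vjp_G2(2, lam2) + dJ_du2  # cross (fresh) + diagonal
        converged = max(np.linalg.norm(new1 - lam1),
                        np.linalg.norm(new2 - lam2)) < tol
        lam1, lam2 = new1, new2
        if converged:
            break
    return lam1, lam2
```

Replacing the use of the fresh lam1 with the previous iterate would give the block-Jacobi variant mentioned in the abstract.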


2021, Vol. 5 (OOPSLA), pp. 1-27
Author(s): Xipeng Shen, Guoqiang Zhang, Irene Dea, Samantha Andow, Emilio Arroyo-Fang, ...

This paper presents a novel optimization for differentiable programming named coarsening optimization. It offers a systematic way to synergize symbolic differentiation and algorithmic differentiation (AD). Through it, the granularity of the computations differentiated by each step in AD can become much larger than a single operation, leading to much reduced runtime computation and data allocation in AD. To circumvent the difficulties that control flow creates for symbolic differentiation in coarsening, this work introduces phi-calculus, a novel method that allows symbolic reasoning about, and differentiation of, computations that involve branches and loops. It further avoids "expression swell" in symbolic differentiation and balances reuse and coarsening through the design of reuse-centric segment-of-interest identification. Experiments on a collection of real-world applications show that coarsening optimization is effective in speeding up AD, producing speedups ranging from several times to two orders of magnitude.
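The following toy sketch illustrates the coarsening idea in general terms: a multi-operation segment is differentiated symbolically once (here with SymPy), so reverse-mode AD can treat the whole segment as a single coarse derivative rule instead of taping every primitive operation. The segment expression and the coarse_vjp helper are made up for illustration and are not the paper's implementation (which additionally handles branches and loops via phi-calculus).

```python
# Symbolic differentiation of a whole segment, used as one coarse AD rule.
import sympy as sp

x = sp.Symbol('x')
segment = sp.sin(x) * sp.exp(x**2) + sp.log(1 + x**2)  # many primitive operations
segment_grad = sp.diff(segment, x)                     # one symbolic derivative

# Compile the coarse value/derivative pair once; reverse-mode AD then only
# needs to apply the chain rule at the segment boundary.
f = sp.lambdify(x, segment, 'math')
df = sp.lambdify(x, segment_grad, 'math')

def coarse_vjp(x_val, upstream_grad):
    """Reverse-mode rule for the whole segment: one multiply, no per-op tape."""
    return upstream_grad * df(x_val)

print(f(0.5), coarse_vjp(0.5, 1.0))
```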


2021, Vol. 14 (9), pp. 5843-5861
Author(s): Conrad P. Koziol, Joe A. Todd, Daniel N. Goldberg, James R. Maddison

Abstract. Mass loss due to dynamic changes in ice sheets is a significant contributor to sea level rise, and this contribution is expected to increase in the future. Numerical codes simulating the evolution of ice sheets can potentially quantify this future contribution. However, the uncertainty inherent in these models propagates into projections of sea level rise and is hence crucial to understand. Key variables of ice sheet models, such as basal drag or ice stiffness, are typically initialized using inversion methodologies to ensure that models match present observations. Such inversions often involve tens or hundreds of thousands of parameters, with unknown uncertainties and dependencies. The computationally intensive nature of inversions, along with their high number of parameters, means that traditional methods such as Monte Carlo are expensive for uncertainty quantification. Here we develop a framework to estimate the posterior uncertainty of inversions and project it onto sea level change projections over the decadal timescale. The framework treats parametric uncertainty as multivariate Gaussian and exploits the equivalence between the Hessian of the model and the inverse covariance of the parameter set. The former is computed efficiently via algorithmic differentiation, and the posterior covariance is propagated in time using a time-dependent model adjoint to produce projection error bars. This work represents an important step in quantifying the internal uncertainty of projections of ice sheet models.
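A minimal sketch of the projection step under the stated Gaussian assumption: with the Hessian H of the negative log-posterior standing in for the inverse parameter covariance, the variance of a scalar quantity of interest with adjoint-computed gradient g is g^T H^{-1} g. The matrices and numbers below are illustrative placeholders, not output of the ice sheet model.

```python
# Projecting a Gaussian parameter posterior onto a scalar quantity of interest.
import numpy as np

def projected_variance(hessian, grad_qoi):
    """Variance of the quantity of interest under a Gaussian posterior."""
    # Solve H x = g instead of forming H^{-1} explicitly.
    x = np.linalg.solve(hessian, grad_qoi)
    return grad_qoi @ x

# Toy numbers: a 3-parameter problem with a placeholder Hessian and gradient.
H = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 0.5],
              [0.0, 0.5, 2.0]])
g = np.array([0.2, -0.1, 0.05])            # dQoI/dtheta, from the adjoint model
sigma = np.sqrt(projected_variance(H, g))  # one-standard-deviation error bar
print(sigma)
```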


Author(s): Scott D. Will, Marshall D. Perrin, Emiel H. Por, James Noss, Ananya Sahoo, ...

2021
Author(s): Hangkong Wu, Shenren Xu, Xiuquan Huang, Dingxi Wang

Abstract. This paper presents the development and verification of a discrete adjoint solver using algorithmic differentiation (AD). The computational cost of sensitivity evaluation using the adjoint method is largely independent of the number of design variables, making it attractive for optimization applications where there are far more design variables than objectives and constraints. To obtain the gradients of a single objective function or constraint with respect to many design variables, the nonlinear flow equations and the adjoint equations each need to be solved once at every design cycle. This paper gives a detailed presentation of how AD is used to develop a discrete adjoint solver. The data flow diagrams of the nonlinear flow, linear, and adjoint solvers are compared. Moreover, a comparison of the sensitivity convergence history, the asymptotic rate of residual convergence, and the computational cost of the linear and adjoint solvers is also made. Two cases, the subsonic Durham turbine and the transonic NASA Rotor 67, are studied in this paper. The results show that the adjoint solver has the same asymptotic rate of residual convergence as the linear solver and produces a consistent sensitivity convergence history, but it consumes more time and memory.
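A small sketch of why the adjoint approach scales with the number of objectives rather than the number of design variables, for a generic discrete residual R(w, a) = 0 and objective J(w, a). The dense Jacobian placeholders below are assumptions for illustration; in an actual solver they would be assembled, or applied matrix-free, via AD.

```python
# Adjoint vs tangent (linear) sensitivity for R(w, a) = 0 with objective J(w, a).
import numpy as np

def adjoint_gradient(dRdw, dRda, dJdw, dJda):
    """One linear solve, regardless of how many columns dRda has."""
    lam = np.linalg.solve(dRdw.T, dJdw)  # (∂R/∂w)^T lam = (∂J/∂w)^T
    return dJda - dRda.T @ lam           # dJ/da = ∂J/∂a - lam^T ∂R/∂a

def tangent_gradient(dRdw, dRda, dJdw, dJda):
    """One linear solve per design variable (one per column of dRda)."""
    dwda = np.linalg.solve(dRdw, -dRda)  # (∂R/∂w) dw/da = -∂R/∂a
    return dJda + dwda.T @ dJdw

# Consistency check on random, well-conditioned placeholder Jacobians.
rng = np.random.default_rng(0)
n, m = 5, 3
dRdw = rng.normal(size=(n, n)) + 5 * np.eye(n)
dRda = rng.normal(size=(n, m))
dJdw = rng.normal(size=n)
dJda = rng.normal(size=m)
assert np.allclose(adjoint_gradient(dRdw, dRda, dJdw, dJda),
                   tangent_gradient(dRdw, dRda, dJdw, dJda))
```

Both routines return the same gradient; the difference is that the adjoint variant needs a single transposed solve per objective, while the tangent variant needs one solve per design variable.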

