Solving partial differential equations in a data-driven multiprocessor environment

1988 ◽  
Vol 16 (2) ◽  
pp. 223-230 ◽  
Author(s):  
J. L. Gaudiot ◽  
C. M. Lin ◽  
M. Hosseiniyar
Complexity ◽  
2018 ◽  
Vol 2018 ◽  
pp. 1-16 ◽  
Author(s):  
J. Nathan Kutz ◽  
J. L. Proctor ◽  
S. L. Brunton

We consider the application of Koopman theory to nonlinear partial differential equations and data-driven spatio-temporal systems. We demonstrate that the observables chosen for constructing the Koopman operator are critical for enabling an accurate approximation to the nonlinear dynamics. If such observables can be found, then the dynamic mode decomposition (DMD) algorithm can be enacted to compute a finite-dimensional approximation of the Koopman operator, including its eigenfunctions, eigenvalues, and Koopman modes. We demonstrate simple rules of thumb for selecting a parsimonious set of observables that can greatly improve the approximation of the Koopman operator. Further, we show that the clear goal in selecting observables is to place the DMD eigenvalues on the imaginary axis, thus giving an objective function for observable selection. Judiciously chosen observables lead to physically interpretable spatio-temporal features of the complex system under consideration and provide a connection to manifold learning methods. Our method provides a valuable intermediate, yet interpretable, approximation to the Koopman operator that lies between the DMD method and the computationally intensive extended DMD (EDMD). We demonstrate the impact of observable selection, including kernel methods, and construction of the Koopman operator on several canonical nonlinear PDEs: Burgers’ equation, the nonlinear Schrödinger equation, the cubic-quintic Ginzburg-Landau equation, and a reaction-diffusion system. These examples serve to highlight the most pressing and critical challenge of Koopman theory: a principled way to select appropriate observables.
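
As a concrete illustration of the pipeline sketched above, the following is a minimal sketch of exact DMD in Python/NumPy. The toy traveling-pulse data, the observable choice g(u) = (u, u²), and the truncation rank are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def dmd(X, Y, r):
    """Exact DMD: a finite-dimensional approximation of the Koopman operator.

    X, Y : (n, m) snapshot matrices, with Y[:, k] the time-advance of X[:, k].
    r    : SVD truncation rank.
    """
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r, :]
    Atilde = U.conj().T @ Y @ Vh.conj().T / s   # reduced operator U* Y V S^-1
    evals, W = np.linalg.eig(Atilde)
    Phi = Y @ Vh.conj().T / s @ W               # exact DMD modes
    return evals, Phi, Atilde

# Toy snapshot data: a traveling Gaussian pulse (illustrative only).
x = np.linspace(-5, 5, 128)
t = np.linspace(0, 4, 80)
u = np.exp(-(x[:, None] - t[None, :]) ** 2)

# Augment the state with a nonlinear observable; g(u) = (u, u**2) is an
# assumed, illustrative choice -- the abstract's point is that this choice
# largely determines the quality of the Koopman approximation.
G = np.vstack([u, u ** 2])
evals, Phi, _ = dmd(G[:, :-1], G[:, 1:], r=10)

# Continuous-time eigenvalues; the abstract's selection criterion favors
# observables that place these on or near the imaginary axis.
omega = np.log(evals.astype(complex)) / (t[1] - t[0])
print(np.sort(omega.real))
```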


2019 ◽  
Vol 116 (31) ◽  
pp. 15344-15349 ◽  
Author(s):  
Yohai Bar-Sinai ◽  
Stephan Hoyer ◽  
Jason Hickey ◽  
Michael P. Brenner

The numerical solution of partial differential equations (PDEs) is challenging because of the need to resolve spatiotemporal features over wide length- and timescales. Often, it is computationally intractable to resolve the finest features in the solution. The only recourse is to use approximate coarse-grained representations, which aim to accurately represent long-wavelength dynamics while properly accounting for unresolved small-scale physics. Deriving such coarse-grained equations is notoriously difficult and often ad hoc. Here we introduce data-driven discretization, a method for learning optimized approximations to PDEs based on actual solutions to the known underlying equations. Our approach uses neural networks to estimate spatial derivatives, which are optimized end to end to best satisfy the equations on a low-resolution grid. The resulting numerical methods are remarkably accurate, allowing us to integrate in time a collection of nonlinear equations in 1 spatial dimension at resolutions 4× to 8× coarser than is possible with standard finite-difference methods.
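
The following toy sketch conveys the structure of the approach under simplifying assumptions: where the paper trains neural networks whose stencil coefficients depend on the local solution and are optimized end to end through the time integrator, this version fits a single shared 5-point stencil for the first derivative by least squares on synthetic smooth profiles. Grid size, stencil width, and training data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

n_coarse = 32                       # coarse periodic grid (assumed size)
x = np.linspace(0, 2 * np.pi, n_coarse, endpoint=False)
dx = x[1] - x[0]

def random_profile():
    """A smooth periodic profile and its exact derivative (stand-in data)."""
    k = np.arange(1, 4)
    a, b = rng.normal(size=3), rng.normal(size=3)
    u  = sum(a_ * np.sin(k_ * x) + b_ * np.cos(k_ * x)
             for a_, b_, k_ in zip(a, b, k))
    du = sum(a_ * k_ * np.cos(k_ * x) - b_ * k_ * np.sin(k_ * x)
             for a_, b_, k_ in zip(a, b, k))
    return u, du

# Assemble local 5-point stencils (rows) and target derivatives.
S, d = [], []
for _ in range(200):
    u, du = random_profile()
    for i in range(n_coarse):
        S.append(u[np.arange(i - 2, i + 3) % n_coarse])   # periodic stencil
        d.append(du[i])
S, d = np.asarray(S), np.asarray(d)

# Fit one shared set of stencil coefficients by least squares. (The paper
# instead trains a neural network that outputs solution-dependent
# coefficients, optimized end to end through the time integrator.)
coeffs, *_ = np.linalg.lstsq(S, d, rcond=None)
print("learned stencil:       ", np.round(coeffs, 3))
print("4th-order centered FD: ",
      np.round(np.array([1, -8, 0, 8, -1]) / (12 * dx), 3))
```

For smooth, band-limited data the fitted coefficients land close to the standard fourth-order centered-difference stencil; the paper's solution-dependent coefficients depart from such polynomial stencils precisely where doing so buys accuracy on coarse grids.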


2021 ◽  
pp. 227-227
Author(s):  
Zhijun Zhou ◽  
Zhang Qi ◽  
Xichuan Cai ◽  
Kun Li ◽  
Jingwei Zhao

Data-driven approaches have achieved remarkable success in many applications; however, their use in solving partial differential equations (PDEs) has only recently emerged. Herein, we present the potential fluid method (PFM), which uses existing data to embed physical meaning into a mathematical iterative process. The PFM is suited to PDEs arising in computational fluid dynamics, such as Burgers' equation, and iteratively determines the steady-state spatial distribution of the solution. To ground the method mathematically, we compare the PFM with the finite difference method (FDM) and explain the relationship in detail.
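
The abstract does not specify the PFM iteration itself, so it is not reproduced here; as a point of reference, the following is a minimal sketch of the FDM baseline it is compared against, applied to the steady viscous Burgers' equation. Grid resolution, viscosity, boundary values, and tolerance are assumed for illustration.

```python
import numpy as np

# Steady viscous Burgers' equation, u*u_x = nu*u_xx on (0, 1) with
# u(0) = 1, u(1) = -1, solved by pseudo-time marching to steady state.
n, nu = 101, 0.05                   # grid size and viscosity (assumed)
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
dt = 0.2 * dx ** 2 / nu             # conservative explicit step
u = 1.0 - 2.0 * x                   # initial guess satisfying the BCs

for it in range(200_000):
    ux  = (u[2:] - u[:-2]) / (2 * dx)                   # central u_x
    uxx = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx ** 2      # central u_xx
    residual = nu * uxx - u[1:-1] * ux                  # steady-state residual
    u[1:-1] += dt * residual                            # pseudo-time update
    if np.max(np.abs(residual)) < 1e-8:
        break

print(f"stopped after {it + 1} steps; max residual "
      f"{np.max(np.abs(residual)):.2e}")
# For small nu the converged profile approaches -tanh((x - 0.5) / (2 * nu)),
# which offers a quick sanity check on the discretization.
```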

