An improved data-free surrogate model for solving partial differential equations using deep neural networks

2021, Vol. 11 (1)
Author(s):  
Xinhai Chen ◽  
Rongliang Chen ◽  
Qian Wan ◽  
Rui Xu ◽  
Jie Liu

Abstract: Partial differential equations (PDEs) are ubiquitous in natural science and engineering problems. Traditional discrete methods for solving PDEs are usually time-consuming and labor-intensive due to the need for tedious mesh generation and numerical iterations. Recently, deep neural networks have shown new promise in cost-effective surrogate modeling because of their universal function approximation abilities. In this paper, we borrow the idea from physics-informed neural networks (PINNs) and propose an improved data-free surrogate model, DFS-Net. Specifically, we devise an attention-based neural structure containing a weighting mechanism to alleviate the problem of unstable or inaccurate predictions by PINNs. The proposed DFS-Net takes expanded spatial and temporal coordinates as the input and directly outputs the observables (quantities of interest). It approximates the PDE solution by minimizing the weighted residuals of the governing equations and data-fit terms, so no simulation or measured data are needed. The experimental results demonstrate that DFS-Net offers a good trade-off between accuracy and efficiency. It outperforms widely used surrogate models in terms of prediction performance on different numerical benchmarks, including the Helmholtz, Klein–Gordon, and Navier–Stokes equations.
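The data-free training signal described above, minimizing weighted residuals of the governing equations at collocation points with no simulation or measured data, can be sketched for a 1D Poisson problem. This is a minimal illustration, not the authors' DFS-Net: the neural network is replaced by closed-form trial functions, derivatives are taken by finite differences rather than automatic differentiation, and the adaptive attention weights are replaced by a fixed weight vector.

```python
import numpy as np

def pde_residual(u, f, xs, h=1e-3):
    # PDE: u''(x) = f(x); second derivative via central differences
    upp = (u(xs + h) - 2.0 * u(xs) + u(xs - h)) / h**2
    return upp - f(xs)

def weighted_residual_loss(u, f, xs, weights):
    # Weighted sum of squared equation residuals: the data-free loss
    r = pde_residual(u, f, xs)
    return float(np.mean(weights * r**2))

# Manufactured problem: f(x) = -pi^2 sin(pi x), so the exact solution
# is u(x) = sin(pi x) on (0, 1)
f = lambda x: -np.pi**2 * np.sin(np.pi * x)
u_exact = lambda x: np.sin(np.pi * x)
u_bad = lambda x: x * (1.0 - x)   # a trial function that violates the PDE

xs = np.linspace(0.1, 0.9, 50)    # interior collocation points (no data)
w = np.ones_like(xs)              # fixed weights; DFS-Net learns these

loss_exact = weighted_residual_loss(u_exact, f, xs, w)
loss_bad = weighted_residual_loss(u_bad, f, xs, w)
print(loss_exact < loss_bad)  # True: the true solution has near-zero residual
```

In a real PINN-style setup the trial function is a network and the loss is driven to zero by gradient descent on its parameters; the point here is only that the loss separates solutions from non-solutions without any labeled data.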

Author(s):  
Gitta Kutyniok ◽  
Philipp Petersen ◽  
Mones Raslan ◽  
Reinhold Schneider

Abstract: We derive upper bounds on the complexity of ReLU neural networks approximating the solution maps of parametric partial differential equations. In particular, we exploit the inherent low dimensionality of the solution manifold, without requiring any knowledge of its concrete shape, to obtain approximation rates significantly superior to those provided by classical neural network approximation results. Concretely, we use the existence of a small reduced basis to construct, for a large variety of parametric partial differential equations, neural networks that approximate the parametric solution maps in such a way that the sizes of these networks essentially depend only on the size of the reduced basis.
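The reduced-basis idea underlying the bound can be made concrete with a standard POD/SVD construction: solutions of a parametric family concentrate near a low-dimensional subspace, so a small basis suffices. This is a generic numerical sketch, not the authors' network construction; the solution family sin(μx), the snapshot count, and the basis size k are illustrative assumptions.

```python
import numpy as np

# Snapshots of a parametric solution family u(x; mu) = sin(mu * x), mu in [1, 2]
x = np.linspace(0.0, np.pi, 200)
mus = np.linspace(1.0, 2.0, 40)
snapshots = np.stack([np.sin(m * x) for m in mus], axis=1)  # shape (200, 40)

# POD: the leading left-singular vectors form a small reduced basis
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
k = 8                      # k << 200 grid points
V = U[:, :k]               # reduced basis of size k

# A solution at an unseen parameter is captured by projection onto the basis
u_new = np.sin(1.37 * x)
u_rb = V @ (V.T @ u_new)
err = np.linalg.norm(u_new - u_rb) / np.linalg.norm(u_new)
print(err < 1e-2)  # True: relative error is small despite k = 8
```

The paper's point is that a network emulating the map from parameter to reduced-basis coefficients can be small precisely because k, not the ambient discretization size, governs the complexity.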


Author(s):  
А. А. Епифанов

Abstract: Deep neural networks have advanced rapidly owing to significant progress in high-performance computing technologies. This work examines approaches based on deep neural networks for solving partial differential equations. An example of the numerical solution of the Poisson equation in a two-dimensional domain using the Galerkin method with deep neural networks is presented.
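For orientation, the classical Galerkin method that the deep variant generalizes can be sketched in 1D with a Fourier sine basis. This is a minimal illustration with a manufactured right-hand side, not the deep-Galerkin scheme itself, which replaces the fixed basis with a neural network ansatz trained on the weak or variational form.

```python
import numpy as np

# Classical Galerkin for -u'' = f on (0, 1) with u(0) = u(1) = 0,
# using the Fourier sine basis phi_n(x) = sin(n*pi*x).
# f is manufactured so that the exact solution is u(x) = sin(pi*x).
x = np.linspace(0.0, 1.0, 1001)
dx = x[1] - x[0]
f = np.pi**2 * np.sin(np.pi * x)

N = 8  # number of basis functions (illustrative choice)
u_h = np.zeros_like(x)
for n in range(1, N + 1):
    phi = np.sin(n * np.pi * x)
    # Weak form a(phi_n, phi_n) c_n = (f, phi_n); the sine basis
    # diagonalizes the stiffness matrix, a(phi_n, phi_n) = (n*pi)^2 / 2
    b_n = np.sum(f * phi) * dx   # trapezoid rule (endpoint values vanish)
    c_n = b_n / ((n * np.pi) ** 2 / 2.0)
    u_h += c_n * phi

err = np.max(np.abs(u_h - np.sin(np.pi * x)))
print(err < 1e-3)  # True: the Galerkin solution matches the exact one
```

The deep version keeps the same variational principle but optimizes network parameters instead of solving the linear system for basis coefficients, which is what makes it attractive in higher dimensions.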


Author(s):  
Pratik Chaudhari ◽  
Adam Oberman ◽  
Stanley Osher ◽  
Stefano Soatto ◽  
Guillaume Carlier

Author(s):  
Jean Chamberlain Chedjou ◽  
Kyandoghere Kyamakya

This paper develops and validates, through a series of representative examples, a comprehensive, high-precision, and ultrafast computing concept for solving nonlinear ordinary differential equations (ODEs) and partial differential equations (PDEs) with cellular neural networks (CNN). The core of this concept is a straightforward scheme that we call "nonlinear adaptive optimization" (NAOP), which is used for precise template calculation for solving nonlinear ODEs and PDEs through CNN processors. One of the key contributions of this work is to demonstrate the possibility of mapping different types of nonlinearities displayed by various classical and well-known nonlinear equations (e.g., the van der Pol, Rayleigh, Duffing, Rössler, Lorenz, and Jerk equations, to name a few) onto first-order CNN elementary cells, thereby enabling the easy derivation of the corresponding CNN templates. Furthermore, in the case of PDE solving, the same concept allows a mapping onto first-order CNN cells while considering one or even more nonlinear terms of the Taylor series expansion generally used to transform a PDE into a set of coupled nonlinear ODEs. The concept of this paper therefore significantly contributes to the consolidation of CNN as a universal and ultrafast solver of nonlinear ODEs and/or PDEs, enabling CNN-based, real-time, ultraprecise, and low-cost computational engineering. As proof of concept, two examples of well-known ODEs are considered, namely a second-order linear ODE and a second-order nonlinear ODE of the van der Pol type. For each of these ODEs, the corresponding precise CNN templates are derived and used to deduce the expected solutions. An implementation of the concept developed is possible even on embedded digital platforms (e.g., field-programmable gate array (FPGA), digital signal processor (DSP), graphics processing unit (GPU), etc.), which opens a broad range of applications.
Ongoing work (as an outlook) uses NAOP to derive precise templates for a selected set of practically interesting ODE and PDE models, such as the Lorenz, Rössler, Navier–Stokes, Schrödinger, and Maxwell equations.
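The reduction to first-order form that NAOP maps onto elementary CNN cells can be illustrated with the van der Pol example from the paper. This is a plain numerical sketch of the first-order system, not the CNN template derivation itself; the RK4 integrator, step size, and parameter μ = 1 are assumptions for illustration.

```python
import numpy as np

# Van der Pol oscillator x'' - mu*(1 - x^2)*x' + x = 0 rewritten as the
# first-order system that maps onto first-order cells:
#   x1' = x2
#   x2' = mu*(1 - x1^2)*x2 - x1
def vdp(state, mu=1.0):
    x1, x2 = state
    return np.array([x2, mu * (1.0 - x1**2) * x2 - x1])

def rk4_step(f, y, dt):
    # One classical fourth-order Runge-Kutta step
    k1 = f(y)
    k2 = f(y + 0.5 * dt * k1)
    k3 = f(y + 0.5 * dt * k2)
    k4 = f(y + dt * k3)
    return y + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

y = np.array([2.0, 0.0])
dt = 0.01
traj = [y]
for _ in range(5000):          # integrate to t = 50, well onto the limit cycle
    y = rk4_step(vdp, y, dt)
    traj.append(y)
traj = np.array(traj)

# The limit-cycle amplitude for mu = 1 is close to 2
amplitude = np.max(np.abs(traj[2000:, 0]))
```

In the CNN setting, NAOP's job is to find templates so that the cell dynamics reproduce exactly this first-order system; the reference trajectory above is the kind of solution the derived templates are expected to match.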

