The Multivariate Theory of Functional Connections: Theory, Proofs, and Application in Partial Differential Equations

Mathematics ◽  
2020 ◽  
Vol 8 (8) ◽  
pp. 1303 ◽  
Author(s):  
Carl Leake ◽  
Hunter Johnston ◽  
Daniele Mortari

This article presents a reformulation of the Theory of Functional Connections: a general methodology for functional interpolation that can embed a set of user-specified linear constraints. The reformulation presented in this paper exploits the underlying functional structure introduced in the seminal paper on the Theory of Functional Connections to ease the derivation of these interpolating functionals, called constrained expressions, and provides rigorous terminology that lends itself to straightforward proofs of the properties of these constrained expressions. Furthermore, the technique, and the accompanying proofs, extend immediately to n dimensions through a recursive application of the univariate formulation. Throughout, the results of this reformulation are compared with prior work to highlight the novelty and mathematical convenience of the approach. Finally, the methodology presented in this paper is applied to two partial differential equations with different boundary conditions, and, where data are available, the results are compared with state-of-the-art methods.
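To make the idea of a constrained expression concrete, here is a minimal sketch in the univariate case. It embeds the Dirichlet constraints y(0) = 1 and y(1) = 2 so that they hold for *any* free function g; the support functions s1(x) = 1 - x and s2(x) = x are the simplest possible choice, an assumption for illustration rather than the paper's exact construction.

```python
import numpy as np

# Constraints to embed: y(0) = y0 and y(1) = y1.
y0, y1 = 1.0, 2.0

def constrained_expression(x, g):
    """TFC-style constrained expression with support functions 1 - x and x:
    y(x, g) = g(x) + (1 - x) * (y0 - g(0)) + x * (y1 - g(1)).
    The correction terms cancel g at the boundaries, so the constraints
    are satisfied exactly regardless of the free function g."""
    return g(x) + (1.0 - x) * (y0 - g(0.0)) + x * (y1 - g(1.0))

# Any free function satisfies the constraints analytically:
for g in (np.sin, np.exp, lambda x: 0.0 * x):
    assert abs(constrained_expression(0.0, g) - y0) < 1e-12
    assert abs(constrained_expression(1.0, g) - y1) < 1e-12
```

Because the constraints are met identically, g remains entirely free, which is what allows a solver (or, in the second article below, a neural network) to search over g without ever violating the boundary conditions.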

2020 ◽  
Vol 2 (1) ◽  
pp. 37-55 ◽  
Author(s):  
Carl Leake ◽  
Daniele Mortari

This article presents a new methodology called Deep Theory of Functional Connections (TFC) that estimates the solutions of partial differential equations (PDEs) by combining neural networks with the TFC. The TFC is used to transform PDEs into unconstrained optimization problems by analytically embedding the PDE's constraints into a "constrained expression" containing a free function. In this research, the free function is chosen to be a neural network, which is used to solve the now unconstrained optimization problem. This optimization problem consists of minimizing a loss function chosen to be the square of the residuals of the PDE. The neural network is trained in an unsupervised manner to minimize this loss function. This methodology has two major differences when compared with popular methods used to estimate the solutions of PDEs. First, this methodology does not need to discretize the domain into a grid; rather, it can randomly sample points from the domain during the training phase. Second, after training, this methodology produces an accurate analytical approximation of the solution throughout the entire training domain. Because the methodology produces an analytical solution, it is straightforward to evaluate the solution at any point within the domain and to perform further manipulation if needed, such as differentiation. In contrast, other popular methods require extra numerical techniques if the estimated solution is desired at points that do not lie on the discretized grid, or if further manipulation of the estimated solution must be performed.
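The pipeline described above, a constrained expression whose free function is fit by minimizing the squared residual of the differential equation, can be sketched on a simple ODE. To keep the sketch short, the free function here is a single-layer random-feature "network" (fixed random hidden weights, trainable linear output), so the residual minimization reduces to linear least squares; the paper itself uses a deep network trained by gradient descent, so the network architecture and the solve below are illustrative assumptions.

```python
import numpy as np

# Problem: y' + y = 0 on [0, 1] with y(0) = 1 (exact solution exp(-x)).
rng = np.random.default_rng(0)
m, n = 30, 100                       # hidden features, sample points
w = rng.uniform(-2.0, 2.0, m)        # fixed hidden weights (assumption)
b = rng.uniform(-2.0, 2.0, m)        # fixed hidden biases (assumption)
x = np.linspace(0.0, 1.0, n)         # sampled training points
y0 = 1.0                             # embedded constraint: y(0) = 1

phi  = np.tanh(np.outer(x, w) + b)   # free function g(x)  = phi  @ c
dphi = w * (1.0 - phi**2)            # derivative    g'(x) = dphi @ c
phi0 = np.tanh(b)                    # g evaluated at x = 0

# Constrained expression y = g(x) + (y0 - g(0)) satisfies y(0) = y0 for
# any c.  The ODE residual is then (dphi + phi - phi0) @ c + y0, and
# minimizing its square is an ordinary least-squares problem in c.
A = dphi + phi - phi0
c, *_ = np.linalg.lstsq(A, -y0 * np.ones(n), rcond=None)

y = phi @ c + (y0 - phi0 @ c)        # analytical approximation on [0, 1]
print(np.max(np.abs(y - np.exp(-x))))
```

Note that the constraint y(0) = 1 holds exactly by construction, not approximately; only the residual of the differential equation is minimized, which is what makes the optimization unconstrained.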


2014 ◽  
Vol 2014 ◽  
pp. 1-11 ◽
Author(s):  
Herb E. Kunze ◽  
Davide La Torre ◽  
Franklin Mendivil ◽  
Manuel Ruiz Galán ◽  
Rachad Zaki

We illustrate, in this short survey, the current state of the art of fractal-based techniques and their application to the solution of inverse problems for ordinary and partial differential equations. We review several methods based on the Collage Theorem and its extensions. We also discuss two innovative applications: the first is related to a vibrating string model, while the second considers a collage-based approach to solving inverse problems for partial differential equations on a perforated domain.
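The collage-based approach to inverse problems can be illustrated on the simplest possible case. For x'(t) = λx(t) with x(0) = 1, the Collage Theorem reduces parameter recovery to making the collage distance ‖x − T_λx‖ small, where (T_λx)(t) = x(0) + λ∫₀ᵗ x(s) ds is the Picard operator; since this distance is quadratic in λ, its minimizer has a closed form. The synthetic data and true value λ = 0.5 below are assumptions for the demonstration.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1001)
x = np.exp(0.5 * t)                  # "observed" trajectory, true lambda = 0.5
x0 = x[0]

def cumtrapz(y, t):
    """Cumulative trapezoidal integral of y over t, starting at 0."""
    inc = 0.5 * (y[1:] + y[:-1]) * np.diff(t)
    return np.concatenate(([0.0], np.cumsum(inc)))

def trapz(y, t):
    """Trapezoidal integral of y over t."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))

I = cumtrapz(x, t)                   # int_0^t x(s) ds, the Picard integral
# Collage distance squared: || (x - x0) - lambda * I ||^2.  Setting its
# derivative in lambda to zero gives the closed-form minimizer:
lam = trapz((x - x0) * I, t) / trapz(I * I, t)
print(lam)                           # close to the true value 0.5
```

Rather than solving the forward problem repeatedly inside an optimization loop, the collage approach fits the data directly against the operator, which is what makes it attractive for the PDE settings surveyed above.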

