A Finite Difference Method for the Variational p-Laplacian

2021 ◽  
Vol 90 (1) ◽  
Author(s):  
Félix del Teso ◽  
Erik Lindgren

Abstract: We propose a new monotone finite difference discretization for the variational p-Laplace operator, $$\Delta _p u=\text{div}(|\nabla u|^{p-2}\nabla u),$$ and present a convergent numerical scheme for related Dirichlet problems. The resulting nonlinear system is solved using two different methods: one based on Newton-Raphson and one explicit method. Finally, we exhibit some numerical simulations supporting our theoretical results. To the best of our knowledge, this is the first monotone finite difference discretization of the variational p-Laplacian, and also the first time that nonhomogeneous problems for this operator can be treated numerically with a finite difference scheme.
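To fix ideas, here is a minimal one-dimensional sketch of the two ingredients the abstract names: a finite difference approximation of the variational p-Laplacian and an explicit iteration for a Dirichlet problem. This is a generic illustration, not the authors' scheme (which is multi-dimensional); the names `p_laplacian_1d` and `solve_dirichlet` and the step size `tau` are illustrative choices, and the sketch assumes p ≥ 2 so the flux is well defined where the gradient vanishes.

```python
import numpy as np

def p_laplacian_1d(u, h, p):
    # 1D finite difference approximation of the variational p-Laplacian
    # Delta_p u = (|u'|^{p-2} u')' at the interior grid points.
    du = np.diff(u)                     # u_{i+1} - u_i
    flux = np.abs(du) ** (p - 2) * du   # |u'|^{p-2} u' at half-points, scaled by h^{p-1}
    return np.diff(flux) / h ** p       # discrete divergence of the flux

def solve_dirichlet(f, g_left, g_right, n=32, p=2.0, tau=4e-4, iters=20000):
    # Explicit pseudo-time iteration for Delta_p u = f on (0, 1) with
    # Dirichlet boundary data; tau must be small enough for stability
    # (for p = 2 this is the usual tau <= h^2 / 2 restriction).
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    u = np.zeros(n + 1)
    u[0], u[-1] = g_left, g_right
    fv = f(x[1:-1])
    for _ in range(iters):
        u[1:-1] += tau * (p_laplacian_1d(u, h, p) - fv)
    return x, u
```

For p = 2 the discretization reduces to the standard second difference, which is one quick sanity check on the flux form.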

2004 ◽  
Vol 4 (1) ◽  
pp. 34-47 ◽  
Author(s):  
Francisco J. Gaspar ◽  
Francisco J. Lisbona ◽  
Petr N. Vabishchevich

Abstract: Energy estimates and convergence analysis of finite difference methods for Biot's consolidation model are presented for several types of radial flow. The model is written as a system of partial differential equations depending on an integer parameter (n = 0, 1, 2), corresponding to one-dimensional flow through a deformable slab and radial flow through an elastic cylindrical or spherical shell, respectively. The finite difference discretization is performed on staggered grids, using separate points for the approximation of pressure and displacements. Numerical results are given to illustrate the theoretical results.
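The key structural idea in the abstract is the staggered grid: pressure and displacement are approximated at separate points, so each difference operator lands naturally where the other variable lives. A minimal 1D sketch of that indexing (an illustration only, not the paper's full Biot discretization; the function names are ours):

```python
import numpy as np

def grad_at_nodes(p, h):
    # Pressure p lives at cell centers x_{i+1/2}; its difference quotient
    # is naturally defined at the interior displacement nodes x_i.
    return np.diff(p) / h

def div_at_centers(u, h):
    # Displacement u lives at the nodes x_i; its divergence (u_x in 1D)
    # is naturally defined at the cell centers x_{i+1/2}.
    return np.diff(u) / h
```

Because each operator maps one grid to the other, the discrete gradient and divergence are (negative) adjoints of each other, which is what makes the energy estimates mentioned in the abstract go through at the discrete level.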


1995 ◽  
Vol 06 (03) ◽  
pp. 249-256 ◽  
Author(s):  
FU-SHENG TSUNG ◽  
GARRISON W. COTTRELL

A recurrent learning algorithm based on a finite difference discretization of continuous equations for neural networks is derived. This algorithm has the simplicity of discrete algorithms while retaining some essential characteristics of the continuous equations. In discrete networks, learning smooth oscillations is difficult if the period of oscillation is too large: the network either grossly distorts the waveforms or is unable to learn at all. We show how the finite difference formulation can explain and overcome this problem. Formulas for learning time constants and time delays in this framework are also presented.
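The abstract does not state the continuous equations, so as a hedged illustration, assume the common leaky-integrator form tau * du/dt = -u + W tanh(u) + I. A forward-Euler finite difference step of that system then reads:

```python
import numpy as np

def step(u, W, I, dt, tau):
    # Forward-Euler finite difference step for the assumed continuous-time
    # network  tau * du/dt = -u + W @ tanh(u) + I.  This is a generic
    # sketch, not the paper's learning algorithm.
    return u + (dt / tau) * (-u + W @ np.tanh(u) + I)
```

Note that choosing dt = tau collapses the update to u_new = W tanh(u) + I, a purely discrete-time network, which illustrates how the finite difference formulation interpolates between the discrete and continuous descriptions discussed in the abstract.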

