Trigonometric Approximation of Solutions of Periodic Pseudodifferential Equations

Author(s): W. McLean, W. L. Wendland

2020, Vol 2020 (1)
Author(s): Abhishek Mishra, Vishnu Narayan Mishra, M. Mursaleen

In this paper, we establish a new estimate for the degree of approximation of functions $f(x,y)$ belonging to the generalized Lipschitz class $Lip((\xi_{1}, \xi_{2}); r)$, $r \geq 1$, by double Hausdorff matrix summability means of double Fourier series. We also deduce the degree of approximation of functions from $Lip((\alpha, \beta); r)$ and $Lip(\alpha, \beta)$ as corollaries, and establish some auxiliary results on trigonometric approximation for almost Euler means and $(C, \gamma, \delta)$ means.
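For orientation, one common formulation of the generalized Lipschitz class for double Fourier series is sketched below; the authors' exact definition may differ in detail (for instance, in normalization or in using a product rather than a sum of the moduli of continuity):

```latex
% Sketch of a common definition (assumption: the paper's exact
% formulation may differ).  f belongs to Lip((\xi_1,\xi_2); r) when
% the integral r-norm of the double difference is dominated by the
% moduli of continuity \xi_1, \xi_2:
\[
  \Bigl( \frac{1}{4\pi^{2}} \int_{0}^{2\pi}\!\!\int_{0}^{2\pi}
    \bigl| f(x+u,\, y+v) - f(x,\, y) \bigr|^{r} \, dx \, dy \Bigr)^{1/r}
  = O\bigl( \xi_{1}(|u|) + \xi_{2}(|v|) \bigr), \qquad r \ge 1 .
\]
% Taking \xi_1(t) = t^{\alpha} and \xi_2(t) = t^{\beta} recovers
% Lip((\alpha,\beta); r), and letting r -> \infty gives Lip(\alpha,\beta),
% matching the corollaries mentioned in the abstract.
```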


2021, Vol 11 (15), pp. 6704
Author(s): Jingyong Cai, Masashi Takemoto, Yuming Qiu, Hironori Nakajo

Despite being heavily used in the training of deep neural networks (DNNs), multipliers are resource-intensive and often in short supply on target hardware. Previous work has shown the advantage of computing activation functions, such as the sigmoid, with shift-and-add operations, although these approaches fail to remove multiplications from training altogether. In this paper, we propose an approach that converts all multiplications in the forward and backward passes of DNNs into shift-and-add operations. Because the model parameters and backpropagated errors of a large DNN model are typically clustered around zero, these values can be closely approximated by their sines. Multiplications between weights and error signals are thus transferred to multiplications of their sine values, which, by the product-to-sum formula, are replaceable with simpler operations. In addition, a rectified sine activation function converts layer inputs into sine values as well. In this way, the original multiplication-intensive operations can be computed through simple shift-and-add operations. This trigonometric approximation method provides an efficient training and inference alternative for devices with insufficient hardware multipliers. Experimental results demonstrate that the method achieves performance close to that of classical training algorithms. The approach we propose sheds new light on future hardware customization research for machine learning.
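A minimal numerical sketch of the substitution described above, relying only on the product-to-sum identity; the function name is illustrative, and NumPy's cosine stands in for the shift-and-add (e.g., CORDIC-style) cosine evaluation that actual hardware would use:

```python
import numpy as np

def sine_product_mul(w, e):
    """Approximate w * e for values clustered near zero.

    For small x, sin(x) ~ x, so w * e ~ sin(w) * sin(e).  The
    product-to-sum identity
        sin(a) * sin(b) = 0.5 * (cos(a - b) - cos(a + b))
    turns the multiplication into two cosine evaluations, a
    subtraction, and a halving (a single right shift in fixed
    point).  In hardware the cosines themselves would be computed
    with shift-and-add schemes; np.cos is a stand-in here.
    """
    return 0.5 * (np.cos(w - e) - np.cos(w + e))

# Weights and backpropagated errors are typically clustered near zero.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.05, size=10_000)
e = rng.normal(scale=0.05, size=10_000)

exact = w * e
approx = sine_product_mul(w, e)
print("max abs error:", np.max(np.abs(exact - approx)))
```

Since sin(x) = x - x^3/6 + ..., the relative error of the substitution scales like the squared magnitude of the operands, which is why the method targets weights and error signals clustered around zero.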


2001, Vol 09 (02), pp. 495-513
Author(s): A. HANYGA, M. SEREDYŃSKA

Uniformly asymptotic frequency-domain solutions are derived for a class of hyperbolic equations with singular convolution operators. Asymptotic solutions for this class of equations involve additional parameters, called attenuation parameters, which control the smoothing of the wavefield at the wavefront. At caustics the ray amplitudes have a singularity associated with the vanishing of ray spreading and with the divergence of an integral controlling the rate of exponential amplitude decay. Both problems are resolved by applying a generalized Kravtsov–Ludwig formula derived in this paper. A different asymptotic solution is constructed for the case in which dispersion and focusing effects separate.
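For orientation, the classical Kravtsov–Ludwig uniform ansatz near a smooth caustic is sketched below; this is the standard Airy-function form only, not the paper's generalized formula, which additionally carries the attenuation parameters:

```latex
% Classical Kravtsov--Ludwig ansatz (sketch; the paper derives a
% generalization).  Ai is the Airy function and \tau_{\pm} are the
% phases of the two rays that coalesce at the caustic.
\[
  u(x,\omega) \sim e^{\,i\omega\theta(x)}
  \Bigl[ g_{0}(x)\, \mathrm{Ai}\!\bigl(-\omega^{2/3}\rho(x)\bigr)
       + i\,\omega^{-1/3}\, g_{1}(x)\,
         \mathrm{Ai}'\!\bigl(-\omega^{2/3}\rho(x)\bigr) \Bigr],
\]
\[
  \theta = \tfrac{1}{2}\,(\tau_{+} + \tau_{-}), \qquad
  \rho = \Bigl( \tfrac{3}{4}\,(\tau_{+} - \tau_{-}) \Bigr)^{2/3}.
\]
% The amplitudes g_0, g_1 remain bounded at the caustic, repairing
% the ray-amplitude singularity mentioned in the abstract.
```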

