On the trigonometric approximation of the generalized weighted Lipschitz class

2014 ◽  
Vol 247 ◽  
pp. 1139-1140
Author(s):  
Ren-Jiang Zhang

2020 ◽  
Vol 2020 (1) ◽  
Author(s):  
Abhishek Mishra ◽  
Vishnu Narayan Mishra ◽  
M. Mursaleen

In this paper, we establish a new estimate for the degree of approximation of functions $f(x,y)$ belonging to the generalized Lipschitz class $Lip((\xi_{1},\xi_{2});r)$, $r \geq 1$, by double Hausdorff matrix summability means of double Fourier series. We also deduce the degree of approximation of functions from $Lip((\alpha,\beta);r)$ and $Lip(\alpha,\beta)$ as corollaries. We establish some auxiliary results on trigonometric approximation for almost Euler means and $(C,\gamma,\delta)$ means.
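For context, one common formulation of this class is the following sketch (an assumption on my part; the paper's exact definition and notation may differ): a function $f$, $2\pi$-periodic in each variable, belongs to $Lip((\xi_{1},\xi_{2});r)$, $r \geq 1$, if
$$\|f(x+s,y+t)-f(x,y)\|_{r}=O\bigl(\xi_{1}(|s|)+\xi_{2}(|t|)\bigr),$$
where $\xi_{1}$ and $\xi_{2}$ are moduli of continuity. Choosing $\xi_{1}(s)=s^{\alpha}$ and $\xi_{2}(t)=t^{\beta}$ recovers $Lip((\alpha,\beta);r)$, and the case $r\to\infty$ corresponds to $Lip(\alpha,\beta)$.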


2016 ◽  
Vol 2016 ◽  
pp. 1-7 ◽  
Author(s):  
M. L. Mittal ◽  
Mradul Veer Singh

We prove two theorems on the approximation of functions belonging to the Lipschitz class $Lip(\alpha,p)$ in the $L_{p}$-norm using the Cesàro submethod. Further, we discuss a few corollaries of our theorems and compare them with existing results. We also note that our results give sharper estimates than those in some of the known results.
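For background, the class in question is usually defined as follows (a standard formulation, stated here as an assumption rather than a quotation from the paper): a $2\pi$-periodic function $f\in L_{p}[0,2\pi]$, $p\geq 1$, belongs to $Lip(\alpha,p)$, $0<\alpha\leq 1$, if
$$\Bigl(\int_{0}^{2\pi}|f(x+t)-f(x)|^{p}\,dx\Bigr)^{1/p}=O(|t|^{\alpha}).$$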


2021 ◽  
Vol 11 (15) ◽  
pp. 6704
Author(s):  
Jingyong Cai ◽  
Masashi Takemoto ◽  
Yuming Qiu ◽  
Hironori Nakajo

Despite being heavily used in the training of deep neural networks (DNNs), multipliers are resource-intensive and often in short supply on many hardware platforms. Previous work has shown the benefit of computing activation functions, such as the sigmoid, with shift-and-add operations, although such approaches fail to remove multiplications from training altogether. In this paper, we propose an approach that converts all multiplications in the forward and backward passes of DNNs into shift-and-add operations. Because the model parameters and backpropagated errors of a large DNN model are typically clustered around zero, these values can be approximated by their sine values. Multiplications between the weights and error signals are converted into multiplications of their sine values, which can be replaced with simpler operations using the product-to-sum formula. In addition, a rectified sine activation function is used to further convert layer inputs into sine values. In this way, the original multiplication-intensive operations can be computed through simple shift-and-add operations. This trigonometric approximation method provides an efficient training and inference alternative for devices with insufficient hardware multipliers. Experimental results demonstrate that this method achieves performance close to that of classical training algorithms. The approach we propose sheds new light on future hardware customization research for machine learning.
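To make the multiplication-replacement step concrete, here is a minimal NumPy sketch of the product-to-sum trick (the function and variable names are illustrative assumptions, not the authors' code):

import numpy as np

def product_to_sum_mul(w: np.ndarray, e: np.ndarray) -> np.ndarray:
    """Approximate w * e without multiplying w and e directly."""
    # For values clustered near zero, x is close to sin(x), so w * e is close to
    # sin(w) * sin(e) = 0.5 * (cos(w - e) - cos(w + e)).
    # The multiply is thus replaced by an add, a subtract, two cosine
    # evaluations, and a halving (a right shift in fixed point).
    return 0.5 * (np.cos(w - e) - np.cos(w + e))

rng = np.random.default_rng(0)
# Small operands around zero, as weights and backpropagated errors tend to be.
w = rng.normal(scale=0.05, size=10_000)
e = rng.normal(scale=0.05, size=10_000)
print("max abs error:", np.max(np.abs(w * e - product_to_sum_mul(w, e))))

In hardware, the two cosine evaluations and the halving would be the pieces realized with shift-and-add logic, which is where the multiplier savings claimed in the abstract would come from; the NumPy version above only checks the numerical quality of the approximation.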


2020 ◽  
Vol 55 (3) ◽  
pp. 196-199
Author(s):  
F. Tugores ◽  
L. Tugores