Jackson and Stechkin type inequalities of trigonometric approximation in $A^{p,q(\cdot)}_{w,\vartheta}$

2018
Vol 42 (6)
pp. 2979-2993
Author(s):
Ahmet Hamdi AVŞAR
Hüseyin KOÇ
2020
Vol 2020 (1)
Author(s):
Abhishek Mishra
Vishnu Narayan Mishra
M. Mursaleen

Abstract: In this paper, we establish a new estimate for the degree of approximation of functions $f(x,y)$ belonging to the generalized Lipschitz class $Lip((\xi_{1}, \xi_{2}); r)$, $r \geq 1$, by double Hausdorff matrix summability means of double Fourier series. We also deduce the degree of approximation of functions from $Lip((\alpha, \beta); r)$ and $Lip(\alpha, \beta)$ as corollaries. We establish some auxiliary results on trigonometric approximation for almost Euler means and $(C, \gamma, \delta)$ means.
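For orientation, one commonly used two-variable form of the generalized Lipschitz class named in the abstract is sketched below; this is a standard textbook-style definition, not necessarily the exact variant used in the paper:
\[
f \in Lip\bigl((\xi_{1},\xi_{2});r\bigr) \quad\Longleftrightarrow\quad \bigl\| f(\cdot+u,\cdot+v) - f(\cdot,\cdot) \bigr\|_{r} = O\bigl(\xi_{1}(u)+\xi_{2}(v)\bigr),
\]
where $\xi_{1}, \xi_{2}$ are positive, increasing functions. Taking $\xi_{1}(u)=u^{\alpha}$ and $\xi_{2}(v)=v^{\beta}$ recovers $Lip((\alpha,\beta);r)$, and passing to the sup norm gives $Lip(\alpha,\beta)$.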


2021
Vol 11 (15)
pp. 6704
Author(s):
Jingyong Cai
Masashi Takemoto
Yuming Qiu
Hironori Nakajo

Despite being heavily used in the training of deep neural networks (DNNs), multipliers are resource-intensive and often in short supply. Previous work has shown the benefit of computing activation functions such as the sigmoid with shift-and-add operations, although those approaches do not remove multiplications from training altogether. In this paper, we propose an approach that converts all multiplications in the forward and backward passes of DNNs into shift-and-add operations. Because the model parameters and backpropagated errors of a large DNN model are typically clustered around zero, these values can be approximated by their sine values. Multiplications between weights and error signals thus become multiplications of their sine values, which the product-to-sum formula reduces to simpler operations. In addition, a rectified sine activation function is used to convert layer inputs into sine values as well. In this way, the original multiplication-intensive operations can be computed through simple shift-and-add operations. This trigonometric approximation method provides an efficient training and inference alternative for devices with insufficient hardware multipliers. Experimental results demonstrate that this method achieves performance close to that of classical training algorithms. The proposed approach sheds new light on future hardware customization research for machine learning.
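A minimal NumPy sketch of the product-to-sum substitution the abstract describes is given below. The function name and the test values are illustrative assumptions, and np.cos merely stands in for the lookup-table or shift-and-add cosine a hardware implementation would use; this is not the authors' implementation.

import numpy as np

def product_to_sum_mul(w, e):
    # Approximate w * e for values clustered near zero, where x ~= sin(x).
    # Then w * e ~= sin(w) * sin(e) = 0.5 * (cos(w - e) - cos(w + e)),
    # leaving an addition, a subtraction, a halving (a shift in fixed point),
    # and two cosine evaluations that hardware could serve from a small table.
    return 0.5 * (np.cos(w - e) - np.cos(w + e))

# Quick check on small values typical of weights and backpropagated errors.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.05, size=1000)
e = rng.normal(scale=0.05, size=1000)
exact = w * e
approx = product_to_sum_mul(w, e)
print("max abs error:", np.max(np.abs(exact - approx)))

The approximation error is third order in the inputs, which is why the near-zero clustering of weights and error signals matters for this scheme.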

