approximation power
Recently Published Documents


TOTAL DOCUMENTS: 34 (FIVE YEARS: 2)
H-INDEX: 10 (FIVE YEARS: 0)

2021 ◽ Vol 47 (3) ◽ Author(s): Mario Kapl, Vito Vitrih

Abstract: The design of globally C^s-smooth (s ≥ 1) isogeometric spline spaces over multi-patch geometries with possibly extraordinary vertices, i.e. vertices with valencies different from four, is a current and challenging topic of research in the framework of isogeometric analysis. In this work, we extend the recent methods of Kapl et al. (Comput. Aided Geom. Des. 52–53:75–89, 2017; Comput. Aided Geom. Des. 69:55–75, 2019) and Kapl and Vitrih (J. Comput. Appl. Math. 335:289–311, 2018; J. Comput. Appl. Math. 358:385–404, 2019; Comput. Methods Appl. Mech. Engrg. 360:112684, 2020) for the construction of C^1-smooth and C^2-smooth isogeometric spline spaces over particular planar multi-patch geometries to the case of C^s-smooth isogeometric multi-patch spline spaces of degree p, inner regularity r and smoothness s ≥ 1, with p ≥ 2s + 1 and s ≤ r ≤ p − s − 1. More precisely, we study for s ≥ 1 the space of C^s-smooth isogeometric spline functions defined on planar, bilinearly parameterized multi-patch domains, and generate a particular C^s-smooth subspace of the entire C^s-smooth isogeometric multi-patch spline space. We further present the construction of a basis for this C^s-smooth subspace, which consists of simple and locally supported functions. Moreover, we use the C^s-smooth spline functions to perform L^2 approximation on bilinearly parameterized multi-patch domains, where the obtained numerical results indicate an optimal approximation power of the constructed C^s-smooth subspace.
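
None of these entries include code; as an editorial illustration only, the following is a minimal sketch in plain Python (NumPy/SciPy) of the basic operation behind the reported experiments: L^2 approximation of a function by splines. It uses a univariate B-spline space on a single interval rather than the paper's C^s-smooth multi-patch spaces, and the degree, knot vector and target function are illustrative assumptions, not taken from the paper.

# Minimal sketch: L^2 projection onto a univariate B-spline space on [0, 1].
# This is generic spline approximation, not the C^s-smooth multi-patch
# construction of the paper; degree, knots and target below are assumptions.
import numpy as np
from scipy.interpolate import BSpline

p = 3                                     # spline degree (illustrative)
n_elem = 8                                # number of knot spans
interior = np.linspace(0.0, 1.0, n_elem + 1)[1:-1]
knots = np.concatenate(([0.0] * (p + 1), interior, [1.0] * (p + 1)))
n_basis = len(knots) - p - 1              # dimension of the spline space

def basis(i, x):
    # Evaluate the i-th B-spline basis function at the points x.
    c = np.zeros(n_basis)
    c[i] = 1.0
    return BSpline(knots, c, p, extrapolate=False)(x)

# Gauss-Legendre quadrature on every knot span.
gp, gw = np.polynomial.legendre.leggauss(p + 1)
breaks = np.unique(knots)
xq, wq = [], []
for a, b in zip(breaks[:-1], breaks[1:]):
    xq.append(0.5 * (b - a) * gp + 0.5 * (a + b))
    wq.append(0.5 * (b - a) * gw)
xq, wq = np.concatenate(xq), np.concatenate(wq)

def f(x):                                 # target function (illustrative)
    return np.sin(2.0 * np.pi * x)

B = np.array([basis(i, xq) for i in range(n_basis)])  # (n_basis, n_quad)
M = (B * wq) @ B.T                        # mass matrix  M_ij = integral B_i B_j
rhs = (B * wq) @ f(xq)                    # load vector  f_i = integral B_i f
coef = np.linalg.solve(M, rhs)            # coefficients of the L^2 projection

err = np.sqrt(np.sum(wq * (B.T @ coef - f(xq)) ** 2))
print(f"L2 error of the spline projection: {err:.2e}")

Refining the knot vector or raising the degree in this sketch should show the expected decay of the L^2 error, which is the kind of optimal approximation behaviour the abstract reports for the constructed C^s-smooth subspace.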



2021 ◽ pp. 1-32 ◽ Author(s): Zuowei Shen, Haizhao Yang, Shijun Zhang

A new network with super-approximation power is introduced. This network is built with the Floor ([Formula: see text]) or ReLU ([Formula: see text]) activation function in each neuron; hence, we call such networks Floor-ReLU networks. For any hyperparameters [Formula: see text] and [Formula: see text], we show that Floor-ReLU networks with width [Formula: see text] and depth [Formula: see text] can uniformly approximate a Hölder function [Formula: see text] on [Formula: see text] with an approximation error [Formula: see text], where [Formula: see text] and [Formula: see text] are the Hölder order and constant, respectively. More generally, for an arbitrary continuous function [Formula: see text] on [Formula: see text] with a modulus of continuity [Formula: see text], the constructive approximation rate is [Formula: see text]. As a consequence, this new class of networks overcomes the curse of dimensionality in approximation power when the variation of [Formula: see text] as [Formula: see text] is moderate (e.g., [Formula: see text] for Hölder continuous functions), since the major term to be considered in our approximation rate is essentially [Formula: see text] times a function of [Formula: see text] and [Formula: see text], independent of [Formula: see text], within the modulus of continuity.
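
As an editorial illustration of the network class described above (not the authors' constructive approximant), the short Python/NumPy sketch below runs a forward pass through a small network in which each neuron applies either the floor function or ReLU; the widths, random weights and activation assignment are illustrative assumptions.

# Minimal sketch of a "Floor-ReLU" style forward pass: every neuron applies
# either floor(z) or max(0, z).  This only illustrates the architecture class;
# it is not the constructive approximant from the paper, and the widths and
# random weights below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def floor_relu_forward(x, weights, biases, use_floor_masks):
    # use_floor_masks[l] is a boolean mask per neuron of layer l:
    # True -> floor activation, False -> ReLU activation.
    h = x
    for W, b, use_floor in zip(weights, biases, use_floor_masks):
        z = h @ W + b
        h = np.where(use_floor, np.floor(z), np.maximum(z, 0.0))
    return h

d = 4                                     # input dimension (illustrative)
dims = [d, 16, 16, 1]                     # layer widths (illustrative)
weights = [rng.normal(size=(m, n)) for m, n in zip(dims[:-1], dims[1:])]
biases = [rng.normal(size=n) for n in dims[1:]]
use_floor_masks = [rng.random(n) < 0.5 for n in dims[1:]]  # random assignment

x = rng.random((8, d))                    # a batch of 8 points in [0, 1]^d
print(floor_relu_forward(x, weights, biases, use_floor_masks).shape)  # (8, 1)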



Author(s): Hock Hung Chieng, Noorhaniza Wahid, Pauline Ong

The activation function is a key component of deep learning that performs non-linear mappings between inputs and outputs. The Rectified Linear Unit (ReLU) has been the most popular activation function across the deep learning community. However, ReLU has several shortcomings that can result in inefficient training of deep neural networks: 1) the negative cancellation property of ReLU treats negative inputs as unimportant information for learning, resulting in performance degradation; 2) the inherent predefined nature of ReLU is unlikely to promote additional flexibility, expressivity, and robustness in the networks; 3) the mean activation of ReLU is highly positive and leads to a bias shift effect in network layers; and 4) the multilinear structure of ReLU restricts the non-linear approximation power of the networks. To tackle these shortcomings, this paper introduces the Parametric Flatten-T Swish (PFTS) as an alternative to ReLU. With ReLU as the baseline, the experiments showed that PFTS improved classification accuracy on the SVHN dataset by 0.31%, 0.98%, 2.16%, 17.72%, 1.35%, 0.97%, 39.99%, and 71.83% on DNN-3A, DNN-3B, DNN-4, DNN-5A, DNN-5B, DNN-5C, DNN-6, and DNN-7, respectively. In addition, PFTS achieved the highest mean rank among the comparison methods. The proposed PFTS manifested higher non-linear approximation power during training and thereby improved the predictive performance of the networks.
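
As an editorial illustration, the Python/NumPy sketch below implements a Flatten-T Swish style activation with a threshold parameter T. The exact PFTS parameterization is defined in the paper; the formula used here (x·sigmoid(x) + T for x ≥ 0 and T otherwise, with a default T = −0.20 that would be trainable in the parametric variant) is an assumption based on the earlier Flatten-T Swish activation.

# Minimal sketch of a Flatten-T Swish style activation with threshold T.
# The exact PFTS formulation follows the paper; the expression and default
# T = -0.20 below are assumptions based on the earlier Flatten-T Swish.
import numpy as np

def flatten_t_swish(x, T=-0.20):
    # Swish-like response for non-negative inputs, flat value T for negative
    # inputs.  In a parametric variant, T would be a trainable parameter.
    sig = 1.0 / (1.0 + np.exp(-x))
    return np.where(x >= 0.0, x * sig + T, T)

x = np.linspace(-3.0, 3.0, 7)
print(np.round(flatten_t_swish(x), 3))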



2020 ◽ Author(s): Hector F. Calvo-Pardo, Tullio Mancini, Jose Olmo


2017 ◽ Vol 311 ◽ pp. 423-438 ◽ Author(s): Cesare Bracco, Durkbin Cho, Catterina Dagnino, Tae-wan Kim


2012 ◽ Vol 29 (8) ◽ pp. 599-612 ◽ Author(s): Larry L. Schumaker, Lujun Wang


Author(s): Ming-Jun Lai, Larry L. Schumaker


Author(s): Ming-Jun Lai, Larry L. Schumaker

