Analytic Study of Complex Fractional Tsallis’ Entropy with Applications in CNNs

Entropy ◽  
2018 ◽  
Vol 20 (10) ◽  
pp. 722 ◽  
Author(s):  
Rabha Ibrahim ◽  
Maslina Darus

In this paper, we study Tsallis’ fractional entropy (TFE) in a complex domain by applying the definition of complex probability functions. We study the upper and lower bounds of TFE based on some special functions. Moreover, applications in complex neural networks (CNNs) are illustrated to assess the accuracy of CNNs.
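
The complex-domain definition of TFE is not reproduced in the abstract; as a point of reference, below is a minimal Python sketch of the real-valued Tsallis entropy that the paper generalizes (the function name and the q -> 1 handling are illustrative choices, not taken from the paper).

```python
import numpy as np

def tsallis_entropy(p, q):
    """Classical Tsallis entropy S_q(p) = (1 - sum_i p_i**q) / (q - 1).

    Recovers the Shannon entropy in the limit q -> 1.
    """
    p = np.asarray(p, dtype=float)
    if np.isclose(q, 1.0):
        # q -> 1 limit: Shannon entropy, skipping zero-probability entries
        nz = p[p > 0]
        return -np.sum(nz * np.log(nz))
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

# Uniform distribution on 4 outcomes at entropic index q = 2:
print(tsallis_entropy([0.25, 0.25, 0.25, 0.25], q=2))  # 0.75
```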


1994 ◽  
Vol 3 (3) ◽  
pp. 411-419
Author(s):  
Andrzej Pelc

In group testing, sets of data undergo tests that reveal if a set contains faulty data. Assuming that data items are faulty with given probability and independently of one another, we investigate small families of tests that enable us to locate correctly all faulty data with probability converging to one as the amount of data grows. Upper and lower bounds on the minimum number of such tests are established for different probability functions, and respective location strategies are constructed.
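
The paper's near-optimal strategies are not described in the abstract; the sketch below only illustrates the probabilistic setting with a naive two-stage (Dorfman-style) strategy, where the group size 32 is an arbitrary illustrative choice.

```python
import random

def group_test(items, faulty):
    """One group test: True iff the tested set contains a faulty item."""
    return any(i in faulty for i in items)

def simulate(n=1000, p=0.01, seed=0):
    rng = random.Random(seed)
    # Each item is faulty independently with probability p.
    faulty = {i for i in range(n) if rng.random() < p}
    tests, located, group_size = 0, set(), 32
    # Stage 1: test disjoint groups; Stage 2: open positive groups item by item.
    for start in range(0, n, group_size):
        group = list(range(start, min(start + group_size, n)))
        tests += 1
        if group_test(group, faulty):
            for i in group:
                tests += 1
                if group_test([i], faulty):
                    located.add(i)
    return tests, located == faulty

print(simulate())  # e.g. (a few hundred tests, True)
```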



Author(s):  
Rabha W. Ibrahim

In this paper, we aim to introduce some geometric properties of analytic functions by utilizing the concept of fractional entropy in a complex domain. We extend the fractional entropy of Tsallis type to the complex z-plane by using some analytic functions. Based on this extension, we state specific new classes of analytic functions (of Schwarz-function type). Other geometric properties are validated in the sequel. Our development is completed by means of the Euler form and the Jack Lemma.
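
The abstract does not state the exact class definitions; for orientation only, here are the two standard ingredients in LaTeX, the Tsallis entropy of order q and the Schwarz-function condition on the open unit disk, which the paper combines via analytic functions.

```latex
% Tsallis entropy of order q (real, discrete case):
\[
  T_q(p) \;=\; \frac{1 - \sum_i p_i^{\,q}}{q - 1}, \qquad q \neq 1 .
\]
% Schwarz function on the open unit disk U = \{ z : |z| < 1 \}:
\[
  \varphi : U \to U \ \text{analytic}, \qquad
  \varphi(0) = 0, \quad |\varphi(z)| < 1 \ \text{for } z \in U .
\]
```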



Filomat ◽  
2011 ◽  
Vol 25 (4) ◽  
pp. 153-163
Author(s):  
Mohammad Masjed-Jamei

In this paper, we introduce two specific classes of functions in $L^p$-spaces that can generate new and known inequalities in the literature. By using some recent results related to the Chebyshev functional, we then obtain upper bounds for the absolute value of the two introduced functions and consider three particular examples. One of these examples is a suitable tool for finding upper and lower bounds of some incomplete special functions, such as the incomplete gamma and beta functions.
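
The concrete bounds are not quoted in the abstract; as a numerical companion, the hedged sketch below merely evaluates the incomplete gamma and beta functions that the third example targets, checking direct quadrature against SciPy's regularized implementations.

```python
import numpy as np
from scipy import special, integrate

# Lower incomplete gamma: gamma(a, x) = int_0^x t^(a-1) e^(-t) dt.
# SciPy's gammainc is the regularized version gamma(a, x) / Gamma(a).
a, x = 2.5, 1.8
quad_val, _ = integrate.quad(lambda t: t**(a - 1) * np.exp(-t), 0.0, x)
print(quad_val, special.gammainc(a, x) * special.gamma(a))      # should agree

# Incomplete beta: B(x; p, q) = int_0^x t^(p-1) (1-t)^(q-1) dt,
# regularized in SciPy as betainc(p, q, x) = B(x; p, q) / B(p, q).
p, q, x = 1.5, 2.0, 0.4
quad_val, _ = integrate.quad(lambda t: t**(p - 1) * (1 - t)**(q - 1), 0.0, x)
print(quad_val, special.betainc(p, q, x) * special.beta(p, q))  # should agree
```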



2002 ◽  
Vol 14 (2) ◽  
pp. 241-301 ◽  
Author(s):  
Michael Schmitt

In a great variety of neuron models, neural inputs are combined using the summing operation. We introduce the concept of multiplicative neural networks, which contain units that multiply their inputs instead of summing them, thus allowing inputs to interact nonlinearly. The class of multiplicative neural networks comprises such widely known and well-studied network types as higher-order networks and product unit networks. We investigate the complexity of computing and learning for multiplicative neural networks. In particular, we derive upper and lower bounds on the Vapnik-Chervonenkis (VC) dimension and the pseudo-dimension for various types of networks with multiplicative units. As the most general case, we consider feedforward networks consisting of product and sigmoidal units, showing that their pseudo-dimension is bounded from above by a polynomial with the same order of magnitude as the currently best-known bound for purely sigmoidal networks. Moreover, we show that this bound holds even when the unit type, product or sigmoidal, may be learned. Crucial for these results are bounds on the number of solution set components, calculated for new network classes. As to lower bounds, we construct product unit networks of fixed depth with super-linear VC dimension. For sigmoidal networks of higher order, we establish polynomial bounds that, in contrast to previous results, do not involve any restriction of the network order. We further consider various classes of higher-order units, also known as sigma-pi units, that are characterized by connectivity constraints. In terms of these, we derive some asymptotically tight bounds. Multiplication plays an important role in both neural modeling of biological behavior and computing and learning with artificial neural networks. We briefly survey research in biology and in applications where multiplication is considered an essential computational element. The results we present here provide new tools for assessing the impact of multiplication on the computational power and the learning capabilities of neural networks.
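
For concreteness, here is a minimal sketch of the two multiplicative unit types discussed above, a product unit and a higher-order (sigma-pi) unit, feeding a sigmoidal output; all weights and monomial choices are illustrative, not taken from the paper.

```python
import numpy as np

def product_unit(x, w):
    """Product unit: prod_i x_i ** w_i (inputs assumed positive so that
    arbitrary real exponents are well defined)."""
    return np.prod(np.power(x, w))

def sigma_pi_unit(x, monomials, coeffs):
    """Higher-order (sigma-pi) unit: weighted sum of input monomials.
    `monomials` lists index tuples, e.g. (1, 2) contributes x[1] * x[2]."""
    return sum(c * np.prod(x[list(m)]) for m, c in zip(monomials, coeffs))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, 2.0, 1.5])

# One hidden product unit feeding a sigmoidal output unit -- the mixed
# product/sigmoidal architecture whose pseudo-dimension is bounded above.
h = product_unit(x, np.array([1.0, -0.5, 2.0]))
print(sigmoid(3.0 * h - 1.0))

# A second-order sigma-pi unit on the same input.
print(sigma_pi_unit(x, [(0,), (1, 2), (0, 0)], [0.3, -1.2, 0.8]))
```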



2017 ◽  
Vol 58 (3-4) ◽  
pp. 238-246
Author(s):  
Zhixiang Chen ◽  
Feilong Cao

We address the construction and approximation for feed-forward neural networks (FNNs) with zonal functions on the unit sphere. The filtered de la Vallée-Poussin operator and the spherical quadrature formula are used to construct the spherical FNNs. In particular, the upper and lower bounds of approximation errors by the FNNs are estimated, where the best polynomial approximation of a spherical function is used as a measure of approximation error.
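
As a rough illustration of the architecture: a zonal-function network sees its input only through inner products with fixed centers on the sphere. In the sketch below the activation `phi` and the random centers are stand-ins; the paper instead builds its units with the filtered de la Vallée-Poussin operator and places nodes via a spherical quadrature formula.

```python
import numpy as np

def zonal_fnn(x, centers, coeffs, phi=np.cos):
    """Zonal-function network on the unit sphere: unit k depends on the
    input x only through the inner product <x, x_k>."""
    return sum(c * phi(x @ xk) for xk, c in zip(centers, coeffs))

def random_sphere_points(n, seed=0):
    # Stand-in for the paper's quadrature nodes: uniform random points on S^2.
    rng = np.random.default_rng(seed)
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

centers = random_sphere_points(8)
coeffs = np.linspace(-1.0, 1.0, 8)
x = random_sphere_points(1, seed=42)[0]
print(zonal_fnn(x, centers, coeffs))
```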



Mathematics ◽  
2019 ◽  
Vol 8 (1) ◽  
pp. 17 ◽  
Author(s):  
Abdollah Alhevaz ◽  
Maryam Baghipur ◽  
Hilal A. Ganie ◽  
Yilun Shang

The generalized distance matrix $D_\alpha(G)$ of a connected graph $G$ is defined as $D_\alpha(G) = \alpha\,Tr(G) + (1-\alpha)\,D(G)$, where $0 \le \alpha \le 1$, $D(G)$ is the distance matrix, and $Tr(G)$ is the diagonal matrix of the node transmissions. In this paper, we extend the concept of energy to the generalized distance matrix and define the generalized distance energy $E^{D_\alpha}(G)$. Some new upper and lower bounds for the generalized distance energy $E^{D_\alpha}(G)$ of $G$ are established based on parameters including the Wiener index $W(G)$ and the transmission degrees. Extremal graphs attaining these bounds are identified. It is found that the complete graph has the minimum generalized distance energy among all connected graphs, while among trees of order $n$ the star graph attains the minimum.
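
The energy convention is not restated in the abstract; the sketch below (networkx and numpy assumed) uses the common deviation-from-the-mean definition $E^{D_\alpha}(G)=\sum_i \lvert \lambda_i - \tfrac{2\alpha W(G)}{n} \rvert$ over the eigenvalues $\lambda_i$ of $D_\alpha(G)$, where the mean eigenvalue is $2\alpha W(G)/n$ since the trace of $D_\alpha(G)$ equals $\alpha \cdot 2W(G)$.

```python
import networkx as nx
import numpy as np

def generalized_distance_energy(G, alpha):
    """Sketch of E^{D_alpha}(G), assuming the deviation-from-the-mean
    convention for matrix energy (see the lead-in above)."""
    D = nx.floyd_warshall_numpy(G)         # distance matrix D(G)
    Tr = np.diag(D.sum(axis=1))            # node transmissions Tr(G)
    D_alpha = alpha * Tr + (1 - alpha) * D
    eig = np.linalg.eigvalsh(D_alpha)      # D_alpha(G) is symmetric
    return np.abs(eig - eig.mean()).sum()  # eig.mean() = 2*alpha*W(G)/n

# Complete graph vs. star on 6 nodes (the extremal cases named above):
print(generalized_distance_energy(nx.complete_graph(6), alpha=0.5))
print(generalized_distance_energy(nx.star_graph(5), alpha=0.5))
```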



2019 ◽  
Vol 2019 (1) ◽  
Author(s):  
Hui Lei ◽  
Gou Hu ◽  
Zhi-Jie Cao ◽  
Ting-Song Du

The main aim of this paper is to establish some Fejér-type inequalities involving hypergeometric functions in terms of GA-s-convexity. For this purpose, we construct a Hadamard k-fractional identity related to geometrically symmetric mappings. Moreover, we give upper and lower bounds for the weighted inequalities via products of two different mappings. Some applications of the presented results to special means are also provided.
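
The k-fractional identity itself is not reproduced in the abstract; as background, this is the classical Fejér inequality that such weighted, fractional, GA-s-convex variants refine (stated here for a convex f and a nonnegative weight w symmetric about the midpoint).

```latex
% Classical Fejer inequality: f convex on [a, b], w >= 0 integrable and
% symmetric about (a + b)/2.
\[
  f\!\Big(\frac{a+b}{2}\Big)\int_a^b w(x)\,dx
  \;\le\; \int_a^b f(x)\,w(x)\,dx
  \;\le\; \frac{f(a)+f(b)}{2}\int_a^b w(x)\,dx .
\]
```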



Mathematics ◽  
2021 ◽  
Vol 9 (5) ◽  
pp. 512
Author(s):  
Maryam Baghipur ◽  
Modjtaba Ghorbani ◽  
Hilal A. Ganie ◽  
Yilun Shang

The signless Laplacian reciprocal distance matrix for a simple connected graph $G$ is defined as $RQ(G)=\mathrm{diag}(RH(G))+RD(G)$. Here, $RD(G)$ is the Harary matrix (also called the reciprocal distance matrix), while $\mathrm{diag}(RH(G))$ is the diagonal matrix of the total reciprocal distances of the vertices. In the present work, some upper and lower bounds for the second-largest eigenvalue of the signless Laplacian reciprocal distance matrix of graphs are investigated in terms of various graph parameters. Besides, all graphs attaining these new bounds are characterized. Additionally, it is shown that among all connected graphs with $n$ vertices, the complete graph $K_n$ and the graph $K_n - e$ obtained from $K_n$ by deleting an edge $e$ have the maximum second-largest signless Laplacian reciprocal distance eigenvalue.
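
A small numerical sketch of the definition above (networkx and numpy assumed): the Harary matrix has entries $1/d(u,v)$ off the diagonal, and $RQ(G)$ adds the reciprocal transmissions on the diagonal; the example compares the second-largest eigenvalue for the two maximizers named above.

```python
import networkx as nx
import numpy as np

def signless_laplacian_reciprocal_distance(G):
    """RQ(G) = diag(RH(G)) + RD(G), with RD(G) the Harary (reciprocal
    distance) matrix and RH(G) its row sums (reciprocal transmissions)."""
    n = G.number_of_nodes()
    D = nx.floyd_warshall_numpy(G)
    RD = np.zeros_like(D)
    off = ~np.eye(n, dtype=bool)
    RD[off] = 1.0 / D[off]  # entries 1/d(u, v) for u != v
    return np.diag(RD.sum(axis=1)) + RD

def second_largest_eigenvalue(M):
    return np.sort(np.linalg.eigvalsh(M))[-2]

K6 = nx.complete_graph(6)
K6_minus_e = nx.complete_graph(6)
K6_minus_e.remove_edge(0, 1)
print(second_largest_eigenvalue(signless_laplacian_reciprocal_distance(K6)))
print(second_largest_eigenvalue(signless_laplacian_reciprocal_distance(K6_minus_e)))
```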



2020 ◽  
Vol 26 (2) ◽  
pp. 131-161
Author(s):  
Florian Bourgey ◽  
Stefano De Marco ◽  
Emmanuel Gobet ◽  
Alexandre Zhou

The multilevel Monte Carlo (MLMC) method developed by M. B. Giles [Multilevel Monte Carlo path simulation, Oper. Res. 56 (2008), no. 3, 607–617] has a natural application to the evaluation of nested expectations $\mathbb{E}[g(\mathbb{E}[f(X,Y)\mid X])]$, where $f, g$ are functions and $(X, Y)$ is a pair of independent random variables. Apart from the pricing of American-type derivatives, such computations arise in a large variety of risk valuations (VaR or CVaR of a portfolio, CVA) and in the assessment of margin costs for centrally cleared portfolios. In this work, we focus on the computation of initial margin. We analyze the properties of the corresponding MLMC estimators, for which we provide results of asymptotic optimality; at the technical level, we have to deal with the limited regularity of the outer function $g$ (which might fail to be everywhere differentiable). In parallel, we investigate upper and lower bounds for nested expectations as above, in the spirit of primal-dual algorithms for stochastic control problems.
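
As a baseline for the nested expectation, here is a minimal single-level nested Monte Carlo sketch; the MLMC estimator analyzed in the paper telescopes this over levels with geometrically growing inner sample sizes, which is not shown here. The toy $f$ and $g$ are illustrative, with $g$ the positive part to echo the limited regularity of the outer function.

```python
import numpy as np

def nested_mc(g, f, sample_x, sample_y_given_x, n_outer, n_inner, seed=0):
    """Single-level nested Monte Carlo for E[ g( E[ f(X, Y) | X ] ) ].
    The inner-sample noise biases g(inner mean) when g is nonlinear;
    MLMC trades off this bias against cost across levels."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_outer):
        x = sample_x(rng)
        inner = np.mean([f(x, sample_y_given_x(x, rng)) for _ in range(n_inner)])
        total += g(inner)
    return total / n_outer

# Toy check: X, Y independent standard normals, f(x, y) = x + y, g = positive
# part, so E[g(E[f | X])] = E[max(X, 0)] = 1/sqrt(2*pi) ~ 0.3989.
est = nested_mc(g=lambda v: max(v, 0.0),
                f=lambda x, y: x + y,
                sample_x=lambda rng: rng.normal(),
                sample_y_given_x=lambda x, rng: rng.normal(),
                n_outer=10_000, n_inner=64)
print(est)
```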


