Measuring fuzzy specificity using fuzzy unit hypercube

2018 ◽  
Vol 5 (1) ◽  
pp. 39-47
Author(s):  
Geoffrey O. Barini ◽  
Livingstone M. Ngoo ◽  
Ronald M. Waweru
Author(s):  
John H. Halton

Introduction and statement of results. We shall describe how, for successive integers N, the points {nξ}, with n = 0, 1, …, N − 1, are distributed in the closed unit interval U = [0, 1], by showing how the successive point {Nξ} modifies the partition of U produced by the previous points. The simple generalization to the k-dimensional sequence {nξ} = ({nξ(1)}, {nξ(2)}, …, {nξ(k)}), in the unit hypercube U^k, is also made.
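The distribution described above can be explored numerically. The following minimal sketch (my own illustration, not from Halton's paper; function names are hypothetical) computes the fractional parts {nξ} and the lengths of the subintervals they induce on [0, 1], treated as a circle. For irrational ξ, the three-distance theorem guarantees at most three distinct gap lengths.

```python
# Illustration only: the points {n*xi} for n = 0..N-1 and the partition
# of [0, 1] (as a circle) that they produce.
import math

def fractional_parts(xi, N):
    """Return the points {n*xi} for n = 0, 1, ..., N-1."""
    return [math.fmod(n * xi, 1.0) for n in range(N)]

def gap_lengths(points):
    """Lengths of the arcs into which the points partition the unit circle."""
    xs = sorted(points)
    gaps = [b - a for a, b in zip(xs, xs[1:])]
    gaps.append(1.0 - xs[-1] + xs[0])  # wrap-around arc
    return gaps

xi = (math.sqrt(5) - 1) / 2   # golden-ratio example; xi is irrational
pts = fractional_parts(xi, 8)
# By the three-distance theorem, the gaps take at most three distinct lengths.
print(sorted(set(round(g, 9) for g in gap_lengths(pts))))
```

Adding the next point {Nξ} splits exactly one existing subinterval, which is how the partition evolves as N grows.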


2019 ◽  
Vol 23 (23) ◽  
pp. 12521-12527
Author(s):  
Geoffrey O. Barini ◽  
Livingstone M. Ngoo ◽  
Ronald W. Mwangi

1987 ◽  
Vol 24 (03) ◽  
pp. 609-618 ◽  
Author(s):  
Laurence A. Baxter ◽  
Chul Kim

A continuum structure function γ is a non-decreasing mapping from the unit hypercube to the unit interval. Block and Savits (1984) use the sets and to determine bounds on the distribution of γ (X) when X is a vector of associated random variables. It is shown that, if γ admits of a modular decomposition, improved bounds may be obtained.
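A concrete instance may help fix the definition. This is a hypothetical illustration of my own, not Baxter and Kim's construction: a simple continuum structure function on [0, 1]^3 built from min/max, where the parallel pair {1, 2} forms a module of the kind a modular decomposition would exploit.

```python
# Hypothetical example of a continuum structure function:
# a series connection of component 0 with a parallel pair (1, 2).
def gamma(x):
    """gamma: [0,1]^3 -> [0,1], nondecreasing in each coordinate."""
    return min(x[0], max(x[1], x[2]))

# Nondecreasing: raising any coordinate cannot lower gamma.
x = (0.4, 0.2, 0.7)
y = (0.5, 0.2, 0.7)   # x <= y coordinatewise
assert gamma(x) <= gamma(y)
```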


Author(s):  
Laurence A. Baxter

A continuum structure function is a nondecreasing mapping from the unit hypercube to the unit interval. This paper continues the author's work on the subject, extending Griffith's definitions of coherency to such functions and studying the analytic properties of a continuum structure function based on Natvig's ‘second suggestion’.


1994 ◽  
Vol 6 (6) ◽  
pp. 1233-1243 ◽  
Author(s):  
Yoshifusa Ito

Using only an elementary constructive method, we prove the universal approximation capability of three-layered feedforward neural networks that have sigmoid units on two layers. We regard the Heaviside function as a special case of the sigmoid function and measure the accuracy of approximation in either the supremum norm or the Lp-norm. Given a continuous function defined on a unit hypercube and the required accuracy of approximation, we can estimate the numbers of necessary units on the respective sigmoid unit layers. In the case where the sigmoid function is the Heaviside function, our result improves the estimate of Kůrková (1992). If the accuracy of approximation is measured in the Lp-norm, our estimate also improves that of Kůrková (1992), even when the sigmoid function is not the Heaviside function.
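The simplest one-dimensional instance of such a network is easy to write down. The sketch below is my own illustration, not Ito's construction: a one-hidden-layer network of Heaviside units whose weighted steps interpolate a continuous function on a grid, so the sup-norm error shrinks as the number of units grows.

```python
# Illustration only: approximating a continuous f on [0,1] by a sum of
# Heaviside steps (the Heaviside function as a limiting sigmoid unit).
import math

def heaviside(t):
    return 1.0 if t >= 0 else 0.0

def step_network(f, m):
    """One-hidden-layer Heaviside network with m+1 units matching f
    on the grid k/m; returns the network as a callable."""
    knots = [k / m for k in range(m + 1)]
    vals = [f(t) for t in knots]
    # Output weights are the jumps of the piecewise-constant approximant.
    weights = [vals[0]] + [vals[k] - vals[k - 1] for k in range(1, m + 1)]
    def net(x):
        return sum(w * heaviside(x - t) for w, t in zip(weights, knots))
    return net

f = lambda x: math.sin(math.pi * x)
net = step_network(f, 200)
# Sup-norm error on a fine grid is bounded by f's modulus of continuity at 1/m.
err = max(abs(f(x) - net(x)) for x in [i / 1000 for i in range(1001)])
assert err < 0.05
```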


2006 ◽  
Vol 197 (1) ◽  
pp. 282-285 ◽  
Author(s):  
Tim Pillards ◽  
Bart Vandewoestyne ◽  
Ronald Cools

2016 ◽  
Vol 28 (12) ◽  
pp. 2585-2593 ◽  
Author(s):  
Hien D. Nguyen ◽  
Luke R. Lloyd-Jones ◽  
Geoffrey J. McLachlan

The mixture-of-experts (MoE) model is a popular neural network architecture for nonlinear regression and classification. The class of MoE mean functions is known to be uniformly convergent to any unknown target function, assuming that the target function is from a Sobolev space that is sufficiently differentiable and that the domain of estimation is a compact unit hypercube. We provide an alternative result, which shows that the class of MoE mean functions is dense in the class of all continuous functions over arbitrary compact domains of estimation. Our result can be viewed as a universal approximation theorem for MoE models. The theorem we present allows MoE users to be confident in applying such models for estimation when data arise from nonlinear and nondifferentiable generative processes.
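An MoE mean function of the kind the theorem covers can be sketched in a few lines. This is a hedged illustration with names of my own choosing, not code from the paper: softmax gates blend linear experts over points of the unit hypercube.

```python
# Illustration only: an MoE mean function m(x) = sum_g softmax_g(x) * (a_g.x + b_g).
import math

def moe_mean(x, gate_params, expert_params):
    """gate_params: list of (w_g, c_g); expert_params: list of (a_g, b_g)."""
    scores = [sum(wi * xi for wi, xi in zip(w, x)) + c for w, c in gate_params]
    mx = max(scores)
    exps = [math.exp(s - mx) for s in scores]        # numerically stable softmax
    Z = sum(exps)
    experts = [sum(ai * xi for ai, xi in zip(a, x)) + b for a, b in expert_params]
    return sum((e / Z) * y for e, y in zip(exps, experts))

# Two experts on [0,1]^2: the gate steers between the two linear pieces.
gates = [((5.0, 0.0), -2.5), ((-5.0, 0.0), 2.5)]
experts = [((1.0, 0.0), 0.0), ((0.0, 1.0), 0.0)]
print(moe_mean((0.9, 0.1), gates, experts))  # gate favors expert 0, so near 0.9
```

The universal approximation result says that, with enough experts, such blends can come uniformly close to any continuous target on a compact domain.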

