generalized networks
Recently Published Documents

TOTAL DOCUMENTS: 67 (five years: 6)
H-INDEX: 11 (five years: 1)

2021
Author(s): Philip Naveen

Deep-learning models estimate values using backpropagation. The activation function within hidden layers is a critical component in minimizing loss in deep neural networks. The Rectified Linear Unit (ReLU) has been the dominant activation function for the past decade. Swish and Mish are newer activation functions that have been shown to yield better results than ReLU under specific circumstances. Phish is a novel activation function proposed here. It is a composite function defined as f(x) = x·TanH(GELU(x)), whose derivative shows no apparent discontinuities over the domain observed. Four generalized networks were constructed using Phish, Swish, Sigmoid, and TanH, with SoftMax as the output function. Using images from the MNIST and CIFAR-10 databanks, these networks were trained to minimize sparse categorical crossentropy. A large-scale cross-validation was simulated using stochastic Markov chains to account for the law of large numbers in the probability values. Statistical tests support the research hypothesis that Phish can outperform the other activation functions in classification. Future experiments would involve testing Phish in unsupervised learning algorithms and comparing it to more activation functions.
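
The abstract gives the closed form of Phish but not an implementation or framework. The following is a minimal sketch in TensorFlow/Keras assuming the built-in GELU (tf.nn.gelu); the layer widths, optimizer, and MNIST-shaped input are illustrative choices, not taken from the paper.

import tensorflow as tf

def phish(x):
    # Phish activation from the abstract: f(x) = x * TanH(GELU(x)).
    return x * tf.math.tanh(tf.nn.gelu(x))

# Hypothetical small MNIST classifier wiring Phish into the hidden layers,
# with a SoftMax output trained on sparse categorical crossentropy,
# matching the setup described in the abstract.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation=phish),
    tf.keras.layers.Dense(128, activation=phish),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])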


2021
Vol 4 (1)
Author(s): Yuanzhao Zhang, Vito Latora, Adilson E. Motter

When describing complex interconnected systems, one often has to go beyond the standard network description to account for generalized interactions. Here, we establish a unified framework to simplify the stability analysis of cluster synchronization patterns for a wide range of generalized networks, including hypergraphs, multilayer networks, and temporal networks. The framework is based on finding a simultaneous block diagonalization of the matrices encoding the synchronization pattern and the network topology. As an application, we use simultaneous block diagonalization to unveil an intriguing type of chimera states that appear only in the presence of higher-order interactions. The unified framework established here can be extended to other dynamical processes and can facilitate the discovery of emergent phenomena in complex systems with generalized interactions.
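
The abstract names simultaneous block diagonalization (SBD) as the core tool but does not spell out an algorithm. The sketch below is a simple heuristic illustration in Python/NumPy, not the authors' method: a generic random linear combination of symmetric matrices shares their common invariant subspaces, so its eigenbasis can expose a common block structure (possibly coarser than the finest SBD). The function name, tolerance, and the toy Laplacian/cluster-indicator example are assumptions made for the illustration.

import numpy as np
from scipy.sparse.csgraph import connected_components

def simultaneous_block_diagonalization(matrices, tol=1e-9, seed=0):
    # Heuristic sketch: find one orthogonal basis that (approximately)
    # block-diagonalizes every real symmetric matrix in `matrices` at once.
    rng = np.random.default_rng(seed)
    combo = sum(rng.standard_normal() * np.asarray(A, dtype=float)
                for A in matrices)
    _, basis = np.linalg.eigh(combo)   # orthonormal candidate basis
    n = basis.shape[0]
    coupled = np.eye(n, dtype=bool)
    for A in matrices:
        transformed = basis.T @ np.asarray(A, dtype=float) @ basis
        coupled |= np.abs(transformed) > tol
    # Connected components of the coupling pattern define the common blocks.
    n_blocks, labels = connected_components(coupled, directed=False)
    order = np.argsort(labels)         # reorder basis columns block by block
    return basis[:, order], n_blocks

# Toy example: the Laplacian of a 4-node cycle and the indicator of the
# cluster {1, 2}; both are preserved by the swap (1 2)(3 4), so they share
# a two-block structure.
L = np.array([[ 2, -1, -1,  0],
              [-1,  2,  0, -1],
              [-1,  0,  2, -1],
              [ 0, -1, -1,  2]], dtype=float)
S = np.diag([1.0, 1.0, 0.0, 0.0])
P, k = simultaneous_block_diagonalization([L, S])
print(k)                               # 2 blocks of size 2
print(np.round(P.T @ L @ P, 6))        # block-diagonal in the new basis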


2020
Vol 13(62) (1)
pp. 303-330
Author(s): Massoud Aman, Reza Ghanbari, Donya Heydari, ...

2020
Vol 31 (01)
pp. 7-21
Author(s): Fernando Arroyo Montoro, Sandra Gómez-Canaval, Karina Jiménez Vega, Alfonso Ortega de la Puente

In this paper we consider a new variant of Networks of Polarized Evolutionary Processors (NPEP), named Generalized Networks of Polarized Evolutionary Processors (GNPEP), and propose it as a solver of combinatorial optimization problems. Unlike the NPEP model, GNPEP evaluates the processed data numerically, from a quantitative perspective, so the model may be better suited to solving specific hard problems efficiently and economically. In particular, we propose a GNPEP network that solves a well-known NP-hard problem, the n-queens problem. We prove that this GNPEP algorithm requires linear time in the size of a given instance. This result suggests that the GNPEP model is well suited to combinatorial optimization problems in which integer restrictions play a relevant role.


2017
Vol 100
pp. 91-103
Author(s): Eleni Stai, Vasileios Karyotis, Antonia-Chrysanthi Bitsaki, Symeon Papavassiliou
