Notice of Retraction: Perceptron Linear Activation Function Design with CMOS-Memristive Circuits

Author(s):  
Bexultan Nursultan ◽  
Olga Krestinskaya
2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Yao Ying ◽  
Nengbo Zhang ◽  
Ping He ◽  
Silong Peng

The activation function is a basic component of the convolutional neural network (CNN), providing the nonlinear transformation capability the network requires. Many activation functions make the original input compete with different linear or nonlinear mapping terms to obtain different nonlinear transformation capabilities. Recently, funnel activation (FReLU) made the original input compete with a spatial condition, so FReLU offers not only nonlinear transformation but also pixel-wise modeling capability. We summarize the competition mechanism in activation functions and then propose a novel activation function design template, the competitive activation function (CAF), which promotes competition among different elements and generalizes all activation functions that use competition mechanisms. Following CAF, we propose a parametric funnel rectified exponential unit (PFREU), which promotes competition among a linear mapping, a nonlinear mapping, and a spatial condition. Experiments with three classical convolutional neural networks on four datasets of different sizes demonstrate the superiority of our method.
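The competition mechanism the abstract describes can be illustrated with a minimal sketch: in ReLU the input competes with a constant zero, while in a FReLU-style funnel activation it competes with a spatial condition T(x). Here the spatial condition is a simple 3x3 mean filter, a stand-in assumption for FReLU's learned depthwise convolution.

```python
import numpy as np

def relu(x):
    # ReLU: the input competes with a constant zero term.
    return np.maximum(x, 0.0)

def mean3x3(x):
    # Zero-padded 3x3 mean filter standing in for FReLU's learned
    # depthwise convolution (an illustrative assumption, not the
    # paper's actual T(x)).
    p = np.pad(x, 1)
    out = np.zeros_like(x, dtype=float)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out += p[1 + di:1 + di + x.shape[0], 1 + dj:1 + dj + x.shape[1]]
    return out / 9.0

def funnel_activation(x):
    # FReLU-style competition: the input competes with its own
    # spatial condition instead of a constant.
    return np.maximum(x, mean3x3(x))

x = np.array([[-1.0, 2.0], [3.0, 4.0]])
print(relu(x))
print(funnel_activation(x))
```

Where the spatial condition exceeds zero, the funnel activation passes more context through than ReLU would; a CAF-style design lets further terms (e.g. a learned linear map) join the same max-competition.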


2016 ◽  
Author(s):  
Helen Farman ◽  
Jianyao Wu ◽  
Karin Gustafsson ◽  
Sara Windahl ◽  
Sung Kim ◽  
...  

2020 ◽  
Vol 2020 (10) ◽  
pp. 54-62
Author(s):  
Oleksii VASYLIEV ◽  

The problem of applying neural networks to calculate the ratings used in banking when deciding whether to grant loans to borrowers is considered. The task is to determine the borrower's rating function from a set of statistical data on the effectiveness of loans provided by the bank. When constructing a regression model to calculate the rating function, its general form must be known in advance; in that case, the task reduces to calculating the parameters that enter the expression for the rating function. In contrast, when using neural networks, there is no need to specify the general form of the rating function. Instead, a certain neural network architecture is chosen and its parameters are calculated from the statistical data. Importantly, the same neural network architecture can be used to process different sets of statistical data. The disadvantages of neural networks include the need to calculate a large number of parameters, and there is no universal algorithm for determining the optimal architecture. As an example of using neural networks to determine a borrower's rating, a model system is considered in which the rating is given by a known non-analytical rating function. A neural network with two inner layers, containing three and two neurons respectively with a sigmoid activation function, is used for the modeling. It is shown that the neural network recovers the borrower's rating function with quite acceptable accuracy.
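The described architecture can be sketched as a tiny numpy MLP: two hidden layers with three and two sigmoid neurons, a linear output, and full-batch gradient descent. The target "rating" function and the two borrower features are hypothetical stand-ins, not the paper's actual data.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy non-analytical "rating" of two borrower features (hypothetical
# stand-ins for, e.g., income and debt load).
def true_rating(x):
    return np.where(x[:, 0] - x[:, 1] > 0.0, 0.8, 0.2)

X = rng.uniform(-1, 1, size=(200, 2))
y = true_rating(X).reshape(-1, 1)

# Two hidden layers with 3 and 2 sigmoid neurons, as in the text.
W1, b1 = rng.normal(0, 1, (2, 3)), np.zeros(3)
W2, b2 = rng.normal(0, 1, (3, 2)), np.zeros(2)
W3, b3 = rng.normal(0, 1, (2, 1)), np.zeros(1)

lr = 0.5
for epoch in range(2000):
    # Forward pass.
    h1 = sigmoid(X @ W1 + b1)
    h2 = sigmoid(h1 @ W2 + b2)
    out = h2 @ W3 + b3                  # linear output layer
    # Backward pass (MSE gradient, constant factor absorbed into lr).
    d3 = (out - y) / len(X)
    d2 = (d3 @ W3.T) * h2 * (1 - h2)
    d1 = (d2 @ W2.T) * h1 * (1 - h1)
    W3 -= lr * h2.T @ d3; b3 -= lr * d3.sum(0)
    W2 -= lr * h1.T @ d2; b2 -= lr * d2.sum(0)
    W1 -= lr * X.T @ d1;  b1 -= lr * d1.sum(0)

# Evaluate the fitted rating function on the training data.
h1 = sigmoid(X @ W1 + b1)
h2 = sigmoid(h1 @ W2 + b2)
out = h2 @ W3 + b3
mse = float(np.mean((out - y) ** 2))
print(f"final training MSE: {mse:.4f}")
```

Note that, as the abstract observes, the same architecture could be retrained on a different statistical dataset without changing any of the code above except the data.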


2019 ◽  
Vol 12 (3) ◽  
pp. 156-161 ◽  
Author(s):  
Aman Dureja ◽  
Payal Pahwa

Background: Activation functions play an important role in building deep neural networks, and their choice affects the network both in terms of optimization and of the quality of the results. Several activation functions have been introduced in machine learning for practical applications, but which activation function should be used in the hidden layers of a deep neural network has not been clearly identified. Objective: The primary objective of this analysis was to determine which activation function should be used in the hidden layers of deep neural networks to solve complex non-linear problems. Methods: The comparative model was configured on a dataset of two classes (Cat/Dog). The network used three convolutional layers, with a pooling layer introduced after each one. The dataset was divided into two parts: the first 8000 images were used for training the network and the next 2000 images for testing it. Results: The experimental comparison was performed by analyzing the network with different activation functions (ReLU, Tanh, SELU, PReLU, ELU) in the hidden layers, recording validation error and accuracy on the Cat/Dog dataset. Overall, ReLU gave the best performance, with a validation loss of 0.3912 and a validation accuracy of 0.8320 at the 25th epoch. Conclusion: A CNN model with ReLU in its hidden layers (three hidden layers here) gives the best results and improves overall performance in terms of both accuracy and speed. These advantages of ReLU in the hidden layers of a CNN support effective and fast retrieval of images from databases.
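The five activations compared above differ mainly in how they treat negative inputs; a minimal numpy sketch of their definitions makes the differences visible (the SELU constants are the standard published values, and the PReLU slope is a learnable parameter, fixed here at 0.25 for illustration):

```python
import numpy as np

def relu(x):
    # Zero for negative inputs, identity for positive.
    return np.maximum(x, 0.0)

def elu(x, alpha=1.0):
    # Smooth exponential saturation toward -alpha for negative inputs.
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def selu(x, alpha=1.6732632423543772, scale=1.0507009873554805):
    # Scaled ELU with the standard self-normalizing constants.
    return scale * elu(x, alpha)

def prelu(x, a=0.25):
    # Leaky negative branch; a is normally learned, fixed here.
    return np.where(x > 0, x, a * x)

x = np.linspace(-2.0, 2.0, 5)
for name, f in [("ReLU", relu), ("Tanh", np.tanh), ("SELU", selu),
                ("PReLU", prelu), ("ELU", elu)]:
    print(f"{name:6s}", np.round(f(x), 3))
```

ReLU's hard zero cut is the cheapest to compute, which is consistent with the speed advantage reported in the conclusion, while ELU/SELU trade that simplicity for smoother gradients on the negative side.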


Author(s):  
Keith M. Martin

This chapter discusses cryptographic mechanisms for providing data integrity. We begin by identifying different levels of data integrity that can be provided. We then look in detail at hash functions, explaining the different security properties that they have, as well as presenting several different applications of a hash function. We then look at hash function design and illustrate this by discussing the hash function SHA-3. Next, we discuss message authentication codes (MACs), presenting a basic model and discussing basic properties. We compare two different MAC constructions, CBC-MAC and HMAC. Finally, we consider different ways of using MACs together with encryption. We focus on authenticated encryption modes, and illustrate these by describing Galois Counter Mode (GCM).
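Two of the mechanisms surveyed, SHA-3 hashing and HMAC, can be sketched directly with Python's standard library (AES-GCM authenticated encryption needs a third-party library such as `cryptography`, so it is only noted here; the message and key below are illustrative):

```python
import hashlib
import hmac

message = b"transfer 100 to account 42"

# Hash function: SHA3-256 produces a fixed-length digest; any change
# to the message changes the digest unpredictably, but anyone can
# recompute it -- a hash alone does not authenticate the sender.
digest = hashlib.sha3_256(message).hexdigest()
print("SHA3-256:", digest)

# MAC: HMAC binds the digest to a secret key, so only a key holder
# can produce or verify the tag (HMAC-SHA256 here).
key = b"shared-secret-key"  # illustrative key, not a real secret
tag = hmac.new(key, message, hashlib.sha256).digest()

# Verification recomputes the tag and compares in constant time.
check = hmac.new(key, message, hashlib.sha256).digest()
print("MAC valid:", hmac.compare_digest(tag, check))

# A tampered message yields a different tag and fails verification.
bad = hmac.new(key, b"transfer 900 to account 42", hashlib.sha256).digest()
print("Tampered valid:", hmac.compare_digest(tag, bad))
```

The constant-time `hmac.compare_digest` matters because a naive byte-by-byte comparison leaks timing information about how many leading tag bytes matched.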


2014 ◽  
Vol 143 ◽  
pp. 182-196 ◽  
Author(s):  
Sartaj Singh Sodhi ◽  
Pravin Chandra
