Extension Neural Network-Type 3

Author(s):  
Meng-Hui Wang
Keyword(s):  
Type 3 ◽  
2021 ◽  
Author(s):  
Kathakali Sarkar ◽  
Deepro Bonnerjee ◽  
Rajkamal Srivastava ◽  
Sangram Bagh

Here, we adapted the basic concept of artificial neural networks (ANNs) and experimentally demonstrated a broadly applicable single-layer ANN-type architecture with molecularly engineered bacteria to perform complex irreversible...


Nanoscale ◽  
2021 ◽  
Author(s):  
Xiaoyan Wang ◽  
Wenxi Zhao ◽  
Yang Fei ◽  
Yanjuan Sun ◽  
Fan Dong

Three-dimensional catalysts have attracted great attention in the field of the hydrogen evolution reaction (HER). However, structural innovation and performance enhancement remain a great challenge. Herein, we designed...


2018 ◽  
Vol 98 (15) ◽  
pp. 2639-2647 ◽  
Author(s):  
Danilo Costarelli ◽  
Gianluca Vinti

2012 ◽  
Vol 2012 ◽  
pp. 1-12 ◽  
Author(s):  
Meng-Hui Wang

The values of electronic components always deviate from their nominal values, while modern circuits demand ever greater functional precision, which makes the automatic fault diagnosis of analog circuits complex and difficult. This paper presents an extension-neural-network-type-1 (ENN-1) based method for the fault diagnosis of analog circuits. The proposed method combines extension theory and neural networks to create a novel neural network. Using matter-element models of the fault types and a correlation function, the degree of correlation between a tested pattern and each fault type can be calculated; the cause of the circuit malfunction can then be diagnosed directly from an analysis of the correlation degrees. The experimental results show that the proposed method has high diagnostic accuracy and is more fault tolerant than the multilayer neural network (MNN) and k-means-based methods.
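
The abstract names the correlation step but not its exact formulas. The following is a minimal Python sketch of that step, assuming the classical extension distance of a value to a feature interval; the fault names, intervals, and test values are all hypothetical illustrations, not data from the paper.

```python
import numpy as np

# Hypothetical matter-element models: for each fault type, a classical
# domain interval <a, b> per measured feature (e.g., node voltages).
fault_models = {
    "no_fault": [(4.5, 5.5), (2.0, 2.4)],
    "R1_short": [(6.0, 7.2), (2.5, 3.1)],
    "C2_open":  [(3.1, 4.0), (1.2, 1.8)],
}

def extension_distance(x, a, b):
    """Classical extension distance of x to the interval <a, b>:
    negative when x lies inside the interval, positive outside."""
    center, half_width = (a + b) / 2.0, (b - a) / 2.0
    return abs(x - center) - half_width

def correlation_degree(pattern, intervals):
    """Sum of per-feature correlation values; each feature's value is the
    negated extension distance normalized by the interval half-width,
    so a larger total means a better match to the fault model."""
    total = 0.0
    for x, (a, b) in zip(pattern, intervals):
        half_width = (b - a) / 2.0
        total += -extension_distance(x, a, b) / half_width
    return total

def diagnose(pattern):
    """Pick the fault type whose matter-element model correlates best."""
    scores = {k: correlation_degree(pattern, iv)
              for k, iv in fault_models.items()}
    return max(scores, key=scores.get), scores

label, scores = diagnose([6.4, 2.8])
print(label, scores)  # picks "R1_short" for this illustrative pattern
```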


1995 ◽  
Vol 9 (10) ◽  
pp. 829-831 ◽  
Author(s):  
E A Gladkov ◽  
A V Maloletkov ◽  
G den Ouden

2020 ◽  
Vol 9 (4) ◽  
pp. 1 ◽  
Author(s):  
Arman I. Mohammed ◽  
Ahmed AK. Tahir

A new optimization algorithm called Adam Merged with AMSgrad (AMAMSgrad) is modified and used to train a convolutional neural network of the Wide Residual Network type, Wide ResNet (WRN), for image classification. The modification includes the use of the second moment as in AMSgrad and the use of the Adam updating rule, but with (2) as the power of the denominator. The main aim is to improve the performance of the AMAMSgrad optimizer through a proper selection of the power of the denominator. Implementing AMAMSgrad and the two known methods (Adam and AMSgrad) on the Wide ResNet with the CIFAR-10 dataset for image classification reveals that WRN performs better with the AMAMSgrad optimizer than with Adam and AMSgrad. The training, validation, and testing accuracies are all improved with AMAMSgrad over Adam and AMSgrad, and AMAMSgrad needs fewer epochs to reach maximum performance. With AMAMSgrad, the training accuracies are 90.45%, 97.79%, 99.98%, and 99.99% at epochs 60, 120, 160, and 200, respectively, while the validation accuracies at the same epochs are 84.89%, 91.53%, 95.05%, and 95.23%. For testing, the WRN with AMAMSgrad provides an overall accuracy of 94.8%. All of these accuracies exceed those provided by WRN with Adam and AMSgrad. The classification metrics indicate that the given WRN architecture performs well and with high confidence under all three optimizers, especially the AMAMSgrad optimizer.
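
For readers unfamiliar with the update rules being compared, here is a minimal NumPy sketch of an AMSgrad-style step with the denominator exponent exposed as a parameter. Standard Adam/AMSgrad uses 0.5 (a square root); since the excerpt does not fully specify the paper's modified exponent, `denom_power` is an assumption to be tuned, not the authors' published rule.

```python
import numpy as np

def amsgrad_step(theta, grad, state, lr=1e-3, beta1=0.9, beta2=0.999,
                 eps=1e-8, denom_power=0.5):
    """One AMSgrad-style parameter update with a configurable
    exponent on the second-moment denominator."""
    m, v, v_hat, t = state["m"], state["v"], state["v_hat"], state["t"] + 1
    m = beta1 * m + (1 - beta1) * grad        # first moment (momentum)
    v = beta2 * v + (1 - beta2) * grad ** 2   # second moment
    v_hat = np.maximum(v_hat, v)              # AMSgrad: non-decreasing v
    m_hat = m / (1 - beta1 ** t)              # bias correction
    theta = theta - lr * m_hat / (v_hat ** denom_power + eps)
    state.update(m=m, v=v, v_hat=v_hat, t=t)
    return theta, state

# Usage on a toy quadratic loss f(theta) = ||theta||^2 / 2, so grad = theta.
theta = np.array([1.0, -2.0])
state = dict(m=np.zeros(2), v=np.zeros(2), v_hat=np.zeros(2), t=0)
for _ in range(1000):
    theta, state = amsgrad_step(theta, grad=theta, state=state, lr=0.05)
print(theta)  # approaches the minimizer [0, 0]
```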


Author(s):  
Vladimír Konečný ◽  
Oldřich Trenz ◽  
Milan Sepši

Neural networks are a modern, effective, and practical instrument for decision-making support. To make use of them, we must not only select the neural network type and structure but also adjust the data accordingly; using unsuitable data can yield an imprecise or entirely incorrect model. The need to adjust the input data arises from the features of the chosen neural network type, from the use of different metric systems for object attributes, from the weights, i.e., the importance, of individual attributes, and from the need to establish representatives of the classifying sets and learn their characteristics. For classification itself, a model in which the number of output neurons equals the number of classifying sets suffices. Nonetheless, a model with a greater number of neurons assembled into a matrix can reveal more about the problem and provides clearer visual information. A sketch of this kind of data adjustment follows.
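
As an illustration of the data adjustment and output layout described above, the following NumPy sketch rescales attributes measured in different units, applies importance weights, and classifies with one output neuron per classifying set. All attribute values, weights, and class counts are hypothetical, chosen only to make the steps concrete.

```python
import numpy as np

# Hypothetical object attributes measured on different scales (the
# "different metric systems" mentioned above), plus importance weights.
X = np.array([[170.0, 65.0], [182.0, 90.0], [158.0, 52.0]])  # raw objects
attribute_weights = np.array([1.0, 0.5])  # assumed relative importance

# Data adjustment: rescale each attribute to [0, 1], then apply weights,
# so no attribute dominates merely because of its unit of measurement.
X_adj = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
X_adj *= attribute_weights

# Classification model: one output neuron per classifying set.
n_classes = 3
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(X.shape[1], n_classes))
b = np.zeros(n_classes)

def predict(x):
    """Softmax over one output neuron per class; argmax is the class."""
    logits = x @ W + b
    p = np.exp(logits - logits.max())
    return p / p.sum()

print(predict(X_adj[0]))  # class membership scores summing to 1
```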

