Optimized Activation for Quantum-Inspired Self-supervised Neural Network based Fully Automated Brain MR Image Segmentation

2020 ◽  
Author(s):  
Debanjan Konar ◽  
Siddhartha Bhattacharyya ◽  
Bijaya Ketan Panigrahi

The slow-convergence problem degrades the segmentation performance of the recently proposed Quantum-Inspired Self-supervised Neural Network models owing to the lack of suitable tailoring of the inter-connection weights. Hence, incorporating quantum-inspired meta-heuristics into the Quantum-Inspired Self-supervised Neural Network models optimizes their hyper-parameters and inter-connection weights. This paper proposes an optimized version of a Quantum-Inspired Self-supervised Neural Network (QIS-Net) model for optimal segmentation of brain Magnetic Resonance (MR) images. The suggested Optimized Quantum-Inspired Self-supervised Neural Network (Opti-QISNet) model resembles the architecture of QIS-Net, and its operations are leveraged to obtain the optimal segmentation outcome. The optimized activation function employed in the presented model is referred to as the Quantum-Inspired Optimized Multi-Level Sigmoidal (Opti-QSig) activation. The Opti-QSig activation function is optimized by three quantum-inspired meta-heuristics with fitness evaluation using Otsu's multi-level thresholding. Rigorous experiments have been conducted on Dynamic Susceptibility Contrast (DSC) brain MR images from the Nature data repository. The experimental outcomes show that the proposed self-supervised Opti-QISNet model offers a promising alternative to deeply supervised neural network architectures (UNet and FCNNs) in medical image segmentation and outperforms our recently developed models QIBDS Net and QIS-Net.
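A minimal, illustrative sketch of how a multi-level sigmoidal activation can be coupled with Otsu's multi-level thresholding (the criterion used here for fitness evaluation) is given below; the function, steepness parameter, and the way the thresholds enter the activation are assumptions for illustration and do not reproduce the authors' Opti-QSig formulation.

```python
# Hypothetical sketch: a multi-level sigmoidal activation whose transition levels are
# taken from Otsu's multi-level thresholding. Not the authors' Opti-QSig activation;
# the steepness and class count are illustrative assumptions.
import numpy as np
from skimage.filters import threshold_multiotsu  # scikit-image

def multilevel_sigmoid(x, thresholds, steepness=10.0):
    """Sum of sigmoid steps, one per threshold, rescaled to [0, 1]."""
    x = np.asarray(x, dtype=float)
    steps = sum(1.0 / (1.0 + np.exp(-steepness * (x - t))) for t in thresholds)
    return steps / len(thresholds)

# Example on a synthetic stand-in for a DSC brain MR slice.
rng = np.random.default_rng(0)
mr_slice = rng.random((64, 64))                      # placeholder image in [0, 1]
levels = threshold_multiotsu(mr_slice, classes=4)    # 3 thresholds -> 4 intensity classes
segment_response = multilevel_sigmoid(mr_slice, levels)
```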


Energies ◽  
2021 ◽  
Vol 14 (14) ◽  
pp. 4242
Author(s):  
Fausto Valencia ◽  
Hugo Arcos ◽  
Franklin Quilumba

The purpose of this research is the evaluation of artificial neural network models for predicting the stresses in a 400 MVA power transformer winding conductor caused by the circulation of fault currents. The models were compared based on the behavior of the training, validation, and test errors. Different combinations of hyperparameters were analyzed based on the variation of architectures, optimizers, and activation functions. The data for the process were generated from finite-element simulations performed in the FEMM software. The design of the Artificial Neural Network was performed using the Keras framework. As a result, a model with one hidden layer, the Adam optimizer, and the ReLU activation function was the best-suited architecture for the problem at hand. The final Artificial Neural Network model predictions were compared with the Finite Element Method results, showing good agreement while requiring a much shorter solution time.
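As a hedged illustration of the reported best configuration (one hidden layer, ReLU activation, Adam optimizer), a Keras sketch follows; the hidden-layer width, loss function, and input dimensionality are assumptions, not values taken from the paper.

```python
# Sketch of the best-performing setup described above: one hidden layer, ReLU, Adam.
# Layer width, loss, and feature count are illustrative assumptions.
from tensorflow import keras
from tensorflow.keras import layers

def build_stress_model(n_features: int, n_hidden: int = 32) -> keras.Model:
    model = keras.Sequential([
        layers.Input(shape=(n_features,)),
        layers.Dense(n_hidden, activation="relu"),   # single hidden layer
        layers.Dense(1),                             # predicted winding-conductor stress
    ])
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])
    return model

# Training data would come from the FEMM finite-element simulations, e.g.:
# model = build_stress_model(n_features=X_train.shape[1])
# model.fit(X_train, y_train, validation_split=0.2, epochs=200, batch_size=32)
```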


2000 ◽  
Author(s):  
Arturo Pacheco-Vega ◽  
Mihir Sen ◽  
Rodney L. McClain

Abstract In the current study we consider the problem of accuracy in heat rate estimations from artificial neural network models of heat exchangers used for refrigeration applications. The network configuration is of the feedforward type with a sigmoid activation function and a backpropagation algorithm. Limited experimental measurements from a manufacturer are used to show the capability of the neural network technique in modeling the heat transfer in these systems. Results from this exercise show that a well-trained network correlates the data with errors of the same order as the uncertainty of the measurements. It is also shown that the number and distribution of the training data are linked to the performance of the network when estimating the heat rates under different operating conditions, and that networks trained from few tests may give large errors. A methodology based on the cross-validation technique is presented to find regions where not enough data are available to construct a reliable neural network. The results from three tests show that the proposed methodology gives an upper bound of the estimated error in the heat rates.
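One way to implement the cross-validation check described above is sketched below with a sigmoid-activation feedforward regressor; the fold count, network size, and scoring metric are assumptions, with large per-fold errors flagging operating regions that lack enough training data.

```python
# Hedged sketch of the cross-validation idea above: a feedforward network with a
# sigmoid (logistic) activation is scored fold by fold; folds with large RMSE point
# to regions where the training data are too sparse for reliable heat-rate estimates.
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import KFold, cross_val_score

def heat_rate_cv_errors(X, y, hidden=(10,), folds=5):
    """Return per-fold RMSE of a sigmoid-activation feedforward network."""
    net = MLPRegressor(hidden_layer_sizes=hidden, activation="logistic",
                       solver="adam", max_iter=5000, random_state=0)
    scores = cross_val_score(net, X, y, scoring="neg_root_mean_squared_error",
                             cv=KFold(n_splits=folds, shuffle=True, random_state=0))
    return -scores  # one RMSE per fold; large values flag data-poor regions
```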


2021 ◽  
Vol 7 (2) ◽  
pp. 2266-2280
Author(s):  
Huamin Zhang ◽  
Hongcai Yin
The time-varying solution of a class of generalized linear matrix equations involving the transpose of the unknown matrix is discussed. The computation model is constructed and an asymptotic convergence proof is given using the zeroing neural network method. With a suitable activation function, the predefined-time convergence property and a noise-suppression strategy are discussed. Numerical examples are offered to illustrate the efficacy of the suggested zeroing neural network models.
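For readers unfamiliar with the zeroing neural network (ZNN) method, a representative design sketch is given below; the specific matrix equation shown (containing the transpose of the unknown) is an illustrative instance, not necessarily the exact class treated in the paper.

```latex
% Representative ZNN design sketch; the equation form is illustrative, not the
% paper's exact class of generalized linear matrix equations.
\begin{align*}
  &\text{Error function:}    && E(t) = A\,X(t) + X^{\mathsf{T}}(t)\,B - C(t),\\
  &\text{ZNN evolution law:} && \dot{E}(t) = -\gamma\,\Phi\bigl(E(t)\bigr), \qquad \gamma > 0,
\end{align*}
```

where \(\Phi(\cdot)\) is the elementwise activation function; appropriately chosen activations yield the predefined-time convergence and noise suppression discussed above.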


Designs ◽  
2021 ◽  
Vol 5 (4) ◽  
pp. 78
Author(s):  
Kareem Othman

Laboratory tests for the estimation of the compaction parameters, namely the maximum dry density (MDD) and the optimum moisture content (OMC), are time-consuming and costly. Thus, this paper employs the artificial neural network technique for the prediction of the OMC and MDD of the aggregate base course from relatively easier index-property tests. The grain size distribution, plastic limit, and liquid limit are used as the inputs for the development of the ANNs. In this study, multiple ANNs (240 ANNs) are tested to choose the optimum ANN that produces the best predictions. The paper focuses on studying the impact of three different activation functions, the number of hidden layers, and the number of neurons per hidden layer on the predictions, and heatmaps are generated to compare the performance of every ANN with different settings. Results show that the optimum ANN hyperparameters change depending on the predicted parameter. The hyperbolic tangent is the most efficient activation function, as it outperforms the other two activation functions. Additionally, the simplest ANN architectures result in the best predictions, as the performance of the ANNs deteriorates with an increase in the number of hidden layers or the number of neurons per hidden layer.
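A minimal sketch of the kind of grid evaluated here (combinations of activation function, hidden-layer count, and neurons per layer) follows; the candidate values, output layout, and training settings are assumptions and do not reproduce the paper's 240 configurations.

```python
# Illustrative grid over activation function, number of hidden layers, and neurons
# per layer; the candidate values below are assumptions, not the paper's exact grid.
from itertools import product
from tensorflow import keras
from tensorflow.keras import layers

ACTIVATIONS = ["tanh", "relu", "sigmoid"]   # three candidate activation functions
HIDDEN_LAYERS = [1, 2, 3, 4]
NEURONS = [4, 8, 16, 32]

def build_ann(n_inputs, activation, n_layers, n_neurons):
    model = keras.Sequential([layers.Input(shape=(n_inputs,))])
    for _ in range(n_layers):
        model.add(layers.Dense(n_neurons, activation=activation))
    model.add(layers.Dense(2))              # two outputs: OMC and MDD
    model.compile(optimizer="adam", loss="mse")
    return model

# for act, n_layers, n_neurons in product(ACTIVATIONS, HIDDEN_LAYERS, NEURONS):
#     model = build_ann(X_train.shape[1], act, n_layers, n_neurons)
#     model.fit(X_train, y_train, epochs=500, verbose=0)
#     ...  # record each configuration's validation error for the heatmaps
```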


2016 ◽  
Vol 26 (05) ◽  
pp. 1650040 ◽  
Author(s):  
Francisco Javier Ropero Peláez ◽  
Mariana Antonia Aguiar-Furucho ◽  
Diego Andina

In this paper, we use the neural property known as intrinsic plasticity to develop neural network models that resemble the koniocortex, the fourth layer of sensory cortices. These models evolved from a very basic two-layered neural network to a complex associative koniocortex network. In the initial network, intrinsic and synaptic plasticity govern the shifting of the activation function, and the modification of synaptic weights, respectively. In this first version, competition is forced, so that the most activated neuron is arbitrarily set to one and the others to zero, while in the second, competition occurs naturally due to inhibition between second layer neurons. In the third version of the network, whose architecture is similar to the koniocortex, competition also occurs naturally owing to the interplay between inhibitory interneurons and synaptic and intrinsic plasticity. A more complex associative neural network was developed based on this basic koniocortex-like neural network, capable of dealing with incomplete patterns and ideally suited to operating similarly to a learning vector quantization network. We also discuss the biological plausibility of the networks and their role in a more complex thalamocortical model.
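A highly simplified sketch of the first network version is given below, under assumed update rules: intrinsic plasticity shifts each neuron's sigmoid threshold toward its running-average activation, Hebbian plasticity updates the weights, and competition is forced by a winner-take-all step; none of the constants or exact rules are taken from the paper.

```python
# Toy sketch of the mechanisms described above (assumed rules, not the paper's):
# intrinsic plasticity shifts the sigmoid threshold, Hebbian learning updates weights,
# and competition is forced by setting the most activated neuron to 1, the rest to 0.
import numpy as np

def koniocortex_step(x, W, theta, avg, lr_w=0.01, lr_theta=0.05, gain=5.0):
    a = 1.0 / (1.0 + np.exp(-gain * (W @ x - theta)))    # sigmoid shifted by theta
    y = np.zeros_like(a)
    y[np.argmax(a)] = 1.0                                # forced competition
    W += lr_w * np.outer(y, x)                           # Hebbian weight update
    W /= np.linalg.norm(W, axis=1, keepdims=True)        # keep weight rows bounded
    avg = 0.9 * avg + 0.1 * a                            # running mean activation
    theta += lr_theta * (avg - 0.5)                      # intrinsic plasticity shift
    return y, W, theta, avg
```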


2020 ◽  
Vol 5 ◽  
pp. 140-147 ◽  
Author(s):  
T.N. Aleksandrova ◽  
E.K. Ushakov ◽  
A.V. Orlova ◽  
...  

A series of neural network models used in the development of an aggregated digital twin of equipment as a cyber-physical system is presented. The twins of machining accuracy, chip formation, and tool wear are examined in detail. On their basis, systems for stabilization of the chip-formation process during cutting and for diagnosis of cutting tool wear are developed.
Keywords: cyber-physical system; neural network model of equipment; big data; digital twin of the chip formation; digital twin of the tool wear; digital twin of nanostructured coating choice

