Variable Chromosome Genetic Algorithm for Structure Learning in Neural Networks to Imitate Human Brain

2019 ◽  
Vol 9 (15) ◽  
pp. 3176 ◽  
Author(s):  
Kang-moon Park ◽  
Donghoon Shin ◽  
Sung-do Chi

This paper proposes the variable chromosome genetic algorithm (VCGA) for structure learning in neural networks. Currently, the structural parameters of neural networks, i.e., the number of neurons, coupling relations, number of layers, etc., have mostly been designed on the basis of the heuristic knowledge of an artificial intelligence (AI) expert. To overcome this limitation, this study uses an evolutionary approach (EA) to automatically generate suitable artificial neural network (ANN) structures. The VCGA introduces a new genetic operation called chromosome attachment. By applying the VCGA, initial ANN structures can be flexibly evolved toward a proper structure. A case study on the typical exclusive-or (XOR) problem shows the feasibility of the methodology. The approach is differentiated from others in that it uses a variable chromosome in the genetic algorithm, which lets the neural network structure vary naturally, both constructively and destructively. It is shown that the XOR problem is successfully optimized using the VCGA with chromosome attachment to learn the structure of neural networks. Structure learning for more complex problems is the topic of our future research.
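The paper does not include code; the following is a minimal sketch of the variable-chromosome idea under stated assumptions: each gene directly encodes one hidden neuron, "chromosome attachment" appends a gene so the hidden layer can grow, and the output weights are fitted by least squares per evaluation. All names and constants are illustrative, not taken from the paper.

```python
# Hypothetical sketch of a variable-chromosome GA for XOR structure learning.
# Each gene encodes one hidden neuron (input weights + bias); "attachment"
# appends a gene, so the network can grow constructively or shrink destructively.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0., 1., 1., 0.])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fitness(chrom):
    """Mean-squared XOR error of the network encoded by `chrom` (lower is better)."""
    W = np.array([g["w"] for g in chrom])           # (n_hidden, 2)
    b = np.array([g["b"] for g in chrom])           # (n_hidden,)
    H = sigmoid(X @ W.T + b)                        # hidden activations
    H1 = np.hstack([H, np.ones((4, 1))])            # add bias column
    out_w, *_ = np.linalg.lstsq(H1, y, rcond=None)  # fit output weights directly
    return np.mean((H1 @ out_w - y) ** 2)

def random_gene():
    return {"w": rng.normal(0, 1, 2), "b": rng.normal()}

def attach(chrom):
    """Chromosome attachment: constructive growth by appending one gene."""
    return chrom + [random_gene()]

def mutate(chrom):
    c = [dict(w=g["w"] + rng.normal(0, 0.3, 2), b=g["b"] + rng.normal(0, 0.3)) for g in chrom]
    if rng.random() < 0.3:                          # occasional growth
        c = attach(c)
    if len(c) > 1 and rng.random() < 0.2:           # occasional destructive pruning
        c.pop(rng.integers(len(c)))
    return c

pop = [[random_gene()] for _ in range(30)]          # start minimal: one hidden neuron
for gen in range(200):
    pop.sort(key=fitness)
    if fitness(pop[0]) < 1e-3:
        break
    survivors = pop[:10]
    pop = survivors + [mutate(survivors[rng.integers(10)]) for _ in range(20)]

best = min(pop, key=fitness)
print(f"gen={gen}, hidden neurons={len(best)}, mse={fitness(best):.4f}")
```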

2011 ◽  
Vol 58-60 ◽  
pp. 1773-1778
Author(s):  
Wei Gao

An evolutionary neural network can be generated by combining an evolutionary optimization algorithm with a neural network. Based on an analysis of the shortcomings of previously proposed evolutionary neural networks, and combining the continuous ant colony optimization proposed by the author with a BP neural network, a new evolutionary neural network whose architecture and connection weights evolve simultaneously is proposed. Finally, on the typical XOR problem, the new evolutionary neural network is compared with a BP neural network and with traditional evolutionary neural networks based on the genetic algorithm and evolutionary programming. The computing results show that both the precision and the efficiency of the new neural network are better.
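As a rough illustration of the continuous ant colony ingredient, here is a minimal sketch of archive-based continuous ACO (in the style of ACO_R: Gaussian sampling around ranked archive solutions) tuning the weights of a fixed 2-2-1 network on XOR. The simultaneous architecture evolution and the BP hybridization described in the abstract are omitted, and all parameters are assumptions.

```python
# Sketch of continuous ant colony optimization (archive + Gaussian sampling)
# optimizing the 9 weights of a tiny 2-2-1 XOR network. Illustrative only.
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0., 1., 1., 0.])
DIM = 9  # 4 hidden weights + 2 hidden biases + 2 output weights + 1 output bias

def error(w):
    W1, b1, W2, b2 = w[:4].reshape(2, 2), w[4:6], w[6:8], w[8]
    h = np.tanh(X @ W1 + b1)
    out = 1 / (1 + np.exp(-(h @ W2 + b2)))
    return np.mean((out - y) ** 2)

ARCHIVE, ANTS, XI, Q = 20, 30, 0.85, 0.2
archive = rng.normal(0, 1, (ARCHIVE, DIM))
for it in range(300):
    archive = archive[np.argsort([error(a) for a in archive])]
    # rank-based probability of choosing an archive solution as a sampling center
    ranks = np.arange(1, ARCHIVE + 1)
    wts = np.exp(-(ranks - 1) ** 2 / (2 * (Q * ARCHIVE) ** 2))
    wts /= wts.sum()
    new = []
    for _ in range(ANTS):
        k = rng.choice(ARCHIVE, p=wts)
        # per-dimension std: mean distance from the chosen center to the archive
        sigma = XI * np.mean(np.abs(archive - archive[k]), axis=0)
        new.append(archive[k] + sigma * rng.normal(size=DIM))
    merged = np.vstack([archive, new])
    archive = merged[np.argsort([error(m) for m in merged])][:ARCHIVE]

print("best XOR mse:", error(archive[0]))
```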


2019 ◽  
Vol 29 (1) ◽  
pp. 1235-1245
Author(s):  
Kishor Kumar Katha ◽  
Suresh Pabboju

Abstract In this paper, a new method is proposed for training artificial neural networks. The technique is a combination of the adaptive genetic (AG) and cuckoo search (CS) algorithms, called the AGCS method. The goal of training an artificial neural network (ANN) is to obtain the best set of weights. In this approach, the weights are taken as the quantities to be optimized by the hybrid AGCS algorithm. At the start, a collection of weights is initialized and the corresponding error is computed. In this way, the difficulties involved in training an ANN can be overcome and the best performance obtained. The proposed system was implemented in MATLAB; the overall accuracy is about 93%, which is much better than that of the genetic algorithm (86%) and the CS algorithm (88%).
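The abstract does not spell out how the two algorithms are combined, so the following is only a hedged sketch of one plausible AGCS-style loop: an adaptive GA step (mutation strength scaled by the parents' error) followed by a cuckoo-search step (Lévy-flight perturbation toward the best solution, with random nest abandonment), applied to a small weight vector. The dataset, network, and exact update rules of the paper are not reproduced.

```python
# Hypothetical hybrid adaptive-GA / cuckoo-search weight search (illustrative).
import math
import numpy as np

rng = np.random.default_rng(2)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0., 1., 1., 0.])
DIM = 9

def loss(w):
    W1, b1, W2, b2 = w[:4].reshape(2, 2), w[4:6], w[6:8], w[8]
    h = np.tanh(X @ W1 + b1)
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))
    return np.mean((p - y) ** 2)

def levy(dim, beta=1.5):
    """Mantegna's algorithm for Levy-distributed step lengths."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    return rng.normal(0, sigma, dim) / np.abs(rng.normal(0, 1, dim)) ** (1 / beta)

POP = 30
pop = rng.normal(0, 1, (POP, DIM))
for it in range(300):
    fit = np.array([loss(p) for p in pop])
    order = np.argsort(fit)
    pop, fit = pop[order], fit[order]
    best = pop[0].copy()
    # --- adaptive GA step: worse parents get a larger mutation step ---
    children = []
    for _ in range(POP // 2):
        ia, ib = rng.integers(POP // 2), rng.integers(POP // 2)
        child = np.where(rng.random(DIM) < 0.5, pop[ia], pop[ib])  # uniform crossover
        rate = 0.05 + 0.4 * (fit[ia] + fit[ib]) / (2 * fit.max() + 1e-12)
        children.append(child + rng.normal(0, rate, DIM))
    pop[POP // 2:] = children
    # --- cuckoo search step: Levy flights around the best, abandon some nests ---
    for i in range(POP):
        trial = pop[i] + 0.1 * levy(DIM) * (pop[i] - best)
        if loss(trial) < loss(pop[i]):
            pop[i] = trial
    abandon = rng.random(POP) < 0.25
    pop[abandon] = rng.normal(0, 1, (abandon.sum(), DIM))
    pop[0] = best                                    # keep the elite

print("best mse:", loss(min(pop, key=loss)))
```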


2000 ◽  
Vol 176 ◽  
pp. 135-136
Author(s):  
Toshiki Aikawa

Abstract Some pulsating post-AGB stars have been observed with an Automatic Photometry Telescope (APT), and a considerable amount of precise photometric data has been accumulated for these stars. The datasets, however, are still sparse, which is a problem for applying nonlinear time-series methods, for instance modeling of attractors with artificial neural networks (NN). We propose optimizing the data interpolation with a genetic algorithm (GA) in a hybrid system combined with the NN. We apply this system to the Mackey–Glass equation and attempt an analysis of the photometric data of post-AGB variables.
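For reference, here is a minimal sketch of generating the Mackey-Glass benchmark named in the abstract by Euler integration of the delay equation dx/dt = beta*x(t-tau)/(1+x(t-tau)^n) - gamma*x(t), then thinning it to a sparse, irregular sample resembling APT-like observations. The GA-optimized interpolation and the neural-network attractor modeling applied on top of this are not reproduced here; the subsampling rate is an arbitrary assumption.

```python
# Sketch: generate a Mackey-Glass series and thin it to a sparse sample.
import numpy as np

def mackey_glass(n=3000, tau=17, beta=0.2, gamma=0.1, exponent=10, dt=1.0, x0=1.2):
    """Euler integration of dx/dt = beta*x(t-tau)/(1+x(t-tau)^exponent) - gamma*x(t)."""
    delay = int(tau / dt)
    x = np.full(n + delay, x0)
    for t in range(delay, n + delay - 1):
        x_tau = x[t - delay]
        x[t + 1] = x[t] + dt * (beta * x_tau / (1 + x_tau ** exponent) - gamma * x[t])
    return x[delay:]

rng = np.random.default_rng(3)
series = mackey_glass()
# keep ~10% of the points at random times to mimic a sparse photometric dataset
keep = np.sort(rng.choice(len(series), size=len(series) // 10, replace=False))
sparse_t, sparse_x = keep, series[keep]
print(sparse_t[:5], sparse_x[:5])
```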


Author(s):  
Sandip K Lahiri ◽  
Kartik Chandra Ghanta

Four distinct regimes (namely sliding bed, saltation, heterogeneous suspension, and homogeneous suspension) exist in slurry flow in pipelines, depending on the average flow velocity. In the literature, only a few correlations have been proposed for identifying these regimes in slurry pipelines. Regime identification is important for slurry pipeline design because it is a prerequisite for applying the appropriate pressure-drop correlation in each regime. However, the available correlations fail to predict the regime over a wide range of conditions. Based on a databank of around 800 measurements collected from the open literature, a method has been proposed to identify the regime using artificial neural network (ANN) modeling. The method incorporates a hybrid artificial neural network and genetic algorithm technique (ANN-GA) for efficient tuning of the ANN meta-parameters. Statistical analysis showed that the proposed method has an average misclassification error of 0.03%. A comparison with selected correlations in the literature showed that the developed ANN-GA method noticeably improves regime prediction over a wide range of operating conditions, physical properties, and pipe diameters.
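The general ANN-GA idea of the abstract, a GA searching over ANN meta-parameters for a regime classifier, can be sketched as below. Synthetic data stands in for the ~800-point literature databank, scikit-learn's MLPClassifier stands in for the authors' network, and the searched parameters (hidden-layer size, learning rate, L2 penalty) and GA settings are assumptions, not the paper's pipeline.

```python
# Hypothetical ANN-GA sketch: GA over MLP meta-parameters for a 4-class regime classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(4)
# stand-in for (velocity, concentration, particle size, pipe diameter, ...) -> regime label
X, y = make_classification(n_samples=800, n_features=6, n_informative=4,
                           n_classes=4, n_clusters_per_class=1, random_state=0)

def fitness(genes):
    hidden, lr, alpha = int(genes[0]), 10 ** genes[1], 10 ** genes[2]
    clf = MLPClassifier(hidden_layer_sizes=(hidden,), learning_rate_init=lr,
                        alpha=alpha, max_iter=300, random_state=0)
    return cross_val_score(clf, X, y, cv=3).mean()   # higher is better

def random_genes():
    # [hidden units, log10(learning rate), log10(L2 penalty)]
    return np.array([rng.integers(4, 40), rng.uniform(-4, -1), rng.uniform(-6, -1)])

pop = [random_genes() for _ in range(10)]
for gen in range(5):                                  # tiny budget, for illustration
    parents = sorted(pop, key=fitness, reverse=True)[:4]
    children = []
    for _ in range(6):
        a, b = parents[rng.integers(4)], parents[rng.integers(4)]
        child = np.where(rng.random(3) < 0.5, a, b)   # uniform crossover
        child = child + rng.normal(0, [2.0, 0.2, 0.3])  # mutation per gene
        child[0] = np.clip(child[0], 4, 40)
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
print("best genes:", best, "cv accuracy:", fitness(best))
```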


2002 ◽  
Vol 12 (01) ◽  
pp. 31-43 ◽  
Author(s):  
GARY YEN ◽  
HAIMING LU

In this paper, we propose a genetic-algorithm-based design procedure for a multi-layer feed-forward neural network. A hierarchical genetic algorithm is used to evolve both the neural network's topology and its weighting parameters. Compared with traditional genetic-algorithm-based designs for neural networks, the hierarchical approach addresses several deficiencies, including the feasibility check highlighted in the literature. A multi-objective cost function is used to optimize the performance and topology of the evolved neural network simultaneously. In the prediction of the Mackey–Glass chaotic time series, the networks designed by the proposed approach prove to be competitive with, or even superior to, traditional learning algorithms for multi-layer perceptron networks and radial-basis function networks. Based upon the chosen cost function, a linear weight-combination decision-making approach is applied to derive an approximate Pareto-optimal solution set. Designing a set of neural networks can therefore be considered as solving a two-objective optimization problem.
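The linear weight-combination step can be sketched as follows: score each candidate network by J = w * (prediction error) + (1 - w) * (network size) and sweep the trade-off weight w to collect nondominated (error, size) pairs. The hierarchical GA that produces the candidates is not reproduced; the candidates below are dummy values and the cost form is an assumption consistent with the abstract.

```python
# Sketch of weighted-sum scalarization for an approximate Pareto set.
import numpy as np

rng = np.random.default_rng(5)
# dummy (normalized prediction error, normalized network size) candidates
candidates = rng.uniform(0, 1, (50, 2))

def scalarized_best(cands, w):
    J = w * cands[:, 0] + (1 - w) * cands[:, 1]      # linear weight combination
    return cands[np.argmin(J)]

def is_dominated(p, others):
    # p is dominated if some other point is no worse in both objectives and differs
    return any((q[0] <= p[0]) and (q[1] <= p[1]) and (q != p).any() for q in others)

# sweep the trade-off weight and keep the nondominated picks
picks = np.array([scalarized_best(candidates, w) for w in np.linspace(0.05, 0.95, 19)])
pareto = np.unique([p for p in picks if not is_dominated(p, candidates)], axis=0)
print("approximate Pareto-optimal (error, size) pairs:\n", pareto)
```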


SINERGI ◽  
2020 ◽  
Vol 24 (1) ◽  
pp. 29
Author(s):  
Widi Aribowo

Load shedding plays a key part in avoiding power system outages. Frequency and voltage instability leads to the splitting of a power system into sub-systems and to outages as well as severe breakdown of the system utility. In recent years, neural networks have been very successful in several signal processing and control applications. Recurrent neural networks are capable of handling complex and non-linear problems. This paper provides an algorithm for load shedding using Elman recurrent neural networks (RNN). Elman proposed a partially recurrent network in which the feedforward connections are modifiable and the recurrent connections are fixed. The research is implemented in MATLAB and the performance is tested on a 6-bus system. The results are compared with a genetic algorithm (GA), a hybrid combining a genetic algorithm with a feed-forward neural network, and an RNN. The proposed method is capable of determining the required load shedding and is more efficient than the other methods.
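A minimal sketch of the Elman structure described above: context units hold a copy of the previous hidden state through a fixed recurrent connection, while the feedforward weight matrices would be the trainable part. The load-shedding features, training, and the 6-bus MATLAB study are not reproduced; the inputs here are dummy values.

```python
# Sketch of an Elman (partially recurrent) network forward pass.
import numpy as np

rng = np.random.default_rng(6)

class ElmanSketch:
    def __init__(self, n_in, n_hidden, n_out):
        self.W_in = rng.normal(0, 0.5, (n_hidden, n_in))       # modifiable feedforward
        self.W_ctx = rng.normal(0, 0.5, (n_hidden, n_hidden))  # modifiable feedforward
        self.W_out = rng.normal(0, 0.5, (n_out, n_hidden))     # modifiable feedforward
        self.context = np.zeros(n_hidden)

    def step(self, x):
        h = np.tanh(self.W_in @ x + self.W_ctx @ self.context)
        self.context = h.copy()   # fixed recurrent connection: a plain 1:1 copy-back
        return self.W_out @ h

net = ElmanSketch(n_in=4, n_hidden=8, n_out=1)
# dummy sequence standing in for per-bus frequency/voltage measurements
sequence = rng.normal(0, 1, (10, 4))
outputs = [net.step(x) for x in sequence]
print(np.round(np.concatenate(outputs), 3))
```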


Author(s):  
Suraphan Thawornwong ◽  
David Enke

During the last few years there has been a growing literature on applications of artificial neural networks to business and financial domains. In fact, a great deal of attention has been placed on the area of stock return forecasting, because successful artificial neural network applications would bring substantial monetary rewards. Many studies have reported promising results in applying various types of artificial neural network architectures to predicting stock returns. This chapter reviews and discusses the neural network research methodologies used in 45 journal articles that attempted to forecast stock returns. Modeling techniques and suggestions from the literature are also compiled and addressed. The results show that artificial neural networks are an emerging and promising computational technology that will continue to be a challenging tool for future research.


Author(s):  
Arunaben Prahladbhai Gurjar ◽  
Shitalben Bhagubhai Patel

The new era of the world uses artificial intelligence (AI) and machine learning. The combination of AI and machine learning is called an artificial neural network (ANN). An artificial neural network can be implemented as a hardware- or software-based component. Different topologies and learning algorithms are used in artificial neural networks. An artificial neural network works similarly to the human nervous system. An ANN is a nonlinear computing model based on activities performed by the human brain, such as classification, prediction, decision making, and visualization, just by considering previous experience. ANNs are used to solve complex, hard-to-manage problems by accruing knowledge about the environment. There are different types of artificial neural networks available in machine learning. All types of artificial neural networks work on the basis of mathematical operations and require a set of parameters to obtain results. This chapter gives an overview of the various types of neural networks, such as feed-forward, recurrent, feedback, and classification-prediction networks.


2016 ◽  
pp. 89-112
Author(s):  
Pushpendu Kar ◽  
Anusua Das

The recent surge of interest in artificial neural networks has spread its roots toward the development of neuroscience, pattern recognition, machine learning, and artificial intelligence. Theoretical neuroscience is converging on the basic concept that the brain acts as a complex, decentralized computer which performs rigorous calculations in a different way from conventional digital computers. The motivation behind the study of neural networks is their structural similarity to the human central nervous system. The elementary processing component of an artificial neural network (ANN) is called a 'neuron'. A large number of neurons interconnected with each other mimic a biological neural network and form an ANN. Learning is an essential process used to train an ANN; knowledge can be transferred to the neural network only through the learning procedure. This chapter presents the detailed concepts of artificial neural networks in addition to some significant aspects of the present research work.
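As a concrete illustration of the neuron and learning procedure described above, here is a minimal sketch of a single neuron: a weighted sum of inputs passed through a sigmoid activation, with knowledge transferred by a simple delta-rule weight update. Purely illustrative; the task and constants are not taken from the chapter.

```python
# Sketch of one artificial neuron trained with the delta rule on a separable task.
import numpy as np

rng = np.random.default_rng(7)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0., 0., 0., 1.])                  # logical AND (linearly separable)

w, b, lr = rng.normal(0, 0.1, 2), 0.0, 0.5
for epoch in range(100):
    for xi, ti in zip(X, y):
        out = 1 / (1 + np.exp(-(w @ xi + b)))   # weighted sum + sigmoid activation
        err = ti - out
        w += lr * err * out * (1 - out) * xi    # delta rule: learning = weight updates
        b += lr * err * out * (1 - out)

print("learned weights:", np.round(w, 2), "bias:", round(b, 2))
```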

