RECOGNITION OF TELUGU CHARACTERS USING NEURAL NETWORKS

1995 ◽  
Vol 06 (03) ◽  
pp. 317-357 ◽  
Author(s):  
M.B. SUKHASWAMI ◽  
P. SEETHARAMULU ◽  
ARUN K. PUJARI

The aim of the present work is to recognize printed and handwritten Telugu characters using artificial neural networks (ANNs). Earlier work on recognition of Telugu characters was done using conventional pattern recognition techniques. Here we make an initial attempt at using neural networks for recognition, with the aim of improving upon earlier methods, which do not perform effectively in the presence of noise and distortion in the characters. The Hopfield model of neural network, working as an associative memory, is initially chosen for recognition. Owing to the limited storage capacity of the Hopfield neural network, we propose a new scheme, named here the Multiple Neural Network Associative Memory (MNNAM); the storage limitation is overcome by combining multiple neural networks that work in parallel. It is also demonstrated that the Hopfield network is suitable for recognizing noisy printed characters as well as handwritten characters written by different “hands” in a variety of styles. Detailed experiments have been carried out using several learning strategies, and the results are reported. It is shown that satisfactory recognition is possible using the proposed strategy. A detailed scheme for preprocessing Telugu characters from digitized documents is also described.
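The abstract includes no code, but the recall scheme it names is the classic one. A minimal sketch of Hebbian storage and iterative recall for bipolar character bitmaps (the pattern sizes and glyphs below are toy stand-ins, not the paper's Telugu data):

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian (outer-product) storage of bipolar {-1, +1} patterns."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)          # no self-connections in the classic model
    return W / n

def recall(W, x, max_steps=100):
    """Synchronous recall: iterate until a fixed point or the step limit."""
    for _ in range(max_steps):
        x_new = np.where(W @ x >= 0, 1, -1)
        if np.array_equal(x_new, x):
            break
        x = x_new
    return x

# Store two toy 16-pixel "glyphs" and recall one from a noisy probe.
patterns = np.array([[1, -1] * 8, [1, 1, -1, -1] * 4])
W = train_hopfield(patterns)
probe = patterns[0].copy()
probe[:3] *= -1                     # flip 3 bits to simulate noise
print(np.array_equal(recall(W, probe), patterns[0]))  # True for this toy case
```

In these terms, the paper's MNNAM scheme would run several such networks in parallel, each storing a subset of the character set, to sidestep the single network's capacity limit.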

Author(s):  
Olga RUZAKOVA

The article presents a methodological approach to assessing the investment attractiveness of an enterprise based on the mathematical apparatus of the Hopfield neural network. An extended set of evaluation parameters of the investment process has been compiled, and an algorithm for formalizing the decision-making process regarding the investment attractiveness of the enterprise, based on the mathematical apparatus of neural networks, has been developed. The proposed approach makes it possible to take into account constantly changing sets of quantitative and qualitative parameters and to identify, with minimal expense of money and time, the appropriate level of investment attractiveness of the enterprise: the stored standard of the Hopfield network that is most similar to the pattern characterizing the enterprise's activity. The developed comprehensive formalization of the investment process makes it possible to take investment decisions under incomplete and heterogeneous information, based on the methodological tools of neural networks.
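As a purely hypothetical sketch of the matching step described above (the bipolar encodings, the standards, and the enterprise profile below are invented for illustration, not taken from the article):

```python
import numpy as np

# Each attractiveness level is stored as a bipolar "standard"; an
# enterprise profile, encoded the same way, is driven to the nearest one.
standards = np.array([
    [ 1,  1,  1,  1,  1,  1, -1, -1],   # e.g. "high attractiveness"
    [-1, -1, -1,  1, -1, -1,  1,  1],   # e.g. "low attractiveness"
])
n = standards.shape[1]
W = sum(np.outer(s, s) for s in standards) / n   # Hebbian storage
np.fill_diagonal(W, 0)

profile = np.array([1, 1, -1, 1, 1, 1, -1, -1])  # noisy "high" profile
state = profile.copy()
for _ in range(20):                               # synchronous recall
    state = np.where(W @ state >= 0, 1, -1)
matches = [np.array_equal(state, s) for s in standards]
print("classified as standard", matches.index(True))  # -> 0 here
```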


2019 ◽  
Vol 8 (2) ◽  
pp. 4928-4937 ◽  

Odia character and digit recognition is a vital problem in present-day computer vision. This paper discusses a Hopfield neural network designed to solve printed Odia character recognition. Optical Character Recognition (OCR) is the conversion of images of handwritten, printed, or typewritten text into a machine-encoded version. An Artificial Neural Network (ANN) was trained as a classifier following the rules of the Hopfield network, using code written in MATLAB. Preprocessing of the data (image acquisition, binarization, skeletonization, skew detection and correction, image cropping, resizing, and digitization) was also carried out in MATLAB. OCR systems must be designed to accommodate the non-standard scripts of different languages. The well-known segmentation, feature extraction, and classification techniques for processing Odia characters are outlined, along with their relative strengths and weaknesses, so that the paper can serve as a reference for those interested in working in the field of Odia character recognition. Recognition of printed Odia characters, numerals, and machine characters finds valuable applications in banks, industries, and offices. In the proposed work we develop an efficient and robust mechanism in which Odia characters are recognized by Hopfield Neural Networks (HNNs).
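A minimal sketch of part of the preprocessing chain listed above (binarization, cropping to the glyph, resizing to a fixed grid) as it might feed a Hopfield network; the threshold and the 16x16 target grid are assumptions, skeletonization and skew correction are omitted for brevity, and the sketch is in Python rather than the paper's MATLAB:

```python
import numpy as np

def preprocess(gray, size=16, threshold=128):
    """Grayscale page patch -> bipolar vector for a Hopfield network."""
    binary = (gray < threshold).astype(int)        # ink = 1 on white paper
    ys, xs = np.nonzero(binary)
    binary = binary[ys.min():ys.max() + 1, xs.min():xs.max() + 1]  # crop
    h, w = binary.shape
    rows = np.arange(size) * h // size             # nearest-neighbor resize
    cols = np.arange(size) * w // size
    resized = binary[np.ix_(rows, cols)]
    return np.where(resized > 0, 1, -1).ravel()    # bipolar {-1, +1} vector

glyph = np.full((40, 30), 255)                     # toy white patch ...
glyph[5:35, 12:18] = 0                             # ... with an inked bar
print(preprocess(glyph).shape)                     # (256,)
```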


Entropy ◽  
2019 ◽  
Vol 21 (8) ◽  
pp. 726 ◽  
Author(s):  
Giorgio Gosti ◽  
Viola Folli ◽  
Marco Leonetti ◽  
Giancarlo Ruocco

In a neural network, an autapse is a particular kind of synapse that links a neuron onto itself. Autapses are almost always excluded from both artificial and biological neural network models. Moreover, redundant or similar stored states tend to interact destructively. This paper shows how autapses, together with stable-state redundancy, can improve the storage capacity of a recurrent neural network. Recent research shows how, in an N-node Hopfield neural network with autapses, the number of stored patterns (P) is not limited to the well-known bound 0.14N, as it is for networks without autapses. More precisely, it describes how, as the number of stored patterns increases well over the 0.14N threshold, for P much greater than N, the retrieval error asymptotically approaches a value below unity. Consequently, the reduction of retrieval errors allows a number of stored memories that largely exceeds what was previously considered possible. Unfortunately, soon after, new results showed that, in the thermodynamic limit, given a network with autapses in this high-storage regime, the basin of attraction of the stored memories shrinks to a single state. This means that, for each stable state associated with a stored memory, even a single bit error in the initial pattern leads the system to a stationary state associated with a different memory state. This limits the potential use of this kind of Hopfield network as an associative memory. This paper presents a strategy to overcome this limitation by improving the error-correcting characteristics of the Hopfield neural network. The proposed strategy allows us to form what we call an absorbing-neighborhood of states surrounding each stored memory: a set, defined by a Hamming distance around a network state, which is absorbing because, in the long-time limit, states inside it are absorbed by stable states in the set. We show that this strategy allows the network to store an exponential number of memory patterns, each surrounded by an absorbing-neighborhood of exponentially growing size.
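A small numerical illustration of the diagonal-weight effect (this shows only that keeping the Hebbian diagonal, i.e. the autapses, reduces one-step retrieval errors at loads above 0.14N; it does not reproduce the paper's absorbing-neighborhood construction):

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 100, 50                          # load far above 0.14 * N
xi = rng.choice([-1, 1], size=(P, N))   # random stored patterns
W = xi.T @ xi / N                       # Hebbian weights; W[i, i] = P/N
                                        # is the autapse (self) weight
def bit_error_rate(W):
    flips = sum(np.sum(np.where(W @ p >= 0, 1, -1) != p) for p in xi)
    return flips / (P * N)              # errors after one synchronous step

W_no = W.copy()
np.fill_diagonal(W_no, 0)               # remove the autapses
print("with autapses   :", bit_error_rate(W))     # noticeably lower
print("without autapses:", bit_error_rate(W_no))
```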


2018 ◽  
Vol 7 (3.12) ◽  
pp. 652
Author(s):  
Monurajan P ◽  
Ruhanbevi A ◽  
Manjula J

Artificial Neural Networks (ANNs) are interconnections of neurons inspired by the biological neural networks of the brain. ANNs are expected to play a major role in the future and have already spread to various areas of interest, such as optimization, information technology, cryptography, image processing, and even medical diagnosis. Some devices exhibit synaptic behaviour; one such device is the memristor. Bridge circuits of memristors can be combined to form neurons, and neurons can be connected into a network with appropriate parameters to store data or images. Hopfield neural networks are chosen here to store the data in an associative memory. Hopfield networks are a significant class of ANNs: they are recurrent in nature and are generally used as associative memories and for solving optimization problems such as the Travelling Salesman Problem. This paper deals with the construction of a memristive Hopfield neural network using the memristor bridge circuit and its application as an associative memory. The paper also presents the governing mathematical equations and illustrates the associative memory behaviour of the network through experiments in MATLAB.
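The paper's circuit details are not reproduced here, but the underlying memristor-bridge idea (four memristors in a Wheatstone-bridge arrangement realizing a signed synaptic weight) can be sketched as a voltage-divider difference; the memristance values below are invented:

```python
def bridge_weight(M1, M2, M3, M4):
    """Signed weight w in (-1, 1) from four bridge memristances (ohms)."""
    return M2 / (M1 + M2) - M4 / (M3 + M4)

# A balanced bridge gives zero weight; programming the memristances
# away from balance tunes the weight's sign and magnitude.
print(bridge_weight(10e3, 10e3, 10e3, 10e3))   #  0.0 (zero weight)
print(bridge_weight(5e3, 15e3, 15e3, 5e3))     # +0.5 (excitatory)
print(bridge_weight(15e3, 5e3, 5e3, 15e3))     # -0.5 (inhibitory)
```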


2013 ◽  
Vol 2013 ◽  
pp. 1-9 ◽  
Author(s):  
Xia Huang ◽  
Zhen Wang ◽  
Yuxia Li

A fractional-order two-neuron Hopfield neural network with delay is proposed based on the classic Hopfield neural network, and the complex dynamical behaviors of such a network are investigated. A great variety of interesting dynamical phenomena are found to exist, including single-periodic, multiple-periodic, and chaotic motions. The existence of chaotic attractors is verified by bifurcation diagrams and phase portraits.
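A rough sketch of how such a system can be simulated, using a Grünwald–Letnikov discretization of the fractional derivative; the order q, delay, weights, and constant pre-history below are invented stand-ins, not the paper's system:

```python
import numpy as np

q, h, tau, T = 0.9, 0.01, 1.0, 50.0            # order, step, delay, horizon
A = np.array([[2.0, -1.2], [1.8, 1.7]])        # synaptic weights (invented)
steps, d = int(T / h), int(tau / h)

c = np.zeros(steps + 1)                         # GL coefficients of (1-z)^q
c[0] = 1.0
for j in range(1, steps + 1):
    c[j] = (1 - (1 + q) / j) * c[j - 1]

x = np.zeros((steps + 1, 2))
x[0] = [0.1, -0.1]                              # constant pre-history
for k in range(1, steps + 1):
    xd = x[max(k - 1 - d, 0)]                   # delayed state x(t - tau)
    f = -x[k - 1] + A @ np.tanh(xd)             # Hopfield-type right side
    x[k] = h**q * f - c[1:k + 1] @ x[k - 1::-1] # explicit GL update
print(x[-1])                                    # late-time state
```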


Author(s):  
Roberto A. Vazquez ◽  
Humberto Sossa

An associative memory (AM) is a special kind of neural network that recalls an output pattern given an input pattern as a key, where the key may be altered by some kind of noise (additive, subtractive, or mixed). Most of these models have several constraints that limit their applicability in complex problems such as face recognition (FR) and 3D object recognition (3DOR). Despite the power of these approaches, they cannot reach their full potential without new mechanisms based on current and future study of biological neural networks. In this direction, we present a brief summary of a new associative model based on some neurobiological aspects of the human brain. In addition, we describe how this dynamic associative memory (DAM), combined with some aspects of the infant vision system, can be applied to two of the most important problems of pattern recognition: FR and 3DOR.
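For concreteness, the three noise kinds named above can be illustrated on binary patterns (the {0, 1} coding and the noise probability are assumptions of this sketch):

```python
import numpy as np

rng = np.random.default_rng(1)

def corrupt(x, kind, p=0.2):
    """Apply additive (0->1), subtractive (1->0), or mixed bit noise."""
    x = x.copy()
    mask = rng.random(x.shape) < p
    if kind == "additive":
        x[mask & (x == 0)] = 1
    elif kind == "subtractive":
        x[mask & (x == 1)] = 0
    else:                          # mixed: flip in either direction
        x[mask] ^= 1
    return x

pattern = np.array([0, 1, 1, 0, 1, 0, 0, 1])
for kind in ("additive", "subtractive", "mixed"):
    print(kind, corrupt(pattern, kind))
```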


2006 ◽  
Vol 16 (12) ◽  
pp. 3643-3654 ◽  
Author(s):  
JUN-JUH YAN ◽  
TEH-LU LIAO ◽  
JUI-SHENG LIN ◽  
CHAO-JUNG CHENG

This paper investigates the synchronization problem for a particular class of neural networks subject to time-varying delays and input nonlinearity. Using the variable structure control technique, a memoryless decentralized control law is established which guarantees exponential synchronization even when input nonlinearity is present. The proposed controller is suitable for application in delayed cellular neural networks and Hopfield neural networks with no restriction on the derivative of the time-varying delays. A two-dimensional cellular neural network and a four-dimensional Hopfield neural network, both with time-varying delays, are presented as illustrative examples to demonstrate the effectiveness of the proposed synchronization scheme.
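A toy simulation of the drive-response idea (not the paper's controller or proof: the weights, gain, and constant delay below are invented, and the sign-type feedback only stands in for the variable structure law):

```python
import numpy as np

h, d, steps, gain = 0.01, 100, 5000, 2.0       # step, delay, horizon, gain
A = np.array([[2.0, -0.1], [-5.0, 3.0]])       # connection weights (invented)
x = np.zeros((steps + 1, 2))                   # drive network state
y = np.zeros((steps + 1, 2))                   # response network state
x[0], y[0] = [0.4, 0.6], [-0.5, 0.3]
for k in range(steps):
    xd, yd = x[max(k - d, 0)], y[max(k - d, 0)]  # delayed states
    u = -gain * np.sign(y[k] - x[k])             # sliding-mode-style feedback
    x[k + 1] = x[k] + h * (-x[k] + A @ np.tanh(xd))
    y[k + 1] = y[k] + h * (-y[k] + A @ np.tanh(yd) + u)
print(np.abs(y[-1] - x[-1]).max())   # small residual (chattering band)
```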


2018 ◽  
Vol 29 (08) ◽  
pp. 1850076
Author(s):  
Jiandu Liu ◽  
Bokui Chen ◽  
Dengcheng Yan ◽  
Lei Wang

Calculating the exact number of fixed points and attractors of an arbitrary Hopfield neural network is a non-deterministic polynomial (NP)-hard problem. In this paper, we first calculate the average number of fixed points in such networks versus their size and neuron threshold, using a statistical method that has previously been applied to calculating the average number of metastable states in spin glass systems. The same method is then extended to study the average number of attractors in such networks. The analytical results agree qualitatively with numerical calculations, and the discrepancies between them are also well explained.
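The statistical calculation itself is not reproduced here, but for small N the quantity it averages can be checked by brute force (a sketch, with an invented random symmetric coupling matrix):

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
N, theta = 10, 0.0                       # network size, neuron threshold
J = rng.normal(size=(N, N))
J = (J + J.T) / 2                        # symmetric couplings
np.fill_diagonal(J, 0)

fixed = 0
for s in product([-1, 1], repeat=N):     # enumerate all 2^N states
    s = np.asarray(s)
    if np.all(s * (J @ s - theta) > 0):  # every neuron keeps its sign
        fixed += 1
print("fixed points:", fixed)
```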


2019 ◽  
Vol 29 (14) ◽  
pp. 1950205 ◽  
Author(s):  
Marat Akhmet ◽  
Ejaily Milad Alejaily

In this paper, we provide a new method for constructing chaotic Hopfield neural networks. Our approach is based on structuring the domain to form a special set through the discrete evolution of the network state variables. In the chaotic regime, the formed set is invariant under the system governing the dynamics of the neural network. The approach can be viewed as an extension of the unimodality technique for one-dimensional maps to the generation of chaos in higher-dimensional systems. We show that the discrete Hopfield neural network considered is chaotic in the sense of Devaney, Li–Yorke, and Poincaré. Mathematical analysis and numerical simulation are provided to confirm the presence of chaos in the network.
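The construction itself is not reproduced here, but its one-dimensional prototype is easy to exhibit. A minimal illustration of sensitive dependence for the classic unimodal logistic map x -> 4x(1 - x), the kind of map the unimodality technique generalizes:

```python
# Two trajectories starting 1e-12 apart separate to order one,
# the hallmark of sensitive dependence on initial conditions.
f = lambda x: 4.0 * x * (1.0 - x)
x, y = 0.2, 0.2 + 1e-12
for _ in range(50):
    x, y = f(x), f(y)
print(abs(x - y))   # typically O(1) after 50 iterations
```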


2021 ◽  
pp. 1-15
Author(s):  
Masaki Kobayashi

A complex-valued Hopfield neural network (CHNN) is a multistate Hopfield model. A quaternion-valued Hopfield neural network (QHNN) with a twin-multistate activation function was proposed to reduce the number of weight parameters of a CHNN. Dual connections (DCs) were introduced to QHNNs to improve noise tolerance. The DCs take advantage of the noncommutativity of quaternions and consist of two weights between neurons. A QHNN with DCs provides much better noise tolerance than a CHNN. Although a CHNN and a QHNN with DCs have the same number of weight parameters, the storage capacity of the projection rule for QHNNs with DCs is half that for CHNNs and equals that of conventional QHNNs. The small storage capacity of QHNNs with DCs is caused by the projection rule, not by the architecture. In this work, the Hebbian rule is introduced, and it is proved by stochastic analysis that the storage capacity of a QHNN with DCs is 0.8 times that of a CHNN.
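A small sketch of the multistate ingredient these models share: the complex-signum activation of a CHNN quantizes a weighted sum onto K allowed phase states (K and the sample activations below are arbitrary; the twin-multistate and dual-connection details are the paper's own):

```python
import numpy as np

K = 4
phases = np.exp(2j * np.pi * np.arange(K) / K)   # allowed neuron states

def csign(h):
    """Map complex activations to the nearest allowed phase state."""
    projections = (h[..., None] * phases.conj()).real
    return phases[np.argmax(projections, axis=-1)]

h = np.array([1 + 0.2j, -0.3 + 2j, -1 - 1j])
print(csign(h))   # each activation snaps to its nearest phase state
```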

