AN OBJECT-ORIENTED TOOLBOX FOR ADAPTIVE NEURAL NETWORKS' IMPLEMENTATION

2001 ◽  
Vol 10 (03) ◽  
pp. 345-371
Author(s):  
GEORGE D. MANIOUDAKIS ◽  
SPIRIDON D. LIKOTHANASSIS

Neural networks are massively parallel processing systems that require expensive and often unavailable hardware to be realized. Fortunately, the development of effective and accessible software makes their simulation easy. Various neural network implementation tools therefore exist on the market, but they are oriented to a specific learning algorithm and can simulate only fixed-size networks. In this work, we present object-oriented techniques that have been used to define neuron and network objects that can realize, in a localized approach, fast and powerful learning algorithms combining results from optimal filtering and multi-model partitioning theory. One can thus build and implement intelligent learning algorithms that address both the training and the on-line adjustment of the network size. Furthermore, the design methodology results in a system modeled as a collection of concurrently executable objects, making parallel implementation easy. The whole design yields a general-purpose toolbox characterized by maintainability, reusability, and increased modularity. These features are demonstrated through several practical applications.
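
As a rough illustration of the object decomposition this abstract describes, the following is a minimal Python sketch; the class and method names are hypothetical, not taken from the toolbox, and the learning machinery (optimal filtering, multi-model partitioning) is omitted.

```python
# Hypothetical sketch: each neuron is an independent object with a
# local weight vector, and the network can grow or shrink on-line.
import random

class Neuron:
    """A locally updated processing unit with its own weights."""
    def __init__(self, n_inputs):
        self.weights = [random.gauss(0.0, 0.1) for _ in range(n_inputs)]

    def activate(self, inputs):
        # Weighted sum only; a real unit would also apply a
        # nonlinearity and its own local learning rule.
        return sum(w * x for w, x in zip(self.weights, inputs))

class AdaptiveNetwork:
    """A collection of neuron objects whose size may change on-line."""
    def __init__(self, n_inputs, n_neurons):
        self.n_inputs = n_inputs
        self.neurons = [Neuron(n_inputs) for _ in range(n_neurons)]

    def forward(self, inputs):
        return [n.activate(inputs) for n in self.neurons]

    def grow(self):
        # On-line size adjustment: add a unit without touching the rest.
        self.neurons.append(Neuron(self.n_inputs))

    def prune(self, index):
        # Remove an underperforming unit.
        del self.neurons[index]

net = AdaptiveNetwork(n_inputs=3, n_neurons=2)
print(net.forward([1.0, 0.5, -0.2]))
net.grow()
print(len(net.neurons))  # -> 3
```

Because each neuron object is self-contained and updated locally, the units map naturally onto concurrently executing objects, which is what makes the parallel implementation straightforward.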

Information ◽  
2019 ◽  
Vol 10 (3) ◽  
pp. 98 ◽  
Author(s):  
Tariq Ahmad ◽  
Allan Ramsay ◽  
Hanady Ahmed

Assigning sentiment labels to documents is, at first sight, a standard multi-label classification task. Many approaches have been used for this task, and the current state-of-the-art solutions use deep neural networks (DNNs), so it seems likely that such powerful general-purpose machine learning algorithms will provide an effective approach. We describe an alternative approach: using probabilities to construct a weighted lexicon of sentiment terms, then modifying the lexicon and calculating optimal thresholds for each class. We show that this approach outperforms DNNs and other standard algorithms. We believe that DNNs are not a panacea and that paying attention to the nature of the data you are trying to learn from can be more important than trying ever more powerful general-purpose machine learning algorithms.
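
To make the lexicon-plus-thresholds idea concrete, here is a minimal sketch. The lexicon entries, weights, and thresholds below are toy values chosen for illustration; the paper derives the weights from corpus probabilities and tunes the per-class thresholds, which is not reproduced here.

```python
# Toy weighted sentiment lexicon with per-class decision thresholds.
lexicon = {
    "joy":   {"happy": 0.9, "delighted": 0.8, "sad": -0.4},
    "anger": {"furious": 0.9, "annoyed": 0.6, "calm": -0.5},
}
thresholds = {"joy": 0.7, "anger": 0.5}  # tuned separately per class

def classify(tokens):
    """Return every class whose summed lexicon score clears its threshold."""
    labels = []
    for cls, weights in lexicon.items():
        score = sum(weights.get(t, 0.0) for t in tokens)
        if score >= thresholds[cls]:
            labels.append(cls)
    return labels

print(classify("I am so happy and delighted".split()))  # -> ['joy']
```

The appeal of this design is transparency: every prediction decomposes into per-term contributions, and each class threshold can be optimized independently on held-out data.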


Author(s):  
Stylianos Chatzidakis ◽  
Miltiadis Alamaniotis ◽  
Lefteri H. Tsoukalas

Creep rupture is increasingly one of the most important problems affecting the behavior and performance of power production systems operating in high-temperature environments, potentially under irradiation, as in nuclear reactors. Forecasting creep rupture and estimating the remaining useful life are required to avoid unanticipated component failure and cost-ineffective operation. Despite rigorous investigations of creep mechanisms and their effect on component lifetime, experimental data are sparse, rendering time-to-rupture prediction a difficult problem. An approach to creep rupture forecasting that exploits the unique characteristics of machine learning algorithms is proposed herein. It seeks to synergistically combine recent findings in creep rupture with the state-of-the-art computational paradigm of machine learning. In this study, three machine learning algorithms, namely General Regression Neural Networks, Artificial Neural Networks, and Gaussian Processes, were employed to capture the underlying trends and provide creep rupture forecasting. The implementation is demonstrated and evaluated on actual experimental creep rupture data. Results show that the Gaussian process model based on the Matérn kernel achieved the best overall prediction performance (56.38%). Performance depends significantly on the number of training data, the neural network size, the kernel selection, and whether interpolation or extrapolation is performed.
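
For readers who want to reproduce the general setup, a minimal Gaussian-process fit with a Matérn kernel (the study's best-performing model family) can be written with scikit-learn. The input features (stress, temperature) and the data below are illustrative placeholders, not the study's experimental dataset.

```python
# Sketch: Gaussian-process regression with a Matérn kernel for
# time-to-rupture forecasting. Data and features are illustrative.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Toy training set: [stress (MPa), temperature (K)] -> log10 time-to-rupture (h).
X = np.array([[100, 800], [120, 800], [100, 850], [140, 900]], dtype=float)
y = np.array([4.1, 3.6, 3.2, 2.4])  # illustrative values

gp = GaussianProcessRegressor(kernel=Matern(nu=1.5), normalize_y=True)
gp.fit(X, y)

# Predict with uncertainty; extrapolation beyond the training range is
# exactly where the abstract notes performance degrades.
mean, std = gp.predict(np.array([[110.0, 825.0]]), return_std=True)
print(mean, std)
```

The predictive standard deviation is a practical advantage of the Gaussian process here: sparse creep data make a calibrated uncertainty estimate as valuable as the point forecast itself.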


2003 ◽  
Vol 15 (12) ◽  
pp. 2727-2778 ◽  
Author(s):  
Jiří Šíma ◽  
Pekka Orponen

We survey and summarize the literature on the computational aspects of neural network models by presenting a detailed taxonomy of the various models according to their complexity theoretic characteristics. The criteria of classification include the architecture of the network (feedforward versus recurrent), time model (discrete versus continuous), state type (binary versus analog), weight constraints (symmetric versus asymmetric), network size (finite nets versus infinite families), and computation type (deterministic versus probabilistic), among others. The underlying results concerning the computational power and complexity issues of perceptron, radial basis function, winner-take-all, and spiking neural networks are briefly surveyed, with pointers to the relevant literature. In our survey, we focus mainly on digital computation, whose inputs and outputs are binary in nature, although their values are quite often encoded as analog neuron states. Learning issues, although important, are omitted.


2021 ◽  
Vol 2021 ◽  
pp. 1-7
Author(s):  
Jian Li ◽  
Yongyan Zhao

As the national economy has entered a stage of rapid development and social development has ushered in the “14th Five-Year Plan,” the country has issued support policies to encourage and guide college students to start their own businesses. The establishment of an innovation and entrepreneurship platform therefore has a significant impact on China’s economy and gives college students great support and help in starting a business. Deep learning originated from the development of artificial neural networks and is an important field of machine learning. As computing power has greatly improved, and GPUs in particular can train deep neural networks quickly, deep learning algorithms have become an important research direction. Deep learning algorithms use nonlinear network structures and have become a standard modeling method in machine learning: once various templates have been modeled, they can be recognized. This article combines theoretical and empirical research, builds on the views and findings of scholars in recent years, and introduces its basic framework and research content. Deep learning algorithms are then used to analyze the experimental data, drawing on the relevant concepts of deep learning. The article focuses on the construction of an IAE (innovation and entrepreneurship) education platform and on making full use of deep learning algorithms to realize it. Traditional methods must extract features through manual design and then classify them to achieve recognition, whereas deep learning algorithms have strong image and data processing capabilities and can quickly process large-scale data. The survey data show that 49.5% of college students overall, and 35.2% of undergraduates, expressed interest in entrepreneurship. Entrepreneurship is a good choice for relieving employment pressure.


1996 ◽  
Vol 33 (9) ◽  
pp. 85-92 ◽  
Author(s):  
Ning Gong ◽  
Thierry Denoeux ◽  
Jean-Luc Bertrand-Krajewski

Models of solid transport in sewers during storm events are increasingly used by engineers and operators to improve their systems and the quality of receiving waters. A major difficulty preventing wider use of these models, however, is their calibration, which requires field data, accurate information about catchments and sewers, and a specific methodology. Research has therefore been carried out to assess the ability of connectionist models to reproduce and replace the usual models for operational use. Such models require fewer data, are self-calibrating, and are very easy to use. The first stage, presented in this paper, is a comparison between neural networks and the HYPOCRAS model using simulations of real pollutographs for single storm events. Two specific recurrent neural networks based on the HYPOCRAS model and a general-purpose recurrent multilayer network are used to simulate hydrographs and pollutographs of total suspended solids (TSS). The learning algorithm and the performance criterion used for optimization of these networks are described in detail. Experimental results with simulated and real data are then presented.
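
As a hedged illustration of the general-purpose recurrent network mentioned above (not the paper's HYPOCRAS-based architectures, whose structure is specific), a minimal Elman-style recurrent step for time-series simulation might look like this; the sizes, inputs, and weights are made up for the example.

```python
# Minimal Elman-style recurrent step for simulating a time series such
# as a hydrograph: the hidden state carries catchment "memory" between
# time steps. All dimensions and values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 2, 8               # e.g. rainfall and flow at time t
W_in = rng.normal(0, 0.3, (n_hidden, n_in))
W_rec = rng.normal(0, 0.3, (n_hidden, n_hidden))
W_out = rng.normal(0, 0.3, (1, n_hidden))
h = np.zeros(n_hidden)

def step(x, h):
    """One recurrent step: new hidden state and predicted output (e.g. TSS)."""
    h_new = np.tanh(W_in @ x + W_rec @ h)
    y = W_out @ h_new
    return h_new, y

for x_t in [np.array([0.1, 0.3]), np.array([0.5, 0.4])]:
    h, y_t = step(x_t, h)
    print(float(y_t[0]))
```

The recurrence is what lets the network stand in for a calibrated physical model: the state accumulated over previous time steps plays the role of the catchment's stored-sediment variables.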


2000 ◽  
Vol 10 (03) ◽  
pp. 227-241 ◽  
Author(s):  
OMER F. RANA

Neural learning algorithms generally involve a number of identical processing units, fully or partially connected, and an update function such as a ramp, a sigmoid, or a Gaussian. Variations exist in which units are heterogeneous or an alternative update technique, such as a pulse-stream generator, is employed. Associated with connections are numerical values that must be adjusted using a learning rule, dictated by rule-specific parameters such as momentum, a learning rate, or a temperature. Neural learning algorithms usually involve local updates, and global interaction between units is discouraged, except where units are fully connected or updates are synchronous. In all of these cases, concurrency within a neural algorithm cannot be fully exploited without a suitable implementation strategy. A design scheme is described for translating a neural learning algorithm from inception to implementation on a parallel machine using PVM or MPI libraries, or onto programmable logic such as FPGAs. A designer first describes the algorithm in a specialised Neural Language, from which a Petri net (PN) model is constructed automatically for verification and for building a performance model. The PN model can be used to study issues such as synchronisation points, resource sharing, and concurrency within a learning rule. Specialised constructs enable a designer to express aspects of a learning rule such as the number and connectivity of neural nodes, the interconnection strategies, and the information flows required by the learning algorithm. A scheduling and mapping strategy then translates the PN model onto a multiprocessor template. We demonstrate our technique using Kohonen and backpropagation learning rules, implemented with PVM libraries on a loosely coupled workstation cluster and on a dedicated parallel machine.
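
To make the Petri-net stage concrete, here is a generic place/transition sketch, not the paper's Neural Language or tooling; the place and transition names are invented for the example.

```python
# Generic place/transition Petri net: a transition fires only when all
# of its input places hold tokens, which is how synchronisation points
# and concurrency in a learning rule can be inspected.
class PetriNet:
    def __init__(self):
        self.marking = {}       # place -> token count
        self.transitions = {}   # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) > 0 for p in inputs)

    def fire(self, name):
        inputs, outputs = self.transitions[name]
        assert self.enabled(name), f"{name} is not enabled"
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

# Two weight updates may fire concurrently; the global sync transition
# cannot fire until both have produced their partial results.
net = PetriNet()
net.marking = {"grad_a": 1, "grad_b": 1}
net.add_transition("update_a", ["grad_a"], ["done_a"])
net.add_transition("update_b", ["grad_b"], ["done_b"])
net.add_transition("sync", ["done_a", "done_b"], ["next_epoch"])
net.fire("update_a"); net.fire("update_b"); net.fire("sync")
print(net.marking)  # next_epoch now holds a token
```

Reachability analysis over such a model is what exposes which parts of a learning rule can be scheduled concurrently on the multiprocessor template.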


1997 ◽  
Vol 20 (4) ◽  
pp. 559-559
Author(s):  
Enrico Blanzieri

The present commentary addresses the Quartz & Sejnowski (Q&S) target article from the point of view of dynamical learning algorithms for neural networks. These techniques implicitly adopt Q&S's neural constructivist paradigm, so their approach receives support from the biological and psychological evidence. Limitations of constructive learning for neural networks are discussed, with an emphasis on grammar learning.


1993 ◽  
Vol 02 (01) ◽  
pp. 133-162
Author(s):  
Peter Wohl

Neural algorithms require massive computation and very high communication bandwidth and are naturally expressed at a level of granularity finer than parallel systems can exploit efficiently. Mapping neural networks onto parallel computers has traditionally implied a form of clustering neurons and weights to increase the granularity. SIMD simulations may exceed a million connections per second using thousands of processors but are often tailored to particular networks and learning algorithms. MIMD simulations require an even larger granularity to run efficiently and often trade flexibility for speed. An alternative technique is explored, based on pipelining fewer but larger messages through parallel "broadcast/accumulate trees." "Lazy" allocation of messages reduces communication and memory requirements, curbing excess parallelism at run time. The mapping is flexible to changes in network architecture and learning algorithm and is suited to a variety of computer configurations. The method pushes the limits of parallelizing backpropagation and feedforward-type algorithms: results exceed a million connections per second on only 30 processors and are up to ten times better than previous results on similar hardware. The implementation techniques can also be applied in conjunction with others, including systolic and VLSI approaches.
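
A hedged sketch of the accumulate-tree idea follows: a generic pairwise tree reduction, not the paper's pipelined implementation. Its point is the communication pattern: combining partial results up a binary tree needs O(log n) steps rather than O(n).

```python
# Generic accumulate tree: partial results from many processors are
# combined pairwise up a binary tree, one message per pair per level.
def accumulate_tree(values):
    """Pairwise-reduce a list of per-processor partial results."""
    level = list(values)
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level) - 1, 2):
            nxt.append(level[i] + level[i + 1])   # one combine per pair
        if len(level) % 2:                        # odd node passes through
            nxt.append(level[-1])
        level = nxt
    return level[0]

# E.g. per-processor weight-gradient contributions for one connection:
print(accumulate_tree([0.1, 0.4, -0.2, 0.3, 0.05]))  # -> 0.65
```

Pipelining fewer but larger messages through such trees, as the abstract describes, amortizes message overhead, which is what lets the mapping stay efficient at a finer granularity than clustering-based approaches.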


2016 ◽  
pp. 1-11
Author(s):  
Shuxiang Xu ◽  
Yunling Liu

This chapter proposes a theoretical framework for the parallel implementation of Deep Higher Order Neural Networks (HONNs). First, we develop a new partitioning approach for mapping HONNs onto individual computers within a master-slave distributed system (a local area network). This allows a network of computers, rather than a single computer, to train a HONN and drastically increase its learning speed, with all of the computers running the HONN simultaneously (parallel implementation). Next, we develop a new learning algorithm suitable for HONN learning in a distributed system environment. Finally, we propose improvements to the generalisation ability of the new learning algorithm in that environment. A theoretical analysis of the proposal is conducted to verify the soundness of the new approach; experiments to test the new algorithm will be performed in future work.
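
Since the chapter's partitioning and learning algorithm are described only abstractly, the following is a generic master-slave sketch of the distribution idea, data-parallel gradient averaging with Python's multiprocessing, and should not be read as the chapter's HONN method; the least-squares objective and all names are placeholders.

```python
# Generic master-slave sketch: the master splits the training data,
# slaves compute partial gradients in parallel, and the master
# averages them into one update. Illustration only.
from multiprocessing import Pool
import numpy as np

def slave_gradient(args):
    """One slave's least-squares gradient on its data shard."""
    X, y, w = args
    return 2 * X.T @ (X @ w - y) / len(y)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X, w_true = rng.normal(size=(400, 3)), np.array([1.0, -2.0, 0.5])
    y = X @ w_true
    w = np.zeros(3)
    shards = [(X[i::4], y[i::4], w) for i in range(4)]  # 4 slaves
    with Pool(4) as pool:
        grads = pool.map(slave_gradient, shards)        # parallel step
    w -= 0.1 * np.mean(grads, axis=0)                   # master update
    print(w)
```

Swapping the pool of local processes for machines on a local area network gives the master-slave layout the chapter describes; the speedup then depends on how evenly the partitioning balances work across the slaves.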

