A Look-Ahead Fuzzy Back Propagation Network for Lot Output Time Series Prediction in a Wafer Fab

Author(s):  
Toly Chen

A post-classifying fuzzy-neural approach is proposed in this study for estimating the remaining cycle time of each job in a wafer fabrication plant, a task that has seldom been investigated in past studies but is critical to the plant's operation. In the proposed methodology, the fuzzy back-propagation network (FBPN) approach for job cycle time estimation is modified with a proportional adjustment so that it estimates the remaining cycle time instead. Moreover, unlike existing cycle time estimation approaches, a job is not pre-classified but rather post-classified after the estimation error has been generated; for this purpose, a back-propagation network is used as the post-classification algorithm. To evaluate the effectiveness of the proposed methodology, production simulation is used to generate test data. According to the experimental results, the accuracy of estimating the remaining cycle time could be improved by up to 64 per cent with the proposed methodology.
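The proportional adjustment idea can be illustrated with a minimal sketch (not the authors' implementation): a total cycle time estimate, such as one produced by an FBPN, is scaled by the fraction of the job's route that has not yet been completed. The function name and the step-count formulation are assumptions for illustration.

```python
def remaining_cycle_time(estimated_total, steps_done, total_steps):
    """Proportional adjustment (hypothetical form): scale a total cycle
    time estimate by the fraction of route steps still to be processed."""
    frac_remaining = (total_steps - steps_done) / total_steps
    return estimated_total * frac_remaining

# A job estimated at 1200 h total, with 150 of 600 steps done,
# would have 900 h remaining under this proportional rule.
```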


2012 ◽  
Vol 2 (2) ◽  
pp. 50-67 ◽  
Author(s):  
Toly Chen

Variable replacement is a well-known technique for improving forecasting performance, but it has not been applied to job cycle time forecasting, a critical task for a semiconductor manufacturer. To this end, in this study principal component analysis (PCA) is applied to enhance the forecasting performance of the fuzzy back-propagation network (FBPN) approach. First, PCA is applied to replace the original variables with variables that are independent of each other, which become the new inputs to the FBPN. Subsequently, an FBPN is constructed to estimate the cycle times of jobs. According to the results of a case study, the hybrid PCA-FBPN approach was more efficient while achieving satisfactory estimation performance.
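The variable-replacement step can be sketched as follows (a generic PCA via SVD, not the paper's specific pipeline): correlated job attributes are projected onto their leading principal components, which are mutually uncorrelated and serve as the network's new inputs.

```python
import numpy as np

def pca_inputs(X, k):
    """Replace correlated input variables with their first k principal
    component scores (columns are uncorrelated by construction)."""
    Xc = X - X.mean(axis=0)                    # centre each variable
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                       # scores on first k components
```

The transformed columns have a diagonal covariance matrix, which is what makes them suitable replacements for the original, possibly collinear, variables.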


2009 ◽  
Vol 19 (06) ◽  
pp. 437-448 ◽  
Author(s):  
MD. ASADUZZAMAN ◽  
MD. SHAHJAHAN ◽  
KAZUYUKI MURASE

Multilayer feed-forward neural networks are widely used and are trained by minimizing an error function. Back-propagation (BP) is a popular training method for multilayer networks, but it often suffers from slow convergence. To make learning faster, we propose 'Fusion of Activation Functions' (FAF), in which different conventional activation functions (AFs) are combined to compute the final activation. This has not yet been studied extensively. One of the sub-goals of the paper is to examine the role of linear AFs in the combination. We investigate whether FAF can make learning faster. The validity of the proposed method is examined through simulations on nine challenging real-world benchmark classification and time-series prediction problems. FAF has been applied to the 2-bit, 3-bit and 4-bit parity, breast cancer, diabetes, heart disease, iris, wine, glass and soybean classification problems. The algorithm is also tested on the Mackey-Glass chaotic time-series prediction problem, and is shown to outperform AFs used independently in BP, such as sigmoid (SIG), arctangent (ATAN) and logarithmic (LOG).
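A minimal sketch of the fusion idea, under assumptions: the exact combination rule and weights used in the paper are not given here, so this illustrates one plausible form, a weighted sum of the three named AFs, with a common sign-symmetric variant standing in for the logarithmic AF.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def log_af(x):
    """Logarithmic activation (one common form): sign(x) * ln(1 + |x|)."""
    return np.sign(x) * np.log1p(np.abs(x))

def faf(x, w=(0.4, 0.3, 0.3)):
    """Fusion of Activation Functions (illustrative): a weighted
    combination of sigmoid, arctangent and logarithmic activations."""
    return w[0] * sigmoid(x) + w[1] * np.arctan(x) + w[2] * log_af(x)
```

At x = 0 only the sigmoid term contributes (0.4 × 0.5 = 0.2), since arctan and the logarithmic AF both vanish there.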


2017 ◽  
Vol 14 (2) ◽  
pp. 467-490 ◽  
Author(s):  
Predrag Pecev ◽  
Milos Rackovic

The subject of the research presented in this paper is to model a neural network structure, and an appropriate training algorithm, best suited to the prediction of multiple dependent time series. The basic idea is to exploit neural networks to predict the synchronized movement of basketball referees during a basketball action. Representing the time series stemming from this problem with traditional multilayer perceptron (MLP) neural networks leads to a paradoxical backward time-lapse effect, in which certain input and hidden-layer nodes influence output nodes that correspond to earlier moments in time. This paper describes the research and analysis of different methods of overcoming this problem, and is essentially split into two parts. The first part covers efforts to configure the training sets of standard back-propagation MLP networks so as to reduce the backward time-lapse effect of input and hidden-layer nodes on output nodes. The second part focuses on the results provided by a new neural network structure called LTR-MDTS. The design of LTR-MDTS builds on standard MLP neural networks, with certain left-to-right synapses removed to eliminate the aforementioned backward time-lapse effect on the output nodes.
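The synapse-removal idea can be sketched with a connectivity mask (an illustrative construction, not the paper's LTR-MDTS architecture): if unit i and output unit j are indexed by time, connections running "backwards in time" (j earlier than i) are zeroed out, so no later input can influence an earlier output.

```python
import numpy as np

def ltr_mask(n):
    """Hypothetical left-to-right connectivity mask: unit i may connect
    to output unit j only when j's time index is >= i's (upper triangle)."""
    return np.triu(np.ones((n, n)))

# Applying the mask element-wise removes backward-in-time synapses:
rng = np.random.default_rng(42)
W = rng.normal(size=(4, 4))
W_ltr = W * ltr_mask(4)    # entries below the diagonal become zero
```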


Author(s):  
Julia El Zini ◽  
Yara Rizk ◽  
Mariette Awad

Recurrent neural networks (RNNs) have been successfully applied to various sequential decision-making tasks, natural language processing applications, and time-series predictions. Such networks are usually trained through back-propagation through time (BPTT), which is prohibitively expensive, especially as the length of the time dependencies and the number of hidden neurons increase. To reduce training time, extreme learning machines (ELMs) have recently been applied to RNN training, reaching a 99% speedup on some applications. Due to its non-iterative nature, ELM training, when parallelized, has the potential to reach higher speedups than BPTT. In this work, we present Opt-PR-ELM, an optimized parallel RNN training algorithm based on ELM that takes advantage of GPU shared memory and of parallel QR factorization algorithms to efficiently reach optimal solutions. A theoretical analysis of the proposed algorithm is presented for six RNN architectures, including LSTM and GRU, and its performance is empirically tested on ten time-series prediction applications. Opt-PR-ELM is shown to reach up to a 461-fold speedup over its sequential counterpart and to require up to 20 times less time to train than parallel BPTT. Such high speedups over new-generation CPUs are extremely crucial in real-time applications and IoT environments.
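The non-iterative core of ELM training can be sketched as follows (a serial illustration, not the parallel GPU algorithm of the paper): given the hidden-state matrix H produced by a fixed recurrent layer and the targets T, the output weights are obtained in closed form by QR-based least squares rather than by iterative BPTT updates.

```python
import numpy as np

def elm_output_weights(H, T):
    """ELM-style training: solve the output layer in one shot via a
    QR-factorized least-squares problem, H @ W ≈ T, instead of BPTT."""
    Q, R = np.linalg.qr(H)          # reduced QR: H = Q @ R
    return np.linalg.solve(R, Q.T @ T)
```

QR factorization is the step the paper parallelizes; in this sketch it simply replaces the normal-equations solve for numerical stability.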


Author(s):  
Mohammed H Adnan ◽  
Mustafa Muneer Isma’eel

The research aims to estimate stock returns using artificial neural networks and to test the performance of the error back-propagation network for its effectiveness and accuracy in predicting stock returns, its potential in the field of financial markets, and its ability to rationalize investor decisions. A sample of (38) stocks listed on the Iraq Stock Exchange was selected, covering a time series of (120) months spanning the years (2010-2019). The research found a weakness in the error back-propagation network's training and in its identification of the data patterns of stock returns when these are fed to the network as individual inputs: the high fluctuation in the rates of return leads to variation in proportions and in different directions, both negative and positive.
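One common remedy for the fluctuation problem described above (a generic preprocessing sketch, not a step reported by the authors) is to rescale the raw monthly returns to a bounded range before feeding them to a back-propagation network, so that large swings in either direction do not dominate training.

```python
import numpy as np

def scale_returns(r):
    """Min-max scale a vector of monthly returns to [0, 1]; a standard
    preprocessing step when highly fluctuating raw inputs train poorly."""
    lo, hi = r.min(), r.max()
    return (r - lo) / (hi - lo)
```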

