Manufacturing Process Modeling and Optimization Based on Multi-Layer Perceptron Network

1998 ◽  
Vol 120 (1) ◽  
pp. 109-119 ◽  
Author(s):  
T. Warren Liao ◽  
L. J. Chen

It has been shown that a manufacturing process can be modeled (learned) with a Multi-Layer Perceptron (MLP) neural network and then optimized directly using the learned network. This paper extends the previous work by examining several different MLP training algorithms for manufacturing process modeling and three methods for process optimization. The transformation method is used to convert a constrained objective function into an unconstrained one, which is then used as the error function in the process optimization stage. The simulation results indicate that: (i) the conjugate gradient algorithms with backtracking line search outperform the standard BP algorithm in convergence speed; (ii) the neural network approaches can yield more accurate process models than the regression method; (iii) the BP with simulated annealing method is the most reliable optimization method for generating the best optimal solution; and (iv) process optimization performed directly on the neural network is possible but cannot be fully automated, especially when the process concerned is a mixed integer problem.
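
A minimal sketch of the overall idea, assuming hypothetical process data, a made-up constraint, and a generic optimizer in place of the paper's BP-with-simulated-annealing search: an MLP is fitted to process data, and its inputs are then optimized against a penalty-transformed (unconstrained) objective.

```python
# Sketch: learn a process model with an MLP, then optimize its inputs under a
# constraint by minimizing a penalty-transformed (unconstrained) objective.
# Data, constraint, and penalty weight are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Hypothetical process data: two parameters (speed, feed) -> quality target.
X = rng.uniform([100.0, 0.1], [500.0, 1.0], size=(200, 2))
y = -((X[:, 0] - 300.0) / 100.0) ** 2 - ((X[:, 1] - 0.6) / 0.2) ** 2 + rng.normal(0, 0.05, 200)

model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0).fit(X, y)

def penalized_objective(x, rho=1e3):
    """Transform the constrained problem (maximize quality s.t. speed*feed <= 250)
    into an unconstrained one via an exterior penalty term."""
    quality = model.predict(x.reshape(1, -1))[0]
    violation = max(0.0, x[0] * x[1] - 250.0)   # hypothetical constraint
    return -quality + rho * violation ** 2       # minimize negative quality

x0 = np.array([250.0, 0.5])                      # starting guess
res = minimize(penalized_objective, x0, method="Nelder-Mead")
print("optimal parameters:", res.x, "predicted quality:", -res.fun)
```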

Author(s):  
Chaitanya Vempati ◽  
Matthew I. Campbell

Neural networks are increasingly becoming a useful and popular choice for process modeling. The success of neural networks in effectively modeling a certain problem depends on the topology of the neural network. Generating topologies manually relies on previous neural network experience and is tedious and difficult. Hence there is a rising need for a method that generates neural network topologies for different problems automatically. Current methods such as growing, pruning and using genetic algorithms for this task are very complicated and do not explore all the possible topologies. This paper presents a novel method of automatically generating neural networks using a graph grammar. The approach involves representing the neural network as a graph and defining graph transformation rules to generate the topologies. The approach is simple, efficient and has the ability to create topologies of varying complexity. Two example problems are presented to demonstrate the power of our approach.
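
A small sketch of the underlying idea, not the paper's actual grammar: the network topology is held as a directed graph, and production rules rewrite that graph, for example by splitting an edge with a new hidden node or adding a connection. Rule names and the random-growth loop are illustrative assumptions.

```python
# Sketch of grammar-based topology generation: a network is a directed graph,
# and rules rewrite the graph to grow new topologies. Illustrative only.
import random

def seed_graph(n_in, n_out):
    nodes = [f"in{i}" for i in range(n_in)] + [f"out{j}" for j in range(n_out)]
    edges = {(f"in{i}", f"out{j}") for i in range(n_in) for j in range(n_out)}
    return nodes, edges

def rule_split_edge(nodes, edges, counter):
    """Replace edge (u, v) with u -> h -> v, introducing hidden node h."""
    u, v = random.choice(sorted(edges))
    h = f"h{counter}"
    nodes.append(h)
    edges.discard((u, v))
    edges.update({(u, h), (h, v)})
    return nodes, edges

def rule_add_connection(nodes, edges):
    """Add an edge between two unconnected nodes (cycle checks omitted for brevity)."""
    candidates = [(u, v) for u in nodes for v in nodes
                  if u != v and not u.startswith("out") and not v.startswith("in")
                  and (u, v) not in edges]
    if candidates:
        edges.add(random.choice(candidates))
    return nodes, edges

random.seed(1)
nodes, edges = seed_graph(n_in=2, n_out=1)
for step in range(5):                      # apply rules to grow a topology
    if random.random() < 0.5:
        nodes, edges = rule_split_edge(nodes, edges, counter=step)
    else:
        nodes, edges = rule_add_connection(nodes, edges)
print("nodes:", nodes)
print("edges:", sorted(edges))
```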


Author(s):  
Raghuram Mandyam Annasamy ◽  
Katia Sycara

Deep reinforcement learning techniques have demonstrated superior performance in a wide variety of environments. While improvements in training algorithms continue at a brisk pace, theoretical and empirical studies of what these networks actually learn lag far behind. In this paper we propose an interpretable neural network architecture for Q-learning which provides a global explanation of the model’s behavior using key-value memories, attention and reconstructible embeddings. With a directed exploration strategy, our model can reach training rewards comparable to the state-of-the-art deep Q-learning models. However, results suggest that the features extracted by the neural network are extremely shallow, and subsequent testing on out-of-sample examples shows that the agent can easily overfit to trajectories seen during training.
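
A minimal numerical sketch of the key-value readout idea: a state embedding attends over learned memory keys and the Q-values are an attention-weighted combination of the stored values. The shapes and the softmax readout are assumptions; the paper's full architecture also includes reconstructible embeddings and a directed exploration strategy.

```python
# Sketch: key-value memory attention producing Q-values from a state embedding.
import numpy as np

rng = np.random.default_rng(0)
d_embed, n_slots, n_actions = 16, 8, 4

state_embedding = rng.normal(size=d_embed)          # output of a state encoder
keys = rng.normal(size=(n_slots, d_embed))          # learned memory keys
values = rng.normal(size=(n_slots, n_actions))      # learned per-slot Q contributions

scores = keys @ state_embedding / np.sqrt(d_embed)  # scaled dot-product attention
attention = np.exp(scores - scores.max())
attention /= attention.sum()

q_values = attention @ values                        # (n_actions,) Q estimate
print("attention over memory slots:", np.round(attention, 3))
print("Q-values:", np.round(q_values, 3))
print("greedy action:", int(np.argmax(q_values)))
```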


Robotica ◽  
1997 ◽  
Vol 15 (1) ◽  
pp. 3-10 ◽  
Author(s):  
Ziqiang Mao ◽  
T. C. Hsia

This paper investigates a neural network approach to solving the inverse kinematics problem of redundant robot manipulators in an environment with obstacles. The proposed solution technique requires only knowledge of the robot forward kinematics functions, and the neural network is trained in the inverse modeling manner. Training algorithms are developed for both the obstacle-free case and the obstacle avoidance case. For the obstacle-free case, sample points can be selected in the work space as training patterns for the neural network. For the obstacle avoidance case, the training algorithm is augmented with a distance penalty function. A ball-covering object modeling technique is employed to calculate the distances between the robot links and the objects in the work space; this technique is shown to be very computationally efficient. Extensive simulation results are presented to illustrate the success of the proposed solution schemes. Experimental results obtained on a PUMA 560 robot manipulator are also presented.
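
A small sketch of the ball-covering distance penalty, under assumed geometry: links and obstacles are covered by spheres, so the link-obstacle distance reduces to the centre distance minus the two radii, and a quadratic penalty is added when the clearance falls below a margin. Radii, positions, and the penalty weight are illustrative values, not the paper's.

```python
# Sketch: ball-covering clearance and a distance penalty term for training.
import numpy as np

def ball_distance(c1, r1, c2, r2):
    """Signed clearance between two covering balls (negative => overlap)."""
    return np.linalg.norm(c1 - c2) - (r1 + r2)

def obstacle_penalty(link_balls, obstacle_balls, margin=0.05, weight=10.0):
    """Quadratic penalty on any link ball closer than `margin` to any obstacle
    ball; this term would be added to the position error during training."""
    penalty = 0.0
    for lc, lr in link_balls:
        for oc, orad in obstacle_balls:
            clearance = ball_distance(lc, lr, oc, orad)
            if clearance < margin:
                penalty += weight * (margin - clearance) ** 2
    return penalty

# Two balls covering a link, one ball covering an obstacle (metres).
link_balls = [(np.array([0.3, 0.0, 0.4]), 0.06), (np.array([0.5, 0.0, 0.4]), 0.06)]
obstacle_balls = [(np.array([0.55, 0.0, 0.42]), 0.05)]
print("penalty term:", obstacle_penalty(link_balls, obstacle_balls))
```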


2012 ◽  
Vol 468-471 ◽  
pp. 607-612
Author(s):  
Shi Ping Zhang ◽  
Yi Chao Ding ◽  
Jing Wang ◽  
Yuan Hui Li

It is difficult to build a strict mathematical model for WEDM because of the complexity of the machining process and the nonlinear relation between process parameters and process targets. Neural networks are well suited to modeling such complex systems: they are self-organizing, capable of self-learning and associative memory, and offer distributed parallel processing and high robustness. This paper therefore applies the RBF neural network to the process modeling of WEDM.
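
A minimal sketch of an RBF network for this kind of process modeling, assuming synthetic placeholder data rather than measured WEDM data: Gaussian centres are picked by k-means over the process-parameter space and the output-layer weights are solved by least squares.

```python
# Sketch: RBF network mapping process parameters to a process target.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical parameters (pulse-on time, current) -> cutting speed.
X = rng.uniform([1.0, 1.0], [10.0, 6.0], size=(150, 2))
y = 0.8 * np.sqrt(X[:, 0] * X[:, 1]) + rng.normal(0, 0.05, 150)

n_centres, sigma = 12, 1.5
centres = KMeans(n_clusters=n_centres, n_init=10, random_state=0).fit(X).cluster_centers_

def rbf_features(X, centres, sigma):
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

Phi = rbf_features(X, centres, sigma)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)          # output-layer weights

X_new = np.array([[5.0, 3.0]])
print("predicted cutting speed:", rbf_features(X_new, centres, sigma) @ w)
```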


2012 ◽  
Vol 463-464 ◽  
pp. 1011-1016 ◽  
Author(s):  
Adrian Olaru ◽  
Serban Olaru ◽  
Dan Paune ◽  
Oprean Aurel

The paper presents an assisted method for constructing simple and complex neural networks and simulating them on-line. On-line simulation of the most important simple and complex networks makes it possible to determine the influence of all network parameters, such as the input data, the weight and bias matrices, the sensitive (activation) functions, closed loops, and time delays. Several important neuron types, transfer functions, neuron weights and biases, and complex layers composed of different neuron types are shown. By using the proper virtual LabVIEW instrumentation on-line, the influence of the network parameters on the number of iterations needed to reduce the mean square error to the target was established. The numerical simulation used the proper teaching law and proper virtual instrumentation. In the optimization step of the research, the error function between the output and the target was minimized.
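
A small sketch of the kind of experiment described above, in place of the LabVIEW instrumentation: a one-hidden-layer network is trained on a small data set and the number of iterations needed to drive the mean square error below a target is counted for different hidden-layer sizes. The data, target MSE, and learning rate are illustrative assumptions.

```python
# Sketch: count iterations until the MSE reaches a target, per network size.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 1))
y = np.sin(np.pi * X)

def iterations_to_target(n_hidden, lr=0.1, target_mse=1e-3, max_iter=20000):
    W1 = rng.normal(0, 0.5, (1, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 0.5, (n_hidden, 1)); b2 = np.zeros(1)
    for it in range(1, max_iter + 1):
        h = np.tanh(X @ W1 + b1)                  # hidden layer
        out = h @ W2 + b2                          # linear output
        err = out - y
        if float((err ** 2).mean()) < target_mse:
            return it
        # Backpropagation (batch gradient descent).
        g_out = 2 * err / len(X)
        g_h = (g_out @ W2.T) * (1 - h ** 2)
        W2 -= lr * h.T @ g_out;  b2 -= lr * g_out.sum(axis=0)
        W1 -= lr * X.T @ g_h;    b1 -= lr * g_h.sum(axis=0)
    return max_iter

for n in (2, 5, 10):
    print(f"{n} hidden neurons -> {iterations_to_target(n)} iterations to target MSE")
```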


Author(s):  
S. Hensel ◽  
S. Goebbels ◽  
M. Kada

The paper describes a workflow for generating LoD3 CityGML models (i.e. semantic building models with structured facades) from textured LoD2 CityGML models by adding window and door objects. For each wall texture, bounding boxes of windows and doors are detected using “Faster R-CNN”, a deep neural network. We evaluate results for textures with different resolutions on the ICG Graz50 facade dataset. In general, the detected bounding boxes match the rectangular shape of most wall openings very well, so no further classification of shapes is required. Windows are typically aligned to rows and columns, and only a few different types of windows exist for each facade. However, the neural network proposes rectangles of varying sizes that are not always aligned perfectly. We therefore use post-processing to obtain a more realistic appearance of the facades. Window and door rectangles are aligned by solving a mixed integer linear optimization problem, which automatically clusters these openings into a few classes of window and door types. Furthermore, a-priori knowledge of the number of clusters is not required.
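
A simplified sketch of the mixed integer alignment step, restricted to snapping the detected left edges of windows to a small set of shared column positions (which simultaneously clusters them). The PuLP modelling library is assumed to be available, and the edge values are made-up detections in pixels; the paper's full formulation also handles rows, widths, heights, and door types.

```python
# Sketch: MILP that snaps window left edges to few shared columns (clustering).
import pulp

detected_left_edges = [101, 98, 103, 240, 244, 380, 377, 382]
candidates = sorted(set(detected_left_edges))        # candidate column positions
penalty_per_column = 15.0                            # cost of opening a column

prob = pulp.LpProblem("align_window_columns", pulp.LpMinimize)
assign = {(i, k): pulp.LpVariable(f"assign_{i}_{k}", cat="Binary")
          for i in range(len(detected_left_edges)) for k in range(len(candidates))}
use = {k: pulp.LpVariable(f"use_{k}", cat="Binary") for k in range(len(candidates))}

# Objective: total snapping distance plus a cost for every distinct column used.
prob += (pulp.lpSum(assign[i, k] * abs(detected_left_edges[i] - candidates[k])
                    for i, k in assign)
         + penalty_per_column * pulp.lpSum(use.values()))

for i in range(len(detected_left_edges)):            # each window goes to one column
    prob += pulp.lpSum(assign[i, k] for k in range(len(candidates))) == 1
for i, k in assign:                                   # a column must be opened to be used
    prob += assign[i, k] <= use[k]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for i, x in enumerate(detected_left_edges):
    k = next(k for k in range(len(candidates)) if assign[i, k].value() > 0.5)
    print(f"window {i}: detected x={x} -> aligned x={candidates[k]}")
```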


Author(s):  
A.B.M. Wijaya ◽  
D.S. Ikawahyuni ◽  
Rospita Gea ◽  
Febe Maedjaja

Diabetes in Indonesia has been perceived as a grave health problem and has been a concern since the early 1980s [2]. According to the IDF, the prevalence of diabetes in adults in Indonesia was 6.2%, with the total number of cases amounting to 10,681,400. Moreover, Indonesia was also among the top ten countries with the highest number of diabetes cases in 2013. This research investigates the role of the Deep Belief Network (DBN) and NeuroEvolution of Augmenting Topologies (NEAT) in solving regression problems for detecting diabetes. The DBN works by processing the data in unsupervised network architectures: the algorithm stacks Restricted Boltzmann Machines (RBMs), so the output of the first RBM becomes the input of the next RBM. The NEAT algorithm, on the other hand, works by searching over neural network architectures and evaluating each architecture with a multi-layer perceptron; collaboration with a genetic algorithm is the key process in architecture development. The results showed that the DBN could be utilized to provide the initial weights for a backpropagation neural network at 22.61% on average, while the NEAT algorithm, collaborating with a multi-layer perceptron, could solve this regression problem with 74.5% confidence. This work also reveals potential future work: combining the backpropagation algorithm with NEAT as an evaluation function, and combining it with DBN algorithms to process the produced initial weights.
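
A minimal sketch of the stacked-RBM idea described above, on synthetic binary placeholder data rather than the diabetes features: the hidden activations of the first RBM become the input of the second, and the pretrained features then feed a supervised model. scikit-learn's BernoulliRBM stands in for the paper's DBN implementation.

```python
# Sketch: greedy layer-wise RBM stacking (DBN-style pretraining).
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = (rng.uniform(size=(300, 20)) > 0.5).astype(float)     # placeholder binary features
y = (X[:, :5].sum(axis=1) > 2.5).astype(int)               # placeholder labels

rbm1 = BernoulliRBM(n_components=16, learning_rate=0.05, n_iter=30, random_state=0)
rbm2 = BernoulliRBM(n_components=8, learning_rate=0.05, n_iter=30, random_state=0)

h1 = rbm1.fit_transform(X)        # first RBM trained on the raw input
h2 = rbm2.fit_transform(h1)       # its output is the input of the second RBM

# The weights rbm1.components_ / rbm2.components_ could initialise a
# backpropagation network; here the pretrained features feed a simple classifier.
clf = LogisticRegression(max_iter=1000).fit(h2, y)
print("training accuracy on pretrained features:", clf.score(h2, y))
```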


2013 ◽  
Vol 325-326 ◽  
pp. 970-983 ◽  
Author(s):  
Adrian Olaru ◽  
Serban Olaru ◽  
Aurel Oprean

In the optimisation stage of a system, one of the most important steps is optimising the dynamic behaviour of all its elements, giving priority to the elements with low natural frequencies, such as motors. The paper shows how the dynamic behaviour of elements and systems can be optimised very easily using proper LabVIEW instrumentation and applying transfer functions and neural network theory. By applying the virtual LabVIEW instrumentation it is possible to choose on-line the optimal values of each constructive and functional parameter of the elements and systems in order to obtain a good dynamic response: maximal acceleration without vibration, minimal response time and maximal precision. The paper presents some of the most important transfer functions used in the assisted analysis of elements and systems, together with some practical results of the assisted optimisation using the neural network method. In the research, several different ways of improving the convergence process were used, for example: applying a time delay to the first and second outputs of the neural layers; using recursive links with time delay; and replacing the simple sigmoid sensitive function with the bipolar sigmoid (hyperbolic tangent) sensitive function. By on-line simulation of the neural network it was possible to determine the influence of all network parameters, such as the input data, the weight and bias matrices, the sensitive functions, closed loops and time delay, on the gradient errors during the convergence process. In the optimization research we used the minimization of the gradient error function between the output and the target.
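
A small sketch of one of the comparisons named above: the unipolar sigmoid versus the bipolar tanh sensitive function, in an otherwise identical one-hidden-layer network trained by batch gradient descent. The data set, learning rate, and iteration budget are illustrative assumptions, not taken from the paper.

```python
# Sketch: compare sigmoid vs. bipolar tanh activations under identical training.
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(60, 2))
y = X[:, :1] * X[:, 1:]                            # simple nonlinear target

def train(activation, d_activation, n_hidden=8, lr=0.2, n_iter=3000):
    W1 = rng.normal(0, 0.5, (2, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 0.5, (n_hidden, 1)); b2 = np.zeros(1)
    for _ in range(n_iter):
        a = X @ W1 + b1
        h = activation(a)
        out = h @ W2 + b2
        err = out - y
        g_out = 2 * err / len(X)
        g_h = (g_out @ W2.T) * d_activation(a, h)
        W2 -= lr * h.T @ g_out;  b2 -= lr * g_out.sum(axis=0)
        W1 -= lr * X.T @ g_h;    b1 -= lr * g_h.sum(axis=0)
    return float((err ** 2).mean())

sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
print("final MSE, unipolar sigmoid:", train(sigmoid, lambda a, h: h * (1 - h)))
print("final MSE, bipolar tanh:    ", train(np.tanh, lambda a, h: 1 - h ** 2))
```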


2012 ◽  
Vol 463-464 ◽  
pp. 1094-1097 ◽  
Author(s):  
Adrian Olaru ◽  
Serban Olaru ◽  
Dan Paune

The paper presents the assisted research of a new model of digital dynamic neural network, using proper virtual LabVIEW instrumentation and a proper mathematical model. In the research, several different ways of improving the convergence process were used, for example: applying a time delay to the first and second outputs of the neural layers; using recursive links with time delay; and replacing the simple sigmoid sensitive function with the bipolar sigmoid (hyperbolic tangent) sensitive function. By on-line simulation of the neural network it is possible to determine the influence of all network parameters, such as the input data, the weight and bias matrices, the sensitive functions, closed loops and time delay, on the gradient errors during the convergence process. By on-line use of the proper virtual LabVIEW instrumentation, the influence of network parameters such as the number of input vector data and the number of neurons in each layer on the number of iterations needed to reduce the mean square error to the target was established. In the optimization research we used the minimization of the gradient error function between the output and the target.
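
A minimal sketch of the "dynamic" ingredients named above: a layer whose output is fed back through a recurrent link with a one-step time delay, so the response at step t depends on the delayed output at step t-1. The weights and input sequence are illustrative values, not the paper's model.

```python
# Sketch: a dynamic layer with a recurrent link and a one-step time delay.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_steps = 1, 4, 10

W_in = rng.normal(0, 0.5, (n_in, n_hidden))       # input weights
W_rec = rng.normal(0, 0.3, (n_hidden, n_hidden))  # recurrent (delayed) link
b = np.zeros(n_hidden)

u = np.sin(np.linspace(0, np.pi, n_steps)).reshape(-1, 1)   # input sequence
h_delayed = np.zeros(n_hidden)                     # time-delay buffer (output at t-1)

for t in range(n_steps):
    h = np.tanh(u[t] @ W_in + h_delayed @ W_rec + b)   # bipolar tanh sensitive function
    print(f"step {t}: mean activation = {h.mean():+.3f}")
    h_delayed = h                                        # delay the output by one step
```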

