Neural Network-Based Transfer Learning of Manipulator Inverse Displacement Analysis

2021 ◽  
Vol 13 (3) ◽  
Author(s):  
Houcheng Tang ◽  
Leila Notash

Abstract In this paper, the feasibility of applying transfer learning to the modeling of robot manipulators is examined. A neural network-based transfer learning approach to the inverse displacement analysis of robot manipulators is studied. Neural networks with different structures are trained on data from different configurations of a manipulator. Transfer learning is then conducted between manipulators with different geometric layouts. The training is performed both on neural networks with pretrained initial parameters and on neural networks with random initialization. To investigate the rate of convergence of the data fitting comprehensively, different values of the performance target are defined, and the computing epochs and performance measures are compared. It is shown that, depending on the structure of the neural network, the proposed transfer learning can accelerate the training process and achieve higher accuracy. The transfer learning approach improves performance to different degrees for different datasets.
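
The pretrained-versus-random-initialization comparison can be made concrete with a small sketch. The following is illustrative only: the planar two-link geometries, network sizes, and MSE performance target are assumptions chosen for illustration, not the configurations studied in the paper.

```python
# Minimal sketch: transfer learning of inverse displacement between two manipulators
# (planar 2R geometries and all hyperparameters are illustrative assumptions).
import numpy as np
import torch
from torch import nn

def ik_data(l1, l2, n=2000, seed=0):
    """Sample joint angles of a planar 2R arm; return (end-effector position, joint angles) pairs."""
    rng = np.random.default_rng(seed)
    q1 = rng.uniform(-np.pi / 2, np.pi / 2, n)
    q2 = rng.uniform(0.2, np.pi - 0.2, n)          # single elbow branch keeps the inverse map single-valued
    x = l1 * np.cos(q1) + l2 * np.cos(q1 + q2)
    y = l1 * np.sin(q1) + l2 * np.sin(q1 + q2)
    X = torch.tensor(np.column_stack([x, y]), dtype=torch.float32)
    Y = torch.tensor(np.column_stack([q1, q2]), dtype=torch.float32)
    return X, Y

def make_net():
    return nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 2))

def epochs_to_target(net, X, Y, target_mse=0.02, max_epochs=3000):
    """Full-batch Adam training; returns the number of epochs needed to reach the performance target."""
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for epoch in range(1, max_epochs + 1):
        opt.zero_grad()
        loss = nn.functional.mse_loss(net(X), Y)
        loss.backward()
        opt.step()
        if loss.item() < target_mse:
            return epoch
    return max_epochs

# Source manipulator and a target manipulator with a different geometric layout (link lengths).
Xs, Ys = ik_data(1.0, 1.0)
Xt, Yt = ik_data(1.2, 0.8, seed=1)

source_net = make_net()
epochs_to_target(source_net, Xs, Ys)

pretrained = make_net()
pretrained.load_state_dict(source_net.state_dict())   # transfer the learned parameters
scratch = make_net()                                   # random initialization baseline

print("epochs to target, pretrained init:", epochs_to_target(pretrained, Xt, Yt))
print("epochs to target, random init:   ", epochs_to_target(scratch, Xt, Yt))
```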


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Idris Kharroubi ◽  
Thomas Lim ◽  
Xavier Warin

Abstract We study the approximation of backward stochastic differential equations (BSDEs for short) with a constraint on the gains process. We first discretize the constraint by applying a so-called facelift operator at the times of a grid. We show that this discretely constrained BSDE converges to the continuously constrained one as the mesh of the grid goes to zero. We then focus on the approximation of the discretely constrained BSDE, for which we adopt a machine learning approach. We show that the facelift can be approximated by an optimization problem over a class of neural networks, under constraints on the neural network and its derivative. We then derive an algorithm that converges to the discretely constrained BSDE as the number of neurons goes to infinity. We conclude with numerical experiments.
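
The facelift operator itself is easy to illustrate in one dimension. The rough sketch below evaluates it by brute force on a grid; the one-dimensional box constraint and the digital terminal condition are assumptions for illustration, and the paper instead approximates the facelift by optimizing a neural network subject to constraints on the network and its derivative.

```python
# Brute-force illustration of the facelift F[g](x) = sup_u [ g(x + u) - kappa * |u| ]
# for a one-dimensional box constraint |Z| <= kappa (an assumed toy setting).
import numpy as np

def facelift(g, kappa, x_grid):
    """For a bounded g, this is the smallest kappa-Lipschitz function dominating g."""
    gx = g(x_grid)
    out = np.empty_like(x_grid)
    for i, x in enumerate(x_grid):
        out[i] = np.max(gx - kappa * np.abs(x_grid - x))
    return out

g = lambda x: (x >= 1.0).astype(float)     # digital-type terminal condition (illustrative)
x = np.linspace(-2.0, 4.0, 601)
lifted = facelift(g, 0.5, x)
print(lifted[::150])                       # ramps up with slope kappa before the jump of g
```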


2019 ◽  
Vol 2019 (02) ◽  
pp. 89-98
Author(s):  
Vijayakumar T

Predicting the category of a tumor and the type of cancer at an early stage remains an essential step in identifying the severity of the disease and the treatments available for it. Neural networks, which function similarly to the human nervous system, are widely utilized in tumor investigation and cancer prediction. The paper presents an analysis of the performance of neural networks such as the FNN (Feed-Forward Neural Network), RNN (Recurrent Neural Network), and CNN (Convolutional Neural Network) in investigating tumors and predicting cancer. The results obtained by evaluating the neural networks on the Breast Cancer Wisconsin (Original) dataset show that the CNN provides 43% better prediction than the FNN and 25% better prediction than the RNN.
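
A minimal feed-forward baseline on this dataset can be sketched with scikit-learn; the architecture and train/test split below are illustrative assumptions and do not reproduce the FNN, RNN, or CNN configurations compared in the paper.

```python
# Feed-forward baseline on the Breast Cancer Wisconsin data (sketch only).
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0, stratify=y)

scaler = StandardScaler().fit(X_train)                     # standardize the 30 tabular features
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
clf.fit(scaler.transform(X_train), y_train)

print("test accuracy:", accuracy_score(y_test, clf.predict(scaler.transform(X_test))))
```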


Author(s):  
Daniela Danciu

Neural networks, both natural and artificial, are characterized by two kinds of dynamics. The first one is concerned with what we would call "learning dynamics". The second one is the intrinsic dynamics of the neural network viewed as a dynamical system after the weights have been established via learning. The chapter deals with the second kind of dynamics. More precisely, since the emergent computational capabilities of a recurrent neural network can be achieved provided it has suitable dynamical properties when viewed as a system with several equilibria, the chapter deals with those qualitative properties connected to the achievement of such dynamical properties as global asymptotics and gradient-like behavior. In the case of neural networks with delays, these aspects are reformulated in accordance with the state of the art of the theory of time-delay dynamical systems.
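
Gradient-like behavior can be illustrated with a small simulation: for the additive Hopfield-type model with a symmetric interconnection matrix (the classical sufficient condition), trajectories typically settle at an equilibrium. The model and parameters below are a generic sketch, not taken from the chapter.

```python
# Simulating a recurrent (Hopfield-type) network as a dynamical system with several equilibria.
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))
W = (A + A.T) / 2.0                  # symmetric interconnection matrix
b = rng.standard_normal(n)

def rhs(x):
    """Additive Hopfield-type model: x' = -x + W tanh(x) + b."""
    return -x + W @ np.tanh(x) + b

x = rng.standard_normal(n)
dt, steps = 0.01, 10000
for _ in range(steps):               # forward-Euler integration of the network dynamics
    x = x + dt * rhs(x)

print("state at t = %.0f:" % (dt * steps), np.round(x, 3))
print("residual |x'| at that state:", np.linalg.norm(rhs(x)))   # ~0 at an equilibrium
```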


2007 ◽  
Vol 11 (6) ◽  
pp. 1883-1896 ◽  
Author(s):  
A. Piotrowski ◽  
S. G. Wallis ◽  
J. J. Napiórkowski ◽  
P. M. Rowiński

Abstract. The prediction of temporal concentration profiles of a transported pollutant in a river is still a subject of ongoing research efforts worldwide. The present paper studies the possibility of using Multi-Layer Perceptron Neural Networks to evaluate the whole concentration versus time profile at several cross-sections of a river under various flow conditions, using as little information about the river system as possible. In contrast with earlier neural-network-based work on longitudinal dispersion coefficients, this new approach relies more heavily on measurements of concentration collected during tracer tests over a range of flow conditions, but fewer hydraulic and morphological data are needed. The study is based upon 26 tracer experiments performed in a small river in Edinburgh, UK (the Murray Burn), at various flow rates in a 540 m long reach. The only data used in this study were concentration measurements collected at 4 cross-sections, the distances between the cross-sections and the injection site, time, as well as flow rate and water velocity, derived from the data measured at the 1st and 2nd cross-sections. The four main features of the concentration versus time profile at a particular cross-section, namely the peak concentration, the arrival time of the peak at the cross-section, and the shapes of the rising and falling limbs of the profile, are modeled, and for each of them a separately designed neural network was used. A variant was also investigated in which the conservation of the injected mass was assured by adjusting the predicted peak concentration. The neural network methods were compared with the unit peak attenuation curve concept. In general, the neural networks predicted the main features of the concentration profiles satisfactorily. The predicted peak concentrations were generally better than those obtained using the unit peak attenuation method, and the variant with mass conservation assured generally performed better than the one that did not account for mass conservation. Predictions of peak travel time were also better using the neural networks than the unit peak attenuation method. Including more data in the neural network training set clearly improved the prediction of the shapes of the concentration profiles; similar improvements in peak concentration were less significant, and the travel time prediction appeared to be largely unaffected.
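
The mass-conservation adjustment mentioned above can be sketched in a few lines: the predicted profile is rescaled so that its zeroth temporal moment, multiplied by the flow rate, equals the injected tracer mass, which fixes the predicted peak concentration by a single scale factor. The profile shape, units, and values below are placeholders, not data from the Murray Burn experiments, and the paper's exact adjustment may differ in detail.

```python
# Sketch of a mass-conservation adjustment for a predicted concentration-time profile.
import numpy as np

def enforce_mass_conservation(t, c_pred, injected_mass, flow_rate):
    """Rescale the predicted profile so that flow_rate * integral(c dt) equals the injected mass."""
    area = np.trapz(c_pred, t)                       # zeroth temporal moment of the prediction
    return c_pred * injected_mass / (flow_rate * area)

t = np.linspace(0.0, 600.0, 301)                                  # s
c_pred = 0.8 * np.exp(-0.5 * ((t - 180.0) / 40.0) ** 2)           # g/m^3, dummy predicted profile
c_adj = enforce_mass_conservation(t, c_pred, injected_mass=40.0,  # g of tracer (assumed)
                                  flow_rate=0.15)                 # m^3/s (assumed)
print("peak before/after adjustment:", c_pred.max(), c_adj.max())
```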


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Fereshteh Mataeimoghadam ◽  
M. A. Hakim Newton ◽  
Abdollah Dehzangi ◽  
Abdul Karim ◽  
B. Jayaram ◽  
...  

Abstract Protein structure prediction is a grand challenge. Prediction of protein structures via representations based on backbone dihedral angles has recently made significant progress, along with the ongoing surge of deep neural network (DNN) research in general. However, we observe that in protein backbone angle prediction research there is an overall trend to employ more and more complex neural networks and to feed more and more features to them. While more features might add more predictive power to a neural network, we argue that redundant features could rather clutter the scenario, and that more complex neural networks may then merely counterbalance the noise. From artificial intelligence and machine learning perspectives, problem representations and solution approaches mutually interact and thus affect performance. We also argue that comparatively simpler predictors can be reconstructed more easily than more complex ones. With these arguments in mind, we present a deep learning method named Simpler Angle Predictor (SAP) to train simpler DNN models that enhance protein backbone angle prediction. We then show empirically that SAP can significantly outperform existing state-of-the-art methods on well-known benchmark datasets: for some types of angles, the differences are 6–8 in terms of mean absolute error (MAE). The SAP program along with its data is available from the website https://gitlab.com/mahnewton/sap.
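
A simpler dense angle predictor of the kind argued for here can be sketched as follows; the feature dimension, layer widths, sin/cos output encoding, and dummy data are assumptions for illustration and do not reproduce SAP's actual inputs or architecture.

```python
# Minimal sketch of a simple dense backbone-angle predictor trained with an MAE criterion.
import torch
from torch import nn

n_features = 57                       # placeholder per-residue feature size (assumed)
model = nn.Sequential(
    nn.Linear(n_features, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 4),                # (sin phi, cos phi, sin psi, cos psi)
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()                 # mean absolute error, matching the MAE evaluation

def angles_from_sincos(out):
    """Recover phi and psi in degrees from the sin/cos outputs."""
    phi = torch.atan2(out[:, 0], out[:, 1]).rad2deg()
    psi = torch.atan2(out[:, 2], out[:, 3]).rad2deg()
    return phi, psi

# Training on dummy data standing in for real per-residue features and angle targets.
X = torch.randn(1024, n_features)
y = torch.rand(1024, 4) * 2 - 1
for epoch in range(50):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

with torch.no_grad():
    phi, psi = angles_from_sincos(model(X[:3]))
print(phi, psi)
```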


Author(s):  
V. N. Gridin ◽  
I. A. Evdokimov ◽  
B. R. Salem ◽  
V. I. Solodovnikov

The key stages, implementation features, and functioning principles of neural networks, including deep neural networks, are analyzed. The problems of choosing the number of hidden elements, selecting the internal topology, and setting the parameters are considered. It is shown that during training and validation it is possible to control the capacity of a neural network and to evaluate the qualitative characteristics of the constructed model. The automation of the construction process and the optimization of hyperparameters of neural network structures are considered, depending on the user's tasks and the available source data. A number of approaches based on the use of probabilistic programming, evolutionary algorithms, and recurrent neural networks are presented.
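
One of the simplest forms of automated topology selection is a random search over the number and width of hidden layers, scored on a validation set. The sketch below is a generic illustration of that idea, not the authors' system; the dataset, candidate widths, and search budget are assumptions.

```python
# Random search over hidden-layer topologies, scored by validation accuracy (sketch only).
import random
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

random.seed(0)
best_score, best_layers = 0.0, None
for _ in range(10):                                        # candidate topologies
    layers = tuple(random.choice([16, 32, 64, 128])
                   for _ in range(random.randint(1, 3)))   # 1-3 hidden layers of random width
    clf = MLPClassifier(hidden_layer_sizes=layers, max_iter=500, random_state=0)
    clf.fit(X_tr, y_tr)
    score = clf.score(X_val, y_val)                        # validation score controls capacity
    if score > best_score:
        best_score, best_layers = score, layers

print(f"best validation accuracy {best_score:.3f} with hidden layers {best_layers}")
```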


2001 ◽  
Vol 11 (06) ◽  
pp. 561-572 ◽  
Author(s):  
ROSELI A. FRANCELIN ROMERO ◽  
JANUSZ KACPRZYK ◽ 
FERNANDO GOMIDE

An artificial neural network with a two-layer feedback topology and generalized recurrent neurons is developed for solving nonlinear discrete dynamic optimization problems. A direct method to assign the weights of the neural network is presented. The method is based on Bellman's Optimality Principle and on the interchange of information which occurs during the synaptic chemical processing among neurons. The neural network-based algorithm is an advantageous approach for dynamic programming due to the inherent parallelism of neural networks; further, it reduces the severity of the computational problems that can occur in conventional methods. Some illustrative application examples, including shortest path and fuzzy decision making problems, are presented to show how this approach works.
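
The underlying recursion is Bellman's optimality principle, which the paper encodes directly in the weights of a two-layer feedback network. The direct value-iteration form on a small assumed shortest-path graph below is an illustrative stand-in, not the network formulation itself.

```python
# Bellman iteration V(i) = min_j [ c(i, j) + V(j) ] on a small shortest-path example.
import numpy as np

INF = np.inf
# cost[i, j] = arc cost from node i to node j (small assumed example graph)
cost = np.array([
    [0,   2,   5,   INF],
    [INF, 0,   1,   6  ],
    [INF, INF, 0,   2  ],
    [INF, INF, INF, 0  ],
])

n = cost.shape[0]
value = np.full(n, INF)
value[n - 1] = 0.0                       # cost-to-go of the destination node is zero

for _ in range(n - 1):                   # Bellman's optimality principle, applied n-1 times
    value = np.min(cost + value[None, :], axis=1)

print("shortest-path costs to node 3:", value)   # expected [5. 3. 2. 0.]
```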


2021 ◽  
Vol 36 (1) ◽  
pp. 623-628
Author(s):  
Bapatu Siva Kumar Reddy ◽  
P. Vishnu Vardhan

Aim: The study aims to recognize alphabetic characters using neural networks and a fuzzy classifier. Methods and materials: A neural network and a fuzzy classifier are compared on character recognition; the sample size for each classifier is 20. Character recognition was implemented in MATLAB R2018a. The neural network algorithm is compared with the fuzzy classifier to determine the accuracy level. Results: The performance of both the fuzzy classifier and the neural network is assessed by accuracy. The mean accuracy of the fuzzy classifier is 82 and that of the neural network is 77. The recognition rate (accuracy) with the data features is found to be 98.06%. The fuzzy classifier performs significantly better than the neural network in the recognition of characters (P = 0.002 < 0.005). Conclusion: The independent tests in this study show a higher accuracy level of alphabetic character recognition for the fuzzy classifier compared with the neural network; hence, the fuzzy classifier is significantly better than the neural network at character recognition.
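
The kind of independent-samples comparison reported above can be sketched as follows. The accuracy values below are simulated stand-ins generated for illustration only (they are not the study's data); only the structure of the test, two groups of 20 accuracy values compared with a t-test, mirrors the description.

```python
# Independent-samples comparison of two classifiers' per-run accuracies (illustrative data).
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
fuzzy_acc = rng.normal(loc=82.0, scale=3.0, size=20)    # simulated stand-in accuracies
neural_acc = rng.normal(loc=77.0, scale=3.0, size=20)   # simulated stand-in accuracies

t_stat, p_value = ttest_ind(fuzzy_acc, neural_acc)      # two-sided independent t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```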


2019 ◽  
Author(s):  
Daniel Cleather

Musculoskeletal models have been used to estimate the muscle and joint contact forces expressed during movement. One limitation of this approach, however, is that such models are computationally demanding, which restricts the possibility of using them for real-time feedback. One solution to this problem is to train a neural network to approximate the performance of the model and then to use the neural network to give real-time feedback. In this study, neural networks were trained to approximate the FreeBody musculoskeletal model for jumping and landing tasks. The neural networks approximated jumping better than landing, which was probably a result of the greater variability in the landing dataset used in this study. In addition, a neural network based on a reduced set of inputs was also trained to approximate the outputs of FreeBody during a landing task. These results demonstrate the feasibility of using neural networks to approximate the results of musculoskeletal models in order to provide real-time feedback. Moreover, these neural networks could be based upon a reduced set of kinematic variables taken from a 2-dimensional video record, making the implementation of mobile applications a possibility.
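
The surrogate idea can be sketched generically: a neural network is fitted to input/output pairs of a slow model so that predictions can be served in real time. The quadratic "expensive_model" below is a placeholder, not FreeBody, and the input dimension is an assumption.

```python
# Neural-network surrogate of a computationally demanding model (sketch only).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

def expensive_model(kinematics):
    """Stand-in for the musculoskeletal model: maps kinematic inputs to a force-like output."""
    return (kinematics ** 2).sum(axis=1) + 0.5 * kinematics[:, 0] * kinematics[:, 1]

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(5000, 8))      # e.g. 8 kinematic variables (assumed)
y = expensive_model(X)                          # offline, slow evaluations used for training

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
surrogate.fit(X_tr, y_tr)                       # the trained surrogate is cheap to evaluate
print("surrogate R^2 on held-out data:", surrogate.score(X_te, y_te))
```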

