A feed-forward neural network learning the inverse kinetics of a soft cable-driven manipulator moving in three-dimensional space

Author(s): Michele Giorelli, Federico Renda, Gabriele Ferri, Cecilia Laschi

The solution of the inverse kinetics problem of soft manipulators is essential for generating paths in the task space to perform grasping operations. To address this issue, researchers have proposed different iterative methods based on the Jacobian matrix. Although these methods guarantee a good degree of accuracy, they suffer from singularities, slow convergence, parametric uncertainties and high computational cost. To overcome the intrinsic problems of iterative algorithms, we propose here a neural network learning of the inverse kinetics of a soft manipulator. To the best of our knowledge, this represents the first attempt in this direction. A preliminary study on the feasibility of the neural network solution is presented for a conical-shaped manipulator driven by cables. After training, a feed-forward neural network (FNN) is able to represent the relation between the manipulator tip position and the forces applied to the cables. The results show that a desired tip position can be reached quickly, with a relative average error of 0.73% of the total arm length.
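The following is a minimal sketch, not the authors' implementation, of the idea described in the abstract: a small feed-forward network trained to map a desired tip position to the cable forces that produce it. The forward model, force range, network size, and training setup are placeholder assumptions, since the abstract does not specify them; in the paper the training pairs come from the manipulator's model or experiments.

```python
# Minimal sketch (assumptions noted below): learn tip position -> cable forces with an FNN.
import torch
import torch.nn as nn

def synthetic_forward_model(forces: torch.Tensor) -> torch.Tensor:
    """Placeholder forward kinetics: cable forces (N x 3) -> tip positions (N x 3).
    An arbitrary smooth nonlinear map, used only to make the sketch runnable."""
    return torch.stack([
        torch.sin(forces[:, 0]) * 0.1 + 0.05 * forces[:, 1],
        torch.cos(forces[:, 1]) * 0.1 - 0.05 * forces[:, 2],
        0.2 - 0.01 * forces.sum(dim=1),
    ], dim=1)

# Training data: sample cable forces, compute tip positions, then learn the inverse map.
forces = torch.rand(5000, 3) * 5.0            # assumed 0-5 N force range
positions = synthetic_forward_model(forces)   # corresponding tip positions

# Feed-forward network: tip position (3) -> cable forces (3); layer sizes are assumptions.
fnn = nn.Sequential(
    nn.Linear(3, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 3),
)
opt = torch.optim.Adam(fnn.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(2000):
    opt.zero_grad()
    loss = loss_fn(fnn(positions), forces)
    loss.backward()
    opt.step()

# After training, a single forward pass yields cable forces for a desired tip position,
# avoiding the iterative Jacobian-based inverse computation.
desired_tip = positions[:1]
print(fnn(desired_tip))
```

Once trained, the inverse kinetics is a single forward pass, which is why the abstract emphasizes that a desired tip position can be reached quickly compared with iterative Jacobian-based schemes.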


Author(s):  
Polad Geidarov

Introduction: Metric recognition methods make it possible to determine, in advance and exactly, the structure of a feed-forward neural network, namely the number of neurons, layers, and connections, based on the initial parameters of the recognition problem. They also make it possible to calculate the synapse weights of the network neurons analytically from metric expressions. The setup procedure for these networks consists of a sequential analytical calculation of each synapse weight in the weight tables for the neurons of the zero and first layers, which yields a working feed-forward neural network at the initial stage without the use of training algorithms. Such networks can then be trained further with well-known learning algorithms, which generally speeds up their creation and training.

Purpose: To determine how much time the calculation of the weight values requires and, accordingly, how reasonable it is to precompute the weights of a feed-forward neural network.

Results: An algorithm is proposed and implemented for the automated calculation of all values of the synapse weight tables for the zero and first layers, as applied to the task of recognizing black-and-white monochrome symbol images. The proposed algorithm is implemented in the C++ Builder software environment. The possibility of optimizing the calculation of the synapse weights in order to accelerate the entire algorithm is considered. The time spent on calculating these weights is estimated for different configurations of neural networks based on metric recognition methods. Examples of creating and calculating synapse weight tables with the considered algorithm are given: the analytical calculation of the weights of a neural network takes only seconds or minutes, which is in no way comparable to the time required to train a neural network.

Practical relevance: Analytical calculation of the weights of a neural network can significantly accelerate the process of creating and training a feed-forward neural network. Based on the proposed algorithm, an algorithm for calculating three-dimensional weight tables for more complex images, either black-and-white, grayscale, or color ones, can be implemented.
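The abstract does not give the metric expressions themselves, so the sketch below only illustrates the general idea of filling a synapse weight table analytically and measuring how long that takes. Here each hypothetical zero-layer neuron separates a pair of reference symbol images, with its weights derived directly from those images; the image size, number of classes, and weight formula are all assumptions, not the author's method.

```python
# Illustrative sketch: fill a zero-layer synapse weight table analytically and time it.
import time
import numpy as np

H, W = 32, 32                 # assumed size of the black-and-white symbol images
n_classes = 26                # assumed number of symbols to recognize

# Reference images, one binary template per symbol (random placeholders here).
rng = np.random.default_rng(0)
templates = rng.integers(0, 2, size=(n_classes, H * W)).astype(np.float64)

start = time.perf_counter()

# One hypothetical zero-layer neuron per ordered pair of classes; its weights and bias
# come straight from the two templates (an assumed metric expression: the hyperplane
# that separates the two templates by distance).
pairs = [(i, j) for i in range(n_classes) for j in range(n_classes) if i != j]
weights = np.empty((len(pairs), H * W))
biases = np.empty(len(pairs))
for k, (i, j) in enumerate(pairs):
    weights[k] = templates[i] - templates[j]
    biases[k] = 0.5 * (templates[j] @ templates[j] - templates[i] @ templates[i])

elapsed = time.perf_counter() - start
print(f"Filled {weights.size} synapse weights analytically in {elapsed:.4f} s")
```

Even for thousands of synapses, a direct table fill of this kind completes in well under a second on ordinary hardware, which is consistent with the abstract's point that analytical weight calculation takes negligible time compared with iterative training.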


2015, Vol. 10 (3), pp. 035006. Author(s): M. Giorelli, F. Renda, M. Calisti, A. Arienti, G. Ferri, et al.
