Stable Robotic Grasping of Multiple Objects using Deep Neural Networks

Robotica ◽  
2020 ◽  
pp. 1-14
Author(s):  
Dongeon Kim ◽  
Ailing Li ◽  
Jangmyung Lee

SUMMARY Optimal grasping points for a robotic gripper were derived from object and hand geometry using deep neural networks (DNNs). The optimal grasping cost functions were derived from probability density functions of the normal distribution for each local cost function. Using the DNN, the optimum height and width of the robot hand were set for grasping objects, and the geometric and mass centre points of the objects were also considered in obtaining the optimum grasping positions for the robot fingers. The proposed algorithm was tested on 10 differently shaped objects and showed improved grip performance compared with conventional methods.
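The idea of combining local normal-distribution costs into one grasp score can be illustrated with a small numpy sketch; the candidate points, centres, and widths below are illustrative placeholders, not the paper's learned quantities:

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    """Probability density of N(mu, sigma^2) at x."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def grasp_score(points, geo_centre, mass_centre, sigma_geo=1.0, sigma_mass=1.0):
    """Score candidate grasp points: closeness to the geometric and mass
    centres, each local cost scored by a zero-mean normal pdf, combined
    by taking their product."""
    c_geo = normal_pdf(np.linalg.norm(points - geo_centre, axis=1), 0.0, sigma_geo)
    c_mass = normal_pdf(np.linalg.norm(points - mass_centre, axis=1), 0.0, sigma_mass)
    return c_geo * c_mass

# pick the candidate grasp point with the highest joint score
candidates = np.array([[0.0, 0.0], [0.5, 0.2], [2.0, 2.0]])
scores = grasp_score(candidates,
                     geo_centre=np.array([0.4, 0.1]),
                     mass_centre=np.array([0.6, 0.3]))
best = candidates[np.argmax(scores)]
```

The point nearest both centres wins; in the paper the widths and centres come from the DNN rather than being fixed by hand.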

Mathematics ◽  
2021 ◽  
Vol 9 (23) ◽  
pp. 3091
Author(s):  
Jelena Nikolić ◽  
Danijela Aleksić ◽  
Zoran Perić ◽  
Milan Dinčić

Motivated by the fact that uniform quantization is not suitable for signals with non-uniform probability density functions (pdfs), such as the Laplacian pdf, in this paper we have divided the support region of the quantizer into two disjoint regions and utilized the simplest uniform quantization with equal bit-rates within both regions. In particular, we assumed a narrow central granular region (CGR) covering the peak of the Laplacian pdf and a wider peripheral granular region (PGR) where the tail of the pdf predominates. We performed optimization of the widths of the CGR and PGR via distortion optimization per border-to-clipping-threshold scaling ratio, which resulted in an iterative formula enabling the parametrization of our piecewise uniform quantizer (PWUQ). For medium and high bit-rates, we demonstrated the advantage of our PWUQ over the uniform quantizer, paying special attention to the case where 99.99% of the signal amplitudes belong to the support region or the clipping region. We believe that the resulting formulas for PWUQ design and performance assessment are greatly beneficial in neural networks, where weights and activations are typically modelled by the Laplacian distribution, and where uniform quantization is commonly used to decrease the memory footprint.
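As a rough illustration of the two-region scheme (not the paper's optimized design), the CGR and PGR can each be quantized uniformly at the same bit-rate; the thresholds `t_c` and `t_max` below are arbitrary placeholders, whereas the paper derives them from the distortion optimization:

```python
import numpy as np

def uniform_quantize(x, lo, hi, levels):
    """Midrise uniform quantizer with `levels` cells on [lo, hi]."""
    step = (hi - lo) / levels
    idx = np.clip(np.floor((x - lo) / step), 0, levels - 1)
    return lo + (idx + 0.5) * step

def pwuq(x, t_c, t_max, bits_per_region=3):
    """Piecewise uniform quantizer sketch: a narrow central granular region
    [-t_c, t_c] and a peripheral region out to the clipping threshold t_max,
    each quantized uniformly with the same bit-rate."""
    levels = 2 ** bits_per_region
    y = np.empty_like(x)
    central = np.abs(x) <= t_c
    y[central] = uniform_quantize(x[central], -t_c, t_c, levels)
    sign = np.sign(x[~central])
    mag = np.clip(np.abs(x[~central]), t_c, t_max)   # amplitudes beyond t_max are clipped
    y[~central] = sign * uniform_quantize(mag, t_c, t_max, levels)
    return y

rng = np.random.default_rng(0)
x = rng.laplace(scale=1.0, size=10_000)   # Laplacian test signal
y = pwuq(x, t_c=0.5, t_max=6.0)
mse = np.mean((x - y) ** 2)
```

Because the CGR step is much finer than the PGR step, the dense peak of the Laplacian pdf is quantized with small error while the tail still stays inside the support region.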


2021 ◽  
pp. 1-35
Author(s):  
Aaron R. Voelker ◽  
Peter Blouw ◽  
Xuan Choo ◽  
Nicole Sandra-Yaffa Dumont ◽  
Terrence C. Stewart ◽  
...  

Abstract While neural networks are highly effective at learning task-relevant representations from data, they typically do not learn representations with the kind of symbolic structure that is hypothesized to support high-level cognitive processes, nor do they naturally model such structures within problem domains that are continuous in space and time. To fill these gaps, this work exploits a method for defining vector representations that bind discrete (symbol-like) entities to points in continuous topological spaces in order to simulate and predict the behavior of a range of dynamical systems. These vector representations are spatial semantic pointers (SSPs), and we demonstrate that they can (1) be used to model dynamical systems involving multiple objects represented in a symbol-like manner and (2) be integrated with deep neural networks to predict the future of physical trajectories. These results help unify what have traditionally appeared to be disparate approaches in machine learning.
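The core SSP operation, binding a symbol-like vector to a continuous coordinate, can be sketched with numpy FFTs: a coordinate x is encoded by raising a unitary base vector to the fractional power x in the Fourier domain, and circular convolution binds a symbol to that position. The dimensionality and vectors below are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)
d = 256  # vector dimensionality (even, illustrative)

def unitary_base(d, rng):
    """Phases of a real vector whose Fourier coefficients all have unit
    magnitude, so fractional powers are well defined."""
    ph = rng.uniform(-np.pi, np.pi, d // 2 + 1)
    ph[0] = 0.0
    ph[-1] = 0.0  # for even d, the Nyquist bin must be real
    return ph

def ssp(ph, x, d):
    """Spatial semantic pointer for coordinate x: the base vector raised
    to the (fractional) power x in the Fourier domain."""
    return np.fft.irfft(np.exp(1j * ph * x), n=d)

def bind(a, b):
    """Circular convolution, used to bind a symbol vector to a position."""
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=len(a))

ph = unitary_base(d, rng)
obj = rng.normal(0.0, 1.0 / np.sqrt(d), d)   # random symbol-like vector
mem = bind(obj, ssp(ph, 2.5, d))             # "obj at position 2.5"

# query: unbind with the inverse position vector and compare to obj
recovered = bind(mem, ssp(ph, -2.5, d))
sim = recovered @ obj / (np.linalg.norm(recovered) * np.linalg.norm(obj))
```

Since ssp(ph, x) convolved with ssp(ph, -x) is the identity vector, the unbinding recovers the symbol almost exactly; dynamics over such representations are what the deep networks in the paper learn to predict.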


2005 ◽  
Vol 17 (2) ◽  
pp. 331-334 ◽  
Author(s):  
Jinwen Ma ◽  
Zhiyong Liu ◽  
Lei Xu

The one-bit-matching conjecture for independent component analysis (ICA) has been widely believed in the ICA community. Theoretically, it has been proved that, under the assumption of zero skewness for the model probability density functions, the global maximum of a cost function derived from the typical objective function of the ICA problem under the one-bit-matching condition corresponds to a feasible solution of the ICA problem. In this note, we further prove that all local maxima of the cost function correspond to feasible solutions of the ICA problem in the two-source case under the same assumption. That is, as long as the one-bit-matching condition is satisfied, the two-source ICA problem can be successfully solved by any local descent algorithm applied to the typical objective function, under the assumption of zero skewness for all the model probability density functions.
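The note itself is a proof, but the claim it supports can be exercised numerically: when the kurtosis signs of the model pdfs match those of the sources (here, a tanh score paired with super-Gaussian Laplacian sources), a plain local natural-gradient iteration on a two-source mixture separates the sources. The mixing matrix and step size below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000
S = rng.laplace(size=(2, n))               # two super-Gaussian (positive-kurtosis) sources
A = np.array([[1.0, 0.6], [0.4, 1.0]])     # mixing matrix (illustrative)
X = A @ S

# natural-gradient iteration on the ICA likelihood with a tanh score,
# which matches the positive kurtosis of the sources (one-bit-matching holds)
W = np.eye(2)
lr = 0.01
for _ in range(2000):
    Y = W @ X
    W += lr * (np.eye(2) - np.tanh(Y) @ Y.T / n) @ W

P = W @ A  # approaches a scaled permutation matrix when separation succeeds
```

Each row of `W @ A` ends up dominated by a single entry, i.e. each output channel recovers exactly one source up to scale and ordering, which is what "feasible solution" means in the abstract.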


Author(s):  
Kunio Takezawa

When data are found to be realizations of a specific distribution, constructing the probability density function based on that distribution may not lead to the best prediction. In this study, numerical simulations are conducted using data that follow a normal distribution, and we examine whether probability density functions with shapes different from the normal distribution can yield larger log-likelihoods than the normal distribution in light of future data. The results indicate that fitting realizations of the normal distribution to a different probability density function produces better results from the perspective of predictive ability. Similarly, a set of simulations using the exponential distribution shows that better predictions are obtained when the corresponding realizations are fitted to a probability density function slightly different from the exponential distribution. These observations demonstrate that even when the form of the probability density function that generates the data is known, the use of another form of probability density function may achieve more desirable results from the standpoint of prediction.
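The shape of such a simulation is easy to sketch: fit a normal density to a small training sample by maximum likelihood, then score both it and a deliberately different density on a large held-out sample. The widened-variance alternative below is an illustrative choice, not the specific alternative densities used in the study:

```python
import numpy as np

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 30)        # small sample from the true N(0, 1)
test = rng.normal(0.0, 1.0, 100_000)    # "future data" from the same distribution

def normal_loglik(x, mu, var):
    """Total log-likelihood of x under N(mu, var)."""
    return np.sum(-0.5 * np.log(2.0 * np.pi * var) - (x - mu) ** 2 / (2.0 * var))

mu_hat = train.mean()
var_mle = train.var()                           # MLE variance (divides by n)
var_wide = var_mle * len(train) / (len(train) - 2)  # a slightly wider alternative (illustrative)

# the MLE maximizes the training log-likelihood by construction ...
train_mle = normal_loglik(train, mu_hat, var_mle)
train_alt = normal_loglik(train, mu_hat, var_wide)

# ... but on future data the wider density can score higher
test_mle = normal_loglik(test, mu_hat, var_mle)
test_alt = normal_loglik(test, mu_hat, var_wide)
```

The training comparison always favours the MLE fit; whether the alternative wins on the test sample varies with the realization, which is why the study averages over many simulated datasets.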


2018 ◽  
Vol 77 (20) ◽  
pp. 27231-27267 ◽  
Author(s):  
Aldonso Becerra ◽  
J. Ismael de la Rosa ◽  
Efrén González ◽  
A. David Pedroza ◽  
N. Iracemi Escalante

2021 ◽  
Vol 4 (2) ◽  
pp. 101-116
Author(s):  
Okoli C.O. ◽  
Nwosu D.F. ◽  
Osuji G.A. ◽  
Nsiegbe N.A.

In this study, we considered various transformation problems for the left-truncated normal distribution recently announced by several researchers, and sought to establish a unified approach to such transformation problems for certain types of random variables and their associated probability density functions in a generalized setting. The results presented in this research unify, improve, and in some cases trivialize the results recently announced by these researchers in the literature, particularly for a random variable that follows a left-truncated normal distribution. Furthermore, we employed concepts from approximation theory to establish the existence of the optimal value y_max in the interval (σ_a, σ_b) (respectively (σ_p, σ_q)) corresponding to the so-called interval of normality estimated by these authors in the literature using Monte Carlo simulation.
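The left-truncated normal density underlying the entry can be written down directly: the normal density renormalized by its upper-tail mass beyond the truncation point. A minimal sketch, with illustrative parameter values:

```python
import math
import numpy as np

def left_truncated_normal_pdf(x, mu, sigma, a):
    """Density of N(mu, sigma^2) truncated to [a, inf):
    phi((x - mu)/sigma) / (sigma * (1 - Phi((a - mu)/sigma))) for x >= a, else 0."""
    z = (x - mu) / sigma
    z_a = (a - mu) / sigma
    phi = np.exp(-0.5 * z ** 2) / math.sqrt(2.0 * math.pi)
    tail = 0.5 * math.erfc(z_a / math.sqrt(2.0))   # survival function 1 - Phi(z_a)
    return np.where(x >= a, phi / (sigma * tail), 0.0)

# sanity check: the truncated density integrates to one over [a, inf)
xs = np.linspace(-1.0, 20.0, 200_001)
pdf = left_truncated_normal_pdf(xs, mu=1.0, sigma=2.0, a=-1.0)
dx = xs[1] - xs[0]
area = np.sum(0.5 * (pdf[1:] + pdf[:-1])) * dx    # trapezoidal rule
```

Transformations of a variable with this density rescale both the numerator and the normalizing tail mass, which is the structure the unified approach in the paper works with.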


Author(s):  
Tuan Hoang ◽  
Thanh-Toan Do ◽  
Tam V. Nguyen ◽  
Ngai-Man Cheung

This paper proposes two novel techniques for training deep convolutional neural networks with low bit-width weights and activations. First, to obtain low bit-width weights, most existing methods derive the quantized weights by performing quantization on the full-precision network weights. However, this approach results in a mismatch: gradient descent updates the full-precision weights, but not the quantized weights. To address this issue, we propose a novel method that enables direct updating of the quantized weights, with learnable quantization levels, to minimize the cost function using gradient descent. Second, to obtain low bit-width activations, existing works treat all channels equally. However, the activation quantizers can be biased toward a few channels with high variance. To address this issue, we propose a method that takes the quantization errors of individual channels into account, allowing us to learn activation quantizers that minimize the quantization errors in the majority of channels. Experimental results demonstrate that our proposed method achieves state-of-the-art performance on the image classification task, using AlexNet, ResNet, and MobileNetV2 architectures on the CIFAR-100 and ImageNet datasets.
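The idea of updating the quantization levels themselves by gradient steps can be sketched on a toy reconstruction cost; this numpy sketch is only an illustration of that idea (the paper trains the levels jointly with the network, which this does not do):

```python
import numpy as np

def quantize(w, levels):
    """Assign each weight to its nearest quantization level."""
    idx = np.argmin(np.abs(w[:, None] - levels[None, :]), axis=1)
    return levels[idx], idx

rng = np.random.default_rng(0)
w = rng.normal(0.0, 1.0, 256)            # full-precision weights (held fixed here)
levels = np.array([-1.0, 0.0, 1.0])      # learnable quantization levels (illustrative)

q0, _ = quantize(w, levels)
mse0 = np.mean((q0 - w) ** 2)            # reconstruction cost before training

lr = 0.1
for _ in range(100):
    q, idx = quantize(w, levels)
    grad = 2.0 * (q - w)                 # gradient of ||q - w||^2 w.r.t. the quantized outputs
    for k in range(len(levels)):
        mask = idx == k
        if mask.any():
            # each level receives the mean gradient of the weights assigned to it
            levels[k] -= lr * grad[mask].mean()

q_final, _ = quantize(w, levels)
mse1 = np.mean((q_final - w) ** 2)
```

Each step moves every level toward the centroid of its assigned weights, so the reconstruction cost drops; in the paper the same direct-update principle is driven by the task loss rather than this toy reconstruction error.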

