Improved recurrent neural network-based manipulator control with remote center of motion constraints: Experimental results

2020 ◽  
Vol 131 ◽  
pp. 291-299 ◽  
Author(s):  
Hang Su ◽  
Yingbai Hu ◽  
Hamid Reza Karimi ◽  
Alois Knoll ◽  
Giancarlo Ferrigno ◽  
...  
Author(s):  
Zhan Li ◽  
Shuai Li

Abstract: Redundant manipulators need favorable redundancy resolution to obtain suitable control actions that guarantee accurate kinematic control. Among the many kinematic control applications, some specific tasks, such as minimally invasive manipulation/surgery, require the distal link of a manipulator to translate through a fixed point. Such a point, known as the remote center of motion (RCM), constrains the motion planning and kinematic control of the manipulator. The recurrent neural network (RNN), which possesses parallel processing ability, is a powerful alternative and has achieved success in conventional redundancy resolution and kinematic control under physical constraints such as joint limits. However, there are still few works on RNNs for redundancy resolution and kinematic control of manipulators that consider RCM constraints. In this paper, for the first time, an RNN-based approach with a simplified neural network architecture is proposed to solve the redundancy resolution problem with RCM constraints, based on a new and general dynamic optimization formulation that contains the RCM constraints. Theoretical results establish the convergence properties of the proposed simplified RNN for redundancy resolution of manipulators with RCM constraints. Simulation results further demonstrate the efficiency of the proposed method in end-effector path tracking control under RCM constraints on a redundant manipulator.
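To make the constrained resolution problem concrete, the following is a minimal sketch of the underlying kinematics, not the paper's RNN solver: for a hypothetical 3-link planar arm (link lengths, configuration, and the choice of the mid-distal-link point as the RCM are all illustrative), joint velocities are chosen to track a desired tip velocity while the RCM point is allowed no velocity normal to the distal link.

```python
import numpy as np

# Hypothetical 3-link planar arm; lengths and configuration are illustrative.
L_LINKS = [1.0, 0.8, 0.6]

def joint_positions(q):
    """Positions of base, joint 2, joint 3, and tip."""
    pts, a = [np.zeros(2)], 0.0
    for li, qi in zip(L_LINKS, q):
        a += qi
        pts.append(pts[-1] + li * np.array([np.cos(a), np.sin(a)]))
    return pts

def point_jacobian(q, p):
    """2x3 Jacobian of a point p carried by the distal link."""
    pts = joint_positions(q)
    J = np.zeros((2, 3))
    for i in range(3):
        r = p - pts[i]
        J[:, i] = [-r[1], r[0]]          # z-axis cross r for a revolute joint
    return J

q = np.array([0.3, 0.4, -0.2])
pts = joint_positions(q)
tip = pts[-1]
rcm = 0.5 * (pts[-2] + tip)              # RCM point: middle of the distal link
axis = (tip - pts[-2]) / np.linalg.norm(tip - pts[-2])
normal = np.array([-axis[1], axis[0]])   # direction lateral to the distal link

J_tip = point_jacobian(q, tip)
J_rcm = normal @ point_jacobian(q, rcm)  # 1x3 row: lateral RCM-point velocity

A = np.vstack([J_tip, J_rcm])            # stack tracking task + RCM constraint
b = np.array([0.05, 0.0, 0.0])           # desired tip velocity, zero RCM drift
qdot = np.linalg.solve(A, b)             # resolved joint velocities
```

The paper's contribution is solving this kind of problem online with a simplified RNN under a dynamic optimization formulation; the closed-form solve above only illustrates what a feasible resolution must satisfy.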


2019 ◽  
Vol 11 (12) ◽  
pp. 247
Author(s):  
Xin Zhou ◽  
Peixin Dong ◽  
Jianping Xing ◽  
Peijia Sun

Accurate prediction of bus arrival times is a challenging problem in public transportation. Previous studies have shown that incorporating more heterogeneous measurements improves prediction accuracy. So which other factors should be added to the prediction model? Traditional prediction methods mainly use the arrival time and the distance between stations, but do not make full use of dynamic factors such as passenger number, dwell time, and bus driving efficiency. We propose a novel approach, based on a Recurrent Neural Network (RNN), that takes full advantage of these dynamic factors. The experimental results indicate that a variety of prediction algorithms (such as the Support Vector Machine, Kalman filter, Multilayer Perceptron, and RNN) show significantly improved performance after using the dynamic factors. Further, we introduce an RNN with an attention mechanism to adaptively select the most relevant input factors. Experiments demonstrate that, with heterogeneous input factors, an RNN with an attention mechanism predicts more accurately than an RNN without one. The experimental results show the superior performance of our approach on a data set provided by the Jinan Public Transportation Corporation.
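A toy sketch of attention over heterogeneous input factors, not the paper's trained model: the previous hidden state scores each factor embedding, and a softmax turns the scores into weights for the next RNN input. The factor names, dimensions, and random values are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# One embedding per factor, e.g. distance, dwell time, passengers, efficiency.
factors = rng.normal(size=(4, 8))
h_prev  = rng.normal(size=8)        # previous hidden state acts as the query

scores  = factors @ h_prev                   # dot-product attention scores
weights = np.exp(scores - scores.max())
weights /= weights.sum()                     # softmax over the 4 factors
context = weights @ factors                  # weighted input for the next RNN step
```

The weights adapt per step, which is how such a model can emphasize, say, dwell time at a crowded stop and distance on an empty stretch.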


2020 ◽  
Vol 34 (05) ◽  
pp. 9402-9409
Author(s):  
Lingyong Yan ◽  
Xianpei Han ◽  
Ben He ◽  
Le Sun

Bootstrapping for entity set expansion (ESE) has long been modeled as a multi-step pipelined process. Such a paradigm, unfortunately, often suffers from two main challenges: 1) the entities are expanded in multiple separate steps, which tends to introduce noisy entities and results in the semantic drift problem; 2) it is hard to exploit high-order entity-pattern relations for entity set expansion. In this paper, we propose an end-to-end bootstrapping neural network for entity set expansion, named BootstrapNet, which models bootstrapping in an encoder-decoder architecture. In the encoding stage, a graph attention network is used to capture both the first-order and the high-order relations between entities and patterns and to encode useful information into their representations. In the decoding stage, the entities are sequentially expanded through a recurrent neural network that outputs an entity at each step, while its hidden state vectors, which represent the target category, are updated after each expansion step. Experimental results demonstrate a substantial improvement of our model over previous ESE approaches.
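A stripped-down sketch of the sequential expansion in the decoding stage, with toy stand-ins for the learned parts: at each step the decoder scores the remaining candidates against its hidden state, emits the best one, and folds it back into the state. The embeddings, the dot-product scoring, and the 0.7/0.3 state update are all illustrative, not BootstrapNet's.

```python
def expand(seeds, candidates, embed, steps=3):
    """Greedy sequential expansion: emit one entity per step, update the state."""
    # Initialize the category state as the mean of the seed embeddings.
    state = [sum(c) / len(seeds) for c in zip(*(embed[s] for s in seeds))]
    out, pool = [], list(candidates)
    for _ in range(min(steps, len(pool))):
        best = max(pool, key=lambda e: sum(a * b for a, b in zip(embed[e], state)))
        out.append(best)
        pool.remove(best)
        # Toy state update standing in for the RNN hidden-state transition.
        state = [0.7 * s + 0.3 * x for s, x in zip(state, embed[best])]
    return out

# Illustrative 2-d embeddings: cities cluster on the first axis.
embed = {
    "paris":  [1.0, 0.0],
    "london": [0.9, 0.1],
    "berlin": [0.8, 0.2],
    "apple":  [0.0, 1.0],
}
expanded = expand(["paris"], ["london", "apple", "berlin"], embed)
```

Because the state is updated after every emission, early expansions steer later ones, which is the property that lets an end-to-end decoder resist the semantic drift of pipelined bootstrapping.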


2018 ◽  
Vol 12 (04) ◽  
pp. 523-540
Author(s):  
Kento Masui ◽  
Akiyoshi Ochiai ◽  
Shintaro Yoshizawa ◽  
Hideki Nakayama

The task of visual relationship recognition (VRR) is to recognize multiple objects and their relationships in an image. A fundamental difficulty of this task is class-number scalability, since the number of possible relationships we need to consider grows combinatorially. Another difficulty is modeling how to avoid outputting semantically redundant relationships. To overcome these challenges, this paper proposes a novel architecture with a recurrent neural network (RNN) and a triplet unit (TU). The RNN allows our model to be optimized for outputting a sequence of relationships. By optimizing our model toward a semantically diverse relationship sequence, we increase the variety in the output relationships. At each step of the RNN, the TU enables the model to classify a relationship while achieving class-number scalability by decomposing the relationship into a subject-predicate-object (SPO) triplet. We evaluate our model on various datasets and compare the results to a baseline. These experimental results show our model's superior recall and precision with fewer predictions compared to the baseline, even as it produces greater variety in relationships.
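The scalability argument behind the SPO decomposition can be made concrete with a back-of-envelope count: three small classifier heads replace one joint softmax over all triplets. The category counts below are illustrative, not those of a specific dataset.

```python
# Hypothetical vocabulary sizes for subjects, predicates, and objects.
n_subjects, n_predicates, n_objects = 150, 50, 150

# One joint head would need an output per possible (S, P, O) triplet...
joint_classes = n_subjects * n_predicates * n_objects

# ...while the triplet unit needs only three separate small heads.
factored_outputs = n_subjects + n_predicates + n_objects
```

With these numbers the joint head has over a million outputs while the factored heads total a few hundred, which is what makes the TU scale as vocabularies grow.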


2020 ◽  
Vol 2020 ◽  
pp. 1-7
Author(s):  
Yan Chu ◽  
Xiao Yue ◽  
Lei Yu ◽  
Mikhailov Sergei ◽  
Zhengkui Wang

Automatically captioning images with proper descriptions has become an interesting and challenging problem. In this paper, we present a joint model, AICRL, which performs automatic image captioning based on ResNet50 and an LSTM with soft attention. AICRL consists of one encoder and one decoder. The encoder adopts ResNet50, a convolutional neural network, to create an extensive representation of the given image by embedding it into a fixed-length vector. The decoder is designed with an LSTM, a recurrent neural network, and a soft attention mechanism to selectively focus attention on certain parts of the image when predicting the next word. We have trained AICRL on the large MS COCO 2014 dataset to maximize the likelihood of the target description sentence given the training images, and evaluated it with metrics such as BLEU, METEOR, and CIDEr. Our experimental results indicate that AICRL is effective in generating captions for images.
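A shape-level sketch of the encoder-decoder wiring, not AICRL's trained weights: a pooled CNN-style feature initializes the recurrent state, and at each decoding step soft attention pools a spatial grid of region features into the context fed to the recurrent cell. The grid size, feature dimension, random features, and the plain tanh update standing in for the LSTM are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

regions = rng.normal(size=(49, 32))      # 7x7 spatial grid of region features
h = regions.mean(axis=0)                 # pooled global feature as initial state

for _ in range(3):                       # three toy decoding steps
    e = regions @ h                      # attention scores against the state
    a = np.exp(e - e.max()); a /= a.sum()  # soft attention over the 49 regions
    z = a @ regions                      # attention-weighted context vector
    h = np.tanh(0.5 * h + 0.5 * z)       # toy recurrent update (LSTM stand-in)
```

Because the weights `a` are recomputed from the evolving state, the decoder can look at different image regions for different words, which is the point of soft attention in captioning.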


Author(s):  
Sho Takase ◽  
Jun Suzuki ◽  
Masaaki Nagata

This paper proposes a novel Recurrent Neural Network (RNN) language model that takes advantage of character information. We focus on character n-grams, based on research in the field of word embedding construction (Wieting et al. 2016). Our proposed method constructs word embeddings from character n-gram embeddings and combines them with ordinary word embeddings. We demonstrate that the proposed method achieves the best perplexities on the language modeling datasets Penn Treebank, WikiText-2, and WikiText-103. Moreover, we conduct experiments on application tasks: machine translation and headline generation. The experimental results indicate that our proposed method also positively affects these tasks.
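A minimal sketch of the construction, assuming the usual boundary-marker convention for character n-grams; the hash-based embeddings and the averaging combination are toy stand-ins for the learned tables and the paper's actual composition function.

```python
import hashlib

DIM = 8

def ngrams(word, n=3):
    """Character n-grams of a word padded with boundary markers."""
    w = f"<{word}>"
    return [w[i:i + n] for i in range(len(w) - n + 1)]

def embed(token):
    """Deterministic toy embedding from a hash (stands in for a learned table)."""
    h = hashlib.md5(token.encode()).digest()
    return [(b - 128) / 128 for b in h[:DIM]]

def word_vec(word):
    """Combine character n-gram embeddings with the ordinary word embedding."""
    vecs = [embed(g) for g in ngrams(word)] + [embed(word)]
    return [sum(c) / len(vecs) for c in zip(*vecs)]
```

Sharing n-gram embeddings across the vocabulary is what lets such a model generalize to rare and unseen words that a pure word-level table cannot cover.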


1992 ◽  
Vol 03 (04) ◽  
pp. 395-404 ◽  
Author(s):  
REI-YAO WU ◽  
WEN-HSIANG TSAI

A single-layer recurrent neural network is proposed to perform thinning of binary images. This network iteratively removes the contour points of an object shape by template matching. The set of templates is specially designed for a one-pass parallel thinning algorithm, and the proposed neural network produces the same results as the algorithm. Neurons in the network perform a sigma-pi function to collect their inputs. To obtain this function, the templates used in the algorithm are transformed into equivalent Boolean expressions. After the neural network converges, a perfectly 8-connected skeleton is derived. Experimental results show the feasibility of the proposed approach.
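A sketch of how one sigma-pi "deletion" neuron evaluates a template, under the stated correspondence between templates and Boolean expressions: each required 1 contributes the literal x and each required 0 the literal (1-x) to a product term, and the neuron fires when the product is 1. The 3x3 template below is illustrative, not one of the paper's actual set.

```python
# Illustrative north-contour deletion template:
# 1 = object required, 0 = background required, None = don't care.
TEMPLATE = [
    [0,    0,    0],
    [None, 1,    None],
    [1,    1,    1],
]

def sigma_pi_match(patch, template=TEMPLATE):
    """Product of literals over a 3x3 binary neighborhood."""
    term = 1
    for prow, trow in zip(patch, template):
        for x, t in zip(prow, trow):
            if t == 1:
                term *= x            # literal x for a required object pixel
            elif t == 0:
                term *= 1 - x        # literal (1 - x) for a required background pixel
    return term

hit  = sigma_pi_match([[0, 0, 0], [1, 1, 0], [1, 1, 1]])   # template satisfied
miss = sigma_pi_match([[1, 0, 0], [1, 1, 0], [1, 1, 1]])   # violates a 0 cell
```

A full deletion neuron would sum such product terms over all templates (the "sigma" part) and remove the pixel when any term fires.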


Author(s):  
Qiannan Zhu ◽  
Xiaofei Zhou ◽  
Zeliang Song ◽  
Jianlong Tan ◽  
Li Guo

With the rapid explosion of news information, personalized news recommendation becomes an increasingly challenging problem. Many existing recommendation methods, which regard the recommendation procedure as a static process, have achieved good recommendation performance. However, they usually fail to handle the dynamic diversity of news and users' interests, or they ignore the importance of the sequential information in a user's clicking selections. In this paper, taking full advantage of the convolutional neural network (CNN), the recurrent neural network (RNN), and the attention mechanism, we propose a deep attention neural network, DAN, for news recommendation. Our DAN model uses attention-based parallel CNNs to aggregate a user's interest features and an attention-based RNN to capture richer hidden sequential features of the user's clicks, and it combines these features for news recommendation. We conduct experiments on real-world news data sets, and the results demonstrate the superiority and effectiveness of our proposed DAN model.
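A toy sketch of the two feature streams being combined, with attention pooling and a plain recurrence standing in for DAN's attention-based CNN and RNN branches; the dimensions, random click embeddings, and the scoring rule are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

clicks = rng.normal(size=(5, 8))     # embeddings of a user's 5 clicked articles
cand   = rng.normal(size=8)          # candidate news embedding

# "Interest" stream: attention-pooled features of the clicked articles.
w = np.exp(clicks @ cand); w /= w.sum()
interest = w @ clicks

# "Sequential" stream: a toy recurrence over the click order.
h = np.zeros(8)
for c in clicks:
    h = np.tanh(0.5 * h + 0.5 * c)

user = np.concatenate([interest, h])            # combined user representation
score = float(user @ np.concatenate([cand, cand]))  # toy relevance score
```

Keeping the two streams separate until the final combination is what lets the model weigh stable interests against the most recent clicking sequence.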


Author(s):  
Bowei Shan ◽  
Yong Fang

Abstract: This paper develops DRAC, an arithmetic coding algorithm based on a delta recurrent neural network for edge computing devices. Our algorithm is implemented on a Xilinx Zynq 7000 SoC board. We evaluate DRAC on four datasets and compare it with the state-of-the-art compressor DeepZip. The experimental results show that DRAC outperforms DeepZip, achieving a 5X speedup and a 20X saving in power consumption.
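A sketch of the "delta" idea that makes such networks attractive on low-power edge hardware, not DRAC's actual cell: a unit only recomputes when its input has changed by more than a threshold, otherwise it reuses its cached output. The threshold value and the leaky update are illustrative.

```python
THETA = 0.05  # illustrative change threshold

def delta_step(x, state):
    """One scalar delta unit: return (output, new_state, updated?)."""
    x_last, y = state
    if abs(x - x_last) <= THETA:
        return y, state, False           # change below threshold: skip compute
    y_new = 0.9 * y + 0.1 * x            # toy leaky update standing in for the cell
    return y_new, (x, y_new), True

skip = delta_step(0.02, (0.0, 0.5))      # tiny input change: no recomputation
fire = delta_step(1.00, (0.0, 0.5))      # large change: unit updates
```

Skipped updates translate directly into saved multiply-accumulates, which is where the speed and power gains on an FPGA-class device come from.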

