object velocity
Recently Published Documents

TOTAL DOCUMENTS: 33 (five years: 2)
H-INDEX: 6 (five years: 0)

2020 · Vol 20 (11) · pp. 1464
Author(s): Bjoern Joerges, Laurence Harris

2018 · Vol 54 (14) · pp. 894-896
Author(s): Zhikun Liao, Dawei Lu, Jiemin Hu, Jun Zhang

2018 · Vol 37 (8) · pp. 867-889
Author(s): María-Teresa Lorente, Eduardo Owen, Luis Montano

This work presents a new technique for motion planning and navigation of differential-drive robots in dynamic environments. Static and dynamic objects are represented directly in the control space of the robot, where decisions about the best motion are made. A new model, the dynamic object velocity space (DOVS), is defined to represent the dynamism of the environment and to predict its future behavior. A formal definition of the model is provided, establishing the properties that characterize it, and its complexity is analyzed and compared with that of other methods. The model maps information about the future behavior of obstacles onto the robot control space, allowing near-time-optimal safe motions to be planned within the visibility space horizon rather than only for the current sampling period. Navigation strategies are developed based on the identification of situations in the model. The planned strategy is applied and updated at each sampling time, adapting to changes in the scenario. The technique is evaluated in randomly generated simulated scenarios, using metrics defined from safety and time-to-goal criteria, and in real-world experiments.
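The core idea of reasoning about dynamic obstacles directly in the robot's control space can be illustrated with a short sketch. The Python snippet below is only a rough approximation under stated assumptions (a sampled (v, w) grid, constant-velocity obstacle prediction, forward simulation over a fixed horizon); it is not the DOVS model itself, and every function name and parameter is hypothetical.

```python
import numpy as np

# Hedged sketch: NOT the authors' DOVS implementation, only a minimal
# illustration of mapping dynamic obstacles into the robot's control (v, w)
# space and picking a safe, goal-directed command. All names, parameters,
# and the forward-simulation safety check are assumptions.

def simulate_unicycle(v, w, horizon=3.0, dt=0.1):
    """Roll out a differential-drive (unicycle) robot from the origin,
    heading along +x, under a constant (v, w) command."""
    x = y = theta = 0.0
    states, t = [], 0.0
    while t <= horizon:
        states.append((t, x, y))
        x += v * np.cos(theta) * dt
        y += v * np.sin(theta) * dt
        theta += w * dt
        t += dt
    return states

def command_is_safe(v, w, obstacles, robot_radius=0.3, horizon=3.0, dt=0.1):
    """An obstacle is (px, py, vx, vy, radius) in the robot frame, assumed
    to move with constant velocity. The (v, w) command is unsafe if the two
    trajectories come closer than the sum of radii within the horizon."""
    for (t, x, y) in simulate_unicycle(v, w, horizon, dt):
        for (px, py, vx, vy, r) in obstacles:
            ox, oy = px + vx * t, py + vy * t
            if np.hypot(x - ox, y - oy) < robot_radius + r:
                return False
    return True

def best_command(goal, obstacles, v_max=1.0, w_max=1.5, n=21):
    """Sample the control space on a grid and keep the safe command whose
    rollout ends closest to the goal -- a crude stand-in for the
    near-time-optimal strategy selection described in the paper."""
    best, best_dist = None, np.inf
    for v in np.linspace(0.0, v_max, n):
        for w in np.linspace(-w_max, w_max, n):
            if not command_is_safe(v, w, obstacles):
                continue
            _, x, y = simulate_unicycle(v, w)[-1]
            d = np.hypot(goal[0] - x, goal[1] - y)
            if d < best_dist:
                best, best_dist = (v, w), d
    return best

if __name__ == "__main__":
    # One obstacle 3 m ahead and 1 m to the left, drifting toward the path.
    obstacles = [(3.0, 1.0, 0.0, -0.3, 0.4)]
    print(best_command(goal=(5.0, 0.0), obstacles=obstacles))
```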


2016 · Vol 127 (3) · pp. e58-e59
Author(s): O.C. Banea, J. Casanova-Molla, M. Morales, C. Cabib, R. Arca, ...

2010 · Vol 7 (9) · pp. 39-39
Author(s): M. Disch, K. De Valois

2010 · Vol 2010 · pp. 1-9
Author(s): Kenta Goto, Katsunari Shibata

To develop a robot that behaves flexibly in the real world, it is essential that it learn various necessary functions autonomously, without receiving significant information from a human in advance. Among such functions, this paper focuses on learning “prediction”, which has recently attracted attention from the viewpoint of autonomous learning. The authors point out that it is important to acquire through learning not only the way of predicting future information, but also the purposive extraction of the prediction target from sensor signals. They suggest that, through reinforcement learning using a recurrent neural network, both emerge purposively and simultaneously, without testing individually whether each piece of information is predictable. In a task where an agent receives a reward for catching a moving object that can become invisible, the agent learned to detect the necessary components of the object velocity before it disappeared, to relay that information among some hidden neurons, and finally to catch the object at an appropriate position and time, taking into account bounces off a wall after the object became invisible.
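To make this kind of setup concrete, here is a small sketch in the same spirit: a recurrent policy trained with REINFORCE on a toy 1-D "catch" task in which the object becomes invisible partway through, so the hidden state must carry its position and velocity (including bounces). This is an assumption-laden illustration using PyTorch, not the authors' architecture or task; every name and hyperparameter is illustrative.

```python
import torch
import torch.nn as nn

# Hedged sketch: a toy stand-in for the paper's setting, not the authors' code.
# A tiny recurrent policy watches a bouncing object on a 1-D track; after a few
# steps the object becomes invisible (observations go to zero), so the hidden
# state must carry position and velocity to "catch" it at the final step.
TRACK, T_VISIBLE, T_TOTAL = 8, 3, 7

def rollout_object():
    """Simulate the object: random start and direction, bouncing off walls.
    Returns one-hot observations (zeroed once invisible) and the final position."""
    pos = torch.randint(0, TRACK, (1,)).item()
    vel = 1 if torch.rand(1).item() < 0.5 else -1
    obs = torch.zeros(T_TOTAL, TRACK)
    for t in range(T_TOTAL):
        if t < T_VISIBLE:
            obs[t, pos] = 1.0            # object visible: one-hot position
        if t == T_TOTAL - 1:
            return obs, pos              # catch target: final position
        pos += vel
        if pos < 0 or pos >= TRACK:      # bounce off a wall
            vel = -vel
            pos += 2 * vel

class RecurrentCatcher(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(TRACK, hidden, batch_first=True)
        self.head = nn.Linear(hidden, TRACK)   # where to catch at the end

    def forward(self, obs):
        _, h = self.rnn(obs.unsqueeze(0))      # final hidden state
        return torch.distributions.Categorical(logits=self.head(h[-1]))

policy = RecurrentCatcher()
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)
baseline = 0.0

for episode in range(3000):
    obs, target = rollout_object()
    dist = policy(obs)
    action = dist.sample()
    reward = 1.0 if action.item() == target else 0.0
    baseline = 0.95 * baseline + 0.05 * reward                   # running baseline
    loss = -(dist.log_prob(action) * (reward - baseline)).mean() # REINFORCE
    opt.zero_grad()
    loss.backward()
    opt.step()
    if (episode + 1) % 500 == 0:
        print(f"episode {episode + 1}: running reward ~ {baseline:.2f}")
```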

