Omnidirectional Images
Recently Published Documents


TOTAL DOCUMENTS

209
(FIVE YEARS 61)

H-INDEX

16
(FIVE YEARS 2)

2021 ◽  
Author(s):  
Seif Eddine Guerbas ◽  
Nathan Crombez ◽  
Guillaume Caron ◽  
El Mustapha Mouaddib

2021 ◽  
pp. 1-14
Author(s):  
Asuto Taniguchi ◽  
Fumihiro Sasaki ◽  
Mototsugu Muroi ◽  
Ryota Yamashina

2021 ◽  
Vol 11 (16) ◽  
pp. 7521
Author(s):  
Mónica Ballesta ◽  
Luis Payá ◽  
Sergio Cebollada ◽  
Oscar Reinoso ◽  
Francisco Murcia

Understanding the environment is an essential ability for robots to be autonomous. In this sense, Convolutional Neural Networks (CNNs) can provide holistic descriptors of a scene, and these descriptors have proved to be robust in dynamic environments. The aim of this paper is to perform hierarchical localization of a mobile robot in an indoor environment by means of a CNN, using omnidirectional images as the input. The experiments include a classification study in which the CNN is trained so that the robot is able to identify the room where it is located. Additionally, a transfer learning technique transforms the original CNN into a regression CNN that estimates the coordinates of the robot's position within a specific room. Regarding classification, the room retrieval task is performed with considerable success. As for the regression stage, when it is combined with an approach based on splitting rooms, it also provides relatively accurate results.
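The coarse-to-fine idea in this abstract (first classify the room, then regress metric coordinates within it) can be sketched in a few lines. This is a minimal illustration, not the paper's actual pipeline: the CNN descriptors, room labels, and coordinates below are tiny synthetic placeholders, the "classifier" is a nearest-centroid rule, and the "regressor" is a k-NN average within the retrieved room.

```python
import numpy as np

# Hypothetical map database: one holistic descriptor (e.g., a CNN feature
# vector) per image, with its known room label and (x, y) coordinates.
# All values are synthetic and purely illustrative.
rng = np.random.default_rng(0)
rooms = np.array([0, 0, 0, 1, 1, 1])                     # room label per image
coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0],
                   [5.0, 5.0], [6.0, 5.0], [5.5, 6.0]])  # (x, y) per image
# Toy stand-in for CNN descriptors: coordinates plus a little noise.
descs = np.vstack([c + rng.normal(0, 0.05, 2) for c in coords])

def hierarchical_localize(query, k=2):
    """Coarse-to-fine localization: room retrieval, then metric regression."""
    # Stage 1 (classification): pick the room whose descriptor centroid
    # is closest to the query descriptor.
    centroids = np.array([descs[rooms == r].mean(axis=0)
                          for r in np.unique(rooms)])
    room = int(np.argmin(np.linalg.norm(centroids - query, axis=1)))
    # Stage 2 (regression): average the coordinates of the k nearest
    # map descriptors inside the retrieved room.
    idx = np.where(rooms == room)[0]
    d = np.linalg.norm(descs[idx] - query, axis=1)
    nearest = idx[np.argsort(d)[:k]]
    return room, coords[nearest].mean(axis=0)

room, xy = hierarchical_localize(np.array([0.9, 0.1]))
```

The two-stage split mirrors the abstract's structure: restricting the regression to one room keeps the fine estimate from being pulled toward visually similar images of other rooms.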


Author(s):  
Dandan Zhu ◽  
Yongqing Chen ◽  
Defang Zhao ◽  
Xiongkuo Min ◽  
Qiangqiang Zhou ◽  
...  

2021 ◽  
Author(s):  
Zoltan Kato ◽  
Gabor Nagy ◽  
Martin Humenberger ◽  
Gabriela Csurka

2021 ◽  
Author(s):  
Dandan Zhu ◽  
Yongqing Chen ◽  
Xiongkuo Min ◽  
Yucheng Zhu ◽  
Guokai Zhang ◽  
...  

Sensors ◽  
2021 ◽  
Vol 21 (10) ◽  
pp. 3327
Author(s):  
Vicente Román ◽  
Luis Payá ◽  
Adrián Peidró ◽  
Mónica Ballesta ◽  
Oscar Reinoso

Over the last few years, mobile robotics has experienced considerable development thanks to the wide variety of problems that can be solved with this technology. An autonomous mobile robot must be able to operate in a priori unknown environments, planning its trajectory and navigating to the required target points. To this end, it is crucial to solve the mapping and localization problems with accuracy and acceptable computational cost. The use of omnidirectional vision systems has emerged as a robust choice thanks to the large amount of information they can extract from the environment. The images must be processed to obtain relevant information that permits solving the mapping and localization problems robustly. The classical frameworks to address these problems are based on the extraction, description and tracking of local features or landmarks. More recently, however, a new family of methods has emerged as a robust alternative in mobile robotics: describing each image as a whole, which leads to conceptually simpler algorithms. While methods based on local features have been extensively studied and compared in the literature, those based on global appearance still merit a deeper study to uncover their performance. In this work, a comparative evaluation of six global-appearance description techniques in localization tasks is carried out, both in terms of accuracy and computational cost. Several sets of images captured in a real environment are used for this purpose, including typical phenomena such as changes in lighting conditions, visual aliasing, partial occlusions and noise.
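The global-appearance approach described here can be illustrated with a toy example: summarize each image in a single holistic vector (here, mean intensities over a coarse grid, a crude gist-like descriptor, not one of the six techniques the paper evaluates) and localize by nearest-neighbor search over a map of such descriptors. The images and poses below are synthetic placeholders.

```python
import numpy as np

def global_descriptor(img, grid=(4, 4)):
    """Holistic descriptor: mean intensity over a coarse grid.
    The whole image is summarized in one vector; no local features
    are detected, described, or tracked."""
    h, w = img.shape
    gh, gw = grid
    cells = img[:h - h % gh, :w - w % gw].reshape(gh, h // gh, gw, w // gw)
    return cells.mean(axis=(1, 3)).ravel()

# Toy map: synthetic grayscale images captured at known poses.
rng = np.random.default_rng(1)
poses = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0)]
map_imgs = [rng.uniform(0, 255, (32, 32)) for _ in poses]
map_descs = np.array([global_descriptor(im) for im in map_imgs])

def localize(img):
    """Return the map pose whose descriptor is closest to the query's."""
    d = np.linalg.norm(map_descs - global_descriptor(img), axis=1)
    return poses[int(np.argmin(d))]

# Revisit the second pose under mild noise (a stand-in for the lighting
# changes and image noise mentioned in the abstract).
query = map_imgs[1] + rng.normal(0, 5, (32, 32))
estimated_pose = localize(query)
```

This is the conceptual simplicity the abstract refers to: localization reduces to one descriptor computation plus a distance ranking, with accuracy and cost governed by the choice of descriptor.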

