Mobile Robot Exploration in Indoor Environment Using Topological Structure with Invisible Barcode

ETRI Journal ◽ 
2007 ◽ 
Vol 29 (2) ◽ 
pp. 189-200
Author(s):  
Jinwook Huh ◽  
Woong Sik Chung ◽  
Sang Yep Nam ◽  
Wan Kyun Chung

Robotics ◽  
2020 ◽  
Vol 9 (2) ◽  
pp. 40
Author(s):  
Hirokazu Madokoro ◽  
Hanwool Woo ◽  
Stephanie Nix ◽  
Kazuhito Sato

This study was conducted to develop original benchmark datasets that simultaneously include indoor and outdoor visual features. Indoor scene images also capture outdoor visual features to a degree that varies greatly with time, weather, and season. We obtained time-series scene images using a wide field of view (FOV) camera mounted on a mobile robot moving along a 392-m route in an indoor environment surrounded by transparent glass walls and windows, in two directions and across three seasons. We propose a unified method for extracting, characterizing, and recognizing visual landmarks that is robust to human occlusion in a real environment in which robots coexist with people. Using this method, we conducted an evaluation experiment to recognize scenes divided into up to 64 zones at fixed intervals. The experimental results obtained with these datasets reveal the effects of meta-parameter optimization, the mapping characteristics of the category maps, and the recognition accuracy. Moreover, we visualized similarities between scene images using category maps and identified cluster boundaries obtained from the mapping weights.
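The zone-recognition step described above can be illustrated with a small self-organizing map, a close relative of the category maps used in the paper. The sketch below is a minimal stand-in rather than the authors' implementation: the grid size, learning schedule, feature dimensionality, and the choice of a plain SOM instead of the paper's category-map formulation are all assumptions made only for illustration.

# Minimal self-organizing map sketch standing in for the category maps
# described above: scene feature vectors are mapped onto a small 2-D grid
# so that visually similar zones fall on nearby units. Grid size, learning
# rate, and the use of a plain SOM (rather than the paper's exact category
# map formulation) are assumptions for illustration only.
import numpy as np

class TinySOM:
    def __init__(self, rows=8, cols=8, dim=128, seed=0):
        rng = np.random.default_rng(seed)
        # Grid coordinates of each map unit and its randomly initialized weight vector.
        self.grid = np.stack(
            np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1
        ).reshape(-1, 2)
        self.weights = rng.normal(size=(rows * cols, dim))

    def best_unit(self, x):
        # Index of the unit whose weight vector is closest to the input feature.
        return int(np.argmin(np.linalg.norm(self.weights - x, axis=1)))

    def train(self, features, epochs=20, lr0=0.5, radius0=3.0):
        for epoch in range(epochs):
            lr = lr0 * (1.0 - epoch / epochs)                      # decaying learning rate
            radius = max(1.0, radius0 * (1.0 - epoch / epochs))    # shrinking neighbourhood
            for x in features:
                bmu = self.best_unit(x)
                dist = np.linalg.norm(self.grid - self.grid[bmu], axis=1)
                h = np.exp(-(dist ** 2) / (2 * radius ** 2))       # neighbourhood function
                self.weights += lr * h[:, None] * (x - self.weights)

# Usage sketch: 'scene_features' would hold one descriptor vector per frame
# extracted from the time-series images; after training, frames from the
# same zone should map to nearby units.
# som = TinySOM(dim=scene_features.shape[1])
# som.train(scene_features)
# zone_unit = som.best_unit(scene_features[0])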


Author(s):  
Donato Di Paola ◽  
Annalisa Milella ◽  
Grazia Cicirelli ◽  
Arcangelo Distante

This paper presents a novel vision-based approach for indoor environment monitoring by a mobile robot. The proposed system is based on computer vision methods to match the current scene with a stored one, looking for new or removed objects. The matching process uses both keypoint features and colour information. A PCA-SIFT algorithm is employed for feature extraction and matching. Colour-based segmentation is performed separately, using HSV coding. A fuzzy logic inference system is applied to fuse information from both steps and decide whether a significant variation of the scene has occurred. Results from experimental tests demonstrate the feasibility of the proposed method in robot surveillance applications.
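A rough sketch of such a two-channel comparison is given below, assuming OpenCV is available. It substitutes standard SIFT for PCA-SIFT and a simple weighted score for the paper's fuzzy inference system; the weights and threshold are illustrative placeholders, not values from the original work.

# Minimal sketch of a keypoint + colour change check between a stored
# reference scene and the current camera frame. Standard SIFT stands in
# for PCA-SIFT, and a crude weighted score replaces the fuzzy inference
# step described in the paper; all thresholds are illustrative only.
import cv2
import numpy as np

def keypoint_similarity(ref_gray, cur_gray):
    # Fraction of reference keypoints that find a distinctive match in the current frame.
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(ref_gray, None)
    kp2, des2 = sift.detectAndCompute(cur_gray, None)
    if des1 is None or des2 is None or len(kp1) == 0:
        return 0.0
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)
    # Lowe's ratio test to keep only distinctive matches.
    good = [p[0] for p in matches if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    return len(good) / len(kp1)

def colour_similarity(ref_bgr, cur_bgr):
    # Correlation of hue-saturation histograms computed in HSV space.
    def hs_hist(img):
        hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
        return cv2.normalize(hist, hist).flatten()
    return cv2.compareHist(hs_hist(ref_bgr), hs_hist(cur_bgr), cv2.HISTCMP_CORREL)

def scene_changed(ref_bgr, cur_bgr, w_kp=0.6, w_col=0.4, threshold=0.5):
    # Fuse both cues; report a change when the combined similarity drops below threshold.
    s_kp = keypoint_similarity(cv2.cvtColor(ref_bgr, cv2.COLOR_BGR2GRAY),
                               cv2.cvtColor(cur_bgr, cv2.COLOR_BGR2GRAY))
    s_col = colour_similarity(ref_bgr, cur_bgr)
    return (w_kp * s_kp + w_col * s_col) < threshold

In the paper, the fused evidence feeds a fuzzy rule base rather than a fixed weighted threshold; the linear combination here is only a compact stand-in for that decision step.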

