Robust Visual Compass Using Hybrid Features for Indoor Environments

Electronics ◽  
2019 ◽  
Vol 8 (2) ◽  
pp. 220 ◽  
Author(s):  
Ruibin Guo ◽  
Keju Peng ◽  
Dongxiang Zhou ◽  
Yunhui Liu

Orientation estimation is a crucial part of robotics tasks such as motion control, autonomous navigation, and 3D mapping. In this paper, we propose a robust vision-based method to estimate a robot's drift-free orientation with RGB-D cameras. First, we detect and track hybrid features (i.e., planes, lines, and points) from color and depth images, which provide reliable constraints even in feature-poor environments with low texture or no consistent lines. Then, we construct a cost function based on these features and, by minimizing it, obtain an accurate rotation matrix for each captured frame with respect to its reference keyframe. Furthermore, we present a vanishing-direction estimation method to extract the Manhattan World (MW) axes; by aligning the current MW axes with the global MW axes, we refine the rotation matrix of each keyframe and achieve drift-free orientation. Experiments on public RGB-D datasets demonstrate the robustness and accuracy of the proposed algorithm for orientation estimation. In addition, we have applied the proposed visual compass to pose estimation, and evaluation on public sequences shows improved accuracy.
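The frame-to-keyframe rotation step can be illustrated with a standard closed-form solver. Below is a minimal sketch, not the authors' exact cost function: it aligns matched unit directions, such as plane normals and line directions, by solving Wahba's problem with an SVD. All function names and the sample data are hypothetical.

```python
import numpy as np

def estimate_rotation(dirs_ref, dirs_cur):
    """Closed-form rotation (Wahba's problem via SVD): find R minimizing
    sum ||R a_i - b_i||^2 over matched unit directions, e.g., plane
    normals and line directions tracked between frames."""
    A = np.asarray(dirs_cur, dtype=float)   # (N, 3) current-frame directions
    B = np.asarray(dirs_ref, dtype=float)   # (N, 3) keyframe directions
    U, _, Vt = np.linalg.svd(A.T @ B)       # SVD of the 3x3 correlation matrix
    # Enforce det(R) = +1 so the result is a proper rotation, not a reflection.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    return Vt.T @ D @ U.T

# Hypothetical usage: align observed Manhattan World axes to the global axes.
observed = np.array([[0.99, 0.05, 0.0],
                     [-0.05, 0.99, 0.0],
                     [0.0, 0.0, 1.0]])
R = estimate_rotation(np.eye(3), observed)  # R @ observed_i ~ global_i
```

The same solver can serve both stages the abstract describes: frame-to-keyframe alignment from tracked features, and keyframe refinement against the global MW axes.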

2021 ◽  
Vol 11 (4) ◽  
pp. 1953
Author(s):  
Francisco Martín ◽  
Fernando González ◽  
José Miguel Guerrero ◽  
Manuel Fernández ◽  
Jonatan Ginés

The perception and identification of visual stimuli from the environment is a fundamental capability of autonomous mobile robots. Current deep learning techniques make it possible to identify and segment objects of interest in an image. This paper presents a novel algorithm to segment an object's 3D space from a deep segmentation of an image taken by a 3D camera. The proposed approach solves the boundary-pixel problem that arises when segmented pixels are mapped directly to their correspondences in the point cloud. We validate our approach against baseline approaches on real images taken by a 3D camera, showing that our method outperforms them in accuracy and reliability. As an application of the proposed algorithm, we present a semantic mapping approach for mobile robots in indoor environments.
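To make the boundary-pixel problem concrete, here is a minimal sketch of the direct pixel-to-point mapping with a naive erosion-based mitigation; this is a stand-in baseline, not the paper's algorithm, and the function name, parameters, and erosion heuristic are hypothetical.

```python
import numpy as np

def mask_to_points(depth, mask, fx, fy, cx, cy, erode_px=2):
    """Back-project segmented pixels of a depth image into 3D. The direct
    mapping suffers at object boundaries, where depth mixes foreground and
    background; as a naive mitigation, shrink the mask by a few pixels."""
    m = mask.astype(bool)
    # Cheap 4-neighborhood erosion without external dependencies.
    for _ in range(erode_px):
        shrunk = m.copy()
        shrunk[1:, :] &= m[:-1, :]
        shrunk[:-1, :] &= m[1:, :]
        shrunk[:, 1:] &= m[:, :-1]
        shrunk[:, :-1] &= m[:, 1:]
        m = shrunk
    v, u = np.nonzero(m & (depth > 0))   # surviving pixel coordinates
    z = depth[v, u]
    x = (u - cx) * z / fx                # pinhole back-projection
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)   # (N, 3) points in the camera frame
```

Plain erosion discards valid interior pixels along with mixed ones, which is precisely the kind of accuracy loss a dedicated boundary treatment aims to avoid.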


2018 ◽  
Vol 2018 ◽  
pp. 1-9 ◽  
Author(s):  
Kanghyu Lee ◽  
David K. Han ◽  
Hanseok Ko

We propose a novel remote heart rate (HR) estimation method that analyzes video of facial images. Most previous methods have been demonstrated only in well-controlled indoor environments. In contrast, this paper proposes a practical video-analytic framework for actual driving conditions that extracts key HR-inducing features. When a car is being driven, effective and stable HR estimation is particularly challenging because of dynamic elements in the vehicle interior, such as rapid illumination changes, vibrations, and ambient lighting. To overcome these disturbances, the driver's face region is first detected and cropped to a region of interest (RoI). Second, the HR-related components are extracted from the mixed noisy components using ensemble empirical mode decomposition (EEMD). Finally, the extracted signal is analyzed in the frequency domain and smoothed with temporal filtering. To verify our approach, the proposed method is compared with recent prominent methods on a public HCI dataset. Bland-Altman analysis demonstrates that the proposed approach delivers superior performance under driving conditions.
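As a rough illustration of the EEMD step, the sketch below decomposes a noisy RoI color trace with the PyEMD package (an assumed dependency, installed as EMD-signal) and keeps the intrinsic mode function whose dominant spectral peak falls in a plausible heart-rate band. The band limits and the peak-selection heuristic are our simplifications, not the paper's exact temporal filtering.

```python
import numpy as np
from PyEMD import EEMD   # pip install EMD-signal (assumed dependency)

def estimate_hr_bpm(rgb_trace, fs, hr_band=(0.7, 3.0)):
    """Decompose a mean-RoI color trace into IMFs with EEMD, then keep
    the IMF with the strongest spectral peak inside the heart-rate band
    (0.7-3.0 Hz, i.e., 42-180 bpm) and report that peak in bpm."""
    imfs = EEMD().eemd(np.asarray(rgb_trace, dtype=float))
    freqs = np.fft.rfftfreq(len(rgb_trace), d=1.0 / fs)
    in_band = (freqs >= hr_band[0]) & (freqs <= hr_band[1])
    best_bpm, best_power = None, 0.0
    for imf in imfs:
        spectrum = np.abs(np.fft.rfft(imf)) * in_band  # zero out-of-band bins
        peak = np.argmax(spectrum)
        if spectrum[peak] > best_power:
            best_power = spectrum[peak]
            best_bpm = freqs[peak] * 60.0              # Hz -> beats per minute
    return best_bpm
```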


2013 ◽  
Vol 20 (4) ◽  
pp. 40-48 ◽  
Author(s):  
Shaojie Shen ◽  
Nathan Michael ◽  
Vijay Kumar

2021 ◽  
Vol 15 (03) ◽  
pp. 337-357
Author(s):  
Alexander Julian Golkowski ◽  
Marcus Handte ◽  
Peter Roch ◽  
Pedro J. Marrón

For many application areas, such as autonomous navigation, the ability to accurately perceive the environment is essential. For this purpose, a wide variety of well-researched sensor systems are available to detect obstacles or navigation targets. Stereo cameras have emerged as a particularly versatile sensing technology due to their low hardware cost and high fidelity, and much work has been done to integrate them into mobile robots. However, the existing literature focuses on the concepts and algorithms used to implement the desired robot functions on top of a given camera setup; the rationale for and impact of choosing that camera setup are usually neither discussed nor described. Thus, when designing the stereo camera system for a mobile robot, there is little general guidance beyond isolated setups that worked for a specific robot. To close this gap, this paper studies the impact of the physical setup of a stereo camera system in indoor environments. We present the results of an experimental analysis in which we use a fixed software setup to estimate the distance to an object while systematically changing the camera setup. We vary the three main parameters of the physical setup, namely the angle between the cameras, the distance between them, and the field of view, as well as a softer parameter, the image resolution. Based on the results, we derive several guidelines on how to choose these parameters for an application.
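The parameter trade-offs such experiments probe follow from rectified stereo geometry: depth is Z = fB/d, so baseline, field of view, and resolution all enter through the focal length f (in pixels) and baseline B. The sketch below uses this standard model with a first-order error estimate; the numbers in the usage example are hypothetical, not taken from the paper.

```python
import math

def focal_length_px(image_width_px, hfov_deg):
    """Focal length in pixels from horizontal field of view and resolution."""
    return image_width_px / (2.0 * math.tan(math.radians(hfov_deg) / 2.0))

def depth_from_disparity(disparity_px, baseline_m, focal_px):
    """Ideal rectified stereo: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def depth_error(z_m, baseline_m, focal_px, disparity_err_px=0.5):
    """First-order depth uncertainty, dZ = Z^2 / (f * B) * d_err: error grows
    quadratically with range, so wider baselines, narrower fields of view,
    and higher resolutions all improve far-range accuracy."""
    return (z_m ** 2) / (focal_px * baseline_m) * disparity_err_px

# Hypothetical setup: 1280 px wide images, 90 deg HFOV, 12 cm baseline.
f = focal_length_px(1280, 90.0)               # 640 px
print(depth_from_disparity(20.0, 0.12, f))    # 3.84 m for a 20 px disparity
print(depth_error(3.0, 0.12, f))              # ~5.9 cm expected error at 3 m
```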


2021 ◽  
Author(s):  
Kazuyuki Kaneda ◽  
Tatsuya Ooba ◽  
Hideki Shimada ◽  
Osamu Shiku ◽  
Yuji Teshima

Agriculture ◽  
2021 ◽  
Vol 11 (10) ◽  
pp. 954
Author(s):  
Abhijeet Ravankar ◽  
Ankit A. Ravankar ◽  
Arpit Rawankar ◽  
Yohei Hoshino

In recent years, autonomous robots have been used extensively to automate several vineyard tasks, and autonomous navigation is an indispensable component of such field robots. Autonomous and safe navigation has been well studied in indoor environments, and many algorithms have been proposed. However, unlike structured indoor environments, vineyards pose special challenges for robot navigation; in particular, safe navigation is crucial to avoid damaging the grapes. In this regard, we propose an algorithm that enables autonomous and safe robot navigation in vineyards. The proposed algorithm relies on data from a Lidar sensor and does not require a GPS. In addition, it can avoid dynamic obstacles in the vineyard while smoothing the robot's trajectories; the curvature of the trajectories can be controlled, keeping a safe distance from both the crop and the dynamic obstacles. We have tested the algorithm both in simulation and with robots in an actual vineyard. The results show that the robot can safely navigate the lanes of the vineyard and smoothly avoid dynamic obstacles such as moving people without stopping abruptly or executing sharp turns. The algorithm runs in real time and can easily be integrated into robots deployed in vineyards.
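As a toy illustration of Lidar-based lane keeping with bounded curvature, not the authors' algorithm, the sketch below estimates the lateral clearance to the two vine rows from a single 2D scan and issues a curvature-clamped steering command. The angle conventions, gains, and the assumption that both rows are visible in the scan are all hypothetical.

```python
import numpy as np

def lane_keeping_curvature(ranges, angles, max_curvature=0.5, gain=1.0):
    """Toy lane-centering step: take the nearest Lidar return on each side
    as the clearance to the vine row, steer toward the lane center, and
    clamp curvature so the trajectory stays smooth and avoids sharp turns.
    `ranges` are meters, `angles` radians (0 = straight ahead, CCW positive)."""
    left = ranges[(angles > np.pi / 4) & (angles < 3 * np.pi / 4)].min()
    right = ranges[(angles < -np.pi / 4) & (angles > -3 * np.pi / 4)].min()
    offset = (left - right) / 2.0   # > 0: more room on the left, steer left
    return float(np.clip(gain * offset, -max_curvature, max_curvature))

# Feed (v, v * curvature) to a differential-drive base as (linear, angular)
# velocity; the clamp on curvature doubles as the safe-distance smoother.
```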


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Guangbing Zhou ◽  
Jing Luo ◽  
Shugong Xu ◽  
Shunqing Zhang ◽  
Shige Meng ◽  
...  

Purpose
Indoor localization is a key tool for robot navigation in indoor environments. Traditionally, robot navigation has depended on a single sensor for autonomous localization. To enhance the navigation performance of mobile robots, this paper proposes a multiple data fusion (MDF) method for indoor environments.

Design/methodology/approach
Data from multiple sensors, i.e., an inertial measurement unit, an odometer, and a laser radar, are collected. An extended Kalman filter (EKF) then fuses these data, so the mobile robot can localize itself autonomously in complex indoor environments using the proposed EKF-based MDF method.

Findings
The proposed method has been verified experimentally in different indoor environments, i.e., an office, a passageway, and an exhibition hall. Experimental results show that the EKF-based MDF method achieves the best localization performance and robustness during navigation.

Originality/value
Indoor localization precision depends largely on the data collected from multiple sensors. The proposed method fuses these data soundly and can guide the mobile robot in autonomous navigation (AN) in indoor environments. The output of this paper can therefore be used for AN in complex and unknown indoor environments.
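A minimal sketch of one EKF cycle for such a fusion pipeline is shown below, assuming a planar unicycle model with odometry in the prediction step and an absolute pose fix (e.g., from laser scan matching) in the update step. The state layout and noise matrices are illustrative, not the paper's exact design.

```python
import numpy as np

def ekf_step(x, P, u, z, Q, R, dt):
    """One EKF cycle for a planar robot with state x = [px, py, theta]:
    predict with odometry u = (v, w), then correct with an absolute
    pose measurement z (e.g., from laser scan matching)."""
    v, w = u
    th = x[2]
    # Predict: unicycle motion model and its Jacobian F.
    x_pred = x + np.array([v * dt * np.cos(th), v * dt * np.sin(th), w * dt])
    F = np.array([[1.0, 0.0, -v * dt * np.sin(th)],
                  [0.0, 1.0,  v * dt * np.cos(th)],
                  [0.0, 0.0,  1.0]])
    P_pred = F @ P @ F.T + Q
    # Update: the measurement is the full pose, so H = I.
    H = np.eye(3)
    y = z - x_pred                                 # innovation
    y[2] = (y[2] + np.pi) % (2 * np.pi) - np.pi    # wrap heading error
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)            # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(3) - K @ H) @ P_pred
    return x_new, P_new
```

The IMU would typically enter through the prediction step (or a separate heading update); the relative weighting of sensors is carried entirely by the noise covariances Q and R.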

