Robust ground plane region detection using multiple visual cues for obstacle avoidance of a mobile robot

Robotica ◽  
2014 ◽  
Vol 33 (2) ◽  
pp. 436-450 ◽  
Author(s):  
Chia-How Lin ◽  
Kai-Tai Song

SUMMARY: This paper presents a vision-based obstacle avoidance design using a monocular camera onboard a mobile robot. A novel image processing procedure is developed to estimate the distance between the robot and obstacles based on an inverse perspective transformation (IPT) of the image plane. A robust image processing solution is proposed to detect and segment a drivable ground area within the camera view. The proposed method integrates robust feature matching with adaptive color segmentation for plane estimation and tracking, to cope with variations in illumination and camera view. After IPT and ground region segmentation, distance measurements are obtained that are similar to those of a laser range finder, for use in mobile robot obstacle avoidance and navigation. The merit of this algorithm is that the mobile robot gains path-finding and obstacle avoidance capability using a single monocular camera. Practical experiments on a wheeled mobile robot show that the proposed imaging system successfully obtains distances of surrounding objects for reactive navigation in an indoor environment.
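Under the flat-ground assumption that IPT relies on, the distance to a pixel lying on the ground plane can be recovered from the camera geometry alone. A minimal sketch, not the paper's implementation; the pinhole model and all camera parameters below are hypothetical:

```python
import math

def ground_distance(v, cam_height, tilt, f, v0):
    """Distance along the ground to the point imaged at pixel row v,
    assuming a flat ground plane and a pinhole camera (hypothetical
    parameters, for illustration only).

    v          : pixel row (measured downward from the image top)
    cam_height : camera height above the ground (m)
    tilt       : camera tilt below the horizontal (rad)
    f          : focal length (pixels)
    v0         : principal-point row (pixels)
    """
    # Angle below the horizontal for this image row
    angle = tilt + math.atan2(v - v0, f)
    if angle <= 0:
        return float('inf')  # row lies at or above the horizon
    return cam_height / math.tan(angle)

# Example: camera 0.5 m high, tilted 20 deg down, f = 500 px, v0 = 240
d = ground_distance(400, 0.5, math.radians(20), 500.0, 240.0)
```

Rows nearer the bottom of the image map to shorter ground distances, which is how a single calibrated camera can mimic a range scan along the drivable region boundary.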

Author(s):  
Byunghoon Chung ◽  
Peter Knuepfer ◽  
Sooyong Lee

We propose a novel technique for acquiring effective information for obstacle avoidance in mobile robot navigation and for object detection in vision images. Instead of simply receiving data at a single point, we actively apply perturbations to the system and measure its response. By correlating the input and the output, we can formulate a correlation function from which useful information, such as the gradient, can be obtained. This algorithm is applied to obstacle avoidance in mobile robot navigation and to object detection in vision image processing.
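The perturbation-and-correlation idea can be illustrated in one dimension: dither the input sinusoidally, correlate the measured response with the dither, and the gradient falls out of the correlation. A simplified sketch under assumptions of our own (a noiseless scalar response f and a small dither amplitude), not the authors' exact formulation:

```python
import math

def estimate_gradient(f, x, amp=0.01, n=200):
    """Estimate df/dx at x by applying a sinusoidal perturbation and
    correlating it with the measured response (extremum-seeking style)."""
    acc = 0.0
    for k in range(n):
        d = amp * math.sin(2 * math.pi * k / n)   # input perturbation
        acc += f(x + d) * d                        # correlate output with input
    # Over one full period, the correlation ≈ (amp^2 * n / 2) * f'(x):
    # the constant and higher-order odd terms average out.
    return acc / (n * amp * amp / 2)

# Example: for f(x) = x^2 the gradient at x = 3 is 6
g = estimate_gradient(lambda x: x * x, 3.0)
```

The constant part of the response is uncorrelated with the zero-mean dither, so only the gradient term survives the correlation, which is what makes the approach robust without explicit differentiation.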


Author(s):  
Moustafa M. Kurdi

This paper introduces the design and development of QMRS (Quadcopter Mobile Robotic System). QMRS provides real-time obstacle avoidance for the Belarus-132N mobile robot in cooperation with a Phantom-4 quadcopter. QMRS combines GPS used by the mobile robot, vision and image processing systems on both the robot and the quadcopter, and an effective search algorithm embedded in the robot. The ability to navigate accurately is one of the major capabilities a mobile robot needs to execute a variety of jobs effectively, including manipulation, docking, and transportation. To achieve the desired navigation accuracy, mobile robots are typically equipped with on-board sensors to observe persistent features in the environment, estimate their pose from these observations, and adjust their motion accordingly. The quadcopter takes off from the mobile robot, surveys the terrain, and transmits the processed images to the terrestrial robot. The main objective of this paper is the full coordination between robot and quadcopter, achieved by designing efficient wireless communication over Wi-Fi. In addition, it describes the vision and image processing method used by both robot and quadcopter, which analyzes the path in real time and avoids obstacles based on the computational algorithm embedded in the robot. QMRS increases the efficiency and reliability of the whole system, especially in robot navigation, image processing, and obstacle avoidance, through the cooperation among the different parts of the system.


Author(s):  
E. L. Buhle ◽  
U. Aebi

CTEM brightfield images are formed by a combination of relatively high-resolution elastically scattered electrons together with unscattered and inelastically scattered electrons. In the case of electron spectroscopic images (ESI), the inelastically scattered electrons cause a loss of both contrast and spatial resolution in the image. For ESI imaging on the Zeiss EM902, the transmitted electrons are dispersed into their various energy components by passing them through a magnetic prism spectrometer; a slit is then placed in the image plane of the prism to select the electrons of a given energy loss for image formation. The purpose of this study was to compare CTEM images with ESI images of ordered protein arrays recorded on a Zeiss EM902. Digital image processing was employed to analyze the average unit cell morphologies of the two types of images.
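The unit-cell averaging step can be sketched for the simplest case of an axis-aligned lattice with a known cell size; real micrographs require lattice refinement and cross-correlation alignment first, so this is only an illustration:

```python
import numpy as np

def average_unit_cells(img, cell_h, cell_w):
    """Average the unit cells of an ordered 2-D array image.
    Assumes an axis-aligned lattice with known cell size (a simplified
    sketch of unit-cell averaging for ordered arrays; hypothetical
    preprocessing is assumed to have aligned the lattice)."""
    rows = img.shape[0] // cell_h
    cols = img.shape[1] // cell_w
    cells = (img[:rows * cell_h, :cols * cell_w]
             .reshape(rows, cell_h, cols, cell_w)
             .swapaxes(1, 2)
             .reshape(-1, cell_h, cell_w))
    return cells.mean(axis=0)   # noise-averaged unit-cell morphology

# Example: a noisy periodic test image built from a known motif
rng = np.random.default_rng(0)
motif = np.zeros((8, 8))
motif[2:6, 2:6] = 1.0
img = np.tile(motif, (10, 10)) + rng.normal(0, 0.5, (80, 80))
avg = average_unit_cells(img, 8, 8)
```

Averaging N unit cells reduces the pixel noise by roughly a factor of sqrt(N), which is why ordered arrays are so well suited to this kind of analysis.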


Author(s):  
Hannes Lichte

Generally, the electron object wave o(r) is modulated both in amplitude and phase. In the image plane of an ideal imaging system we would expect to find an image wave b(r) that is modulated in exactly the same way, i.e. b(r) = o(r). If, however, there are aberrations, the image wave instead reads b(r) = o(r) ∗ FT(WTF), i.e. the convolution of the object wave with the Fourier transform of the wave transfer function WTF. Taking into account chromatic aberration, illumination divergence and the wave aberration of the objective lens, one finds WTF(R) = Echrom(R)·Ediv(R)·exp(iX(R)). The envelope functions Echrom(R) and Ediv(R) damp the image wave, whereas the effect of the wave aberration X(R) is to disorder amplitude and phase according to the real and imaginary parts of exp(iX(R)), as is schematically sketched in fig. 1.

Since in ordinary electron microscopy only the amplitude of the image wave can be recorded, via the intensity of the image, the wave aberration has to be chosen such that the object component of interest (phase or amplitude) is directed into the image amplitude. Using an aberration-free objective lens, for X = 0 one sees the object amplitude, and for X = π/2 ("Zernike phase contrast") the object phase. For a real objective lens, however, the wave aberration is given by X(R) = 2π(0.25·Cs·λ³R⁴ + 0.5·Δz·λR²), where Cs is the coefficient of spherical aberration and Δz the defocus. Consequently, the transfer functions sin X(R) and cos X(R) depend strongly on R, such that the amplitude and phase of the image wave each represent only fragments of the object which, fortunately, supplement each other. However, recording only the amplitude gives rise to the fundamental problems restricting the resolution and interpretability of ordinary electron images:
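The wave aberration and the resulting transfer functions can be evaluated directly from the formula above. A small sketch; the example values (Cs, wavelength, defocus choice) are illustrative assumptions, not taken from the text:

```python
import math

def wave_aberration(R, Cs, dz, lam):
    """X(R) = 2*pi*(0.25*Cs*lam^3*R^4 + 0.5*dz*lam*R^2), with the
    spatial frequency R in cycles per unit length and all lengths in
    the same unit (nm here; values are illustrative)."""
    return 2 * math.pi * (0.25 * Cs * lam**3 * R**4 + 0.5 * dz * lam * R**2)

def transfer(R, Cs, dz, lam):
    """Phase-contrast (sin X) and amplitude-contrast (cos X) transfer
    at spatial frequency R."""
    X = wave_aberration(R, Cs, dz, lam)
    return math.sin(X), math.cos(X)

# Example: Cs = 1.2e6 nm (1.2 mm), lam = 0.0037 nm (roughly 100 kV),
# defocus set to an underfocus that partially balances the Cs term
lam = 0.0037
Cs = 1.2e6
dz = -math.sqrt(1.5 * Cs * lam)       # illustrative defocus choice
s, c = transfer(0.2, Cs, dz, lam)     # R = 0.2 cycles/nm
```

Scanning R shows the strong frequency dependence the text describes: sin X and cos X oscillate, so each records only part of the object information at any one defocus.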


2021 ◽  
Vol 18 (3) ◽  
pp. 172988142110264
Author(s):  
Jiqing Chen ◽  
Chenzhi Tan ◽  
Rongxian Mo ◽  
Hongdu Zhang ◽  
Ganwei Cai ◽  
...  

The A* algorithm has several shortcomings: for example, path planning visits many search nodes, and the computation time is long. This article proposes a three-neighbor-search A* algorithm combined with artificial potential fields to optimize the path planning problem of mobile robots. The algorithm integrates and improves a partial artificial potential field and the A* algorithm to address irregular obstacles in the forward direction. The artificial potential field guides the mobile robot to move forward quickly, while the three-neighbor-search A* algorithm performs accurate obstacle avoidance. The current pose vector of the mobile robot is constructed during obstacle avoidance, the search range is narrowed to at most three neighbors, and repeated searches are avoided. In the MATLAB environment, the improved algorithm is compared with the standard A* algorithm on grid maps with different obstacle ratios. The experimental results show that the proposed improved algorithm avoids concave obstacle traps and shortens the path length, thus reducing the search time and the number of search nodes: on average, the path length is shortened by 5.58%, the path search time by 77.05%, and the number of path nodes by 88.85%. These results show that the improved A* algorithm is effective and feasible and can provide optimal results.
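For reference, the baseline being improved can be sketched as a plain A* search on a 4-connected occupancy grid; this is the standard algorithm, not the three-neighbor variant or the potential-field coupling described in the abstract:

```python
import heapq
import itertools

def astar(grid, start, goal):
    """Baseline A* on a 4-connected occupancy grid (0 = free,
    1 = obstacle) with a Manhattan-distance heuristic."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    tie = itertools.count()                    # heap tie-breaker
    frontier = [(h(start), next(tie), start)]
    g = {start: 0}                             # best cost-to-come
    came = {start: None}
    closed = set()
    while frontier:
        _, _, node = heapq.heappop(frontier)
        if node in closed:
            continue
        closed.add(node)
        if node == goal:                       # reconstruct the path
            path = []
            while node is not None:
                path.append(node)
                node = came[node]
            return path[::-1]
        r, c = node
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nb
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g[node] + 1
                if ng < g.get(nb, float('inf')):
                    g[nb] = ng
                    came[nb] = node
                    heapq.heappush(frontier, (ng + h(nb), next(tie), nb))
    return None                                # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))   # must route around the blocked row
```

The expansion of every free neighbor at every step is exactly the cost the three-neighbor search targets: restricting expansion relative to the robot's current pose vector prunes nodes that a full-neighborhood search would revisit.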

