Evaluation of Field of View Width in Stereo-vision-Based Visual Homing

Robotica ◽  
2019 ◽  
Vol 38 (5) ◽  
pp. 787-803
Author(s):  
D. M. Lyons ◽  
B. Barriage ◽  
L. Del Signore

Summary: Visual homing is a local navigation technique used to direct a robot to a previously seen location by comparing the image of the original location with the current visual image. Prior work has shown that exploiting depth cues such as image scale or stereo-depth in homing leads to improved homing performance. While it is not unusual to use a panoramic field of view (FOV) camera in visual homing, it is unusual to have a panoramic FOV stereo-camera. So, while the availability of stereo-depth information may improve performance, the concomitant restricted FOV may be a detriment to performance, unless specialized stereo hardware is used. In this paper, we present an investigation of the effect on homing performance of varying the FOV width in a stereo-vision-based visual homing algorithm using a common stereo-camera. We have collected six stereo-vision homing databases: three indoor and three outdoor. Based on over 350,000 homing trials, we show that while a larger FOV yields performance improvements for larger homing offset angles, the relative improvement falls off with increasing FOV, and in fact decreases for the widest FOV tested. We conduct additional experiments to identify the cause of this fall-off in performance, which we term the 'blinder' effect, and which we predict should affect other correspondence-based visual homing algorithms.
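Correspondence-based homing of the kind evaluated here can be illustrated with a minimal bearing-only sketch in the spirit of the average-landmark-vector model (this is an illustrative simplification, not the paper's algorithm, which additionally uses stereo depth): each landmark matched between the goal snapshot and the current image contributes the difference between its unit bearing vector now and at the goal.

```python
import numpy as np

def home_vector(goal_bearings, current_bearings):
    """Approximate 2-D home direction from matched landmark bearings
    (radians). Each matched landmark contributes the difference between
    its unit bearing vector at the current pose and at the goal snapshot;
    the normalized sum approximates the direction back to the goal."""
    v = np.zeros(2)
    for b_goal, b_now in zip(goal_bearings, current_bearings):
        u_goal = np.array([np.cos(b_goal), np.sin(b_goal)])
        u_now = np.array([np.cos(b_now), np.sin(b_now)])
        v += u_now - u_goal
    n = np.linalg.norm(v)
    return v / n if n > 0 else v
```

With a narrow FOV, only landmarks inside the visible wedge enter the sum, which is one intuitive way to see why FOV width matters for homing accuracy.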

Robotica ◽  
2015 ◽  
Vol 34 (12) ◽  
pp. 2741-2758 ◽  
Author(s):  
Paramesh Nirmal ◽  
Damian M. Lyons

SUMMARY: Visual homing is a navigation method based on comparing a stored image of a goal location to the current image to determine how to navigate to the goal location. It is theorized that insects such as ants and bees employ visual homing techniques to return to their nest or hive, and, inspired by this, several researchers have developed elegant robot visual homing algorithms. Depth information, from visual scale or another modality such as laser ranging, can improve the quality of homing. While insects are not well equipped for stereovision, stereovision is an effective robot sensor. We describe the challenges involved in using stereovision-derived depth in visual homing and our proposed solutions. Our algorithm, Homing with Stereovision (HSV), utilizes a stereo camera mounted on a pan-tilt unit to build composite wide-field stereo images and estimate distance and orientation from the robot to the goal location. HSV is evaluated in a set of 200 indoor trials using two Pioneer 3-AT robots, showing that it effectively leverages stereo depth information when compared to a depth-from-scale approach.
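The stereo depth that HSV leverages comes from standard triangulation on a rectified stereo pair; a minimal sketch (the parameter values in the usage note are illustrative, not the Pioneer rig's):

```python
def stereo_depth(disparity_px, focal_px, baseline_m):
    """Depth from a rectified stereo pair: Z = f * B / d, where f is the
    focal length in pixels, B the baseline in metres, and d the disparity
    in pixels between the left and right images."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px
```

For example, a 64-pixel disparity with a 640-pixel focal length and a 12 cm baseline gives a depth of 1.2 m; depth resolution degrades quadratically with distance as disparity shrinks.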


2021 ◽  
Vol 33 (3) ◽  
pp. 604-609
Author(s):  
Daisuke Kondo

The teleoperation of construction machinery has been introduced at mines and disaster sites. However, the work efficiency of teleoperation is lower than that of onboard operation owing to limitations in the viewing angle and insufficient depth information. To solve these problems and realize effective teleoperation, the Komatsu MIRAI Construction Equipment Cooperative Research Center is developing a next-generation teleoperation cockpit. In this study, we develop a wide field-of-view display for teleoperation using a portable projection screen, together with a system that reproduces motion parallax, which is well suited to depth perception over the operating range of construction machinery.
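The motion-parallax cue such a display reproduces follows from simple pinhole geometry; a hedged sketch of the underlying relation, not the cockpit's actual rendering pipeline: for a viewer at distance D from a fixed screen and a scene point at depth Z, a lateral head movement d shifts the point's on-screen projection by d(1 - D/Z), so points on the screen plane do not move and very distant points move with the head.

```python
def parallax_shift(head_dx_m, screen_dist_m, point_depth_m):
    """On-screen lateral shift of a point's projection when the viewer's
    head moves by head_dx_m (pinhole geometry, screen fixed in the world).
    Points at the screen plane shift by zero; points at infinity shift by
    the full head displacement."""
    return head_dx_m * (1.0 - screen_dist_m / point_depth_m)
```

Reproducing this depth-dependent shift per pixel is what gives the operator a parallax depth cue that a static wide-angle image lacks.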


Robotica ◽  
2018 ◽  
Vol 36 (8) ◽  
pp. 1225-1243 ◽  
Author(s):  
Jose-Pablo Sanchez-Rodriguez ◽  
Alejandro Aceves-Lopez

SUMMARY: This paper presents an overview of the most recent vision-based multi-rotor micro unmanned aerial vehicles (MUAVs) intended for autonomous navigation using a stereoscopic camera. Manual drone operation is difficult because pilots need expertise to fly drones; they have a limited field of view, and unfortunate situations, such as loss of line of sight or collisions with objects such as wires and branches, can occur. Autonomous navigation is an even more difficult challenge than remote-control navigation because the drone must make decisions on its own in real time and simultaneously build a map of its surroundings if none is available. Moreover, MUAVs are limited in useful payload capacity and energy consumption, so a drone must be equipped with small, lightweight sensors. In addition, a drone requires a sufficiently powerful onboard computer so that it can understand its surroundings and navigate accordingly to achieve its goal safely. A stereoscopic camera is considered a suitable sensor because of its three-dimensional (3D) capabilities: a drone can perform vision-based navigation through object recognition and self-localise within a map if one is available; otherwise, autonomous navigation becomes a simultaneous localisation and mapping problem.


Sensors ◽  
2019 ◽  
Vol 19 (13) ◽  
pp. 3008 ◽  
Author(s):  
Zhe Liu ◽  
Zhaozong Meng ◽  
Nan Gao ◽  
Zonghua Zhang

Depth cameras play a vital role in three-dimensional (3D) shape reconstruction, machine vision, augmented/virtual reality and other visual information-related fields. However, a single depth camera cannot obtain complete information about an object by itself due to the limitation of its field of view. Multiple depth cameras can solve this problem by acquiring depth information from different viewpoints, but to do so they need to be calibrated so that the complete 3D information can be obtained accurately. Traditional chessboard-based planar targets are not well suited to calibrating the relative orientations between multiple depth cameras, because the coordinates of the different depth cameras need to be unified into a single coordinate system, while cameras arranged at a specific angle may share only a very small overlapping field of view. In this paper, we propose a 3D-target-based multiple depth camera calibration method. Each plane of the 3D target is used to calibrate an independent depth camera. All planes of the 3D target are unified into a single coordinate system, which means the feature points on the calibration planes are also in one unified coordinate system. Using this 3D target, multiple depth cameras can be calibrated simultaneously. We also propose a method of precise calibration using lidar; it is applicable not only to the 3D target designed for this paper but to any 3D calibration object consisting of planar chessboards, and it can significantly reduce the calibration error compared with traditional camera calibration methods. In addition, to reduce the influence of the depth camera's infrared transmitter and improve calibration accuracy, the calibration process of the depth camera is optimized. A series of calibration experiments were carried out, and the experimental results demonstrate the reliability and effectiveness of the proposed method.
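The coordinate unification this approach relies on reduces to composing rigid-body transforms: once each depth camera knows the pose of the shared 3D target in its own frame, the camera-to-camera extrinsic follows by chaining one pose with the inverse of the other. A minimal sketch with illustrative frame names (not the paper's implementation):

```python
import numpy as np

def invert_rigid(T):
    """Invert a 4x4 rigid-body transform [R t; 0 1] analytically:
    inverse is [R^T  -R^T t; 0 1]."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

def extrinsic_between(T_cam1_target, T_cam2_target):
    """Pose of camera 2 in camera 1's frame, given each camera's pose of
    the shared target: T_cam1_cam2 = T_cam1_target @ inv(T_cam2_target)."""
    return T_cam1_target @ invert_rigid(T_cam2_target)
```

Because every camera sees a different plane of the same 3D target, each per-camera pose is expressed in the target's single coordinate system, and all pairwise extrinsics follow by this composition without any overlapping field of view.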


2019 ◽  
Vol 16 (6) ◽  
pp. 172988141989351
Author(s):  
Xi Zhang ◽  
Yuanzhi Xu ◽  
Haichao Li ◽  
Lijing Zhu ◽  
Xin Wang ◽  
...  

To obtain high precision in stereo vision calibration, a large-size precise calibration target that can cover more than half of the field of view is vital. However, large-scale calibration targets are very difficult to fabricate. Based on the idea of error tracing, this article proposes a high-precision calibration method for vision systems with a large field of view, in which a virtual 3-D calibration target is constructed with a laser tracker. A virtual 3-D calibration target that covers the whole measurement space can be established flexibly, and the measurement precision of the vision system is traceable to the laser tracker. First, virtual 3-D targets were constructed by calculating rigid-body transformations with the unit-quaternion method. Then, a high-order-distortion camera model was taken into consideration, and the calibration parameters were solved with the Levenberg-Marquardt optimization algorithm. In the experiment, a binocular stereo vision system with a field of view of 4 × 3 × 2 m³ was built to verify the validity and precision of the proposed calibration method. The measured accuracy with the proposed method is greatly improved compared with the traditional plane calibration method. The method can be widely used in industrial applications, such as calibrating large-scale vision-based coordinate metrology, six-degrees-of-freedom pose tracking for dimensional measurement of workpieces, and robot geometric-accuracy detection and compensation.
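The unit-quaternion step can be sketched with Horn's closed-form absolute-orientation solution, which recovers the least-squares rotation between two matched 3-D point sets as the top eigenvector of a symmetric 4 × 4 matrix built from their cross-covariance. This is a generic sketch of the technique, not the authors' implementation, whose pipeline additionally refines a high-order-distortion camera model with Levenberg-Marquardt:

```python
import numpy as np

def rigid_transform_quat(P, Q):
    """Least-squares rigid transform (R, t) aligning point set P to Q
    (Q ~= R @ P + t) via Horn's unit-quaternion method."""
    P = np.asarray(P, float); Q = np.asarray(Q, float)
    pc, qc = P.mean(axis=0), Q.mean(axis=0)
    H = (P - pc).T @ (Q - qc)            # 3x3 cross-covariance
    # Symmetric 4x4 matrix whose top eigenvector is the optimal quaternion
    A = H - H.T
    delta = np.array([A[1, 2], A[2, 0], A[0, 1]])
    N = np.zeros((4, 4))
    N[0, 0] = np.trace(H)
    N[0, 1:] = delta
    N[1:, 0] = delta
    N[1:, 1:] = H + H.T - np.trace(H) * np.eye(3)
    w, V = np.linalg.eigh(N)
    q = V[:, -1]                          # eigenvector of largest eigenvalue
    w0, x, y, z = q                       # unit quaternion (w, x, y, z)
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w0*z),     2*(x*z + w0*y)],
        [2*(x*y + w0*z),     1 - 2*(x*x + z*z), 2*(y*z - w0*x)],
        [2*(x*z - w0*y),     2*(y*z + w0*x),     1 - 2*(x*x + y*y)]])
    t = qc - R @ pc
    return R, t
```

The closed form needs no initial guess, which is why methods like this are commonly used to seed a subsequent nonlinear refinement.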


2020 ◽  
Vol 20 (10) ◽  
pp. 5406-5414 ◽  
Author(s):  
Sunil Jacob ◽  
Varun G. Menon ◽  
Saira Joseph

Sensors ◽  
2018 ◽  
Vol 18 (10) ◽  
pp. 3355
Author(s):  
Chengtao Cai ◽  
Bing Fan ◽  
Xin Liang ◽  
Qidan Zhu

By combining the advantages of 360-degree field-of-view cameras and the high resolution of conventional cameras, a hybrid stereo vision system can be widely used in surveillance. Because the relative position of the two cameras is not constant over time, automatic rectification is highly desirable when adopting a hybrid stereo vision system for practical use. In this work, we provide a method for rectifying the dynamic hybrid stereo vision system automatically. A perspective projection model is proposed to reduce the computational complexity of hybrid stereoscopic 3D reconstruction. The rectification transformation is calculated by solving a nonlinear constrained optimization problem for a given set of corresponding point pairs. The experimental results demonstrate the accuracy and effectiveness of the proposed method.
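Rectification from corresponding point pairs builds on the epipolar relation between the two views. As a generic, hedged illustration of how such a relation can be estimated from correspondences (the paper solves a nonlinear constrained optimization tailored to the hybrid rig; the normalized 8-point algorithm below is the standard linear method for conventional perspective cameras):

```python
import numpy as np

def eight_point(x1, x2):
    """Estimate the fundamental matrix F (x2^T F x1 = 0) from N >= 8
    correspondences using the normalized 8-point algorithm.
    x1, x2 are Nx2 arrays of pixel coordinates."""
    def normalize(x):
        # translate points to their centroid, scale mean distance to sqrt(2)
        c = x.mean(axis=0)
        s = np.sqrt(2) / np.mean(np.linalg.norm(x - c, axis=1))
        T = np.array([[s, 0, -s * c[0]],
                      [0, s, -s * c[1]],
                      [0, 0, 1.0]])
        xh = np.hstack([x, np.ones((len(x), 1))]) @ T.T
        return xh, T

    a, T1 = normalize(np.asarray(x1, float))
    b, T2 = normalize(np.asarray(x2, float))
    # each correspondence contributes one row of the linear system A f = 0
    A = np.stack([np.kron(b[i], a[i]) for i in range(len(a))])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)
    # enforce rank 2, as epipolar geometry requires
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    F = T2.T @ F @ T1                    # undo the normalization
    return F / np.linalg.norm(F)
```

Once an epipolar relation is known, a rectifying transformation is one that maps corresponding epipolar lines to the same image rows, which is the role the paper's optimized rectification transformation plays for the hybrid system.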

