A Real-Time Panoramic Vision System for Autonomous Navigation

Author(s):
S. Dasgupta ◽
A. Banerjee


Author(s):
Satoshi Hoshino ◽
Kyohei Niimura

Mobile robots equipped with camera sensors are required to perceive humans and their actions for safe autonomous navigation. For simultaneous human detection and action recognition, real-time performance of the robot vision is an important issue. In this paper, we propose a robot vision system in which the original images captured by a camera sensor are described by optical flow. These images are then used as inputs for human and action classification. For these image inputs, two classifiers based on convolutional neural networks are developed. Moreover, we describe a novel detector (a local search window) for clipping partial images around the target human from the original image. Since the camera sensor moves together with the robot, the camera movement influences the optical flow calculated in the image; we address this by modifying the optical flow to compensate for changes caused by the camera movement. Through experiments, we show that the robot vision system can detect humans and recognize their actions in real time. Furthermore, we show that a moving robot can achieve human detection and action recognition by means of the modified optical flow.
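The abstract does not spell out how the optical flow is modified for camera movement, but a minimal sketch of one common approach, subtracting the dominant (background) flow induced by ego-motion from a dense flow field, might look as follows; the function name and the median-based compensation are illustrative assumptions, not the authors' method:

```python
import cv2
import numpy as np

def ego_compensated_flow(prev_gray, curr_gray):
    """Dense optical flow with a simple ego-motion correction (assumption:
    the dominant background flow approximates the camera-induced motion)."""
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, curr_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    # Median over all pixels ~ background motion caused by the moving camera.
    camera_motion = np.median(flow.reshape(-1, 2), axis=0)
    # Residual flow is then dominated by independently moving objects (humans).
    return flow - camera_motion
```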


Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4708
Author(s):  
Xiaodong Guo ◽  
Zhoubo Wang ◽  
Wei Zhou ◽  
Zhenhai Zhang

This paper summarizes the research status, imaging models, system calibration, distortion correction, and panoramic expansion of panoramic vision systems, points out existing problems, and puts forward prospects for future research. Based on this survey, a single-viewpoint catadioptric panoramic vision system is designed. The system has the characteristics of fast acquisition, low manufacturing cost, fixed single-viewpoint imaging, integrated imaging, and automatic switching of the depth of field. On this basis, an improved nonlinear-optimization polynomial fitting method is proposed to calibrate the monocular HOVS, and the binocular HOVS is calibrated with ArUco markers. This method not only improves the robustness of the calibration results but also simplifies the calibration process. Finally, a real-time method for generating a panoramic map of a multi-function vehicle based on vcam is proposed.
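As a rough illustration of polynomial-fitting calibration with a robust nonlinear optimizer (the paper's exact cost function and camera model are not given in the abstract), one could fit the image-radius-to-ray-angle polynomial of an omnidirectional camera as below; the function name, degree, and soft-L1 loss are all assumptions:

```python
import numpy as np
from scipy.optimize import least_squares

def fit_projection_polynomial(rho, theta, degree=4):
    """Fit the image-radius-to-ray-angle polynomial of an omnidirectional
    camera by robust nonlinear least squares. rho, theta: arrays of
    measured radii and incidence angles from a calibration target."""
    def residuals(coeffs):
        return np.polyval(coeffs, rho) - theta

    init = np.polyfit(rho, theta, degree)                 # linear fit as a start
    fit = least_squares(residuals, init, loss="soft_l1")  # robust to outliers
    return fit.x
```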


2014 ◽  
Vol 668-669 ◽  
pp. 1098-1101
Author(s):  
Jian Wang ◽  
Zhen Hai Zhang ◽  
Ke Jie Li ◽  
Hai Yan Shao ◽  
Tao Xu ◽  
...  

Catadioptric panoramic vision systems have been widely used in many fields and play a particularly important role in environment perception for unmanned platforms. However, the resolution of such systems is not very high, usually less than 5 million pixels at present. Even when the resolution is high, the unwrapping and rectification of the panoramic video is carried out off-line. Furthermore, these systems are typically applied while stationary or moving slowly and steadily. This paper proposes an unwrapping and rectification method for a high-resolution catadioptric panoramic vision system used during non-stationary motion. It can segment the dynamic circular mark region accurately, obtain the coordinates of the center of the circular image in real time, and shorten the image-processing time; since the center coordinates and radius of the circular mark region are obtained, the image distortion caused by inaccurate center coordinates is reduced. During image rectification, after obtaining the radial distortion parameters (K1, K2, K3), the decentering distortion parameters (P1, P2), and a correction factor with no physical meaning, we use these to fit the rectification polynomial, so that the panoramic video can be rectified without distortion.
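The radial and decentering parameters named above follow the standard Brown-Conrady distortion model, whose forward mapping can be written down directly; rectification then inverts this mapping numerically. A minimal sketch (the paper's additional non-physical correction factor is omitted):

```python
import numpy as np

def brown_conrady_distort(x, y, K1, K2, K3, P1, P2):
    """Forward Brown-Conrady mapping for normalized image coordinates:
    radial terms (K1, K2, K3) plus decentering terms (P1, P2)."""
    r2 = x * x + y * y
    radial = 1.0 + K1 * r2 + K2 * r2**2 + K3 * r2**3
    x_d = x * radial + 2.0 * P1 * x * y + P2 * (r2 + 2.0 * x * x)
    y_d = y * radial + P1 * (r2 + 2.0 * y * y) + 2.0 * P2 * x * y
    return x_d, y_d
```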


Author(s):  
Satoshi Hoshino ◽  
Kyohei Niimura

Mobile robots equipped with camera sensors are required to perceive surrounding humans and their actions for safe autonomous navigation. In this work, moving humans are the target objects. For robot vision, real-time performance is an important requirement. We therefore propose a robot vision system in which the original images captured by a camera sensor are described by optical flow. These images are then used as inputs to a classifier. For classifying images into human and non-human classes, as well as the actions, we use a convolutional neural network (CNN) rather than hand-coding invariant features. Moreover, we present a local search window as a novel detector for clipping partial images around target objects in the original image. Through experiments, we show that the robot vision system is able to detect moving humans and recognize their actions in real time.
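The abstract does not give the exact construction of the local search window, but a plausible sketch is to clip an enlarged region around the previous detection and run the CNN only on that region; the parameter names and the margin heuristic below are assumptions:

```python
def local_search_window(image, center, size, margin=1.5):
    """Clip a partial image around the previous detection.

    center: last detected target position (x, y); size: its (w, h).
    The window is enlarged by `margin` so a moving target stays inside
    between consecutive frames; bounds are clamped to the image.
    """
    h, w = image.shape[:2]
    half_w = int(size[0] * margin / 2)
    half_h = int(size[1] * margin / 2)
    x0, y0 = max(0, int(center[0]) - half_w), max(0, int(center[1]) - half_h)
    x1, y1 = min(w, int(center[0]) + half_w), min(h, int(center[1]) + half_h)
    return image[y0:y1, x0:x1]
```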


Author(s):  
Christopher J. Hall ◽  
Daniel Morgan ◽  
Austin Jensen ◽  
Haiyang Chao ◽  
Calvin Coopmans ◽  
...  

This paper was originally prepared for and presented at the 2008 AUVSI Student UAS Competition; it presents the OSAM-UAV (Open-Source Autonomous Multiple Unmanned Aerial Vehicle) team's design of an unmanned aircraft system for remote target recognition missions. Our OSAM-UAVs are designed to be small, with strong airframes, and low-cost, using open-source autopilot hardware and flight control software. A robust EPP-based delta-wing airframe prevents damage during landings or even crashes. Autonomous navigation is achieved using the open-source Paparazzi autopilot, with special attention given to safety during operation. The system has been further enhanced by using the Xbow MNAV Inertial Measurement Unit (IMU) in place of Paparazzi's standard infrared (IR) sensors, for better georeferencing. An array of lightweight video cameras is embedded in the airframe and streams video to the ground control station through wireless transmitters in real time. The ground control system includes a computer vision system that processes and georeferences images in real time for target recognition. Experimental results show successful autonomous waypoint navigation and real-time image processing.
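As an illustration of the georeferencing step (the OSAM pipeline itself is not detailed in the abstract), a simplified flat-ground model projects a pixel through the IMU/GPS-derived camera pose onto the plane z = 0; the function and variable names are hypothetical:

```python
import numpy as np

def georeference_pixel(pixel, K, R_cam_to_world, cam_pos):
    """Intersect the viewing ray of `pixel` with flat ground (z = 0).

    K: 3x3 camera intrinsics; R_cam_to_world: camera attitude from the
    IMU solution; cam_pos: camera position (x, y, altitude) from GPS.
    Assumes the ray points toward the ground (ray_world[2] < 0).
    """
    ray_cam = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    ray_world = R_cam_to_world @ ray_cam
    t = -cam_pos[2] / ray_world[2]   # scale factor down to z = 0
    return cam_pos + t * ray_world   # world coordinates of the target
```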


Author(s):  
Giuseppe Placidi ◽  
Danilo Avola ◽  
Luigi Cinque ◽  
Matteo Polsinelli ◽  
Eleni Theodoridou ◽  
...  

Virtual Glove (VG) is a low-cost computer vision system that utilizes two orthogonal LEAP Motion sensors to provide detailed 4D hand tracking in real time. VG can find many applications in the field of human-system interaction, such as remote control of machines or tele-rehabilitation. An innovative and efficient data-integration strategy for VG, based on velocity calculation, is proposed for selecting data from one of the LEAPs at each time instant. The position of each joint of the hand model, when occluded from a LEAP, is guessed and tends to flicker. Since VG uses two LEAP sensors, two spatial representations are available at each moment for each joint: the method consists of selecting the one with the lower velocity at each time instant. Choosing the smoother trajectory stabilizes VG and optimizes precision, reduces occlusions (parts of the hand or handled objects obscuring other hand parts) and/or, when both sensors see the same joint, reduces the number of outliers produced by hardware instabilities. The strategy is evaluated experimentally, in terms of the reduction of outliers with respect to a previously used data-selection strategy for VG, and the results are reported and discussed. In the future, an objective test set has to be designed and realized, with the help of external precise positioning equipment, to allow a quantitative and objective evaluation of the gain in precision and, possibly, of the intrinsic limitations of the proposed strategy. Moreover, advanced Artificial Intelligence-based (AI-based) real-time data-integration strategies, specific to VG, will be designed and tested on the resulting dataset.
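The velocity-based selection described above reduces, per joint and per time instant, to keeping the measurement that implies the smoother trajectory; a minimal sketch under that reading (variable names are illustrative):

```python
import numpy as np

def select_joint_position(prev_pos, p_leap_a, p_leap_b, dt):
    """Keep, for one joint, the LEAP measurement implying the lower
    velocity (i.e. the smoother trajectory) relative to the position
    selected at the previous time instant."""
    v_a = np.linalg.norm(p_leap_a - prev_pos) / dt
    v_b = np.linalg.norm(p_leap_b - prev_pos) / dt
    return p_leap_a if v_a <= v_b else p_leap_b
```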


2005 ◽  
Vol 56 (8-9) ◽  
pp. 831-842 ◽  
Author(s):  
Monica Carfagni ◽  
Rocco Furferi ◽  
Lapo Governi
