An augmented-reality-based real-time panoramic vision system for autonomous navigation

Author(s):  
S. Dasgupta ◽  
A. Banerjee
Author(s):  
Satoshi Hoshino ◽  
Kyohei Niimura

Mobile robots equipped with camera sensors are required to perceive humans and their actions for safe autonomous navigation. For simultaneous human detection and action recognition, the real-time performance of the robot vision is an important issue. In this paper, we propose a robot vision system in which the original images captured by a camera sensor are described by optical flow. These images are then used as inputs for human and action classification. For these image inputs, two classifiers based on convolutional neural networks are developed. Moreover, we describe a novel detector, a local search window, for clipping partial images around the target human from the original image. Since the camera sensor moves together with the robot, the camera movement influences the optical flow computed in the image; we address this by modifying the optical flow to compensate for changes caused by the camera movement. Through experiments, we show that the robot vision system can detect humans and recognize their actions in real time. Furthermore, we show that a moving robot can achieve human detection and action recognition by modifying the optical flow.
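A minimal sketch, assuming OpenCV, of the two ideas this abstract describes: describing frames by dense optical flow, and compensating for the camera-induced component of the flow. Here the ego-motion is approximated by the median flow over the whole frame, a simplification standing in for the paper's modification; all function names are illustrative, not the authors' code.

```python
import cv2
import numpy as np

def motion_compensated_flow(prev_gray, curr_gray):
    # Dense optical flow between consecutive grayscale frames (Farneback).
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, curr_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    # Approximate the camera-induced component by the median flow over the
    # frame and subtract it, leaving mostly object-induced motion.
    ego = np.median(flow.reshape(-1, 2), axis=0)
    return flow - ego

def flow_to_image(flow):
    # Encode flow direction as hue and magnitude as value: a common way to
    # turn an optical-flow field into an image suitable as CNN input.
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    hsv = np.zeros((*flow.shape[:2], 3), dtype=np.uint8)
    hsv[..., 0] = ang * 180 / np.pi / 2
    hsv[..., 1] = 255
    hsv[..., 2] = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX)
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
```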


Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4708
Author(s):  
Xiaodong Guo ◽  
Zhoubo Wang ◽  
Wei Zhou ◽  
Zhenhai Zhang

This paper summarizes the research status, imaging models, system calibration, distortion correction, and panoramic expansion of panoramic vision systems, points out existing problems, and outlines prospects for future research. Based on this survey, a single-viewpoint catadioptric panoramic vision system is designed. The system is characterized by fast acquisition, low manufacturing cost, fixed single-viewpoint imaging, integrated imaging, and automatic switching of the depth of field. Building on this system, an improved nonlinear-optimization polynomial fitting method is proposed to calibrate the monocular HOVS, while the binocular HOVS is calibrated with ArUco markers. This method not only improves the robustness of the calibration results but also simplifies the calibration process. Finally, a real-time method for generating a panoramic map of a multi-function vehicle based on vcam is proposed.
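A minimal sketch, not the authors' calibration pipeline, of what nonlinear-optimization polynomial fitting for an omnidirectional camera can look like. It assumes a Scaramuzza-style polynomial relating image radius to the viewing ray, and the observation arrays are synthetic placeholders for angles measured from detected calibration targets (e.g., ArUco marker corners).

```python
import numpy as np
from scipy.optimize import least_squares

def project_z(rho, coeffs):
    # z-component of the viewing ray as a polynomial in image radius rho:
    # f(rho) = a0 + a2*rho^2 + a3*rho^3 + a4*rho^4  (a1 = 0 by convention).
    a0, a2, a3, a4 = coeffs
    return a0 + a2 * rho**2 + a3 * rho**3 + a4 * rho**4

def residuals(coeffs, rho_obs, elev_obs):
    # Difference between the elevation angle predicted by the model and
    # the angle observed from calibration targets at known positions.
    pred = np.arctan2(project_z(rho_obs, coeffs), rho_obs)
    return pred - elev_obs

# rho_obs: radial pixel distances of detected marker corners;
# elev_obs: their known elevation angles. Both are placeholders here.
rho_obs = np.array([50.0, 120.0, 200.0, 310.0, 420.0])
elev_obs = np.array([1.2, 0.9, 0.55, 0.1, -0.35])
fit = least_squares(residuals, x0=[300.0, 0.0, 0.0, 0.0],
                    args=(rho_obs, elev_obs))
print("fitted polynomial coefficients:", fit.x)
```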


2014 ◽  
Vol 668-669 ◽  
pp. 1098-1101
Author(s):  
Jian Wang ◽  
Zhen Hai Zhang ◽  
Ke Jie Li ◽  
Hai Yan Shao ◽  
Tao Xu ◽  
...  

Catadioptric panoramic vision systems have been widely used in many fields and play a particularly important role in environment perception for unmanned platforms. However, the resolution of such systems is not very high, usually below 5 million pixels at present. Even when the resolution is high, the unwrapping and rectification of the panoramic video is typically carried out off-line, and the system is applied only while stationary or moving slowly. This paper proposes an unwrapping and rectification method for a high-resolution catadioptric panoramic vision system used during non-stationary motion. It segments the dynamic circular mark region accurately and obtains the coordinates of the center of the circular image in real time, shortening the image-processing time; since the center coordinates and radius of the circular mark region are obtained, the image distortion caused by inaccurate center coordinates can be reduced. During image rectification, after estimating the radial distortion parameters (K1, K2, K3), the decentering distortion parameters (P1, P2), and a correction factor with no physical meaning, these values are used to fit the rectification polynomial, so that the panoramic video can be rectified without distortion.
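A minimal sketch of the two steps the abstract describes, assuming OpenCV and that the circular mark's center and radius have already been estimated as the paper proposes. The rectification step uses OpenCV's built-in distortion model, whose coefficient ordering (K1, K2, P1, P2, K3) matches the radial and decentering parameters named above; it stands in for the authors' fitted rectification polynomial.

```python
import cv2
import numpy as np

def unwrap_panorama(frame, center, r_inner, r_outer, out_w=1440, out_h=360):
    # Polar-to-Cartesian unwrapping of the annular panoramic image into a
    # rectangular strip: columns sweep the angle, rows sweep the radius.
    theta = np.linspace(0, 2 * np.pi, out_w, endpoint=False)
    radius = np.linspace(r_inner, r_outer, out_h)
    t, r = np.meshgrid(theta, radius)
    map_x = (center[0] + r * np.cos(t)).astype(np.float32)
    map_y = (center[1] + r * np.sin(t)).astype(np.float32)
    return cv2.remap(frame, map_x, map_y, cv2.INTER_LINEAR)

def rectify(image, K, dist):
    # dist = (K1, K2, P1, P2, K3) in OpenCV's ordering; cv2.undistort
    # evaluates the radial/decentering distortion polynomial.
    return cv2.undistort(image, K, np.asarray(dist))
```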


Author(s):  
Satoshi Hoshino ◽  
Kyohei Niimura

Mobile robots equipped with camera sensors are required to perceive surrounding humans and their actions for safe autonomous navigation. In this work, moving humans are the target objects. For robot vision, real-time performance is an important requirement. We therefore propose a robot vision system in which the original images captured by a camera sensor are described by optical flow. These images are then used as inputs to a classifier. For classifying images into human and non-human classes, and for recognizing actions, we use convolutional neural networks (CNNs) rather than hand-coded invariant features. Moreover, we present a local search window as a novel detector for clipping partial images around target objects in an original image. Through experiments, we show that the robot vision system is able to detect moving humans and recognize their actions in real time.
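A minimal sketch of the local search window idea: once a target has been found, subsequent frames are searched only in a window around the last detection instead of over the whole image, which is what makes per-frame classification cheap enough for real time. The margin value is an assumption for illustration; the paper's CNN classifier would be applied to the returned crop.

```python
import numpy as np

def local_search_window(frame, last_box, margin=32):
    # last_box = (x, y, w, h) from the previous frame; clip a slightly
    # enlarged window around it, staying inside the image bounds.
    x, y, w, h = last_box
    H, W = frame.shape[:2]
    x0, y0 = max(0, x - margin), max(0, y - margin)
    x1, y1 = min(W, x + w + margin), min(H, y + h + margin)
    # Return the partial image and its offset, so a detection inside the
    # crop can be mapped back to original-image coordinates.
    return frame[y0:y1, x0:x1], (x0, y0)
```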


Author(s):  
Christopher J. Hall ◽  
Daniel Morgan ◽  
Austin Jensen ◽  
Haiyang Chao ◽  
Calvin Coopmans ◽  
...  

This paper, originally prepared for and presented at the 2008 AUVSI Student UAS Competition, describes the OSAM-UAV (Open-Source Autonomous Multiple Unmanned Aerial Vehicle) team's design of an unmanned aircraft system for remote target-recognition missions. The OSAM-UAVs are designed to be small, with strong airframes, and low-cost, using open-source autopilot hardware and flight-control software. A robust EPP-based delta-wing airframe prevents damage to the airframe during landings or even crashes. Autonomous navigation is achieved using the open-source Paparazzi autopilot, which gives special attention to safety during operation. The system is further enhanced by using the Xbow MNAV inertial measurement unit (IMU) in place of Paparazzi's standard infrared (IR) sensors, for better georeferencing. An array of lightweight video cameras embedded in the airframe streams video to the ground control station through wireless transmitters in real time. The ground control system includes a computer vision system that processes and georeferences images in real time for target recognition. Experimental results show successful autonomous waypoint navigation and real-time image processing.
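A minimal sketch, not the OSAM-UAV code, of flat-ground georeferencing: a pixel from a camera with known pose (from the IMU/GPS) is back-projected to a ray and intersected with the ground plane. The pinhole model, the z = 0 ground plane, and all names are assumptions for illustration.

```python
import numpy as np

def georeference(pixel, K, R_world_cam, cam_pos_world):
    # Back-project the pixel to a viewing ray in camera coordinates, then
    # rotate it into the world frame using the camera attitude.
    uv1 = np.array([pixel[0], pixel[1], 1.0])
    ray_cam = np.linalg.inv(K) @ uv1
    ray_world = R_world_cam @ ray_cam
    # Intersect the ray with the flat ground plane z = 0.
    t = -cam_pos_world[2] / ray_world[2]
    return cam_pos_world + t * ray_world
```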


2018 ◽  
Vol 1 (2) ◽  
pp. 17-23
Author(s):  
Takialddin Al Smadi

This survey outlines the use of computer vision in image and video processing across multidisciplinary applications, in both academia and industry. The scope of the paper covers theoretical and practical aspects of image and video processing as well as computer vision, from essential research to the evolution of applications. Various subjects of image processing and computer vision are demonstrated, spanning the evolution of mobile augmented reality (MAR) applications, augmented reality with 3D modeling and real-time depth imaging, and video processing algorithms for higher-depth video compression. In the field of mobile platforms, an automatic computer vision system for citrus fruit has been implemented, and Bayesian classification with boundary growing is used to detect text in video scenes. The paper also illustrates the usability of a handheld interactive method for portable projectors based on augmented reality. © 2018 JASET, International Scholars and Researchers Association


2015 ◽  
Vol 6 (2) ◽  
Author(s):  
Rujianto Eko Saputro ◽  
Dhanar Intan Surya Saputra

Learning media has always followed the development of available technology, from print technology to audio-visual media, computers, and combinations of print and computer technology. Today, learning media combining print and computer technology can be realized with Augmented Reality (AR). Augmented Reality (AR) is a technology used to bring the virtual world into the real world in real time. The human digestive organs consist of the mouth, the esophagus, the stomach, the small intestine, and the large intestine. Current learning media for introducing the human digestive organs are very monotonous, relying on pictures, books, or other projection aids. Using Augmented Reality, which can bring the virtual world into the real world, these objects can be turned into 3D objects, so that the learning method is no longer monotonous and children are motivated to learn more, such as the name of each organ and the description of each organ.
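A minimal sketch of the marker-based AR step underlying such an application: detect a printed marker and estimate its pose, which is the anchor at which a 3D organ model would be rendered. It assumes OpenCV with the aruco module (detector API of OpenCV 4.7+), a calibrated camera matrix, and a known marker size; none of this is taken from the paper.

```python
import cv2
import numpy as np

def marker_pose(frame, camera_matrix, dist_coeffs, marker_len=0.05):
    # Detect 4x4 ArUco markers in the camera frame.
    aruco = cv2.aruco
    detector = aruco.ArucoDetector(
        aruco.getPredefinedDictionary(aruco.DICT_4X4_50))
    corners, ids, _ = detector.detectMarkers(frame)
    if ids is None:
        return None
    # 3D corner coordinates of a square marker centered at the origin.
    s = marker_len / 2.0
    obj = np.array([[-s, s, 0], [s, s, 0], [s, -s, 0], [-s, -s, 0]],
                   dtype=np.float32)
    # solvePnP yields the marker pose used to place the rendered 3D organ.
    ok, rvec, tvec = cv2.solvePnP(obj, corners[0].reshape(4, 2),
                                  camera_matrix, dist_coeffs)
    return (rvec, tvec) if ok else None
```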

