Omnidirectional Vision
Recently Published Documents

TOTAL DOCUMENTS: 304 (five years: 21)
H-INDEX: 20 (five years: 1)

Author(s):  
Sergio Cebollada ◽  
Luis Payá ◽  
María Flores ◽  
Vicente Román ◽  
Adrián Peidró ◽  
...  

2021 ◽  
Vol 18 (6) ◽  
pp. 172988142110593
Author(s):  
Ivan Kholodilin ◽  
Yuan Li ◽  
Qinglin Wang ◽  
Paul David Bourke

Recent advancements in deep learning require large amounts of annotated training data covering diverse environmental conditions. Developing and testing navigation algorithms for mobile robots can therefore be expensive and time-consuming. Motivated by these problems, this article presents a photorealistic simulator for the computer vision community working with omnidirectional vision systems. Built with Unity, the simulator integrates sensors, mobile robots, and elements of the indoor environment, and allows one to generate synthetic photorealistic data sets with automatic ground truth annotations. With the aid of the proposed simulator, two practical applications are studied: extrinsic calibration of the vision system and three-dimensional reconstruction of the indoor environment. The proposed calibration and reconstruction techniques are simple, robust, and accurate, and they are evaluated experimentally with data generated by the simulator. The simulator and supporting materials are available online: http://www.ilabit.org
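The abstract does not specify the simulator's export format; as a hedged illustration only, the sketch below shows how one might consume such a synthetic data set, assuming a hypothetical layout of paired PNG/JSON files per frame (the file naming and annotation fields are assumptions, not the paper's actual interface).

```python
import json
from pathlib import Path

import cv2


def load_sample(root: str, index: int):
    """Load one synthetic omnidirectional image and its ground-truth record.

    Assumes a hypothetical dataset layout of paired files:
        000000.png  -- rendered omnidirectional image
        000000.json -- automatic annotations (e.g., camera pose, labels)
    """
    base = Path(root)
    image = cv2.imread(str(base / f"{index:06d}.png"))
    with open(base / f"{index:06d}.json") as f:
        ground_truth = json.load(f)  # field names depend on the exporter
    return image, ground_truth


if __name__ == "__main__":
    image, gt = load_sample("synthetic_dataset", 0)
```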


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Zhen Tong

As a sensor with a wide field of view, the panoramic vision sensor perceives characteristic information about the surrounding environment efficiently and conveniently, and it plays an important role in the sensory experience of art design images. Transforming visual and other sensory experiences in art design means integrating sound, image, texture, taste, and smell through reasonable rules to create better cross-media design works. To improve the sensory experience that art design works bring to the audience, combining vision with other sensory experiences can maximize the advantages of multiple modes of information dissemination; this work therefore combines the omnidirectional vision sensor with the sensory experience of art design images. The methods section introduces the omnidirectional vision sensor, art design images, and the modes and content of sensory experience, together with hyperbolic concave mirror theory and the Micusik perspective projection imaging model. The experimental section describes the experimental environment, subjects, and procedures. The analysis section covers the image database dependency test, performance, comparison of different distortion types, false and missed detection rates, algorithm runtime comparison, sensory experience analysis, and feature point screening. Among the responses to the art design images, 87.21% of the audience reported feeling happy about the first image, indicating that its main idea can bring people joy. For the second image, most responses were sad; for the third, more than half were melancholy; and for the fourth, 69.34% of the audience reported feeling calm. This shows that differences in the content of art design images can give people different sensory experiences.
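The abstract cites hyperbolic mirror theory and the Micusik projection model but gives no equations. As a minimal sketch, the snippet below implements the closely related unified central catadioptric projection (project the point onto a unit sphere, then apply a perspective projection from a center shifted by a mirror parameter xi); the parameter values are illustrative assumptions, not calibrated values from the paper.

```python
import numpy as np


def catadioptric_project(X, xi, K):
    """Unified central catadioptric projection of a 3D point.

    X  : (3,) point in the mirror-centered frame
    xi : mirror parameter (a function of the hyperbola's eccentricity)
    K  : (3, 3) intrinsic matrix of the conventional camera behind the mirror
    """
    Xs = X / np.linalg.norm(X)                       # 1. project onto the unit sphere
    x, y, z = Xs
    m = np.array([x / (z + xi), y / (z + xi), 1.0])  # 2. perspective from shifted center
    u = K @ m                                        # 3. apply camera intrinsics
    return u[:2] / u[2]


# Illustrative values only: xi and K are assumptions, not calibrated parameters.
K = np.array([[300.0, 0.0, 320.0],
              [0.0, 300.0, 240.0],
              [0.0, 0.0, 1.0]])
print(catadioptric_project(np.array([0.5, 0.2, 1.0]), xi=0.9, K=K))
```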


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Bin Tan

With the continuous emergence and innovation of computer technology, mobile robots have become a hot topic in artificial intelligence and an important research area for a growing number of scholars. The core capability of a mobile robot is to perceive the surrounding environment in real time, localize itself, and navigate autonomously using this information; it is the key to autonomous movement and is of strategic research significance. Within this, the target recognition ability of a soccer robot's vision system is the basis for path planning, motion control, and collaborative task completion, and the main recognition task falls to the omnidirectional vision system. How to improve the target recognition accuracy and the illumination adaptability of the robot's omnidirectional vision system is therefore the key issue of this paper. We completed the system construction and program debugging of an omnidirectional mobile robot platform and tested its omnidirectional movement, localization and map building in corridor and indoor environments, global navigation in the indoor environment, and local obstacle avoidance. Making fuller use of the robot's local visual information to extract more usable data, so that the robot's "eyes" are greatly improved by image recognition technology and the robot can obtain more accurate environmental information by itself, has long been a shared goal of scholars at home and abroad. The experiments show that the difference between the experimental group's shooting and dribbling test scores before and after training is significant at the 0.004 level, below the 0.05 threshold, supporting the use of the soccer robot for assisted training. On the one hand, we tested the positioning and navigation functions of the omnidirectional mobile robot; on the other hand, we verified the feasibility of the positioning and navigation algorithms and the multisensor fusion algorithm.
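The abstract does not detail the recognition pipeline. As a hedged sketch of one common approach to illumination-robust target recognition on soccer robots, the snippet below segments a ball-colored region in HSV space (thresholding hue rather than raw RGB reduces sensitivity to lighting changes); the threshold values are assumptions to be tuned for each venue, not the paper's parameters.

```python
import cv2
import numpy as np


def detect_ball(frame_bgr, lower_hsv=(5, 120, 80), upper_hsv=(20, 255, 255)):
    """Locate a ball-colored blob in an omnidirectional frame.

    The HSV thresholds are illustrative (roughly orange) and must be tuned;
    hue-based segmentation tolerates illumination changes better than RGB.
    """
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
    # Remove small speckles before contour extraction.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    (cx, cy), radius = cv2.minEnclosingCircle(max(contours, key=cv2.contourArea))
    return (cx, cy), radius  # image-plane position of the candidate ball
```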


2021 ◽  
Author(s):  
Mohammad Abadi ◽  
Mohammad Alashti ◽  
Patrick Holthaus ◽  
Catherine Menon ◽  
...  

2021 ◽  
Vol 11 (8) ◽  
pp. 3360
Author(s):  
Huei-Yung Lin ◽  
Chien-Hsing He

This paper presents a novel self-localization technique for mobile robots based on image feature matching from omnidirectional vision. The proposed method first constructs a virtual space with synthetic omnidirectional imaging to simulate a mobile robot equipped with an omnidirectional vision system in the real world. In the virtual space, a number of vertical and horizontal lines are generated according to the structure of the environment and imaged by the virtual omnidirectional camera using the catadioptric projection model. The omnidirectional images derived from the virtual and real environments are then used to match the synthetic lines with real scene edges. Finally, the pose and trajectory of the mobile robot in the real world are estimated by the efficient perspective-n-point (EPnP) algorithm based on line feature matching. In our experiments, the effectiveness of the proposed self-localization technique was validated by the navigation of a mobile robot in a real-world environment.
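The pipeline ends with EPnP on matched line features. As a minimal sketch of that final step, assuming the 3D-2D correspondences from the line matching stage are already available, OpenCV's solvePnP with the SOLVEPNP_EPNP flag estimates the camera pose; the point values and intrinsics below are placeholders, not the paper's data.

```python
import cv2
import numpy as np

# Placeholder correspondences: sampled 3D points on the matched scene lines
# and their 2D projections on the (rectified) omnidirectional image.
object_points = np.random.rand(8, 3).astype(np.float64)        # world coordinates
image_points = (np.random.rand(8, 2) * 480).astype(np.float64) # pixel coordinates

K = np.array([[400.0, 0.0, 320.0],   # assumed intrinsics of the rectified view
              [0.0, 400.0, 240.0],
              [0.0, 0.0, 1.0]])

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None,
                              flags=cv2.SOLVEPNP_EPNP)
R, _ = cv2.Rodrigues(rvec)           # rotation matrix from the Rodrigues vector
# The robot pose in the world frame is the inverse of the camera extrinsics:
R_world = R.T
t_world = -R.T @ tvec
```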


2021 ◽  
pp. 913-915
Author(s):  
Peter Sturm
