Estimating Pose of Omnidirectional Camera by Convolutional Neural Network

Author(s): Le Wang ◽ Yuhao Shan ◽ Shigang Li ◽ Takahiro Kosaki

2020 ◽ Vol 32 (6) ◽ pp. 1193-1199
Author(s): Shunya Tanaka ◽ Yuki Inoue

An omnidirectional camera captures the full 360° surroundings in a single image and therefore directly provides the azimuth angle of a target object or person. By arranging two omnidirectional cameras as a stereo pair, the azimuth angle of the target can be determined independently from the left and right images. The target person is localized in each image with a region-based convolutional neural network, and the distance to the target is obtained from the parallax between the two azimuth angles, as sketched below.
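
The abstract does not give the triangulation formula, but the parallax step can be illustrated with a short sketch. The function below is an assumption about the geometry (both azimuths measured from the baseline joining the two cameras toward the target), not the authors' implementation; the names and example values are illustrative only.

```python
import math

def distance_from_azimuths(theta_left, theta_right, baseline):
    """Triangulate target distance from two azimuth angles (radians).

    A minimal sketch of the parallax idea, not the paper's code. Both
    angles are assumed to be measured from the baseline joining the two
    omnidirectional cameras; `baseline` is the camera separation in metres.
    """
    parallax = math.pi - theta_left - theta_right   # angle subtended at the target
    if parallax <= 0:
        raise ValueError("rays do not intersect in front of the baseline")
    # Law of sines: range from the left camera, then perpendicular depth.
    range_left = baseline * math.sin(theta_right) / math.sin(parallax)
    depth = range_left * math.sin(theta_left)
    return range_left, depth

# Example: cameras 0.5 m apart, target seen at 70 deg and 80 deg from the baseline.
print(distance_from_azimuths(math.radians(70), math.radians(80), 0.5))
```

The smaller the parallax angle becomes, the more sensitive the estimate is to azimuth error, which is why the azimuth accuracy of the per-camera detection matters.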


2018 ◽ Vol 30 (4) ◽ pp. 540-551
Author(s): Shingo Nakamura ◽ Tadahiro Hasegawa ◽ Tsubasa Hiraoka ◽ Yoshinori Ochiai ◽ ...

The Tsukuba Challenge is a competition in which autonomous mobile robots travel along a route set on a public road in a real environment. The task includes not only autonomous navigation but also finding several specific persons along the way. This study proposes a method for this person search. While many person-search algorithms combine a laser sensor with a camera, our method uses only an omnidirectional camera. The search target is detected by a convolutional neural network (CNN) that classifies candidate regions as target or non-target. Because training a CNN requires a large amount of data, pseudo images created by composition are used as training data. The method was implemented on an autonomous mobile robot, and its performance was verified in the Tsukuba Challenge 2017.
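
As a rough illustration of the "pseudo images created by composition" step, the sketch below pastes a person cut-out onto a background image with Pillow. The file paths, output size, and scale range are assumptions; the paper does not specify the composition pipeline beyond this idea, and a real pipeline would likely add further variation (lighting, blur, occlusion).

```python
import random
from PIL import Image

def compose_pseudo_image(person_path, background_path, out_size=(224, 224)):
    """Create one pseudo training image by pasting a person cut-out onto a
    background. A hedged sketch of composition-based data generation,
    not the authors' implementation.
    """
    background = Image.open(background_path).convert("RGB").resize(out_size)
    person = Image.open(person_path).convert("RGBA")  # alpha channel used as paste mask

    # Random scale and position for the pasted target.
    scale = random.uniform(0.3, 0.7)
    w = int(out_size[0] * scale)
    h = int(person.height * w / person.width)
    person = person.resize((w, h))
    x = random.randint(0, max(0, out_size[0] - w))
    y = random.randint(0, max(0, out_size[1] - h))

    background.paste(person, (x, y), mask=person)  # alpha-composite the cut-out
    return background  # labelled as "target present" for CNN training
```

Negative examples can be produced simply by using backgrounds without any pasted target, giving the classifier both classes.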


2020 ◽  
Author(s): S Kashin ◽ D Zavyalov ◽ A Rusakov ◽ V Khryashchev ◽ A Lebedev

2020 ◽ Vol 2020 (10) ◽ pp. 181-1-181-7
Author(s): Takahiro Kudo ◽ Takanori Fujisawa ◽ Takuro Yamaguchi ◽ Masaaki Ikehara

Image deconvolution has become an important problem, and its approaches fall into two categories: non-blind and blind. Non-blind deconvolution is the classic image deblurring problem, in which the point spread function (PSF) is assumed to be known and spatially uniform. Recently, convolutional neural networks (CNNs) have been applied to non-blind deconvolution. Although CNNs can cope with complex variations in unknown images, some conventional CNN-based methods handle only small PSFs and do not consider the large PSFs that occur in the real world. In this paper we propose a CNN-based non-blind deconvolution framework that removes large-scale ringing from the deblurred image. Our method has three key points. First, the network architecture preserves both large and small features in the image. Second, the training dataset is created so as to preserve details. Third, the images are extended to minimize the effect of large ringing at the image borders. In our experiments with three kinds of large PSFs, our method produced high-precision results both quantitatively and qualitatively.
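
The paper's network is not described here in enough detail to reproduce, but the third key point, extending the image before deconvolution so that border ringing can be cropped away, can be sketched with a classical Wiener filter standing in for the CNN. The padding width and the regularisation constant `k` below are assumptions, not values from the paper.

```python
import numpy as np

def wiener_deconvolve_with_extension(blurred, psf, k=1e-3):
    """Non-blind Wiener deconvolution with reflective edge extension.

    A minimal baseline, not the paper's CNN: the image is padded by half
    the PSF size on each side (reflection), so the wrap-around ringing of
    the FFT-based filter falls in the margin, which is cropped afterwards.
    """
    ph, pw = psf.shape
    pad_h, pad_w = ph // 2, pw // 2
    ext = np.pad(blurred.astype(float), ((pad_h, pad_h), (pad_w, pad_w)),
                 mode="reflect")

    # Embed the normalised PSF in an array of the extended size, centred at (0, 0).
    psf_full = np.zeros_like(ext)
    psf_full[:ph, :pw] = psf / psf.sum()
    psf_full = np.roll(psf_full, (-pad_h, -pad_w), axis=(0, 1))

    H = np.fft.fft2(psf_full)
    G = np.fft.fft2(ext)
    F = np.conj(H) / (np.abs(H) ** 2 + k) * G      # Wiener filter in frequency domain
    restored = np.real(np.fft.ifft2(F))

    # Crop the extension off to return the original image size.
    return restored[pad_h:pad_h + blurred.shape[0],
                    pad_w:pad_w + blurred.shape[1]]
```

The larger the PSF, the wider the extension needs to be, which is consistent with the paper's motivation for handling large PSFs explicitly.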

