Localization of RGB-D Camera Networks by Skeleton-Based Viewpoint Invariance Transformation

Author(s):  
Yun Han ◽  
Sheng-Luen Chung ◽  
Jeng-Sheng Yeh ◽  
Qi-Jun Chen
2014 ◽  
Vol 63 (7) ◽  
pp. 074211

Sensors ◽  
2019 ◽  
Vol 19 (2) ◽  
pp. 291 ◽  
Author(s):  
Hamdi Sahloul ◽  
Shouhei Shirafuji ◽  
Jun Ota

Local image features are invariant to in-plane rotations and robust to minor viewpoint changes. However, the current detectors and descriptors for local image features fail to accommodate out-of-plane rotations larger than 25°–30°. Invariance to such viewpoint changes is essential for numerous applications, including wide baseline matching, 6D pose estimation, and object reconstruction. In this study, we present a general embedding that wraps a detector/descriptor pair in order to increase viewpoint invariance by exploiting input depth maps. The proposed embedding locates smooth surfaces within the input RGB-D images and projects them into a viewpoint invariant representation, enabling the detection and description of more viewpoint invariant features. Our embedding can be utilized with different combinations of detector/descriptor pairs, according to the desired application. Using synthetic and real-world objects, we evaluated the viewpoint invariance of various detectors and descriptors, for both standalone and embedded approaches. While standalone local image features fail to accommodate average viewpoint changes beyond 33.3°, our proposed embedding boosted the viewpoint invariance to different levels, depending on the scene geometry. Objects with distinct surface discontinuities were on average invariant up to 52.8°, and the overall average for all evaluated datasets was 45.4°. Similarly, out of a total of 140 combinations involving 20 local image features and various objects with distinct surface discontinuities, only a single standalone local image feature exceeded the goal of 60° viewpoint difference in just two combinations, as compared with 19 different local image features succeeding in 73 combinations when wrapped in the proposed embedding. Furthermore, the proposed approach operates robustly in the presence of input depth noise, at and beyond the noise levels of low-cost commodity depth sensors.
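For intuition, the following is a minimal, hypothetical sketch of this kind of depth-based embedding, not the authors' implementation (whose surface localization and projection are more general). It fits a single plane to the depth map, rectifies the RGB image to a fronto-parallel view via the rotation-induced homography H = K R K⁻¹, and then runs an off-the-shelf detector/descriptor (SIFT via OpenCV) on the rectified image. All function names and the single-plane assumption are illustrative simplifications.

```python
import numpy as np
import cv2

def plane_normal(depth, K):
    """Fit a plane to the depth map by SVD; return its unit normal,
    oriented toward the camera (negative z in camera coordinates)."""
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pts = np.dstack(((u - cx) * depth / fx,
                     (v - cy) * depth / fy,
                     depth)).reshape(-1, 3)
    pts = pts[pts[:, 2] > 0]                      # drop invalid depth readings
    _, _, vt = np.linalg.svd(pts - pts.mean(axis=0), full_matrices=False)
    n = vt[-1]                                    # direction of least variance
    return n if n[2] < 0 else -n

def frontalize(rgb, n, K):
    """Warp the image as if the camera had rotated to view the fitted
    plane head-on, using the rotation-induced homography H = K R K^-1."""
    z = np.array([0.0, 0.0, -1.0])                # target: normal faces camera
    v, c = np.cross(n, z), float(np.dot(n, z))
    vx = np.array([[0, -v[2], v[1]],
                   [v[2], 0, -v[0]],
                   [-v[1], v[0], 0]])
    R = np.eye(3) + vx + vx @ vx / (1.0 + c)      # Rodrigues: rotates n onto z
    H = K @ R @ np.linalg.inv(K)
    return cv2.warpPerspective(rgb, H, rgb.shape[1::-1])

def detect_viewpoint_invariant(rgb, depth, K):
    """Wrap an off-the-shelf detector/descriptor (SIFT here) around the
    rectified view instead of the raw image."""
    flat = frontalize(rgb, plane_normal(depth, K), K)
    gray = cv2.cvtColor(flat, cv2.COLOR_BGR2GRAY)
    return cv2.SIFT_create().detectAndCompute(gray, None)
```

The design point is that the detector/descriptor pair itself is unchanged; the depth map is used only to undo the out-of-plane rotation before feature extraction, which is why any pair can be swapped in.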


1983 ◽  
Vol 6 ◽  
pp. 399-404 ◽  
Author(s):  
Ian Halliday ◽  
Arthur A. Griffin ◽  
Alan T. Blackwell

Camera networks for the study of bright fireballs now have a history approaching two decades. It was hoped that the networks would produce a statistically significant group of recovered meteorites with accurate orbits. Due to the great difficulty of locating the meteorites from a photographed event, there are still only three meteorites with orbits determined from suitable photographs: Pribram, Lost City, and Innisfree (Ceplecha 1961, McCrosky et al. 1971, and Halliday et al. 1978, respectively). Networks do, however, provide an alternative approach to the problem. Instead of determining approximate orbits from visual observations of recovered meteorite falls, it is now preferable to use reliable orbits from the camera networks for fireballs that are believed to have dropped meteorites that could not be located, or that are believed to have been physically identical to meteorites although no appreciable mass survived the atmospheric flight. This paper will review current knowledge based on this approach to the problem.


2016 ◽  
Vol 16 (10) ◽  
pp. 3875-3886 ◽  
Author(s):  
Yong Wang ◽  
Dianhong Wang ◽  
Xufan Zhang ◽  
Jun Chen ◽  
Yamin Li
