viewpoint change
Recently Published Documents


TOTAL DOCUMENTS: 36 (five years: 8)
H-INDEX: 6 (five years: 1)

2021, Vol. 12 (1), pp. 293
Author(s): Rafał Kukołowicz, Maksymilian Chlipala, Juan Martinez-Carranza, Moncy Sajeev Idicula, Tomasz Kozacki

Near-eye holographic displays are the holy grail of wearable 3D display devices because they are intended to project realistic wide-angle virtual scenes with parameters matching human vision. One of the key features of a realistic perspective is the ability to move freely around the virtual scene. This can be achieved by addressing the display with wide-angle computer-generated holograms (CGHs) that enable continuous viewpoint change. However, to the best of our knowledge, no existing technique can generate this type of content. Thus, in this work we propose an accurate, non-paraxial hologram update method for wide-angle CGHs that supports continuous viewpoint change around the scene. The method rests on the observation that, for a small change in perspective, two consecutive holograms share overlapping data. This allows the corresponding part of the information from the previous view to be reused, eliminating the need to generate an entirely new hologram. Holographic information for the next viewpoint is calculated in two steps: first, a tool approximating Angular Spectrum Propagation is proposed to transfer the hologram data from the previous viewpoint; second, the efficient Phase Added Stereogram algorithm generates the missing hologram content. The methodology is therefore both fast and accurate. Numerical and optical experiments are carried out to validate the proposed method.
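To make the first calculation step concrete, here is a minimal NumPy sketch of the textbook angular spectrum method that the authors' propagation tool approximates. It is not the paper's wide-angle approximation or the Phase Added Stereogram step; the function name and parameters are illustrative only.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex optical field by distance z using the
    standard angular spectrum method (illustrative sketch, not the
    paper's wide-angle tool)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)              # spatial frequencies (1/m)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    # kz for propagating waves; evanescent components are suppressed
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = (2 * np.pi / wavelength) * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)        # transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

A viewpoint update along the lines of the abstract would propagate the reusable part of the previous hologram with such a kernel and compute only the missing region with a stereogram-type algorithm.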


2020
Author(s): Edward Heywood-Everett, Daniel H. Baker, Tom Hartley

There are at least two distinct ways in which the brain encodes spatial information: egocentric representations encode locations relative to the observer, whereas allocentric representations encode locations relative to the environment. Both inform spatial memory, but the extent to which they influence behaviour varies with the task. In the present study, two preregistered experiments used a psychophysical approach to measure the precision of spatial memory while varying ego- and allocentric task demands. Participants were asked to detect the changed location of one of four objects when seen from a new viewpoint (rotated by 0°, 5°, 15°, 45° or 135°). Experiment 1 used a Same/Different task and Experiment 2 used a two-alternative forced-choice (2AFC) task. In both experiments, spatial change detection thresholds increased monotonically but non-linearly with viewpoint change. This pattern was consistent with a preregistered model in which distinct parameters, corresponding to egocentric and allocentric contributions, change lawfully as a function of viewpoint shift. Our results provide a clearer understanding of how underlying memory representations interact to inform our spatial knowledge of the environment.
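As a sketch of how such psychophysical thresholds are typically extracted (the abstract does not specify the authors' fitting procedure), the following fits a Weibull psychometric function to 2AFC data with SciPy, taking the ~82%-correct point as the threshold. All data values and names here are made up for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_2afc(x, alpha, beta):
    """2AFC psychometric function: 50% guessing floor rising to ~100%.
    alpha is the threshold (~82% correct); beta controls the slope."""
    return 0.5 + 0.5 * (1.0 - np.exp(-(x / alpha) ** beta))

# Hypothetical data: object displacement vs. proportion correct.
displacement = np.array([0.5, 1.0, 2.0, 4.0, 8.0])   # arbitrary units
p_correct = np.array([0.52, 0.60, 0.74, 0.90, 0.97])

(alpha, beta), _ = curve_fit(weibull_2afc, displacement, p_correct,
                             p0=[2.0, 2.0])
print(f"threshold = {alpha:.2f}, slope = {beta:.2f}")
```

Repeating such a fit at each viewpoint rotation (0° through 135°) would yield the threshold-versus-rotation curve whose monotonic, non-linear shape the abstract describes.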


Information, 2019, Vol. 10 (12), pp. 376
Author(s): Qingming Zhang, Buhai Shi

This paper presents a novel method for extracting local features that computes global maxima in a discretized scale-space representation instead of calculating local extrema. To avoid interpolating scales over few data points and to achieve perfect rotation invariance, the method adopts two essential techniques: growing kernel widths in whole pixels and using disk-shaped convolution templates. Since a convolution template is finite in size, and finite templates introduce computational error into the convolution, we analyse this problem in detail and derive an upper bound on the error. This bound is used to ensure that all features the method returns are computed within a given tolerance. In addition, a relative threshold is used to select features, reinforcing robustness to changing illumination. Simulations show that the new method attains high repeatability in various situations, including scale change, rotation, blur, JPEG compression, illumination change, and even viewpoint change.
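The core idea, disk-shaped templates whose radius grows in whole pixels with global rather than local maxima taken per scale, can be sketched as follows. This toy version omits the paper's error bound and response function; all names and parameter values are illustrative.

```python
import numpy as np
from scipy.ndimage import convolve

def disk_kernel(radius):
    """Disk-shaped convolution template: rotationally symmetric support."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    k = (x ** 2 + y ** 2 <= radius ** 2).astype(float)
    return k / k.sum()

def global_maxima_features(img, radii=(2, 4, 8, 16), top_k=50):
    """Toy version of the paper's idea: for each scale (kernel width
    grown in whole pixels), keep the global maxima of the filter
    response rather than local extrema."""
    feats = []
    for r in radii:
        resp = convolve(img.astype(float), disk_kernel(r), mode='reflect')
        idx = np.argsort(resp, axis=None)[-top_k:]   # strongest responses
        ys, xs = np.unravel_index(idx, resp.shape)
        feats.extend((x, y, r) for x, y in zip(xs, ys))
    return feats
```

Because a disk is rotationally symmetric, the response at a point is unchanged when the image is rotated about it, which is the source of the claimed perfect rotation invariance.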


2019, Vol. 9 (16), pp. 3336
Author(s): Tzu-Wei Mi, Mau-Tsuen Yang

With the availability of 360-degree cameras, 360-degree videos have recently become popular. To attach a virtual tag to a physical object in a 360-degree video for augmented reality applications, automatic object tracking is required so that the tag can follow its corresponding object. Relative to ordinary videos, 360-degree videos in the equirectangular format exhibit special characteristics such as viewpoint change, occlusion, deformation, lighting change, scale change, and camera shake. Tracking algorithms designed for ordinary videos may therefore not work well on 360-degree videos. We thoroughly evaluate the accuracy and speed of eight modern trackers on 360-degree videos, discuss their pros and cons, and suggest possible adaptations to the 360-degree setting. Finally, we provide a dataset of nine 360-degree videos with ground-truth target positions as a benchmark for future research.
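One practical issue when evaluating trackers on equirectangular frames is that pixel distance is a poor error measure near the poles and across the horizontal wrap. The sketch below (coordinate conventions assumed, not taken from the paper) converts pixel coordinates to spherical angles and measures great-circle error instead.

```python
import numpy as np

def equirect_to_sphere(u, v, width, height):
    """Map equirectangular pixel coords to (longitude, latitude) in
    radians. Targets crossing the frame edge wrap in longitude, which
    ordinary trackers do not model."""
    lon = (u / width) * 2 * np.pi - np.pi      # [-pi, pi)
    lat = np.pi / 2 - (v / height) * np.pi     # [pi/2, -pi/2]
    return lon, lat

def angular_error(p, q):
    """Great-circle distance between two (lon, lat) points: a more
    faithful error measure than pixel distance on 360-degree frames."""
    lon1, lat1 = p
    lon2, lat2 = q
    return np.arccos(np.clip(
        np.sin(lat1) * np.sin(lat2) +
        np.cos(lat1) * np.cos(lat2) * np.cos(lon1 - lon2), -1.0, 1.0))
```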


Author(s): M. Chen, Q. Zhu, S. Yan, Y. Zhao

Feature matching is a fundamental technical issue in many applications of photogrammetry and remote sensing. Although recently developed local feature detectors and descriptors have advanced point matching, urban area images characterized by large discrepancies in viewing angle remain challenging. In this paper, we define the concept of a local geometrical structure (LGS) and propose a novel feature matching method that exploits the LGS of interest points, specifically addressing the difficulty of matching points on wide-baseline urban area images. We first detect interest points with a popular detector and compute the LGS of each point. The interest points are then classified into three categories on the basis of their LGS. Thereafter, a hierarchical matching framework that is robust to viewpoint change computes correspondences, with feature region computation methods, description methods, and matching strategies tailored to each category of interest point according to its LGS properties. Finally, a random sample consensus (RANSAC) algorithm based on the fundamental matrix eliminates outliers. Thanks to the LGS-based adaptive construction of feature regions, the method generates similar descriptors for corresponding interest points under large viewpoint variation, even in discontinuous areas. Experimental results demonstrate significant improvements in the number of correct matches and in matching precision over traditional matching methods on wide-baseline urban area images.
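The LGS computation, point classification, and adaptive feature regions are the authors' contribution and are not reproduced here; below is only a generic wide-baseline matching pipeline in OpenCV ending with the abstract's final step, fundamental-matrix RANSAC, as a baseline to compare against.

```python
import cv2
import numpy as np

def match_with_ransac(img1, img2):
    """Generic SIFT + ratio test + fundamental-matrix RANSAC pipeline
    (baseline sketch, not the paper's LGS method)."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    raw = matcher.knnMatch(des1, des2, k=2)
    # Lowe's ratio test to discard ambiguous matches
    good = [m for m, n in raw if m.distance < 0.75 * n.distance]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
    # RANSAC on the fundamental matrix (needs >= 8 tentative matches)
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
    inliers = [g for g, keep in zip(good, mask.ravel()) if keep]
    return inliers, F
```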


Sensors, 2018, Vol. 18 (9), pp. 2751
Author(s): Xizhe Xue, Ying Li, Qiang Shen

With the increasing availability of low-cost, commercially available unmanned aerial vehicles (UAVs), visual tracking from UAVs has become more and more important for applications such as automatic navigation, obstacle avoidance, traffic monitoring, and search and rescue. However, real-world aerial tracking poses many challenges stemming from platform motion and image instability, such as aspect ratio change, viewpoint change, fast motion, and scale variation. In this paper, an efficient object tracking method for UAV videos is proposed to tackle these challenges. We construct fused features that capture gradient information and color characteristics simultaneously. Furthermore, a cellular automaton is introduced to update the target's appearance template accurately and sparsely. In particular, a high-confidence model-updating strategy is developed based on a stability function. Systematic comparative evaluations on the popular UAV123 dataset show the efficiency of the proposed approach.
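The abstract does not define its stability function; a common choice in correlation-filter trackers for high-confidence updating is the average peak-to-correlation energy (APCE), sketched below as one plausible reading. The threshold ratio and all names here are assumptions for illustration.

```python
import numpy as np

def apce(response):
    """Average peak-to-correlation energy, a common stability measure
    for correlation-filter trackers (one plausible reading of the
    paper's 'stability function', not its actual definition)."""
    peak = response.max()
    floor = response.min()
    return (peak - floor) ** 2 / np.mean((response - floor) ** 2)

def should_update(response, peak_hist, apce_hist, ratio=0.6):
    """Gate template updates: adapt the model only when the current
    peak and APCE are a healthy fraction of their historical means,
    which avoids learning from occluded or corrupted frames."""
    return (response.max() >= ratio * np.mean(peak_hist) and
            apce(response) >= ratio * np.mean(apce_hist))
```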

