motion parallax
Recently Published Documents

TOTAL DOCUMENTS: 533 (last five years: 69)
H-INDEX: 35 (last five years: 4)
2021, Vol 2021, pp. 1-11
Author(s): Yongquan Ge, Xianzhi Yu, Mingzhi Chen, Chengxin Yu, Yingchun Liu, ...

The irregular height and complexity of steel structures make dynamic deformation monitoring difficult. A photogrammetric dynamic monitoring system (PDMS) can capture the dynamic deformation of a steel structure, but its flexibility is limited because the camera station can only be placed on the ground. In this study, a UAV (unmanned aerial vehicle)-based PDMS is proposed for monitoring the dynamic deformation of steel structures and is validated in a steel frame test and a test at the Jinan Olympic Sports Center Tennis Stadium. Because a hovering UAV cannot hold its attitude exactly, an improved Z-MP (zero-centered motion parallax) method is applied, and its results are compared with those of the original Z-MP method. The tests confirm that UAV-PDMS is feasible for steel structure deformation monitoring and that the improved Z-MP method reduces the error introduced by UAV hovering. The monitoring results show that the steel structures of the Jinan Olympic Sports Center Tennis Stadium are robust, with elastic deformations that remain within the permissible value.
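For orientation only (the abstract does not detail the Z-MP formulation, so this is a generic photogrammetric relation, not the authors' method): a small unintended camera translation \(T_x\) at object distance \(Z\), imaged with focal length \(f\), produces an apparent image displacement that must be separated from true structural motion, roughly

\[
\Delta x_{\text{apparent}} \approx \frac{f\,T_x}{Z}, \qquad
\Delta x_{\text{deformation}} = \Delta x_{\text{measured}} - \Delta x_{\text{apparent}}.
\]

Any hovering-error correction has to estimate and remove camera-induced terms of this kind; how the improved Z-MP method does so is described in the full paper rather than the abstract.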


2021
Author(s): HyungGoo Kim, Dora Angelaki, Gregory DeAngelis

Detecting objects that move in a scene is a fundamental computation performed by the visual system. This computation is greatly complicated by observer motion, which causes most objects to move across the retinal image. How the visual system detects scene-relative object motion during self-motion is poorly understood. Human behavioral studies suggest that the visual system may identify local conflicts between motion parallax and binocular disparity cues to depth, and may use these signals to detect moving objects. We describe a novel mechanism for performing this computation based on neurons in macaque area MT with incongruent depth tuning for binocular disparity and motion parallax cues. Neurons with incongruent tuning respond selectively to scene-relative object motion, and their responses predict perceptual decisions when animals are trained to detect a moving object during self-motion. This finding establishes a novel functional role for neurons with incongruent tuning for multiple depth cues.
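As a textbook illustration of the cue conflict involved (a standard small-angle approximation, not taken from the paper): for an observer translating laterally at speed \(v\) while fixating a point at distance \(Z_f\), a stationary point near the line of sight at distance \(Z\) sweeps across the retina at roughly

\[
\omega \approx v\left(\frac{1}{Z} - \frac{1}{Z_f}\right),
\]

so its motion parallax is locked to the same depth that binocular disparity reports. An object that moves independently of the scene violates this relationship, and it is this local inconsistency that the incongruently tuned MT neurons are proposed to signal.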


Author(s): Kevin Hunke, Jacob Engelmann, Hanno Gerd Meyer, Axel Schneider

2021
Author(s): Jarbas Jácome, Arlindo Gomes, Willams de Lima Costa, Lucas Silva Figueiredo, Jader Abreu, ...

2021, Vol 10 (1)
Author(s): Jianyu Hua, Erkai Hua, Fengbin Zhou, Jiacheng Shi, Chinhua Wang, ...

Glasses-free three-dimensional (3D) displays are one of the game-changing technologies expected to redefine the display industry in portable electronic devices. However, because of the limited resolution of state-of-the-art display panels, current 3D displays suffer from a critical trade-off among spatial resolution, angular resolution, and viewing angle. Inspired by the spatially variant resolution found in vertebrate eyes, we propose a 3D display with spatially variant information density: a stereoscopic experience with smooth motion parallax is maintained at the central view, while the viewing angle is enlarged at the periphery. The display is enabled by a large-scale 2D-metagrating complex that manipulates hybrid dot-, linear-, and rectangular-shaped views. Furthermore, a video-rate, full-color 3D display with an unprecedented 160° horizontal viewing angle is demonstrated. With its thin and light form factor, the proposed 3D system can be integrated with off-the-shelf flat panels, making it promising for applications in portable electronics.
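The trade-off mentioned above can be seen with a simplified one-dimensional pixel budget (an illustrative accounting, not the paper's analysis): if a panel with \(N_{\text{panel}}\) pixels per row is divided among \(N_v\) views spread over a viewing angle \(\Theta\), then

\[
R_{\text{spatial}} \approx \frac{N_{\text{panel}}}{N_v}, \qquad
\Delta\theta \approx \frac{\Theta}{N_v},
\]

so at fixed panel resolution, enlarging the viewing angle or refining the angular pitch necessarily costs per-view spatial resolution. Concentrating dense views near the central viewing zone, as proposed here, spends the pixel budget where smooth motion parallax matters most.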


2021
Author(s): Philip R L Parker, Eliott T T Abe, Natalie T Beatie, Emmalyn S P Leonard, Dylan M Martins, ...

In natural contexts, sensory processing and motor output are closely coupled, which is reflected in the fact that many brain areas contain both sensory and movement signals. However, standard reductionist paradigms decouple sensory decisions from their natural motor consequences, and head fixation deprives the animal of the natural sensory consequences of self-motion. In particular, movement through the environment provides a number of depth cues beyond stereo vision that are poorly understood. To study the integration of visual processing and motor output in a naturalistic task, we investigated distance estimation in freely moving mice. We found that mice use vision to accurately jump across a variable gap, directly coupling a visual computation to its corresponding ethological motor output. Monocular eyelid suture did not affect performance, indicating that mice can use cues that do not depend on binocular disparity and stereo vision. Under monocular conditions, mice performed more vertical head movements, consistent with the use of motion parallax cues, and optogenetic suppression of primary visual cortex impaired task performance. Together, these results show that mice can use monocular cues, relying on visual cortex, to accurately judge distance. Furthermore, this behavioral paradigm provides a foundation for studying how neural circuits convert sensory information into ethological motor output.
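The geometry behind the vertical head movements is simple (an illustrative relation, not the paper's model): a head translation of amplitude \(h\) shifts the retinal direction of a target at distance \(d\) by approximately

\[
\Delta\theta \approx \frac{h}{d} \quad\Rightarrow\quad d \approx \frac{h}{\Delta\theta},
\]

so an animal that can estimate its own head movement can in principle recover absolute distance from the resulting parallax with a single eye.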


2021, Vol 21 (9), pp. 2035
Author(s): Xue Teng, Laurie Wilcox, Robert Allison

Photonics, 2021, Vol 8 (8), pp. 337
Author(s): Jiacheng Shi, Jianyu Hua, Fengbin Zhou, Min Yang, Wen Qiao

Glasses-free augmented reality (AR) 3D displays have attracted great interest for their ability to merge virtual 3D objects naturally with real scenes, without the aid of any wearable device. Here we propose an AR vector light field display based on a view combiner and an off-the-shelf projector. The view combiner is sparsely covered with pixelated multilevel blazed gratings (MBGs) for the projection of perspective virtual images. Multi-order diffraction of the MBGs is exploited to increase the viewing distance and the vertical viewing angle. In a 20-inch prototype, multiple sets of 16 horizontal views form smooth parallax, the viewing distance of the 3D scene exceeds 5 m, the vertical viewing angle is 15.6°, and the light efficiency of every view exceeds 53%. We demonstrate that the displayed virtual 3D scene retains natural motion parallax and high brightness while maintaining a consistent occlusion effect with natural objects. This research can be extended to applications in areas such as human–computer interaction, entertainment, education, and medical care.
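For context (the standard grating relation, not design data from the paper): a blazed grating of period \(\Lambda\) redirects light of wavelength \(\lambda\) incident at angle \(\theta_i\) into discrete orders \(m\) according to

\[
\sin\theta_m = \sin\theta_i + \frac{m\lambda}{\Lambda},
\]

which is why exploiting several diffraction orders of the pixelated MBGs, rather than a single order, can spread the projected views over a longer viewing distance and a wider vertical angle.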

