viewing distance — Recently Published Documents

TOTAL DOCUMENTS: 430 (five years: 96)
H-INDEX: 35 (five years: 4)

2022 ◽  
Author(s):  
Yannick Sauer ◽  
Alexandra Sipatchin ◽  
Siegfried Wahl ◽  
Miguel García García

Abstract: Virtual reality as a research environment has seen a boost in popularity over the last decades. Not only have the usage fields for this technology broadened, but a research niche has also appeared as the hardware improved and became more affordable. Experiments in vision research are built upon accurately displaying stimuli at a specific position and size. For classical screen setups, viewing distance and pixel position on the screen define the perceived position for subjects relatively precisely. However, projection fidelity in head-mounted displays (HMDs) strongly depends on physiological parameters of the eye and face. This study introduces an inexpensive method to measure the perceived field of view and its dependence upon eye position and interpupillary distance, using a super-wide-angle camera. Measurements of multiple consumer VR headsets show that manufacturers’ claims regarding the field of view of their HMDs are mostly unrealistic. Additionally, we performed a “Goldmann” perimetry test in VR to obtain subjective results as validation of the objective camera measurements. Based on these novel data, the applicability of these devices for testing humans’ field of view was evaluated.
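For the classical screen setup mentioned above, the mapping between on-screen size and perceived position follows from simple trigonometry: a stimulus of physical size s at viewing distance d subtends 2·atan(s/2d). A minimal sketch of that arithmetic (generic, not tied to any particular display toolkit):

```python
import math

def visual_angle_deg(size_m: float, distance_m: float) -> float:
    """Visual angle (degrees) subtended by a stimulus of a given
    physical size, centred at the line of sight, at a viewing distance."""
    return math.degrees(2.0 * math.atan(size_m / (2.0 * distance_m)))

def stimulus_size_m(angle_deg: float, distance_m: float) -> float:
    """Physical size needed to subtend a target visual angle
    at the given viewing distance (inverse of visual_angle_deg)."""
    return 2.0 * distance_m * math.tan(math.radians(angle_deg) / 2.0)
```

For example, a 5-degree stimulus at a 60 cm viewing distance needs to be about 5.2 cm wide; halving the viewing distance roughly doubles the angle for small stimuli.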


Sensors ◽  
2022 ◽  
Vol 22 (2) ◽  
pp. 499
Author(s):  
Hua Zhang ◽  
Xinwen Hu ◽  
Ruoyun Gou ◽  
Lingjun Zhang ◽  
Bolun Zheng ◽  
...  

The human visual system (HVS), affected by viewing distance when perceiving stereo image information, is of great significance to the study of stereoscopic image quality assessment. Many stereoscopic image quality assessment methods do not comprehensively consider human visual perception characteristics. Accordingly, we propose a Rich Structural Index (RSI) for Stereoscopic Image objective Quality Assessment (SIQA) based on multi-scale perception characteristics. To begin with, we put the stereo pair through an image pyramid based on the Contrast Sensitivity Function (CSF) to obtain sensitive images of different resolutions. Then, we obtain a local Luminance and Structural Index (LSI) in a locally adaptive manner on gradient maps, which considers luminance masking and contrast masking. At the same time, we use Singular Value Decomposition (SVD) to obtain a Sharpness and Intrinsic Structural Index (SISI) that effectively captures the changes introduced in the image due to distortion. Meanwhile, considering disparity edge structures, we use a gradient cross-mapping algorithm to obtain a Depth Texture Structural Index (DTSI). After that, we apply the standard-deviation method to the above results to obtain contrast indices of the reference and distortion components. Finally, to compensate for the loss caused by the randomness of the parameters, we use Support Vector Machine Regression trained with a Genetic Algorithm (GA-SVR) to obtain the final quality score. We conducted a comprehensive evaluation against state-of-the-art methods on four open databases. The experimental results show that the proposed method has stable performance and a strong competitive advantage.
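The SVD step above exploits the fact that singular values summarize a patch’s structure, so distortion shifts their distribution. A toy sharpness proxy in that spirit (an illustrative stand-in, not the paper’s SISI):

```python
import numpy as np

def svd_sharpness_index(patch: np.ndarray) -> float:
    """Toy structural index: fraction of singular-value energy in the
    largest singular value of an image patch. Strongly structured
    (near rank-1) patches score close to 1; noisy patches score lower."""
    s = np.linalg.svd(np.asarray(patch, dtype=float), compute_uv=False)
    return float(s[0] / (s.sum() + 1e-12))
```

Comparing this index between a reference patch and its distorted counterpart gives a crude per-patch quality cue, which a full method would pool across scales and combine with other indices.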


Author(s):  
Catherine Simon ◽  
Shalet Paul

Background: Digital eye strain (DES) is an emerging public health problem due to continuous exposure to electronic gadgets and digital devices for educational, occupational or entertainment purposes, especially during the COVID-19 pandemic. Children are more vulnerable to DES, as they continue to attend online classes but are unaware of the early symptoms of DES and do not complain until their vision deteriorates. The objective of this study was to assess the prevalence and risk factors of DES among school children during the pandemic.
Methods: A questionnaire-based cross-sectional study was conducted among 176 school children aged 12-16 years, studying in the 8th, 9th and 10th standards of a randomly selected school in Kollam district of Kerala, using the validated computer vision syndrome questionnaire (CVSQ), sent online via Google form to parents/guardians to record their children’s pattern of digital device usage and DES symptoms.
Results: The prevalence of DES among the school children was 29.5%. The commonest symptom was headache (n=125, 69.9%). The smartphone was the most commonly used digital device (n=159, 93.5%). The independent risk factors for DES were preferred use of a smartphone (adjusted odds ratio (AOR)=2.846; 95% CI=1.371-5.906; p=0.005) and a digital-device viewing distance of <18 inches (AOR=2.762; 95% CI=1.331-5.731; p=0.006).
Conclusions: This study has highlighted some of the risk factors associated with DES. A concerted effort is needed to raise awareness of DES by experts in the health and education sectors, along with parents and teachers, so that digital device use among children can be optimised.
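The adjusted odds ratios above come from multivariable logistic regression, but the underlying arithmetic is easy to see on a crude (unadjusted) 2×2 table. A sketch using Woolf’s confidence interval, with hypothetical counts that are not the study’s data:

```python
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """Crude odds ratio with a Woolf 95% CI from a 2x2 table.

    a: exposed cases      b: exposed non-cases
    c: unexposed cases    d: unexposed non-cases
    (counts here are illustrative, not from the study above)
    """
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi
```

An adjusted OR differs in that it is estimated jointly with the other covariates, but it is read the same way: a CI excluding 1 indicates an association at the 5% level.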


2021 ◽  
Author(s):  
M. Ishihara ◽  
K. Suzuki ◽  
J. Heo

It has been shown that with aging, the crystalline lens becomes cloudy due to cataracts, and colour perception and visual acuity deteriorate. As the world's population ages, there is a need for signage that considers older people's visual characteristics. This study aimed to clarify the effects of sign components on visual perception and to identify how these effects differ with age. We conducted a psychological evaluation using the semantic differential method with 20 young and 10 elderly participants. The results showed that the younger participants gave higher ratings than the older participants on many questions. Factor analysis showed that the questionnaire items consisted of a "visibility factor" and a "harmony factor". Visibility for the elderly was more strongly affected by viewing distance than for the young.


2021 ◽  
Vol 15 ◽  
Author(s):  
Sergio Delle Monache ◽  
Iole Indovina ◽  
Myrka Zago ◽  
Elena Daprati ◽  
Francesco Lacquaniti ◽  
...  

Gravity is a physical constraint all terrestrial species have adapted to through evolution. Indeed, gravity effects are taken into account in many forms of interaction with the environment, from the seemingly simple task of maintaining balance to the complex motor skills performed by athletes and dancers. Graviceptors, primarily located in the vestibular otolith organs, feed the Central Nervous System with information related to the gravity acceleration vector. This information is integrated with signals from semicircular canals, vision, and proprioception in an ensemble of interconnected brain areas, including the vestibular nuclei, cerebellum, thalamus, insula, retroinsula, parietal operculum, and temporo-parietal junction, in the so-called vestibular network. Classical views consider this stage of multisensory integration as instrumental to sort out conflicting and/or ambiguous information from the incoming sensory signals. However, there is compelling evidence that it also contributes to an internal representation of gravity effects based on prior experience with the environment. This a priori knowledge could be engaged by various types of information, including sensory signals like the visual ones, which lack a direct correspondence with physical gravity. Indeed, the retinal accelerations elicited by gravitational motion in a visual scene are not invariant, but scale with viewing distance. Moreover, the “visual” gravity vector may not be aligned with physical gravity, as when we watch a scene on a tilted monitor or in weightlessness. This review will discuss experimental evidence from behavioral, neuroimaging (connectomics, fMRI, TMS), and patient studies supporting the idea that the internal model estimating the effects of gravity on visual objects is constructed by transforming the vestibular estimates of physical gravity, which are computed in the brainstem and cerebellum, into internalized estimates of virtual gravity, stored in the vestibular cortex.
The integration of the internal model of gravity with visual and non-visual signals would take place at multiple levels in the cortex and might involve recurrent connections between early visual areas engaged in the analysis of spatio-temporal features of the visual stimuli and higher visual areas in temporo-parietal-insular regions.
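The scaling of retinal acceleration with viewing distance mentioned above can be made concrete: for an object falling from eye level viewed at distance d, the visual angle grows as atan(gt²/2d), so for small angles the angular acceleration is approximately g/d. A small numerical sketch (simplified geometry, assuming the drop starts at eye level):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def retinal_angle(t: float, d: float) -> float:
    """Visual angle (radians) swept by an object after falling for
    t seconds, watched from a viewing distance of d metres."""
    return math.atan(0.5 * G * t * t / d)

def retinal_acceleration(t: float, d: float, h: float = 1e-3) -> float:
    """Angular acceleration (rad/s^2) via a central second difference."""
    return (retinal_angle(t + h, d) - 2.0 * retinal_angle(t, d)
            + retinal_angle(t - h, d)) / h**2
```

Doubling the viewing distance roughly halves the retinal acceleration early in the fall, which is why retinal signals alone cannot specify physical gravity without a distance estimate.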


Author(s):  
Alejandro Rubio Barañano ◽  
Muhammad Faisal ◽  
Brendan T. Barrett ◽  
John G. Buckley

Abstract: Viewing one’s smartphone whilst walking commonly leads to a slowing of walking. Slowing walking speed may occur because of the visual constraints related to reading the hand-held phone whilst in motion. We determined how walking-induced phone motion affects the ability to read on-screen information. Phone-reading performance (PRP) was assessed whilst participants walked on a treadmill at various speeds (Slow, Customary, Fast). The fastest speed was repeated, wearing an elbow brace (Braced) or with the phone mounted stationary (Fixed). An audible cue (‘text-alert’) indicated participants had 2 s to lift/view the phone and read aloud a series of digits. PRP was the number of digits read correctly. Each condition was repeated 5 times. 3D motion analyses determined phone motion relative to the head, from which the variability in acceleration of the viewing distance and of the point of gaze in the up-down and right-left directions was assessed. A main effect of condition indicated PRP decreased with walking speed, particularly so for the Braced and Fixed conditions (p = 0.022). Walking condition also affected the phone’s relative motion (p < 0.001); post-hoc analysis indicated that acceleration variability for the Fast, Fixed and Braced conditions was increased compared to that for Slow and Customary speed walking (p ≤ 0.05). There was an inverse association between phone acceleration variability and PRP (p = 0.02). These findings may explain why walking speed slows when viewing a hand-held phone: at slower speeds, head motion is smoother/more regular, enabling the motion of the phone to be coupled with head motion, thus making fewer demands on the oculomotor system. Good coupling ensures that the retinal image is stable enough to allow legibility of the information presented on the screen.
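The acceleration-variability measure used here can be approximated from a sampled position trace by double finite differencing. A simplified one-dimensional sketch (the study’s 3D motion analysis is more involved; the traces below are synthetic):

```python
import numpy as np

def acceleration_variability(pos, dt: float) -> float:
    """Standard deviation of the finite-difference acceleration of a
    1-D position trace (e.g. phone-to-head viewing distance in metres),
    sampled every dt seconds."""
    acc = np.diff(np.asarray(pos, dtype=float), n=2) / dt**2
    return float(np.std(acc))

# Synthetic example: a smooth bob vs the same bob with jitter added.
t = np.arange(0.0, 2.0, 0.01)
smooth = 0.40 + 0.01 * np.sin(2.0 * np.pi * t)          # regular head/phone motion
jerky = smooth + 0.001 * np.random.default_rng(1).standard_normal(t.size)
```

Because double differencing amplifies high-frequency content, even millimetre-scale jitter dominates the variability score, mirroring the inverse association with reading performance reported above.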


2021 ◽  
Author(s):  
Marco Gandolfo ◽  
Hendrik Naegele ◽  
Marius V. Peelen

Boundary extension (BE) is a classical memory illusion in which observers remember more of a scene than was presented. According to predictive accounts, BE reflects the integration of visual input and expectations of what is beyond the boundaries of a scene. Alternatively, according to normalization accounts, BE reflects one end of a normalization process towards the typically-experienced viewing distance of a scene, such that BE and boundary contraction (BC) are equally common. Here, we show that BE and BC depend on depth-of-field (DOF), as determined by the aperture settings on a camera. Photographs with naturalistic DOF led to the strongest BE across a large stimulus set, while BC was primarily observed for unnaturalistic DOF. The relationship between DOF and BE was confirmed in three controlled experiments that isolated DOF from co-varying factors. In line with predictive accounts, we propose that BE is strongest for scene images that resemble day-to-day visual experience.
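The depth-of-field manipulation described here follows standard thin-lens optics: opening the aperture (smaller f-number) shrinks the in-focus zone. A sketch of the usual approximation (symbols f, N, c, s are generic, not the authors’ stimulus parameters):

```python
def depth_of_field(f: float, N: float, c: float, s: float):
    """Thin-lens depth of field.

    f: focal length, N: f-number (aperture), c: circle of confusion,
    s: subject distance; all in metres. Returns (near, far, dof);
    far is infinite once the subject is near the hyperfocal distance.
    """
    H = f * f / (N * c) + f                      # hyperfocal distance
    near = H * s / (H + (s - f))
    denom = H - (s - f)
    far = H * s / denom if denom > 0.0 else float("inf")
    return near, far, far - near
```

For a 50 mm lens focused at 2 m with a 30 µm circle of confusion, stopping down from f/2 to f/8 widens the sharp zone several-fold, matching the intuition that naturalistic (shallower) DOF isolates the subject.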


2021 ◽  
Author(s):  
Irene Caprara ◽  
Peter Janssen

Abstract: To perform tasks like grasping, the brain has to process visual object information so that the grip aperture can be adjusted before touching the object. Previous studies have demonstrated that the posterior subsector of the Anterior Intraparietal area (pAIP) is connected to area 45B, and its anterior counterpart (aAIP) to F5a. However, the role of areas 45B and F5a in visually-guided grasping is poorly understood. Here, we investigated the role of areas 45B, F5a and F5p in object processing during visually-guided grasping in two monkeys. If the presentation of an object activates a motor command related to the preshaping of the hand, as in F5p, such neurons should prefer objects presented within reachable distance. Conversely, neurons encoding a purely visual representation of an object – possibly in areas 45B and F5a – should be less affected by viewing distance. Contrary to our expectations, we found that most neurons in area 45B were object- and viewing-distance-selective (mostly Near-preferring). Area F5a showed much weaker object selectivity than 45B, with a similar preference for objects presented at the Near position. Finally, F5p neurons were less object-selective and frequently Far-preferring. In sum, area 45B – but not F5p – prefers objects presented in peripersonal space.


2021 ◽  
Vol 10 (10) ◽  
pp. 667
Author(s):  
Qingtong Shi ◽  
Bo Ai ◽  
Yubo Wen ◽  
Wenjun Feng ◽  
Chenxi Yang ◽  
...  

In a three-dimensional (3D) digital Earth environment, existing methods for visualizing ocean currents suffer from several problems, such as uneven distribution of seed points, density jumps on scale change, and cluttered visualization. In this paper, a new dynamic visualization method for multi-hierarchy flow fields based on a particle system is proposed. Specifically, three typical spherical uniform-distribution algorithms are studied and compared; the Marsaglia polar method proves the most efficient, and placing seed points with it eliminates the streamlines becoming denser from the equator to the poles on the globe. In addition, a viewport-adaptive adjustment algorithm is proposed, which keeps the particle density suitable for any viewing distance during continuous zooming. To remedy the deficiency in visual representation, we design a new dynamic pattern that enhances the expression and perception of currents, making up for the shortcomings of the arrow-glyph and streamline methods. Finally, a prototype with GPU parallelism and viewport coherence is implemented, whose feasibility and effectiveness are verified by a series of experiments. The results show that our method not only represents ocean current data clearly and efficiently, but also has an outstanding uniformity and hierarchy effect.
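The Marsaglia polar method favoured above draws a point in the unit disc by rejection and maps it onto the sphere, avoiding the pole clustering that naive latitude-longitude seeding produces. A minimal sketch:

```python
import math
import random

def marsaglia_point():
    """One point uniformly distributed on the unit sphere
    (Marsaglia, 1972): rejection-sample the unit disc, then lift."""
    while True:
        x1 = random.uniform(-1.0, 1.0)
        x2 = random.uniform(-1.0, 1.0)
        s = x1 * x1 + x2 * x2
        if s < 1.0:                      # accept only points inside the disc
            break
    root = math.sqrt(1.0 - s)
    return (2.0 * x1 * root, 2.0 * x2 * root, 1.0 - 2.0 * s)
```

Each accepted disc sample costs two uniforms and no trigonometric calls (the acceptance rate is π/4 ≈ 78.5%), which is why it compares well against trigonometric sphere-sampling schemes for seeding large particle systems.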

