eye position
Recently Published Documents


TOTAL DOCUMENTS: 724 (five years: 66)

H-INDEX: 73 (five years: 3)

2022
Author(s): Yannick Sauer, Alexandra Sipatchin, Siegfried Wahl, Miguel García García

Abstract: Virtual reality as a research environment has seen a boost in popularity over the last decades. Not only have the usage fields for this technology broadened, but a research niche has also appeared as the hardware improved and became more affordable. Experiments in vision research are built on the basis of accurately displaying stimuli at a specific position and size. For classical screen setups, viewing distance and pixel position on the screen define the perceived position for subjects relatively precisely. In HMDs, however, projection fidelity strongly depends on physiological parameters of the eye and face. This study introduces an inexpensive method, using a super-wide-angle camera, to measure the perceived field of view and its dependence on eye position and interpupillary distance. Measurements of multiple consumer VR headsets show that manufacturers' claims regarding the field of view of their HMDs are mostly unrealistic. Additionally, we performed a "Goldmann" perimetry test in VR to obtain subjective results as validation of the objective camera measurements. Based on these novel data, the applicability of these devices for testing humans' field of view was evaluated.
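The core of such a camera-based measurement is relating pixel offsets in the wide-angle image to visual angles. The paper's calibration details are not given here, so the following is only a minimal sketch of the geometry, assuming an equidistant ("f-theta") fisheye projection; the calibration constant `deg_per_px` and the edge pixel positions are entirely hypothetical numbers.

```python
def pixel_angle_deg(px, cx, deg_per_px):
    """Angle of an image point from the optical axis, assuming an
    equidistant fisheye projection (angle grows linearly with the
    distance from the image centre cx)."""
    return abs(px - cx) * deg_per_px

def horizontal_fov_deg(left_px, right_px, cx, deg_per_px):
    """Perceived horizontal FOV spanned by the visible display:
    angle to the left screen edge plus angle to the right edge."""
    return (pixel_angle_deg(left_px, cx, deg_per_px)
            + pixel_angle_deg(right_px, cx, deg_per_px))

# Hypothetical calibration: a 220 deg fisheye imaged onto a circle of
# radius 1000 px, i.e. 110 deg per 1000 px of radius.
deg_per_px = 110 / 1000
print(horizontal_fov_deg(left_px=510, right_px=1460, cx=960,
                         deg_per_px=deg_per_px))   # ~104.5 deg
```

Repeating such a measurement while shifting the camera relative to the lens would then expose the dependence of the measured FOV on eye position that the study reports.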


Author(s): Xiaoqin Jin, Yi Peng, Samer Abdo Al-wesabi, Jun Deng, Yue Ming, ...

Abstract
Purpose: To evaluate and compare different surgical approaches for the treatment of Helveston syndrome and to provide further information for preoperative planning.
Methods: From February 2008 to December 2018, data from 52 patients with Helveston syndrome were retrospectively reviewed. The surgical approach was selected based on the extent of A-pattern exotropia, dissociated vertical deviation (DVD), and superior oblique muscle overaction (SOOA) with intorsion on fundus photographs. Eye position, A-pattern, DVD, superior oblique muscle function, and binocular vision function were evaluated pre- and postoperatively. The average follow-up duration was 20.5 months.
Results: Nine cases underwent simultaneous horizontal deviation correction with bilateral superior rectus recession, 24 underwent simultaneous horizontal deviation correction with bilateral superior oblique muscle lengthening, and 19 underwent a two-stage procedure: horizontal deviation correction with superior oblique muscle lengthening, followed later by bilateral superior rectus recession. The A-pattern, DVD, SOOA, and fundus intorsion collapsed in all patients postoperatively. Forty-five patients had an orthophoric eye position with well-aligned ocular movements postoperatively, for a total success rate of 86.5% (45/52). Postoperatively, eight of the 10 patients with diplopia recovered binocular single vision, and three recovered rudimentary stereopsis (Titmus, 3000–400 seconds of arc). Compensatory head posture improved significantly postoperatively.
Conclusions: Surgical planning for Helveston syndrome should be based on the degree of the A-pattern, SOOA, DVD, and intorsion in fundus photographs, and the appropriate approach should be selected to improve patient satisfaction.


2021, Vol 11 (1)
Author(s): Apoorva Karsolia, Scott B. Stevenson, Vallabh E. Das

Abstract: Knowledge of eye position in the brain is critical for localizing objects in space. To investigate the accuracy and precision of eye-position feedback in an unreferenced environment, subjects with normal ocular alignment attempted to localize briefly presented targets during monocular and dichoptic viewing. In the task, subjects used a computer mouse to position a response disk at the remembered location of the target. Under dichoptic viewing (with red (right eye)–green (left eye) glasses), target and response disks were presented to the same or alternate eyes, leading to four conditions: green target–green response cue (LL), green–red (LR), red–green (RL), and red–red (RR). The time interval between target and response disks was varied, and localization error was defined as the difference between the estimated and real positions of the target disk. Overall, the precision of spatial localization (variance across trials) became progressively worse with time. Under dichoptic viewing, localization errors were significantly greater on alternate-eye trials than on same-eye trials and correlated with each subject's average phoria. Our data suggest that during binocular dissociation, spatial localization may be achieved by combining a reliable versional efference copy signal with a proprioceptive signal that is unreliable, perhaps because it comes from the wrong eye or is too noisy.
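The abstract defines accuracy as the signed difference between estimated and real target position, and precision as the variance of those errors across trials. A minimal sketch of that bookkeeping, with entirely hypothetical trial data, might look like this:

```python
import numpy as np

# Hypothetical trial records: (condition, target position, response), in deg
trials = [
    ("LL", 5.0, 5.2), ("LL", 5.0, 4.7),
    ("LR", 5.0, 6.3), ("LR", 5.0, 3.9),
    ("RL", -5.0, -3.6), ("RL", -5.0, -6.1),
    ("RR", -5.0, -5.1), ("RR", -5.0, -4.8),
]

def localization_stats(trials):
    """Mean signed error (accuracy) and across-trial variance (precision)
    per condition, following the definitions in the abstract."""
    by_cond = {}
    for cond, target, response in trials:
        by_cond.setdefault(cond, []).append(response - target)
    return {cond: (np.mean(errs), np.var(errs, ddof=1))
            for cond, errs in by_cond.items()}

print(localization_stats(trials))
```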


2021, Vol 13 (10), pp. 250
Author(s): Luis A. Corujo, Emily Kieson, Timo Schloesser, Peter A. Gloor

Creating intelligent systems capable of recognizing emotions is a difficult task, especially for emotions in animals. This paper describes the process of designing a proof-of-concept system to recognize emotions in horses. The system is formed by two elements: a detector and a model. The detector is a fast region-based convolutional neural network that detects horses in an image; the model is a convolutional neural network that predicts the emotions of those horses. These two elements were trained on multiple images of horses until they achieved high accuracy in their tasks. In total, 400 images of horses were collected and labeled to train both the detector and the model, while 40 were used to test the system. Once the two components were validated, they were combined into a testable system that detects equine emotions based on established behavioral ethograms indicating emotional affect through head, neck, ear, muzzle, and eye position. The system showed an accuracy of 80% on the validation set and 65% on the test set, demonstrating that it is possible to predict emotions in animals using autonomous intelligent systems. Such a system has multiple applications, including further studies in the growing field of animal emotions as well as veterinary use to assess the physical welfare of horses or other livestock.
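The pipeline described is a standard two-stage detect-then-classify pattern: a region-based detector localizes horses, then a classifier labels each cropped detection. Below is a minimal sketch of that pattern, not the authors' implementation: it uses torchvision's pretrained COCO Faster R-CNN as a stand-in detector (requires torchvision >= 0.13 for the `weights` argument), and `emotion_cnn` is a hypothetical classifier standing in for the paper's emotion model.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor, resized_crop

# Pretrained COCO detector as a stand-in for the paper's horse detector
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    weights="DEFAULT").eval()
HORSE = 19  # 'horse' in torchvision's 91-index COCO label map

@torch.no_grad()
def detect_and_classify(image, emotion_cnn, score_thresh=0.8):
    """Stage 1: detect horses; stage 2: classify each detected crop."""
    x = to_tensor(image)            # PIL image -> CxHxW float tensor
    det = detector([x])[0]          # dict with boxes, labels, scores
    results = []
    for box, label, score in zip(det["boxes"], det["labels"], det["scores"]):
        if label.item() == HORSE and score.item() >= score_thresh:
            x1, y1, x2, y2 = box.round().int().tolist()
            # Crop the detection and resize to the classifier's input size
            crop = resized_crop(x, y1, x1, y2 - y1, x2 - x1, [224, 224])
            emotion = emotion_cnn(crop.unsqueeze(0)).argmax(dim=1).item()
            results.append(((x1, y1, x2, y2), emotion))
    return results
```

Keeping the two stages separate, as the paper does, lets each be trained and validated on its own task before the combined system is evaluated end to end.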


2021, Vol 21 (9), pp. 2110
Author(s): Marta Tabanelli, Marina De Vitis, Rossella Breveglieri, Claudio Galletti, Patrizia Fattori

2021, Vol 21 (9), pp. 2118
Author(s): Megan Roussy, Rogelio Luna, Benjamin Corrigan, Adam Sachs, Lena Palaniyappan, ...

2021, Vol 11 (8), pp. 1071
Author(s): Eleanor S. Smith, Trevor J. Crawford

The memory-guided saccade task requires remembering a peripheral target location whilst inhibiting the urge to make a saccade before an auditory cue. The literature has explored the endophenotypic deficits associated with differences in target laterality, but less is known about target amplitude. The data presented came from Crawford et al. (1995), who employed a memory-guided saccade task among neuroleptically medicated and non-medicated patients with schizophrenia (n = 31, n = 12), neuroleptically medicated and non-medicated patients with bipolar affective disorder (n = 12, n = 17), and neurotypical controls (n = 30). The current analyses explore the relationships between memory-guided saccades toward targets at different eccentricities (7.5° and 15°), the behaviour exhibited across diagnostic groups, and cohorts distinguished on the basis of psychotic symptomatology. Saccade gain control and final eye position were reduced among medicated schizophrenia patients. These metrics were further reduced for targets at the greater amplitude (15°), indicating a greater deficit. The medicated cohort exhibited reduced gain control and final eye positions at both amplitudes compared with the non-medicated cohort, with deficits most marked for the furthest targets. No group differences in symptomatology (positive and negative) were reported; however, a greater deficit was observed toward the larger amplitude. This suggests that within the memory-guided saccade paradigm, diagnostic classification is more prominent than symptomatology in characterising disparities in saccade performance.
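"Saccade gain" here is the conventional ratio of primary saccade amplitude to target eccentricity, so a reduced gain at 15° means a proportionally larger undershoot. A minimal illustration, with hypothetical amplitudes at the study's two eccentricities:

```python
def saccade_gain(saccade_amplitude_deg, target_eccentricity_deg):
    """Gain of 1.0 means the saccade lands on target; < 1.0 is hypometric."""
    return saccade_amplitude_deg / target_eccentricity_deg

# Hypothetical primary-saccade amplitudes for the two target eccentricities
for target, amplitude in [(7.5, 6.9), (15.0, 12.6)]:
    print(f"{target:4.1f} deg target: gain = {saccade_gain(amplitude, target):.2f}")
# -> gain 0.92 at 7.5 deg vs 0.84 at 15 deg: the same absolute undershoot
#    pattern shows up as a lower gain at the larger amplitude
```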


2021, Vol 41 (4), pp. 705–711
Author(s): Tao Jiang, Fei Li, Jing Yu, Ruo-nan Huang, Rui Gao, ...

Author(s): Miranda Morrison, Hassen Kerkeni, Athanasia Korda, Simone Räss, Marco D. Caversaccio, ...

Abstract
Objective: The alternate cover test (ACT) in patients with acute vestibular syndrome is part of the 'HINTS' battery. Although semi-quantitative, the ACT is highly dependent on the examiner's experience and could, in theory, vary greatly between examiners. In this study, we sought to validate an automated video-oculography (VOG) system based on eye tracking and dedicated glasses.
Methods: We artificially induced a vertical strabismus to simulate a skew deviation in ten healthy subjects, aged 26 to 66, using different press-on Fresnel prisms on one eye while recording the eye position of the contralateral eye with VOG. We then compared the system's performance to that of a blinded, trained orthoptist using the conventional, semi-quantitative method of skew measurement known as the alternate prism cover test (APCT) as the gold standard.
Results: We found a significant correlation between the reference APCT and the Skew VOG (Pearson's R² = 0.606, p < 0.05). There was good agreement between the two tests (intraclass correlation coefficient 0.852, 95% CI 0.728–0.917, p < 0.001). The overall accuracy of the VOG was estimated at 80.53%, with an error rate of 19.46%. There was no significant difference between VOG skew estimates and the gold standard except for very small skews.
Conclusions: VOG offers an objective and quantitative skew measurement and proved accurate in measuring vertical eye misalignment compared with the ACT with prisms. Precision was moderate, which mandates a sufficient number of tests per subject.
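The two agreement statistics reported (Pearson's R² and the intraclass correlation coefficient) answer different questions: correlation measures whether the methods covary, while the ICC also penalizes systematic offsets between them. A minimal sketch of computing both, using scipy and the pingouin package and entirely hypothetical paired measurements:

```python
import numpy as np
import pandas as pd
import pingouin as pg
from scipy.stats import pearsonr

# Hypothetical paired skew measurements (prism dioptres) for 10 subjects
apct = np.array([2, 4, 6, 8, 10, 12, 14, 16, 18, 20], dtype=float)
vog = np.array([2.4, 3.7, 6.5, 7.9, 10.8, 11.6, 14.9, 15.2, 18.8, 21.1])

r, p = pearsonr(apct, vog)
print(f"R^2 = {r**2:.3f}, p = {p:.3g}")

# The ICC expects long-format data: one row per (subject, method) rating
df = pd.DataFrame({
    "subject": np.tile(np.arange(10), 2),
    "method": ["APCT"] * 10 + ["VOG"] * 10,
    "rating": np.concatenate([apct, vog]),
})
icc = pg.intraclass_corr(data=df, targets="subject",
                         raters="method", ratings="rating")
print(icc[["Type", "ICC", "CI95%"]])
```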


2021, Vol 15
Author(s): Edmund T. Rolls

First, neurophysiological evidence for the learning of invariant representations in the inferior temporal visual cortex is described. This includes object and face representations in the temporal lobe visual cortex with invariance for position, size, lighting, view, and morphological transforms; global object motion in the cortex in the superior temporal sulcus; and spatial view representations in the hippocampus that are invariant with respect to eye position, head direction, and place. Second, computational mechanisms that enable the brain to learn these invariant representations are proposed. For the ventral visual system, one key adaptation is the use of information available in the statistics of the environment in slow unsupervised learning to learn transform-invariant representations of objects. This contrasts with deep supervised learning in artificial neural networks, which uses training with thousands of exemplars forced into different categories by neuronal teachers. Similar slow learning principles apply to the learning of global object motion in the dorsal visual system leading to the cortex in the superior temporal sulcus. The learning rule that has been explored in VisNet is an associative rule with a short-term memory trace. The feed-forward architecture has four stages, with convergence from stage to stage. In the brain, this type of slow learning is implemented in hierarchically organized competitive neuronal networks with convergence from stage to stage and only 4–5 stages in the hierarchy. Slow learning is also shown to help the learning of coordinate transforms using gain modulation in the dorsal visual system extending into the parietal cortex and retrosplenial cortex. Representations are learned that are in allocentric spatial-view coordinates of locations in the world and that are independent of eye position, head direction, and the place where the individual is located. This enables hippocampal spatial view cells to use idiothetic (self-motion) signals for navigation when the view details are obscured for short periods.
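The "associative rule with a short-term memory trace" used in VisNet is commonly written as ȳ(τ) = (1 − η)·y(τ) + η·ȳ(τ−1), with weight update Δw = α·ȳ(τ)·x(τ): because the postsynaptic term carries a trace of recent activity, transforms of the same object seen close together in time become associated onto the same output neurons. A minimal sketch under that reading, with hypothetical parameter values:

```python
import numpy as np

def trace_rule_step(w, x, y, y_trace, alpha=0.01, eta=0.8):
    """One step of the associative trace rule:
        y_trace <- (1 - eta) * y + eta * y_trace   (short-term memory trace)
        w       <- w + alpha * outer(y_trace, x)   (associative update)
    x: presynaptic input vector; y: current postsynaptic firing;
    alpha (learning rate) and eta (trace persistence) are hypothetical.
    """
    y_trace = (1 - eta) * y + eta * y_trace
    w = w + alpha * np.outer(y_trace, x)
    return w, y_trace
```

With η = 0, the rule reduces to plain Hebbian association; raising η lengthens the temporal window over which successive transforms are bound together, which is the "slow learning" contrast with supervised deep networks drawn above.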

