observer motion
Recently Published Documents

Total documents: 33 (last five years: 3)
H-index: 12 (last five years: 1)

2021 ◽  
Author(s):  
HyungGoo Kim ◽  
Dora Angelaki ◽  
Gregory DeAngelis

Detecting objects that move in a scene is a fundamental computation performed by the visual system. This computation is greatly complicated by observer motion, which causes most objects to move across the retinal image. How the visual system detects scene-relative object motion during self-motion is poorly understood. Human behavioral studies suggest that the visual system may identify local conflicts between motion parallax and binocular disparity cues to depth, and may use these signals to detect moving objects. We describe a novel mechanism for performing this computation based on neurons in macaque area MT with incongruent depth tuning for binocular disparity and motion parallax cues. Neurons with incongruent tuning respond selectively to scene-relative object motion, and their responses are predictive of perceptual decisions when animals are trained to detect a moving object during self-motion. This finding establishes a novel functional role for neurons with incongruent tuning for multiple depth cues.
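To make the cue-conflict idea concrete, the following Python sketch compares a parallax-based depth estimate against a disparity-based one for a single image patch during lateral self-motion. It is an illustration under simplified assumptions, not the authors' model: the motion/pursuit ratio approximation, the viewing geometry, the function names, and the conflict threshold are all illustrative choices.

import numpy as np

# Sketch of cue-conflict detection during lateral self-motion (illustrative).
# Assumptions: the observer translates laterally while fixating a point in
# the scene, so the motion/pursuit ratio (retinal velocity divided by pursuit
# eye velocity) approximates depth relative to the fixation distance, and
# binocular disparity provides an independent estimate of the same quantity.
# A stationary patch yields matching estimates; a patch moving relative to
# the scene adds retinal motion that parallax cannot explain, producing a
# measurable conflict between the two cues.

def depth_from_parallax(retinal_vel_deg_s, pursuit_vel_deg_s):
    """Relative depth from the motion/pursuit ratio (dimensionless)."""
    return retinal_vel_deg_s / pursuit_vel_deg_s

def depth_from_disparity(disparity_deg, iod_m=0.065, fixation_dist_m=1.0):
    """Relative depth from horizontal disparity (small-angle geometry)."""
    return np.deg2rad(disparity_deg) * fixation_dist_m / iod_m

def flags_object_motion(retinal_vel, pursuit_vel, disparity, threshold=0.2):
    """Flag a patch as a moving object when the two depth cues disagree."""
    conflict = depth_from_parallax(retinal_vel, pursuit_vel) - depth_from_disparity(disparity)
    return abs(conflict) > threshold

# Stationary patch: parallax (0.5 / 2.0 = 0.25) matches the disparity-defined depth.
print(flags_object_motion(retinal_vel=0.5, pursuit_vel=2.0, disparity=0.93))   # False
# Same disparity but extra retinal motion: the cues conflict and the patch is flagged.
print(flags_object_motion(retinal_vel=1.5, pursuit_vel=2.0, disparity=0.93))   # True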


2020 ◽  
Vol 14 (4) ◽  
pp. 1-15
Author(s):  
J. Farley Norman

In contrast to many machine vision systems, human observers can readily recognize solid objects and visually discriminate their 3-D shapes even under changes in viewpoint and variations in object orientation and lighting. While the importance of binocular disparity has been known since the 1830s, the importance and perceptual informativeness of visual contours for object recognition and discrimination is not adequately appreciated. This article reviews the scientific contributions demonstrating that visual contours and their deformations over time (in response to object or observer motion) carry at least as much information about object shape as other forms of visual information.


Symmetry ◽  
2019 ◽  
Vol 11 (10) ◽  
pp. 1264 ◽  
Author(s):  
Tomasz Hachaj

This paper proposes a method for improving human motion classification by applying bagging and symmetry to Principal Component Analysis (PCA)-based features. In contrast to well-known bagging algorithms such as random forest, the proposed method recalculates the motion features for each “weak classifier” rather than randomly sampling from a fixed feature set. The proposed classification method was evaluated on a motion capture dataset of martial arts techniques performed by professional karate athletes, a dataset that is challenging even for a human observer. The dataset consists of 360 recordings in 12 motion classes. Because some of these motion classes may be symmetrical (i.e., performed with a dominant left or right hand/leg), an analysis was conducted to determine whether accounting for symmetry could improve the recognition rate of a classifier. The experimental results show that the proposed bagging procedure increased the recognition rate (RR) of the Nearest-Neighbor (NNg) and Support Vector Machine (SVM) classifiers by more than 5% and 3%, respectively. The RR of one trained classifier (SVM) was higher without symmetry information, whereas the bagged NNg classifier improved when symmetry information was applied. We conclude that symmetry information can be helpful in situations in which the decision borders of the classifier cannot be optimized (for example, when direct information about class labels is unavailable). The experiment presented in this paper shows that, in this case, bagging and mirroring can help find a similar object in the training set that shares the same class label. Both the evaluation dataset and the implementation of the proposed method can be downloaded, so the experiment is easily reproducible.
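As a rough illustration of the bagging-with-recalculated-features idea, the Python fragment below builds an ensemble in which each weak learner refits PCA on its own bootstrap sample and optionally augments that sample with mirrored copies. This is a sketch under an assumed data layout, not the paper's implementation: X is assumed to hold one flattened joint-trajectory vector per recording, y integer class labels, and mirror() is a hypothetical stand-in for the left/right symmetry transform.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def mirror(X):
    # Hypothetical symmetry transform; a real implementation would swap the
    # left/right joint channels and negate the lateral coordinate axis.
    return -X

def fit_bagged_pca_svm(X, y, n_estimators=15, n_components=10,
                       use_symmetry=True, seed=0):
    """Each weak learner recomputes PCA features on its own bootstrap sample."""
    rng = np.random.default_rng(seed)
    models = []
    for _ in range(n_estimators):
        idx = rng.integers(0, len(X), size=len(X))       # bootstrap sample of recordings
        Xb, yb = X[idx], y[idx]
        if use_symmetry:                                  # add mirrored copies of the sample
            Xb = np.vstack([Xb, mirror(Xb)])
            yb = np.concatenate([yb, yb])
        pca = PCA(n_components=n_components).fit(Xb)      # features recalculated per learner
        clf = SVC(kernel="rbf").fit(pca.transform(Xb), yb)
        models.append((pca, clf))
    return models

def predict_bagged(models, X):
    """Majority vote over the ensemble (labels assumed to be small non-negative ints)."""
    votes = np.stack([clf.predict(pca.transform(X)) for pca, clf in models])
    return np.apply_along_axis(lambda v: np.bincount(v.astype(int)).argmax(), 0, votes)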


2018 ◽  
Vol 115 (16) ◽  
pp. 4264-4269 ◽  
Author(s):  
Daria Genzel ◽  
Michael Schutte ◽  
W. Owen Brimijoin ◽  
Paul R. MacNeilage ◽  
Lutz Wiegrebe

Distance is important: From an ecological perspective, knowledge about the distance to either prey or predator is vital. However, the distance of an unknown sound source is particularly difficult to assess, especially in anechoic environments. In vision, changes in perspective resulting from observer motion produce a reliable, consistent, and unambiguous impression of depth known as motion parallax. Here we demonstrate with formal psychophysics that humans can exploit auditory motion parallax, i.e., the change in the dynamic binaural cues elicited by self-motion, to assess the relative depths of two sound sources. Our data show that sensitivity to relative depth is best when subjects move actively; performance deteriorates when subjects are moved by a motion platform or when the sound sources themselves move. This is true even though the dynamic binaural cues elicited by these three types of motion are identical. Our data demonstrate a perceptual strategy to segregate intermittent sound sources in depth and highlight the tight interaction between self-motion and binaural processing that allows assessment of the spatial layout of complex acoustic scenes.
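The geometry behind auditory motion parallax can be written down in a few lines. The sketch below is an idealized illustration, not the study's analysis: it shows how the same lateral head translation produces a larger azimuth change, and hence a faster change in binaural cues, for the nearer of two sources. The distances, step size, and function names are assumptions made for the example.

import numpy as np

def azimuth_shift_deg(source_dist_m, lateral_step_m):
    """Azimuth change of a source roughly straight ahead after a lateral step."""
    return np.degrees(np.arctan2(lateral_step_m, source_dist_m))

def nearer_source(dist_a_m, dist_b_m, lateral_step_m=0.2):
    """The source whose azimuth (and binaural cues) changes faster is nearer."""
    shift_a = azimuth_shift_deg(dist_a_m, lateral_step_m)
    shift_b = azimuth_shift_deg(dist_b_m, lateral_step_m)
    return "A" if shift_a > shift_b else "B"

# A 20 cm lateral step shifts a source at 1 m by about 11.3 deg but a source
# at 2 m by only about 5.7 deg, so the first source is judged nearer.
print(azimuth_shift_deg(1.0, 0.2), azimuth_shift_deg(2.0, 0.2))
print(nearer_source(1.0, 2.0))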


2013 ◽  
Vol 19 (2) ◽  
pp. 171-184 ◽  
Author(s):  
Mark Gould ◽  
Damian R. Poulter ◽  
Shaun Helman ◽  
John P. Wann

2008 ◽  
Vol 11 (10) ◽  
pp. 1223-1230 ◽  
Author(s):  
Thomas Wolbers ◽  
Mary Hegarty ◽  
Christian Büchel ◽  
Jack M Loomis

2003 ◽  
Vol 26 (1) ◽  
pp. 24-25
Author(s):  
James J. Clark

We argue that any theory of color physicalism must include consideration of ecological interactions. Ecological and sensorimotor contingencies resulting from relative surface motion and observer motion give rise to measurable effects on the spectrum of light reflecting from surfaces. These contingencies define invariant manifolds in a sensory-spatial space; these manifolds are the physical underpinning of all subjective color experiences.

