aperture problem
Recently Published Documents

Total documents: 84 (last five years: 9)
H-index: 15 (last five years: 1)

2021 ◽  
Author(s):  
Christian Quaia ◽  
Incheol Kang ◽  
Bruce G Cumming

Direction selective neurons in primary visual cortex (area V1) are affected by the aperture problem, i.e., they are only sensitive to motion orthogonal to their preferred orientation. A solution to this problem first emerges in the middle temporal (MT) area, where a subset of neurons (called pattern cells) combine motion information across multiple orientations and directions, becoming sensitive to pattern motion direction. These cells are expected to play a prominent role in subsequent neural processing, but they are intermixed with cells that behave like V1 cells (component cells), and with others that do not clearly fall into either group. The picture is further complicated by the finding that cells that behave like pattern cells for one type of pattern may behave like component cells for another. We recorded from macaque MT neurons using multi-contact electrodes while presenting both type I and unikinetic plaids, in which the components were 1D noise patterns. We found that the indices that have been used in the past to classify neurons as pattern or component cells work poorly when the properties of the stimulus are not optimized for the cell being recorded, as is always the case with multi-contact arrays. We thus propose alternative measures, which considerably ameliorate the problem and allow us to gain insight into the signals carried by individual MT neurons. We conclude that arranging cells along a component-to-pattern continuum is an oversimplification, and that the signals carried by individual cells only make sense when embodied in larger populations.
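The classification indices this abstract critiques are conventionally the pattern and component partial correlations introduced by Movshon and colleagues. As context, here is a minimal sketch of that baseline computation; the von Mises tuning curve, the ±60° plaid geometry, and the synthetic "pattern cell" are illustrative assumptions, not data or methods from this paper:

```python
import numpy as np

def partial_corr(r_a, r_b, r_ab):
    # correlation with model A after partialling out model B
    return (r_a - r_b * r_ab) / np.sqrt((1 - r_b**2) * (1 - r_ab**2))

def corr(x, y):
    return np.corrcoef(x, y)[0, 1]

# illustrative direction tuning: von Mises-like curve peaked at 0 deg
dirs = np.deg2rad(np.arange(0, 360, 30))
tuning = lambda d: np.exp(2.0 * (np.cos(d) - 1.0))

# component prediction: sum of responses to the two components (here +/-60 deg)
component_pred = tuning(dirs - np.deg2rad(60)) + tuning(dirs + np.deg2rad(60))
# pattern prediction: respond to the plaid as to a grating in the same direction
pattern_pred = tuning(dirs)

# a synthetic "pattern cell": its plaid tuning follows the pattern prediction
plaid_resp = pattern_pred + 0.01 * np.cos(3 * dirs)  # small perturbation

r_p = corr(plaid_resp, pattern_pred)
r_c = corr(plaid_resp, component_pred)
r_pc = corr(pattern_pred, component_pred)

R_p = partial_corr(r_p, r_c, r_pc)  # pattern partial correlation (high here)
R_c = partial_corr(r_c, r_p, r_pc)  # component partial correlation (near zero)
```

The abstract's point is that when the stimulus is not matched to the cell's preferences, such indices misclassify neurons; the sketch only shows the standard quantities being replaced.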


2021 ◽  
Vol 150 (4) ◽  
pp. A303-A303
Author(s):  
Daniel J. Tollin ◽  
Matthew J. Goupell ◽  
G. Christopher Stecker

Author(s):  
Stephen Grossberg

This chapter explains why and how tracking of objects moving relative to an observer, and visual optic flow navigation of an observer relative to the world, are controlled by complementary cortical streams through MT--MSTv and MT+-MSTd, respectively. Target tracking uses subtractive processing of visual signals to extract an object’s bounding contours as they move relative to a background. Navigation by optic flow uses additive processing of an entire scene to derive properties such as an observer’s heading, or self-motion direction, as the observer moves through the scene. The chapter explains how the aperture problem for computing heading in natural scenes is solved in MT+-MSTd using a hierarchy of processing stages that is homologous to the one that solves the aperture problem for computing motion direction in MT--MSTv. Both streams use feedback that obeys the ART Matching Rule to select final perceptual representations and choices. Compensating for eye movements using corollary discharge (efference copy) signals enables an accurate heading direction to be computed. Neurophysiological data about heading direction are quantitatively simulated. Log polar processing by the cortical magnification factor simplifies computation of motion direction, and this space-variant processing is maximally position invariant due to the cortical choice of network parameters. The chapter also explains how smooth pursuit occurs and how it is maintained during accurate tracking. Goal approach and obstacle avoidance are explained by attractor-repeller networks. Gaussian peak shifts control steering toward a goal, and also account for peak shift and behavioral contrast during operant conditioning and for vector decomposition during the relative motion of object parts.
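The attractor-repeller steering idea summarized in the final sentences can be illustrated with a toy one-dimensional field over heading direction: a Gaussian attractor centred on the goal, a Gaussian repeller centred on the obstacle, and the chosen heading read off as the peak of their difference, which shifts away from the obstacle. This is a generic sketch of the idea, not the chapter's model; widths, weight, and angles are illustrative assumptions:

```python
import numpy as np

def steering_field(headings, goal, obstacle, sigma_g=30.0, sigma_o=15.0, w_o=0.8):
    # circular angular distances (degrees) to goal and obstacle
    d_goal = np.minimum(np.abs(headings - goal), 360 - np.abs(headings - goal))
    d_obs = np.minimum(np.abs(headings - obstacle), 360 - np.abs(headings - obstacle))
    # attractor bump minus repeller dip
    return (np.exp(-d_goal**2 / (2 * sigma_g**2))
            - w_o * np.exp(-d_obs**2 / (2 * sigma_o**2)))

headings = np.arange(0, 360)
field = steering_field(headings, goal=0, obstacle=10)
chosen = headings[np.argmax(field)]  # peak shifts away from the obstacle side
```

With the goal straight ahead (0°) and an obstacle at 10°, the field's peak lands well to the other side of the goal rather than at 0°, a simple instance of the Gaussian peak shift mentioned above.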


2020 ◽  
Vol 28 (23) ◽  
pp. 34677
Author(s):  
André F. Müller ◽  
Claas Falldorf ◽  
Marcel Lotzgeselle ◽  
Gerd Ehret ◽  
Ralf B. Bergmann

2020 ◽  
Vol 12 (15) ◽  
pp. 2390 ◽  
Author(s):  
Fan Shi ◽  
Fang Qiu ◽  
Xiao Li ◽  
Yunwei Tang ◽  
Ruofei Zhong ◽  
...  

In recent years, satellites capable of capturing videos have been developed and launched to provide high-definition satellite videos that enable applications far beyond the capabilities of remotely sensed imagery. Moving object detection and moving object tracking are among the most essential and challenging tasks, but existing studies have mainly focused on vehicles. To accurately detect and then track more complex moving objects, specifically airplanes, we need to address the challenges posed by the new data. First, slow-moving airplanes may cause the foreground aperture problem during detection. Second, various disturbances, especially parallax motion, may cause false detection. Third, airplanes may perform complex motions, which requires a rotation-invariant and scale-invariant tracking algorithm. To tackle these difficulties, we first develop an Improved Gaussian-based Background Subtractor (IPGBBS) algorithm for moving airplane detection. This algorithm adopts a novel strategy for background and foreground adaptation, which can effectively deal with the foreground aperture problem. Then, the detected moving airplanes are tracked by a Primary Scale Invariant Feature Transform (P-SIFT) keypoint matching algorithm. The P-SIFT keypoint of an airplane exhibits high distinctiveness and repeatability. More importantly, it provides a highly rotation-invariant and scale-invariant feature vector that can be used in the matching process to determine the new locations of the airplane in the frame sequence. The method was tested on a satellite video with eight moving airplanes. Compared with state-of-the-art algorithms, our IPGBBS algorithm achieved the best detection accuracy, with the highest F1 score of 0.94, and also demonstrated its superiority in parallax motion suppression. The P-SIFT keypoint matching algorithm could successfully track seven out of the eight airplanes. Based on the tracking results, movement trajectories of the airplanes and their dynamic properties were also estimated.
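IPGBBS itself is not reproduced in this abstract. As background, here is a minimal sketch of the per-pixel Gaussian background model such subtractors build on, including the selective-update idea that keeps a slow-moving object from being absorbed into the background (the foreground aperture problem). All parameters and the synthetic scene are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def gaussian_bg_step(frame, mean, var, alpha=0.05, k=2.5):
    # foreground: pixels deviating more than k standard deviations from the model
    fg = np.abs(frame - mean) > k * np.sqrt(var)
    # selective adaptation: only background pixels update the model, so a
    # slow mover is not gradually absorbed into the background
    bg = ~fg
    mean[bg] = (1 - alpha) * mean[bg] + alpha * frame[bg]
    var[bg] = (1 - alpha) * var[bg] + alpha * (frame[bg] - mean[bg]) ** 2
    return fg

# synthetic scene: flat tarmac (grey level 10) with a bright 5x5 "airplane"
H, W = 40, 40
mean = np.full((H, W), 10.0)
var = np.full((H, W), 4.0)
frame = np.full((H, W), 10.0)
frame[18:23, 10:15] = 200.0

fg = gaussian_bg_step(frame, mean, var)  # mask marks exactly the bright patch
```

The paper's contribution is a refined adaptation strategy on top of this kind of model, plus parallax-motion suppression; the sketch only fixes the vocabulary.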


Vision ◽  
2019 ◽  
Vol 3 (4) ◽  
pp. 64
Author(s):  
Martin Lages ◽  
Suzanne Heron

Like many predators, humans have forward-facing eyes that are set a short distance apart, so that an extensive region of the visual field is seen from two different points of view. The human visual system can establish a three-dimensional (3D) percept from the projection of images into the left and right eye. How the visual system integrates local motion and binocular depth in order to accomplish 3D motion perception is still under investigation. Here, we propose a geometric-statistical model that combines noisy velocity constraints with a spherical motion prior in order to solve the aperture problem in 3D. In two psychophysical experiments, we show that instantiations of this model can explain how human observers disambiguate 3D line motion direction behind a circular aperture. We discuss the implications of our results for the processing of motion and dynamic depth in the visual system.
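The "noisy velocity constraints plus motion prior" idea can be sketched in the familiar 2D setting: an edge seen through an aperture only constrains the velocity component along its normal, and a zero-mean Gaussian prior favouring slow motion regularizes the otherwise under-determined estimate into a MAP solution. This is a generic 2D sketch, not the paper's 3D spherical-prior model; all parameters are illustrative:

```python
import numpy as np

def recover_velocity(normals, speeds, sigma_prior=10.0, sigma_noise=0.1):
    # MAP estimate under Gaussian noise on each measured normal speed
    # and a zero-mean Gaussian prior on velocity (ridge-regularized least squares)
    N = np.asarray(normals)            # (k, 2) unit normals of the moving edges
    s = np.asarray(speeds)             # (k,) measured speeds along those normals
    lam = (sigma_noise / sigma_prior) ** 2
    A = N.T @ N + lam * np.eye(2)
    return np.linalg.solve(A, N.T @ s)

# edges at three orientations, all carried by the same true velocity
true_v = np.array([3.0, 1.0])
angles = np.deg2rad([0, 45, 90])
normals = np.stack([np.cos(angles), np.sin(angles)], axis=1)
speeds = normals @ true_v              # each aperture reports only v . n

v_hat = recover_velocity(normals, speeds)  # close to true_v for weak priors
```

With a single constraint the prior alone picks the slowest compatible velocity (the normal velocity); with several constraints the estimate converges to the intersection-of-constraints solution, which is the structure the paper generalizes to 3D.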


2018 ◽  
Vol 2018 ◽  
pp. 1-13 ◽  
Author(s):  
M. Sandoval-Hernandez ◽  
H. Vazquez-Leal ◽  
L. Hernandez-Martinez ◽  
U. A. Filobello-Nino ◽  
V. M. Jimenez-Fernandez ◽  
...  

This article introduces two approximations that allow the evaluation of Fresnel integrals without the need for numerical algorithms. These expressions are continuous over the same interval as the Fresnel integrals themselves. Both expressions were determined by applying the least squares method to suitable expressions. The accuracy of the equations improves as x increases; for small values of x, an absolute error of less than 8×10⁻⁵ is achievable. To demonstrate the efficiency of the equations, two case studies are presented, both from the field of optics. The first concerns the semi-infinite opaque screen in Fresnel diffraction. In this case study, the Fresnel integrals are evaluated with the proposed equations to calculate the irradiance distribution and the Cornu spiral for Fresnel diffraction computations; the obtained results show good accuracy. The second case study concerns the double aperture problem in Fresnel diffraction.
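The Fresnel integrals in question are C(x) = ∫₀ˣ cos(πt²/2) dt and S(x) = ∫₀ˣ sin(πt²/2) dt. The paper's closed-form approximations are not reproduced here, but a simple quadrature reference (an illustrative sketch, not the authors' method) shows the quantities being approximated and the classical straight-edge irradiance formula used in the first case study:

```python
import numpy as np

def fresnel_cs(x, n=4001):
    # C(x) = int_0^x cos(pi t^2 / 2) dt,  S(x) = int_0^x sin(pi t^2 / 2) dt
    t = np.linspace(0.0, x, n)
    dt = x / (n - 1)
    w = np.full(n, dt)
    w[0] = w[-1] = dt / 2.0            # trapezoidal-rule weights
    phase = np.pi * t**2 / 2.0
    return np.sum(np.cos(phase) * w), np.sum(np.sin(phase) * w)

def edge_irradiance(v):
    # semi-infinite opaque screen (straight edge) in Fresnel diffraction:
    # I / I0 = 0.5 * [(0.5 + C(v))^2 + (0.5 + S(v))^2]
    C, S = fresnel_cs(v)
    return 0.5 * ((0.5 + C) ** 2 + (0.5 + S) ** 2)

C1, S1 = fresnel_cs(1.0)   # reference values: C(1) ~ 0.7799, S(1) ~ 0.4383
I_edge = edge_irradiance(0.0)  # at the geometric shadow edge, I = I0 / 4
```

Plotting S(x) against C(x) for increasing x traces the Cornu spiral used for the diffraction computations; the paper's approximations replace the quadrature above with closed-form expressions.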

