The integration of local chromatic motion signals is sensitive to contrast polarity

2011, Vol. 28 (3), pp. 239–246
Author(s):
Sophie M. Wuerger
Alexa Ruppertsberg
Stephanie Malek
Marco Bertamini
Jasna Martinovic

Global motion integration mechanisms can utilize signals defined by purely chromatic information. Is global motion integration sensitive to the polarity of such color signals? To answer this question, we employed isoluminant random dot kinematograms (RDKs) that contain a single chromatic contrast polarity or two different polarities. Single-polarity RDKs consisted of local motion signals with either a positive or a negative S or L–M component, while in the different-polarity RDKs, half the dots had a positive S or L–M component, and the other half had a negative S or L–M component. In all RDKs, the polarity and the motion direction of the local signals were uncorrelated. Observers discriminated between 50% coherent motion and random motion, and contrast thresholds were obtained for 81% correct responses at three different dot densities (50, 100, and 200 dots). We report two main findings: (1) the dependence on dot density is similar for both contrast polarities (+S vs. −S, +LM vs. −LM) but slightly steeper for S in comparison to LM, and (2) thresholds for different-polarity RDKs are significantly higher than for single-polarity RDKs, which is inconsistent with a polarity-blind integration mechanism. We conclude that early motion integration mechanisms are sensitive to the polarity of the local motion signals and do not automatically integrate information across different polarities.
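As a rough illustration of the stimulus logic just described, the following Python/NumPy sketch updates an RDK in which a given fraction of dots moves coherently while chromatic polarity (+S or −S) is assigned independently of motion direction. All names and parameter values are illustrative, not the published stimulus parameters.

```python
# Illustrative sketch only: one frame-to-frame update of a random dot kinematogram (RDK)
# in which chromatic polarity is uncorrelated with motion direction.
import numpy as np

rng = np.random.default_rng(0)

def rdk_step(xy, coherence=0.5, direction_deg=0.0, step=0.05, field=1.0):
    """Move a fraction `coherence` of dots in `direction_deg`; the rest move randomly."""
    n = xy.shape[0]
    coherent = rng.random(n) < coherence
    angles = np.where(coherent,
                      np.deg2rad(direction_deg),
                      rng.uniform(0.0, 2.0 * np.pi, n))
    xy = xy + step * np.column_stack([np.cos(angles), np.sin(angles)])
    return np.mod(xy, field)                       # wrap dots at the field edges

n_dots = 100                                       # one of the three dot densities (50, 100, 200)
positions = rng.uniform(0.0, 1.0, (n_dots, 2))
# "Different-polarity" RDK: half the dots carry +S contrast, half -S,
# with polarity assigned independently of each dot's motion direction.
polarity = rng.permutation(np.r_[np.ones(n_dots // 2), -np.ones(n_dots // 2)])
positions = rdk_step(positions, coherence=0.5)     # 50% coherent motion vs. random
```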

2007, Vol. 24 (1), pp. 1–8
Author(s):
Alexa I. Ruppertsberg
Sophie M. Wuerger
Marco Bertamini

There is now common consensus that color-defined motion can be perceived by the human visual system. For global motion integration tasks based on isoluminant random dot kinematograms, however, the evidence is conflicting as to whether observers can (Ruppertsberg et al., 2003) or cannot (Bilodeau & Faubert, 1999) extract a common motion direction from stimuli modulated along the isoluminant red–green axis. Here we report conditions in which S-cones contribute to chromatic global motion processing. When the display included extra-foveal regions, the individual elements were large (∼0.3°), and the displacement was large (∼1°), stimuli modulated along the yellowish–violet axis proved effective in a global motion task. The color contrast thresholds for detection were well below the contrasts required for global motion integration for both color axes, and therefore the discrimination-to-detection ratio was >1. We conclude that there is significant S-cone input to chromatic global motion processing and that the extraction of global motion is not mediated by the same mechanism as simple detection. Whether the koniocellular or the magnocellular pathway is involved in transmitting S-cone signals is a topic of current debate (Chatterjee & Callaway, 2002).
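As a quick illustration of the discrimination-to-detection ratio mentioned above, the sketch below uses made-up contrast values (not the published thresholds) to show how the ratio is formed.

```python
# Hypothetical cone-contrast thresholds, for illustration only.
detection_threshold = 0.004       # contrast needed merely to detect the dots
global_motion_threshold = 0.012   # contrast needed to extract the global motion direction

ratio = global_motion_threshold / detection_threshold
print(f"discrimination-to-detection ratio = {ratio:.1f}")   # > 1, as reported above
```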


2020, Vol. 38 (5), pp. 395–405
Author(s):
Luca Battaglini
Federica Mena
Clara Casco

Background: To study motion perception, a stimulus consisting of a field of small moving dots is often used. Generally, some of the dots move coherently in the same direction (signal) while the rest move randomly (noise). A percept of global coherent motion (CM) results when many different local motion signals are combined. CM computation is a complex process that requires the integrity of the middle temporal area (MT/V5), and there is evidence that increasing the number of dots presented in the stimulus makes this computation more efficient. Objective: In this study, we explored whether anodal transcranial direct current stimulation (tDCS) over MT/V5 would improve individual performance in a CM task at a low signal-to-noise ratio (SNR, i.e. a low percentage of coherent dots) when the target consists of a large number of moving dots (high dot numerosity, e.g. >250 dots) rather than few dots (low dot numerosity, <60 dots), which would indicate that tDCS favours the integration of local motion signals into a single global percept (global motion). Method: Participants performed a CM detection task (two-interval forced choice, 2IFC) while receiving anodal, cathodal, or sham stimulation on three different days. Results: Cathodal tDCS had no effect relative to the sham condition. Anodal tDCS, in contrast, improved performance, but mostly when dot numerosity was high (>400 dots), that is, under conditions that promote efficient global motion processing. Conclusions: The present study suggests that tDCS may be used under appropriate stimulus conditions (low SNR and high dot numerosity) to boost the efficiency of global motion processing, and may be useful for strengthening clinical protocols that treat visual deficits.
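A minimal sketch of the 2IFC trial structure described in the Method, with illustrative parameter values (this is not the authors' experimental code): one interval contains dots at the tested coherence (the SNR), the other contains 0% coherence, and the observer reports which interval carried the signal.

```python
# Illustrative 2IFC coherent-motion detection trial; parameters are not the published ones.
import numpy as np

rng = np.random.default_rng(1)

def dot_directions(n_dots, coherence, direction_deg=90.0):
    """Per-dot motion directions (radians) for one interval of the trial."""
    coherent = rng.random(n_dots) < coherence
    return np.where(coherent, np.deg2rad(direction_deg),
                    rng.uniform(0.0, 2.0 * np.pi, n_dots))

n_dots, snr = 400, 0.10                  # high dot numerosity, low signal-to-noise ratio
signal_first = rng.random() < 0.5        # randomize which interval carries the signal
interval_1 = dot_directions(n_dots, snr if signal_first else 0.0)
interval_2 = dot_directions(n_dots, 0.0 if signal_first else snr)
# The observer's task: report whether interval 1 or interval 2 contained the coherent motion.
```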


2019, Vol. 6 (3), 190114
Author(s):
William Curran
Lee Beattie
Delfina Bilello
Laura A. Coulter
Jade A. Currie
...

Prior experience influences visual perception. For example, extended viewing of a moving stimulus results in the misperception of a subsequent stimulus's motion direction—the direction after-effect (DAE). There has been an ongoing debate regarding the locus of the neural mechanisms underlying the DAE. We know the mechanisms are cortical, but there is uncertainty about where in the visual cortex they are located—at relatively early local motion processing stages, or at later global motion stages. We used a unikinetic plaid as an adapting stimulus, then measured the DAE experienced with a drifting random dot test stimulus. A unikinetic plaid comprises a static grating superimposed on a drifting grating of a different orientation. Observers cannot see the true motion direction of the moving component; instead they see pattern motion running parallel to the static component. The pattern motion of unikinetic plaids is encoded at the global processing level—specifically, in cortical areas MT and MST—and the local motion component is encoded earlier. We measured the direction after-effect as a function of the plaid's local and pattern motion directions. The DAE was induced by the plaid's pattern motion, but not by its component motion. This points to the neural mechanisms underlying the DAE being located at the global motion processing level, and no earlier than area MT.
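A minimal sketch of how a unikinetic plaid of the kind described above can be constructed: a static grating is summed with a drifting grating of a different orientation, so only one component moves while the perceived pattern motion runs parallel to the static component. Spatial frequency, orientations, and speed below are illustrative choices, not the published stimulus values.

```python
# Illustrative unikinetic plaid: static grating + drifting grating at another orientation.
import numpy as np

size, sf, speed = 256, 4.0, 1.0                  # image size (px), cycles/image, cycles/s
y, x = np.mgrid[0:size, 0:size] / size

def grating(orientation_deg, phase):
    theta = np.deg2rad(orientation_deg)
    return np.sin(2.0 * np.pi * sf * (x * np.cos(theta) + y * np.sin(theta)) + phase)

def plaid_frame(t, static_ori=0.0, moving_ori=70.0):
    static = grating(static_ori, 0.0)                        # component that never moves
    moving = grating(moving_ori, 2.0 * np.pi * speed * t)    # component that drifts over time
    return 0.5 * (static + moving)                           # superimpose the two gratings

frames = [plaid_frame(t / 60.0) for t in range(60)]          # one second at 60 frames/s
```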


2020
Author(s):
Zhiyan Wang
Masako Tamaki
Kazuhisa Shibata
Michael S. Worden
Takashi Yamada
...

While numerous studies have shown that visual perceptual learning (VPL) occurs as a result of exposure to a visual feature in a task-irrelevant manner, the underlying neural mechanism is poorly understood. In a previous psychophysical study, subjects were repeatedly exposed to a task-irrelevant global motion display that induced the perception of not only the local motions but also a global motion moving in the direction of the spatiotemporal average of the local motion vectors. As a result, subjects enhanced their sensitivity only to the local motion directions, suggesting that the early visual areas (V1/V2) that process local motions are involved in task-irrelevant VPL. However, this hypothesis has never been tested by directly examining the involvement of early visual areas (V1/V2). Here, we employed a decoded neurofeedback technique (DecNef) using functional magnetic resonance imaging. During the DecNef training, subjects were trained to induce activity patterns in V1/V2 similar to those evoked by the actual presentation of the global motion display. The DecNef training was conducted with neither the actual presentation of the display nor the subjects' awareness of the purpose of the experiment. As a result, subjects increased their sensitivity to the local motion directions but not specifically to the global motion direction. The training effect was strictly confined to V1/V2. Moreover, subjects reported that they neither perceived nor imagined any motion during the DecNef training. Together, these results suggest that V1/V2 are sufficient for exposure-based task-irrelevant VPL to occur unconsciously.

Significance Statement: While numerous studies have shown that visual perceptual learning (VPL) occurs as a result of exposure to a visual feature in a task-irrelevant manner, the underlying neural mechanism is poorly understood. Previous psychophysical experiments suggest that early visual areas (V1/V2) are involved in task-irrelevant VPL. However, this hypothesis has never been tested by directly examining the involvement of early visual areas (V1/V2). Here, using decoded fMRI neurofeedback, activity patterns similar to those evoked by the presentation of a complex motion display were repeatedly induced only in early visual areas. The training sensitized only the local motion directions and not the global motion direction, suggesting that V1/V2 are involved in task-irrelevant VPL.
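A toy sketch of the feedback computation at the heart of decoded neurofeedback (DecNef) as described above: the similarity of the currently measured multi-voxel pattern to the target pattern is converted into a score shown to the participant. The correlation-based similarity measure and the array sizes below are illustrative assumptions, not the actual decoder or pipeline used in the study.

```python
# Toy DecNef-style feedback score; not the study's actual decoder.
import numpy as np

rng = np.random.default_rng(2)
n_voxels = 500
target_pattern = rng.standard_normal(n_voxels)    # stand-in for the V1/V2 pattern evoked by the display
current_pattern = rng.standard_normal(n_voxels)   # stand-in for the pattern measured on this trial

similarity = np.corrcoef(target_pattern, current_pattern)[0, 1]
feedback_score = (similarity + 1.0) / 2.0         # map [-1, 1] onto [0, 1] for display to the subject
```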


i-Perception, 2020, Vol. 11 (5), 204166952096110
Author(s):
Chien-Chung Chen
Hiroshi Ashida
Xirui Yang
Pei-Yin Chen

In a stimulus with multiple moving elements, an observer may perceive that the whole stimulus moves in unison if (a) one can associate an element in one frame with one in the next (correspondence) and (b) a sufficient proportion of correspondences signal a similar motion direction (coherence). We tested the necessity of these two conditions by asking the participants to rate the perceived intensity of linear, concentric, and radial motions for three types of stimuli: (a) random walk motion, in which the direction of each dot was randomly determined for each frame, (b) random image sequence, which was a set of uncorrelated random dot images presented in sequence, and (c) global motion, in which 35% of dots moved coherently. The participants perceived global motion not only in the global motion conditions but also in the random image sequences, though not in the random walk motion. The type of perceived motion in the random image sequences depended on the spatial context of the stimuli. Thus, although there is neither a fixed correspondence across different frames nor a coherent motion direction, observers can still perceive global motion in the random image sequence. This result cannot be explained by motion energy or local aperture border effects.
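A minimal sketch of the three dot-field types compared above, with illustrative parameter values rather than the paper's stimulus code.

```python
# Illustrative generators for the three stimulus types: random walk, random image
# sequence, and 35%-coherent global motion.
import numpy as np

rng = np.random.default_rng(3)

def random_walk_step(xy, step=0.02):
    """Each dot steps in an independently re-drawn random direction every frame."""
    a = rng.uniform(0.0, 2.0 * np.pi, xy.shape[0])
    return np.mod(xy + step * np.column_stack([np.cos(a), np.sin(a)]), 1.0)

def random_image(n_dots):
    """Uncorrelated random-dot image: no frame-to-frame dot correspondence at all."""
    return rng.uniform(0.0, 1.0, (n_dots, 2))

def global_motion_step(xy, coherence=0.35, direction_rad=0.0, step=0.02):
    """35% of dots share one direction; the rest move in random directions."""
    n = xy.shape[0]
    coherent = rng.random(n) < coherence
    a = np.where(coherent, direction_rad, rng.uniform(0.0, 2.0 * np.pi, n))
    return np.mod(xy + step * np.column_stack([np.cos(a), np.sin(a)]), 1.0)
```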


2004, Vol. 63 (3), pp. 173–182
Author(s):
Nobuko Takahashi

The present study examined the effect of the spatial configuration of local signals on motion integration across space. Perceived coherency was measured for different configurations of apertures and combinations of motion directions. The results showed the following. (1) Motion integration across separate apertures is affected by the spatial configuration of the apertures: perceived coherency was highest when the apertures were arranged symmetrically with respect to the coherent direction. (2) Even when the spatial configuration of the apertures is the same, the assignment of each local motion to each aperture has an effect, and converging local motions are integrated more readily than diverging local motions. (3) There is a limit to the direction difference between local motions that can be integrated. These results suggest that the spatial structure of the global motion behind the apertures has a considerable effect on the integration of the local motions within them.


2009, Vol. 26 (2), pp. 237–248
Author(s):
Jasna Martinovic
Georg Meyer
Matthias M. Müller
Sophie M. Wuerger

The purpose of this study was to test whether color–motion correlations carried by a pure color difference (S-cone component only) can be used to improve global motion extraction. We also examined the neural markers of color–motion correlation processing in event-related potentials. Color and motion information was dissociated using a two-colored random dot kinematogram, wherein coherent motion and motion noise differed from each other only in their S-cone component, with spatial and temporal parameters set so that global motion processing relied solely on a constant L–M component. Hence, when color and the local motion direction are correlated, more efficient segregation of coherent motion can only be brought about by the S-cone difference, and crucially, this S-cone component does not provide any effective input to a global motion mechanism but only changes the color appearance of the moving dots. The color contrasts (vector length in the S vs. L–M plane) of both the dots carrying coherent motion and the dots moving randomly were fixed at motion discrimination threshold to ensure equal effectiveness for motion extraction. In the behavioral experiment, participants were asked to discriminate between coherent and random motion, and d′ was determined for three different conditions: uncorrelated, uncued correlated, and cued correlated. In the electroencephalographic experiment, participants discriminated direction of motion for uncued correlated and cued correlated conditions. Color–motion correlations were found to improve performance. Cueing a specific color also modulated the N1 component of the event-related potential, with sources in the middle temporal (MT) visual area. We conclude that S-cone signals “invisible” to the motion system can influence the analysis by direction-selective motion mechanisms through grouping of local motion signals by color. This grouping mechanism must precede motion processing and is likely to be under attentional control.
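For reference, a minimal sketch of how d′ is computed for a coherent-versus-random discrimination of this kind; the trial counts are made-up numbers, not the study's data.

```python
# Illustrative d-prime calculation from hit and false-alarm rates.
from scipy.stats import norm

hits, misses = 78, 22                         # "coherent" responses on coherent-motion trials
false_alarms, correct_rejections = 30, 70     # "coherent" responses on random-motion trials

hit_rate = hits / (hits + misses)
fa_rate = false_alarms / (false_alarms + correct_rejections)
d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)   # z(hit rate) - z(false-alarm rate)
print(f"d' = {d_prime:.2f}")
```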


Perception, 1997, Vol. 26 (1 Suppl.), p. 84
Author(s):
W. H. A. Beaudot

A neuromorphic model of the retino-cortical motion processing stream is proposed which incorporates both feedforward and feedback mechanisms. The feedforward stream consists of motion integration from the retina to the MT area. Retinal spatiotemporal filtering provides X-like and Y-like visual inputs with band-pass characteristics to the V1 area (Beaudot, 1996, Perception 25, Supplement, 30–31). V1 direction-selective cells respond to local motion resulting from nonlinear interactions between retinal inputs. MT direction-selective cells respond to global motion resulting from spatial convergence and temporal integration of V1 signals. This feedforward stream provides a fine representation of local motion in V1 and a coarse representation of global motion in MT. However, it is unable to deal with the aperture problem. Solving this problem requires the addition of local constraints related to both smoothness and discontinuity of coherent motion, as well as some minimisation techniques to obtain the optimal solution. We propose a plausible neural substrate for this computation by incorporating excitatory intracortical feedback in V1 and its modulation by reciprocal connections from MT. The underlying enhancement or depression of V1 responses according to the strength of MT responses reflects changes in the spatiotemporal properties of the V1 receptive fields. This mechanism induces a dynamic competition between local and global motion representations in V1. On convergence of these dynamics, responses of V1 direction-selective cells provide a fine representation of ‘true’ motion, thus solving the aperture problem and allowing figure–ground segregation based on coherent motion. The model is compatible with recent anatomical, physiological, and psychophysical evidence [Bullier et al., 1996, Journal de Physiologie (Paris) 90, 217–220].
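A highly simplified sketch of the feedforward pooling stage described above, with the MT→V1 feedback reduced to a single multiplicative gain. The array shapes, constants, and update rule are illustrative assumptions, not the model's actual equations.

```python
# Toy V1 -> MT convergence with a crude MT -> V1 feedback gain; illustration only.
import numpy as np

rng = np.random.default_rng(4)
n_frames, grid, n_dirs = 20, 16, 8
v1 = rng.random((n_frames, grid, grid, n_dirs))   # stand-in local direction-selective responses

tau = 0.8                                         # leak of the temporal integrator
mt = np.zeros(n_dirs)                             # coarse global-motion representation in MT
for t in range(n_frames):
    pooled = v1[t].mean(axis=(0, 1))              # spatial convergence of V1 signals onto MT
    mt = tau * mt + (1.0 - tau) * pooled          # temporal integration of the pooled signal
    gain = 1.0 + mt / (mt.sum() + 1e-9)           # feedback: stronger MT responses raise the gain
    if t + 1 < n_frames:
        v1[t + 1] *= gain                         # modulate subsequent V1 responses per direction
```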


2019, Vol. 121 (5), pp. 1787–1797
Author(s):
David Souto
Jayesha Chudasama
Dirk Kerzel
Alan Johnston

Smooth pursuit eye movements (pursuit) are used to minimize the retinal motion of moving objects. During pursuit, the pattern of motion on the retina carries not only information about the object movement but also reafferent information about the eye movement itself. The latter arises from the retinal flow of the stationary world in the direction opposite to the eye movement. To extract the global direction of motion of the tracked object and stationary world, the visual system needs to integrate ambiguous local motion measurements (i.e., the aperture problem). Unlike the tracked object, the stationary world’s global motion is entirely determined by the eye movement and thus can be approximately derived from motor commands sent to the eye (i.e., from an efference copy). Because retinal motion opposite to the eye movement is dominant during pursuit, different motion integration mechanisms might be used for retinal motion in the same direction and opposite to pursuit. To investigate motion integration during pursuit, we tested direction discrimination of a brief change in global object motion. The global motion stimulus was a circular array of small static apertures within which one-dimensional gratings moved. We found increased coherence thresholds and a qualitatively different reflexive ocular tracking for global motion opposite to pursuit. Both effects suggest reduced sampling of motion opposite to pursuit, which results in an impaired ability to extract coherence in motion signals in the reafferent direction. We suggest that anisotropic motion integration is an adaptation to asymmetric retinal motion patterns experienced during pursuit eye movements.

NEW & NOTEWORTHY: This study provides a new understanding of how the visual system achieves coherent perception of an object’s motion while the eyes themselves are moving. The visual system integrates local motion measurements to create a coherent percept of object motion. An analysis of perceptual judgments and reflexive eye movements to a brief change in an object’s global motion confirms that the visual and oculomotor systems pick fewer samples to extract global motion opposite to the eye movement.


2015, Vol. 115, pp. 83–91
Author(s):
Arijit Chakraborty
Nicola S. Anstice
Robert J. Jacobs
Nabin Paudel
Linda L. LaGasse
...
