Visual motion assists in social cognition

2020 ◽  
Vol 117 (50) ◽  
pp. 32165-32168
Author(s):  
Arvid Guterstam ◽  
Michael S. A. Graziano

Recent evidence suggests a link between visual motion processing and social cognition. When person A watches person B, the brain of A apparently generates a fictitious, subthreshold motion signal streaming from B to the object of B’s attention. These previous studies, being correlative, were unable to establish any functional role for the false motion signals. Here, we directly tested whether subthreshold motion processing plays a role in judging the attention of others. We asked: if we contaminate people’s visual input with a subthreshold motion signal streaming from an agent to an object, can we manipulate people’s judgments about that agent’s attention? Participants viewed a display including faces, objects, and a subthreshold motion signal hidden in the background. Participants’ judgments of the attentional state of the faces were significantly altered by the hidden motion signal. Faces from which subthreshold motion was streaming toward an object were judged as paying more attention to the object. Control experiments showed that the effect was specific to the agent-to-object motion direction and to judging attention, not action or spatial orientation. These results suggest that when the brain models other minds, it uses a subthreshold motion signal, streaming from an individual to an object, to help represent attentional state. This type of social-cognitive model, tapping perceptual mechanisms that evolved to process physical events in the real world, may help to explain the extraordinary cultural persistence of beliefs that mental processes have physical manifestations. These findings, therefore, may have larger implications for human psychology and cultural belief.

i-Perception ◽  
2020 ◽  
Vol 11 (3) ◽  
pp. 204166952093732
Author(s):  
Masahiko Terao ◽  
Shin’ya Nishida

Many studies have investigated the effects of smooth pursuit on visual motion processing, especially effects related to the additional retinal shifts produced by eye movement. In this article, we show that the perception of apparent motion during smooth pursuit is determined not only by interelement proximity in retinal coordinates but also by proximity in objective world coordinates. In Experiment 1, we investigated the perceived direction of two-frame apparent motion of a square-wave grating with various displacement sizes under fixation and pursuit viewing conditions. The retinal and objective displacements between the two frames agreed with each other under the fixation condition; under the pursuit condition, however, they differed by a phase shift of 180 degrees. The proportions of reported motion direction in the two viewing conditions did not coincide when plotted as a function of either the retinal or the objective displacement alone; they did coincide when plotted as a function of a mixture of the two. Experiment 2 showed that the perceived jump size of the apparent motion also depended on both retinal and objective displacements. Our findings suggest that the detection of apparent motion during smooth pursuit takes both retinal and objective proximity into account. This mechanism may assist with the selection of the motion path that is more likely to have occurred in the real world and may therefore be useful for ensuring perceptual stability during smooth pursuit.
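The mixture account lends itself to a toy illustration. The sketch below is ours, not the authors’ analysis code; the mixing weight w and the candidate-path scoring are illustrative assumptions. For a periodic grating, a two-frame phase shift is ambiguous between two jump paths, and the model picks the path with the smaller weighted retinal-plus-objective displacement.

```python
import numpy as np

def predicted_direction(phase_shift_deg, eye_shift_deg=180.0, w=0.5):
    """Toy mixture model of apparent-motion direction during pursuit.

    phase_shift_deg : nominal retinal phase shift between the two frames
    eye_shift_deg   : pursuit-induced offset between retinal and
                      objective displacement (180 deg in Experiment 1)
    w               : hypothetical mixing weight (1 = purely retinal)
    """
    best_cost, best_dir = np.inf, 0.0
    # A periodic pattern makes shifts s and s - 360 equivalent matches.
    for cand in (phase_shift_deg, phase_shift_deg - 360.0):
        retinal = cand
        objective = cand - eye_shift_deg  # same match in world coordinates
        cost = w * abs(retinal) + (1 - w) * abs(objective)
        if cost < best_cost:
            best_cost, best_dir = cost, np.sign(retinal)
    return best_dir  # +1 rightward, -1 leftward
```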


2019 ◽  
Vol 30 (4) ◽  
pp. 2659-2673
Author(s):  
Shaun L Cloherty ◽  
Jacob L Yates ◽  
Dina Graf ◽  
Gregory C DeAngelis ◽  
Jude F Mitchell

Abstract Visual motion processing is a well-established model system for studying neural population codes in primates. The common marmoset, a small New World primate, offers unparalleled opportunities to probe these population codes in key motion processing areas, such as cortical areas MT and MST, because these areas are accessible for imaging and recording at the cortical surface. However, little is currently known about the perceptual abilities of the marmoset. Here, we introduce a paradigm for studying motion perception in the marmoset and compare their psychophysical performance with that of human observers. We trained two marmosets to perform a motion estimation task in which they provided an analog report of their perceived direction of motion with an eye movement to a ring that surrounded the motion stimulus. Marmosets and humans exhibited similar trade-offs between speed and accuracy: errors were larger and reaction times were longer as the strength of the motion signal was reduced. Reverse correlation on the temporal fluctuations in motion direction revealed that both species exhibited short integration windows; however, marmosets had substantially less nondecision time than humans. Our results provide the first quantification of motion perception in the marmoset and demonstrate several advantages to using analog estimation tasks.
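The reverse-correlation step admits a compact illustration. The sketch below shows the generic technique under simplifying assumptions (linear weighting, least-squares fit); the input arrays are hypothetical and this is not the authors’ analysis code.

```python
import numpy as np

def temporal_kernel(noise, errors):
    """Estimate a temporal integration kernel by reverse correlation.

    noise  : (n_trials, n_frames) frame-by-frame direction fluctuations
             around the mean signal direction (degrees)
    errors : (n_trials,) signed report errors (degrees)

    Least-squares regression of errors on the fluctuations yields one
    weight per frame; frames with large weights are the ones the
    observer actually integrated.
    """
    X = noise - noise.mean(axis=0, keepdims=True)
    kernel, *_ = np.linalg.lstsq(X, errors, rcond=None)
    return kernel
```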


2021 ◽  
Author(s):  
Scott T. Steinmetz ◽  
Oliver W. Layton ◽  
Nate V. Powell ◽  
Brett Fajen

This paper introduces a self-tuning mechanism for capturing rapid adaptation to changing visual stimuli by a population of neurons. Building upon the principles of efficient sensory encoding, we show how neural tuning curve parameters can be continually updated to optimally encode a time-varying distribution of recently detected stimulus values. We implemented this mechanism in a neural model that produces human-like estimates of self-motion direction (i.e., heading) based on optic flow. The parameters of speed-sensitive units were dynamically tuned in accordance with efficient sensory encoding such that the network remained sensitive as the distribution of optic flow speeds varied. In two simulation experiments, we found that the model with dynamic tuning produced more accurate, shorter-latency heading estimates than the model with static tuning. We conclude that dynamic efficient sensory encoding offers a plausible approach for capturing adaptation to varying visual environments in biological visual systems and neural models alike.
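One common reading of efficient sensory encoding is quantile spacing: tuning-curve centers are placed so that equal probability mass of the recent stimulus distribution falls between neighbors. The sketch below assumes that formulation; the window length and unit count are hypothetical, and this is not the paper’s implementation.

```python
import numpy as np
from collections import deque

class DynamicTuning:
    """Continually re-center a bank of speed-tuned units on recent stimuli."""

    def __init__(self, n_units=16, window=500):
        self.recent = deque(maxlen=window)  # sliding stimulus history
        self.n_units = n_units
        self.centers = np.linspace(0.0, 1.0, n_units)  # placeholder start

    def update(self, stimulus_value):
        """Record a stimulus and re-space centers at running quantiles."""
        self.recent.append(stimulus_value)
        # Equal probability mass between neighboring units: unit density
        # then tracks the density of recently observed stimulus values.
        qs = (np.arange(self.n_units) + 0.5) / self.n_units
        self.centers = np.quantile(np.asarray(self.recent), qs)
```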


2018 ◽  
Vol 4 (1) ◽  
pp. 501-523 ◽  
Author(s):  
Shin'ya Nishida ◽  
Takahiro Kawabe ◽  
Masataka Sawayama ◽  
Taiki Fukiage

Visual motion processing can be conceptually divided into two levels. At the lower level, local motion signals are detected by spatiotemporal-frequency-selective sensors and then integrated into a motion vector flow. Although the model based on V1-MT physiology provides a good computational framework for this level of processing, it needs to be updated to fully explain psychophysical findings about motion perception, including complex motion signal interactions in the spatiotemporal-frequency and space domains. At the higher level, the velocity map is interpreted. Although there are many motion interpretation processes, we highlight recent progress in research on the perception of material (e.g., specular reflection, liquid viscosity) and on animacy perception. We then consider possible mechanisms linking the two levels and propose intrinsic flow decomposition as the key problem. To provide insights into the computational mechanisms of motion perception, in addition to psychophysics and neuroscience, we review machine vision studies seeking to solve similar problems.
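The canonical instance of a spatiotemporal-frequency-selective sensor is the Adelson-Bergen motion-energy unit. Below is a minimal 1D space-time sketch of that standard model (our illustration, not code from the review; filter size and frequencies are arbitrary).

```python
import numpy as np
from scipy.signal import correlate2d

def motion_energy(stimulus, sf=0.1, tf=0.1, size=15):
    """Opponent motion energy for a (time, space) luminance array.

    Builds rightward and leftward space-time Gabor quadrature pairs at
    one spatiotemporal frequency (sf cycles/pixel, tf cycles/frame) and
    returns rightward-minus-leftward energy at each (time, space) point.
    """
    t, x = np.meshgrid(np.arange(size) - size // 2,
                       np.arange(size) - size // 2, indexing="ij")
    envelope = np.exp(-(x**2 + t**2) / (2 * (size / 4) ** 2))
    energy = {}
    for name, sign in (("right", 1), ("left", -1)):
        phase = 2 * np.pi * (sf * x - sign * tf * t)
        even = envelope * np.cos(phase)
        odd = envelope * np.sin(phase)
        # Quadrature pair: squared outputs sum to phase-invariant energy.
        energy[name] = (correlate2d(stimulus, even, mode="valid") ** 2 +
                        correlate2d(stimulus, odd, mode="valid") ** 2)
    return energy["right"] - energy["left"]
```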


2019 ◽  
Vol 121 (4) ◽  
pp. 1207-1221 ◽  
Author(s):  
Ryo Sasaki ◽  
Dora E. Angelaki ◽  
Gregory C. DeAngelis

Multiple areas of macaque cortex are involved in visual motion processing, but their relative functional roles remain unclear. The medial superior temporal (MST) area is typically divided into lateral (MSTl) and dorsal (MSTd) subdivisions that are thought to be involved in processing object motion and self-motion, respectively. Whereas MSTd has been studied extensively with regard to processing visual and nonvisual self-motion cues, little is known about self-motion signals in MSTl, especially nonvisual signals. Moreover, little is known about how self-motion and object motion signals interact in MSTl and how this differs from interactions in MSTd. We compared the visual and vestibular heading tuning of neurons in MSTl and MSTd using identical stimuli. Our findings reveal that both visual and vestibular heading signals are weaker in MSTl than in MSTd, suggesting that MSTl is less well suited to participate in self-motion perception than MSTd. We also tested neurons in both areas with a variety of combinations of object motion and self-motion. Our findings reveal that vestibular signals improve the separability of coding of heading and object direction in both areas, albeit more strongly in MSTd due to the greater strength of vestibular signals. Based on a marginalization technique, population decoding reveals that heading and object direction can be more effectively dissociated from MSTd responses than MSTl responses. Our findings help to clarify the respective contributions that MSTl and MSTd make to processing of object motion and self-motion, although our conclusions may be somewhat specific to the multipart moving objects that we employed. NEW & NOTEWORTHY Retinal image motion reflects contributions from both the observer’s self-motion and the movement of objects in the environment. The neural mechanisms by which the brain dissociates self-motion and object motion remain unclear. This study provides the first systematic examination of how the lateral subdivision of area MST (MSTl) contributes to dissociating object motion and self-motion. We also examine, for the first time, how MSTl neurons represent translational self-motion based on both vestibular and visual cues.
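The marginalization step can be illustrated with a toy Bayesian decoder (our sketch under an independent-Poisson assumption; the paper’s technique is more elaborate): form a posterior over a joint grid of heading and object direction, then sum out object direction before reading off heading.

```python
import numpy as np

def decode_heading(r, f, headings):
    """Decode heading while marginalizing over object direction.

    r        : (n_neurons,) observed spike counts
    f        : (n_neurons, n_headings, n_objdirs) tuning (expected counts)
    headings : (n_headings,) heading values for the grid

    Assumes independent Poisson noise; returns the heading with the
    highest marginal posterior under a flat prior.
    """
    # Poisson log likelihood on the joint grid, up to r-only constants.
    loglik = (r[:, None, None] * np.log(f) - f).sum(axis=0)
    joint = np.exp(loglik - loglik.max())
    marginal = joint.sum(axis=1)  # sum out the nuisance object direction
    return headings[np.argmax(marginal)]
```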


Author(s):  
Daniela Perani ◽  
Paola Scifo ◽  
Guido M. Cicchini ◽  
Pasquale Della Rosa ◽  
Chiara Banfi ◽  
...  

Abstract Motion perception deficits in dyslexia show large intersubject variability, partly reflecting genetic factors that influence the development of brain architecture. In previous work, we demonstrated that dyslexic carriers of a mutation of the DCDC2 gene have a very strong impairment in motion perception. In the present study, we investigated structural white matter alterations associated with poor motion perception in a cohort of twenty dyslexics comprising a subgroup carrying the DCDC2 gene deletion (DCDC2d+) and a subgroup without the risk variant (DCDC2d–). We observed significant deficits in motion contrast sensitivity and in motion direction discrimination accuracy at high contrast, stronger in the DCDC2d+ group. Both motion perception impairments correlated significantly with fractional anisotropy (FA) in posterior ventral and dorsal tracts, including early visual pathways both along the optic radiation and in proximity to occipital cortex, MT, and the VWFA. However, the DCDC2d+ group showed stronger correlations between FA and motion perception impairments than the DCDC2d– group in early visual white matter bundles, including the optic radiations, and in ventral pathways located in the left inferior temporal cortex. Our results suggest that the DCDC2d+ group experiences higher vulnerability in visual motion processing even at early stages of visual analysis, which might represent a specific feature associated with the genotype and provide further neurobiological support for the visual-motion deficit account of dyslexia in a specific subpopulation.


2008 ◽  
Vol 276 (1655) ◽  
pp. 263-268 ◽  
Author(s):  
William Curran ◽  
Colin W. G. Clifford ◽  
Christopher P Benton

It is well known that context influences our perception of visual motion direction. For example, spatial and temporal context manipulations can be used to induce two well-known motion illusions: direction repulsion and the direction after-effect (DAE). Both result in inaccurate perception of direction when a moving pattern is either superimposed on (direction repulsion), or presented following adaptation to (DAE), another pattern moving in a different direction. Remarkable similarities in tuning characteristics suggest that common processes underlie the two illusions. What is not clear, however, is whether the processes driving the two illusions are expressions of the same or different neural substrates. Here we report two experiments demonstrating that direction repulsion and the DAE are, in fact, expressions of different neural substrates. Our strategy was to use each of the illusions to create a distorted perceptual representation upon which the mechanisms generating the other illusion could potentially operate. We found that the processes mediating direction repulsion did indeed access the distorted perceptual representation induced by the DAE. Conversely, the DAE was unaffected by direction repulsion. Thus parallels in perceptual phenomenology do not necessarily imply common neural substrates. Our results also demonstrate that the neural processes driving the DAE occur at an earlier stage of motion processing than those underlying direction repulsion.


1998 ◽  
Vol 53 (7-8) ◽  
pp. 622-627
Author(s):  
Walter J. Gillner

Abstract In the early steps of visual information processing, motion is one of the most important cues for the development of spatial representations. Obstacle detection and egomotion estimation are only two examples of the power of visual motion detection systems. The underlying process of information extraction has to be active because the observer is capable of egomotion, which means that the observer’s motion has an impact on the projected retinal motion field. One of the challenging tasks for biological as well as technical vision systems is therefore to couple retinal motion and egomotion and to uncouple egomotion and object motion. The following sections describe a model that couples visual motion processing with the egomotion parameters of a moving observer. In addition to a theoretical introduction of the model, an application to traffic scene analysis is presented. Finally, the paper relates the model to biological motion processing systems.
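The retinal-motion/egomotion coupling that such a model builds on can be written in closed form. The sketch below uses the classic instantaneous optic-flow equations for a pinhole camera (a standard textbook formulation, assumed here; it is not Gillner’s specific model).

```python
import numpy as np

def retinal_flow(x, y, Z, T, Omega):
    """Instantaneous image velocity at normalized image point (x, y).

    Z     : scene depth at that point
    T     : (Tx, Ty, Tz) observer translation
    Omega : (wx, wy, wz) observer rotation

    The translational term scales with 1/Z; the rotational term is
    depth-independent, which is what makes it possible to factor
    egomotion out of the projected retinal motion field.
    """
    Tx, Ty, Tz = T
    wx, wy, wz = Omega
    u = (-Tx + x * Tz) / Z + (x * y * wx - (1 + x**2) * wy + y * wz)
    v = (-Ty + y * Tz) / Z + ((1 + y**2) * wx - x * y * wy - x * wz)
    return np.array([u, v])
```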


Brain ◽  
2021 ◽  
Author(s):  
Gaurav H Patel ◽  
Sophie C Arkin ◽  
Daniel Ruiz-Betancourt ◽  
Fabiola I Plaza ◽  
Safia A Mirza ◽  
...  

Abstract Schizophrenia is associated with marked impairments in social cognition. However, the neural correlates of these deficits remain unclear. Here we use naturalistic stimuli to examine the role of the right temporoparietal junction/posterior superior temporal sulcus (TPJ-pSTS)—an integrative hub for the cortical networks pertinent to understanding complex social situations—in social inference, a key component of social cognition, in schizophrenia. Twenty-seven participants with schizophrenia (SzP) and 21 healthy controls watched a clip of the movie “The Good, the Bad, and the Ugly” while high-resolution multiband fMRI images were collected. We used inter-subject correlation (ISC) to measure the evoked activity, which we then compared to social cognition as measured by The Awareness of Social Inference Test (TASIT). We also compared between groups 1) the relationship of TPJ-pSTS BOLD activity with the motion content of the movie, 2) its synchronization with other cortical areas involved in viewing the movie, and 3) its relationship with the frequency of saccades made during the movie. Activation deficits were greatest in the middle TPJ (TPJm) and correlated significantly with impaired TASIT performance across groups. Follow-up analyses of the TPJ-pSTS revealed decreased synchronization with other cortical areas, decreased correlation with the motion content of the movie, and decreased correlation with the saccades made during the movie. The functional impairment of the TPJm, a hub area in the middle of the TPJ-pSTS, predicts deficits in social inference in SzP by disrupting the integration of visual motion processing into the TPJ. This disrupted integration then affects the use of the TPJ to guide saccades during visual scanning of the movie clip. These findings suggest that the TPJ may be a treatment target for improving deficits in a key component of social cognition in SzP.
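Inter-subject correlation is typically computed in a leave-one-out fashion. The sketch below shows that generic form (an assumption about the standard method, not the paper’s exact pipeline).

```python
import numpy as np

def isc(data):
    """Leave-one-out inter-subject correlation.

    data : (n_subjects, n_timepoints) BOLD time courses for one region.

    Each subject's time course is correlated with the mean time course
    of all other subjects, yielding one ISC value per subject.
    """
    n = data.shape[0]
    out = np.empty(n)
    for s in range(n):
        others = np.delete(data, s, axis=0).mean(axis=0)
        out[s] = np.corrcoef(data[s], others)[0, 1]
    return out
```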


2021 ◽  
Author(s):  
Ana Gómez-Granados ◽  
Isaac Kurtzer ◽  
Tarkeshwar Singh

Abstract An important window into sensorimotor function is how we catch moving objects. Studies that examined catching of free-falling objects report that the timing of the motor response is independent of the momentum of the projectile, whereas the amplitude of the motor response scales with projectile momentum. However, this pattern may not be a general strategy of catching, since objects accelerating under gravity move in a characteristic manner (unlike objects moving in the horizontal plane) and the human visual motion-processing system is not adept at encoding acceleration. Accordingly, we developed a new experimental paradigm using a robotic manipulandum and augmented reality in which participants stabilized against the impact of a virtual object moving at constant velocity in the horizontal plane. Participants needed to apply an impulse that mirrored the object’s momentum to bring it to rest, and they received explicit feedback on their performance. In different blocks, object momentum was varied by increasing either its speed or its mass. In contrast to previous reports on free-falling objects, we observed that increasing object speed caused earlier onset of arm muscle activity and limb force relative to the impending time to contact. Arm force also increased as a function of target momentum, whether the change in momentum was due to speed or to mass. Our results demonstrate velocity-dependent timing of catching and a complex pattern of scaling to momentum.
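The task demand reduces to the impulse-momentum theorem: bringing an object of mass m moving at speed v to rest requires an impulse J = ∫F dt = m·v. A worked sketch with hypothetical numbers (the actual task parameters are not given in the abstract):

```python
# Impulse needed to arrest a virtual object (illustrative values only).
mass = 2.0   # kg   (hypothetical object mass)
speed = 0.5  # m/s  (constant horizontal velocity)

momentum = mass * speed  # 1.0 kg*m/s = required impulse (N*s)

# Any force profile whose time integral equals the momentum stops the
# object, e.g., a constant 10 N opposing force applied for 0.1 s.
force = 10.0                 # N
duration = momentum / force  # 0.1 s
print(f"impulse = {momentum} N*s, hold {force} N for {duration} s")
```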

