Fixation-pattern similarity analysis reveals adaptive changes in face-viewing strategies following aversive learning

eLife ◽  
2019 ◽  
Vol 8 ◽  
Author(s):  
Lea Kampermann ◽  
Niklas Wilming ◽  
Arjen Alink ◽  
Christian Büchel ◽  
Selim Onat

Animals can effortlessly adapt their behavior by generalizing from past aversive experiences, allowing them to avoid harm in novel situations. We studied how visual information is sampled by eye movements during this process, called fear generalization, using faces organized along a circular two-dimensional perceptual continuum. During learning, one face was conditioned to predict a harmful event, whereas the most dissimilar face stayed neutral. This introduced an adversity gradient along one specific dimension, while the other, unspecific dimension was defined solely by perceptual similarity. Aversive learning changed scanning patterns selectively along the adversity-related dimension, but not along the orthogonal dimension. This effect was located mainly within the eye region of faces. Our results provide evidence for adaptive changes in face-viewing strategies following aversive learning. This is compatible with the view that these changes serve to sample information in a way that allows safe and adverse stimuli to be discriminated, supporting better threat prediction.
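The fixation-pattern similarity analysis named in the title can be illustrated with a minimal sketch: represent each face's viewing behavior as a smoothed two-dimensional fixation-density map and compute pairwise correlation distances between maps. All names, map sizes, and data below are hypothetical; this is a sketch of the general idea, not the authors' published pipeline.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_density_map(fixations, shape=(100, 100), sigma=3.0):
    """Turn a list of (x, y) fixation coordinates into a smoothed density map."""
    density = np.zeros(shape)
    for x, y in fixations:
        density[int(y), int(x)] += 1.0
    return gaussian_filter(density, sigma)

def fpsa_matrix(maps):
    """Pairwise dissimilarity (1 - Pearson r) between flattened fixation maps."""
    flat = np.array([m.ravel() for m in maps])
    r = np.corrcoef(flat)   # n_faces x n_faces correlation matrix
    return 1.0 - r          # correlation distance

# Hypothetical usage: one fixation list per face on the circular continuum.
rng = np.random.default_rng(0)
fix_lists = [rng.uniform(0, 99, size=(50, 2)) for _ in range(8)]
maps = [fixation_density_map(f) for f in fix_lists]
dissimilarity = fpsa_matrix(maps)   # 8 x 8 matrix, one row per face
print(dissimilarity.shape)
```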

2017 ◽  
Author(s):  
Lea Kampermann ◽  
Niklas Wilming ◽  
Arjen Alink ◽  
Christian Büchel ◽  
Selim Onat

Animals can effortlessly adapt their behavior by generalizing from past experiences, avoiding harm in novel aversive situations. In our current understanding, the perceptual similarity between learning and generalization samples is viewed as one major factor driving aversive generalization. Alternatively, the threat-prediction account proposes that perceptual similarity should lead to generalization to the extent that it predicts harmful outcomes. We tested these views using a two-dimensional perceptual continuum of faces. During learning, one face was conditioned to predict a harmful event, whereas the most dissimilar face stayed neutral, introducing an adversity gradient defined along only one dimension. Learning changed the way humans sampled information while viewing faces. These changes occurred specifically along the adversity gradient, leading to increased dissimilarity of eye-movement patterns along the threat-related dimension. This provides evidence for the threat-prediction account of generalization, which holds that perceptual factors are relevant to the extent that they predict harmful outcomes.
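One hedged way to quantify the reported anisotropy, assuming eight faces spaced at 45 degrees with the CS+ at 0 degrees and the CS- at 180 degrees: regress pairwise pattern dissimilarity on each pair's separation along the adversity axis versus the orthogonal axis. This is an illustrative linear model with synthetic values, not necessarily the model fitted in the study.

```python
import numpy as np

# Hypothetical setup: 8 faces on a circle, CS+ at 0 deg, CS- at 180 deg.
angles = np.deg2rad(np.arange(8) * 45.0)

def axis_weights(dissimilarity, angles):
    """Least-squares weights telling how strongly pattern dissimilarity grows
    along the adversity (CS+/CS-) axis vs. the orthogonal axis.
    w_specific > w_unspecific indicates adversity-selective stretching."""
    i, j = np.triu_indices(len(angles), k=1)
    d_spec = np.abs(np.cos(angles[i]) - np.cos(angles[j]))    # adversity axis
    d_unspec = np.abs(np.sin(angles[i]) - np.sin(angles[j]))  # orthogonal axis
    X = np.column_stack([d_spec, d_unspec, np.ones_like(d_spec)])
    w, *_ = np.linalg.lstsq(X, dissimilarity[i, j], rcond=None)
    return {"w_specific": w[0], "w_unspecific": w[1]}

# Toy dissimilarity matrix stretched along the adversity axis:
a, b = np.meshgrid(angles, angles)
toy = 1.5 * np.abs(np.cos(a) - np.cos(b)) + 0.5 * np.abs(np.sin(a) - np.sin(b))
print(axis_weights(toy, angles))  # w_specific ~ 1.5, w_unspecific ~ 0.5
```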


2017 ◽  
Vol 14 (2) ◽  
pp. 234-252
Author(s):  
Emilia Christie Picelli Sanches ◽  
Claudia Mara Scudelari Macedo ◽  
Juliana Bueno

Accessibility in the education of blind people is a right that must be fulfilled. Considering that information design aims to transmit information effectively to the receiver, and that a static image must be adapted so that a blind student can access its visual content, we propose a way to translate visual information into the tactile sense. The purpose of this paper is to present a model for translating static two-dimensional images into three-dimensional tactile images. It starts from a brief literature review about blindness, tactile perception, and tactile images. It then presents the translation model in three parts: (1) recommendations from the literature; (2) structure; and (3) a preliminary model for testing. Next, it describes a test of the model with two designers with digital modeling skills (potential users). The tests yielded two distinct models, one using elevation and the other using textures, and both participants successfully completed the intended task. The test results also revealed flaws in the model that need to be adjusted in the next stages of the research.
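The elevation strategy that one participant chose can be sketched as a simple intensity-to-height mapping, quantized into a few discrete levels on the assumption that discrete steps are easier to tell apart by touch than a smooth gradient. The function and parameters below are hypothetical illustrations, not the paper's model.

```python
import numpy as np

def image_to_heightmap(gray, max_height_mm=3.0, levels=4):
    """Map grayscale intensities (0-255) to a small set of discrete
    elevation levels; discrete steps are easier to distinguish by touch
    than a continuous gradient."""
    norm = np.asarray(gray, dtype=float) / 255.0
    stepped = np.round(norm * (levels - 1)) / (levels - 1)  # quantize
    return (1.0 - stepped) * max_height_mm  # dark regions raised highest

# Hypothetical usage with a synthetic 2D "image":
gray = np.tile(np.linspace(0, 255, 8), (8, 1))
height = image_to_heightmap(gray)
print(height.round(1))
```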


1990 ◽  
Vol 4 (6) ◽  
pp. 555-578 ◽  
Author(s):  
Anne Morel ◽  
Jean Bullier

A number of lines of evidence suggest that, in the macaque monkey, the inferior parietal and inferotemporal cortices process different types of visual information. It has been suggested that visual information reaching these two subdivisions follows separate pathways from the striate cortex through the prestriate cortex. We examined this possibility directly by placing injections of the retrograde fluorescent tracers fast blue and diamidino yellow in inferior parietal and inferotemporal cortex and examining the spatial pattern of cortical areas containing labeled cells in two-dimensional reconstructions of the cortex.

The results of injections in inferotemporal cortex show that TEO receives afferents from areas V2, ventral V3, V3A, central V4, V4t, and DPL in prestriate cortex and from areas IPa, PGa, and FST in the superior temporal sulcus (STS). Area TEp receives afferents only from V4 in prestriate cortex and from IPa, PGa, and FST in the anterior STS. Area TEa receives no prestriate input and is innervated by IPa, PGa, FST, and TPO in the anterior STS.

The results of injections in inferior parietal cortex demonstrate that POa receives afferents from dorsal V3, V3A, peripheral V4, DPL, and PO in prestriate cortex, from MST and VIP, and from IPa, PGa, TPO, and FST in the anterior STS. Area PGc (corresponding to 7a) is innervated by PO, MST, and TPO in the anterior STS.

Examination of the two-dimensional reconstructions of the pattern of labeling after combined injections of fast blue and diamidino yellow in areas POa and TEO revealed that these areas are innervated principally by different prestriate areas. Only a small region, centered on area V3A and extending into V4 and DPL, contained cells labeled by either injection as well as a small number of double-labeled cells. In contrast, areas POa and TEO receive afferents from extensive common regions in the anterior STS corresponding to areas IPa, PGa, and FST.

These results directly demonstrate that visual information from the striate cortex reaches the inferior parietal and inferotemporal cortices through largely separate prestriate cortical pathways. On the other hand, both parietal and inferotemporal cortices receive common inputs from extensive regions in the anterior STS, which may play a role in linking the processing occurring in these two cortical subdivisions of the visual system.
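The double-labeling logic can be made concrete with a toy tally: for each source area, count cells labeled by each tracer and the fraction labeled by both, which indexes how much the two pathways share that area. All counts below are invented for illustration, not data from the study.

```python
from collections import namedtuple

AreaCounts = namedtuple("AreaCounts", "fast_blue diamidino_yellow double")

# Hypothetical counts of retrogradely labeled cells per prestriate area.
counts = {
    "V3A": AreaCounts(fast_blue=120, diamidino_yellow=95, double=14),
    "V4":  AreaCounts(fast_blue=300, diamidino_yellow=40, double=5),
    "DPL": AreaCounts(fast_blue=80,  diamidino_yellow=60, double=6),
}

for area, c in counts.items():
    total = c.fast_blue + c.diamidino_yellow - c.double  # unique labeled cells
    overlap = c.double / total
    print(f"{area}: {overlap:.1%} of labeled cells project to both targets")
```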


2017 ◽  
Vol 118 (3) ◽  
pp. 1542-1555 ◽  
Author(s):  
Bastian Schledde ◽  
F. Orlando Galashan ◽  
Magdalena Przybyla ◽  
Andreas K. Kreiter ◽  
Detlef Wegener

Nonspatially selective attention is based on the notion that specific features or objects in the visual environment are effectively prioritized in cortical visual processing. Feature-based attention (FBA), in particular, is a well-studied process that dynamically and selectively addresses neurons preferentially processing the attended feature attribute (e.g., leftward motion). In everyday life, however, behavior may require high sensitivity for an entire feature dimension (e.g., motion), and experimental evidence for feature dimension-specific attentional modulation at the cellular level has been lacking. We therefore investigated neuronal activity in the macaque motion-selective mediotemporal area (MT) in an experimental setting requiring the monkeys to detect either a motion change or a color change. We hypothesized that neural activity in MT is enhanced when the task requires perceptual sensitivity to motion. In line with this, we found that mean firing rates were higher in the motion task and that response variability and latency were lower than in the color task, despite identical visual stimulation. This task-specific, dimension-based modulation of motion processing was present even in the absence of visual input, was independent of the relation between the attended and stimulating motion direction, and was accompanied by a spatially global reduction of neuronal variability. The results provide single-cell support for the hypothesis of a feature dimension-specific top-down signal emphasizing the processing of an entire feature class. NEW & NOTEWORTHY Cortical processing serving visual perception prioritizes information according to current task requirements. We provide evidence in favor of a dimension-based attentional mechanism addressing all neurons that process visual information in the task-relevant feature domain. Behavioral tasks required monkeys to attend either color or motion, causing modulations of response strength, variability, latency, and baseline activity in motion-selective neurons of monkey area MT, irrespective of the attended motion direction but specific to the attended feature dimension.
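The reported effects, higher rates and lower variability in the motion task, correspond to simple trial-wise spike-count statistics. A minimal sketch, assuming spike counts per trial for each task condition (synthetic data and layout, not the authors' analysis code):

```python
import numpy as np

def rate_and_fano(spike_counts, window_s=0.5):
    """Mean firing rate (spikes/s) and Fano factor (variance/mean of counts)
    across trials for one condition."""
    counts = np.asarray(spike_counts, dtype=float)
    rate = counts.mean() / window_s
    fano = counts.var(ddof=1) / counts.mean()
    return rate, fano

# Hypothetical trial-wise spike counts for one MT neuron.
rng = np.random.default_rng(1)
motion_task = rng.poisson(12, size=40)  # attend-motion trials
color_task = rng.poisson(9, size=40)    # attend-color trials

for name, counts in [("motion", motion_task), ("color", color_task)]:
    rate, fano = rate_and_fano(counts)
    print(f"{name} task: {rate:.1f} spk/s, Fano {fano:.2f}")
```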


1991 ◽  
Vol 66 (3) ◽  
pp. 777-793 ◽  
Author(s):  
J. W. McClurkin ◽  
T. J. Gawne ◽  
B. J. Richmond ◽  
L. M. Optican ◽  
D. L. Robinson

1. Using behaving monkeys, we studied the visual responses of single neurons in the parvocellular layers of the lateral geniculate nucleus (LGN) to a set of two-dimensional black-and-white patterns. We found that monkeys could be trained to make sufficiently reliable and stable fixations to enable us to plot and characterize the receptive fields of individual neurons. A qualitative examination of rasters and a statistical analysis of the data revealed that the responses of neurons were related to the stimuli.

2. The data from 5 of the 13 "X-like" neurons in our sample indicated the presence of antagonistic center and surround mechanisms and linear summation of luminance within the center and surround mechanisms. We attribute the lack of evidence for surround antagonism in the eight neurons that failed to exhibit center-surround antagonism either to a mismatch between the size of the pixels in the stimuli and the size of the receptive field, or to the lack of a surround mechanism (i.e., the type II neurons of Wiesel and Hubel).

3. The data from five other neurons confirm and extend previous reports indicating that the surround regions of X-like neurons can have nonlinearities. The responses of these neurons were not modulated when a contrast-reversing, bipartite stimulus was centered on the receptive field, which suggests linear summation within the center and surround mechanisms. However, it was frequently the case for these neurons that stimuli of identical pattern but opposite contrast elicited responses of similar polarity, which indicates nonlinear behavior.

4. We found a wide variety of temporal patterns in the responses of individual LGN neurons, including differences in the magnitude, width, and number of peaks of the initial on-transient and in the magnitude of the later sustained component. These different temporal patterns were repeatable and clearly different for different visual patterns. These results suggest that visual information may be carried in the shape as well as in the amplitude of the response waveform.
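The closing claim, that information may be carried in the shape of the response waveform as well as its amplitude, is the kind of statement usually quantified by decomposing response waveforms into principal components and asking whether stimuli separate on components beyond the first. A minimal sketch with synthetic PSTHs (hypothetical data and layout, not the authors' code):

```python
import numpy as np

def waveform_components(psths, n_components=3):
    """Project trial-averaged response waveforms (stimuli x time bins)
    onto their leading principal components. If stimuli separate on
    components beyond the first, waveform *shape* carries information
    beyond overall amplitude."""
    X = psths - psths.mean(axis=0)    # center over stimuli
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[:n_components].T    # stimuli x component scores

# Hypothetical PSTHs: 16 stimuli x 200 time bins.
rng = np.random.default_rng(2)
psths = rng.random((16, 200))
scores = waveform_components(psths)
print(scores.shape)  # (16, 3)
```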


2012 ◽  
Vol 25 (0) ◽  
pp. 169
Author(s):  
Tomoaki Nakamura ◽  
Yukio P. Gunji

The majority of research on audio–visual interaction has focused on spatio-temporal factors and synesthesia-like phenomena. In particular, research on synesthesia-like phenomena has been advanced by Marks and colleagues, who found synesthesia-like correlations between the brightness and size of visual stimuli and the pitch of auditory stimuli (Marks, 1987). The main interest of research on synesthesia-like phenomena appears to be the perceptual similarities and differences between synesthetes and non-synesthetes. We hypothesized that cross-modal phenomena in non-synesthetes at the perceptual level emerge as a function that compensates for the absence or ambiguity of a certain stimulus. To test this hypothesis, we investigated audio–visual interaction using the movement (speed) of an object as the visual stimulus and sine waves as auditory stimuli. In each trial, objects (circles) moved at a fixed speed and were masked at arbitrary positions, and an auditory stimulus (high, middle, or low pitch) was presented simultaneously with the disappearance of the objects. Subjects reported the expected position of the objects when the auditory stimulus stopped. The results showed a correlation between the position of the object, i.e., its movement speed, and the pitch of the sound. We conjecture that cross-modal phenomena in non-synesthetes tend to occur when one of the sensory stimuli is absent or ambiguous.
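The reported pitch-position relation reduces to a rank correlation between an ordinal pitch code and the reported position per trial. A minimal sketch with invented numbers (not the study's data):

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical trials: pitch coded 0=low, 1=middle, 2=high;
# reported position of the masked moving object in pixels.
pitch = np.array([0, 0, 1, 1, 2, 2, 0, 1, 2, 2])
reported_pos = np.array([140, 150, 180, 175, 220, 230, 145, 190, 215, 225])

rho, p = spearmanr(pitch, reported_pos)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")  # positive rho: higher pitch,
                                                 # farther reported position
```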

