Electrophysiological correlates of the interplay between low-level visual features and emotional content during word reading

2018 ◽  
Vol 8 (1) ◽  
Author(s):  
Sebastian Schindler ◽  
Antonio Schettino ◽  
Gilles Pourtois

Processing affectively charged visual stimuli typically results in increased amplitude of specific event-related potential (ERP) components. Low-level features similarly modulate electrophysiological responses, with amplitude changes proportional to variations in stimulus size and contrast. However, it remains unclear whether emotion-related amplifications during visual word processing are necessarily intertwined with changes in specific low-level features or, instead, may act independently. In this pre-registered electrophysiological study, we varied the font size and contrast of neutral and negative words while participants monitored their semantic content. We examined ERP responses associated with early sensory and attentional processes as well as later stages of stimulus processing. Results showed amplitude modulations by low-level visual features early on following stimulus onset (i.e., P1 and N1 components), while the LPP was independently modulated by these visual features. Interactive effects of size and emotion were observed only at the level of the EPN: larger EPN amplitudes for negative words were observed only for small high-contrast and large low-contrast words. These results suggest that the early increase in sensory processing at the EPN level for negative words is not automatic, but bound to specific combinations of low-level features, presumably via attentional control processes.
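To make the measurement concrete: component amplitudes such as the P1 or EPN are typically quantified as the mean voltage of the trial-averaged EEG in a component-specific time window at selected electrodes. The sketch below illustrates this in Python; it is not the study's pipeline, and the sampling rate, window, and channel indices are assumptions for the example.

```python
# Minimal sketch of how an ERP component amplitude can be quantified.
# Illustrative only: array shapes, sampling rate, window, and channel
# indices are assumptions, not the study's actual pipeline.
import numpy as np

def mean_component_amplitude(epochs, sfreq, t_start, t_end, channels):
    """Average epochs over trials, then take the mean voltage in the
    window [t_start, t_end) seconds (stimulus onset at sample 0) over
    the given channel indices."""
    erp = epochs.mean(axis=0)                     # (n_channels, n_samples)
    s0, s1 = int(t_start * sfreq), int(t_end * sfreq)
    return erp[channels, s0:s1].mean()

rng = np.random.default_rng(0)
epochs = rng.normal(size=(120, 64, 256))          # fake trials, 64 ch, 256 Hz
# e.g. a P1 window of 80-130 ms at three (hypothetical) occipital channels:
p1 = mean_component_amplitude(epochs, sfreq=256, t_start=0.08, t_end=0.13,
                              channels=[60, 61, 62])
print(f"P1 mean amplitude: {p1:.3f} (arbitrary units)")
```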


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Yunjun Nam ◽  
Takayuki Sato ◽  
Go Uchida ◽  
Ekaterina Malakhova ◽  
Shimon Ullman ◽  
...  

Humans recognize individual faces regardless of variation in facial view. The view-tuned face neurons in the inferior temporal (IT) cortex are regarded as the neural substrate for view-invariant face recognition. This study approximated the visual features encoded by these neurons as combinations of local orientations and colors, originating from natural image fragments. The resultant features reproduced the preference of these neurons for particular facial views. We also found that faces of one identity were separable from faces of other identities in a space where each axis represented one of these features. These results suggest that view-invariant face representation is established by combining view-sensitive visual features. The face representation with these features suggests that, with respect to view-invariant face representation, the seemingly complex and deeply layered ventral visual pathway can be approximated by a shallow network comprising layers of low-level processing for local orientations and colors (V1/V2 level) and layers that detect particular sets of low-level elements derived from natural image fragments (IT level).
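As a rough illustration of this shallow two-stage scheme (not the authors' actual model), the Python sketch below filters an image with an oriented filter bank (the V1/V2-like stage) and then scores stored fragment templates against the resulting orientation maps (the IT-like stage). Filter parameters, image sizes, and templates are all placeholders.

```python
# Toy two-stage sketch of the shallow approximation described above
# (not the authors' model): oriented filtering (V1/V2-like stage),
# then normalized matching of stored fragment templates (IT-like stage).
import numpy as np
from scipy.signal import convolve2d

def gabor(theta, size=9, sigma=2.0, lam=4.0):
    half = size // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    xr = xs * np.cos(theta) + ys * np.sin(theta)
    envelope = np.exp(-(xs**2 + ys**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / lam)

def v1_stage(img, n_orient=4):
    """Rectified responses of an oriented filter bank, one map per orientation."""
    thetas = np.pi * np.arange(n_orient) / n_orient
    return np.stack([np.abs(convolve2d(img, gabor(t), mode="same"))
                     for t in thetas])

def it_stage(maps, templates):
    """Cosine similarity between the image's orientation maps and each
    stored fragment template (one score per template / 'neuron')."""
    x = maps.ravel() / (np.linalg.norm(maps) + 1e-9)
    return np.array([x @ (t.ravel() / (np.linalg.norm(t) + 1e-9))
                     for t in templates])

rng = np.random.default_rng(1)
face = rng.random((32, 32))                            # stand-in for a face
templates = [v1_stage(rng.random((32, 32))) for _ in range(3)]
print(it_stage(v1_stage(face), templates))             # one score per fragment
```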


2021 ◽  
Author(s):  
Maryam Nematollahi Arani

Object recognition has become a central topic in computer vision applications such as image search, robotics, and vehicle safety systems. However, it is a challenging task due to the limited discriminative power of low-level visual features in describing the considerably diverse range of high-level visual semantics of objects. This semantic gap between low-level visual features and high-level concepts is a bottleneck in most systems, and new content analysis models need to be developed to bridge it. In this thesis, algorithms based on conditional random fields (CRFs), from the class of probabilistic graphical models, are developed to tackle the problem of multiclass image labeling for object recognition. Image labeling assigns a specific semantic category, from a predefined set of object classes, to each pixel in the image. By capturing spatial interactions among visual concepts, CRF modeling has proved to be a successful tool for image labeling. This thesis proposes novel approaches to strengthening CRF modeling for robust image labeling. Our primary contributions are twofold. To better represent the feature distributions of CRF potentials, new feature functions based on generalized Gaussian mixture models (GGMMs) are designed and their efficacy is investigated. Thanks to its shape parameter, a GGMM can properly fit the multi-modal and skewed distributions of data in natural images. The new model proves more successful than Gaussian and Laplacian mixture models, and also outperforms a deep neural network model on the Corel image set by 1% in accuracy. Further in this thesis, we apply scene-level contextual information to integrate the global visual semantics of the image with the pixel-wise dense inference of a fully-connected CRF, in order to preserve small objects of foreground classes and to make dense inference robust to initial misclassifications by the unary classifier. The proposed inference algorithm factorizes the joint probability of the labeling configuration and the image scene type to obtain prediction update equations both for labeling individual image pixels and for the overall scene type of the image. The proposed context-based dense CRF model outperforms the conventional dense CRF model by about 2% in labeling accuracy on the MSRC image set and by 4% on the SIFT Flow image set, and obtains the highest scene classification rate of 86% on the MSRC dataset.
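To make the role of the GGMM shape parameter concrete, the following one-dimensional Python sketch evaluates a generalized Gaussian mixture density; the parameter values are illustrative, not fitted values from the thesis.

```python
# One-dimensional sketch of a generalized Gaussian mixture (GGMM) density.
# The shape parameter beta controls the tails: beta = 2 gives a Gaussian
# component, beta = 1 a Laplacian, other values heavier or lighter tails.
import numpy as np
from scipy.special import gamma

def gg_pdf(x, mu, alpha, beta):
    """Generalized Gaussian density: location mu, scale alpha, shape beta."""
    coef = beta / (2 * alpha * gamma(1.0 / beta))
    return coef * np.exp(-(np.abs(x - mu) / alpha) ** beta)

def ggmm_pdf(x, weights, mus, alphas, betas):
    """Mixture density: sum_k w_k * GG(x; mu_k, alpha_k, beta_k)."""
    return sum(w * gg_pdf(x, m, a, b)
               for w, m, a, b in zip(weights, mus, alphas, betas))

x = np.linspace(-5.0, 5.0, 7)
# Two components: a near-Gaussian mode and a heavier-tailed mode.
print(ggmm_pdf(x, weights=[0.6, 0.4], mus=[-1.0, 2.0],
               alphas=[1.0, 0.8], betas=[2.0, 1.2]))
```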


Author(s):  
Anne H.H. Ngu ◽  
Jialie Shen ◽  
John Shepherd

The optimized distance-based access methods currently available for multimedia databases rest on two major assumptions: that a suitable distance function is known a priori, and that the dimensionality of image features is low. The standard approach to building image databases is to represent images as vectors of low-level visual features and to perform retrieval based on these vectors. However, due to the large gap between semantic notions and low-level visual content, it is extremely difficult to define a distance function that accurately captures the similarity of images as perceived by humans. Furthermore, popular dimension reduction methods suffer either from an inability to capture the nonlinear correlations among raw data or from very expensive training costs. To address these problems, in this chapter we introduce a new indexing technique called Combining Multiple Visual Features (CMVF) that integrates multiple visual features for better query effectiveness. Our approach produces low-dimensional image feature vectors that include not only low-level visual properties but also high-level semantic properties. The hybrid architecture can produce feature vectors that capture the salient properties of images yet are small enough to allow the use of existing high-dimensional indexing methods for efficient and effective retrieval.
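The sketch below conveys the general shape of the fusion-and-reduction step; it is ours, not the chapter's code. CMVF's actual reducer is a trained nonlinear hybrid, for which a plain PCA projection stands in here, and all feature names and dimensions are invented for the example.

```python
# Schematic sketch of feature fusion and reduction: per-image feature blocks
# are concatenated, then projected to a compact vector for a standard
# high-dimensional index. PCA stands in for CMVF's trained hybrid reducer.
import numpy as np

def fuse_features(feature_blocks):
    """Concatenate per-image feature blocks, each of shape (n_images, d_i)."""
    return np.hstack(feature_blocks)

def pca_reduce(X, k):
    """Project the rows of X onto their top-k principal components."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:k].T

rng = np.random.default_rng(2)
color   = rng.random((100, 64))     # e.g. a color histogram per image
texture = rng.random((100, 32))     # e.g. a texture descriptor
shape_  = rng.random((100, 16))     # e.g. a shape descriptor
X = fuse_features([color, texture, shape_])
Z = pca_reduce(X, k=10)             # compact vectors for the index
print(X.shape, "->", Z.shape)       # (100, 112) -> (100, 10)
```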


2007 ◽  
Vol 47 (19) ◽  
pp. 2483-2498 ◽  
Author(s):  
Olivier Le Meur ◽  
Patrick Le Callet ◽  
Dominique Barba

2019 ◽  
Author(s):  
Michael B. Bone ◽  
Fahad Ahmad ◽  
Bradley R. Buchsbaum

When recalling an experience of the past, many of the component features of the original episode may be, to a greater or lesser extent, reconstructed in the mind's eye. There is strong evidence that the pattern of neural activity that occurred during an initial perceptual experience is recreated during episodic recall (neural reactivation), and that the degree of reactivation is correlated with the subjective vividness of the memory. However, while we know that reactivation occurs during episodic recall, we have lacked a way of precisely characterizing the contents of a reactivated memory in terms of its featural constituents. Here we present a novel approach, feature-specific informational connectivity (FSIC), that leverages hierarchical representations of image stimuli derived from a deep convolutional neural network to decode neural reactivation in fMRI data collected while participants performed an episodic recall task. We show that neural reactivation associated with low-level visual features (e.g., edges), high-level visual features (e.g., facial features), and semantic features (e.g., "terrier") occurs throughout the dorsal and ventral visual streams and extends into the frontal cortex. Moreover, we show that reactivation of both low- and high-level visual features correlates with the vividness of the memory, whereas only reactivation of low-level features correlates with recognition accuracy when the lure and target images are semantically similar. In addition to demonstrating the utility of FSIC for mapping feature-specific reactivation, these findings resolve the relative contributions of low- and high-level features to the vividness of visual memories, clarify the role of the frontal cortex during episodic recall, and challenge a strict interpretation of the posterior-to-anterior visual hierarchy.
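The following simplified Python sketch conveys the core intuition behind feature-specific reactivation mapping, using a representational-similarity stand-in rather than FSIC itself: for one feature level (e.g., one CNN layer), a positive correlation between the similarity structure of recall-phase voxel patterns and that of the layer's stimulus features indicates reactivation of that feature level. All names and array shapes are assumptions.

```python
# Simplified representational-similarity stand-in for feature-specific
# reactivation mapping (FSIC proper is more involved; all names and
# array shapes here are assumptions for the example).
import numpy as np

def feature_reactivation_score(recall_patterns, layer_features):
    """Correlation between the off-diagonal entries of the neural and
    feature similarity matrices (rows = recalled items)."""
    n = recall_patterns.shape[0]
    iu = np.triu_indices(n, k=1)
    neural = np.corrcoef(recall_patterns)[iu]   # item-by-item neural similarity
    feats = np.corrcoef(layer_features)[iu]     # item-by-item feature similarity
    return np.corrcoef(neural, feats)[0, 1]

rng = np.random.default_rng(3)
recall    = rng.normal(size=(20, 500))    # 20 recalled items x 500 voxels
low_level = rng.normal(size=(20, 4096))   # e.g. an early CNN layer's features
print(feature_reactivation_score(recall, low_level))
```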


2019 ◽  
Author(s):  
Sebastian Schindler ◽  
Maximilian Bruchmann ◽  
Bettina Gathmann ◽  
Robert Moeck ◽  
Thomas Straube

Emotional facial expressions lead to modulations of early event-related potentials (ERPs). However, it has so far remained unclear to what extent these modulations represent face-specific effects rather than differences in low-level visual features, and to what extent they depend on available processing resources. To examine these questions, we conducted two preregistered independent experiments (N = 40 in each) using different variants of a novel task that manipulates peripheral perceptual load across levels while keeping overall visual stimulation constant. Centrally, task-irrelevant angry, neutral, and happy faces and their Fourier phase-scrambled versions, which preserved low-level visual features, were presented. The results of both studies showed load-independent P1 and N170 emotion effects. Importantly, Bayesian analyses confirmed that these emotion effects were face-independent for the P1 but not for the N170 component. We conclude that, first, ERP modulations during the P1 interval strongly depend on low-level visual information, while the emotional N170 modulation requires the processing of figural facial features; and second, both P1 and N170 modulations appear to be immune to a large range of variations in perceptual load.
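For concreteness, here is a minimal Python sketch of Fourier phase scrambling, the control manipulation described above: the amplitude spectrum, which carries the image's low-level spectral content, is retained while the phase spectrum is randomized, destroying recognizable facial structure. This is illustrative, not the authors' stimulus code.

```python
# Minimal sketch of Fourier phase scrambling for a grayscale image:
# keep the amplitude spectrum, randomize the phase spectrum. Drawing the
# random phases from the FFT of a real-valued noise image keeps them
# Hermitian-symmetric, so the inverse transform is (numerically) real.
import numpy as np

def phase_scramble(img, rng):
    amplitude = np.abs(np.fft.fft2(img))
    random_phase = np.angle(np.fft.fft2(rng.random(img.shape)))
    return np.real(np.fft.ifft2(amplitude * np.exp(1j * random_phase)))

rng = np.random.default_rng(4)
face = rng.random((64, 64))            # stand-in for a face image
scrambled = phase_scramble(face, rng)
# The amplitude spectra match while the images themselves do not:
print(np.allclose(np.abs(np.fft.fft2(face)),
                  np.abs(np.fft.fft2(scrambled))))   # True
```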

