Spatial narrative context modulates semantic (but not visual) competition during discourse processing

2018 ◽  
Author(s):  
Glenn Patrick Williams ◽  
Anuenue Kukona ◽  
Yuki Kamide

Recent research highlights the influence of (e.g., task) context on conceptual retrieval. In order to assess whether conceptual representations are context-dependent rather than static, we investigated the influence of spatial narrative context on accessibility for lexical-semantic information by exploring competition effects. In two visual world experiments, participants listened to narratives describing semantically related (piano-trumpet; Experiment 1) or visually similar (bat-cigarette; Experiment 2) objects in the same or separate narrative locations while viewing arrays displaying these (“target” and “competitor”) objects and other distractors. Upon re-mention of the target, we analysed eye movements to the competitor. In Experiment 1, we observed semantic competition only when targets and competitors were described in the same location; in Experiment 2, we observed visual competition regardless of context. We interpret these results as consistent with context-dependent approaches, such that spatial narrative context dampens accessibility for semantic but not visual information in the visual world.
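As an illustration of the kind of eye-movement analysis such visual world studies rely on, the sketch below aggregates competitor fixations after target re-mention and splits them by narrative location condition. The column names, analysis window, and data layout are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch of a visual-world competition analysis: for each trial, the
# proportion of eye-tracking samples on the competitor object within a window
# after target re-mention, then averaged per narrative-location condition.
# Column names, the window, and the data layout are illustrative assumptions.
import pandas as pd

def competitor_fixation_proportions(samples: pd.DataFrame,
                                    window=(200, 1000)) -> pd.DataFrame:
    """samples: one row per eye-tracking sample with columns 'condition'
    ('same' / 'separate' location), 'trial', 'time_ms' (relative to target
    word onset), and 'roi' ('target', 'competitor', 'distractor', 'none')."""
    in_window = samples[samples["time_ms"].between(*window)]
    # Mark each sample as a competitor fixation (1) or not (0).
    in_window = in_window.assign(
        on_competitor=(in_window["roi"] == "competitor").astype(int))
    # Average within trials first, then summarize across trials per condition.
    per_trial = in_window.groupby(["condition", "trial"])["on_competitor"].mean()
    return per_trial.groupby("condition").agg(["mean", "sem"])
```

The resulting condition means are the sort of quantity that would be compared to test whether competitor looks differ between same- and separate-location narratives.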

2013 ◽  
Vol 21 (3) ◽  
pp. 803-808 ◽  
Author(s):  
Rodolfo Bernal-Gamboa ◽  
Juan M. Rosas ◽  
José E. Callejas-Aguilera

2018 ◽  
Vol 30 (11) ◽  
pp. 1590-1605 ◽  
Author(s):  
Alex Clarke ◽  
Barry J. Devereux ◽  
Lorraine K. Tyler

Object recognition requires dynamic transformations of low-level visual inputs into complex semantic representations. Although this process depends on the ventral visual pathway, we lack an incremental account from low-level inputs to semantic representations and the mechanistic details of these dynamics. Here we combine computational models of vision and semantics and test the output of the incremental model against patterns of neural oscillations recorded with magnetoencephalography in humans. Representational similarity analysis showed that visual information was represented in low-frequency activity throughout the ventral visual pathway, and that semantic information was represented in theta activity. Furthermore, directed connectivity analyses showed that visual information travels through feedforward connections, whereas it is transformed into semantic representations through feedforward and feedback activity centered on the anterior temporal lobe. Our research highlights that the complex transformations between visual and semantic information are driven by feedforward and recurrent dynamics, resulting in object-specific semantics.
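As a rough illustration of the representational similarity analysis the abstract refers to, the sketch below rank-correlates a model dissimilarity matrix with neural dissimilarity matrices computed at each time point. The array shapes, distance metrics, and variable names are assumptions for illustration, not the authors' pipeline.

```python
# Minimal sketch of representational similarity analysis (RSA): a model
# dissimilarity matrix (e.g., from a vision or semantic model) is correlated
# with neural dissimilarity matrices computed at each time point from
# band-limited MEG activity. Shapes and metrics are illustrative assumptions.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rsa_timecourse(neural, model_rdm):
    """neural: array of shape (n_items, n_sensors, n_times), e.g. low-frequency
    or theta power per object; model_rdm: condensed dissimilarity vector over
    the same n_items."""
    n_items, _, n_times = neural.shape
    rho = np.empty(n_times)
    for t in range(n_times):
        # Neural RDM at this time point: pairwise distances between item patterns.
        neural_rdm = pdist(neural[:, :, t], metric="correlation")
        # Second-order similarity: rank-correlate neural and model RDMs.
        rho[t], _ = spearmanr(neural_rdm, model_rdm)
    return rho
```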


2003 ◽  
Vol 15 (1) ◽  
pp. 136-151 ◽  
Author(s):  
Ela I. Olivares ◽  
Jaime Iglesias ◽  
Socorro Rodríguez-Holguín

The N400 brain event-related potential (ERP) is a mismatch negativity originally found in response to semantic incongruences of a linguistic nature and is used paradigmatically to investigate memory organization in various domains of information, including that of faces. In the present study, we analyzed different mismatch negativities evoked in N400-like paradigms related to recognition of newly learned faces with or without associated verbal information. ERPs were compared in the following conditions: (1) mismatching features (eyes-eyebrows) using a facial context corresponding to the faces learned without associated verbal information (“pure” intradomain facial processing); (2) mismatching features using a facial context corresponding to the faces learned with associated occupations and proper names (“nonpure” intradomain facial processing); (3) mismatching occupations using a facial context (cross-domain processing); and (4) mismatching names using an occupation context (intradomain verbal processing). Results revealed that mismatching stimuli in the four conditions elicited a mismatch negativity analogous to the N400 but with different timing and topographical patterns. The onset of the mismatch negativity occurred earliest in Conditions 1 and 2, followed by Condition 4, and latest in Condition 3. The negativity had the shortest duration in Condition 1 and the longest duration in Condition 3. Bilateral parietal activity was confirmed in all conditions, in addition to a predominant right posterior temporal localization in Condition 1, a predominant right frontal localization in Condition 2, an occipital localization in Condition 3, and a more widely distributed (although with posterior predominance) localization in Condition 4. These results support the existence of multiple N400s, and particularly of a nonlinguistic N400 related to purely visual information, which can be evoked by facial structure processing in the absence of verbal-semantic information.
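For readers unfamiliar with how the onset and duration of a mismatch negativity can be quantified, here is a minimal sketch based on a mismatch-minus-match difference wave with a sustained-threshold criterion. The threshold, run length, and single-electrode layout are illustrative assumptions rather than the analysis used in the paper.

```python
# Minimal sketch: estimate onset latency and duration of a mismatch negativity
# from condition-averaged ERPs by finding the first sustained stretch where the
# mismatch-minus-match difference wave stays below a negative threshold.
# Threshold, run length, and data layout are assumptions for illustration.
import numpy as np

def negativity_onset_and_duration(match, mismatch, times, thresh_uv=-1.0, min_run=25):
    """match, mismatch: averaged ERPs of shape (n_times,) in microvolts at one
    electrode; times: time points in ms; min_run: number of consecutive samples
    the difference must stay below thresh_uv to count as a sustained effect."""
    diff = mismatch - match                      # difference wave
    below = diff < thresh_uv                     # candidate negative samples
    run = 0
    onset_idx = None
    for i, flag in enumerate(below):
        run = run + 1 if flag else 0
        if run == min_run and onset_idx is None:
            onset_idx = i - min_run + 1          # start of the sustained run
    if onset_idx is None:
        return None, 0.0
    # Duration: length of the contiguous below-threshold stretch from onset.
    end_idx = onset_idx
    while end_idx < len(below) and below[end_idx]:
        end_idx += 1
    return times[onset_idx], times[end_idx - 1] - times[onset_idx]
```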


2008 ◽  
Vol 20 (7) ◽  
pp. 1235-1249 ◽  
Author(s):  
Roel M. Willems ◽  
Aslı Özyürek ◽  
Peter Hagoort

Understanding language always occurs within a situational context and, therefore, often implies combining streams of information from different domains and modalities. One such combination is that of spoken language and visual information, which are perceived together in a variety of ways during everyday communication. Here we investigate whether and how words and pictures differ in terms of their neural correlates when they are integrated into a previously built-up sentence context. This is assessed in two experiments looking at the time course (measuring event-related potentials, ERPs) and the locus (using functional magnetic resonance imaging, fMRI) of this integration process. We manipulated the ease of semantic integration of a word and/or picture into a previous sentence context to increase the semantic load of processing. In the ERP study, an increased semantic load led to an N400 effect that was similar for pictures and words in terms of latency and amplitude. In the fMRI study, we found overlapping activations for both picture and word integration in the left inferior frontal cortex. Specific activations for the integration of a word were observed in the left superior temporal cortex. We conclude that despite obvious differences in representational format, semantic information coming from pictures and words is integrated into a sentence context in similar ways in the brain. This study adds to the growing insight that the language system incorporates (semantic) information coming from linguistic and extralinguistic domains with the same neural time course and by recruiting overlapping brain areas.
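The overlap result in the fMRI study can be pictured as a simple conjunction of thresholded statistical maps; the sketch below marks voxels significant in both the word-integration and picture-integration contrasts. The threshold value, map shapes, and variable names are assumptions, not the authors' analysis settings.

```python
# Minimal sketch of an overlap ("conjunction") analysis: voxels that survive
# thresholding in both the word-integration and picture-integration contrasts
# are counted as shared. Threshold and names are illustrative assumptions.
import numpy as np

def conjunction_mask(word_tmap, picture_tmap, t_thresh=3.1):
    """word_tmap, picture_tmap: voxelwise t-statistic maps of identical shape
    for hard-vs-easy integration of a word or a picture into the sentence
    context. Returns a boolean mask of voxels significant in both contrasts."""
    word_sig = word_tmap > t_thresh
    picture_sig = picture_tmap > t_thresh
    return word_sig & picture_sig            # conjunction: active in both maps

# Word-specific voxels would then be those significant for words but outside
# the shared set, e.g. (word_tmap > 3.1) & ~conjunction_mask(word_tmap, picture_tmap).
```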


Author(s):  
Lisanne van Weelden ◽  
Joost Schilperoord ◽  
Marc Swerts ◽  
Diane Pecher

Visual information contributes fundamentally to the process of object categorization. The present study investigated whether the degree of activation of visual information in this process depends on the contextual relevance of this information. We used the release-from-proactive-interference (PI-release) paradigm. In four experiments, we manipulated the information by which objects could be categorized and subsequently retrieved from memory. The pattern of PI-release showed that if objects could be stored and retrieved by both (non-perceptual) semantic and (perceptual) shape information, then shape information was overruled by semantic information. If, however, semantic information could not be (satisfactorily) used to store and retrieve objects, then objects were stored in memory in terms of their shape. The latter effect was strongest for objects from identical semantic categories.


2021 ◽  
Author(s):  
Matthew David Weaver

People are constantly confronted by a barrage of visual information. Visual attention is the crucial mechanism that selects, for further processing, the subsets of information that are most behaviourally relevant, allowing us to function effectively within our everyday environment. This thesis explored how semantic information (i.e., information which has meaning) encountered within the environment influences the selective orienting of visual attention. Past research has shown that semantic information does affect the orienting of attention, but the processes by which it does so remain unclear. The extent of semantic influence on the visual attention system was determined by parsing visual orienting into the tractable components of covert and overt orienting, and the capture and hold process stages therein. This thesis consisted of a series of experiments which were designed, utilising well-established paradigms and semantic manipulations in concert with eye-tracking techniques, to test whether the capture and hold of either overt or covert forms of visual attention were influenced by semantic information. Taking the main findings across all experiments together, the following conclusions were drawn. 1) Semantic information differentially influences covert and overt attentional orienting processes. 2) The capture and hold of covert attention is generally uninfluenced by semantic information. 3) Semantic information briefly encountered in the environment can facilitate or prime action independent of covert attentional orienting. 4) Overt attention can be both preferentially captured and held by semantically salient information encountered in visual environments. The visual attentional system thus appears to have a complex relationship with semantic information encountered in the visual environment. Semantic information has a differential influence on selective orienting processes that depends on the form of orienting employed and the range of circumstances under which attentional selection takes place.


2020 ◽  
Vol 34 (07) ◽  
pp. 12943-12950
Author(s):  
Zhaolong Zhang ◽  
Yuejie Zhang ◽  
Rui Feng ◽  
Tao Zhang ◽  
Weiguo Fan

Zero-Shot Sketch-based Image Retrieval (ZS-SBIR) has been proposed recently, putting traditional Sketch-based Image Retrieval (SBIR) under the setting of zero-shot learning. Dealing with the challenges of both SBIR and zero-shot learning makes it a more difficult task. Previous works mainly focus on utilizing one kind of information, i.e., either visual information or semantic information. In this paper, we propose SketchGCN, a model built on a graph convolutional network that simultaneously considers both visual and semantic information. Our model can thus effectively narrow the domain gap and transfer knowledge. Furthermore, we generate semantic information from visual information using a Conditional Variational Autoencoder, rather than only mapping visual features back to the semantic space, which enhances the generalization ability of our model. In addition, a feature loss, a classification loss, and a semantic loss are introduced to optimize the proposed SketchGCN model. Our model achieves good performance on the challenging Sketchy and TU-Berlin datasets.
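To make the architecture concrete, here is a minimal PyTorch sketch of the ingredients the abstract names: a graph convolution over nodes carrying visual and semantic features, a conditional variational autoencoder generating semantic vectors from visual ones, and the three losses (feature, classification, semantic). Layer sizes, adjacency construction, and loss weighting are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of the SketchGCN-style ingredients described in the abstract.
# All dimensions, the adjacency matrix, and the loss weights are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphConv(nn.Module):
    """One propagation step: X' = ReLU(A_hat X W), with A_hat a normalized adjacency."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, a_hat):
        # x: (n_nodes, in_dim) visual+semantic node features; a_hat: (n_nodes, n_nodes)
        return F.relu(a_hat @ self.lin(x))

class ConditionalVAE(nn.Module):
    """Generate a semantic vector conditioned on a visual feature."""
    def __init__(self, vis_dim, sem_dim, z_dim=64):
        super().__init__()
        self.enc = nn.Linear(vis_dim + sem_dim, 2 * z_dim)   # -> (mu, logvar)
        self.dec = nn.Linear(z_dim + vis_dim, sem_dim)

    def forward(self, vis, sem):
        mu, logvar = self.enc(torch.cat([vis, sem], dim=-1)).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        recon = self.dec(torch.cat([z, vis], dim=-1))
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return recon, kl

def total_loss(node_emb, target_emb, logits, labels, sem_recon, sem_true, kl):
    feature_loss = F.mse_loss(node_emb, target_emb)         # align graph output with target features
    classification_loss = F.cross_entropy(logits, labels)   # keep embeddings class-discriminative
    semantic_loss = F.mse_loss(sem_recon, sem_true) + kl    # CVAE reconstruction + KL terms
    return feature_loss + classification_loss + semantic_loss
```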

