Spatial Attention and the Visual World: Attention and Vector Sum Coding in Spatial Language

1999 ◽  
Author(s):  
Laura A. Carlson-Radvansky


2020 ◽  
Vol 34 (04) ◽  
pp. 3684-3692
Author(s):  
Eric Crawford ◽  
Joelle Pineau

The ability to detect and track objects in the visual world is a crucial skill for any intelligent agent, as it is a necessary precursor to any object-level reasoning process. Moreover, it is important that agents learn to track objects without supervision (i.e. without access to annotated training videos) since this will allow agents to begin operating in new environments with minimal human assistance. The task of learning to discover and track objects in videos, which we call unsupervised object tracking, has grown in prominence in recent years; however, most architectures that address it still struggle to deal with large scenes containing many objects. In the current work, we propose an architecture that scales well to the large-scene, many-object setting by employing spatially invariant computations (convolutions and spatial attention) and representations (a spatially local object specification scheme). In a series of experiments, we demonstrate a number of attractive features of our architecture; most notably, that it outperforms competing methods at tracking objects in cluttered scenes with many objects, and that it can generalize well to videos that are larger and/or contain more objects than videos encountered during training.
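The core idea of spatially invariant object specification can be illustrated with a toy sketch: a fully convolutional network that emits, for every cell of a feature grid, an object presence probability, a bounding box expressed relative to that cell, and an appearance latent. This is only an illustration of the general principle described in the abstract; the layer sizes, latent dimensions, and output parameterization below are illustrative choices, not the authors' actual architecture or hyperparameters.

```python
import torch
import torch.nn as nn

class SpatiallyInvariantProposer(nn.Module):
    """Toy sketch of spatially invariant object discovery: every grid cell
    gets a presence logit, a box specified relative to the cell, and an
    appearance latent. All sizes here are placeholders."""

    def __init__(self, latent_dim=16):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        # Per-cell outputs: 1 presence logit + 4 box parameters + latent_dim appearance dims.
        self.head = nn.Conv2d(64, 1 + 4 + latent_dim, kernel_size=1)

    def forward(self, frames):
        """frames: (batch, 3, H, W) video frames -> per-cell object maps."""
        features = self.backbone(frames)
        out = self.head(features)
        presence = torch.sigmoid(out[:, :1])    # probability that a cell owns an object
        box = torch.sigmoid(out[:, 1:5])        # (cx, cy, w, h), relative to the cell
        appearance = out[:, 5:]                 # free-form appearance latent
        return presence, box, appearance
```

Because every computation is convolutional and each box is expressed relative to its own cell, the same weights apply unchanged to larger frames with more objects, which is the property that allows this kind of model to generalize beyond the scene sizes seen during training.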


2018 ◽  
Author(s):  
Thomas Kluth

Are humans able to split their attentional focus? This master's thesis tries to answer this question by proposing several modifications to the Attentional Vector Sum (AVS) model (Regier & Carlson, 2001). The AVS model is a computational cognitive model of spatial language use in which visual attention plays a central role. Carlson, Regier, Lopez, and Corrigan (2006) developed a modification of the AVS model that integrates effects of world knowledge (the functionality of spatially related objects). This modified model assumes that people are able to split their visual spatial attention. However, it is debated whether this assumption holds true (e.g., Jans, Peters, & De Weerd, 2010). Thus, this thesis investigates the assumption in the domain of spatial language use by proposing and assessing alternative model modifications that do not assume split attention. Based on available empirical data, the results favor a uni-focal distribution of attention over a multi-focal attentional distribution. At the same time, the results cast doubt on the proper modeling of functional aspects of spatial language use, as the AVS model (which does not consider functionality) performs surprisingly well on most data sets. (See https://doi.org/10.1007/978-3-319-11215-2_6 for a condensed version of this work.)
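The vector-sum mechanism at the heart of the AVS model can be sketched in a few lines: attention spreads over the landmark from an attentional focus, each landmark point votes with the vector pointing from itself to the trajector, and the angular deviation of the attention-weighted sum from upright is mapped onto an acceptability rating for "above". The sketch below is a simplified illustration only; the focus selection rule, attention spread, and the linear gain parameters are placeholders, not the fitted model reported by Regier and Carlson (2001).

```python
import numpy as np

def avs_above_sketch(landmark_points, trajector, lam=1.0, slope=-0.01, intercept=0.95):
    """Simplified sketch of the attentional vector-sum idea for 'above'.
    landmark_points: (N, 2) array of points covering the landmark object.
    trajector:       (2,) location of the located object (y-axis points up).
    Parameter values are illustrative, not fitted values."""
    landmark_points = np.asarray(landmark_points, dtype=float)
    trajector = np.asarray(trajector, dtype=float)

    # 1. Attentional focus: here, the landmark point closest to the trajector
    #    (the full model uses the top-edge point vertically aligned with it).
    dists_to_traj = np.linalg.norm(landmark_points - trajector, axis=1)
    focus = landmark_points[np.argmin(dists_to_traj)]

    # 2. Attention falls off exponentially with distance from the focus;
    #    lam controls how broad or narrow the attentional beam is.
    sigma = max(np.linalg.norm(trajector - focus), 1e-6)
    attention = np.exp(-np.linalg.norm(landmark_points - focus, axis=1) / (lam * sigma))

    # 3. Vector sum: each landmark point votes with the vector to the trajector,
    #    weighted by the attention it receives.
    vectors = trajector - landmark_points
    direction = (attention[:, None] * vectors).sum(axis=0)

    # 4. Angular deviation of the summed vector from upright vertical,
    #    mapped linearly onto an acceptability rating.
    upright = np.array([0.0, 1.0])
    cos_ang = direction @ upright / (np.linalg.norm(direction) + 1e-9)
    deviation = np.degrees(np.arccos(np.clip(cos_ang, -1.0, 1.0)))
    return slope * deviation + intercept
```

For instance, a trajector placed directly over the centre of a symmetric landmark yields a summed vector close to upright and hence a rating near the intercept, while a trajector far off to the side tilts the summed vector and lowers the rating.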


2001 ◽  
Vol 15 (1) ◽  
pp. 22-34 ◽  
Author(s):  
D.H. de Koning ◽  
J.C. Woestenburg ◽  
M. Elton

Migraineurs with aura (MWAs), migraineurs without aura (MWOAs), and controls were measured twice at an interval of 7 days. The first session of recordings and tests for migraineurs was held about 7 hours after a migraine attack. We hypothesized that electrophysiological changes in the posterior cerebral cortex related to visual spatial attention are influenced by the level of arousal in migraineurs with aura, and that this varies over the course of time. ERPs from the active visual attention task showed significant differences between controls and both types of migraine sufferers for the N200, suggesting a common pathophysiological mechanism in migraineurs. Furthermore, MWOAs showed a significant enhancement of the N200 at the second session, indicating the relevance of time of measurement within migraine studies. Finally, MWAs showed significantly enhanced P240 and P300 components at central and parietal cortical sites compared with MWOAs and controls, an effect that appeared to be maintained over both sessions and could be indicative of increased noradrenergic activity in MWAs.


Author(s):  
Pirita Pyykkönen ◽  
Juhani Järvikivi

A visual world eye-tracking study investigated the activation and persistence of implicit causality information in spoken language comprehension. We showed that people infer the implicit causality of verbs as soon as they encounter such verbs in discourse, as predicted by proponents of the immediate focusing account (Greene & McKoon, 1995; Koornneef & Van Berkum, 2006; Van Berkum, Koornneef, Otten, & Nieuwland, 2007). Interestingly, we observed activation of implicit causality information even before people encountered the causal conjunction. However, while implicit causality information persisted as the discourse unfolded, it did not have a privileged role as a focusing cue immediately at the ambiguous pronoun when people were resolving its antecedent. Instead, our study indicated that implicit causality does not affect all referents to the same extent; rather, it interacts with other cues in the discourse, especially when one of the referents is already prominently in focus.

