attentional weight
Recently Published Documents


TOTAL DOCUMENTS: 6 (last five years: 2)

H-INDEX: 2 (last five years: 0)

2021 ◽ Vol 3 (1) ◽ pp. 1-46
Author(s): Alexander Krüger, Jan Tünnermann, Lukas Stratmann, Lucas Briese, Falko Dressler, ...

Abstract: As a formal theory, Bundesen’s theory of visual attention (TVA) enables the estimation of several theoretically meaningful parameters involved in attentional selection and visual encoding. To date, TVA has almost exclusively been used in restricted empirical scenarios, such as whole and partial report, and with strictly controlled stimulus material. We present a series of experiments in which we test whether the advantages of TVA can be exploited in more realistic scenarios with varying degrees of stimulus control. These include brief experimental sessions conducted on different mobile devices, computer games, and a driving simulator. Overall, six experiments demonstrate that the TVA parameters for processing capacity and attentional weight can be measured with sufficient precision in less controlled scenarios, and that the results do not deviate strongly from typical laboratory results, although some systematic differences were found.
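
For readers unfamiliar with the parameters mentioned above, the sketch below illustrates TVA’s standard rate equation, v(x, i) = η(x, i) · β_i · w_x / Σ_z w_z with w_x = Σ_j η(x, j) · π_j, and the resulting processing capacity C. All numerical values are invented for illustration, not estimates from the experiments reported here.

```python
import numpy as np

# Illustrative sketch of TVA's rate equation (Bundesen, 1990):
#   v(x, i) = eta(x, i) * beta_i * w_x / sum_z w_z,  with  w_x = sum_j eta(x, j) * pi_j.
# All values below are made up for illustration, not fitted parameter estimates.

eta = np.array([[40.0,  5.0],   # sensory evidence eta(x, i): 3 objects x 2 categories (Hz)
                [10.0, 35.0],
                [20.0, 20.0]])
beta = np.array([0.7, 0.3])     # decision biases beta_i
pi = np.array([1.0, 0.2])       # pertinence values pi_j (task relevance of each category)

w = eta @ pi                               # attentional weights w_x
v = eta * beta * (w / w.sum())[:, None]    # encoding rates v(x, i)
C = v.sum()                                # processing capacity C: sum of all rates

# Probability that object x is encoded into VSTM within exposure duration t,
# assuming an exponential race that starts at the perceptual threshold t0:
t, t0 = 0.080, 0.020                       # seconds, illustrative
p_encoded = 1.0 - np.exp(-v.sum(axis=1) * (t - t0))
print(C, p_encoded)
```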


2018
Author(s): K. Braunlich, B. C. Love

Abstract: Through selective attention, decision-makers can learn to ignore behaviorally irrelevant stimulus dimensions. This can improve learning and increase the perceptual discriminability of relevant stimulus information. To account for this effect, popular contemporary cognitive models of categorization typically include attentional parameters, which provide information about the importance of each stimulus dimension in decision-making. The effect of these parameters on psychological representation is often described geometrically, such that perceptual differences over relevant psychological dimensions are accentuated (or stretched) and differences over irrelevant dimensions are down-weighted (or compressed). In sensory and association cortex, representations of stimulus features are known to covary with their behavioral relevance. Although this implies that neural representational space might closely resemble that hypothesized by formal categorization theory, to date, attentional effects in the brain have been demonstrated through powerful experimental manipulations (e.g., contrasts between relevant and irrelevant features). This approach sidesteps the role of idiosyncratic conceptual knowledge in guiding attention to useful information sources. To bridge this divide, we used formal categorization models, which were fit to behavioral data, to make inferences about the concepts and strategies used by individual participants during decision-making. We found that when greater attentional weight was devoted to a particular visual feature (e.g., “color”), its value (e.g., “red”) was more accurately decoded from occipitotemporal cortex. We additionally found that this effect was sufficiently sensitive to reflect individual differences in conceptual knowledge. The results indicate that occipitotemporal stimulus representations are embedded within a space closely resembling that proposed by classic categorization models.
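
As a concrete illustration of the stretching and compression described above, the sketch below implements attention-weighted distance and similarity in the style of exemplar models such as the Generalized Context Model; the stimulus coordinates and the parameters c and r are invented for illustration and are not taken from this study.

```python
import numpy as np

# Attention-weighted similarity as formalized in exemplar models (e.g., the GCM):
#   d(x, y) = (sum_k w_k * |x_k - y_k|**r) ** (1/r),  with sum_k w_k = 1
#   s(x, y) = exp(-c * d(x, y))
# Parameter values here are illustrative.

def weighted_distance(x, y, w, r=2.0):
    return np.sum(w * np.abs(x - y) ** r) ** (1.0 / r)

def similarity(x, y, w, c=2.0, r=2.0):
    return np.exp(-c * weighted_distance(x, y, w, r))

x = np.array([0.2, 0.8])   # stimulus with two dimensions, e.g., color and shape
y = np.array([0.6, 0.8])   # a stored exemplar differing only on dimension 0

# Attending mostly to dimension 0 "stretches" the difference along it (lower similarity) ...
print(similarity(x, y, w=np.array([0.9, 0.1])))
# ... while attending to dimension 1 "compresses" it, making x and y look alike.
print(similarity(x, y, w=np.array([0.1, 0.9])))
```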


Author(s): Sebastian Dummel, Ronald Hübner

Abstract: Recent research has shown that even non-salient stimuli (colored circles) can gain attentional weight when they have been loaded with value through previous reward learning. The present study examined such value-based attentional weighting with intrinsically rewarding food stimuli. Different snacks were assumed to have different values for people due to individual food preferences. Participants indicated their preferences for various snacks and then performed a flanker task with these snacks: they had to categorize a target snack as either sweet or salty; irrelevant flanker snacks were either compatible or incompatible with the target category. Results of a linear mixed-effects model show that the effect of flanker compatibility on participants’ performance (response times) increased with the participants’ preference for the flanking snacks. This shows, for the first time, that attentional weighting in a flanker task with naturalistic stimuli (snacks) is modulated by participants’ preferences for the flankers.
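
The snippet below is a hypothetical reconstruction of this kind of analysis in Python with statsmodels: response times modeled by flanker compatibility, flanker preference, and their interaction, with random intercepts per participant. The data file and column names are assumptions for illustration, not the authors’ materials.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data layout (not the authors' actual file or variable names):
# columns rt, compatibility, flanker_pref, subject
df = pd.read_csv("flanker_snacks.csv")

model = smf.mixedlm(
    "rt ~ compatibility * flanker_pref",  # key term: compatibility x preference interaction
    data=df,
    groups=df["subject"],                 # random intercept for each participant
)
result = model.fit()
print(result.summary())
```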


2009 ◽ Vol 21 (9) ◽ pp. 1653-1669
Author(s): Thomas Töllner, Klaus Gramann, Hermann J. Müller, Martin Eimer

Processing of a given target is facilitated when it is defined within the same perceptual modality (e.g., visual–visual) as on the previous trial, compared to a different one (e.g., tactile–visual) [Spence, C., Nicholls, M., & Driver, J. The cost of expecting events in the wrong sensory modality. Perception & Psychophysics, 63, 330–336, 2001]. The present study was designed to identify the electrocortical (EEG) correlates underlying this “modality shift effect.” Participants had to discriminate (via foot-pedal responses) the modality of the target stimulus, visual versus tactile (Experiment 1), or respond based on the target-defining features (Experiment 2). Thus, modality changes were associated with response changes in Experiment 1, but dissociated from them in Experiment 2. Both experiments confirmed previous behavioral findings, with slower discrimination times for modality-change relative to repetition trials. Independently of the target-defining modality, spatial stimulus characteristics, and the motor response, this effect was mirrored by enhanced amplitudes of the anterior N1 component. These findings are explained in terms of a generalized “modality-weighting” account, which extends the “dimension-weighting” account proposed by Found and Müller [Searching for unknown feature targets on more than one dimension: Investigating a “dimension-weighting” account. Perception & Psychophysics, 58, 88–101, 1996] for the visual modality. On this account, the anterior N1 enhancement is assumed to reflect the detection of a modality change and the initiation of a readjustment of attentional weight-setting from the old to the new target-defining modality, in order to optimize target detection.
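
As a toy illustration of the weight readjustment posited by such weighting accounts (not a model from the paper), the sketch below shifts attentional weight toward the current target-defining modality on every trial; the update rule and its rate parameter are illustrative assumptions, but the qualitative pattern matches the reported effect: larger costs on modality-change than on repetition trials.

```python
# Toy sketch of weight readjustment between sensory modalities. With two
# modalities, the update conserves the total weight (1 - w_target == w_other).
weights = {"visual": 0.5, "tactile": 0.5}

def trial(target_modality, weights, shift_rate=0.4):
    # Detection benefits from the weight already on the target modality:
    rt_cost = 1.0 - weights[target_modality]   # higher weight -> smaller cost
    # Readjust: shift a fraction of the remaining weight toward the target modality.
    for m in weights:
        if m == target_modality:
            weights[m] += shift_rate * (1.0 - weights[m])
        else:
            weights[m] -= shift_rate * weights[m]
    return rt_cost

print(trial("visual", weights))   # first trial: neutral weighting
print(trial("visual", weights))   # modality repetition: lower cost
print(trial("tactile", weights))  # modality change: higher cost
```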

