Tuning Perceptual Competition

2010 ◽  
Vol 103 (2) ◽  
pp. 1057-1065 ◽  
Author(s):  
Edmund Wascher ◽  
Christian Beste

The ability to notice relevant visual information is assumed to be determined both by the relative salience of relevant information compared with distractors within a given display and by the voluntary allocation of attention toward intended goals. Different theories have claimed that one or the other of these two mechanisms dominates stimulus processing. A central question in this context is to what degree, and how, task-irrelevant signals can influence the processing of target information. In the present study, participants had to detect a luminance change under various conditions, among them against an irrelevant orientation change. The saliency of the latter was systematically varied and was found to be predictive of the proportion of detected targets when relevant and irrelevant information were spatially separated, but not when they overlapped. The weighting of, and competition among, incoming signals was reflected in the amplitude of the N1pc component of the event-related potential. The initial orientation of attention toward the irrelevant element had to be followed by a reallocation process, reflected in an N2pc. The control of conflicting information additionally evoked a fronto-central N2 that varied with the amount of competition induced. Thus, the data support models in which attention is a dynamic interplay of bottom-up and top-down processes that may be mediated via a common dynamic neural network.
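
The N1pc and N2pc reported here are lateralized difference waves: activity at posterior electrodes contralateral to the attended element minus activity at the ipsilateral counterparts, averaged over a component-specific time window. A minimal sketch of that computation, assuming epoched, baseline-corrected data in a NumPy array; the electrode pair, time window, and simulated data are illustrative only, not taken from the study:

```python
import numpy as np

def lateralized_amplitude(epochs, stim_side, times, window, left_ch, right_ch):
    """Mean contralateral-minus-ipsilateral amplitude in a time window.

    epochs    : (n_trials, n_channels, n_times) baseline-corrected EEG
    stim_side : (n_trials,) 'left'/'right', side of the lateral element
    times     : (n_times,) sample times in seconds
    window    : (start, end) analysis window in seconds
    left_ch, right_ch : indices of a posterior electrode pair (e.g., PO7/PO8)
    """
    in_win = (times >= window[0]) & (times <= window[1])
    is_left = (stim_side == "left")[:, None]
    # The contralateral channel is the one opposite the stimulated side.
    contra = np.where(is_left, epochs[:, right_ch, :], epochs[:, left_ch, :])
    ipsi = np.where(is_left, epochs[:, left_ch, :], epochs[:, right_ch, :])
    return (contra - ipsi)[:, in_win].mean()  # mean lateralized amplitude

# Example with simulated data: 100 trials, 2 channels, 500 samples.
rng = np.random.default_rng(0)
epochs = rng.normal(size=(100, 2, 500))
side = rng.choice(["left", "right"], size=100)
times = np.linspace(-0.1, 0.4, 500)
print(lateralized_amplitude(epochs, side, times, (0.18, 0.30), 0, 1))
```

An N1pc would be quantified the same way in an earlier window; larger amplitudes then index stronger weighting toward the corresponding display element.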

2005 ◽  
Vol 16 (3) ◽  
pp. 228-235 ◽  
Author(s):  
Sharon E. Guttman ◽  
Lee A. Gilroy ◽  
Randolph Blake

When the senses deliver conflicting information, vision dominates spatial processing, and audition dominates temporal processing. We asked whether this sensory specialization results in cross-modal encoding of unisensory input into the task-appropriate modality. Specifically, we investigated whether visually portrayed temporal structure receives automatic, obligatory encoding in the auditory domain. In three experiments, observers judged whether the changes in two successive visual sequences followed the same or different rhythms. We assessed temporal representations by measuring the extent to which both task-irrelevant auditory information and task-irrelevant visual information interfered with rhythm discrimination. Incongruent auditory information significantly disrupted task performance, particularly when presented during encoding; by contrast, varying the nature of the rhythm-depicting visual changes had minimal impact on performance. Evidently, the perceptual system automatically and obligatorily abstracts temporal structure from its visual form and represents this structure using an auditory code, resulting in the experience of “hearing visual rhythms.”


2020 ◽  
Author(s):  
F. Di Bello ◽  
S. Ben Hadj Hassen ◽  
E. Astrand ◽  
S. Ben Hamed

In everyday life, we continuously struggle to focus on our current goals while avoiding distraction. Attention is the neuro-cognitive process devoted to the selection of behaviorally relevant sensory information while preventing distraction by irrelevant information. Visual selection can be implemented by both long-term (learning-based spatial prioritization) and short-term (dynamic spatial attention) mechanisms. Distraction, in turn, can be prevented proactively, by strategically prioritizing task-relevant information at the expense of irrelevant information, or reactively, by actively suppressing the processing of distractors. The distinctive neuronal signature of each of these four processes is largely unknown. Likewise, how selection and suppression mechanisms interact to drive perception has never been explored, either at the behavioral or at the neuronal level. Here, we apply machine-learning decoding methods to prefrontal cortical (PFC) activity to monitor dynamic spatial attention with unprecedented spatial and temporal resolution. This leads to several novel observations. First, we identify independent behavioral and neuronal signatures for learning-based attention prioritization and dynamic attentional selection. Second, we identify distinct behavioral and neuronal signatures for proactive and reactive suppression mechanisms. We find that while distracting task-relevant information is suppressed proactively, task-irrelevant information is suppressed reactively. Critically, we show that distractor suppression, whether proactive or reactive, strongly depends on both learning-based attention prioritization and dynamic attentional selection. Overall, we thus provide a unified neuro-cognitive framework describing how the prefrontal cortex implements spatial selection and distractor suppression in order to flexibly optimize behavior in dynamic environments.
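
The abstract specifies only that "machine-learning decoding methods" were applied to PFC activity. One common instantiation is a cross-validated linear classifier over trial-wise population activity; the sketch below, with simulated firing rates and a regularized logistic regression, illustrates that general approach and is not the authors' pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Simulated stand-in for PFC data: 50 units recorded on 400 trials,
# with the attended location (one of four) as the label to decode.
n_trials, n_units = 400, 50
attended_loc = rng.integers(0, 4, size=n_trials)
tuning = rng.normal(size=(4, n_units))  # per-location tuning of each unit
rates = tuning[attended_loc] + rng.normal(scale=2.0, size=(n_trials, n_units))

# Cross-validated linear decoder: z-score each unit, then multinomial
# logistic regression on the attended location.
decoder = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
accuracy = cross_val_score(decoder, rates, attended_loc, cv=5)
print(f"decoding accuracy: {accuracy.mean():.2f} (chance = 0.25)")
```

Applying such a decoder in a sliding window over the trial, rather than to whole-trial activity, is what yields a time-resolved estimate of the attentional locus of the kind the abstract describes.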


2017 ◽  
Vol 29 (4) ◽  
pp. 628-636 ◽  
Author(s):  
Tobias Katus ◽  
Anna Grubert ◽  
Martin Eimer

Attention maintains task-relevant information in working memory (WM) in an active state. We investigated whether the attention-based maintenance of stimulus representations that were encoded through different modalities is flexibly controlled by top–down mechanisms that depend on behavioral goals. Distinct components of the ERP reflect the maintenance of tactile and visual information in WM. We concurrently measured tactile (tCDA) and visual contralateral delay activity (CDA) to track the attentional activation of tactile and visual information during multimodal WM. Participants simultaneously received tactile and visual sample stimuli on the left and right sides and memorized all stimuli on one task-relevant side. After 500 msec, an auditory retrocue indicated whether the sample set's tactile or visual content had to be compared with a subsequent test stimulus set. tCDA and CDA components that emerged simultaneously during the encoding phase were consistently reduced after retrocues that marked the corresponding (tactile or visual) modality as task-irrelevant. The absolute size of cue-dependent modulations was similar for the tCDA/CDA components and did not depend on the number of tactile/visual stimuli that were initially encoded into WM. Our results suggest that modality-specific maintenance processes in sensory brain regions are flexibly modulated by top–down influences that optimize multimodal WM representations for behavioral goals.


Perception ◽  
2017 ◽  
Vol 46 (12) ◽  
pp. 1412-1426 ◽  
Author(s):  
Elmeri Syrjänen ◽  
Marco Tullio Liuzza ◽  
Håkan Fischer ◽  
Jonas K. Olofsson

Disgust is a core emotion that evolved to detect and avoid the ingestion of poisonous food, as well as contact with pathogens and other harmful agents. Previous research has shown that multisensory presentation of olfactory and visual information may strengthen the processing of disgust-relevant information. However, it is not known whether these findings extend to dynamic facial stimuli that change from neutral to emotionally expressive, or whether individual differences in trait body odor disgust influence the processing of disgust-related information. In this preregistered study, we tested whether the classification of dynamic facial expressions as happy or disgusted, and the emotional evaluation of these expressions, would be affected by individual differences in body odor disgust sensitivity and by exposure to a sweat-like, negatively valenced odor (valeric acid), as compared with a soap-like, positively valenced odor (lilac essence) or a no-odor control. Using Bayesian hypothesis testing, we found evidence that odors do not affect the recognition of emotion in dynamic faces, even when body odor disgust sensitivity was used as a moderator. However, an exploratory analysis suggested that an unpleasant odor context may cause faster response times (RTs) to faces, independent of their emotional expression. Our results further our understanding of the scope and limits of odor effects on the perception of facial affect, and suggest that further studies should focus on reproducibility, specifying the experimental circumstances under which odor effects on facial expressions are present versus absent.
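
The abstract does not name the software behind its Bayesian hypothesis tests, but evidence for a null odor effect is commonly quantified with a default Bayes factor on the condition contrast. A minimal illustration using pingouin's JZS Bayes factor on simulated, placeholder data:

```python
import numpy as np
import pingouin as pg

rng = np.random.default_rng(2)

# Simulated per-participant classification accuracy under the negative
# odor (valeric acid) versus the no-odor control; a true null effect here.
acc_odor = rng.normal(loc=0.85, scale=0.05, size=40)
acc_control = rng.normal(loc=0.85, scale=0.05, size=40)

result = pg.ttest(acc_odor, acc_control, paired=True)
print(result[["T", "p-val", "BF10"]])
# A BF10 well below 1 (e.g., < 1/3) is conventionally read as evidence
# for the null hypothesis of no odor effect.
```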


Author(s):  
I. Murph ◽  
M. McDonald ◽  
K. Richardson ◽  
M. Wilkinson ◽  
S. Robertson ◽  
...  

Within distracting environments, it is difficult to maintain attentional focus on complex tasks. Cognitive aids can support attention by adding relevant information to the environment, for example via augmented reality (AR). However, there may also be a benefit in removing elements from the environment, such as irrelevant alarms, displays, and conversations. The de-emphasis of distracting elements is a type of AR called Diminished Reality (DR). Although de-emphasizing distraction may help focus on a primary task, it may also reduce situational awareness (SA) of other activities that may become relevant. In the current study, participants will assemble a medical ventilator during a simulated emergency while experiencing varying levels of DR. Participants will also be probed to assess secondary SA. We anticipate that participants will have better accuracy and completion times in the full DR conditions, but that their SA will suffer. Applications include the design of future DR systems and improved training methods.


2014 ◽  
Vol 111 (3) ◽  
pp. 481-487 ◽  
Author(s):  
Arezoo Pooresmaeili ◽  
Dominik R. Bach ◽  
Raymond J. Dolan

Deciding whether a stimulus is the “same” as or “different” from a previously presented one involves integrating incoming sensory information, working memory, and perceptual decision making. Visual selective attention plays a crucial role in selecting the relevant information that informs a subsequent course of action. Previous studies have mainly investigated the role of visual attention during the encoding phase of working memory tasks. In this study, we investigated whether a manipulation of bottom-up attention, achieved by changing stimulus visual salience, impacts later stages of memory-based decisions. In two experiments, we asked subjects to identify whether a stimulus had the same or a different feature to that of a memorized sample. We manipulated the visual salience of the test stimuli by varying a task-irrelevant feature contrast. Subjects chose a visually salient item more often when they looked for matching features and less often when they looked for a nonmatch. This pattern of results indicates that salient items are more likely to be identified as a match. We interpret the findings in terms of capacity limitations at a comparison stage, where a visually salient item is more likely to exhaust resources, leading it to be prematurely parsed as a match.


2019 ◽  
Author(s):  
Roy S Hessels

Gaze – where one looks, how long, and when – plays an essential part in human social behaviour. While many aspects of social gaze have been reviewed, there is no comprehensive review or theoretical framework that describes how gaze to faces supports face-to-face interaction. In this review, I address the following questions: (1) When does gaze need to be allocated to a particular region of a face in order to provide the relevant information for successful interaction; (2) How do humans look at other people, and faces in particular, regardless of whether gaze needs to be directed at a particular region to acquire the relevant visual information; (3) How does gaze support the regulation of interaction? The work reviewed spans psychophysical research, observational research and eye-tracking research in both lab-based and interactive contexts. Based on the literature overview, I sketch a framework for future research based on dynamic systems theory. The framework holds that gaze should be investigated in relation to sub-states of the interaction, encompassing sub-states of the interactors, the content of the interaction as well as the interactive context. The relevant sub-states for understanding gaze in interaction vary over different timescales from microgenesis to ontogenesis and phylogenesis. The framework has important implications for vision science, psychopathology, developmental science and social robotics.


Author(s):  
Thomas Jacobsen ◽  
Erich Schröger

Working memory uses central sound representations as an informational basis. The central sound representation is the temporally and feature-integrated mental representation that corresponds to phenomenal perception. It is used in (higher-order) mental operations and stored in long-term memory. In the bottom-up processing path, the central sound representation can be probed at the level of auditory sensory memory with the mismatch negativity (MMN) of the event-related potential. The present paper reviews a newly developed MMN paradigm for tapping into the processing of speech sound representations. Preattentive vowel categorization based on F1-F2 formant information occurs for speech sounds and complex tones, even under conditions of high variability of the auditory input. However, an additional experiment demonstrated the limits of the preattentive categorization of language-relevant information. It tested whether the system categorizes complex tones containing the F1 and F2 formant components of the vowel /a/ differently from six sounds with nonlanguage-like F1-F2 combinations. From the absence of an MMN in this experiment, it is concluded that no adequate vowel representation was constructed. This shows the limits of preattentive vowel categorization.
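
The MMN itself is a difference wave: the ERP evoked by the rare deviant minus the ERP evoked by the frequent standard, typically assessed at fronto-central sites. A minimal sketch of that subtraction, assuming single-channel epoched data; the time window and variable names are illustrative, not taken from the paper:

```python
import numpy as np

def mmn_amplitude(epochs, is_deviant, times, window=(0.10, 0.25)):
    """Deviant-minus-standard difference wave, averaged in the MMN window.

    epochs     : (n_trials, n_times) EEG from a fronto-central site (e.g., Fz)
    is_deviant : (n_trials,) boolean, True for deviant stimuli
    times      : (n_times,) sample times in seconds
    window     : analysis window in seconds (values here are illustrative)
    """
    deviant_erp = epochs[is_deviant].mean(axis=0)
    standard_erp = epochs[~is_deviant].mean(axis=0)
    diff = deviant_erp - standard_erp  # the MMN difference wave
    in_win = (times >= window[0]) & (times <= window[1])
    return diff[in_win].mean()
```

An amplitude that does not reliably deviate from zero is the “absence of an MMN” from which the authors infer that no adequate vowel representation was constructed.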

