attentional state
Recently Published Documents

TOTAL DOCUMENTS: 106 (five years: 40)
H-INDEX: 17 (five years: 4)

2022 ◽  
Vol 2 ◽  
Author(s):  
Ivo V. Stuldreher ◽  
Alexandre Merasli ◽  
Nattapong Thammasan ◽  
Jan B. F. van Erp ◽  
Anne-Marie Brouwer

Research on brain signals as indicators of a certain attentional state is moving from laboratory environments to everyday settings. Uncovering the attentional focus of individuals in such settings is challenging because there is usually limited information about real-world events, as well as a lack of data from the real-world context at hand that is correctly labeled with respect to individuals' attentional state. In most approaches, such data are needed to train attention-monitoring models. Here, we investigate whether unsupervised clustering can be combined with physiological synchrony in the electroencephalogram (EEG), electrodermal activity (EDA), and heart rate to automatically identify groups of individuals sharing attentional focus, without using knowledge of the sensory stimuli or the attentional focus of any of the individuals. We used data from an experiment in which 26 participants listened to an audiobook interspersed with emotional sounds and beeps. Thirteen participants were instructed to focus on the narrative of the audiobook and 13 on the interspersed emotional sounds and beeps. We used a broad range of commonly applied dimensionality reduction (ordination) techniques, further referred to as mappings, in combination with unsupervised clustering algorithms to identify the two groups of individuals sharing attentional focus based on physiological synchrony. Analyses were performed using the three modalities EEG, EDA, and heart rate separately, and using all possible combinations of these modalities. The best unimodal results were obtained when applying clustering algorithms to physiological synchrony in EEG, yielding a maximum clustering accuracy of 85%. Even though EDA or heart rate by itself did not yield accuracies significantly above chance level, combining EEG with these measures in a multimodal approach generally resulted in higher classification accuracies than using EEG alone. Additionally, classification results on multimodal data were more consistent across algorithms than on unimodal data, making the choice of algorithm less important. The finding that unsupervised classification into attentional groups is possible supports studies on attentional engagement in everyday settings.
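As a rough illustration of the approach described above (not the authors' exact pipeline), the following sketch computes inter-subject correlations as a physiological synchrony measure, embeds the resulting similarity matrix with one possible mapping, and clusters participants into two groups. All data, names, and parameter choices are placeholders.

```python
# Sketch: cluster participants by physiological synchrony (illustrative, not
# the authors' exact pipeline). Assumes `signals` is an
# (n_subjects, n_samples) array of preprocessed physiological feature traces
# sampled on a common timeline.
import numpy as np
from sklearn.manifold import MDS
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
signals = rng.standard_normal((26, 5000))   # placeholder data, 26 subjects

# Physiological synchrony: pairwise inter-subject correlation.
sync = np.corrcoef(signals)                 # (26, 26) similarity matrix

# Map similarities to distances and embed in 2-D (one of many possible
# "mappings" in the paper's terminology).
dist = 1.0 - sync
np.fill_diagonal(dist, 0.0)
embedding = MDS(n_components=2, dissimilarity="precomputed",
                random_state=0).fit_transform(dist)

# Unsupervised split into the two hypothesized attention groups.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embedding)
print(labels)
```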


Sensors ◽  
2021 ◽  
Vol 21 (24) ◽  
pp. 8205
Author(s):  
Lisa-Marie Vortmann ◽  
Felix Putze

Statistical measurements of eye movement-specific properties, such as fixations, saccades, blinks, or pupil dilation, are frequently used as input features for machine learning algorithms applied to eye tracking recordings. These characteristics are intended to be interpretable aspects of gaze behavior. However, prior research has demonstrated that neural networks trained on implicit representations of raw eye tracking data outperform these traditional techniques. To leverage the strengths and information of both feature sets, we integrated implicit and explicit eye tracking features in a single classification approach. A neural network was adapted to process the heterogeneous input and to predict the internally versus externally directed attention of 154 participants. We compared the accuracies reached by the implicit and the combined features for different window lengths and evaluated the approaches in terms of person- and task-independence. The results indicate that combining implicit and explicit feature extraction techniques for eye tracking data significantly improves classification results for attentional state detection. The attentional state was correctly classified during new tasks with above-chance accuracy, and person-independent classification even outperformed person-dependently trained classifiers in some settings. For future experiments and applications that require eye tracking data classification, we suggest considering implicit data representations in addition to interpretable explicit features.
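A minimal sketch of the kind of heterogeneous-input network described above, assuming a convolutional branch for raw gaze sequences and a dense branch for summary statistics; layer sizes, feature counts, and names are illustrative, not the published architecture.

```python
# Sketch of a two-branch network combining implicit (raw gaze sequence) and
# explicit (summary statistics) eye tracking features. Illustrative only.
import torch
import torch.nn as nn

class HybridGazeClassifier(nn.Module):
    def __init__(self, n_stats=12, hidden=32):
        super().__init__()
        # Implicit branch: learns a representation directly from the raw
        # (x, y, pupil) time series.
        self.seq = nn.Sequential(
            nn.Conv1d(3, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        # Explicit branch: hand-crafted statistics (fixation counts,
        # saccade amplitudes, blink rate, ...).
        self.stats = nn.Sequential(nn.Linear(n_stats, hidden), nn.ReLU())
        # Fused head: internally vs. externally directed attention.
        self.head = nn.Linear(16 + hidden, 2)

    def forward(self, raw_window, stat_features):
        z = torch.cat([self.seq(raw_window), self.stats(stat_features)], dim=1)
        return self.head(z)

model = HybridGazeClassifier()
logits = model(torch.randn(8, 3, 250), torch.randn(8, 12))  # batch of 8 windows
```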


2021 ◽  
Author(s):  
Matthew James Davidson ◽  
James Macdonald ◽  
Nick Yeung

Variability in the detection and discrimination of weak visual stimuli has been linked to prestimulus neural activity. In particular, the power of oscillatory activity in the alpha band (8-12 Hz) has been shown to affect the objective likelihood of stimulus detection, as well as measures of subjective visibility, attention, and decision confidence. We aimed to clarify how prestimulus alpha influences performance and phenomenology by recording simultaneous subjective measures of attention and confidence (Experiment 1), or attention and visibility (Experiment 2), on a trial-by-trial basis in a visual detection task. Across both experiments, prestimulus alpha power was negatively and linearly correlated with the intensity of subjective attention. In contrast to this linear relationship, we observed a quadratic relationship between the strength of prestimulus alpha power and subjective ratings of confidence and visibility. We find that this same quadratic relationship links prestimulus alpha power to the strength of stimulus-evoked responses. Visibility and confidence judgements corresponded to the strength of evoked responses, but confidence, uniquely, incorporated information about attentional state. As such, our findings reveal distinct psychological and neural correlates of metacognitive judgements of attentional state, stimulus visibility, and decision confidence.
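The linear-versus-quadratic distinction can be made concrete with a simple polynomial model comparison. The sketch below uses simulated data and is not the authors' analysis code; variable names and effect sizes are invented for illustration.

```python
# Sketch: contrast a linear fit (attention ratings) with a quadratic fit
# (confidence/visibility ratings) against single-trial prestimulus alpha power.
import numpy as np

rng = np.random.default_rng(1)
alpha = rng.normal(size=500)                    # z-scored prestimulus alpha power
attention = -0.5 * alpha + rng.normal(scale=0.8, size=500)      # linear, negative
confidence = -0.4 * alpha**2 + rng.normal(scale=0.8, size=500)  # quadratic

def compare_fits(power, rating):
    """Residual variance of linear vs. quadratic polynomial fits."""
    out = {}
    for deg in (1, 2):
        coefs = np.polyfit(power, rating, deg)
        out[deg] = (rating - np.polyval(coefs, power)).var()
    return out

print("attention :", compare_fits(alpha, attention))    # degree 2 adds little
print("confidence:", compare_fits(alpha, confidence))   # degree 2 clearly better
```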


2021 ◽  
Author(s):  
Taylor A Chamberlain ◽  
Monica D. Rosenberg

Sustained attention is a critical cognitive function reflected in an individual's whole-brain pattern of fMRI functional connectivity. However, sustained attention is not a purely static trait; rather, attention waxes and wanes over time. Do the functional brain networks that underlie individual differences in sustained attention also underlie changes in attentional state? To investigate, we replicate the finding that a validated connectome-based model of individual differences in sustained attention tracks pharmacologically induced changes in attentional state. Specifically, preregistered analyses revealed that participants exhibited functional connectivity signatures of stronger attention when awake than when under deep sedation with the anesthetic agent propofol. Furthermore, this effect was relatively specific to the predefined sustained attention networks: propofol administration modulated the strength of the sustained attention networks more than it modulated the strength of canonical resting-state networks and of a network defined to predict fluid intelligence, and the functional connections most affected by propofol sedation overlapped with the sustained attention networks. Thus, propofol modulates functional connectivity signatures of sustained attention within individuals. More broadly, these findings underscore the utility of pharmacological intervention in testing both the generalizability and the specificity of network-based models of cognitive function.
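A hedged sketch of the network-strength computation typically used with connectome-based predictive models: mean connectivity in a predefined high-attention edge set minus mean connectivity in a low-attention edge set, compared across states. The edge masks and connectivity matrices below are random placeholders, not the published sustained attention networks.

```python
# Sketch of connectome-based network strength (illustrative placeholders).
import numpy as np

def attention_strength(fc, high_mask, low_mask):
    """Signature strength in one functional connectivity (FC) matrix: mean
    connectivity in the high-attention network minus the low-attention one."""
    return fc[high_mask].mean() - fc[low_mask].mean()

rng = np.random.default_rng(2)
n = 268                                                 # e.g., a 268-node atlas
high_mask = np.triu(rng.random((n, n)) < 0.01, k=1)     # placeholder edge masks
low_mask = np.triu(rng.random((n, n)) < 0.01, k=1) & ~high_mask

fc_awake = np.corrcoef(rng.standard_normal((n, 200)))
fc_propofol = np.corrcoef(rng.standard_normal((n, 200)))

# The paper's prediction: strength should drop under deep sedation.
print(attention_strength(fc_awake, high_mask, low_mask))
print(attention_strength(fc_propofol, high_mask, low_mask))
```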


2021 ◽  
Author(s):  
Cheyenne Wakeland-Hart ◽  
Steven Cao ◽  
Megan deBettencourt ◽  
Wilma A. Bainbridge ◽  
Monica D. Rosenberg

We remember only a fraction of what we see, including images that are highly memorable and those that we encounter during highly attentive states. However, most models of human memory disregard both an image's memorability and an individual's fluctuating attentional state. Here, we build the first model of memory that synthesizes these two disparate factors to predict subsequent image recognition. We combine memorability scores of 1,100 images (Experiment 1, N = 706) with attentional state indexed by response time on a continuous performance task (Experiments 2 and 3, N = 57 total). Image memorability and sustained attentional state explained significant variance in image memory, and a joint model of memory including both factors outperformed models including either factor alone. Furthermore, models including both factors successfully predicted memory in an out-of-sample group. Thus, building models based on individual- and image-specific factors allows directed forecasting of our memories.
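One way to picture such a joint model is a logistic regression on both predictors. The sketch below uses simulated data, and the comparison of single-factor and joint models is illustrative rather than the authors' exact analysis.

```python
# Sketch: predict recognition from image memorability plus encoding-time
# attentional state (indexed by response time). Simulated data only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_trials = 1000
memorability = rng.uniform(0, 1, n_trials)     # per-image memorability score
attention = -rng.normal(size=n_trials)          # e.g., inverted, z-scored RT
p_remember = 1 / (1 + np.exp(-(2 * memorability + 0.8 * attention - 1)))
remembered = rng.random(n_trials) < p_remember

for name, X in {"memorability only": memorability[:, None],
                "attention only": attention[:, None],
                "joint": np.c_[memorability, attention]}.items():
    acc = cross_val_score(LogisticRegression(), X, remembered, cv=5).mean()
    print(f"{name:>17}: {acc:.3f}")             # joint model should win
```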


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Monamie Ringhofer ◽  
Miléna Trösch ◽  
Léa Lansade ◽  
Shinya Yamamoto

When interacting with humans, domesticated species may respond to communicative gestures, such as pointing. However, except in dogs, it is currently unknown whether species comprehend the communicative nature of such cues. Here, we investigated whether horses could follow the pointing of a human informant by evaluating the credibility of the information about a food-hiding place provided by the pointing of two informants. Using an object-choice task, we manipulated the attentional state of the two informants during food-hiding events and thereby differentiated their knowledge about the location of the hidden food. Furthermore, we investigated the horses' visual attention levels towards human behaviour to evaluate the relationship between their motivation and their performance on the task. The results showed that horses that sustained high attention levels could evaluate the credibility of the information and followed the pointing of the informant who knew where food was hidden (Z = −2.281, P = 0.002, n = 36). This suggests that horses are highly sensitive to the attentional state and pointing gestures of humans, and that they perceive pointing as a communicative cue. This study also indicates that motivation for the task should be investigated when determining the socio-cognitive abilities of animals.


2021 ◽  
Author(s):  
Branden J. Bio ◽  
Michael S. A. Graziano

In the attention schema theory, people attribute the property of consciousness to themselves and others because it serves as a schematic model of attention. Most of the existing literature on monitoring the attention of others assumes that people rely primarily on the gaze direction of others. Under that assumption, attention is not represented by a deeper model but is instead reduced mainly to a single, externally visible parameter. Here, we presented subjects with two cues about the attentional state of a face: direction of gaze and emotional expression. We tested whether people relied predominantly on one cue, the other, or both when deciding whether the face was conscious of a nearby object. If the traditional view is correct, the gaze cue should dominate. Instead, some people relied on gaze, some on expression, and some on an integration of both cues, suggesting that a variety of surface strategies can inform a deeper model. We also assessed people's social cognitive ability using two independent, standard tests. If the traditional view of attention monitoring is correct, the degree to which people use gaze to judge attention should correlate best with their social cognitive ability. Instead, social cognitive ability correlated best with the degree to which people successfully integrated the cues. The results strongly suggest that when people attribute a specific state of consciousness to another person, rather than simply tracking gaze, they construct a model of attention, or an attention schema, that is informed by a combination of surface cues.
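One plausible way to quantify cue reliance of the sort described here is to fit per-subject regression weights for each cue; comparable weights indicate integration, while one dominant weight indicates a single-cue strategy. The sketch below simulates an integrating observer and is not the authors' method; all names and values are illustrative.

```python
# Sketch: estimate an observer's reliance on gaze vs. expression cues when
# judging whether a face is "aware" of an object (illustrative analysis).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n_trials = 200
gaze_toward = rng.integers(0, 2, n_trials)   # 1 = gaze directed at object
expression = rng.integers(0, 2, n_trials)    # 1 = expression implies awareness

# Simulated observer who integrates both cues with equal weight.
p_aware = 1 / (1 + np.exp(-(1.5 * gaze_toward + 1.5 * expression - 1.5)))
judged_aware = rng.random(n_trials) < p_aware

X = np.c_[gaze_toward, expression]
w_gaze, w_expr = LogisticRegression().fit(X, judged_aware).coef_[0]
print(f"gaze weight {w_gaze:.2f}, expression weight {w_expr:.2f}")
```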


2021 ◽  
Vol 15 ◽  
Author(s):  
Lisa-Marie Vortmann ◽  
Jannes Knychalla ◽  
Sonja Annerer-Walcher ◽  
Mathias Benedek ◽  
Felix Putze

Several previous studies have shown that conclusions about a person's mental state can be drawn from their eye gaze behavior. For this reason, eye tracking recordings are suitable as input data for attentional state classifiers. In current state-of-the-art studies, the extracted eye tracking feature set usually consists of descriptive statistics about specific eye movement characteristics (i.e., fixations, saccades, blinks, vergence, and pupil dilation). To improve classification accuracy, we suggest an Imaging Time Series approach for eye tracking data, followed by classification using a convolutional neural network. We compared multiple algorithms that used the one-dimensional statistical summary feature set as input with two different implementations of the newly suggested method, on three different data sets that target different aspects of attention. The results show that our two-dimensional image features with the convolutional neural network outperform the classical classifiers in most analyses, especially regarding generalization across participants and tasks. We conclude that current eye tracking-based attentional state classifiers can be optimized by adjusting the feature set while requiring less feature engineering. Future work will focus on a more detailed and better-suited investigation of this approach for other scenarios and data sets.
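A minimal sketch of such a pipeline, assuming a Gramian Angular Field encoding (one common imaging time series technique, available in the pyts library) feeding a small CNN. Shapes, layers, and data below are illustrative, not the published implementation.

```python
# Sketch: encode 1-D gaze feature windows as Gramian Angular Field images,
# then classify with a small CNN (illustrative shapes and layers).
import numpy as np
import torch
import torch.nn as nn
from pyts.image import GramianAngularField

rng = np.random.default_rng(5)
windows = rng.standard_normal((32, 250))      # 32 gaze windows, 250 samples each

gaf = GramianAngularField(image_size=64)
images = gaf.fit_transform(windows)           # (32, 64, 64) image encodings

cnn = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(8 * 32 * 32, 2),  # 2 attentional states
)
logits = cnn(torch.tensor(images, dtype=torch.float32).unsqueeze(1))
print(logits.shape)                           # torch.Size([32, 2])
```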


2021 ◽  
Author(s):  
Han Zhang ◽  
Tessa Abagis ◽  
John Jonides

We suggest that consideration of trial-by-trial variations, individual differences, and training effects will enrich the framework proposed in Luck et al. (2020). We consider whether attentional capture is modulated by trial-by-trial fluctuations in attentional state and by experiences on the previous trial. We also consider whether individual differences may affect attentional capture, while highlighting potential challenges in using the color-singleton task to measure individual differences. Finally, performance in the color-singleton task can be modified dramatically with practice, but the underlying mechanisms are not entirely clear. Understanding the malleability of attentional capture may broaden the current framework and resolve outstanding questions. The version of record of this manuscript will be available in Visual Cognition (2021), https://doi.org/10.1080/13506285.2021.1915903.

