Emotional arousal and lexical specificity modulate response times differently depending on ear of presentation in a dichotic listening task

2015, Vol 10 (2), pp. 221-246
Author(s): Frida Blomberg, Mikael Roll, Magnus Lindgren, K. Jonas Brännström, Merle Horne

We investigated possible hemispheric differences in the processing of four lexical semantic categories: SPECIFIC (e.g. bird), GENERAL (e.g. animal), ABSTRACT (e.g. advice), and EMOTIONAL (e.g. love). These word types were compared using a dichotic listening paradigm and a semantic category classification task. Response times (RTs) were measured as participants classified test words as concrete or abstract. In line with previous findings, words were expected to be processed faster following right-ear presentation. However, lexical specificity and emotional arousal were predicted to modulate response times differently depending on the ear of presentation: for left-ear presentation, relatively faster RTs were predicted for SPECIFIC and EMOTIONAL words than for GENERAL and ABSTRACT words. An interaction of ear and word type was found. For right-ear presentation, RTs increased as the test words' imageability decreased along the span SPECIFIC–GENERAL–EMOTIONAL–ABSTRACT. In contrast, for left-ear presentation, EMOTIONAL words were processed fastest, while SPECIFIC words gave rise to RTs as long as those for ABSTRACT words. Thus, the prediction for EMOTIONAL words presented to the left ear was borne out, whereas the prediction for SPECIFIC words was not. This might be related to previously reported differences in the processing of stimuli at a global versus local level.
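
The ear x word-type interaction reported above can be illustrated with a standard mixed-effects analysis of trial-level response times. The sketch below is not the authors' analysis code; the file name and column layout are assumptions.

```python
# Minimal sketch (not the published analysis): testing an
# ear x word-type interaction on response times with a linear
# mixed-effects model. File name and columns are assumptions.
import pandas as pd
import statsmodels.formula.api as smf

# Expected long-format columns: subject, ear ("left"/"right"),
# wordtype ("SPECIFIC"/"GENERAL"/"EMOTIONAL"/"ABSTRACT"), rt (ms)
df = pd.read_csv("dichotic_rts.csv")

# Random intercept per subject; fixed effects for ear, word type,
# and their interaction
model = smf.mixedlm("rt ~ ear * wordtype", df, groups=df["subject"])
result = model.fit()
print(result.summary())

# Cell means would be expected to mirror the reported pattern:
# SPECIFIC < GENERAL < EMOTIONAL < ABSTRACT for right-ear input
print(df.groupby(["ear", "wordtype"])["rt"].mean().unstack())
```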

2018, Vol 33 (5), pp. 292-300
Author(s): Ellyn A. Riley, Elena Barbieri, Sandra Weintraub, M. Marsel Mesulam, Cynthia K. Thompson

Prototypical items within a semantic category are processed faster than atypical items within the same category. This typicality effect reflects normal representation and processing of semantic categories; its absence may indicate lexical–semantic deficits. We examined typicality effects in individuals with semantic and nonsemantic variants of primary progressive aphasia (PPA; semantic variant, PPA-S; agrammatic variant, PPA-G), a neurodegenerative disorder characterized by specific decline in language function, and in age-matched controls. Using a semantic category verification task, in which participants decided whether visual or auditory words (category-typical, atypical, or nonmembers) belonged to a specified superordinate category, we found a typicality effect (i.e., faster response times for typical than for atypical items) in all participant groups. However, participants with more severe PPA-S showed no typicality effect in either modality. The findings may reflect increased intra-category semantic blurring as the disease progresses and semantic impairment becomes more severe.
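
The typicality effect described here is simply the response-time difference between atypical and typical category members, computed per group and modality. A minimal sketch, assuming a hypothetical long-format data file:

```python
# Illustrative sketch, not the published pipeline: the typicality
# effect is the RT difference between atypical and typical category
# members, computed per participant group. Data layout is assumed.
import pandas as pd

df = pd.read_csv("category_verification.csv")
# Columns assumed: group ("control"/"PPA-S"/"PPA-G"), modality
# ("visual"/"auditory"), typicality ("typical"/"atypical"), rt (ms)

means = (df[df["typicality"].isin(["typical", "atypical"])]
         .groupby(["group", "modality", "typicality"])["rt"]
         .mean()
         .unstack("typicality"))

# Positive values indicate the expected typicality effect
means["typicality_effect"] = means["atypical"] - means["typical"]
print(means)
```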


2014, Vol 35 (3), pp. 137-143
Author(s): Lindsay M. Niccolai, Thomas Holtgraves

This research examined differences in the perception of emotion words as a function of individual differences in subclinical levels of depression and anxiety. Participants completed measures of depression and anxiety and performed a lexical decision task for words varying in affective valence (but equated for arousal) that were presented briefly to the right or left visual field. Participants with lower levels of depression demonstrated hemispheric asymmetry, with an advantage for words presented to the left hemisphere (i.e., in the right visual field), whereas participants with higher levels of depression displayed no hemispheric differences. Participants with lower levels of depression also demonstrated a bias toward positive words, a pattern that did not occur for participants with higher levels of depression. A similar pattern occurred for anxiety. Overall, this study demonstrates how variability in levels of depression and anxiety can influence the perception of emotion words, with patterns consistent with past research.
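
One common way to quantify the hemispheric asymmetry reported above is a per-participant left-minus-right visual-field RT difference. The sketch below is illustrative only; the file and column names are assumptions.

```python
# Hedged sketch of one way to quantify hemispheric asymmetry in a
# divided-visual-field lexical decision task; field labels and the
# data file are assumptions, not the authors' materials.
import pandas as pd

df = pd.read_csv("lexical_decision.csv")
# Columns assumed: subject, depression_group ("low"/"high"),
# field ("LVF"/"RVF"), valence ("positive"/"negative"), rt (ms)

rt = (df.groupby(["depression_group", "subject", "field"])["rt"]
        .mean()
        .unstack("field"))

# Positive index = right-visual-field (left-hemisphere) advantage
rt["asymmetry"] = rt["LVF"] - rt["RVF"]
print(rt.groupby("depression_group")["asymmetry"].describe())
```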


2009
Author(s): Oshin A. Vartanian, Colin Martindale, Jessica Matthews, Jonna M. Kwiatkowski

2021, pp. 174702182199003
Author(s): Andy J Kim, David S Lee, Brian A Anderson

Previously reward-associated stimuli have consistently been shown to involuntarily capture attention in the visual domain. Although previously reward-associated but currently task-irrelevant sounds have also been shown to interfere with visual processing, it remains unclear whether such stimuli can interfere with the processing of task-relevant auditory information. To address this question, we modified a dichotic listening task to measure interference from task-irrelevant but previously reward-associated sounds. In a training phase, participants were simultaneously presented with a spoken letter and number in different auditory streams and learned to associate the correct identification of each of three letters with high, low, and no monetary reward, respectively. In a subsequent test phase, participants were again presented with the same auditory stimuli but were instead instructed to report the number while ignoring spoken letters. In both the training and test phases, response time measures demonstrated that attention was biased in favour of the auditory stimulus associated with high value. Our findings demonstrate that attention can be biased towards learned reward cues in the auditory domain, interfering with goal-directed auditory processing.
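
The key test-phase comparison is whether responses to the task-relevant number slow down when the ignored letter was previously paired with high reward. A hedged sketch, assuming hypothetical data:

```python
# Minimal sketch of the test-phase comparison; condition labels
# and the data file are assumptions.
import pandas as pd
from scipy import stats

df = pd.read_csv("auditory_capture.csv")
# Columns assumed: subject, distractor_value ("high"/"low"/"none"), rt

cell = (df.groupby(["subject", "distractor_value"])["rt"]
          .mean()
          .unstack("distractor_value"))

# Paired test: high-value distractor vs no-reward distractor
t, p = stats.ttest_rel(cell["high"], cell["none"])
print(cell.mean())          # slower RTs expected in the "high" cell
print(f"t = {t:.2f}, p = {p:.3f}")
```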


2014, Vol 67 (10), pp. 2010-2024
Author(s): Vera Lawo, Janina Fels, Josefa Oberem, Iring Koch

Using an auditory variant of task switching, we examined the ability to intentionally switch attention in a dichotic-listening task. Participants responded selectively to one of two simultaneously presented auditory number words (spoken by a female and a male voice, one in each ear) by categorizing its numerical magnitude. The mapping of gender (female vs. male) and ear (left vs. right) was unpredictable, and the to-be-attended feature (gender or ear) was indicated by a visual selection cue prior to auditory stimulus onset. In Experiment 1, explicitly cued switches of the relevant feature dimension (e.g., from gender to ear) and switches of the relevant feature within a dimension (e.g., from male to female) occurred in an unpredictable manner. We found large performance costs when the relevant feature switched, whereas switches of the relevant feature dimension incurred only small additional costs. The feature-switch costs were larger in ear-relevant than in gender-relevant trials. In Experiment 2, we replicated these findings using a simplified design (i.e., only within-dimension switches, with dimensions blocked). In Experiment 3, we examined preparation effects by manipulating the cueing interval and found a preparation benefit only when ear was cued. Together, our data suggest that most of the attentional switch cost arises from reconfiguration at the level of relevant auditory features (e.g., left vs. right) rather than feature dimensions (ear vs. gender). Additionally, our findings suggest that ear-based target selection benefits more from preparation time (i.e., time to direct attention to one ear) than gender-based target selection.
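
Switch costs of the kind analyzed here are difference scores: RT on switch trials minus RT on repeat trials, computed separately for feature and dimension switches. A minimal sketch under assumed trial coding:

```python
# Sketch of the switch-cost logic; the trial coding is an
# assumption about the data layout, not the original materials.
import pandas as pd

df = pd.read_csv("attention_switching.csv")
# Columns assumed: subject, dimension ("ear"/"gender"),
# transition ("repeat"/"feature_switch"/"dimension_switch"), rt

cell = (df.groupby(["dimension", "transition"])["rt"].mean()
          .unstack("transition"))

# Feature-switch costs, reported to be larger for ear-relevant trials
cell["feature_switch_cost"] = cell["feature_switch"] - cell["repeat"]
# Small additional cost of switching the whole dimension
cell["dimension_extra_cost"] = (cell["dimension_switch"]
                                - cell["feature_switch"])
print(cell)
```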


1974, Vol 38 (1), pp. 263-264
Author(s): L. Gruber, R. L. Powell

Performance on a dichotic listening task by 28 normally speaking and 28 stuttering elementary- and high-school children showed no significant inter-ear differences. These results do not support the idea that stuttering results from a lack of cerebral dominance for speech.
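
An ear advantage in dichotic listening is conventionally tested as a paired right- minus left-ear comparison within each group. The sketch below is a generic illustration, not the original 1974 analysis; scores and file names are placeholders.

```python
# Generic sketch: paired test for a right-ear advantage (REA)
# within each group. Data layout is hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_csv("dichotic_scores.csv")
# Columns assumed: subject, group ("stutter"/"control"),
# right_ear, left_ear (correct report counts)

for group, g in df.groupby("group"):
    t, p = stats.ttest_rel(g["right_ear"], g["left_ear"])
    print(f"{group}: mean REA = "
          f"{(g['right_ear'] - g['left_ear']).mean():.2f}, "
          f"t = {t:.2f}, p = {p:.3f}")
```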


2019
Author(s): Lore Goetschalckx, Johan Wagemans

This is a preprint; the published, peer-reviewed version of the paper is available at https://peerj.com/articles/8169/. Images differ in their memorability in consistent ways across observers. What makes an image memorable is not fully understood to date. Most current insight concerns high-level semantic aspects related to image content. However, research still shows consistent differences within semantic categories, suggesting a role for factors at other levels of processing in the visual hierarchy. To aid investigations into this role, and into image memorability more generally, we present MemCat: a category-based image set of 10K images representing five broad, memorability-relevant categories (animal, food, landscape, sports, and vehicle), each further divided into subcategories (e.g., bear). Images were sampled from existing source image sets that offer bounding-box annotations or more detailed segmentation masks. We collected memorability scores for all 10K images, each score based on the responses of, on average, 99 participants in a repeat-detection memory task. Replicating previous research, the collected memorability scores show high consistency across observers. MemCat is currently the second-largest memorability image set and the largest offering a category-based structure. It can be used to study the factors underlying variability in image memorability, including variability within semantic categories. In addition, it offers a new benchmark dataset for the automatic prediction of memorability scores (e.g., with convolutional neural networks). Finally, MemCat allows researchers to study neural and behavioral correlates of memorability while controlling for semantic category.
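
The across-observer consistency mentioned above is typically estimated by split-half correlation: observers are split into random halves, each image is scored in both halves, and the two score vectors are rank-correlated. A sketch under an assumed response-level data layout:

```python
# Sketch of a split-half consistency check for memorability scores.
# The response-level file and its columns are assumptions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.read_csv("memcat_responses.csv")
# Columns assumed: image_id, observer_id, hit (1 = repeat detected)

observers = df["observer_id"].unique()
half = rng.permutation(observers)
a = set(half[: len(half) // 2])
b = set(half[len(half) // 2:])

score_a = df[df["observer_id"].isin(a)].groupby("image_id")["hit"].mean()
score_b = df[df["observer_id"].isin(b)].groupby("image_id")["hit"].mean()

# Spearman rank correlation between halves; high values indicate
# that memorability is consistent across observers
print(score_a.corr(score_b, method="spearman"))
```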


1996, Vol 49 (2), pp. 490-518
Author(s): Anthony J. Lambert, Alexander L. Sumich

Three experiments tested whether spatial attention can be influenced by a predictive relation between incidental information and the location of target events. Subjects performed a simple dot detection task; 600 ms prior to each target, a word was presented briefly 5° to the left or right of fixation. There was a predictive relationship between the semantic category (living or non-living) of the words and the target location. However, subjects were instructed to ignore the words, and a post-experiment questionnaire confirmed that they remained unaware of the word–target relationship. In all three experiments, given some practice on the task, response times were faster when targets appeared at likely (p = 0.8) rather than unlikely (p = 0.2) locations in relation to lateral word category. Experiments 2 and 3 confirmed that this effect was driven by semantic encoding of the irrelevant words, not by repetition of individual stimuli. Theoretical implications of this finding are discussed.
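
The incidental contingency can be made concrete with a small trial-generation sketch: word category predicts the likely target location on 80% of trials. The word lists and trial structure below are hypothetical stand-ins for the original materials.

```python
# Sketch of the incidental-contingency design: semantic category
# (living vs. non-living) predicts target location with p = 0.8.
# Word lists and field names are placeholders.
import random

LIVING = ["tiger", "heron", "maple"]
NONLIVING = ["anvil", "kettle", "canoe"]

def make_trial():
    category = random.choice(["living", "nonliving"])
    word = random.choice(LIVING if category == "living" else NONLIVING)
    # Likely target location depends on category; valid on 80% of trials
    likely = "left" if category == "living" else "right"
    unlikely = "right" if likely == "left" else "left"
    target = likely if random.random() < 0.8 else unlikely
    return {"word": word,
            "word_side": random.choice(["left", "right"]),
            "target_side": target}

trials = [make_trial() for _ in range(400)]
# RTs would then be compared for likely vs. unlikely target locations
```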


2015, Vol 27 (5), pp. 988-1000
Author(s): Malte Wöstmann, Erich Schröger, Jonas Obleser

The flexible allocation of attention enables us to perceive and behave successfully despite irrelevant distractors. How do acoustic challenges influence this allocation of attention, and to what extent is this ability preserved in normally aging listeners? Younger and healthy older participants performed a masked auditory number comparison while EEG was recorded. To vary selective attention demands, we manipulated the perceptual separability of spoken digits from a masking talker by varying acoustic detail (temporal fine structure). Listening conditions were adjusted individually to equalize stimulus audibility as well as the overall level of performance across participants. Accuracy increased, and response times decreased, with more acoustic detail; the decrease in response times was stronger in the group of older participants. The onset of the distracting speech masker triggered a prominent contingent negative variation (CNV) in the EEG. Notably, CNV magnitude decreased parametrically with increasing acoustic detail in both age groups. Within identical levels of acoustic detail, larger CNV magnitude was associated with improved accuracy. Across age groups, neuropsychological markers further linked early CNV magnitude directly to individual attentional capacity. These results demonstrate, for the first time, that instantaneous acoustic conditions guide the allocation of attention in a demanding listening task. They further show that such basic neural mechanisms of preparatory attention allocation seem preserved in healthy aging, despite impending sensory decline.
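
A CNV magnitude of the sort analyzed here is commonly quantified as the mean baseline-corrected amplitude over fronto-central channels in a late pre-target window. The numpy sketch below makes assumed choices about epoch shape, channel picks, and timing; it is not the authors' pipeline.

```python
# Hedged sketch of a CNV measure: mean amplitude in a late window
# before the target, averaged over fronto-central channels. Epoch
# array shape, channel indices, and timing are all assumptions.
import numpy as np

# epochs: (n_trials, n_channels, n_samples), baseline-corrected EEG
# locked to masker onset; epoch assumed to start at t = 0
epochs = np.load("cnv_epochs.npy")
sfreq = 500.0                        # sampling rate in Hz (assumed)
fronto_central = [5, 6, 10]          # hypothetical channel indices

# CNV window, e.g. the last 500 ms before an assumed target onset
# at 1.5 s after masker onset
start, stop = int(1.0 * sfreq), int(1.5 * sfreq)
cnv = epochs[:, fronto_central, start:stop].mean(axis=(1, 2))

# More negative values = larger CNV; compare across acoustic-detail
# conditions and relate to single-trial accuracy
print(cnv.mean())
```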

