Stimulus Degradation
Recently Published Documents


TOTAL DOCUMENTS: 34 (five years: 5)

H-INDEX: 11 (five years: 1)

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Heiko Hecht ◽  
Esther Brendel ◽  
Marlene Wessels ◽  
Christoph Bernhard

Abstract Often, we have to rely on limited information when judging time-to-contact (TTC), for example when driving in foul weather, or in situations where we would need reading glasses but do not have them handy. However, most existing studies on the ability to judge TTC have worked with optimal visual stimuli. In a prediction motion task, we explored to what extent TTC estimation is affected by visual stimulus degradation. A simple computer-simulated object approached the observer at constant speed, viewed under either clear or impaired vision. It was occluded after 1 or 1.5 s. The observers extrapolated the object’s motion and pressed a button when they thought the object would have collided with them. We found that dioptric blur and simulated snowfall shortened TTC estimates. Contrast reduction produced by a virtual semi-transparent mask lengthened TTC estimates, which could be the result of distance overestimation or speed underestimation induced by the lower contrast or the increased luminance of the mask. We additionally explored the potential influence of arousal and valence, although they played a minor role for basic TTC estimation. Our findings suggest that vision impairments have adverse effects on TTC estimation, depending on the specific type of degradation and the changes in visual environmental cues that it causes.
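The first-order geometry behind the prediction motion task above can be sketched in a few lines. This is an illustrative sketch only (the function names and parameter values are assumptions, not taken from the study): for an object approaching at constant speed, TTC is distance over closing speed, and after occlusion the observer must extrapolate the remaining portion.

```python
# Hypothetical sketch of constant-speed time-to-contact (TTC) geometry.
# Names and example values are illustrative, not from the study.

def time_to_contact(distance_m: float, speed_m_s: float) -> float:
    """First-order TTC for an object approaching at constant closing speed."""
    if speed_m_s <= 0:
        raise ValueError("object must be approaching (closing speed > 0)")
    return distance_m / speed_m_s

def remaining_ttc_at_occlusion(distance_m: float, speed_m_s: float,
                               occlusion_onset_s: float) -> float:
    """TTC left to extrapolate once the object is occluded (e.g. after 1.5 s)."""
    return time_to_contact(distance_m, speed_m_s) - occlusion_onset_s

# Example: an object 30 m away closing at 10 m/s has a TTC of 3 s;
# occluded at 1.5 s, the observer must extrapolate the remaining 1.5 s.
print(remaining_ttc_at_occlusion(30.0, 10.0, 1.5))
```

In this framing, the reported biases correspond to the observer's button press landing before (shortened estimate) or after (lengthened estimate) the true remaining TTC.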


2021 ◽  
Author(s):  
Adrienne C. DeBrosse ◽  
Ye Li ◽  
Robyn Wiseman ◽  
Racine Ross ◽  
Sy’Keria Garrison ◽  
...  

Abstract Sustained attention is a core cognitive domain that is often disrupted in neuropsychiatric disorders. Continuous performance tests (CPTs) are the most common clinical assay of sustained attention. In CPTs, participants produce a behavioral response to target stimuli and refrain from responding to non-target stimuli. Performance in CPTs is measured as the ability to discriminate between targets and non-targets. Rodent versions of CPTs (rCPT) have been developed and validated with both anatomical and pharmacological studies, providing a translational platform for understanding the neurobiology of sustained attention. In human studies, using degraded stimuli (decreased contrast) in CPTs impairs performance, and patients with schizophrenia experience a larger decrease in performance compared to healthy controls. In this study, we tested multiple levels of stimulus degradation in a touchscreen version of the CPT in mice. We found that stimulus degradation significantly decreased performance in both males and females. The changes in performance consisted of a decrease in stimulus discrimination, measured as d’, and increases in hit reaction time and reaction time variability. These findings are in line with the effects of stimulus degradation in human studies. Overall, female mice demonstrated a more liberal response strategy than males, but response strategy was not affected by stimulus degradation. These data extend the utility of the mouse CPT by demonstrating that stimulus degradation produces equivalent behavioral responses in mice and humans. Therefore, the degraded-stimulus rCPT has high translational value as a preclinical assay of sustained attention.
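The discrimination measure d’ mentioned above comes from signal detection theory: it is the difference between the z-transformed hit rate and false-alarm rate. A minimal sketch, assuming a standard log-linear correction for extreme rates (the correction choice is an assumption; the study's exact computation is not given here):

```python
from statistics import NormalDist

def d_prime(hits: int, misses: int,
            false_alarms: int, correct_rejections: int) -> float:
    """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate).

    A log-linear correction (add 0.5 to counts, 1 to totals) keeps the
    inverse normal CDF finite when a rate would otherwise be 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Example: 45 hits / 5 misses vs. 5 false alarms / 45 correct rejections
# yields a high d'; equal hit and false-alarm rates yield d' = 0 (chance).
print(d_prime(45, 5, 5, 45))
```

A drop in d’ under stimulus degradation, as reported for the mice, means targets and non-targets became harder to tell apart, independent of any shift in response bias (the liberal strategy noted for females is captured by a separate criterion measure, not by d’).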


2020 ◽  
Vol 1 (1) ◽  
Author(s):  
Tomas Lenc ◽  
Peter E Keller ◽  
Manuel Varlet ◽  
Sylvie Nozaradan

Abstract When listening to music, people often perceive and move along with a periodic meter. However, the dynamics of mapping between meter perception and the acoustic cues to meter periodicities in the sensory input remain largely unknown. To capture these dynamics, we recorded electroencephalography while nonmusician and musician participants listened to nonrepeating rhythmic sequences, where acoustic cues to meter frequencies either gradually decreased (from regular to degraded) or increased (from degraded to regular). The results revealed greater neural activity selectively elicited at meter frequencies when the sequence gradually changed from regular to degraded compared with the opposite direction. Importantly, this effect was unlikely to arise from overall gain or low-level auditory processing, as revealed by physiological modeling. Moreover, the context effect was more pronounced in nonmusicians, who also demonstrated facilitated sensorimotor synchronization with the meter for sequences that started as regular. In contrast, musicians showed weaker effects of recent context in their neural responses and a robust ability to move along with the meter irrespective of stimulus degradation. Together, our results demonstrate that brain activity elicited by rhythm reflects not only passive tracking of stimulus features but also continuous integration of sensory input with recent context.
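The "neural activity elicited at meter frequencies" above refers to a frequency-tagging style of analysis: reading out the spectral amplitude of the EEG at the specific frequencies of the perceived meter. A minimal single-bin sketch of that idea, assuming a plain discrete Fourier projection (this is illustrative only and not the study's actual analysis pipeline):

```python
import math

def amplitude_at_frequency(signal: list[float], fs: float,
                           target_hz: float) -> float:
    """Amplitude of one frequency component via a single-bin DFT projection.

    Projects the signal onto cosine and sine at target_hz and scales so a
    pure sinusoid of amplitude A (spanning whole cycles) returns ~A.
    """
    n = len(signal)
    w = 2 * math.pi * target_hz / fs
    re = sum(x * math.cos(w * i) for i, x in enumerate(signal))
    im = sum(x * math.sin(w * i) for i, x in enumerate(signal))
    return 2 * math.sqrt(re * re + im * im) / n

# Example: a 1.25 Hz sinusoid (a typical meter-level periodicity) sampled
# at 500 Hz for 1.6 s is recovered with amplitude ~1.0 at its own frequency.
fs = 500.0
sig = [math.sin(2 * math.pi * 1.25 * i / fs) for i in range(800)]
print(amplitude_at_frequency(sig, fs, 1.25))
```

Comparing such amplitudes at meter frequencies against neighboring frequencies is what lets this kind of study separate meter-selective neural activity from broadband responses.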


2019 ◽  
Author(s):  
Tomoya Nakai ◽  
Laura Rachman ◽  
Pablo Arias ◽  
Kazuo Okanoya ◽  
Jean-Julien Aucouturier

Abstract People are more accurate in voice identification and emotion recognition in their native language than in other languages, a phenomenon known as the language familiarity effect (LFE). Previous work on cross-cultural inferences of emotional prosody has left it difficult to determine whether these native-language advantages arise from a true enhancement of the auditory capacity to extract socially relevant cues in familiar speech signals or, more simply, from cultural differences in how these emotions are expressed. In order to rule out such production differences, this work employed algorithmic voice transformations to create pairs of French and Japanese stimuli that differed by exactly the same amount of prosodic expression. Even though the cues were strictly identical in both languages, they were better recognized when participants processed them in their native language. This advantage persisted under three types of stimulus degradation (jabberwocky, shuffled, and reversed sentences). These results provide unequivocal evidence that production differences are not the sole drivers of LFEs in cross-cultural emotion perception, and suggest that it is the listeners’ lack of familiarity with the individual speech sounds of the other language, and not, e.g., with their syntax or semantics, which impairs their processing of higher-level emotional cues.


SLEEP ◽  
2012 ◽  
Vol 35 (1) ◽  
pp. 113-121 ◽  
Author(s):  
Brian C. Rakitin ◽  
Adrienne M. Tucker ◽  
Robert C. Basner ◽  
Yaakov Stern
