perception task
Recently Published Documents

TOTAL DOCUMENTS: 124 (five years: 34)
H-INDEX: 17 (five years: 3)

2021 ◽  
Author(s):  
Polina Iamshchinina ◽  
Daniel Haenelt ◽  
Robert Trampel ◽  
Nikolaus Weiskopf ◽  
Daniel Kaiser ◽  
...  

Recent advances in high-field fMRI have made it possible to differentiate feedforward and feedback information in the grey matter of the human brain. For continued progress in this endeavor, it is critical to understand how MRI data acquisition parameters impact the read-out of information from laminar response profiles. Here, we benchmarked three different MR sequences at 7T, gradient-echo (GE), spin-echo (SE), and vascular space occupancy imaging (VASO), in differentiating feedforward and feedback signals in human early visual cortex (V1). The experiment (N = 4) consisted of two complementary tasks: a perception task that predominantly evokes feedforward signals and a working memory task that relies on feedback signals. In the perception task, participants saw flickering oriented gratings while detecting orthogonal color changes. In the working memory task, participants memorized the precise orientation of a grating. We used multivariate pattern analysis to read out the perceived (feedforward) and memorized (feedback) grating orientation from neural signals across cortical depth. Analyses across all MR sequences revealed perception signals predominantly in the middle cortical compartment of area V1 and working memory signals in the deep compartment. Despite an overall consistency across sequences, SE-EPI was the only sequence in which both feedforward and feedback information were differentially pronounced across cortical depth in a statistically robust way. We therefore suggest that, in the context of a typical cognitive neuroscience experiment such as the one benchmarked here, SE-EPI may provide a favorable trade-off between spatial specificity and signal sensitivity.
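
For readers who want to see the shape of such an analysis, the following is a minimal sketch of depth-resolved orientation decoding with a cross-validated linear classifier. It is not the authors' pipeline: the data are simulated, and the array names, compartment binning, and classifier choice are illustrative assumptions.

```python
# Illustrative sketch: cross-validated orientation decoding within each
# cortical-depth compartment. Data are simulated placeholders, not the
# published dataset or pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials, n_voxels = 120, 300
orientation = rng.integers(0, 2, n_trials)      # two grating orientations

# Hypothetical voxel patterns binned into three depth compartments.
compartments = {
    "deep": rng.normal(size=(n_trials, n_voxels)),
    "middle": rng.normal(size=(n_trials, n_voxels)),
    "superficial": rng.normal(size=(n_trials, n_voxels)),
}

decoder = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

for depth, patterns in compartments.items():
    accuracy = cross_val_score(decoder, patterns, orientation, cv=cv).mean()
    print(f"{depth:>11}: mean decoding accuracy = {accuracy:.2f}")
```

With real data, above-chance accuracy in a given compartment would indicate that the orientation information of interest (feedforward or feedback) is present at that cortical depth.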


Author(s):  
A. Meermeier ◽  
M. Jording ◽  
Y. Alayoubi ◽  
David H. V. Vogel ◽  
K. Vogeley ◽  
...  

In this study we investigate whether persons with autism spectrum disorder (ASD) perceive social images differently than control participants (CON) in a graded perception task in which stimuli emerged from noise before dissipating into noise again. We presented either social stimuli (humans) or non-social stimuli (objects or animals). Participants with ASD were slower to recognize images during their emergence, but as fast as CON when indicating the dissipation of the image, irrespective of its content. Social stimuli were recognized faster and remained discernible longer in both diagnostic groups. Thus, participants with ASD show a largely intact preference for the processing of social images. An exploratory analysis of response subsets reveals subtle differences between groups that could be investigated in future studies.


2021 ◽  
pp. 1-13
Author(s):  
Sara K. Mamo ◽  
Karen S. Helfer

Objectives: The purpose of this study was to investigate the impact of different types of maskers on speech understanding as a function of cognitive status in older adults. The hypothesis tested was that individuals with a diagnosis of mild cognitive impairment (MCI) or mild dementia would perform like their age- and hearing status–matched control counterparts in modulated noise but would perform more poorly in the presence of competing speech. Design: Participants (n = 39; age range: 55–77 years old) performed a speech-in-noise task and completed two cognitive screening tests and a measure of working memory. Sentences were presented in the presence of two types of maskers (speech envelope–modulated noise and two-talker, same-sex competing speech). Two analyses were undertaken: (a) a between-groups comparison of individuals diagnosed with MCI/dementia, individuals who failed both cognitive screeners (possible MCI), and age- and hearing status–matched neurologically healthy control individuals, and (b) a mixed-model analysis of variance of speech perception performance as a function of working memory capacity. Results: The between-groups comparison yielded significant group differences for speech understanding in both masking conditions, with the MCI/dementia group performing more poorly than the neurologically healthy control and possible MCI groups. A single measure of working memory (Size Comparison Span [SICSPAN]) was correlated with performance on the speech perception task in the competing speech conditions. Conclusions: Adults with a diagnosis of MCI or mild dementia performed more poorly on a speech perception task than their age- and hearing status–matched control counterparts in the presence of both maskers, with larger group mean differences when the target speech was presented in a two-talker masker. This suggests increased difficulty understanding speech in the presence of distracting backgrounds for people with MCI/dementia. Future studies should consider how to target this potentially vulnerable population as they may be experiencing increased difficulty communicating in challenging environments.
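
As a rough illustration of the two analyses described above, the sketch below runs a group-by-masker mixed-design ANOVA and a working-memory correlation on simulated scores, using the pingouin package as one possible tool. The variable names, cell sizes, and data layout are assumptions, not the study's actual dataset.

```python
# Sketch: mixed-design ANOVA (between: group; within: masker type) plus a
# working-memory correlation, using simulated scores as placeholders.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(1)
subjects = np.arange(39)
groups = rng.choice(["MCI_dementia", "possible_MCI", "control"], size=39)

rows = []
for subj, grp in zip(subjects, groups):
    for masker in ["modulated_noise", "two_talker"]:
        rows.append({"id": subj, "group": grp, "masker": masker,
                     "score": rng.normal(70, 10)})
df = pd.DataFrame(rows)

# Group x masker mixed ANOVA on speech-understanding scores.
print(pg.mixed_anova(data=df, dv="score", within="masker",
                     between="group", subject="id"))

# Correlation between working-memory span and two-talker performance.
span = pd.Series(rng.normal(20, 5, size=39), index=subjects, name="sicspan")
two_talker = df[df.masker == "two_talker"].set_index("id")["score"]
print(pg.corr(span, two_talker))
```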


2021 ◽  
Vol 7 ◽  
Author(s):  
Chiara Visentin ◽  
Nicola Prodi

Performing a task in noisy conditions is effortful. This is especially relevant for children in classrooms as the effort involved could impair their learning and academic achievements. Numerous studies have investigated how to use behavioral and physiological methods to measure effort, but limited data are available on how well school-aged children rate effort in their classrooms. This study examines whether and how self-ratings can be used to describe the effort children perceive while working in a noisy classroom. This is done by assessing the effect of listening condition on self-rated effort in a group of 182 children 11–13 years old. The children performed three tasks typical of daily classroom activities (speech perception, sentence comprehension, and mental calculation) in three listening conditions (quiet, traffic noise, and classroom noise). After completing each task, they rated their perceived task-related effort on a five-point scale. Their task accuracy and response times (RTs) were recorded (the latter as a behavioral measure of task-related effort). Participants scored higher (more effort) on their self-ratings in the noisy conditions than in quiet. Their self-ratings were also sensitive to the type of background noise, but only for the speech perception task, suggesting that children might not be fully aware of the disruptive effect of background noise. A repeated-measures correlation analysis was run to explore the possible relationship between the three study outcomes (accuracy, self-ratings, and RTs). Self-ratings correlated with accuracy (in all tasks) and with RTs (only in the speech perception task), suggesting that the relationship between different measures of listening effort might depend on the task. Overall, the present findings indicate that self-reports could be useful for measuring changes in school-aged children’s perceived listening effort. More research is needed to better understand, and consequently manage, the individual factors that might affect children’s self-ratings (e.g., motivation) and to devise an appropriate response format.
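
The repeated-measures correlation mentioned above can be illustrated with a short sketch; here it uses pingouin's rm_corr on simulated per-condition data, so the column names and values are placeholders rather than the study's measurements.

```python
# Sketch: repeated-measures correlation between self-rated effort and the
# two behavioural outcomes, with simulated per-condition data.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(2)
rows = []
for child in range(182):
    for condition in ["quiet", "traffic", "classroom"]:
        rows.append({
            "id": child,
            "condition": condition,
            "self_rating": rng.integers(1, 6),   # 5-point effort scale
            "accuracy": rng.uniform(0.5, 1.0),
            "rt_ms": rng.normal(1200, 250),
        })
df = pd.DataFrame(rows)

# Within-child association between self-rated effort and accuracy / RTs.
print(pg.rm_corr(data=df, x="self_rating", y="accuracy", subject="id"))
print(pg.rm_corr(data=df, x="self_rating", y="rt_ms", subject="id"))
```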


2021 ◽  
Vol 12 ◽  
Author(s):  
Theresa J. Chirles ◽  
Johnathon P. Ehsani ◽  
Neale Kinnear ◽  
Karen E. Seymour

Background: While advanced driver assistance technologies have the potential to increase safety, there is concern that driver inattention resulting from overreliance on these features may result in crashes. Driver monitoring technologies to assess a driver's state may be one solution. The purpose of this study was to replicate and extend the research on physiological responses to common driving hazards and examine how these may differ based on driving experience. Methods: Learner and Licensed drivers viewed a Driving Hazard Perception Task while electrodermal activity (EDA) was measured. The task presented 30 Event (hazard develops) and 30 Non-Event (routine driving) videos. A skin conductance response (SCR) score was calculated for each participant based on the percentage of videos that elicited an SCR. Results: Analysis of the SCR score during Event videos revealed a medium-sized group effect (d = 0.61), whereby Licensed drivers were more likely to have an SCR than Learner drivers. Interaction effects revealed that Licensed drivers were more likely to have an SCR earlier in the Event videos than at the end, whereas Learner drivers were more likely to have an SCR earlier in the Non-Event videos than at the end. Conclusion: Our results support the viability of using SCRs during driving videos as a marker of hazard anticipation that differs with experience. The interaction effects may reflect situational awareness in Licensed drivers and deficiencies in sustained vigilance among Learner drivers. The findings warrant further examination if physiological measures are to be validated as a tool to inform potential driver performance in an increasingly automated driving environment.
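
A minimal sketch of how such an SCR score (the percentage of videos eliciting a response) could be computed from trial-level flags is shown below; the data layout and values are hypothetical, not the study's recordings.

```python
# Sketch: per-participant SCR score as the percentage of videos that
# elicited a skin conductance response, computed separately for Event
# and Non-Event clips (data layout is hypothetical).
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
rows = [{"participant": p,
         "video_type": vt,
         "scr_elicited": bool(rng.integers(0, 2))}
        for p in range(20)
        for vt in ["event"] * 30 + ["non_event"] * 30]
df = pd.DataFrame(rows)

scr_score = (df.groupby(["participant", "video_type"])["scr_elicited"]
               .mean()              # proportion of videos with an SCR
               .mul(100)            # express as a percentage
               .unstack("video_type"))
print(scr_score.head())
```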


Author(s):  
Fernanda Mottin Refinetti ◽  
Ricardo Drews ◽  
Umberto Cesar Corrêa ◽  
Flavio Henrique Bastos

2021 ◽  
Vol 4 ◽  
pp. 205920432110551
Author(s):  
May Pik Yu Chan ◽  
Youngah Do

Singers convey meaning via both text and music. As sopranos balance tone quality and diction, vowel intelligibility is often compromised at high pitches. This study examines how sopranos modify their vowels as the fundamental frequency rises, and in turn how such vowel modification affects vowel intelligibility. We examine the vowel modification process of three non-central vowels in Cantonese ([a], [ɛ] and [ɔ]) using the spectral centroid. Acoustic results suggest that overall vowel modification is conditioned by vowel height in mid ranges and by vowel frontness in higher ranges. In a subsequent perception task, listeners identified and discriminated vowels at pitches spanning an octave from A4 (nominally 440 Hz) to G♯5 (nominally 831 Hz). Results showed that the perceptual accuracy rates of the three vowels match their acoustic patterns. The overall results suggest that vowels are not modified in a unified way in sopranos' voices, implying that research on sopranos' singing strategies should consider vocalic differences.
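
To make the acoustic measure concrete, the following sketch computes a spectral centroid for a synthetic harmonic tone standing in for a sung vowel frame; the signal model and analysis window are assumptions and do not reproduce the paper's measurement pipeline.

```python
# Sketch: spectral centroid of a (synthetic) sung vowel frame.
import numpy as np

sr = 44100
t = np.arange(0, 0.5, 1 / sr)
f0 = 440.0                                  # A4, as in the perception stimuli
# Crude stand-in for a sung vowel: a few harmonics with decaying amplitude.
signal = sum((0.6 ** k) * np.sin(2 * np.pi * f0 * (k + 1) * t) for k in range(8))

spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
freqs = np.fft.rfftfreq(len(signal), d=1 / sr)

# Amplitude-weighted mean frequency: shifts upward as energy moves to
# higher harmonics, which is how vowel modification shows up acoustically.
centroid = np.sum(freqs * spectrum) / np.sum(spectrum)
print(f"Spectral centroid ≈ {centroid:.1f} Hz")
```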


2020 ◽  
Author(s):  
Sarah S. Sheldon ◽  
Kyle E. Mathewson

Brain oscillations are known to modulate detection of visual stimuli, but it is unclear whether this is due to an increased guess rate or decreased precision of the mental representation. Here we estimated quality and guess rate as a function of electroencephalography (EEG) brain activity using an orientation perception task. Errors on each trial were quantified as the difference between the target orientation and the orientation reported by participants with a response stimulus. Response errors were fitted to the standard mixture model of Zhang and Luck (2008) to quantify how participants' guess rate and standard deviation parameters varied as a function of brain activity. Twenty-four participants were included in the analysis. Within subjects, the power and phase of delta and theta post-target oscillatory activity varied along with performance on the orientation perception task, in that greater power and phase coherence in the 2–5 Hz band was measured in trials with more accurate responses. In addition, the phase of delta and theta correlated with the degree of response error, whereas oscillatory power did not show a relationship with trial-by-trial response errors. Analysis of task-related alpha activity yielded no significant results, implying that alpha oscillations do not play an important role in orientation perception at the single-trial level. Across participants, only the standard deviation parameter correlated with oscillatory power in the high alpha and low beta frequency ranges. These results indicate that post-target power is associated with the precision of mental representations rather than the guess rate, both across trials within subjects and across subjects.
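
A minimal sketch of fitting a Zhang and Luck (2008) style mixture model (a von Mises "memory" component plus a uniform "guess" component) to response errors by maximum likelihood is shown below; the simulated errors, parameter bounds, and starting values are illustrative choices, not the study's fitting code.

```python
# Sketch: maximum-likelihood fit of a von Mises + uniform mixture model
# to response errors expressed in radians (simulated placeholders).
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n = 400
guess_trials = rng.random(n) < 0.2                    # true guess rate 0.2
errors = np.where(guess_trials,
                  rng.uniform(-np.pi, np.pi, n),      # guesses: uniform
                  rng.vonmises(0.0, 8.0, n))          # memory: von Mises

def neg_log_likelihood(params, err):
    guess_rate, kappa = params
    p_memory = stats.vonmises.pdf(err, kappa)
    p_guess = 1.0 / (2.0 * np.pi)
    likelihood = (1 - guess_rate) * p_memory + guess_rate * p_guess
    return -np.sum(np.log(likelihood))

fit = minimize(neg_log_likelihood, x0=[0.1, 5.0], args=(errors,),
               bounds=[(1e-3, 0.999), (0.1, 100.0)])
guess_rate, kappa = fit.x
sd_deg = np.degrees(np.sqrt(1.0 / kappa))             # rough SD from kappa
print(f"guess rate ≈ {guess_rate:.2f}, SD ≈ {sd_deg:.1f}°")
```

The guess-rate parameter captures random responses, while the concentration of the von Mises component (reported here as an approximate standard deviation) indexes the precision of the remembered orientation.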


2020 ◽  
Author(s):  
Efthymia C Kapnoula ◽  
Jan Edwards ◽  
Bob McMurray

Listeners activate speech sound categories in a gradient way, and this information is maintained and affects the activation of items at higher levels of processing (McMurray et al., 2002; Toscano et al., 2010). Recent findings by Kapnoula, Winn, Kong, Edwards, and McMurray (2017) suggest that the degree to which listeners maintain within-category information varies across individuals. Here we assessed the consequences of this gradiency for speech perception. To test this, we collected a measure of gradiency for different listeners using the visual analogue scaling (VAS) task used by Kapnoula et al. (2017). We also collected two independent measures of speech perception performance: a visual world paradigm (VWP) task measuring participants' ability to recover from lexical garden paths (McMurray et al., 2009) and a speech perception task measuring participants' perception of isolated words in noise. Our results show that categorization gradiency does not predict participants' performance in the speech-in-noise task. However, higher gradiency predicted a higher likelihood of recovery from temporarily misleading information presented in the VWP task. These results suggest that gradient activation of speech sound categories is helpful when listeners need to reconsider their initial interpretation of the input, making them more efficient at recovering from errors.
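
As an illustration only: one plausible way to index categorization gradiency from VAS responses is the slope of a logistic curve fitted across the acoustic continuum, sketched below on simulated ratings. This parameterization is an assumption for the sake of the example, not the published estimation procedure.

```python
# Sketch: fit a logistic curve to one listener's VAS ratings across a
# speech continuum and use the slope as a gradiency index (shallower
# slope = more gradient categorization). Data are simulated.
import numpy as np
from scipy.optimize import curve_fit

def logistic(step, slope, midpoint):
    return 1.0 / (1.0 + np.exp(-slope * (step - midpoint)))

rng = np.random.default_rng(5)
steps = np.repeat(np.arange(1, 8), 10)                # 7-step continuum
true_slope = 1.5                                      # hypothetical listener
vas = logistic(steps, true_slope, 4.0) + rng.normal(0, 0.05, steps.size)
vas = np.clip(vas, 0.0, 1.0)                          # VAS ratings in [0, 1]

params, _ = curve_fit(logistic, steps, vas, p0=[1.0, 4.0])
slope, midpoint = params
print(f"fitted slope = {slope:.2f} (lower = more gradient responding)")
```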

