Auditory Tasks
Recently Published Documents

TOTAL DOCUMENTS: 95 (five years: 19)
H-INDEX: 16 (five years: 2)

Author(s): Kavitha Gurunathgowda Sannamani, Madhumanti Chakraborty, Neelamegarajan Devi, Prashanth Prabhu

Background and Aim: Musical training has been shown to confer superior performance on several auditory and non-auditory tasks compared with individuals without musical exposure. The distortion product otoacoustic emissions (DPOAE) input-output function can serve as an indicator of the non-linear functioning of the cochlea. The objective of this study was to evaluate and compare differences in the slope of the DPOAE input-output function between individuals with and without musical abilities. Methods: Twenty normal-hearing individuals aged 18-25 years were recruited and divided, based on their scores on a questionnaire of musical abilities, into groups with and without musical abilities. The DPOAE input-output function was measured for each group, and its slope was compared between the groups at different frequencies. Results: A Mann-Whitney U test revealed that the slope was significantly steeper at 2000, 3000, 4000, and 6000 Hz in individuals with musical abilities; there was no significant difference in slope at 1000 and 1500 Hz. Conclusion: The steeper slope indicates relatively better cochlear functioning in individuals with musical abilities. Enhanced perception of music may induce changes in the cochlea that result in a better appreciation of music.
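
As an illustration of the group comparison described above, the following minimal Python sketch runs a Mann-Whitney U test on per-frequency slope values; the data are randomly generated placeholders, not the study's measurements.

import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
frequencies = [1000, 1500, 2000, 3000, 4000, 6000]  # Hz, as tested in the study

# Simulated per-subject DPOAE I/O slopes (10 subjects per group); illustrative only.
with_musical = {f: rng.normal(0.8, 0.1, 10) for f in frequencies}
without_musical = {f: rng.normal(0.6, 0.1, 10) for f in frequencies}

for f in frequencies:
    # Two-sided Mann-Whitney U test at each frequency, as in the study.
    stat, p = mannwhitneyu(with_musical[f], without_musical[f], alternative="two-sided")
    print(f"{f} Hz: U = {stat:.1f}, p = {p:.4f}")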


PLoS ONE, 2021, Vol 16 (8), pp. e0256593
Author(s): Nádia Giulian de Carvalho, Maria Isabel Ramos do Amaral, Maria Francisca Colella-Santos

Objective: To contribute to the validation of AudBility, an online central auditory processing (CAP) screening program, for the tasks aimed at children aged 6 to 8 years, by investigating sensitivity and specificity, and to suggest a minimum CAP screening protocol for this age group. Method: In the first stage of the study, 154 schoolchildren aged 6 to 8 years, all native speakers of Brazilian Portuguese, were screened. The AudBility auditory tasks analyzed in this study were: sound localization (SL), auditory closure (AC), figure-ground (FG), dichotic digits-binaural integration (DD), temporal resolution (TR), and temporal frequency ordering (TO-F). In the second stage, 112 children underwent CAP assessment in the institution's laboratory. Efficacy (sensitivity/specificity) was calculated by constructing ROC curves for the tests in which more than five children showed altered diagnostic results. Results: For the 6-7-year-old group, the accuracy values were: AC, 76.9%; FG, 61.6%; DD, 78.8% for the right ear and 84.4% for the left ear in females and 63.2% for the left ear in males; TR, 77.1%; and TO-F, 74.4% for the right ear and 82.4% for the left ear. For the 8-year-old group, the values were: FG, 76.5%; DD, 71.7% for the left ear in females and 77% for the right ear in males; TR, 56.5%; and TO-F, 54.1% for the right ear and 70% for the left ear. Conclusions: AudBility showed variation in sensitivity and specificity across auditory tasks and age groups, with better effectiveness in 6-7-year-olds than in 8-year-olds, except for the FG task. For screening purposes, a protocol of five tasks for the 6-7-year-old group and four tasks for the 8-year-old group is suggested.
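
The sensitivity/specificity analysis described above can be illustrated with a short Python sketch using scikit-learn's ROC utilities; the screening scores and diagnostic labels below are simulated placeholders, and the Youden index is shown as only one common way to pick a cut-off.

import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)
# Simulated data: 1 = altered CAP diagnosis, 0 = typical; one screening score per child.
diagnosis = rng.integers(0, 2, 112)
score = np.where(diagnosis == 1,
                 rng.normal(0.7, 0.15, 112),   # altered diagnosis -> higher risk score
                 rng.normal(0.4, 0.15, 112))

fpr, tpr, thresholds = roc_curve(diagnosis, score)
auc = roc_auc_score(diagnosis, score)

# Youden's J picks the cut-off that jointly maximises sensitivity and specificity.
j = tpr - fpr
best = j.argmax()
print(f"AUC = {auc:.3f}, cut-off = {thresholds[best]:.2f}, "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")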


2021, Vol 11 (8), pp. 1024
Author(s): Durmuş Koç, Ahmet Çağdaş Seçkin, Zümrüt Ecevit Satı

The risk of accidents while operating a drone is quite high, and the most important mitigation is training for drone pilots. Drone pilot training can take place in either physical or virtual environments; because the probability of an accident is higher for trainees, training should begin in a virtual environment. The purpose of this study was to develop a new system that collects data on students' learning performance while they use a gamified drone training simulator and to analyze their development objectively. A multimodal recording system that can collect simulator, keystroke, and brain activity data was developed to analyze the cognitive and physical activity of participants trained in the gamified drone simulator. It was found that as the number of trials increased, participants became accustomed to the cognitive load of the visual/auditory tasks, and power in the alpha and beta bands therefore decreased. Participants' meditation and attention scores increased with the number of repetitions of the educational game. It can be concluded that repetition lowers stress and anxiety levels, increases attention, and thus enhances game performance.
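
The alpha- and beta-band power measure referred to above can be sketched as follows, assuming a single synthetic EEG channel sampled at 256 Hz; the study's actual recording hardware and analysis pipeline are not specified here.

import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

fs = 256                                  # sampling rate in Hz (assumed)
rng = np.random.default_rng(2)
eeg = rng.normal(0.0, 1.0, fs * 60)       # one minute of synthetic single-channel EEG

# Welch power spectral density with 2-second segments.
freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)

def band_power(freqs, psd, lo, hi):
    # Integrate the power spectral density over [lo, hi) Hz.
    mask = (freqs >= lo) & (freqs < hi)
    return trapezoid(psd[mask], freqs[mask])

alpha = band_power(freqs, psd, 8, 13)     # alpha band
beta = band_power(freqs, psd, 13, 30)     # beta band
print(f"alpha power = {alpha:.4f}, beta power = {beta:.4f}")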


2021
Author(s): Francesco Caprini, Sijia Zhao, Maria Chait, Trevor Agus, Ulrich Pomper, ...

From auditory perception to general cognition, the ability to play a musical instrument has been associated with skills both related and unrelated to music. However, it is unclear whether these effects are bound to the specific characteristics of musical instrument training, as little attention has been paid to other populations whose auditory expertise could match or surpass that of musicians in specific auditory tasks or in more naturalistic acoustic scenarios. We explored this possibility by comparing conservatory-trained instrumentalists to students of audio engineering (along with naive controls) on measures of auditory discrimination, auditory scene analysis, and speech-in-noise perception. We found that both musicians and audio engineers had generally lower psychophysical thresholds than controls, with pitch perception showing the largest effect size. Musicians performed best in a sustained selective attention task with two competing streams of tones, while audio engineers, compared to controls, were better able to memorise and recall auditory scenes composed of non-musical sounds. Additionally, in a diotic speech-in-babble task, musicians showed lower signal-to-noise ratio thresholds than both controls and engineers. We also observed differences in personality that might account for group-based self-selection biases. Overall, we showed that investigating a wider range of forms of auditory expertise can help corroborate (or challenge) the specificity of the advantages previously associated with musical instrument training.
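
As background for the threshold measures mentioned above, the sketch below simulates a generic 2-down/1-up adaptive staircase, which converges on roughly the 70.7%-correct point; the paper's actual psychophysical procedure is not described in the abstract, so this is only an illustrative stand-in.

import numpy as np

rng = np.random.default_rng(4)

def listener_correct(snr_db, true_threshold=-6.0, slope=1.0):
    # Synthetic listener: probability correct follows a logistic psychometric function.
    p_correct = 1.0 / (1.0 + np.exp(-slope * (snr_db - true_threshold)))
    return rng.random() < p_correct

snr, step = 4.0, 2.0          # starting SNR (dB) and step size: illustrative values
correct_in_row = 0
last_move = None
reversals = []

while len(reversals) < 8:
    if listener_correct(snr):
        correct_in_row += 1
        move = None
        if correct_in_row == 2:   # two correct in a row -> make the task harder
            correct_in_row = 0
            move = "down"
    else:
        correct_in_row = 0
        move = "up"               # any error -> make the task easier
    if move is not None:
        if last_move is not None and move != last_move:
            reversals.append(snr)  # direction change: record a reversal
        last_move = move
        snr += step if move == "up" else -step

# Threshold estimate: mean SNR at the last six reversals.
print(f"estimated SNR threshold = {np.mean(reversals[-6:]):.1f} dB")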


2021, Vol 12
Author(s): Xia Wu, Ying Jiang, Yunpeng Jiang, Guodong Chen, Ying Chen, ...

Attention helps an individual efficiently find a specific target among multiple distractors and is proposed to consist of three functions: alerting, orienting, and executive control. Action video games (AVGs) have been shown to enhance attention. However, whether AVGs affect attentional functions across different modalities remains to be determined. In the present study, a group of action video game players (AVGPs) and a group of non-action video game players (NAVGPs), selected by a video game usage questionnaire, completed two tasks in succession: a visual version of the attention network task (ANT-V) and an auditory version (ANT-A). The results indicated that AVGPs showed an orienting advantage under the effects of conflicting stimuli (executive control) in both tasks, and that NAVGPs may have a reduced ability to disengage when conflict occurs in the visual task, suggesting that AVGs can improve guidance toward targets and inhibition of distractors via executive control. AVGPs also showed more correlations among attentional functions. Importantly, the alerting functions of AVGPs in the visual and auditory tasks were significantly related, indicating that AVG experience may generate a supramodal alerting effect across the visual and auditory modalities.
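
For reference, the three attention network scores probed by ANT-type tasks are conventionally computed as reaction time differences between cue and flanker conditions, as in the sketch below; the condition means are invented for illustration.

mean_rt = {  # milliseconds, illustrative values only
    "no_cue": 580.0,
    "double_cue": 545.0,
    "center_cue": 550.0,
    "spatial_cue": 510.0,
    "congruent": 520.0,
    "incongruent": 610.0,
}

alerting = mean_rt["no_cue"] - mean_rt["double_cue"]        # benefit of a warning cue
orienting = mean_rt["center_cue"] - mean_rt["spatial_cue"]  # benefit of spatial information
executive = mean_rt["incongruent"] - mean_rt["congruent"]   # cost of conflicting flankers

print(f"alerting = {alerting:.0f} ms, orienting = {orienting:.0f} ms, "
      f"executive control = {executive:.0f} ms")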


2020
Author(s): Irune Fernandez-Prieto, Ferran Pons, Jordi Navarra

Crossmodal correspondences between auditory pitch and spatial elevation have been demonstrated extensively in adults: high- and low-pitched sounds tend to be mapped onto upper and lower spatial positions, respectively. We hypothesised that this crossmodal link could be influenced by the development of spatial and linguistic abilities during childhood. To explore this possibility, 70 children (9-12 years old) divided into three groups (4th, 5th, and 6th grade of primary school) completed a crossmodal test that evaluated the perceptual correspondence between pure tones and spatial elevation. Additionally, we examined possible correlations between the students' performance in this crossmodal task and other auditory, spatial, and linguistic measures. Auditory pitch performance was measured with a frequency classification test. Participants also completed three subtests of the Wechsler Intelligence Scale for Children-IV (WISC-IV): (1) Vocabulary, to assess verbal intelligence; (2) Matrix reasoning, to measure visuospatial reasoning; and (3) Block design, to analyse visuospatial/motor skills. The results revealed crossmodal effects between pitch and spatial elevation. Additionally, performance in the block design subtest correlated with both the pitch-elevation crossmodal correspondence and the auditory frequency classification test. No correlation was observed between the auditory tasks and the matrix reasoning or vocabulary subtests. This suggests (1) that the crossmodal correspondence between pitch and spatial elevation is already consolidated by the age of 9, and (2) that good performance in a pitch-based auditory task is mildly associated, in childhood, with good performance in visuospatial/motor tasks.
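
A correlation analysis like the one reported above could be run as in the following sketch; the per-child scores are simulated, and Spearman's rank correlation is shown as one common choice (the abstract does not state which coefficient was used).

import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(5)
block_design = rng.normal(10.0, 3.0, 70)                           # WISC-IV scaled scores (simulated)
crossmodal_effect = 0.3 * block_design + rng.normal(0.0, 2.0, 70)  # congruency benefit (simulated)

rho, p = spearmanr(block_design, crossmodal_effect)
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")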


2020, Vol 10 (11), pp. 894
Author(s): W. Wiktor Jedrzejczak, Rafal Milner, Malgorzata Ganc, Edyta Pilka, Henryk Skarzynski

The medial olivocochlear (MOC) system is thought to be responsible for modulating peripheral hearing through descending (efferent) pathways. This study investigated the connection between peripheral hearing function and conscious attention during tasks of two different modalities, auditory and visual. Peripheral hearing function was evaluated by analyzing the amount of suppression of otoacoustic emissions (OAEs) by contralateral acoustic stimulation (CAS), a well-known effect of the MOC. Simultaneously, attention was evaluated by event-related potentials (ERPs). Although the ERPs showed clear differences in the processing of auditory and visual tasks, there were no differences in the levels of OAE suppression. We also analyzed OAEs for the highest-magnitude resonant-mode signal detected by the matching pursuit method, but again found no significant effect of task, and no difference in noise level or number of rejected trials. However, for auditory tasks, the amplitude of the P3 cognitive wave correlated negatively with the level of OAE suppression. We conclude that MOC function appears unchanged across tasks of different modalities, although the cortex still remains able to modulate some aspects of MOC activity.
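
The suppression measure at the heart of this design is simply the OAE level without CAS minus the level with CAS, computed per subject, as in this minimal sketch with simulated levels.

import numpy as np

rng = np.random.default_rng(3)
oae_quiet = rng.normal(10.0, 2.0, 20)                # OAE level in dB SPL without CAS (simulated)
oae_with_cas = oae_quiet - rng.normal(1.5, 0.5, 20)  # level with contralateral noise (simulated)

suppression = oae_quiet - oae_with_cas               # positive values = MOC-driven suppression
print(f"mean suppression = {suppression.mean():.2f} dB "
      f"(SD = {suppression.std(ddof=1):.2f})")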


2020
Author(s): Stefano Ioannucci, Guillermo Borragán, Alexandre Zénon

Theories of mental fatigue disagree on whether performance decrement is caused by motivational or functional alterations. We tested the assumption that keeping neural networks active for an extended period entrains consequences at the subjective, objective, and neurophysiological levels (the defining characteristics of fatigue) when confounds such as motivation, boredom, and level of skill are controlled. We reveal that passive visual stimulation affects visual gamma activity and the performance of a subsequent task carried out in the same portion of visual space. This outcome was influenced by participants' level of arousal, manipulated through variations in the difficulty of concurrent auditory tasks. Thus, repeated stimulation of neural networks alters their functional performance, and this mechanism seems to play a role in the development of mental fatigue.


2020, Vol 63 (11), pp. 3865-3876
Author(s): Michal Icht, Yaniv Mama, Riki Taitelbaum-Swead

Purpose: The aim of this study was to test whether a group of older postlingually deafened cochlear implant users (OCIs) use verbal memory strategies similar to those used by older normal-hearing adults (ONHs). Verbal memory functioning was assessed in the visual and auditory modalities separately, enabling us to eliminate possible modality-based biases. Method: Participants performed separate visual and auditory verbal memory tasks. In each task, the visually or aurally presented study words were learned by vocal production (saying aloud) or by no production (reading silently or listening), followed by a free recall test. Twenty-seven older adults (> 60 years) participated (OCI = 13, ONH = 14), all of whom demonstrated intact cognitive abilities. All OCIs showed good open-set speech perception in quiet. Results: Both ONHs and OCIs showed production benefits (higher recall rates for vocalized than for non-vocalized words) in the visual and auditory tasks. The ONHs showed similar production benefits in the two tasks, whereas the OCIs demonstrated a smaller production effect in the auditory task. Conclusions: These results may indicate that the ONHs and OCIs used different modality-specific memory strategies. The group differences in memory performance suggest that, even when deafness occurs after language acquisition is complete, reduced and distorted external auditory stimulation leads to a deterioration of the phonological representation of sounds, possibly making auditory long-term verbal memory less efficient.
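
The production effect reported above reduces to a simple difference in recall proportions, computed per modality; the proportions in this sketch are invented for illustration.

# Hypothetical proportions of study words recalled, by modality and study condition.
recall = {
    ("visual", "aloud"): 0.62, ("visual", "silent"): 0.45,
    ("auditory", "aloud"): 0.58, ("auditory", "silent"): 0.50,
}

for modality in ("visual", "auditory"):
    # Production benefit = recall rate for produced words minus non-produced words.
    benefit = recall[(modality, "aloud")] - recall[(modality, "silent")]
    print(f"{modality} production benefit = {benefit:.2f}")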

