The auditory system

2020 ◽  
pp. 244-252
Author(s):  
Edmund T. Rolls

Brainstem mechanisms for left-right auditory localization are described: interaural time differences for low frequencies, and interaural intensity differences for high frequencies. Auditory localization in 3D space using the pinna as an asymmetrical antenna is also described. The auditory cortical areas in the superior temporal gyrus are hierarchically organised and can detect feature combinations. Auditory neurons in the orbitofrontal cortex and nearby inferior frontal gyrus can respond to vocalizations used in communication, and the human orbitofrontal cortex is involved in identifying the emotional expression in a voice, which is useful in social behaviour.

2000 ◽  
Vol 84 (3) ◽  
pp. 1588-1598 ◽  
Author(s):  
Anne-Lise Giraud ◽  
Christian Lorenzi ◽  
John Ashburner ◽  
Jocelyne Wable ◽  
Ingrid Johnsrude ◽  
...  

The cerebral representation of the temporal envelope of sounds was studied in five normal-hearing subjects using functional magnetic resonance imaging. The stimuli were white noise, sinusoidally amplitude-modulated at frequencies ranging from 4 to 256 Hz. This range includes low AM frequencies (up to 32 Hz) essential for the perception of the manner of articulation and syllabic rate, and high AM frequencies (above 64 Hz) essential for the perception of voicing and prosody. The right lower brainstem (superior olivary complex), the right inferior colliculus, the left medial geniculate body, Heschl's gyrus, the superior temporal gyrus, the superior temporal sulcus, and the inferior parietal lobule were specifically responsive to AM. Global tuning curves in these regions suggest that the human auditory system is organized as a hierarchical filter bank, each processing level responding preferentially to a given AM frequency, 256 Hz for the lower brainstem, 32–256 Hz for the inferior colliculus, 16 Hz for the medial geniculate body, 8 Hz for the primary auditory cortex, and 4–8 Hz for secondary regions. The time course of the hemodynamic responses showed sustained and transient components with reverse frequency dependent patterns: the lower the AM frequency the better the fit with a sustained response model, the higher the AM frequency the better the fit with a transient response model. Using cortical maps of best modulation frequency, we demonstrate that the spatial representation of AM frequencies varies according to the response type. Sustained responses yield maps of low frequencies organized in large clusters. Transient responses yield maps of high frequencies represented by a mosaic of small clusters. Very few voxels were tuned to intermediate frequencies (32–64 Hz). We did not find spatial gradients of AM frequencies associated with any response type. 
Our results suggest that two frequency ranges (up to 16 Hz, and 128 Hz and above) are represented in the cortex by different response types. However, the spatial segregation of these two ranges is not systematic. Most cortical regions were tuned to low frequencies and only a few to high frequencies. Yet, voxels that showed a preference for low frequencies were also responsive to high frequencies. Overall, our study shows that the temporal envelope of sounds is processed by both distinct (hierarchically organized series of filters) and shared (high and low AM frequencies eliciting different responses at the same cortical locus) neural substrates. This layout suggests that the human auditory system is organized in a parallel fashion that allows a degree of separate routing for groups of AM frequencies conveying different information and preserves a possibility for integration of complementary features in cortical auditory regions.
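The stimuli described above (white noise, sinusoidally amplitude-modulated at a given rate) can be sketched in a few lines. The function below is illustrative only; parameter names and defaults (sampling rate, modulation depth) are assumptions, not taken from the study.

```python
import numpy as np

def am_noise(duration_s=1.0, fs=16000, am_hz=8.0, depth=1.0, seed=0):
    """White noise with a sinusoidal amplitude-modulation envelope.

    Parameter names and defaults are illustrative, not from the study.
    """
    rng = np.random.default_rng(seed)
    t = np.arange(int(duration_s * fs)) / fs
    carrier = rng.standard_normal(t.size)                     # white-noise carrier
    envelope = 1.0 + depth * np.sin(2.0 * np.pi * am_hz * t)  # AM envelope at am_hz
    return carrier * envelope
```

Sweeping `am_hz` from 4 to 256 Hz would reproduce the range of modulation frequencies used in the experiment.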


2019 ◽  
Author(s):  
Lílian Rodrigues de Almeida ◽  
Paul A. Pope ◽  
Peter Hansen

In our previous studies we supported the claim that the motor theory is modulated by task load. Motoric participation in phonological processing increases from speech perception to speech production, with the endpoints of the dorsal stream having changing and complementary weightings for processing: the left inferior frontal gyrus (LIFG) being increasingly relevant and the left superior temporal gyrus (LSTG) being decreasingly relevant. Our previous results for neurostimulation of the LIFG support this model. In this study we investigated whether our claim that the motor theory is modulated by task load holds in (frontal) aphasia. Persons with aphasia (PWA) after stroke typically have damage to brain areas responsible for phonological processing. They may present variable patterns of recovery and, consequently, variable strategies of phonological processing. Here these strategies were investigated in two PWA with simultaneous fMRI and tDCS of the LIFG during speech perception and speech production tasks. Anodal tDCS excitation and cathodal tDCS inhibition should increase with the relevance of the target for the task. Cathodal tDCS over a target of low relevance could also induce compensation by the remaining nodes. Responses of PWA to tDCS would further depend on their pattern of recovery. Responses would depend on the responsiveness of the perilesional area, and could be weaker than in controls due to an overall hypoactivation of the cortex. Results suggest that the analysis of motor codes for articulation during phonological processing remains in frontal aphasia and that tDCS is a promising diagnostic tool to investigate the individual processing strategies.


2015 ◽  
Vol 19 (2) ◽  
pp. 331-346 ◽  
Author(s):  
XIANGZHI MENG ◽  
HANLIN YOU ◽  
MEIXIA SONG ◽  
AMY S. DESROCHES ◽  
ZHENGKE WANG ◽  
...  

Auditory phonological processing skills are critical for successful reading development in English not only in native (L1) speakers but also in second language (L2) learners. However, the neural deficits of auditory phonological processing remain unknown in English-as-a-second-language (ESL) learners with reading difficulties. Here we investigated neural responses during spoken word rhyme judgments in typical and impaired ESL readers in China. The impaired readers showed comparable activation in the left superior temporal gyrus (LSTG), but reduced activation in the left inferior frontal gyrus (LIFG) and left fusiform, and reduced connectivity between the LSTG and left fusiform, when compared to typical readers. These findings suggest that impaired ESL readers have relatively intact representations but impaired manipulation of phonology, and reduced or absent automatic access to orthographic representations. This is consistent with previous findings in native English speakers and suggests a common neural mechanism underlying English impairment across L1 and L2 learners.


2015 ◽  
Vol 804 ◽  
pp. 25-29 ◽  
Author(s):  
Wanlop Harnnarongchai ◽  
Kantima Chaochanchaikul

The sound absorbing efficiency of natural rubber (NR) foam is affected by the cell morphology of the foam. Potassium oleate (K-oleate) and sodium bicarbonate (NaHCO3) were used as blowing agents to create open-cell foam. Amounts of the blowing agent were varied from 0.5 to 8.0 parts per hundred of rubber (phr) to evaluate the cell size and number of foam cells, as well as the sound absorption coefficient, of NR foam. The NR foam specimens were prepared using a mould and an air-circulating oven for the vulcanizing and foaming processes. The results indicated that K-oleate at 2.0 phr and NaHCO3 at 0.5 phr formed NR foam with the smallest cell size and the largest number of foam cells. At low frequencies, the optimum sound absorption coefficient of NR foam was obtained with 2.0 phr K-oleate; at high frequencies, it was obtained with 0.5 phr NaHCO3.


1993 ◽  
Vol 107 (3) ◽  
pp. 179-182 ◽  
Author(s):  
J. R. Cullen ◽  
M. J. Cinnamond

The relationship between diabetes and sensorineural hearing loss has been disputed. This study compares 44 insulin-dependent diabetics with 38 age and sex matched controls. All had pure tone and speech audiometry performed, with any diabetics showing sensorineural deafness undergoing stapedial reflex decay tests. In 14 diabetics stapedial reflex tests showed no tone decay in any patient, but seven showed evidence of recruitment. Analysis of variance showed the diabetics to be significantly deafer than the control population. The hearing loss affected high frequencies in both sexes, but also low frequencies in the males. Speech discrimination scores showed no differences. Further analysis by sex showed the males to account for most of the differences. Analysis of the audiograms showed mostly a high tone loss. Finally, duration of diabetes, insulin dosage and family history of diabetes were not found to have a significant effect on threshold.


2006 ◽  
Vol 18 (11) ◽  
pp. 1789-1798 ◽  
Author(s):  
Angela Bartolo ◽  
Francesca Benuzzi ◽  
Luca Nocetti ◽  
Patrizia Baraldi ◽  
Paolo Nichelli

Humor is a unique ability in human beings. Suls [A two-stage model for the appreciation of jokes and cartoons. In J. H. Goldstein & P. E. McGhee (Eds.), The psychology of humour. Theoretical perspectives and empirical issues. New York: Academic Press, 1972, pp. 81–100] proposed a two-stage model of humor: detection and resolution of incongruity. Incongruity is generated when a prediction is not confirmed in the final part of a story. To comprehend humor, it is necessary to revisit the story, transforming an incongruous situation into a funny, congruous one. Patient and neuroimaging studies carried out until now lead to different outcomes. In particular, patient studies found that right brain-lesion patients have difficulties in humor comprehension, whereas neuroimaging studies suggested a major involvement of the left hemisphere in both humor detection and comprehension. To prevent activation of the left hemisphere due to language processing, we devised a nonverbal task comprising cartoon pairs. Our findings demonstrate activation of both the left and the right hemispheres when comparing funny versus nonfunny cartoons. In particular, we found activation of the right inferior frontal gyrus (BA 47), the left superior temporal gyrus (BA 38), the left middle temporal gyrus (BA 21), and the left cerebellum. These areas were also activated in a nonverbal task exploring attribution of intention [Brunet, E., Sarfati, Y., Hardy-Bayle, M. C., & Decety, J. A PET investigation of the attribution of intentions with a nonverbal task. Neuroimage, 11, 157–166, 2000]. We hypothesize that the resolution of incongruity might occur through a process of intention attribution. We also asked subjects to rate the funniness of each cartoon pair. A parametric analysis showed that the left amygdala was activated in relation to subjective amusement. We hypothesize that the amygdala plays a key role in giving humor an emotional dimension.


Author(s):  
Jerome E. Manning

Statistical energy analysis provides a technique to predict acoustic and vibration levels in complex dynamic systems. The technique is most useful for broad-band excitation at high frequencies where many modes contribute to the response in any given frequency band. At mid and low frequencies, the number of modes contributing to the response may be quite small. In this case SEA predictions show large variability from measured data and may not be useful for vibroacoustic design. This paper focuses on the use of measured data to improve the accuracy of the predictions. Past work to measure the SEA coupling and damping loss factors has not been successful for a broad range of systems that do not have light coupling. This paper introduces a new hybrid SEA technique that combines measured mobility functions with analytical SEA predictions. The accuracy of the hybrid technique is shown to be greatly improved at mid and low frequencies.
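For readers unfamiliar with SEA, the core prediction step is a linear power balance among band-averaged subsystem energies. The sketch below solves the textbook two-subsystem case; all loss-factor and power values are assumed for illustration, and this is the classical formulation, not the hybrid technique introduced in the paper.

```python
import numpy as np

# Classical two-subsystem SEA power balance:
#   P_1 = omega * ((eta_1 + eta_12) * E_1 - eta_21 * E_2)
#   P_2 = omega * (-eta_12 * E_1 + (eta_2 + eta_21) * E_2)
omega = 2.0 * np.pi * 1000.0     # band centre frequency, rad/s (illustrative)
eta = np.array([0.01, 0.02])     # damping loss factors (assumed)
eta12, eta21 = 0.001, 0.0005     # coupling loss factors (assumed)
P = np.array([1.0, 0.0])         # input power only into subsystem 1, W

A = omega * np.array([
    [eta[0] + eta12, -eta21],
    [-eta12, eta[1] + eta21],
])
E = np.linalg.solve(A, P)        # band-averaged subsystem energies, J
```

With power injected only into subsystem 1, the directly driven subsystem holds the larger energy; measured coupling and damping loss factors (or, as in this paper, measured mobility functions) would replace the assumed values above.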


Author(s):  
Gundula B. Runge ◽  
Al Ferri ◽  
Bonnie Ferri

This paper considers an anytime strategy to implement controllers that react to changing computational resources. The anytime controllers developed in this paper are suitable for cases when the time scale of switching is on the order of the task execution time, that is, on the time scale found commonly with sporadically missed deadlines. This paper extends the prior work by developing frequency-weighted anytime controllers. The selection of the weighting function is driven by the expectation of the situations that would require anytime operation. For example, if the anytime operation is due to occasional and isolated missed deadlines, then the weighting on high frequencies should be larger than that for low frequencies. Low frequency components will have a smaller change over one sample time, so failing to update these components for one sample period will have less effect than with the high frequency components. An example will be included that applies the anytime control strategy to a model of a DC motor with deadzone and saturation nonlinearities.
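The claim that low-frequency components change less over one sample time can be checked directly: for a unit sinusoid at frequency f, the worst-case change over one sample period T is 2·sin(pi·f·T), bounded above by 2·pi·f·T. A minimal numeric check, with an assumed sample period:

```python
import numpy as np

def max_one_step_change(f_hz, T_s, n=100_000):
    """Worst-case change of a unit sinusoid at f_hz over one sample period T_s."""
    t = np.linspace(0.0, 1.0, n)
    step = np.sin(2.0 * np.pi * f_hz * (t + T_s)) - np.sin(2.0 * np.pi * f_hz * t)
    return float(np.abs(step).max())

T = 0.001  # 1 kHz sampling, an assumed rate
low = max_one_step_change(5.0, T)     # slow component: small change if an update is missed
high = max_one_step_change(200.0, T)  # fast component: much larger change over one sample
```

This asymmetry is why a weighting that emphasizes high frequencies is the natural choice when anytime operation is triggered by isolated missed deadlines.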


2000 ◽  
Vol 39 (10) ◽  
pp. 1645-1656 ◽  
Author(s):  
Gail M. Skofronick-Jackson ◽  
James R. Wang

Profiles of the microphysical properties of clouds and rain cells are essential in many areas of atmospheric research and operational meteorology. To enhance the understanding of the nonlinear and underconstrained relationships between cloud and hydrometeor microphysical profiles and passive microwave brightness temperatures, estimations of cloud profiles for an anvil region, a convective region, and an updraft region of an oceanic squall were performed. The estimations relied on comparisons between radiative transfer calculations of incrementally estimated microphysical profiles and concurrent dual-altitude wideband brightness temperatures from the 22 February 1993 flight during the Tropical Ocean and Global Atmosphere Coupled Ocean–Atmosphere Response Experiment. The wideband observations (10–220 GHz) are necessary for estimating cloud profiles reaching up to 20 km. The low frequencies enhance the rain and cloud water profiles, and the high frequencies are required to detail the higher-altitude ice microphysics. A microphysical profile was estimated for each of the three regions of the storm. Each of the three estimated profiles produced calculated brightness temperatures within ∼10 K of the observations. A majority of the total iterative adjustments were to the estimated profile’s frozen hydrometeor characteristics and were necessary to match the high-frequency calculations with the observations. This requirement indicates a need to validate cloud-resolving models using high frequencies. Some difficulties matching the 37-GHz observation channels on the DC-8 and ER-2 aircraft with the calculations simulated at the two aircraft heights (∼11 km and 20 km, respectively) were noted, and potential causes were presented.

