Shorter Articles and Notes: Left-Right Differences in Auditory Perception of Verbal and Non-Verbal Material by Children

1967 ◽  
Vol 19 (4) ◽  
pp. 334-336 ◽  
Author(s):  
Dirk J. Bakker

One hundred and twenty children, 60 boys and 60 girls, varying in age between 6 and 12 years, were presented with a series of digits and Morse-like sound patterns to each ear separately. As predicted, sound patterns were found to be better retained when presented to the left ear than when presented to the right ear. Series of digits, however, were not retained better via the right ear than via the left ear. The dominance of the left ear for non-verbal material decreases with increasing age. For verbal material, a quadratic relation between the dominance of the right ear and age was established.

2010 ◽  
Vol 23 (3) ◽  
pp. 107-115 ◽  
Author(s):  
Sophie Blanchet ◽  
Geneviève Gagnon ◽  
Cyril Schneider

This research investigated the contribution of the dorsolateral prefrontal cortex (DLPFC) to the attentional resources involved in episodic encoding of both verbal and non-verbal material. Paired-pulse transcranial magnetic stimulation (TMS) was used to interfere transiently with either the left or right DLPFC during encoding under full attention (FA) or under divided attention (DA) in a recognition paradigm using words and random shapes. Participants recognized fewer items after TMS over the left DLPFC than over the right DLPFC during FA encoding. However, TMS over the left DLPFC did not impair performance when compared to the sham condition. Conversely, participants recognized fewer items after TMS over the right DLPFC in DA encoding compared to the sham condition, but not compared to TMS over the left DLPFC. These effects were found for both words and random shapes. These results suggest that the right DLPFC plays an important role in successful encoding with a concomitant task, regardless of the type of material.


Author(s):  
Mayada Elsabbagh ◽  
Henri Cohen ◽  
Annette Karmiloff-Smith

Abstract We examined auditory perception in Williams syndrome by investigating the strategies used in organizing sound patterns into coherent units. In Experiment 1, we investigated the streaming of sound sequences into perceptual units, on the basis of pitch cues, in a group of children and adults with Williams syndrome compared to typical controls. We showed that individuals with Williams syndrome were sensitive to the same pitch cues as typical children and adults when streaming these patterns. In Experiment 2, we evaluated differences in reliance on pitch and contour cues in unfamiliar melody perception in a group of adults with Williams syndrome relative to typical control children and adults. Unlike controls, who demonstrated greater proficiency when contour cues were available, adults with Williams syndrome showed no such advantage.


Linguistics ◽  
2011 ◽  
Author(s):  
Paul de Lacy

The term “phonology” has several meanings. It is often used to refer to generalizations about sounds and sound combinations (often called sound patterns) in and across languages. In contrast, within generative grammar “phonology” refers to a particular cognitive module.

Many generative theories propose that the module takes inputs, consisting of strings of symbols (called phonological symbols, segments, phonemes, underlying forms, or the input, depending on the theory). The symbols may be accompanied by information about morphology, syntax, and perhaps even some aspects of meaning. The module produces an output representation, which serves as the input to the phonetic modules; these modules ultimately provoke muscle movements that can result in speech sounds. A common point of confusion is the belief that the phonological module manipulates speech sounds; in fact, the phonology manipulates representations that are sent to the phonetic module, which then converts them into phonetic representations that are then implemented as muscle movements that, given the right factors, can produce audible sound. The two meanings of “phonology” are not in opposition: phonology (sound patterns) makes up some of the data used in theorizing about the phonology (the cognitive module).

There are large variations in sound patterns across languages. For example, Hawai’ian has nine contrastive consonants, whereas Ubykh has eighty-six. However, there are commonalities too, though many are disputed. For example, every language has either an alveolar voiceless stop of some kind or a glottal stop or both. Similarly, no language lacks words that start with a consonant. There are also large variations in phonological modules among humans; however, a great deal of research contends that all phonological modules share common properties, at least in their underlying structures. Although the outputs of languages are diverse, much work has argued that the representations and processes used to generate phonological outputs are very similar, perhaps identical, in all phonological modules, with only certain aspects of the phonological module (e.g., rules, constraint ranking) differing between modules.
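The input-to-output mapping that generative theories attribute to the phonological module can be illustrated with a deliberately toy sketch. Everything here is a hypothetical illustration, not a claim about any specific theory: the single rewrite rule (word-final obstruent devoicing), the segment inventory, and the function name `phonology` are all invented for the example.

```python
# Toy sketch of a phonological module: an underlying form (a list of
# segment symbols) is mapped to a surface representation by a rule.
# The rule shown, word-final obstruent devoicing, is illustrative only.
DEVOICE = {"b": "p", "d": "t", "g": "k", "z": "s", "v": "f"}

def phonology(segments):
    """Apply final devoicing to an underlying form, returning the output."""
    if segments and segments[-1] in DEVOICE:
        return segments[:-1] + [DEVOICE[segments[-1]]]
    return list(segments)

# Underlying /hund/ surfaces as [hunt]; forms ending in a voiceless
# segment pass through unchanged.
surface = phonology(list("hund"))  # → ['h', 'u', 'n', 't']
```

A real phonological module would of course involve many interacting rules or ranked constraints; the point is only the architecture: symbolic input in, transformed representation out, with the phonetic modules downstream.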


2021 ◽  
Vol 12 ◽  
Author(s):  
César E. Corona-González ◽  
Luz María Alonso-Valerdi ◽  
David I. Ibarra-Zarate

Binaural beats (BB) consist of two slightly different auditory frequencies (one presented to each ear) whose frequency difference falls within one of the clinical electroencephalographic (EEG) bandwidths, namely delta, theta, alpha, beta, or gamma. This auditory stimulation has been widely used to modulate brain rhythms and thus induce the mental condition associated with the EEG bandwidth in use. The aim of this research was to investigate whether personalized BB (specifically those within the theta and beta EEG bands) improve brain entrainment. Personalized BB consisted of pure tones with a carrier tone of 500 Hz in the left ear together with an adjustable frequency in the right ear, defined for theta BB (fc for the theta EEG band was 4.60 Hz ± 0.70 SD) and beta BB (fc for the beta EEG band was 18.42 Hz ± 2.82 SD). The adjustable frequencies were estimated for each participant according to their heart rate by applying the Brain-Body Coupling Theorem postulated by Klimesch. To this end, 20 healthy volunteers were stimulated with their personalized theta and beta BB for 20 min while their EEG signals were recorded with 22 channels. EEG analysis was based on the comparison of power spectral density among three mental conditions: (1) theta BB stimulation, (2) beta BB stimulation, and (3) resting state. Results showed larger absolute power differences in both BB stimulation sessions than in the resting state over bilateral temporal and parietal regions. This power change seems to be related to auditory perception and sound localization. However, no significant differences were found between the theta and beta BB sessions, although different brain entrainments were expected, since theta and beta BB are thought to induce relaxation and readiness, respectively. In addition, relative power analysis (theta BB/resting state) revealed alpha-band desynchronization in the parieto-occipital region when volunteers listened to theta BB, suggesting that participants felt uncomfortable. In conclusion, neural resynchronization was achieved with both personalized theta and beta BB, but no distinct mental conditions appeared to be induced.
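The stimulus construction described above (a fixed 500 Hz carrier in the left ear and a tone offset by the desired beat frequency in the right) can be sketched in a few lines. This is a minimal sketch under stated assumptions: the function name `binaural_beat`, the sample rate, and the duration are illustrative choices, not details taken from the study.

```python
import math

def binaural_beat(carrier_hz, beat_hz, duration_s, sample_rate=44100):
    """Generate a stereo binaural-beat signal as (left, right) sample lists.

    The left channel holds the carrier tone; the right channel is offset
    upward by beat_hz, so the perceived beat frequency equals beat_hz.
    """
    n = int(duration_s * sample_rate)
    left = [math.sin(2 * math.pi * carrier_hz * i / sample_rate)
            for i in range(n)]
    right = [math.sin(2 * math.pi * (carrier_hz + beat_hz) * i / sample_rate)
             for i in range(n)]
    return left, right

# Example: a theta-band beat of 4.6 Hz on a 500 Hz carrier, matching the
# group-mean theta fc reported above (per-participant values would be
# adjusted to heart rate in the actual protocol).
left, right = binaural_beat(500.0, 4.6, duration_s=0.1)
```

In practice the two channels would be written to a stereo audio buffer and played over headphones, since the beat percept arises only when each ear receives its own tone.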


2018 ◽  
Vol 23 (01) ◽  
pp. 070-076 ◽  
Author(s):  
Cláudia Carneiro ◽  
Anna Almeida ◽  
Angela Ribas ◽  
Karolina Kluk-De Kort ◽  
Daviany Lima ◽  
...  

Introduction Dichotic listening refers to the ability to hear different sounds presented to each ear simultaneously. Objective The aim of the present study was to assess dichotic listening in women throughout the menstrual cycle. Methods The volunteers who met the eligibility criteria participated in a dichotic listening assessment composed of three tests: 1) staggered spondaic word test; 2) dichotic digits test; and 3) consonant-vowel test. The female participants were tested during two different phases of the menstrual cycle: the follicular (days 11 to 13) and luteal (days 23 to 26) phases. The phases were confirmed by measuring serum levels of the hormone estradiol. Results A total of 20 volunteers aged 18 to 49 years participated in the study (9 females and 11 males). In test 1, only the right ear of females showed better performance during the follicular phase (high estrogen levels), compared with the luteal phase (low estrogen levels); in test 2, there were no significant differences for any of the groups; and in test 3, both males and females showed significantly better performance in their right ear compared with their left ear. Conclusion The better performance of females during the follicular phase of the cycle may indicate that estrogen levels might have an influence on dichotic listening in women.


1979 ◽  
Vol 48 (2) ◽  
pp. 579-585 ◽  
Author(s):  
Paul L. Wang

A series of stimuli, words and faces, were presented tachistoscopically to 24 dextrals and 12 sinistrals. The stimuli were presented to one eye at a time and the subjects were instructed to respond to specific words or stimuli with a specific hand. The results indicate that (1) cerebral functional asymmetry is related to handedness; in the dextrals, the left hemisphere is more specialized in verbal recognition, while in the sinistrals, the right hemisphere is more specialized in recognizing non-verbal material. (2) An ipsilateral hand-and-eye combination is a valid method of measuring intrahemispheric information processing, provided that the tachistoscopically presented visual stimuli are capable of inciting specialized hemispheric function. The dominant relationship among the crossed and non-crossed visual pathways is discussed.


1973 ◽  
Vol 36 (1) ◽  
pp. 175-184 ◽  
Author(s):  
Amiram Carmon ◽  
Israel Nachshon

Laterality differences in binocular fusion of digits were examined using groups of Ss with either left-to-right or right-to-left reading habits. In hemifield presentation, opposing laterality differences were found between the groups when presentation was 8° 30' off center: English readers showed a right visual-field preference, while Hebrew readers showed the opposite, but no laterality differences were observed at 3° presentation. In simultaneous presentation to both fields, fusion was superior in the right visual field in both groups. The results obtained in hemifield presentation were in accordance with those obtained in conventional tachistoscopic perception of verbal material and can be explained by directional scanning tendencies. The results obtained in simultaneous presentation to both fields can be interpreted as demonstrating left cerebral dominance for perception of verbal stimuli. It is concluded that cerebral dominance in visual perception of verbal material can be demonstrated in situations where different inputs in crossed and uncrossed sensory projections are delivered to both hemispheres simultaneously.


Symmetry ◽  
2021 ◽  
Vol 14 (1) ◽  
pp. 24 ◽  
Author(s):  
Beatriz Estalayo-Gutiérrez ◽  
María José Álvarez-Pasquín ◽  
Francisco Germain

The objective of this work is to confirm the asymmetry in non-linguistic auditory perception, as well as the influence of anxiety-depressive disorders on it. Eighty-six people were recruited into the emotional well-being group, fifty-six into the anxiety group, fourteen into the depression group, and seventy-seven into the mixed group. In each group, audiograms were obtained from both ears and the differences were statistically analyzed. Differences in hearing sensitivity were found between the two ears in the general population, and such differences increased in people with anxiety-depressive disorders. In the presence of anxiety-depressive disorders, the right ear suffered greater hearing loss than the left, showing peaks of hyper-hearing at the frequency of 4000 Hz in the anxiety subgroup, and hearing loss in the depression subgroup. In relation to anxiety, the 4:8 pattern appeared in the right ear when the person had suffered acute stress in the 2 days prior to the audiometry, and in both ears if they had suffered stress in the 3–30 days prior. In conclusion, the advantage of the left ear in auditory perception increased with these disorders, showing a hyper-hearing peak in anxiety and a hearing loss in depression.


2020 ◽  
Author(s):  
Kurt Steinmetzger ◽  
Zhengzheng Shen ◽  
Helmut Riedel ◽  
André Rupp

Abstract To validate the use of functional near-infrared spectroscopy (fNIRS) in auditory perception experiments, combined fNIRS and electroencephalography (EEG) data were obtained from normal-hearing subjects passively listening to speech-like stimuli without linguistic content. The fNIRS oxy-haemoglobin (HbO) results were found to be inconsistent with the deoxy-haemoglobin (HbR) and EEG data, as they were dominated by pronounced cerebral blood stealing in the anterior-to-posterior direction. This large-scale bilateral gradient in the HbO data masked the right-lateralised neural activity in the auditory cortex that was clearly evident in the HbR data and EEG source reconstructions. When the subjects were subsequently split into subgroups with more positive or more negative HbO responses in the right auditory cortex, the former group surprisingly showed smaller event-related potentials, less activity in the frontal cortex, and increased EEG alpha power, all indicating reduced attention and vigilance. These findings thus suggest that positive HbO responses in the auditory cortex may not necessarily be a favourable result when investigating auditory perception using fNIRS. More generally, the results show that the interpretation of fNIRS HbO signals can be misleading and demonstrate the benefits of combined fNIRS-EEG analyses in resolving this issue.

