The acoustic features of speech sounds in a model of auditory processing: vowels and voiceless fricatives

1988 · Vol 16 (1) · pp. 77-91
Author(s): Shihab Shamma

2007 · Vol 363 (1493) · pp. 1023-1035
Author(s): Roy D Patterson, Ingrid S Johnsrude

In this paper, we describe domain-general auditory processes that we believe are prerequisite to the linguistic analysis of speech. We discuss biological evidence for these processes and how they might relate to processes that are specific to human speech and language. We begin with a brief review of (i) the anatomy of the auditory system and (ii) the essential properties of speech sounds. Section 4 describes the general auditory mechanisms that we believe are applied to all communication sounds, and how functional neuroimaging is being used to map the brain networks associated with domain-general auditory processing. Section 5 discusses recent neuroimaging studies that explore where such general processes give way to those that are specific to human speech and language.


2001 · Vol 12 (3) · pp. 459-466
Author(s): Maria Jaramillo, Titta Ilvonen, Teija Kujala, Paavo Alku, Mari Tervaniemi, ...

eLife · 2016 · Vol 5
Author(s): Connie Cheung, Liberty S Hamilton, Keith Johnson, Edward F Chang

In humans, listening to speech evokes neural responses in the motor cortex. This has been controversially interpreted as evidence that speech sounds are processed as articulatory gestures. However, it is unclear what information is actually encoded by such neural activity. We used high-density direct human cortical recordings while participants spoke and listened to speech sounds. Motor cortex neural patterns during listening were substantially different from those during articulation of the same sounds. During listening, we observed neural activity in the superior and inferior regions of ventral motor cortex. During speaking, responses were distributed throughout somatotopic representations of speech articulators in motor cortex. The structure of responses in motor cortex during listening was organized along acoustic features, similar to auditory cortex, rather than along articulatory features as during speaking. Motor cortex thus does not contain articulatory representations of perceived actions in speech but rather represents auditory vocal information.


2016 · Vol 2 (3) · pp. 123-137
Author(s): Tatiana V. Shuiskaya, Svetlana V. Androsova

2018
Author(s): Marcus Galle, Jamie Klein-Packard, Kayleen Schreiber, Bob McMurray

Speech unfolds over time, and the cues for even a single phoneme are rarely available simultaneously. Consequently, to recognize a single phoneme, listeners must integrate material over several hundred milliseconds. Prior work contrasts two accounts: 1) a memory buffer account, in which listeners accumulate auditory information in memory and only access higher-level representations (i.e., lexical representations) when sufficient information has arrived; and 2) an immediate integration strategy, in which lexical representations can be partially activated on the basis of early cues and then updated as more information arrives. These studies have uniformly shown evidence for immediate integration across a variety of phonetic distinctions. We attempted to extend this to fricatives, a class of speech sounds that requires not only temporal integration of asynchronous cues (the frication, followed by the formant transitions 150-350 msec later), but also integration across different frequency bands and compensation for contextual factors such as coarticulation. Experiments employed eye movements in the visual world paradigm and showed clear evidence for a memory buffer. Results were replicated across five experiments, ruling out methodological factors and tying the release of the buffer to the onset of the vowel. These findings support a general auditory account of speech by suggesting that the acoustic nature of particular speech sounds may have large effects on how they are processed. They also have major implications for theories of auditory and speech perception by raising the possibility of an encapsulated memory buffer in early auditory processing.
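
To make the contrast between the two accounts concrete, the following is a minimal, hypothetical sketch in Python. It is not the authors' model: the cue sequence, the word "templates", the scoring rule, and the release point are invented purely for illustration. Under immediate integration, lexical candidates gain partial activation as each cue arrives; under the memory buffer account, cues are held back and the lexicon is consulted only once the buffer is released, here tied to a stand-in for vowel onset.

```python
# Hypothetical toy sketch (not the authors' model) contrasting the two accounts
# of cue integration described above. Cues, templates, and scoring are invented.

def cue_match(cue, expected):
    """Crude evidence score: 1 if the incoming cue matches the expected one."""
    return 1.0 if cue == expected else 0.0

def immediate_integration(cues, templates):
    """Partially activate lexical candidates as each cue arrives."""
    activations = {word: 0.0 for word in templates}
    trajectory = []
    for t, cue in enumerate(cues):
        for word, expected in templates.items():
            activations[word] += cue_match(cue, expected[t])
        trajectory.append(dict(activations))  # lexical access at every step
    return trajectory

def memory_buffer(cues, templates, release_index):
    """Buffer the cues and access the lexicon only at the release point
    (here, an index standing in for vowel onset)."""
    activations = {word: 0.0 for word in templates}
    buffer, trajectory = [], []
    for t, cue in enumerate(cues):
        buffer.append(cue)
        if t >= release_index:  # buffer released: integrate everything at once
            for word, expected in templates.items():
                activations[word] = sum(cue_match(c, expected[i])
                                        for i, c in enumerate(buffer))
        trajectory.append(dict(activations))  # flat before release, then jumps
    return trajectory

# Toy input: frication information arrives before the formant transitions.
cues = ["frication_s", "formant_transition", "vowel"]
templates = {"sip":  ["frication_s",  "formant_transition", "vowel"],
             "ship": ["frication_sh", "formant_transition", "vowel"]}

print(immediate_integration(cues, templates))
print(memory_buffer(cues, templates, release_index=2))
```

The printed trajectories show the qualitative difference the paradigm exploits: candidate activations grow cue by cue under immediate integration, but remain flat until the release point under the buffer account.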


Author(s): Sachin., Sanjay Munjal, Adarsh Kohli, Naresh Panda, Shantanu Arya

Background: Learning disabilities are characterized by significant impairments in the acquisition of reading, spelling, or arithmetic skills. A growing number of studies have used speech sounds to assess auditory processing of linguistic elements in children with learning disability. The present study examines whether speech-evoked auditory brainstem responses (ABR) can be used as a biological marker of deficient sound encoding in children with learning disability, and aims to establish the relationship between click-evoked and speech-evoked ABR in this population.

Methods: Pure tone audiometry, immittance audiometry, and click- and speech-evoked brainstem responses were obtained in 25 children diagnosed with learning disability, and the data were compared with responses in a control group.

Results: Statistically significant differences between the control and study groups were found in speech recognition threshold, speech discrimination scores, and the latencies and amplitudes of speech-evoked auditory brainstem responses. This poor representation of significant components of speech sounds in children with learning disability could be due to distorted synaptic efficacy and poor synaptic transmission, or to the activation of fewer auditory nerve fibres in the auditory brainstem in response to the speech stimulus.

Conclusions: Speech-evoked auditory brainstem responses can serve as an efficient tool for identifying underlying auditory processing difficulties in children with learning disability and can help in early intervention.


2011 · Vol 23 (12) · pp. 3874-3887
Author(s): Julie Chobert, Céline Marie, Clément François, Daniele Schön, Mireille Besson

The aim of this study was to examine the influence of musical expertise in 9-year-old children on passive (as reflected by the mismatch negativity, MMN) and active (as reflected by discrimination accuracy) processing of speech sounds. Musician and nonmusician children were presented with a sequence of syllables that included standards and deviants in vowel frequency, vowel duration, and voice onset time (VOT). Both the passive and the active processing of duration and VOT deviants were enhanced in musician compared with nonmusician children. Moreover, although no effect was found on the passive processing of frequency, active frequency discrimination was enhanced in musician children. These findings are discussed in terms of common processing of acoustic features in music and speech, and of positive transfer of training from music to the more abstract phonological representations of speech units (syllables).

