speech stimulus
Recently Published Documents

TOTAL DOCUMENTS: 38 (five years: 15)
H-INDEX: 5 (five years: 2)

2021, Vol 263 (2), pp. 4426–4434
Author(s): Masayuki Takada, Kanji Goto

In Japan, vehicle horns are used as a means of communication between drivers, but they frequently arouse negative psychological reactions in hearers. If horn sounds carry acoustic features of speech, they may aid communication between drivers and improve hearers' negative impressions. To investigate this hypothesis, psychoacoustical experiments were conducted using horn sounds synthesized with acoustic characteristics of the Japanese word "abunai", which signals a dangerous situation. Spectral features and temporal envelopes were extracted from the speech stimulus and from a similar stimulus with swapped syllables, and were imposed on horn sounds. Two experiments examined the effects of the acoustic characteristics of horn sounds on perceived quality and on interpretations of the intention behind another driver's horn use. Stimuli with the spectral characteristics of the speech, and of the swapped-syllable version, were evaluated as less unpleasant and safer than the original horn sound. In addition, many responses of 'caution' and 'danger' were obtained for the stimulus with the spectral characteristics of the speech. The results suggest that a horn sound with the spectral characteristics of speech improves on the original horn sound in perceived quality and correctly communicates the intention behind another driver's horn use.
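Imposing a speech temporal envelope on a horn-like carrier, one of the manipulations the abstract describes, can be sketched as follows. The abstract does not specify the synthesis method, so the Hilbert-envelope approach, the smoothing cutoff, and all signal parameters below are assumptions for illustration only:

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def impose_speech_envelope(speech, carrier, fs, cutoff_hz=30.0):
    """Impose the temporal envelope of a speech signal on a horn-like carrier.
    A rough sketch of envelope-based horn synthesis, not the paper's method."""
    # Amplitude envelope via the analytic signal, then low-pass smoothed.
    env = np.abs(hilbert(speech))
    b, a = butter(2, cutoff_hz / (fs / 2), btype="low")
    env = np.clip(filtfilt(b, a, env), 0.0, None)
    # Normalise and apply to the carrier.
    env /= env.max() + 1e-12
    return carrier[: len(env)] * env

# Toy demonstration: syllable-like amplitude bursts on a 420 Hz tone.
fs = 16000
t = np.arange(int(0.4 * fs)) / fs
bursts = np.sin(2 * np.pi * 5 * t) ** 2              # stand-in "speech" envelope
speech = bursts * np.sin(2 * np.pi * 200 * t)        # stand-in speech signal
carrier = np.sin(2 * np.pi * 420 * t)                # horn-like carrier
out = impose_speech_envelope(speech, carrier, fs)
```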


Author(s): Fabiana Aparecida Lemos, Aryelly Dayane da Silva Nunes, Carolina Karla de Souza Evangelista, Carles Escera, Karinna Veríssimo Meira Taveira, ...

Purpose The purpose of this study is to characterize the parameters used for frequency-following response (FFR) acquisition in children up to 24 months of age through a systematic review. Method The study was registered in PROSPERO and followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses recommendations. Searches were performed in six databases (LILACS, LIVIVO, PsycINFO, PubMed, Scopus, and Web of Science) and in gray literature (Google Scholar, OpenGrey, ProQuest), as well as via manual searches of bibliographic references. Observational studies using speech stimuli to elicit the FFR in infants with normal hearing in the age range from 0 to 24 months were included. No restrictions regarding language or year of publication were applied. Risk of bias was assessed with the Joanna Briggs Institute Critical Appraisal Checklist. Data on stimulus, presentation rate, time window for analysis, number of sweeps, artifact rejection, online filters, stimulated ear, and examination condition were extracted. Results Four hundred fifty-nine studies were identified. After removal of duplicates and reading of titles and abstracts, 15 articles were included. Seven studies were classified as having low risk of bias, seven as moderate risk, and one as high risk. Conclusions There is consensus on some FFR acquisition parameters with a speech stimulus, such as the vertical montage, the use of alternating polarity, a sampling rate of 20,000 Hz, and the 40-ms synthesized /da/ syllable as the preferred stimulus. Although these parameters show some consensus, the results disclosed the lack of a single established protocol for FFR acquisition with a speech stimulus in infants in the investigated age range.
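The consensus parameters reported above, and the rationale for averaging alternating-polarity responses, can be sketched as follows. The parameter values come from the abstract; the field names and toy signals are illustrative assumptions:

```python
import numpy as np

# Consensus acquisition parameters reported in the review
# (values from the abstract; dictionary field names are illustrative).
ffr_params = {
    "montage": "vertical",
    "stimulus": "/da/ (synthesized, 40 ms)",
    "polarity": "alternating",
    "sampling_rate_hz": 20000,
}

def average_alternating(resp_pos, resp_neg):
    """Average responses to opposite stimulus polarities. Adding the two
    emphasises the envelope-following component and cancels components
    that invert with polarity (e.g. stimulus artifact)."""
    return 0.5 * (np.asarray(resp_pos) + np.asarray(resp_neg))

# Toy check: a polarity-inverting artifact cancels;
# a polarity-invariant envelope response survives.
t = np.linspace(0.0, 0.04, 800, endpoint=False)    # 40 ms epoch at 20 kHz
envelope = np.sin(2 * np.pi * 100 * t) ** 2        # polarity-invariant part
artifact = np.sin(2 * np.pi * 700 * t)             # polarity-inverting part
avg = average_alternating(envelope + artifact, envelope - artifact)
```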


Author(s): I. Speck, T. Müller, T. F. Jakob, K. Wiebe, A. Aschendorff, ...

Abstract Background Previous research demonstrated benefits of adaptive digital microphone technologies (ADMTs) in adults with single-sided deafness (SSD) who have a cochlear implant (CI). Children with SSD are especially affected by background noise because of their noise exposure in kindergarten and school. Purpose This article aims to evaluate possible effects of ADMT on speech recognition in background noise in children with SSD who use a CI. Study Sample Ten children between 5 and 11 years of age were included. Data Collection and Analysis Speech recognition in noise was assessed for one frontal distant speaker and two lateral speakers. The speech stimulus was presented at a level of 65 dB(A) and the noise at a level of 55 dB(A). For the presentation condition with one frontal speaker, four listening conditions were assessed: (1) normal-hearing (NH) ear and CI turned off; (2) NH ear and CI; (3) NH ear and CI with ADMT; and (4) NH ear with ADMT and CI. Listening conditions (2) to (4) were also tested for each lateral speaker. The frontal speaker was positioned directly in front of the participant, whereas the lateral speakers were positioned at angles of 90 degrees and –90 degrees to the participant's head. Results Children with SSD who use a CI benefit significantly from the application of ADMT for speech recognition in noise, for both the frontal distant speaker and the lateral speakers. Speech recognition improved significantly with ADMT at both the CI and the NH ear. Conclusion Application of ADMT significantly improves speech recognition in noise in children with SSD who use a CI and can therefore be highly recommended. The decision of whether to apply ADMT at the CI, at the NH ear, or bilaterally should be made for each child individually.
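The presentation levels above imply a fixed +10 dB signal-to-noise ratio. A minimal sketch of that level arithmetic (the function names are illustrative, not from the study):

```python
def snr_db(speech_level_db: float, noise_level_db: float) -> float:
    """SNR implied by the two presentation levels (both in dB(A))."""
    return speech_level_db - noise_level_db

def amplitude_ratio(level_diff_db: float) -> float:
    """Linear amplitude ratio corresponding to a level difference in dB."""
    return 10 ** (level_diff_db / 20.0)

snr = snr_db(65, 55)          # speech at 65 dB(A), noise at 55 dB(A) -> +10 dB
ratio = amplitude_ratio(snr)  # roughly 3.16x in amplitude
```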


Author(s): A. K. Neupane, S. K. Sinha, K. Gururaj

Abstract Objective Binaural hearing is facilitated by neural interactions in the auditory pathway. Ageing results in impaired localisation and impaired listening in noisy situations, even without any significant hearing loss. The present study compared binaural encoding of a speech stimulus at the subcortical level in middle-aged versus younger adults, based on speech-evoked auditory brainstem responses. Methods Thirty participants (15 young adults and 15 middle-aged adults) with normal hearing sensitivity (thresholds below 15 dB HL) participated in the study. The speech-evoked auditory brainstem response was recorded monaurally and binaurally with a 40-ms /da/ stimulus. Fast Fourier transform analysis was utilised. Results An independent-samples t-test revealed a significant difference between the two groups in the fundamental frequency (F0) amplitude recorded with binaural stimulation. Conclusion The present study suggests that ageing results in degradation of F0 encoding, which is essential for the perception of speech in noise.
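The fast-Fourier-transform step, reading off the amplitude at the fundamental frequency of the response, can be sketched as follows. The function name, search bandwidth, and synthetic response are assumptions, not the study's exact pipeline:

```python
import numpy as np

def f0_amplitude(response, fs, f0_hz, halfwidth_hz=5.0):
    """Estimate the spectral amplitude near the fundamental frequency (F0)
    of a speech-evoked brainstem response via FFT (a minimal sketch)."""
    n = len(response)
    # Single-sided amplitude spectrum of the epoch.
    spectrum = np.abs(np.fft.rfft(response)) / n * 2.0
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    band = (freqs >= f0_hz - halfwidth_hz) & (freqs <= f0_hz + halfwidth_hz)
    return spectrum[band].max()

# Toy check with a synthetic response containing a 100 Hz fundamental.
fs = 20000
t = np.arange(int(0.04 * fs)) / fs                  # 40 ms epoch
resp = 0.8 * np.sin(2 * np.pi * 100 * t) + 0.1 * np.sin(2 * np.pi * 300 * t)
amp = f0_amplitude(resp, fs, f0_hz=100.0)
```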


Author(s): Kevin D. Prinsloo, Edmund C. Lalor

Abstract
In recent years, research on natural speech processing has benefited from recognizing that low-frequency cortical activity tracks the amplitude envelope of natural speech. However, it remains unclear to what extent this tracking reflects speech-specific processing beyond analysis of the stimulus acoustics. In the present study, we aimed to disentangle contributions to cortical envelope tracking that reflect general acoustic processing from those that are functionally related to processing speech. To do so, we recorded EEG from subjects as they listened to "auditory chimeras": stimuli composed of the temporal fine structure (TFS) of one speech stimulus modulated by the amplitude envelope (ENV) of another speech stimulus. By varying the number of frequency bands used in making the chimeras, we obtained some control over which speech stimulus was recognized by the listener. No matter which stimulus was recognized, envelope tracking was always strongest for the ENV stimulus, indicating a dominant contribution from acoustic processing. However, there was also a positive relationship between intelligibility and tracking of the perceived speech, indicating a contribution from speech-specific processing. These findings were supported by a follow-up analysis that assessed envelope tracking as a function of the (estimated) output of the cochlea rather than the original stimuli used in creating the chimeras. Finally, we sought to isolate the speech-specific contribution to envelope tracking using forward encoding models and found that indices of phonetic feature processing tracked reliably with intelligibility. Together, these results show that cortical speech tracking is dominated by acoustic processing but also reflects speech-specific processing.

Significance Statement
Activity in auditory cortex is known to dynamically track the energy fluctuations, or amplitude envelope, of speech. Measures of this tracking are now widely used in research on hearing and language and have had a substantial influence on theories of how auditory cortex parses and processes speech. But how much of this speech tracking is actually driven by speech-specific processing, rather than general acoustic processing, is unclear, which limits its interpretability and usefulness. Here, by merging two speech stimuli to form so-called auditory chimeras, we show that EEG tracking of the speech envelope is dominated by acoustic processing but also reflects linguistic analysis. This has important implications for theories of cortical speech tracking and for using measures of that tracking in applied research.

This work was supported by a Career Development Award from Science Foundation Ireland (CDA/15/3316) and a grant from the National Institute on Deafness and Other Communication Disorders (DC016297). The authors thank Dr. Aaron Nidiffer, Dr. Aisling O’Sullivan, Thomas Stoll, and Lauren Szymula for assistance with data collection, and Dr. Nathaniel Zuk, Dr. Aaron Nidiffer, and Dr. Aisling O’Sullivan for helpful comments on this manuscript.
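The chimera construction, taking the ENV of one stimulus in each frequency band and imposing it on the TFS of the other, can be sketched per band as follows. The band edges, filter design, and toy signals are illustrative assumptions, not the parameters used in the study:

```python
import numpy as np
from scipy.signal import hilbert, butter, sosfiltfilt

def auditory_chimera(env_source, tfs_source, fs, band_edges_hz):
    """Build an auditory chimera: in each frequency band, combine the
    amplitude envelope (ENV) of one signal with the unit-amplitude temporal
    fine structure (TFS) of the other, then sum across bands (sketch only)."""
    out = np.zeros(len(env_source))
    for lo, hi in band_edges_hz:
        sos = butter(4, [lo / (fs / 2), hi / (fs / 2)],
                     btype="band", output="sos")
        # ENV: magnitude of the analytic signal of the band-filtered source.
        env = np.abs(hilbert(sosfiltfilt(sos, env_source)))
        # TFS: cosine of the instantaneous phase of the other source.
        tfs = np.cos(np.angle(hilbert(sosfiltfilt(sos, tfs_source))))
        out += env * tfs
    return out

# Toy demonstration with two amplitude-modulated tones standing in for speech.
fs = 8000
t = np.arange(int(0.5 * fs)) / fs
a = np.sin(2 * np.pi * 300 * t) * (1 + 0.8 * np.sin(2 * np.pi * 4 * t))
b = np.sin(2 * np.pi * 500 * t) * (1 + 0.8 * np.sin(2 * np.pi * 7 * t))
# Single-band chimera: envelope of `a` carried on the fine structure of `b`.
chimera = auditory_chimera(a, b, fs, band_edges_hz=[(100, 1000)])
```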


2020
Author(s): Laura Gwilliams, Pascal Wallisch

Speech perception relies on the rapid resolution of uncertainty. Here we explore whether auditory experiences contribute to this process of ambiguity resolution. Approximately 8,000 participants were surveyed online for (i) their subjective percept of a speech stimulus with ambiguous formant allocation and (ii) their demographic profile and auditory experiences. Both linguistic and non-linguistic auditory experiences significantly predicted speech perception. Listeners were more likely to perceive the ambiguous stimulus in accordance with their own name, and were biased towards a lower formant allocation as a function of exposure to lower auditory frequencies in their environment. Overall, our results show that the subjective interpretation of an ambiguous stimulus in the auditory domain is shaped by prior acoustic exposure, suggesting an exposure-dependent mechanism that tunes sensitivity and resolves ambiguity in speech perception.


Author(s): Mohammad Jalilpour Monesi, Bernd Accou, Jair Montoya-Martinez, Tom Francart, Hugo Van Hamme

2019, Vol 8 (4)
Author(s): Farida B. Sitdikova, Guzel R. Eremeeva, Ekaterina V. Martynova

The article considers implicatures of utterances. An implicature is an information complex that is not expressed literally (verbally) and that can be elicited by extracting meaning with the help of the recipient's background knowledge, the context, and the situation. Forming and understanding the meaning of an utterance is a process of extracting implied meaning, which arises from the interaction of linguistic units with constituents of the cognitive environment. An utterance can therefore be considered a speech stimulus that draws on knowledge from the cognitive environment to form the meaning of the utterance. Extracting the meaning (that is, eliciting an implicature) is important for communication. The purpose of the research was to study various aspects of implicatures: how the meaning is extracted, the description of different types, and statistical data on the usage of different types of implicatures. In particular, our research demonstrates the statistical prevalence of contextual implicatures. The results may be of interest to experts in linguopragmatics and psycholinguistics.

