Separating the effects of acoustic and phonetic factors in linguistic processing with impoverished signals by adults and children

2012 · Vol 35 (2) · pp. 333-370
Author(s): Susan Nittrouer, Joanna H. Lowenstein

Abstract: Cochlear implants allow many individuals with profound hearing loss to understand spoken language, even though the impoverished signals provided by these devices poorly preserve the acoustic attributes long believed to support recovery of phonetic structure. Consequently, questions may be raised about whether traditional psycholinguistic theories rely too heavily on phonetic segments to explain linguistic processing while ignoring potential roles of other forms of acoustic structure. This study tested that possibility. Adults and children (8 years old) performed two tasks: one involving explicit segmentation (phonemic awareness), and one involving a linguistic task thought to operate more efficiently with well-defined phonetic segments (short-term memory). Stimuli were unprocessed (UP) signals, amplitude envelopes (AE) analogous to implant signals, and unprocessed signals in noise (NOI) that provided a degraded signal for comparison. Adults' results for short-term recall were similar for UP and NOI, but worse for AE stimuli; the phonemic awareness task revealed the opposite pattern across AE and NOI. Children's results for short-term recall showed similar decrements in performance for AE and NOI compared to UP, even though only NOI stimuli showed diminished results for segmentation. The conclusion was that traditional accounts may be too focused on phonetic segments, a possibility that implant designers and clinicians need to consider.

2013 · Vol 34 (2) · pp. 179-192
Author(s): Michael S. Harris, William G. Kronenberger, Sujuan Gao, Helena M. Hoen, Richard T. Miyamoto, ...

Author(s): Daniel R. Romano, William G. Kronenberger, Shirley C. Henning, Caitlin J. Montgomery, Allison M. Ditmars, ...

Purpose: Verbal working memory (VWM) delays are commonly found in prelingually deaf youth with cochlear implants (CIs), albeit with considerable interindividual variability. However, little is known about the neurocognitive information-processing mechanisms underlying these delays and how these mechanisms relate to spoken language outcomes. The goal of this study was to use error analysis of the letter–number sequencing (LNS) task to test the hypothesis that VWM delays in CI users are due, in part, to fragile, underspecified phonological representations in short-term memory. Method: Fifty-one CI users aged 7–22 years and 53 normal hearing (NH) peers completed a battery of speech, language, and neurocognitive tests. LNS raw scores and error profiles were compared between samples, and a hierarchical regression model was used to test for associations with measures of speech, language, and hearing. Results: Youth with CIs scored lower on the LNS test than NH peers and committed a significantly higher number of errors involving phonological confusions (recalling an incorrect letter/digit in place of a phonologically similar one). More phonological errors were associated with poorer performance on measures of nonword repetition and following spoken directions but not with hearing quality. Conclusions: Study findings support the hypothesis that poorer VWM in deaf children with CIs is due, in part, to fragile, underspecified phonological representations in short-term/working memory, which underlie spoken language delays. Programs aimed at strengthening phonological representations may improve VWM and spoken language outcomes in CI users.


2019 · Vol 34 (6) · pp. 913-913
Author(s): M Davis, J Moses, J Rivera, A Guerra, K Hakinson

Abstract: Objective: To examine whether performance on spoken language assessment measures is associated with performance at different phases of verbal learning and recall tasks. Method: The assessment records of 222 American Veterans with diverse neuropsychiatric conditions were analyzed using exploratory factor analyses; there were no exclusion criteria. All participants completed the Visual Naming (VisNam), Sentence Repetition (SenRep), Controlled Word Association (COWA), and Token Tests of the Multilingual Aphasia Examination (MAE), and the Benton Serial Digit Learning Test – 8 Digits (SDL8). Individual assessment instruments were factored using Principal Component Analysis (PCA). A three-factor solution of the SDL8 was co-factored with the spoken language components of the MAE to identify common sources of variance. Results: A three-factor solution of the SDL8 separated trials into three overlapping factors consisting of early (SDL8_Early), middle (SDL8_Middle), and late (SDL8_Late) trials. Co-factoring the three new scales with the verbal components of the MAE produced a five-factor model explaining 84.563% of the shared variance: 1) SDL8_Early loaded with SenRep, 2) SDL8_Middle loaded with SenRep, 3) SDL8_Late loaded with Token, 4) SDL8_Late loaded with COWA, and 5) VisNam alone formed the fifth factor. Conclusions: The results suggest that rote repetition is largely associated with early trials and slightly associated with middle trials, while late trials are largely associated with auditory comprehension and slightly associated with verbal fluency. This may indicate a shift in the use of spoken language abilities to accommodate increasing complexity in verbal short-term memory tasks, and thus a change in learning strategy to optimize performance.
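The factor solutions described above rest on principal component extraction from a correlation matrix. The following is a minimal, illustrative numpy sketch of that technique on synthetic data; the matrix dimensions mirror the study's sample (222 records, an assumed 8 subtest scores), but the data and the three-component choice are placeholders, not the study's results.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-in for a scores matrix: 222 participants x 8 subtest scores.
scores = rng.standard_normal((222, 8))

# PCA via eigendecomposition of the correlation matrix,
# the usual basis for factor solutions like those above.
z = (scores - scores.mean(axis=0)) / scores.std(axis=0)   # standardize
corr = np.corrcoef(z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)                   # ascending order
order = np.argsort(eigvals)[::-1]                         # sort descending
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Retain a k-component solution and report its explained variance.
k = 3
explained = eigvals[:k].sum() / eigvals.sum()
loadings = eigvecs[:, :k] * np.sqrt(eigvals[:k])          # component loadings
print(loadings.shape)                                     # (8, 3)
```

Variables would then be assigned to the component on which they load most heavily, which is how scales such as SDL8_Early, SDL8_Middle, and SDL8_Late are formed from trial-level scores.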


Electronics · 2019 · Vol 8 (6) · pp. 681
Author(s): Praveen Edward James, Hou Kit Mun, Chockalingam Aravind Vaithilingam

The purpose of this work is to develop a spoken language processing system for smart device troubleshooting using human-machine interaction. The system combines a software Bidirectional Long Short-Term Memory (BLSTM)-based speech recognizer and a hardware LSTM-based language processor for Natural Language Processing (NLP), connected over a serial RS232 interface. Mel Frequency Cepstral Coefficient (MFCC) feature vectors extracted from the speech signal are input directly into the BLSTM network; a dropout layer is added to the BLSTM layer to reduce over-fitting and improve robustness. The speech recognition component combines an acoustic modeler, a pronunciation dictionary, and the BLSTM network to generate query text, and executes in real time with an 81.5% Word Error Rate (WER) and an average training time of 45 s. The language processor comprises a vectorizer, a lookup dictionary, a key encoder, a Long Short-Term Memory (LSTM)-based training and prediction network, and a dialogue manager; it transforms query intent to generate response text with a processing time of 0.59 s, 5% hardware utilization, and an F1 score of 95.2%. The proposed system shows a 4.17% decrease in accuracy compared with existing systems, which use parallel processing and high-speed cache memories to perform additional training. However, the language processor achieves a 36.7% decrease in processing time and a 50% decrease in hardware utilization, making it suitable for troubleshooting smart devices.
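The LSTM cell underlying both the BLSTM recognizer and the hardware language processor can be sketched as follows. This is a minimal, illustrative numpy forward pass over a short sequence of MFCC-like frames with randomly initialized placeholder weights; it is not the authors' implementation, and the 13-dimensional input is only an assumed, typical MFCC frame size.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One forward step of a standard LSTM cell.

    x: input vector (e.g., one MFCC feature frame), shape (n_in,)
    h_prev, c_prev: previous hidden and cell states, shape (n_hid,)
    W, U, b: stacked parameters for the input, forget, output,
             and candidate gates (4*n_hid rows).
    """
    n_hid = h_prev.shape[0]
    z = W @ x + U @ h_prev + b          # pre-activations for all four gates
    i = sigmoid(z[0*n_hid:1*n_hid])     # input gate
    f = sigmoid(z[1*n_hid:2*n_hid])     # forget gate
    o = sigmoid(z[2*n_hid:3*n_hid])     # output gate
    g = np.tanh(z[3*n_hid:4*n_hid])     # candidate cell update
    c = f * c_prev + i * g              # new cell state (the "memory")
    h = o * np.tanh(c)                  # new hidden state
    return h, c

rng = np.random.default_rng(0)
n_in, n_hid, n_frames = 13, 8, 5        # 13 = assumed MFCC dimension
W = rng.standard_normal((4 * n_hid, n_in)) * 0.1
U = rng.standard_normal((4 * n_hid, n_hid)) * 0.1
b = np.zeros(4 * n_hid)

frames = rng.standard_normal((n_frames, n_in))
h = c = np.zeros(n_hid)
for x in frames:                        # left-to-right pass over the frames
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape)                          # (8,)
```

A bidirectional (BLSTM) layer simply runs a second LSTM over the same frames right-to-left and concatenates the two hidden states at each frame, which is what lets the recognizer use both past and future acoustic context.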


1973 · Vol 25 (1) · pp. 22-40
Author(s): Donald G. Mackay

This paper proposed a two-stage model to capture some basic relations between attention, comprehension and memory for sentences. According to the model, the first stage of linguistic processing is carried out in short-term memory (M1) and involves a superficial analysis of semantic and syntactic features of words. The second stage is carried out in long-term memory (M2) and involves application of transformational rules to the analyses of M1 so as to determine the deep or underlying relations among words and phrases. According to the theory, attention is an M2 process: preliminary analyses by M1 are carried out even for unattended inputs, but final analyses by M2 are only carried out for attended inputs. The theory was shown to be consistent with established facts concerning memory, attention and comprehension, and additional support for the theory was obtained in a series of dichotic listening experiments.


1984 · Vol 13 (2) · pp. 205-234
Author(s): Koenraad Kuiper, Douglas Haggo

Abstract: A description of the verbal and nonverbal characteristics of the language of stock auctioneers, and a comparison with oral poetry, shows that these auctioneers use an oral formulaic technique. It is suggested that this technique is a response to performance constraints that place a heavy load on short-term memory. This hypothesis accounts for features of stock auction speech that are not recognized as characteristically oral formulaic as well as those that are. It also sheds light on two problems that have exercised students of oral literature: the effect of literacy and the role of memorization. These findings support the view that the difference between traditional oral formulaic and ordinary spoken language is one of degree, not kind. (Oral literature, register, stylistics, situational constraints, psychological constraints, formulae)

