Relationship of Grammatical Context on Children’s Recognition of s/z-Inflected Words

2017 · Vol 28 (09) · pp. 799-809
Author(s): Meredith Spratford, Hannah Hodson McLean, Ryan McCreery

Abstract: Access to aided high-frequency speech information is currently assessed behaviorally using recognition of plural monosyllabic words. Because of semantic and grammatical cues that support word+morpheme recognition in sentence materials, the contribution of high-frequency audibility to sentence recognition is less than that for isolated words. However, young children may not yet have the linguistic competence to take advantage of these cues. A low-predictability sentence recognition task that controls for language ability could be used to assess the impact of high-frequency audibility in a context that more closely represents how children learn language. To determine if differences exist in recognition of s/z-inflected monosyllabic words for children with normal hearing (CNH) and children who are hard of hearing (CHH) across stimuli context (presented in isolation versus embedded medially within a sentence that has low semantic and syntactic predictability) and varying levels of high-frequency audibility (4- and 8-kHz low-pass filtered for CNH and 8-kHz low-pass filtered for CHH). A prospective, cross-sectional design was used to analyze word+morpheme recognition in noise for stimuli varying in grammatical context and high-frequency audibility. Low-predictability sentence stimuli were created so that the target word+morpheme could not be predicted by semantic or syntactic cues. Electroacoustic measures of aided access to high-frequency speech sounds were used to predict individual differences in recognition for CHH. Thirty-five children, aged 5–12 yrs, were recruited to participate in the study; 24 CNH and 11 CHH (bilateral mild to severe hearing loss) who wore hearing aids (HAs). All children were native speakers of English. Monosyllabic word+morpheme recognition was measured in isolated and sentence-embedded conditions at a +10 dB signal-to-noise ratio using steady state, speech-shaped noise. Real-ear probe microphone measures of HAs were obtained for CHH. To assess the effects of high-frequency audibility on word+morpheme recognition for CNH, a repeated-measures ANOVA was used with bandwidth (8 kHz, 4 kHz) and context (isolated, sentence embedded) as within-subjects factors. To compare recognition between CNH and CHH, a mixed-model ANOVA was completed with context (isolated, sentence-embedded) as a within-subjects factor and hearing status as a between-subjects factor. Bivariate correlations between word+morpheme recognition scores and electroacoustic measures of high-frequency audibility were used to assess which measures might be sensitive to differences in perception for CHH. When high-frequency audibility was maximized, CNH and CHH had better word+morpheme recognition in the isolated condition compared with sentence-embedded. When high-frequency audibility was limited, CNH had better word+morpheme recognition in the sentence-embedded condition compared with the isolated condition. CHH whose HAs had greater high-frequency speech bandwidth, as measured by the maximum audible frequency, had better word+morpheme recognition in sentences. High-frequency audibility supports word+morpheme recognition within low-predictability sentences for both CNH and CHH. Maximum audible frequency can be used to estimate word+morpheme recognition for CHH. Low-predictability sentences that do not contain semantic or grammatical context may be of clinical use in estimating children’s use of high-frequency audibility in a manner that approximates how they learn language.
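As a rough illustration of the bandwidth manipulation described above, the sketch below low-pass filters a waveform at 4 kHz and 8 kHz cutoffs with SciPy. The sampling rate, filter type, and filter order are assumptions made for the example, not the study's documented processing.

```python
# Sketch: low-pass filtering a speech waveform at 4 kHz and 8 kHz cutoffs.
# The sampling rate, filter order, and Butterworth design are illustrative
# assumptions; the study's exact filtering parameters are not given here.
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 44100                                  # assumed sampling rate (Hz)
rng = np.random.default_rng(0)
speech = rng.standard_normal(fs * 2)        # placeholder 2-s "speech" signal

def lowpass(x, cutoff_hz, fs, order=8):
    """Zero-phase Butterworth low-pass filter."""
    sos = butter(order, cutoff_hz, btype="low", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

speech_4k = lowpass(speech, 4000, fs)       # restricted-bandwidth condition
speech_8k = lowpass(speech, 8000, fs)       # extended-bandwidth condition
```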

2013 · Vol 24 (10) · pp. 980-991
Author(s): Kristi Oeding, Michael Valente

Background: In the past, bilateral contralateral routing of signals (BICROS) amplification incorporated omnidirectional microphones on the transmitter and receiver sides and some models utilized noise reduction (NR) on the receiver side. Little research has examined the performance of BICROS amplification in background noise. However, previous studies examining contralateral routing of signals (CROS) amplification have reported that the presence of background noise on the transmitter side negatively affected speech recognition. Recently, NR was introduced as a feature on the receiver and transmitter sides of BICROS amplification, which has the potential to decrease the impact of noise on the wanted speech signal by decreasing unwanted noise directed to the transmitter side. Purpose: The primary goal of this study was to examine differences in the reception threshold for sentences (RTS in dB) using the Hearing in Noise Test (HINT) in a diffuse listening environment between unaided and three aided BICROS conditions (no NR, mild NR, and maximum NR) in the Tandem 16 BICROS. A secondary goal was to examine real-world subjective impressions of the Tandem 16 BICROS compared to unaided. Research Design: A randomized block repeated measures single blind design was used to assess differences between no NR, mild NR, and maximum NR listening conditions. Study Sample: Twenty-one adult participants with asymmetric sensorineural hearing loss (ASNHL) and experience with BICROS amplification were recruited from Washington University in St. Louis School of Medicine. Data Collection and Analysis: Participants were fit with the National Acoustic Laboratories’ Nonlinear version 1 prescriptive target (NAL-NL1) with the Tandem 16 BICROS at the initial visit and then verified using real-ear insertion gain (REIG) measures. Participants acclimatized to the Tandem 16 BICROS for 4 wk before returning for final testing. Participants were tested utilizing HINT sentences examining differences in RTS between unaided and three aided listening conditions. Subjective benefit was determined via the Abbreviated Profile of Hearing Aid Benefit (APHAB) questionnaire between the Tandem 16 BICROS and unaided. A repeated measures analysis of variance (ANOVA) was utilized to analyze the results of the HINT and APHAB. Results: Results revealed no significant differences in the RTS between unaided, no NR, mild NR, and maximum NR. Subjective impressions using the APHAB revealed statistically and clinically significant benefit with the Tandem 16 BICROS compared to unaided for the Ease of Communication (EC), Background Noise (BN), and Reverberation (RV) subscales. Conclusions: The RTS was not significantly different between unaided, no NR, mild NR, and maximum NR. None of the three aided listening conditions were significantly different from unaided performance as has been reported for previous studies examining CROS hearing aids. Further, based on comments from participants and previous research studies with conventional hearing aids, manufacturers of BICROS amplification should consider incorporating directional microphones and independent volume controls on the receiver and transmitter sides to potentially provide further improvement in signal-to-noise ratio (SNR) for patients with ASNHL.
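For readers unfamiliar with the RTS, it is the SNR at which a listener repeats 50% of sentences correctly, estimated adaptively. The sketch below is a deliberately simplified one-up/one-down staircase with a fixed step size and fabricated responses; the actual HINT procedure uses its own step sizes and scoring rules.

```python
# Simplified adaptive track for estimating a reception threshold for
# sentences (RTS, in dB SNR). This is NOT the exact HINT procedure; it only
# illustrates converging on the 50%-correct SNR and averaging later levels.
import numpy as np

def estimate_rts(sentence_correct, start_snr=0.0, step=2.0):
    """sentence_correct: sequence of booleans, one per sentence presentation."""
    snr = start_snr
    track = []
    for correct in sentence_correct:
        track.append(snr)
        snr += -step if correct else step   # harder after a hit, easier after a miss
    # Conventionally the RTS is taken as the mean SNR over the later trials.
    return float(np.mean(track[4:]))

# Hypothetical run of 20 sentences:
responses = [True, True, False, True, False, True, True, False, True, True,
             False, True, False, True, True, False, True, False, True, True]
print(f"Estimated RTS: {estimate_rts(responses):.1f} dB SNR")
```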


2014 · Vol 25 (10) · pp. 1022-1033
Author(s): Andrew John, Jace Wolfe, Susan Scollie, Erin Schafer, Mary Hudson, ...

Background: Previous research has suggested that use of nonlinear frequency compression (NLFC) can improve audibility for high-frequency sounds and speech recognition of children with moderate to profound high-frequency hearing loss. Furthermore, previous studies have generally found no detriment associated with the use of NLFC. However, there have been no published studies examining the effect of NLFC on the performance of children with cookie-bite audiometric configurations. For this configuration of hearing loss, frequency-lowering processing will likely move high-frequency sounds to a lower frequency range at which a greater degree of hearing loss exists. Purpose: The purpose of this study was to evaluate and compare the effects of wideband amplification and NLFC on high-frequency audibility and speech recognition of children with cookie-bite audiometric configurations. Research Design: This study consisted of a within-participant design with repeated measures across test conditions. Study Sample: Seven children, ages 6–13 yr, with cookie-bite audiometric configurations and normal hearing or mild hearing loss at 6000 and 8000 Hz, were recruited. Intervention: Participants were fitted with Phonak Nios S H2O III behind-the-ear hearing aids and Oticon Safari 300 behind-the-ear hearing aids. Data Collection: The participants were evaluated after three 4- to 6-wk intervals: (1) Phonak Nios S H2O III without NLFC, (2) Phonak Nios S H2O III with NLFC, and (3) Oticon Safari 300 with wideband frequency response extending to 8000 Hz. The order in which each technology was used was counterbalanced across participants. High-frequency audibility was evaluated by assessing aided thresholds (dB SPL) for warble tones and the high-frequency phonemes /sh/ and /s/. Speech recognition in quiet was measured with the University of Western Ontario (UWO) Plurals Test, the UWO Distinctive Features Difference (DFD) Test, and the Phoneme Perception Test vowel-consonant-vowel nonsense syllable test. Sentence recognition in noise was evaluated with the Bamford-Kowal-Bench Speech-In-Noise (BKB-SIN) Test. Analysis: Repeated-measures analyses of variance were used to analyze the data collected in this study. The results across the three different conditions were compared. Results: No difference in performance across conditions was observed for detection of high-frequency warble tones and the speech sounds /sh/ and /s/. No significant difference was seen across conditions for speech recognition in quiet when measured with the UWO Plurals Test, the UWO-DFD Test, and the Phoneme Perception Test vowel-consonant-vowel nonsense syllable test. Finally, there were also no differences across conditions on the BKB-SIN Test. Conclusions: These results suggest that, compared with wideband amplification, NLFC neither degrades nor improves audibility for high-frequency speech sounds, recognition of those sounds, or sentence recognition in noise for children with cookie-bite audiometric configurations.
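A minimal sketch of the within-participant analysis described above, using statsmodels' AnovaRM on fabricated scores for seven children across the three device conditions; the numbers are placeholders, not study data.

```python
# Sketch of the within-participant comparison across the three device
# conditions using a repeated-measures ANOVA. Scores are fabricated.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

data = pd.DataFrame({
    "child":     [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6, 7, 7, 7],
    "condition": ["NLFC_off", "NLFC_on", "wideband"] * 7,
    "score":     [72, 74, 73, 68, 67, 70, 80, 79, 81, 75, 76, 74,
                  66, 69, 68, 71, 70, 72, 78, 77, 79],   # % correct (made up)
})

res = AnovaRM(data, depvar="score", subject="child", within=["condition"]).fit()
print(res.anova_table)
```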


2018
Author(s): Tim Schoof, Pamela Souza

Objective: Older hearing-impaired adults typically experience difficulties understanding speech in noise. Most hearing aids address this issue using digital noise reduction. While noise reduction does not necessarily improve speech recognition, it may reduce the resources required to process the speech signal. The freed resources may, in turn, support the ability to perform another task while listening to speech (i.e., multitasking). This study examined to what extent changing the strength of digital noise reduction in hearing aids affects the ability to multitask. Design: Multitasking was measured using a dual-task paradigm, combining a speech recognition task and a visual monitoring task. The speech recognition task involved sentence recognition in the presence of six-talker babble at signal-to-noise ratios (SNRs) of 2 and 7 dB. Participants were fit with commercially available hearing aids programmed with three noise reduction settings: off, mild, and strong. Study sample: Eighteen hearing-impaired older adults. Results: There were no effects of noise reduction on the ability to multitask, or on the ability to recognize speech in noise. Conclusions: Adjusting noise reduction settings in the clinic may not consistently improve performance on tasks such as these.
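The speech-in-babble conditions above imply mixing sentences and noise at fixed SNRs. A common way to do this is to scale the babble against the speech RMS, as in the sketch below; the signals here are random placeholders and the function name is illustrative.

```python
# Sketch: scaling six-talker babble to present sentences at target SNRs of
# +2 and +7 dB. Signals here are random placeholders; in practice, 'speech'
# and 'babble' would be recorded waveforms at the same sampling rate.
import numpy as np

rng = np.random.default_rng(1)
speech = rng.standard_normal(48000)
babble = rng.standard_normal(48000)

def mix_at_snr(speech, noise, snr_db):
    """Scale the noise so the speech-to-noise RMS ratio equals snr_db."""
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    gain = rms(speech) / (rms(noise) * 10 ** (snr_db / 20))
    return speech + gain * noise

mix_2db = mix_at_snr(speech, babble, 2.0)
mix_7db = mix_at_snr(speech, babble, 7.0)
```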


2018 · Vol 29 (06) · pp. 520-532
Author(s): Jonathan M. Vaisberg, Paula Folkeard, John Pumford, Philipp Narten, Susan Scollie

Abstract: The real-ear-to-coupler difference (RECD) is an ANSI standardized method for estimating ear canal sound pressure level (SPL) thresholds and assisting in the prediction of real-ear aided responses. It measures the difference in dB between the SPL produced in the ear canal and the SPL produced in an HA-1 2-cc coupler by the same sound source. Recent evidence demonstrates that extended high-frequency bandwidth, beyond the hearing aid bandwidth typically measured, is capable of providing additional clinical benefit. The industry has, in turn, moved toward developing hearing aids and verification equipment capable of producing and measuring extended high-frequency audible output. As a result, a revised RECD procedure conducted using a smaller, 0.4-cc coupler, known as the wideband-RECD (wRECD), has been introduced to facilitate extended high-frequency coupler-based measurements up to 12.5 kHz. This study aimed to (1) compare test–retest repeatability between the RECD and wRECD and (2) measure absolute agreement between the RECD and wRECD when both are referenced to a common coupler. RECDs and wRECDs were measured bilaterally in adult ears by calculating the dB difference in SPL between the ear canal and coupler responses. Real-ear probe microphone measures were completed twice per ear per participant for both foam-tip and customized earmold couplings using the Audioscan Verifit 1 and Verifit 2 fitting systems, followed by measurements in the respective couplers. Twenty-one adults (mean age = 67 yr, range = 19–78) with typical aural anatomy (as determined by measures of impedance and otoscopy) participated in this study, leading to a sample size of 42 ears. Repeatability within RECD and wRECD was assessed for each coupling configuration using a repeated-measures analysis of variance (ANOVA) with test–retest and frequency as within-participants factors. Repeatability between the RECD and wRECD was assessed within each configuration using a repeated-measures ANOVA with test–retest, frequency, and coupler type as within-participants factors. Agreement between the RECD and wRECD was assessed for each coupling configuration using a repeated-measures ANOVA with RECD value, coupler type, and frequency as within-participants factors. Post hoc comparisons with Bonferroni corrections were used when appropriate to locate the frequencies at which differences occurred. A 3-dB criterion was defined to locate differences of clinical significance. Average absolute test–retest differences were within ±3 dB within each coupler and coupling configuration, and between the RECD and wRECD. The RECD and wRECD were in absolute agreement following HA-1-referenced transforms, with most frequencies agreeing within ±1 dB, except at 0.2 kHz for the earmold, and 0.2–0.25 kHz for the foam tip, where the average RECD exceeded the average wRECD by slightly >3 dB. Test–retest repeatability of the RECD (up to 8 kHz) and wRECD (up to 12.5 kHz) is acceptable and similar to previously reported data. The RECD and wRECD are referenced to different couplers, but can be rendered comparable with a simple transform, producing values that are in accordance with the ANSI S3.46-2013 standard.
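Conceptually, an RECD is the per-frequency dB difference between ear canal and coupler SPL from the same source. The sketch below illustrates that subtraction, a 3-dB test-retest check, and a generic coupler-referencing offset; all values, including the correction terms, are made up rather than ANSI or Audioscan data.

```python
# Sketch: computing an RECD as the per-frequency dB difference between ear
# canal and coupler SPL, and checking test-retest agreement against a 3-dB
# criterion. All SPL values and referencing offsets are illustrative only.
import numpy as np

freqs_khz     = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 8.0])
ear_canal_spl = np.array([78.0, 80.0, 83.0, 88.0, 92.0, 95.0])   # dB SPL (made up)
coupler_spl   = np.array([75.0, 76.0, 77.0, 79.0, 81.0, 83.0])   # dB SPL (made up)

recd = ear_canal_spl - coupler_spl            # dB, per frequency

# Hypothetical retest of the same ear:
recd_retest = recd + np.array([0.5, -1.0, 0.8, 1.5, -0.7, 2.1])
within_3db = np.abs(recd - recd_retest) <= 3.0
print(dict(zip(freqs_khz, within_3db)))

# Referencing a 0.4-cc (wRECD) measurement to the HA-1 coupler amounts to
# adding a frequency-specific correction (placeholder values here):
coupler_correction = np.array([-4.0, -3.5, -3.0, -2.5, -2.0, -1.5])
wrecd_ha1_referenced = recd + coupler_correction
```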


2009 · Vol 20 (08) · pp. 465-479
Author(s): Francis Kuk, Denise Keenan, Petri Korhonen, Chi-chuen Lau

Background: Frequency transposition has gained renewed interest in recent years. This type of processing takes sounds in the unaidable high-frequency region and moves them to the lower frequency region. One concern is that the transposed sounds mask or distort the original low-frequency sounds and lead to poorer performance. On the other hand, experience with transposition may allow listeners to learn the new auditory percepts and benefit from transposition. Purpose: The current study was designed to examine the effect of linear frequency transposition on consonant identification in quiet (50 dB SPL and 68 dB SPL) and in noise at three intervals: at the initial fit, after one month of use of transposition (along with auditory training), and after a further month of use (without directed training). Research Design: A single-blind, factorial repeated-measures design was used to study the effect of test conditions (three) and hearing aid setting/time interval (four) on consonant identification. Study Sample: Eight adults with a severe-to-profound high-frequency sensorineural hearing loss participated. Intervention: Participants were fit with Widex m4-m behind-the-ear hearing aids binaurally in the frequency transposition mode, and their speech scores were measured initially. They wore the hearing aids home for one month and were instructed to complete a self-paced “bottom-up” training regimen. They returned after the training, and their speech performance was measured. They wore the hearing aids home for another month, but they were not instructed to complete any auditory training. Their speech performance was again measured at the end of the two-month trial. Data Collection and Analysis: Consonant performance was measured with a nonsense syllable test (ORCA-NST) that was developed at this facility (Office of Research in Clinical Amplification [Widex]). The test conditions included testing in quiet at 50 dB SPL and 68 dB SPL, and at 68 dB SPL in noise (SNR [signal-to-noise ratio] = +5). The hearing aid conditions included no transposition at initial fit (V1), transposition at initial fit (V2), transposition at one month post-fit (V3), and transposition at 2 months post-fit (V4). Identification scores were analyzed for each individual phoneme and phonemic class. Repeated-measures ANOVAs were conducted using SPSS software to examine significant differences. Results: For all test conditions (50 dB SPL in quiet, 68 dB SPL in quiet, and 68 dB SPL in noise), a statistically significant difference (p < 0.05) was reached between the transposition condition measured at two months postfitting and the initial fitting (with and without transposition) for fricatives only. The difference between transposition and the no-transposition conditions at the 50 dB SPL condition was also significant for the initial and one-month intervals. Analysis of individual phonemes showed a decrease in the number of confusions and an increase in the number of correct identifications over time. Conclusions: Linear frequency transposition improved fricative identification over time. Proper candidate selection with appropriate training is necessary to fully realize the potential benefit of this type of processing.
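Phoneme-level scoring of a nonsense syllable test typically reduces to a confusion matrix and per-class percent correct. The sketch below shows one way to tabulate that with pandas; the trials and the fricative set are hypothetical.

```python
# Sketch: building a consonant confusion matrix from trial-level responses
# and computing percent-correct for the fricative class. The trial data and
# phoneme inventory are fabricated for illustration.
import pandas as pd

trials = pd.DataFrame({
    "presented": ["s", "s", "f", "sh", "t", "s", "f", "sh", "k", "t"],
    "responded": ["s", "f", "f", "s",  "t", "s", "f", "sh", "t", "t"],
})

confusions = pd.crosstab(trials["presented"], trials["responded"])
print(confusions)

fricatives = {"s", "f", "sh"}                      # assumed class membership
fric = trials[trials["presented"].isin(fricatives)]
pct_correct = 100 * (fric["presented"] == fric["responded"]).mean()
print(f"Fricative identification: {pct_correct:.0f}% correct")
```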


2015 · Vol 8 (12) · pp. 5157-5176
Author(s): M. Iarlori, F. Madonna, V. Rizi, T. Trickl, A. Amodeo

Abstract. Since its establishment in 2000, EARLINET (European Aerosol Research Lidar NETwork) has provided, through its database, quantitative aerosol properties, such as aerosol backscatter and aerosol extinction coefficients, the latter only for stations able to retrieve it independently (from Raman or high-spectral-resolution lidars). These coefficients are stored in terms of vertical profiles, and the EARLINET database also includes the details of the range resolution of the vertical profiles. In fact, the algorithms used in the lidar data analysis often alter the spectral content of the data, mainly acting as low-pass filters to reduce the high-frequency noise. Data filtering is described in digital signal processing (DSP) theory as a convolution sum: each filtered signal output at a given range is the result of a linear combination of several signal input data samples (relative to different ranges from the lidar receiver), and this can be seen as a loss of range resolution of the output signal. Low-pass filtering always introduces distortions in the lidar profile shape. Thus, both the removal of high frequencies, i.e., the removal of details below a certain spatial extent, and the spatial distortion reduce the range resolution. This paper discusses the determination of the effective resolution (ERes) of the vertical profiles of aerosol properties retrieved from lidar data. Particular attention has been dedicated to assessing the impact of low-pass filtering on the effective range resolution in the retrieval procedure.
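One way to make the loss of range resolution concrete is to look at the smoothing filter's transfer function and read off a cutoff. The sketch below does this for a simple moving-average filter using a -3 dB convention; both the filter and the convention are assumptions for illustration and do not reproduce the paper's exact ERes definition.

```python
# Sketch: estimating the effective range resolution of a smoothing
# (low-pass) filter applied to a lidar profile. A moving-average filter and
# a -3 dB cutoff convention are used purely for illustration.
import numpy as np
from scipy.signal import freqz

dz = 7.5                       # raw range resolution of the profile, m (assumed)
window = 11                    # smoothing window length in range bins (assumed)
b = np.ones(window) / window   # moving-average (boxcar) filter coefficients

w, h = freqz(b, worN=4096)                 # w in rad/sample
spatial_freq = w / (2 * np.pi * dz)        # cycles per metre
mag_db = 20 * np.log10(np.maximum(np.abs(h), 1e-12))

cutoff_idx = np.argmax(mag_db <= -3.0)     # first frequency at/below -3 dB
f_cut = spatial_freq[cutoff_idx]           # cycles per metre
eff_resolution = 1.0 / (2.0 * f_cut)       # smallest resolvable half-wavelength, m

print(f"Raw resolution: {dz:.1f} m, effective resolution ~ {eff_resolution:.1f} m")
```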


2021 · Vol 25 · pp. 233121652199913
Author(s): Paula Folkeard, Maaike Van Eeckhoutte, Suzanne Levy, Drew Dundas, Parvaneh Abbasalipour, ...

Direct drive hearing devices, which deliver a signal directly to the middle ear by vibrating the tympanic membrane via a lens placed in contact with the umbo, are designed to provide an extension of audible bandwidth, but there are few studies of the effects of these devices on preference, speech intelligibility, and loudness. The current study is the first to compare aided speech understanding between narrow and extended bandwidth conditions for listeners with hearing loss while fitted with a direct drive hearing aid system. The study also explored the effect of bandwidth on loudness perception and investigated subjective preference for bandwidth. Fifteen adult hearing aid users with symmetrical sensorineural hearing loss participated in a prospective, within-subjects, randomized single-blind repeated-measures study. Participants wore the direct drive hearing aids for 4 to 15 weeks (average 6 weeks) prior to outcome measurement. Outcome measures were completed in various bandwidth conditions achieved by reducing the gain of the device above 5000 Hz or by filtering the stimuli. Aided detection thresholds provided evidence of amplification to 10000 Hz. A significant improvement was found in high-frequency consonant detection and recognition, as well as for speech in noise performance in the full versus narrow bandwidth conditions. Subjective loudness ratings increased with provision of the full bandwidth available; however, real-world trials showed most participants were able to wear the full bandwidth hearing aids with only small adjustments to the prescription method. The majority of participants had either no preference or a preference for the full bandwidth setting.
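A minimal sketch of the within-subjects bandwidth comparison, using fabricated speech-in-noise scores for 15 listeners and standard paired tests from SciPy; it illustrates the style of analysis, not the study's actual statistics.

```python
# Sketch: within-subjects comparison of speech-in-noise scores in the full
# versus narrow bandwidth conditions. Scores are fabricated; a paired t-test
# (or Wilcoxon test for a small sample) is one reasonable analysis.
import numpy as np
from scipy.stats import ttest_rel, wilcoxon

narrow = np.array([62, 58, 71, 65, 60, 68, 73, 59, 66, 70, 64, 61, 69, 72, 63])
full   = np.array([68, 61, 75, 70, 63, 72, 78, 62, 70, 74, 69, 65, 73, 77, 66])

t, p_t = ttest_rel(full, narrow)
w, p_w = wilcoxon(full, narrow)
print(f"paired t-test: t = {t:.2f}, p = {p_t:.3f}")
print(f"Wilcoxon:      W = {w:.1f}, p = {p_w:.3f}")
```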


2021 · Vol 12
Author(s): Hsin-Yi Wang, Men-Tzung Lo, Kun-Hui Chen, Susan Mandell, Wen-Kuei Chang, ...

Background: Induction of anesthesia with propofol is associated with a disturbance in hemodynamics, in part due to its effects on parasympathetic and sympathetic tone. The impact of propofol on autonomic function is unclear. In this study, we investigated in detail the changes in the cardiac autonomic nervous system (ANS) and peripheral sympathetic outflow that occur during the induction of anesthesia. Methods: Electrocardiography and pulse photoplethysmography (PPG) signals were recorded and analyzed from 30 s before to 120 s after propofol induction. The spectrogram was derived by continuous wavelet transform with the power of instantaneous high-frequency (HFi) and low-frequency (LFi) bands extracted at 1-s intervals. The wavelet-based parameters were then divided into the following segments: (1) baseline (30 s before administration of propofol), (2) early phase (first minute after administration of propofol), and (3) late phase (second minute after administration of propofol) and compared with the same time intervals of the Fourier-based spectrum [high-frequency (HF) and low-frequency (LF) bands]. Time-dependent effects were explored using fractional polynomials and repeated-measures analysis of variance. Results: Administration of propofol resulted in reductions in HFi and LFi and increases in the LFi/HFi ratio and PPG amplitude, which had a significant non-linear relationship. Significant between-group differences were found in the HFi, LFi, and LFi/HFi ratio and Fourier-based HF and LF after dividing the segments into baseline and early/late phases. On post hoc analysis, changes in HFi, LFi, and the LFi/HFi ratio were significant starting from the early phase. The corresponding effect size (partial eta squared) was > 0.3, achieving power over 90%; however, significant decreases in HF and LF were observed only in the late phase. The PPG amplitude was increased significantly in both the early and late phases. Conclusion: Propofol induction results in significant immediate changes in ANS activity that include temporally relative elevation of cardiac sympathovagal balance and reduced sympathetic activity. Clinical Trial Registration: The study was approved by the Institutional Review Board of Taipei Veterans General Hospital (No. 2017-07-009CC) and is registered at ClinicalTrials.gov (https://clinicaltrials.gov/ct2/show/NCT03613961).
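As a simplified, Fourier-based analogue of the HF/LF analysis described above, the sketch below computes LF and HF power from a synthetic RR-interval series with Welch's method, assuming the conventional 0.04-0.15 Hz and 0.15-0.40 Hz bands; the study itself used a wavelet-based, time-resolved estimate.

```python
# Sketch: Fourier-based LF/HF heart-rate-variability analysis. Conventional
# bands (LF 0.04-0.15 Hz, HF 0.15-0.40 Hz) are assumed; the RR-interval
# series below is synthetic.
import numpy as np
from scipy.signal import welch
from scipy.interpolate import interp1d

rng = np.random.default_rng(2)
rr = 0.85 + 0.05 * rng.standard_normal(240)        # 240 synthetic RR intervals (s)
t = np.cumsum(rr)                                  # beat times (s)

# Resample the irregularly spaced RR series to a uniform 4 Hz tachogram.
fs = 4.0
t_uniform = np.arange(t[0], t[-1], 1.0 / fs)
rr_uniform = interp1d(t, rr, kind="cubic")(t_uniform)

f, psd = welch(rr_uniform - rr_uniform.mean(), fs=fs, nperseg=256)

def band_power(f, psd, lo, hi):
    mask = (f >= lo) & (f < hi)
    return np.trapz(psd[mask], f[mask])

lf = band_power(f, psd, 0.04, 0.15)
hf = band_power(f, psd, 0.15, 0.40)
print(f"LF = {lf:.3e}, HF = {hf:.3e}, LF/HF = {lf / hf:.2f}")
```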


SLEEP · 2020 · Vol 43 (Supplement_1) · pp. A35-A35
Author(s): O Hanron, G Mason, J F Holmes, R M Spencer

Abstract. Introduction: Early childhood naps have been shown to support emotional memory consolidation, but this benefit only emerges the following day. It is unknown whether consolidation occurs during the nap itself, or if napping only prepares memories for overnight consolidation. In this study, we used a forced-choice recognition task to determine whether naps protect emotional memories against post-nap interference, which would indicate the occurrence of consolidation. Methods: Preschool children (33–67 months; N=63) viewed neutral faces paired with negative or neutral descriptions. Following a nap or an equal interval awake (within-subjects, order counterbalanced, ~1 week apart), half of these participants (N=33) were presented with an interfering set of faces and descriptions, while the other half (N=30) did not receive interference. For all participants, recognition of the original faces was probed after encoding, after the nap or wake interval, and the next morning. Results: To assess the influence of napping on changes in emotional memory, 2 (stimulus valence: negative vs. neutral) x 2 (condition: nap vs. wake) repeated-measures ANOVAs were performed. Recall of negative and neutral items did not immediately differ between the nap and wake conditions for the participants who received no interference. 24 hours later, these children trended towards recalling negative and neutral items better if they had napped the previous day (condition main effect: F(1,29)=3.539, p=0.070). In contrast, participants who received interference recalled fewer negative items than neutral items immediately following a nap (p=0.034), while this difference did not emerge following an interval awake. Conclusion: Our results suggest that naps initially destabilize emotional memories rather than protecting them against interference. However, this initial destabilization may reflect the partial processing of memories during naps, perhaps allowing for enhanced long-term consolidation. Overall, our findings provide important insight into the mechanism of nap-dependent emotional processing. Support: Supported by NIH R01 HL111695 and an Honors Research Grant from Commonwealth Honors College
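A minimal sketch of the 2 x 2 repeated-measures ANOVA described above, run with statsmodels on fabricated accuracy scores; the cell means and sample are invented for illustration only.

```python
# Sketch: the 2 (stimulus valence) x 2 (condition: nap vs. wake)
# repeated-measures ANOVA on recognition accuracy, using fabricated scores.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(3)
cell_means = {  # made-up mean accuracy (%) per design cell
    ("negative", "nap"): 70, ("neutral", "nap"): 74,
    ("negative", "wake"): 72, ("neutral", "wake"): 73,
}
rows = [{"child": child, "valence": v, "condition": c,
         "accuracy": m + rng.normal(0, 3)}
        for child in range(1, 13)
        for (v, c), m in cell_means.items()]

df = pd.DataFrame(rows)
res = AnovaRM(df, depvar="accuracy", subject="child",
              within=["valence", "condition"]).fit()
print(res.anova_table)
```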


Author(s): Bryan Levy, Ethan Hilton, Megan Tomko, Julie Linsey

Design problems are used to evaluate students’ abilities and the impact of various teaching approaches and design methods. They vary greatly in style and subject area to accommodate a wide range of disciplines, cultures, and expertise. While design problems are occasionally reused between studies, new ones are continually created because a given problem cannot be administered to the same individual more than once without bias; in repeated measures testing, therefore, multiple design problems are needed. These problems must be similar in nature and structure, or “equivalent,” to measure students’ design performance accurately. In this study, we examine four design problems: peanut shelling, corn husking, coconut harvesting, and a personal alarm clock. We determine whether these problems can be deemed equivalent for the purpose of evaluating student design performance through repeated measures testing. We implemented idea generation sessions using both between-subjects and within-subjects approaches, and evaluated the solutions on quantity, quality, novelty, variety, and completeness metrics. The data imply that the Peanut and Corn problems are similar in nature, as are the Alarm and Coconut problems; as such, these problem pairings may be used to test differences based on group means.
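Of the metrics listed above, quantity and a rarity-based novelty score are the simplest to compute. The sketch below shows one simplified version using binned solution categories; the bins, counts, and scoring rule are illustrative and are not the authors' exact metrics.

```python
# Sketch: scoring an idea-generation session for quantity and a simple
# rarity-based novelty measure. Ideas are represented by the solution
# category they were binned into; all bins and counts are fabricated, and
# this is a simplified stand-in for the full metrics used in such studies.
from collections import Counter

# Solution-category labels assigned to one participant's ideas (made up):
participant_ideas = ["friction", "impact", "impact", "compression", "thermal"]
# Category counts pooled across all participants in the study (made up):
pool_counts = Counter({"friction": 40, "impact": 25, "compression": 10,
                       "thermal": 3, "chemical": 2})
pool_total = sum(pool_counts.values())

quantity = len(participant_ideas)
# Novelty of each idea: 1 minus the fraction of the pool using that category,
# so rarer solution categories score closer to 1.
novelty_scores = [1 - pool_counts[idea] / pool_total for idea in participant_ideas]
avg_novelty = sum(novelty_scores) / len(novelty_scores)

print(f"quantity = {quantity}, mean novelty = {avg_novelty:.2f}")
```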

