envelope reconstruction
Recently Published Documents

Total documents: 22 (five years: 7)
H-index: 8 (five years: 0)
2021 ◽  
Author(s):  
Waldo Nogueira ◽  
Hanna Dolhopiatenko

Objectives: Electroencephalography (EEG) can be used to decode selective attention in cochlear implant (CI) users. This work investigates whether selective attention to an attended speech source in the presence of a concurrent speech source can predict speech understanding in CI users. Approach: CI users were instructed to attend to one of two speech streams while EEG was recorded. Both speech streams were presented to the same ear at different signal-to-interference ratios (SIRs). The envelope of the to-be-attended speech was reconstructed from the EEG using decoders trained with regularized least squares. At each SIR, the correlation coefficient between the reconstructed and the attended speech stream, ρ_A(SIR), and between the reconstructed and the unattended speech stream, ρ_U(SIR), was computed. Main Results: Selective attention decoding in CI users is possible even when both speech streams are presented monaurally. A significant effect of SIR was observed on the attended correlation coefficient ρ_A(SIR), as well as on the difference correlation coefficients ρ_A(SIR) − ρ_U(SIR) and ρ_A(SIR) − ρ_U(−SIR), but not on the unattended correlation coefficient ρ_U(SIR). Finally, the results show a significant correlation across subjects between speech understanding performance and the correlation coefficients ρ_A(SIR) − ρ_U(SIR) or −ρ_U(SIR). Moreover, the difference correlation coefficient ρ_A(SIR) − ρ_U(−SIR), which is less affected by the CI electrical artifact, showed a trend towards correlation with speech understanding performance. Significance: Selective attention decoding in CI users is possible; however, care needs to be taken with the CI artifact and with the speech material used to train the decoders. Even though only a small correlation trend between selective attention decoding and speech understanding was observed, these results are important for the future development of objective speech understanding measures for CI users.
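The backward-decoder pipeline described above, reconstructing the speech envelope from time-lagged EEG with regularized least squares (ridge regression) and scoring it by Pearson correlation, can be sketched as follows. This is a minimal illustration under assumed defaults (lag count, regularization strength), not the study's exact implementation.

```python
# Sketch of a backward (stimulus-reconstruction) decoder trained with
# regularized least squares. Parameter values are illustrative assumptions.
import numpy as np

def lag_matrix(eeg, n_lags):
    """Stack time-lagged copies of each EEG channel into a design matrix.

    eeg: (n_samples, n_channels) array; returns (n_samples, n_channels * n_lags).
    """
    n_samples, n_channels = eeg.shape
    X = np.zeros((n_samples, n_channels * n_lags))
    for lag in range(n_lags):
        X[lag:, lag * n_channels:(lag + 1) * n_channels] = eeg[:n_samples - lag]
    return X

def train_decoder(eeg, envelope, n_lags=32, lam=1e3):
    """Ridge-regression decoder mapping lagged EEG to the speech envelope."""
    X = lag_matrix(eeg, n_lags)
    XtX = X.T @ X + lam * np.eye(X.shape[1])  # regularized covariance
    return np.linalg.solve(XtX, X.T @ envelope)

def reconstruction_corr(eeg, envelope, weights, n_lags=32):
    """Pearson correlation between reconstructed and actual envelope."""
    recon = lag_matrix(eeg, n_lags) @ weights
    return np.corrcoef(recon, envelope)[0, 1]
```

With a decoder trained on attended speech, ρ_A(SIR) and ρ_U(SIR) would correspond to `reconstruction_corr` evaluated against the attended and unattended envelopes respectively.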


2021 ◽  
Vol 17 (9) ◽  
pp. e1009358
Author(s):  
Nathaniel J. Zuk ◽  
Jeremy W. Murphy ◽  
Richard B. Reilly ◽  
Edmund C. Lalor

The human brain tracks amplitude fluctuations of both speech and music, which reflects acoustic processing in addition to the encoding of higher-order features and one’s cognitive state. Comparing neural tracking of speech and music envelopes can elucidate stimulus-general mechanisms, but direct comparisons are confounded by differences in their envelope spectra. Here, we use a novel method of frequency-constrained reconstruction of stimulus envelopes using EEG recorded during passive listening. We expected to see music reconstruction match speech in a narrow range of frequencies, but instead we found that speech was reconstructed better than music for all frequencies we examined. Additionally, models trained on all stimulus types performed as well or better than the stimulus-specific models at higher modulation frequencies, suggesting a common neural mechanism for tracking speech and music. However, speech envelope tracking at low frequencies, below 1 Hz, was associated with increased weighting over parietal channels, which was not present for the other stimuli. Our results highlight the importance of low-frequency speech tracking and suggest an origin from speech-specific processing in the brain.
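A frequency-constrained comparison of this kind can be sketched by restricting both the reconstructed and the actual envelope to one modulation band before correlating them, so stimuli with different envelope spectra are compared on equal footing. This is an illustrative sketch, not the authors' pipeline; the sampling rate, band edges, and filter order are assumptions.

```python
# Band-limited envelope reconstruction accuracy: filter both signals to a
# narrow modulation band, then correlate. Values below are assumptions.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandlimited_corr(recon, envelope, fs, band, order=4):
    """Correlate two envelopes after restricting both to one frequency band."""
    sos = butter(order, band, btype="bandpass", fs=fs, output="sos")
    r = sosfiltfilt(sos, recon)
    e = sosfiltfilt(sos, envelope)
    return np.corrcoef(r, e)[0, 1]

fs = 64.0                                    # assumed envelope sampling rate (Hz)
bands = [(0.5, 1), (1, 2), (2, 4), (4, 8)]   # assumed modulation bands
```

Evaluating `bandlimited_corr` over each entry of `bands` yields a reconstruction-accuracy profile across modulation frequencies for each stimulus type.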


2021 ◽  
Vol 15 ◽  
Author(s):  
Giovanni M. Di Liberto ◽  
Guilhem Marion ◽  
Shihab A. Shamma

Music perception requires the human brain to process a variety of acoustic and music-related properties. Recent research has used encoding models to tease apart and study the various cortical contributors to music perception. To do so, such approaches study temporal response functions that summarise the neural activity over several minutes of data. Here we tested the possibility of assessing the neural processing of individual musical units (bars) with electroencephalography (EEG). We devised a decoding methodology based on a maximum-correlation metric across EEG segments (maxCorr) and used it to decode melodies from EEG in an experiment where professional musicians listened to and imagined four Bach melodies multiple times. We demonstrate here that accurate decoding of melodies in single subjects and at the level of individual musical units is possible from EEG signals recorded during both listening and imagination. Furthermore, we find that greater decoding accuracies are measured with the maxCorr method than with an envelope reconstruction approach based on backward temporal response functions (bTRFenv). These results indicate that low-frequency neural signals encode information beyond note timing; in particular, cortical signals below 1 Hz are shown to encode pitch-related information. Along with the theoretical implications of these results, we discuss the potential applications of this decoding methodology in the context of novel brain-computer interface solutions.
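A maximum-correlation decision rule of the kind described can be sketched as follows: assign a reconstructed EEG-derived segment to whichever candidate melody it correlates with best. This is a simplified illustration of a maximum-correlation classifier, not the exact maxCorr metric from the study; the inputs and candidate set are assumptions.

```python
# Maximum-correlation classification sketch: pick the candidate melody whose
# envelope best matches the segment reconstructed from EEG.
import numpy as np

def maxcorr_classify(recon_segment, candidate_envelopes):
    """Return (index, correlations) for the best-matching candidate.

    recon_segment: 1-D envelope reconstructed from an EEG segment.
    candidate_envelopes: list of 1-D candidate stimulus envelopes.
    """
    corrs = [np.corrcoef(recon_segment, c)[0, 1] for c in candidate_envelopes]
    return int(np.argmax(corrs)), corrs
```

Applied per musical unit (bar), such a rule yields a decoding accuracy by counting how often the attended or imagined melody receives the highest correlation.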


2021 ◽  
Author(s):  
Nathaniel J Zuk ◽  
Jeremy W Murphy ◽  
Richard B Reilly ◽  
Edmund C Lalor

Abstract: The human brain tracks amplitude fluctuations of both speech and music, which reflects acoustic processing in addition to the processing of higher-order features and one’s cognitive state. Comparing neural tracking of speech and music envelopes can elucidate stimulus-general mechanisms, but direct comparisons are confounded by differences in their envelope spectra. Here, we use a novel method of frequency-constrained reconstruction of stimulus envelopes using EEG recorded during passive listening. We expected to see music reconstruction match speech in a narrow range of frequencies, but instead we found that speech was reconstructed better than music for all frequencies we examined. Additionally, speech envelope tracking at low frequencies, below 1 Hz, was uniquely associated with increased weighting over parietal channels. Our results highlight the importance of low-frequency speech tracking and its origin from speech-specific processing in the brain.


2020 ◽  
Author(s):  
Eline Verschueren ◽  
Jonas Vanthornhout ◽  
Tom Francart

Abstract. Objectives: In recent years there has been significant interest in attempting to recover the temporal envelope of a speech signal from the neural response in order to investigate neural speech processing. The research focus is now broadening from neural speech processing in normal-hearing listeners towards hearing-impaired listeners. When testing hearing-impaired listeners, speech has to be amplified to resemble the effect of a hearing aid and to compensate for peripheral hearing loss. To date, it is not known with certainty how, or whether, neural speech tracking is influenced by sound amplification. As these higher intensities could influence the outcome, we investigated the influence of stimulus intensity on neural speech tracking. Design: We recorded the electroencephalogram (EEG) of 20 normal-hearing participants while they listened to a narrated story. The story was presented at intensities from 10 to 80 dB A. To investigate the brain responses, we analyzed neural tracking of the speech envelope by reconstructing the envelope from the EEG using a linear decoder and correlating the reconstructed envelope with the actual one. We investigated the delta (0.5-4 Hz) and theta (4-8 Hz) bands for each intensity. We also investigated the latencies and amplitudes of the responses in more detail using temporal response functions (TRFs), which are the estimated linear response functions between the stimulus envelope and the EEG. Results: Neural envelope tracking depends on stimulus intensity in both the TRF and envelope reconstruction analyses. However, provided that the decoder is applied to data of the same stimulus intensity as it was trained on, envelope reconstruction is robust to stimulus intensity. In addition, neural envelope tracking in the delta (but not theta) band seems to relate to speech intelligibility. Similar to the linear decoder analysis, TRF amplitudes and latencies depend on stimulus intensity: the amplitude of peak 1 (30-50 ms) increases and the latency of peak 2 (140-160 ms) decreases with increasing stimulus intensity. Conclusion: Although brain responses are influenced by stimulus intensity, neural envelope tracking is robust to stimulus intensity when the decoder is trained and tested at the same intensity. We can therefore assume that intensity is not a confound when testing hearing-impaired participants with amplified speech using the linear decoder approach. In addition, neural envelope tracking in the delta band appears to be correlated with speech intelligibility, showing the potential of neural envelope tracking as an objective measure of speech intelligibility.
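The forward TRF analysis mentioned above, a regularized linear mapping from the lagged stimulus envelope to an EEG channel, from which peak amplitudes and latencies are read off, can be sketched as below. The lag range and regularization strength are assumed values, and this is a sketch rather than the authors' exact estimator.

```python
# Sketch of forward TRF estimation: ridge regression from a lagged stimulus
# envelope to one EEG channel. tmin/tmax/lam are illustrative assumptions.
import numpy as np

def estimate_trf(envelope, eeg_channel, fs, tmin=0.0, tmax=0.3, lam=1.0):
    """Return (lags_ms, weights): the TRF over lags tmin..tmax seconds."""
    n_lags = int((tmax - tmin) * fs) + 1
    n = envelope.size
    X = np.zeros((n, n_lags))
    first = int(tmin * fs)
    for i, lag in enumerate(range(first, first + n_lags)):
        X[lag:, i] = envelope[:n - lag]          # envelope delayed by `lag`
    w = np.linalg.solve(X.T @ X + lam * np.eye(n_lags), X.T @ eeg_channel)
    lags_ms = 1000 * (np.arange(n_lags) / fs + tmin)
    return lags_ms, w
```

Peak latencies such as "peak 1 (30-50 ms)" would then correspond to local extrema of the returned weights within that lag window.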


Open fractures of the lower limb are increasingly common in older patients, in whom surgical reconstruction is complicated by poor-quality bone and soft tissues, and whose complex healthcare needs are exacerbated by frailty and the presence of multiple co-morbidities. These challenges are likely to increase, as the Office for National Statistics predicts that the number of people aged 75 and over in the UK will rise from 5.2 million in 2014 to 9.9 million in 2039. The majority of open fragility fractures of the lower limb occur in the tibia and ankle of older women as a result of a fall from standing. Despite the low-energy mechanism, there is a high incidence of Gustilo–Anderson III (predominantly IIIA) injuries. This reflects the frailty of this patient group and the combined effects that osteoporosis and skin ageing have upon the quality of the bone and the integrity of the surrounding soft tissue envelope. Reconstruction is complicated by higher rates of malunion, non-union, the need for amputation, and mortality compared with younger patients with similar injuries. These patients may have complex ongoing healthcare needs requiring additional support, which influence the safe delivery of established ‘best practice’ surgical interventions.


2018 ◽  
Author(s):  
Ben Somers ◽  
Eline Verschueren ◽  
Tom Francart

Abstract. Objective: When listening to speech, the brain tracks the speech envelope, and it is possible to reconstruct this envelope from EEG recordings. However, in people who hear using a cochlear implant (CI), the artifacts caused by electrical stimulation of the auditory nerve contaminate the EEG. This causes the decoder to produce an artifact-dominated reconstruction that does not reflect the neural signal processing. The objective of this study is to develop and validate a method for assessing neural tracking of the speech envelope in CI users. Approach: To obtain EEG recordings free of stimulus artifacts, the electrical stimulation is periodically interrupted. During these stimulation gaps, artifact-free EEG can be sampled and used to train a linear envelope decoder. Different recording conditions were used to characterize the artifacts and their influence on the envelope reconstruction. Main results: The present study demonstrates for the first time that neural tracking of the speech envelope can be measured in response to ongoing electrical stimulation. The responses were validated to be truly neural and not affected by stimulus artifact. Significance: Besides applications in audiology and neuroscience, the characterization and elimination of stimulus artifacts will enable future EEG studies involving continuous speech in CI users. Measures of neural tracking of the speech envelope reflect interesting properties of the listener’s perception of speech, such as speech intelligibility or attentional state. Successful decoding of neural envelope tracking will open new possibilities to investigate the neural mechanisms of speech perception with a CI.
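The gap-sampling idea, keeping only the EEG samples recorded while stimulation is interrupted, can be sketched as a simple mask over the recording. All timing values here are assumptions for illustration; the study's actual gap schedule may differ.

```python
# Sketch: boolean mask selecting artifact-free EEG samples that fall inside
# periodic stimulation gaps. Gap timing parameters are assumed values.
import numpy as np

def gap_mask(n_samples, fs, gap_period=0.1, gap_duration=0.005, gap_offset=0.0):
    """True for samples recorded during a stimulation gap.

    Stimulation is assumed to pause every `gap_period` seconds for
    `gap_duration` seconds, with the first gap starting at `gap_offset`.
    """
    period = int(round(gap_period * fs))   # samples between gap onsets
    dur = int(round(gap_duration * fs))    # samples per gap
    start = int(round(gap_offset * fs))
    return ((np.arange(n_samples) - start) % period) < dur
```

Only the masked EEG samples (and the corresponding envelope samples) would then be fed to the linear decoder, so the training data contain no stimulation artifact.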


Author(s):  
L. Díaz-Vilariño ◽  
E. Verbree ◽  
S. Zlatanova ◽  
A. Diakité

Updated and detailed indoor models are increasingly demanded for applications such as emergency management and navigational assistance. The consolidation of new portable and mobile acquisition systems has led to a higher availability of 3D point cloud data from indoor environments. In this work, we explore the combined use of point clouds and trajectories from a SLAM-based laser scanner to automate the reconstruction of building interiors. The methodology starts with door detection, since doors represent transitions from one indoor space to another, which provides an initial approach to the global configuration of the point cloud into building rooms. For this purpose, the trajectory is used to create a vertical point cloud profile in which doors are detected as local minima of the vertical distances. As the point cloud and the trajectory are related by time stamp, this feature is used to subdivide the point cloud into subspaces according to the location of the doors. The correspondence between subspaces and building rooms is not unambiguous: one subspace always corresponds to one room, but one room is not necessarily depicted by just one subspace, for example when a room contains several doors and the acquisition is performed in a discontinuous way. The labelling problem is formulated as a combinatorial approach solved by minimum-energy optimization. Once the point cloud is subdivided into building rooms, the envelope (formed by walls, ceilings, and floors) is reconstructed for each space. The connectivity between spaces is included by adding the previously detected doors to the reconstructed model. The methodology is tested in a real case study.
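The door-detection step can be sketched as a 1-D minima search: along the scanner trajectory, the vertical clearance above the device drops when it passes under a door lintel, so doors appear as local minima of the vertical-distance profile. This is an illustrative sketch, not the paper's implementation; the prominence threshold is an assumed value.

```python
# Sketch: detect doors as local minima of the vertical clearance measured
# along the trajectory. The prominence threshold is an assumption.
import numpy as np
from scipy.signal import find_peaks

def detect_doors(vertical_clearance, min_drop=0.3):
    """Indices along the trajectory where the clearance dips locally (doors).

    vertical_clearance: 1-D array of ceiling heights above trajectory points.
    min_drop: minimum dip depth (metres) for a minimum to count as a door.
    """
    # Local minima of the clearance are local maxima of its negation.
    door_indices, _ = find_peaks(-np.asarray(vertical_clearance),
                                 prominence=min_drop)
    return door_indices
```

Because trajectory points carry time stamps, each detected index maps back to an acquisition time, which is what allows the point cloud to be cut into subspaces at the doors.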

