Spatio-temporal filter adjustment from evaluative feedback for a retina implant

Author(s): Michael Becker, Rolf Eckmiller
2020
Author(s): Malte Wöstmann, Burkhard Maess, Jonas Obleser

Abstract: The deployment of neural alpha (8-12 Hz) lateralization in the service of spatial attention is well established: alpha power increases in the cortical hemisphere ipsilateral to the attended hemifield and decreases in the contralateral hemisphere. Much less is known about humans' ability to deploy such alpha lateralization in time, and thus to exploit alpha power as a spatio-temporal filter. Here we show that spatially lateralized alpha power signifies, beyond the direction of spatial attention, the distribution of attention in time, and thereby qualifies as a spatio-temporal attentional filter. Participants (N = 20) selectively listened to spoken numbers presented on one side (left vs right), while competing numbers were presented on the other side. Key to our hypothesis, temporal foreknowledge was manipulated via a visual cue, which was either instructive, indicating the to-be-probed number position (70% valid), or neutral. Temporal foreknowledge did guide participants' attention: they recognized numbers from the to-be-attended side more accurately following valid cues. In the magnetoencephalogram (MEG), spatial attention to the left versus the right side induced lateralization of alpha power in all temporal cueing conditions. Modulation of alpha lateralization at the 0.8-Hz presentation rate of spoken numbers was stronger following instructive compared with neutral temporal cues. Critically, we found stronger modulation of lateralized alpha power specifically at the onsets of temporally cued numbers. These results suggest that precisely timed hemispheric lateralization of alpha power qualifies as a spatio-temporal attentional filter mechanism susceptible to top-down behavioural goals.
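The abstract does not give the index used to quantify hemispheric lateralization, but a common convention in this literature is a normalized contrast of ipsilateral versus contralateral alpha-band power. The following is a minimal illustrative sketch of that convention (the function name, sampling rate, and toy 0.8-Hz modulation are our assumptions, not taken from the paper):

```python
import numpy as np

def alpha_lateralization_index(power_ipsi, power_contra):
    """Normalized lateralization index per sample:
    +1 = alpha power fully ipsilateral to the attended side,
    -1 = fully contralateral. Inputs are alpha-band (8-12 Hz)
    power estimates; this formula is a common convention,
    not necessarily the paper's exact metric."""
    power_ipsi = np.asarray(power_ipsi, dtype=float)
    power_contra = np.asarray(power_contra, dtype=float)
    return (power_ipsi - power_contra) / (power_ipsi + power_contra)

# Toy example: ipsilateral alpha power rhythmically boosted at the
# 0.8-Hz number presentation rate (all values hypothetical).
fs = 100.0                                        # samples per second
t = np.arange(0, 10, 1 / fs)
ipsi = 1.0 + 0.4 * np.sin(2 * np.pi * 0.8 * t)    # modulated ipsilateral power
contra = np.full_like(t, 1.0)                     # flat contralateral power
ali = alpha_lateralization_index(ipsi, contra)
print(ali.max().round(3))                         # peak lateralization per cycle
```

A time-resolved index like this is what lets lateralization act as a temporal filter: the index itself oscillates at the stimulus rate rather than staying at a fixed level.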


2018
Author(s): Ali Aroudi, Bojana Mirkovic, Maarten De Vos, Simon Doclo

Abstract: Recently, a least-squares-based method has been proposed to decode auditory attention from single-trial EEG recordings in an acoustic scenario with two competing speakers. This method aims at reconstructing the attended speech envelope from the EEG recordings using a trained spatio-temporal filter. While the performance of this method has mainly been studied under noiseless and anechoic acoustic conditions, it is important to fully understand its performance under realistic noisy and reverberant acoustic conditions. In this paper, we investigate auditory attention decoding (AAD) using EEG recordings for different acoustic conditions (anechoic, reverberant, noisy, and reverberant-noisy). In particular, we investigate the impact of different acoustic conditions on AAD filter training and on decoding. In addition, we investigate how the decoding performance is influenced by the different acoustic components (i.e. reverberation, background noise and the interfering speaker) in the reference signals used for decoding and in the training signals used for computing the filters. First, we found that for all considered acoustic conditions it is possible to decode auditory attention with a decoding performance greater than 90%, even when the acoustic conditions for AAD filter training and for decoding differ. Second, when using reference signals affected by reverberation and/or background noise, a decoding performance comparable to that obtained with clean reference signals can be achieved. In contrast, when using reference signals affected by the interfering speaker, the decoding performance decreases significantly. Third, the experimental results indicate that it is even feasible to use training signals affected by reverberation, background noise and/or the interfering speaker for computing the filters.
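The least-squares decoding pipeline described above (train a spatio-temporal filter that reconstructs the attended envelope, then decode by correlating the reconstruction with each speaker's envelope) can be sketched as follows. This is a minimal toy implementation under our own assumptions (ridge regularization, lag count, simulated data); it illustrates the general backward-model technique, not the authors' exact filters or evaluation:

```python
import numpy as np

rng = np.random.default_rng(0)

def lagged(eeg, n_lags):
    """Stack time-lagged copies of each EEG channel to form the
    spatio-temporal design matrix (time x (channels * lags))."""
    n_t, n_ch = eeg.shape
    X = np.zeros((n_t, n_ch * n_lags))
    for lag in range(n_lags):
        X[lag:, lag * n_ch:(lag + 1) * n_ch] = eeg[:n_t - lag]
    return X

def train_filter(eeg, envelope, n_lags=5, lam=1e-2):
    """Ridge-regularized least squares mapping lagged EEG -> envelope."""
    X = lagged(eeg, n_lags)
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ envelope)

def decode(eeg, env_a, env_b, w, n_lags=5):
    """Reconstruct the envelope from EEG, then attribute attention to
    whichever speaker's envelope correlates more with the reconstruction."""
    rec = lagged(eeg, n_lags) @ w
    r_a = np.corrcoef(rec, env_a)[0, 1]
    r_b = np.corrcoef(rec, env_b)[0, 1]
    return "A" if r_a > r_b else "B"

# Toy data: the EEG carries the attended envelope (speaker A) mixed
# across channels, plus noise; speaker B's envelope is independent.
n_t, n_ch = 2000, 8
env_a = np.abs(rng.standard_normal(n_t))
env_b = np.abs(rng.standard_normal(n_t))
eeg = np.outer(env_a, rng.standard_normal(n_ch)) \
      + 0.5 * rng.standard_normal((n_t, n_ch))
w = train_filter(eeg, env_a)
print(decode(eeg, env_a, env_b, w))
```

The abstract's "reference signals" correspond to `env_a`/`env_b` here: replacing them with reverberant or noisy versions degrades the correlation step, which is exactly the manipulation the paper studies.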

