signal presentation
Recently Published Documents


TOTAL DOCUMENTS: 32 (five years: 6)
H-INDEX: 10 (five years: 1)

2020, Vol 20 (5), pp. 20-37
Author(s): Kiril M. Alexiev, Teodor G. Toshkov, Dimiter P. Prodanov

Abstract: Traditionally, engineers analyze signals in the time domain and in the frequency domain. These representations reveal different signal characteristics, and in many cases exploring a single representation is not sufficient. In the present paper, a new self-similar decomposition of digital signals is proposed. Unlike some well-known approaches, the proposed method for signal decomposition and description does not rely on pre-selected templates such as sine waves or wavelets. It operates in the time domain while still capturing information about the signal's frequency characteristics. The good multiscale properties of the proposed algorithm are demonstrated in a series of examples. It can be used for compact signal representation, restoration of distorted signals, event detection, localization, etc. The method is also suitable for describing highly repetitive continuous and digital signals.


2020, Vol 105, pp. 121-130
Author(s): Ronel Z. Samuel, Pedro Lei, Kihoon Nam, Olga J. Baker, Stelios T. Andreadis

2020, Vol 24, pp. 233121652094582
Author(s): Kai Siedenburg, Saskia Röttges, Kirsten C. Wagener, Volker Hohmann

It is well known that hearing loss compromises auditory scene analysis abilities, usually manifested as difficulty understanding speech in noise. Remarkably little is known about auditory scene analysis in hearing-impaired (HI) listeners when it comes to musical sounds. Specifically, it is unclear to what extent HI listeners are able to hear out a melody or an instrument from a musical mixture. Here, we tested a group of younger normal-hearing (yNH) and older HI (oHI) listeners with moderate hearing loss on their ability to match short melodies and instruments presented as part of mixtures. Four-tone sequences were used in conjunction with a simple musical accompaniment that acted as a masker (cello/piano dyads or spectrally matched noise). In each trial, a signal-masker mixture was presented, followed by two different versions of the signal alone. Listeners indicated which signal version was part of the mixture. Signal versions differed either in the sequential order of the pitch sequence or in timbre (flute vs. trumpet). Signal-to-masker thresholds were measured by varying the signal presentation level in an adaptive two-down/one-up procedure. We observed that thresholds of oHI listeners were elevated by 10 dB on average compared with those of yNH listeners. In contrast to yNH listeners, oHI listeners showed no evidence of listening in the dips of the masker. Musical training of participants was associated with lower thresholds. These results may indicate detrimental effects of hearing loss on central aspects of musical scene perception.
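The adaptive two-down/one-up staircase used here is a standard psychophysical procedure that converges on the signal level yielding roughly 70.7% correct responses. A minimal sketch of the update rule (class name, starting level, and step size are illustrative assumptions, not parameters from the study):

```python
# Sketch of a two-down/one-up adaptive staircase: lower the signal level after
# two consecutive correct responses, raise it after any incorrect response.
# All names and numeric values are illustrative assumptions.

class TwoDownOneUp:
    def __init__(self, start_level_db=0.0, step_db=2.0):
        self.level_db = start_level_db
        self.step_db = step_db
        self._correct_streak = 0
        self._last_direction = 0   # -1 = down, +1 = up
        self.reversals = []        # levels at which the track changed direction

    def update(self, correct):
        if correct:
            self._correct_streak += 1
            if self._correct_streak == 2:   # two in a row -> harder (quieter)
                self._move(-1)
                self._correct_streak = 0
        else:                               # any miss -> easier (louder)
            self._correct_streak = 0
            self._move(+1)
        return self.level_db

    def _move(self, direction):
        if self._last_direction and direction != self._last_direction:
            self.reversals.append(self.level_db)
        self._last_direction = direction
        self.level_db += direction * self.step_db


track = TwoDownOneUp(start_level_db=0.0, step_db=2.0)
track.update(True)    # one correct: level unchanged
track.update(True)    # second correct: level drops to -2 dB
track.update(False)   # incorrect: level rises back to 0 dB (a reversal)
print(track.level_db, track.reversals)  # -> 0.0 [-2.0]
```

In practice the threshold estimate is the mean level at the last several reversals, which this sketch records in `reversals`.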


2018
Author(s): Nikolai Smetanin, Mikhail A. Lebedev, Alexei Ossadtchi

Abstract: Neurofeedback (NFB) is a real-time paradigm in which subjects monitor their own brain activity, presented to them via one of the sensory modalities: visual, auditory, or tactile. NFB has been proposed as an approach to treat neurological conditions and augment brain functions. In many applications, especially in the automatic learning scenario, it is important to decrease NFB latency so that the appropriate brain mechanisms can be efficiently engaged. To this end, we propose a novel algorithm that significantly reduces the latency of feedback signal presentation in the electroencephalographic (EEG) NFB paradigm. The algorithm is based on least-squares optimization of finite impulse response (FIR) filter weights and analytic signal reconstruction. In this approach, the trade-off between NFB latency and the accuracy of EEG envelope estimation can be adjusted depending on the application's needs. Moreover, the algorithm makes it possible to implement predictive NFB by setting the latency to negative values while maintaining acceptable envelope estimation accuracy. As such, our algorithm offers significant improvements in cases where subjects need to detect neural events as soon as possible, or even in advance.
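The core idea — fitting complex FIR filter weights by least squares so the filter approximates the band-limited analytic signal at a chosen delay, then taking the magnitude as the envelope — can be sketched as follows. All numerical choices (sampling rate, band edges, tap count, target delay) are illustrative assumptions, not the authors' settings:

```python
import numpy as np

# Least-squares design of a complex-valued FIR filter that approximates the
# band-limited analytic signal at a chosen delay (a sketch of the idea in the
# abstract; shrinking the delay trades envelope accuracy for lower latency).

def design_cfir(n_taps, fs, band, delay_samples, n_grid=2000):
    freqs = np.fft.fftfreq(n_grid, d=1.0 / fs)
    # Ideal response: gain 2 on the positive-frequency band (analytic signal),
    # zero elsewhere, with a linear-phase term encoding the target delay.
    ideal = np.where((freqs >= band[0]) & (freqs <= band[1]), 2.0, 0.0)
    ideal = ideal * np.exp(-2j * np.pi * freqs * delay_samples / fs)
    # Tap k contributes exp(-2*pi*i*f*k/fs); solve min ||F b - ideal|| for b.
    F = np.exp(-2j * np.pi * np.outer(freqs, np.arange(n_taps)) / fs)
    taps, *_ = np.linalg.lstsq(F, ideal, rcond=None)
    return taps

fs = 250.0                    # EEG-like sampling rate (assumed)
taps = design_cfir(n_taps=100, fs=fs, band=(8.0, 12.0), delay_samples=50)

# Envelope of a 10 Hz test tone: filter, then take the magnitude.
t = np.arange(int(2 * fs)) / fs
x = np.sin(2 * np.pi * 10.0 * t)
envelope = np.abs(np.convolve(x, taps, mode="same"))
print(np.median(envelope))    # close to 1 for a unit-amplitude sine
```

Setting `delay_samples` to a small or negative value in the same design yields the low-latency or predictive variants the abstract describes, at the cost of a poorer least-squares fit.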


2008, Vol 61 (11), pp. 1658-1668
Author(s): Lee Hogarth, Anthony Dickinson, Alison Austin, Craig Brown, Theodora Duka

Three localized visual pattern stimuli were trained as predictive signals of auditory outcomes. One signal partially predicted an aversive noise in Experiment 1 and a neutral tone in Experiment 2, whereas the other signals consistently predicted either the occurrence or the absence of the noise. The expectation of the noise was measured during each signal presentation, and only participants whose expectations demonstrated contingency knowledge showed differential attention to the signals. Importantly, when attention was measured by visual fixations, the contingency-aware group attended more to the partially predictive signal than to the consistent predictors in both experiments. This profile of visual attention supports the Pearce and Hall (1980) theory of the role of attention in associative learning.
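The Pearce and Hall (1980) account predicts exactly this pattern: a cue's associability (attention weight) tracks the absolute prediction error, so a partially predictive signal retains attention while consistent predictors lose it. A toy sketch of the update rule (the learning rate and associative strengths below are illustrative assumptions):

```python
# Pearce-Hall associability update: alpha_{n+1} = g*|lam - V| + (1-g)*alpha_n,
# where lam is the outcome and V the summed associative strength.
# Parameter values here are illustrative, not fit to the experiments.

def pearce_hall_update(alpha, v_sum, outcome, gamma=0.5):
    return gamma * abs(outcome - v_sum) + (1.0 - gamma) * alpha

alpha_partial, alpha_consistent = 1.0, 1.0
v_partial, v_consistent = 0.5, 1.0     # asymptotic associative strengths
for trial in range(20):
    # Partial predictor: noise on half the trials -> sustained prediction error.
    outcome_partial = 1.0 if trial % 2 == 0 else 0.0
    alpha_partial = pearce_hall_update(alpha_partial, v_partial, outcome_partial)
    # Consistent predictor: outcome always matches -> error shrinks to zero.
    alpha_consistent = pearce_hall_update(alpha_consistent, v_consistent, 1.0)

print(round(alpha_partial, 3), round(alpha_consistent, 3))  # -> 0.5 0.0
```

The partial predictor settles at a high associability (here 0.5) while the consistent predictor's decays toward zero, mirroring the greater visual fixation on the partially predictive signal reported above.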

