frequency resolution
Recently Published Documents


TOTAL DOCUMENTS: 733 (FIVE YEARS: 176)

H-INDEX: 34 (FIVE YEARS: 5)

2022 ◽  
Vol 12 (2) ◽  
pp. 832
Author(s):  
Han Li ◽  
Kean Chen ◽  
Lei Wang ◽  
Jianben Liu ◽  
Baoquan Wan ◽  
...  

Thanks to the development of deep learning, various sound source separation networks have been proposed and have made significant progress. However, the study of the underlying separation mechanisms is still in its infancy. In this study, deep networks are explained from the perspective of auditory perception mechanisms. For separating two arbitrary sound sources from monaural recordings, three networks with different structures and parameters are trained and achieve excellent performance. The networks obtain an average scale-invariant signal-to-distortion ratio improvement (SI-SDRi) higher than 10 dB, comparable to human performance in separating natural sources. More importantly, the most intuitive principle, proximity, is explored through simultaneous and sequential organization experiments. Results show that regardless of network structure and parameters, the proximity principle is learned spontaneously by all networks: if components are proximate in frequency or time, they are not easily separated. Moreover, frequency resolution at low frequencies is better than at high frequencies. These behavioral characteristics of all three networks are highly consistent with those of the human auditory system, which implies that the learned proximity principle is not accidental but the optimal strategy selected by both networks and humans when facing the same task. The emergence of auditory-like separation mechanisms offers the possibility of developing a universal system that can be adapted to all sources and scenes.
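For reference, the SI-SDRi metric quoted above can be computed as follows. This is a minimal NumPy sketch of the standard definition, not the evaluation code used in the study.

```python
import numpy as np

def si_sdr(estimate, reference, eps=1e-8):
    """Scale-invariant signal-to-distortion ratio (dB) of an estimated source."""
    # Project the estimate onto the reference; the residual counts as distortion.
    alpha = np.dot(estimate, reference) / (np.dot(reference, reference) + eps)
    target = alpha * reference
    distortion = estimate - target
    return 10.0 * np.log10((np.sum(target**2) + eps) / (np.sum(distortion**2) + eps))

def si_sdr_improvement(estimate, mixture, reference):
    """SI-SDRi: how much the separated estimate improves over the raw mixture."""
    return si_sdr(estimate, reference) - si_sdr(mixture, reference)
```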


Sensors ◽  
2021 ◽  
Vol 22 (1) ◽  
pp. 234
Author(s):  
Mauro D’Arco ◽  
Ettore Napoli ◽  
Efstratios Zacharelos ◽  
Leopoldo Angrisani ◽  
Antonio Giuseppe Maria Strollo

The time-base used by digital storage oscilloscopes allows only a limited selection of sample rates, namely a few integer submultiples of the maximum sample rate. This limitation offers the advantage of simplifying the data transfer from the analog-to-digital converter to the acquisition memory, and of assuring stability performance, expressed in terms of absolute jitter, that is independent of the chosen sample rate. On the other hand, it prevents optimal usage of the oscilloscope's memory resources and forces post-processing operations in several applications. A time-base that allows the sample rate to be selected with very fine frequency resolution, in particular as a rational submultiple of the maximum rate, is proposed. The proposal addresses oscilloscopes with time-interleaved converters, which require a dedicated and multifaceted approach compared with architectures where a single monolithic converter is in charge of signal digitization. The proposed time-base allows sample rate values up to 200 GHz and beyond to be selected with fine frequency resolution, while still assuring jitter performance independent of the sample rate selection.
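To illustrate the difference between the two time-base schemes, the short sketch below contrasts the few sample rates reachable with integer submultiples against a rational-submultiple selection; the specific fractions are illustrative and not taken from the paper.

```python
from fractions import Fraction

F_MAX = 200e9  # maximum sample rate of the time-interleaved converter bank (samples/s)

# Conventional time-base: only a handful of integer submultiples are available.
integer_rates = [F_MAX / n for n in (1, 2, 4, 5, 10, 20)]

# Rational-submultiple time-base: any fraction p/q of the maximum rate (p <= q),
# so rates between the integer submultiples become reachable with fine granularity.
def rational_rate(p: int, q: int) -> float:
    r = Fraction(p, q)
    if r > 1:
        raise ValueError("sample rate cannot exceed the maximum rate")
    return float(F_MAX * r)

print([f"{r/1e9:.0f} GS/s" for r in integer_rates])
print(f"{rational_rate(7, 11)/1e9:.2f} GS/s")  # ~127.27 GS/s, unreachable via integer division
```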


2021 ◽  
Vol 12 ◽  
Author(s):  
Simon J. Houtman ◽  
Hanna C. A. Lammertse ◽  
Annemiek A. van Berkel ◽  
Ganna Balagura ◽  
Elena Gardella ◽  
...  

STXBP1 syndrome is a rare neurodevelopmental disorder caused by heterozygous variants in the STXBP1 gene and is characterized by psychomotor delay, early-onset developmental delay, and epileptic encephalopathy. Pathogenic STXBP1 variants are thought to alter excitation-inhibition (E/I) balance at the synaptic level, which could impact neuronal network dynamics; however, this has not been investigated yet. Here, we present the first EEG study of patients with STXBP1 syndrome to quantify the impact of the synaptic E/I dysregulation on ongoing brain activity. We used high-frequency-resolution analyses of classical and recently developed methods known to be sensitive to E/I balance. EEG was recorded during eyes-open rest in children with STXBP1 syndrome (n = 14) and age-matched typically developing children (n = 50). Brain-wide abnormalities were observed in each of the four resting-state measures assessed here: (i) slowing of activity and increased low-frequency power in the range 1.75–4.63 Hz, (ii) increased long-range temporal correlations in the 11–18 Hz range, (iii) a decrease of our recently introduced measure of functional E/I ratio in a similar frequency range (12–24 Hz), and (iv) a larger exponent of the 1/f-like aperiodic component of the power spectrum. Overall, these findings indicate that large-scale brain activity in STXBP1 syndrome exhibits inhibition-dominated dynamics, which may be compensatory to counteract local circuitry imbalances expected to shift E/I balance toward excitation, as observed in preclinical models. We argue that quantitative EEG investigations in STXBP1 and other neurodevelopmental disorders are a crucial step to understand large-scale functional consequences of synaptic E/I perturbations.
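As an illustration of point (iv), the 1/f-like aperiodic exponent can be approximated by a straight-line fit of log power against log frequency. The sketch below uses a plain Welch PSD and an assumed 1-45 Hz fit range; the study itself used dedicated analysis tooling, so this is only a simplified stand-in.

```python
import numpy as np
from scipy.signal import welch

def aperiodic_exponent(eeg, fs, fmin=1.0, fmax=45.0):
    """Estimate the 1/f exponent as the slope of log10(power) vs log10(frequency).

    A larger exponent (steeper spectral decay) is commonly interpreted as a shift
    toward inhibition-dominated dynamics.
    """
    freqs, psd = welch(eeg, fs=fs, nperseg=int(4 * fs))  # 4 s segments -> 0.25 Hz resolution
    mask = (freqs >= fmin) & (freqs <= fmax)
    slope, intercept = np.polyfit(np.log10(freqs[mask]), np.log10(psd[mask]), 1)
    return -slope  # report the exponent as a positive number

# Example on a synthetic 1/f^2-like surrogate sampled at 250 Hz:
rng = np.random.default_rng(0)
fs = 250
surrogate = np.cumsum(rng.standard_normal(fs * 60))  # random walk, exponent near 2
print(f"estimated exponent: {aperiodic_exponent(surrogate, fs):.2f}")
```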


2021 ◽  
Vol 19 ◽  
pp. 179-184 ◽  
Author(s):  
Christian Schiffer ◽  
Andreas R. Diewald

Abstract. Radar signal processing is a promising tool for vital sign monitoring. For contactless observation of breathing and heart rate, a precise measurement of the distance between the radar antenna and the patient's skin is required. This results in the need to detect small movements in the range of 0.5 mm and below. Such small changes in distance are hard to measure with a limited radar bandwidth when relying on frequency-based range detection alone. In order to enhance the relative distance resolution, a precise measurement of the observed signal's phase is required. Due to radar reflections from surfaces in close proximity to the main area of interest, the desired radar reflection can be superposed by unwanted ones. For superposed signals with little separation in the frequency domain, the main lobes of their discrete Fourier transforms (DFT) merge into a single lobe, so that their peaks cannot be distinguished. This paper evaluates a method for reconstructing the phase and amplitude of such superimposed signals.
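One generic way to recover amplitude and phase of two superposed tones whose DFT main lobes have merged, assuming their frequencies are approximately known, is a least-squares fit against cosine and sine basis vectors. The sketch below illustrates the idea on synthetic data and is not the specific reconstruction method evaluated in the paper.

```python
import numpy as np

fs, n = 1000.0, 256              # sample rate (Hz) and record length
t = np.arange(n) / fs
f1, f2 = 20.0, 22.0              # tones spaced closer than the DFT bin width (fs/n ~ 3.9 Hz)

# Superposed measurement: two reflections plus a little noise.
rng = np.random.default_rng(1)
x = 1.0 * np.cos(2 * np.pi * f1 * t + 0.3) + 0.4 * np.cos(2 * np.pi * f2 * t - 1.1)
x += 0.01 * rng.standard_normal(n)

# With the frequencies assumed known, each tone is linear in (a*cos + b*sin),
# so amplitude and phase follow from an ordinary least-squares solve.
A = np.column_stack([np.cos(2 * np.pi * f * t) for f in (f1, f2)] +
                    [np.sin(2 * np.pi * f * t) for f in (f1, f2)])
coef, *_ = np.linalg.lstsq(A, x, rcond=None)
a1, a2, b1, b2 = coef
for name, a, b in (("tone 1", a1, b1), ("tone 2", a2, b2)):
    print(f"{name}: amplitude {np.hypot(a, b):.3f}, phase {np.arctan2(-b, a):.3f} rad")
```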


2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Noore Zahra

Motivation. In Parkinson’s disease, disturbances in gait initiation are of particular interest because they affect postural adjustments and movement, which may lead to falls. Falling can be dangerous and at times life-threatening, and is therefore a major concern for both the patient and the clinician. These gait abnormalities arise from the dependence of movement on the motor system. Paroxysmal dyskinesia (commonly termed freezing of gait) is one of the extreme cases of motor blocks. Over the last two decades, automated methods for monitoring motor activities, their data analysis, and the associated algorithms have been active research topics for Parkinson’s disease (PD). Such research can help clinicians in prescribing a drug regimen. Problem Statement. Develop a system, based on an algorithm for automatic detection of freezing of gait (FOG) and other postural adjustments from wearable-sensor data, that provides a quantitative approach for assessing the severity of PD by analyzing the frequency components associated with different motor movements and gait. Methodology. This paper presents a novel wavelet energy distribution approach to distinguish between walking, standing, and FOG. Data from an acceleration sensor is taken as input. After preprocessing, the discrete wavelet transform (DWT) is applied to the data, exposing its frequency content across bands. In the next step, the energy is computed for each decomposition level of interest. Results. The system detected FOG and other gait postures and showed their time-frequency ranges by examining the DWT-decomposed signals. Energy distribution and PSD graphs confirmed the accuracy of the system. Validation with the leave-one-subject-out (LOSO) method shows 90% accuracy for the proposed method. Conclusion. Observations from the clinical trials validate the proposed technique. Compared to previous techniques reported in the literature, the proposed method shows improvements in time and frequency resolution as well as in processing time.
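The wavelet-energy step of the methodology can be sketched with PyWavelets as below; the sampling rate, wavelet family, decomposition depth and test signals are assumptions for illustration rather than the paper's exact settings.

```python
import numpy as np
import pywt

def dwt_band_energies(accel, wavelet="db4", level=5):
    """Decompose an acceleration signal with the DWT and return the relative
    energy of each detail band plus the final approximation."""
    coeffs = pywt.wavedec(accel, wavelet, level=level)  # [cA_L, cD_L, ..., cD_1]
    energies = np.array([np.sum(c**2) for c in coeffs])
    return energies / energies.sum()

# A common heuristic in the FOG literature is that freezing concentrates energy
# around 3-8 Hz, whereas normal locomotion is dominated by roughly 0.5-3 Hz, so
# comparing the relative energies of the corresponding bands is one simple way
# to flag freezing versus walking.
fs = 64  # Hz, a typical wearable-accelerometer rate (assumed)
t = np.arange(0, 10, 1 / fs)
walking_like = np.sin(2 * np.pi * 1.5 * t)   # ~1.5 Hz gait surrogate
freezing_like = np.sin(2 * np.pi * 6.0 * t)  # ~6 Hz trembling surrogate
for label, sig in (("walking-like", walking_like), ("freezing-like", freezing_like)):
    print(label, np.round(dwt_band_energies(sig), 3))
```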


2021 ◽  
Author(s):  
◽  
Jiawen Chua

<p>In most real-time systems, particularly for applications involving system identification, latency is a critical issue. These applications include, but are not limited to, blind source separation (BSS), beamforming, speech dereverberation, acoustic echo cancellation and channel equalization. The system latency consists of an algorithmic delay and an estimation computation time. The latter can be avoided by using a multi-threaded system, which runs the estimation process and the processing procedure simultaneously. The former, a delay of one window length, is usually unavoidable for frequency-domain approaches: a block of data is acquired by using a window, transformed and processed in the frequency domain, and recovered back to the time domain by using an overlap-add technique.</p>
<p>In the frequency domain, the convolutive model, which is usually used to describe the process of a linear time-invariant (LTI) system, can be represented by a series of multiplicative models to facilitate estimation. To implement frequency-domain approaches in real-time applications, the short-time Fourier transform (STFT) is commonly used. The window used in the STFT must be at least twice as long as the room impulse response, which is itself long, so that the multiplicative model is sufficiently accurate. The delay caused by this blockwise processing window makes most frequency-domain approaches inapplicable for real-time systems.</p>
<p>This thesis aims to design a BSS system that can be used in a real-time scenario with minimal latency. Existing BSS approaches can be integrated into our system to perform source separation with low delay without affecting the separation performance. The second goal is to design a BSS system that can perform source separation in a non-stationary environment.</p>
<p>We first introduce a subspace approach to directly estimate the separation parameters in the low-frequency-resolution time-frequency (LFRTF) domain. In the LFRTF domain, a shorter window is used to reduce the algorithmic delay of the system during signal acquisition, i.e., the window length is shorter than the room impulse response. The subspace method facilitates the deconvolution of a convolutive mixture into a new instantaneous mixture and simplifies the estimation process.</p>
<p>Second, we propose an alternative approach to address the algorithmic latency problem. This method obtains the separation parameters in the LFRTF domain from parameters estimated in the high-frequency-resolution time-frequency (HFRTF) domain, where the window length is longer than the room impulse response, without affecting the separation performance.</p>
<p>The thesis also provides a solution to the BSS problem in a non-stationary environment. We utilize the "meta-information" obtained from previous BSS operations to facilitate separation in the future without performing the entire BSS process again. Repeating a BSS process can be computationally expensive, and most conventional BSS algorithms require sufficient signal samples to perform analysis, which prolongs the estimation delay. By utilizing information from the entire spectrum, our method updates the separation parameters with only a single snapshot of observation data. Hence, it minimizes the estimation period, reduces redundancy and improves the efficacy of the system.</p>
<p>The final contribution of the thesis is a non-iterative method for impulse response shortening. This method allows us to use a shorter representation to approximate the long impulse response. It further improves the computational efficiency of the algorithm while achieving satisfactory performance.</p>
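The latency argument above can be made concrete with a short back-of-the-envelope calculation; the sample rate and reverberation length below are assumed values for illustration, not figures from the thesis.

```python
# Algorithmic delay of blockwise frequency-domain processing is one window length.
fs = 16_000            # sample rate in Hz (assumed)
rir_len = 0.25         # room impulse response length in seconds (assumed, typical room)

hfrtf_window = 2.0 * rir_len    # window long enough for the multiplicative model to hold
lfrtf_window = rir_len / 4.0    # LFRTF-style window, shorter than the room impulse response

for name, win in (("HFRTF", hfrtf_window), ("LFRTF", lfrtf_window)):
    print(f"{name}: window = {win*1e3:.0f} ms ({int(win*fs)} samples), "
          f"algorithmic delay ~ {win*1e3:.0f} ms")
# The shorter LFRTF window breaks the narrowband (multiplicative) approximation,
# which is why the thesis either deconvolves the mixture with a subspace method or
# maps HFRTF-domain estimates into the LFRTF domain, rather than simply shortening
# the window.
```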


2021 ◽  
pp. 1-81
Author(s):  
Xiaokai Wang ◽  
Zhizhou Huo ◽  
Dawei Liu ◽  
Weiwei Xu ◽  
Wenchao Chen

The common-reflection-point (CRP) gather is an extensively used prestack seismic data type. However, CRP gathers suffer from more noise than poststack seismic datasets. Events in a CRP gather are nearly flat, and the effective signals from neighboring traces have similar forms not only in the time domain but also in the time-frequency domain. Therefore, we first use the synchrosqueezing wavelet transform (SSWT) to decompose seismic traces into the time-frequency domain, as the SSWT has better time-frequency resolution and reconstruction properties. We then exploit the similarity of neighboring traces to smooth and threshold the SSWT coefficients in the time-frequency domain. Finally, we use the modified SSWT coefficients to reconstruct the denoised traces of the CRP gather. Synthetic and field data examples show that the proposed method effectively attenuates random noise, with better attenuation performance than the commonly used principal component analysis, FX filter, and continuous wavelet transform methods.
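A stripped-down version of the neighbor-similarity idea can be sketched as below. SciPy has no synchrosqueezing transform, so an ordinary STFT stands in for the SSWT, and the averaging rule and threshold factor are assumptions; the sketch only illustrates the principle, not the authors' algorithm.

```python
import numpy as np
from scipy.signal import stft, istft

def denoise_gather(gather, fs, nperseg=64, k=2.0):
    """Toy neighbor-similarity denoising of a CRP-like gather (n_traces x n_samples).

    Because events are nearly flat across traces, the time-frequency magnitude of
    neighboring traces serves as a local signal estimate; coefficients that fall
    well below it are treated as random noise and muted.
    """
    specs = [stft(tr, fs=fs, nperseg=nperseg)[2] for tr in gather]
    out = np.empty_like(gather, dtype=float)
    for i, S in enumerate(specs):
        neigh = [specs[j] for j in (i - 1, i + 1) if 0 <= j < len(specs)]
        ref = np.mean(np.abs(np.stack(neigh)), axis=0)   # local signal estimate
        mask = np.abs(S) >= ref / k                      # keep coefficients supported by neighbors
        _, rec = istft(S * mask, fs=fs, nperseg=nperseg)
        out[i] = rec[: gather.shape[1]]
    return out
```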


2021 ◽  
Vol 34 (1) ◽  
Author(s):  
Maohua Xiao ◽  
Wei Zhang ◽  
Kai Wen ◽  
Yue Zhu ◽  
Yilidaer Yiliyasi

Abstract. In Wavelet Analysis, only the low-frequency sub-band is decomposed further while the high-frequency sub-bands are not, so frequency resolution decreases with increasing frequency. Therefore, in this paper, Wavelet Packet Decomposition is first used for feature extraction from vibration signals, which makes up for the shortcomings of Wavelet Analysis in extracting fault features from nonlinear vibration signals; the energy values in the different frequency bands are obtained by Wavelet Packet Decomposition. The features are visualized with the K-Means clustering method, and the results show that the extracted energy features can accurately distinguish the different states of the bearing. A fault diagnosis model based on a BP Neural Network optimized by the Beetle Algorithm is then proposed to identify bearing faults. Compared with the Particle Swarm Algorithm, the Beetle Algorithm finds the error extremum more quickly, which greatly reduces the training time of the model. Finally, two experiments show that the accuracy of the model reaches more than 95% and that the model has a certain anti-interference ability.
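The Wavelet Packet energy features described above can be sketched with PyWavelets; the wavelet family, decomposition level and synthetic test signal below are assumptions for illustration.

```python
import numpy as np
import pywt

def wavelet_packet_energies(vibration, wavelet="db3", level=3):
    """Full wavelet packet decomposition: unlike the plain DWT, high-frequency
    bands are split further, so frequency resolution is uniform across bands.
    Returns the normalized energy of each terminal node (2**level bands)."""
    wp = pywt.WaveletPacket(data=vibration, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="freq")          # bands ordered low -> high frequency
    energies = np.array([np.sum(np.square(node.data)) for node in nodes])
    return energies / energies.sum()

# These 2**level energy values form the kind of feature vector that is clustered
# with K-Means and fed to the neural network; the example only shows the
# feature-extraction step on a synthetic bearing-like signal.
fs = 12_000
t = np.arange(0, 1, 1 / fs)
signal = np.sin(2 * np.pi * 3_000 * t) + 0.3 * np.random.default_rng(2).standard_normal(len(t))
print(np.round(wavelet_packet_energies(signal), 3))
```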

