Estimation of Dead Time Using Correlation Analysis in the Time-Frequency Domain

1997
Vol 30 (11)
pp. 47-52
Author(s):  
Yasuhiko Hiraide ◽  
Hirohiko Kazato


Sensors
2020
Vol 20 (23)
pp. 6891
Author(s):  
Tomasz Boczar ◽  
Dariusz Zmarzły ◽  
Michał Kozioł ◽  
Daria Wotzka

The study reported in this paper concerns the development of methods for measuring, processing and analyzing infrasound noise caused by the operation of wind farms. The paper presents the results of a correlation analysis of infrasound signals generated by a wind turbine with a rated capacity of 2 MW and recorded by three independent measurement setups comprising identical components with the same technical parameters. The infrasound signals were acquired with a dedicated measurement system called INFRA, developed and built by KFB ACOUSTICS Sp. z o.o. In particular, the paper reports the results of correlation analysis in the time domain, carried out using the autocovariance function separately for each of the three measuring setups. Moreover, the cross-correlation function was calculated for every pairwise combination of the infrasound signals recorded by the three setups. In the second stage, a correlation analysis of the recorded infrasound signals was performed in the frequency domain using the coherence function. In the next step, the infrasound signals recorded by the three setups were subjected to time-frequency transformation: scalograms were computed by means of the continuous wavelet transform, and wavelet coherence was calculated to quantify the degree of correlation in the time-frequency domain. The summary compares the results obtained with the correlation analysis methods in the time, frequency and time-frequency domains.
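A minimal sketch of the analysis chain described above, not the authors' code: autocovariance and cross-correlation in the time domain, magnitude-squared coherence in the frequency domain, and CWT scalograms in the time-frequency domain. The sampling rate `fs`, the synthetic channels `x1, x2, x3`, and the wavelet/scale choices are assumptions for illustration only.

```python
import numpy as np
from scipy import signal
import pywt  # PyWavelets, for the continuous wavelet transform

fs = 1000.0                        # assumed sampling rate [Hz]
t = np.arange(0, 10, 1 / fs)
x1 = np.random.randn(t.size)       # stand-ins for the three recorded channels
x2 = np.roll(x1, 5) + 0.1 * np.random.randn(t.size)
x3 = np.roll(x1, 9) + 0.1 * np.random.randn(t.size)

def autocovariance(x):
    """Biased autocovariance estimate of a single channel (non-negative lags)."""
    x = x - x.mean()
    acov = signal.correlate(x, x, mode="full") / x.size
    return acov[acov.size // 2:]

def cross_correlation(x, y):
    """Normalized cross-correlation between two channels."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return signal.correlate(x, y, mode="full") / x.size

# Time domain: autocovariance per setup, cross-correlation per pair.
acov1 = autocovariance(x1)
r12 = cross_correlation(x1, x2)

# Frequency domain: magnitude-squared coherence per pair.
f, coh12 = signal.coherence(x1, x2, fs=fs, nperseg=1024)

# Time-frequency domain: CWT scalogram (Morlet wavelet).
scales = np.arange(1, 128)
coefs1, freqs = pywt.cwt(x1, scales, "morl", sampling_period=1 / fs)
scalogram1 = np.abs(coefs1) ** 2
# Wavelet coherence additionally requires smoothing the cross-wavelet spectrum
# in time and scale (implemented, e.g., in the pycwt package).
```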


Author(s):  
Wentao Xie ◽  
Qian Zhang ◽  
Jin Zhang

Smart eyewear (e.g., AR glasses) is considered to be the next big breakthrough for wearable devices. Interaction with state-of-the-art smart eyewear mostly relies on a touchpad, which is obtrusive and not user-friendly. In this work, we propose a novel acoustic-based upper facial action (UFA) recognition system that serves as a hands-free interaction mechanism for smart eyewear. The proposed system is a glass-mounted acoustic sensing system with several pairs of commercial speakers and microphones to sense UFAs. There are two main challenges in designing the system. The first challenge is that the system operates in a severe multipath environment, and the received signal can be strongly attenuated by frequency-selective fading, which degrades the system's performance. To overcome this challenge, we design an Orthogonal Frequency Division Multiplexing (OFDM)-based channel state information (CSI) estimation scheme that measures the phase changes caused by a facial action while mitigating the frequency-selective fading. The second challenge is that the skin deformation caused by a facial action is tiny, so the received signal exhibits only very small variations and it is hard to derive useful information from it directly. To resolve this challenge, we apply a time-frequency analysis to derive the time-frequency domain signal from the CSI. We show that the derived time-frequency domain signal contains distinct patterns for different UFAs. Furthermore, we design a Convolutional Neural Network (CNN) to extract high-level features from the time-frequency patterns and classify them into six UFAs, namely, cheek-raiser, brow-raiser, brow-lower, wink, blink and neutral. We evaluate the performance of our system through experiments on data collected from 26 subjects. The experimental results show that our system can recognize the six UFAs with an average F1-score of 0.92.
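The following is an illustrative sketch, not the paper's implementation, of the general idea behind OFDM-based CSI estimation followed by a time-frequency analysis: a least-squares per-subcarrier channel estimate H = Y / X from known pilots, then an STFT of the phase of one subcarrier to obtain patterns a CNN could classify. The symbol parameters (`n_sub`, pilot values, the chosen subcarrier, `symbol_rate`) are assumptions for the example.

```python
import numpy as np
from scipy import signal

n_sub = 64                                   # assumed number of OFDM subcarriers
pilots = np.exp(1j * np.pi * np.random.randint(0, 2, n_sub))  # known pilot symbols

def estimate_csi(rx_symbol_time):
    """Least-squares CSI estimate H = Y / X for one received OFDM symbol."""
    rx_freq = np.fft.fft(rx_symbol_time, n_sub)
    return rx_freq / pilots                  # per-subcarrier complex channel gain

# Stack CSI estimates from consecutive symbols (placeholder noise here),
# take the unwrapped phase of one subcarrier, and inspect it with an STFT.
n_symbols = 2000
csi = np.vstack([estimate_csi(np.random.randn(n_sub) + 1j * np.random.randn(n_sub))
                 for _ in range(n_symbols)])
phase = np.unwrap(np.angle(csi[:, 10]))      # phase track of one (assumed) subcarrier

symbol_rate = 500.0                          # assumed OFDM symbol rate [symbols/s]
f, t, Zxx = signal.stft(phase, fs=symbol_rate, nperseg=128)
tf_pattern = np.abs(Zxx)                     # time-frequency pattern for classification
```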


2021
Vol 11 (3)
pp. 1084
Author(s):  
Peng Wu ◽  
Ailan Che

The sand-filling method has been widely used in immersed tube tunnel engineering. However, traditional methods can be inadequate for evaluating the state of sand deposits in real time during the sand-filling process. Based on the high efficiency of elastic wave monitoring and the strength of the backpropagation (BP) neural network in solving nonlinear problems, a spatiotemporal monitoring and evaluation method is proposed for the filling performance of the foundation cushion. Elastic wave data were collected during the sand-filling process, and the waveform, frequency spectrum and time-frequency features were analysed. The elastic wave was characterized by feature parameters in the time, frequency and time-frequency domains. Analysis of how these feature parameters changed over the course of sand filling showed that they were dynamic and strongly nonlinear. The elastic wave feature parameters and the corresponding sand-filling states were used to train an evaluation model based on the BP neural network; the accuracy of the trained network reached 93%. The side holes and middle holes were classified and analysed separately, revealing the dynamic expansion of the sand deposit along the diffusion radius. The evaluation results are consistent with the pressure gauge monitoring data, indicating the effectiveness of the model for spatiotemporal monitoring and evaluation of sand deposits. For sand-filling and grouting engineering, the machine-learning method could offer a better solution for spatiotemporal monitoring and evaluation in a complex environment.
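A minimal sketch of this kind of pipeline, not the authors' code: extract simple time-, frequency- and time-frequency-domain features from an elastic wave record and train a small BP (multilayer perceptron) classifier on labelled sand-filling states. The sampling rate, feature choices, labels and network size are assumptions for illustration; the training data below are random placeholders.

```python
import numpy as np
from scipy import signal
from sklearn.neural_network import MLPClassifier  # BP-trained feedforward network

fs = 10_000.0                                # assumed sampling rate [Hz]

def elastic_wave_features(x):
    """Feature vector spanning the time, frequency and time-frequency domains."""
    # Time domain: energy and peak amplitude.
    rms = np.sqrt(np.mean(x ** 2))
    peak = np.max(np.abs(x))
    # Frequency domain: dominant frequency and spectral centroid from a Welch PSD.
    f, pxx = signal.welch(x, fs=fs, nperseg=1024)
    dominant = f[np.argmax(pxx)]
    centroid = np.sum(f * pxx) / np.sum(pxx)
    # Time-frequency domain: mean spectrogram energy in an (assumed) low band.
    ff, tt, sxx = signal.spectrogram(x, fs=fs, nperseg=256)
    band_energy = sxx[ff < 500].mean()
    return np.array([rms, peak, dominant, centroid, band_energy])

# Placeholder training set: one feature vector per record, labelled with the
# observed sand-filling state (e.g. 0 = not yet filled, 1 = filled).
X = np.vstack([elastic_wave_features(np.random.randn(int(fs))) for _ in range(200)])
y = np.random.randint(0, 2, 200)

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```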

