EEMD and Multiscale PCA-Based Signal Denoising Method and Its Application to Seismic P-Phase Arrival Picking

Sensors ◽  
2021 ◽  
Vol 21 (16) ◽  
pp. 5271
Author(s):  
Kang Peng ◽  
Hongyang Guo ◽  
Xueyi Shang

Signal denoising is one of the most important issues in signal processing, and various techniques have been proposed to address it. A combined method involving wavelet decomposition and multiscale principal component analysis (MSPCA) has been proposed and exhibits strong denoising performance. That technique exploits several signals with similar noise to conduct denoising; however, noise characteristics usually differ between signals, and wavelet decomposition has limited ability to adaptively decompose complex signals. To address these issues, we propose a signal denoising method based on ensemble empirical mode decomposition (EEMD) and MSPCA. Unlike previous MSPCA-based denoising methods, the proposed method can conduct MSPCA-based denoising on a single signal. The main steps of the proposed method are as follows: First, EEMD adaptively decomposes the signal, and the variance contribution rate is used to remove components dominated by high-frequency noise. Subsequently, a Hankel matrix is constructed from each remaining component to obtain a higher-order matrix, and the main score and load vectors of the PCA are adopted to denoise the Hankel matrix. Next, the PCA-denoised component is further denoised using soft thresholding. Finally, the stack of the PCA- and soft-thresholding-denoised components is taken as the final denoised signal. Synthetic tests demonstrate that the EEMD-MSPCA-based method provides good denoising results and is superior to the low-pass filter, wavelet reconstruction, EEMD reconstruction, Hankel–SVD, EEMD-Hankel–SVD, and wavelet-MSPCA-based denoising methods. Moreover, the proposed method in combination with the AIC picking method shows good prospects for processing microseismic waves.
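The Hankel-matrix PCA denoising step described above can be sketched in NumPy for a single component. This is a minimal illustration only: the window length, retained rank, and soft-threshold value are arbitrary assumptions, not the paper's parameters, and the EEMD stage is omitted.

```python
import numpy as np

def hankel_pca_denoise(x, window=20, rank=2):
    # Build a Hankel (trajectory) matrix from the 1-D component.
    n = len(x)
    cols = n - window + 1
    H = np.column_stack([x[i:i + window] for i in range(cols)])
    # PCA via SVD: keep only the leading score/load vectors.
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Hr = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank, :]
    # Reconstruct the 1-D signal by averaging anti-diagonals.
    out = np.zeros(n)
    cnt = np.zeros(n)
    for i in range(window):
        out[i:i + cols] += Hr[i, :]
        cnt[i:i + cols] += 1
    return out / cnt

def soft_threshold(x, t):
    # Shrink every sample toward zero by t; zero out samples below t.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 400)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.3 * rng.standard_normal(t.size)
den = soft_threshold(hankel_pca_denoise(noisy), 0.05)
```

On this toy sinusoid the rank-2 reconstruction captures the signal subspace, so the denoised output is markedly closer to the clean signal than the noisy input.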

2021 ◽  
Vol 23 (09) ◽  
pp. 1326-1338
Author(s):  
M Krishnaveni ◽  
P Subashini ◽  
TT Dhivyaprabha ◽  
...  

Articulation disorder refers to difficulty in the pronunciation of specific speech sounds. Irregular coordination of the movements of the tongue, lips, palate, jaw, respiratory system, and vocal tract, the height of the larynx, and airflow through the nasal cavity leads to the incorrect production of speech sounds. The objective of this paper is to propose a computational model based on a Recurrent Neural Network (RNN) algorithm to categorize the phonological patterns of Tamil speech articulation disorder signals into four predefined groups, namely substitution, omission, distortion, and addition. The methodology of the proposed work is as follows. (1) A list of articulation disorder test words suggested by Speech Language Pathologists (SLPs) is selected for this experimental study. (2) Real-time speech signals comprising Tamil vowels (Uyir eluthukkal) and consonants (Meiyeluthukkal) are collected from people with articulation disorder. (3) Acoustic noise and weak signals are eliminated by applying a low-pass filter to acquire the filtered speech signal. (4) The Mel-Frequency Cepstral Coefficients (MFCC) technique is implemented to extract prominent features from the denoised signals. (5) Principal Component Analysis (PCA) is employed to select a fine-tuned feature subset. (6) The refined features are used to calibrate the RNN model for classification. Results show that the RNN model achieves 90.25% classification accuracy compared to other artificial neural network algorithms.
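The PCA-based feature refinement in step (5) can be sketched with a small NumPy routine. The feature matrix below is random stand-in data, not real MFCC features, and the number of retained components is an arbitrary assumption.

```python
import numpy as np

def pca_reduce(X, k):
    # Center the features, then project onto the top-k principal components.
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

rng = np.random.default_rng(1)
frames = rng.standard_normal((100, 13))  # stand-in for 13 MFCCs per frame
reduced = pca_reduce(frames, 5)          # 5 refined features per frame
```

The reduced matrix would then be fed to the RNN classifier in place of the raw 13-dimensional features.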


2010 ◽  
Vol 143-144 ◽  
pp. 527-532
Author(s):  
Wei Du ◽  
Quan Liu

This paper presents a novel and fast scheme for signal denoising using empirical mode decomposition (EMD). EMD adaptively decomposes a signal into a series of oscillating components, the intrinsic mode functions (IMFs), by means of a decomposition process called the sifting algorithm. The basic principle of the method is to reconstruct the signal from previously selected and thresholded IMFs. The denoising method is applied to four simulated signals with different noise levels, and the results are compared to the wavelet, EMD-Hard, and EMD-Soft methods.
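The reconstruct-from-selected-and-thresholded-IMFs principle can be sketched as follows, assuming the IMFs have already been produced by an EMD implementation; the toy components, the selection, and the threshold values here are illustrative only.

```python
import numpy as np

def denoise_from_imfs(imfs, keep, thresholds):
    # Sum the selected IMFs after soft-thresholding each one.
    out = np.zeros_like(imfs[0])
    for i in keep:
        c = imfs[i]
        out += np.sign(c) * np.maximum(np.abs(c) - thresholds[i], 0.0)
    return out

# Toy "IMFs": a small high-frequency component and a low-frequency one.
t = np.linspace(0, 1, 200)
imfs = [0.1 * np.sin(2 * np.pi * 50 * t),  # noise-dominated mode
        np.sin(2 * np.pi * 2 * t)]          # signal-dominated mode
# Discard IMF 0 entirely; keep IMF 1 with a small threshold.
rec = denoise_from_imfs(imfs, keep=[1], thresholds={1: 0.05})
```

Soft thresholding changes each retained sample by at most the threshold value, so the reconstruction stays close to the kept mode.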


2019 ◽  
Vol 10 (1) ◽  
pp. 249 ◽  
Author(s):  
Diego Renza ◽  
Jaisson Vargas ◽  
Dora M. Ballesteros

The verification of the integrity and authenticity of multimedia content is an essential task in the forensic field, in order to make digital evidence admissible. The main objective is to establish whether the multimedia content has been manipulated with significant changes to its content, such as the removal of a sound (e.g., a gunshot) that could clarify the facts of a crime. In this project we propose a method to generate a summary value for audio recordings, known as a hash. Our method is robust, which means that if the audio has been modified slightly (without changing its significant content) by perceptual manipulations such as MPEG-4 AAC compression, the hash value of the new audio is very similar to that of the original audio; on the contrary, if the audio is altered and its content changes, for example with a low-pass filter, the new hash value moves away from the original value. The method starts with the application of MFCC (Mel-frequency cepstrum coefficients) and dimensionality reduction through principal component analysis (PCA). The reduced data is encrypted using as inputs two values from a particular binarization system based on the Collatz conjecture. Finally, a robust 96-bit code is obtained, which varies little when perceptual modifications such as compression or amplitude modification are made to the signal. According to experimental tests, the BER (bit error rate) between the hash value of the original audio recording and that of the manipulated recording is low for perceptual manipulations, i.e., 0% for FLAC and re-quantization, 1% on average for volume (−6 dB gain), and less than 5% on average for MPEG-4 and resampling (using the FIR anti-aliasing filter), but more than 25% for non-perceptual manipulations such as low-pass filtering (3 kHz, fifth order), additive noise, cutting, and copy-move.
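The BER comparison used in the experiments can be illustrated with a short, self-contained routine; the hash strings below are toy values, not outputs of the proposed Collatz-based system.

```python
def bit_error_rate(h1, h2):
    # Fraction of differing bits between two equal-length binary hash strings.
    if len(h1) != len(h2):
        raise ValueError("hashes must have equal length")
    return sum(a != b for a, b in zip(h1, h2)) / len(h1)

# Identical hashes give BER 0; one differing bit in four gives 0.25.
ber = bit_error_rate("1010", "1000")  # → 0.25
```

In the paper's setting, a BER near zero between the stored and recomputed 96-bit hashes indicates a perceptual (content-preserving) manipulation, while a BER above roughly 25% flags a content-altering one.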


Sensors ◽  
2021 ◽  
Vol 21 (20) ◽  
pp. 6806
Author(s):  
Sana Alshboul ◽  
Mohammad Fraiwan

Several studies have shown the importance of proper chewing and the effect of chewing speed on human health in terms of caloric intake and even cognitive function. This study aims at designing algorithms for determining the chew count from video recordings of subjects consuming food items. A novel algorithm based on image and signal processing techniques was developed to continuously capture the area of interest from the video clips, determine facial landmarks, generate the chewing signal, and process the signal with two methods: a low-pass filter and discrete wavelet decomposition. Peak detection was used to determine the chew count from the processed chewing signal. The system was tested using recordings from 100 subjects at three different chewing speeds (i.e., slow, normal, and fast) without any constraints on gender, skin color, facial hair, or ambience. The low-pass filter algorithm achieved the best mean absolute percentage errors of 6.48%, 7.76%, and 8.38% for the slow, normal, and fast chewing speeds, respectively. The performance was also evaluated using the Bland–Altman plot, which showed that most of the points lie within the lines of agreement. Although the algorithm needs improvement for faster chewing, it surpasses the performance reported in the relevant literature. This research provides a reliable and accurate method for determining the chew count. The proposed methods facilitate the study of chewing behavior in natural settings, without any cumbersome hardware that may affect the results, and can support research into chewing behavior while using smart devices.
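The low-pass-filter-plus-peak-detection idea can be sketched as below. This uses a simple moving-average filter and a threshold-based peak counter as stand-ins for the paper's actual pipeline; the signal and all parameters are illustrative assumptions.

```python
import numpy as np

def count_chews(signal, smooth=5, min_gap=8):
    # Low-pass via moving average, then count local maxima above the mean,
    # enforcing a minimum spacing between detected peaks.
    kernel = np.ones(smooth) / smooth
    s = np.convolve(signal, kernel, mode="same")
    thr = s.mean()
    peaks, last = 0, -min_gap
    for i in range(1, len(s) - 1):
        if s[i] > thr and s[i] >= s[i - 1] and s[i] > s[i + 1] and i - last >= min_gap:
            peaks += 1
            last = i
    return peaks

# A synthetic 1 Hz "chewing" signal over 10 s: 10 chew cycles expected.
t = np.linspace(0, 10, 300)
sig = np.sin(2 * np.pi * 1.0 * t)
chews = count_chews(sig)
```

The minimum-gap constraint plays the role of a refractory period, preventing ripples within one chew cycle from being counted twice.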


2012 ◽  
Vol 214 ◽  
pp. 148-153
Author(s):  
F.C. You ◽  
Y. Zhang

To overcome the discontinuity of the hard thresholding function and the constant deviation introduced by the soft thresholding function, and thereby improve the denoising effect and detect transformer partial discharge signals more accurately, this paper puts forward an improved wavelet threshold denoising method, developed by analyzing the interference noise in transformer partial discharge signals and studying various wavelet threshold denoising methods, in particular one that overcomes the shortcomings of both the hard and soft thresholds. Simulation results show that the denoising effect of the method is greatly improved over the traditional hard and soft threshold methods. The method can be widely used in practical transformer partial discharge signal denoising.
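The hard and soft thresholding rules, and one well-known compromise between them, can be written compactly. The non-negative garrote below is shown only as an example of an "improved" threshold function that is continuous like the soft rule yet asymptotically unbiased like the hard rule; it is not necessarily the function proposed in the paper.

```python
import numpy as np

def hard(w, t):
    # Discontinuous at |w| = t: retained coefficients jump from 0 to t.
    return np.where(np.abs(w) > t, w, 0.0)

def soft(w, t):
    # Continuous, but shrinks every retained coefficient by t (constant bias).
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def garrote(w, t):
    # Continuous at |w| = t (value 0 there), and the bias t**2/w
    # vanishes as |w| grows, approaching the hard rule.
    safe = np.where(w == 0, 1.0, w)  # avoid division by zero
    return np.where(np.abs(w) > t, w - t**2 / safe, 0.0)
```

For a coefficient w = 2 with threshold t = 1, hard keeps 2.0, soft shrinks it to 1.0, and the garrote gives the intermediate 1.5.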


2013 ◽  
Vol 66 (6) ◽  
pp. 837-858 ◽  
Author(s):  
Yalong Ban ◽  
Quan Zhang ◽  
Xiaoji Niu ◽  
Wenfei Guo ◽  
Hongping Zhang ◽  
...  

This paper makes a comprehensive investigation of the contribution of inertial measurement unit (IMU) signal denoising to navigation accuracy, through theoretical analysis, simulations, and real tests. Analysis shows that the integration step in the inertial navigation system (INS) algorithm is essentially equivalent to a very strong low-pass filter (LPF), whose filtering strength is related to the integration time of the INS. Therefore the contribution of an IMU denoising filter is almost completely overshadowed by the effect of the integration step in normal navigation cases. The theoretical analysis was further verified by simulations with an example of inertial angle estimation and by real tests of INS and GPS/INS systems. Results showed that IMU signal denoising cannot bring observable improvement to INS or GPS/INS systems. This conclusion is strictly valid when the equivalent cut-off frequency of the integration step (which equals the reciprocal of the INS stand-alone working time) is lower than the cut-off frequency of the denoising filter, which is the usual case for INS applications (except for some static data processing, such as the stationary alignment of the INS).
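The claim that integration acts as a strong low-pass filter can be checked numerically: an ideal integrator scales a sinusoid's amplitude by 1/(2πf), so high-frequency sensor noise is attenuated far more than the low-frequency motion signal. A minimal sketch (the two frequencies and the step size are arbitrary choices):

```python
import numpy as np

dt = 0.001
t = np.arange(0, 1, dt)
low = np.sin(2 * np.pi * 1 * t)      # 1 Hz "motion" component
high = np.sin(2 * np.pi * 100 * t)   # 100 Hz "noise" component

def integrate(x):
    # Rectangle-rule cumulative integration, as in INS velocity/angle updates.
    return np.cumsum(x) * dt

# Amplitude gain of the integrator at each frequency.
gain_low = np.ptp(integrate(low)) / np.ptp(low)
gain_high = np.ptp(integrate(high)) / np.ptp(high)
```

The 100 Hz component comes out attenuated roughly 100 times more than the 1 Hz component, which is why an extra denoising filter ahead of the integrator adds little.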


Author(s):  
Feng Miao ◽  
Rongzhen Zhao

Noise cancellation is one of the most successful applications of the wavelet transform. Its basic idea is to compare the wavelet decomposition coefficients with given thresholds, keep only the larger ones, set the smaller ones to zero, and then perform wavelet reconstruction with the new coefficients. This method is likely to treat some useful weak components as noise and eliminate them. Based on the cyclostationary property of the vibration signals of rotating machines, a novel wavelet noise cancellation method is proposed. A numerical signal and an experimental signal of a rubbing fault are used to test and compare the performance of the new method and the conventional wavelet-based denoising method provided by MATLAB. The results show that the new method can effectively suppress the noise component in all frequency bands and has better denoising performance than the conventional one.


2018 ◽  
Vol 22 (1) ◽  
pp. 112-125 ◽  
Author(s):  
Yong Peng ◽  
Hao Wu ◽  
Qin Fang ◽  
Ziming Gong

Deceleration-time histories of a 25.3 mm diameter, 428 g projectile penetrating or perforating 41 MPa reinforced concrete slabs with thicknesses of 100, 200, and 300 mm are discussed. An ultra-high-g small-caliber deceleration data recorder with a diameter of 18 mm is employed to digitize and record the acceleration during launch in the barrel, as well as the deceleration during penetration or perforation of the targets. The accelerometer mounted in the data recorder measures the rigid-body projectile deceleration as well as structural vibrations. To validate these complex signals, a validation approach for the accuracy of the recorded deceleration-time data is proposed based on frequency characteristic analyses and signal integrations, and three sets of whole-range deceleration-time data are validated. As the deceleration of the rigid-body projectile is the main concern, a signal processing approach is further given to obtain the rigid-body deceleration data, that is, using a low-pass filter to remove the high-frequency responses associated with vibrations of the projectile case and the internal supporting structure. The first valley frequency from the spectrum analysis is taken as the critical cutoff frequency. To verify the accuracy of the theoretical model and the numerical simulation in predicting projectile motion time histories, theoretical penetration/perforation deceleration-time models are given and numerical simulations are performed. The predicted projectile time histories agree well with the validated deceleration-time test data, as do the corresponding velocity- and displacement-time curves.
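Selecting the first valley of the magnitude spectrum as the cutoff frequency can be sketched as follows; the test signal is a synthetic deceleration pulse plus a vibration tone, not the paper's data, and the valley criterion is a simple local-minimum search.

```python
import numpy as np

def first_valley_cutoff(x, fs):
    # Magnitude spectrum via FFT; the first local minimum after the
    # dominant low-frequency peak is taken as the low-pass cutoff.
    mag = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    peak = int(np.argmax(mag))
    for i in range(peak + 1, len(mag) - 1):
        if mag[i] < mag[i - 1] and mag[i] <= mag[i + 1]:
            return freqs[i]
    return freqs[-1]

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
# Rigid-body deceleration pulse plus a 200 Hz "structural vibration" tone.
x = np.exp(-((t - 0.5) / 0.1) ** 2) + 0.2 * np.sin(2 * np.pi * 200 * t)
fc = first_valley_cutoff(x, fs)
```

The returned cutoff falls between the pulse's low-frequency energy and the vibration tone, so a low-pass filter at fc would keep the rigid-body motion and reject the vibration.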


2019 ◽  
Vol 11 (22) ◽  
pp. 82-92
Author(s):  
Raaid N. Hassan

This paper presents a comparison of denoising techniques based on a statistical approach: principal component analysis with local pixel grouping (PCA-LPG), in which the procedure is iterated a second time to further improve denoising performance, against several other enhancement filters. These include an adaptive Wiener low-pass filter applied to a grayscale image degraded by constant-power additive noise, based on statistics estimated from a local neighborhood of each pixel; a median filter of the noisy input image, in which each output pixel contains the median value of the M-by-N neighborhood around the corresponding pixel in the input image; a Gaussian low-pass filter; and an order-statistic filter. Experimental results show that the LPG-PCA method gives better performance, especially in preserving fine image structure, compared with the other general denoising algorithms.
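The median filter described above can be sketched in a few lines of NumPy; the neighborhood size and the single-impulse noise example are illustrative choices.

```python
import numpy as np

def median_filter(img, size=3):
    # Each output pixel is the median of its size-by-size neighborhood
    # in the edge-padded input image.
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + size, j:j + size])
    return out

# A single impulse ("salt" noise) in a flat image is removed completely,
# since the median of a neighborhood with one outlier ignores it.
img = np.zeros((8, 8))
img[4, 4] = 255.0
filtered = median_filter(img)
```

This robustness to isolated outliers is what makes the median filter a standard baseline against which PCA-LPG is compared.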

