Automatic classification methods for detecting drowsiness using wavelet packet transform extracted time-domain features from single-channel EEG signal

2021 ◽  
Vol 347 ◽  
pp. 108927
Author(s):  
Venkata Phanikrishna B ◽  
Suchismitha Chinara
2021 ◽  
Vol 3 (1) ◽  
pp. 031-036
Author(s):  
S. A. Gorovoy ◽  
V. I. Skorokhodov ◽  
D. I. Plotnikov ◽  
...  

This paper deals with the analysis of interharmonics that arise in the presence of a nonlinear load. The mathematical tool used for the analysis is the wavelet packet transform, which has a number of advantages over the traditional Fourier transform. A simulation model was developed in Simulink to reproduce a non-stationary, non-sinusoidal mode. The wavelet packet transform allows the mode parameters to be determined with high accuracy from the resulting wavelet coefficients, and it provides information about the signal in both the frequency domain and the time domain.
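The decomposition idea can be illustrated without the paper's Simulink model. The sketch below is a minimal NumPy implementation that assumes Haar analysis filters (the abstract does not state which wavelet was used) and a synthetic signal with a 50 Hz fundamental plus a 130 Hz interharmonic. Unlike the ordinary wavelet transform, a packet tree splits both the low-pass and high-pass branches at every level, so after three levels the 0–800 Hz range is covered by eight roughly 100 Hz subbands whose energies localize each component.

```python
import numpy as np

def haar_split(x):
    """One Haar analysis step: split a band into low- and high-frequency halves."""
    x = x[: len(x) // 2 * 2]                   # even length required
    low = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # low-pass, downsampled by 2
    high = (x[0::2] - x[1::2]) / np.sqrt(2.0)  # high-pass, downsampled by 2
    return low, high

def haar_packet(x, level):
    """Wavelet packet tree: unlike the plain wavelet transform,
    BOTH subbands are split again at every level."""
    bands = [np.asarray(x, dtype=float)]
    for _ in range(level):
        bands = [sub for band in bands for sub in haar_split(band)]
    return bands

# Synthetic non-sinusoidal mode: 50 Hz fundamental plus a 130 Hz interharmonic
fs = 1600.0
t = np.arange(0, 0.5, 1.0 / fs)
signal = np.sin(2 * np.pi * 50 * t) + 0.2 * np.sin(2 * np.pi * 130 * t)

bands = haar_packet(signal, level=3)           # 8 subbands, ~100 Hz wide each
energies = [float(np.sum(b ** 2)) for b in bands]
```

Because the Haar filters are orthonormal, the subband energies sum exactly to the signal energy, and the coefficients within each band keep their time ordering, which is the time-frequency property the abstract refers to. A production analysis would more likely use a library filterbank such as PyWavelets' `pywt.WaveletPacket` with a longer wavelet (e.g. `db4`) for sharper band edges.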


Author(s):  
Montri Phothisonothai ◽  
◽  
Pinit Kumhom ◽  
Kosin Chamnongthai

Background noise interferes with communication devices such as mobile telephones and digital hearing aids, so a noise-reduction (NR) stage that limits its effect is important. This paper proposes a noise-reduction method based on soft decision-making with a fuzzy inference system (FIS). The characteristics of frequently occurring noise types are used to build the fuzzy decision rule base of the FIS. The FIS has two input parameters: the average energy and the difference of the average energy. The analysis is performed in the domain of the perceptual wavelet packet transform (PWPT), which is matched to the human psychoacoustic model. The output of the FIS is used to modify the PWPT coefficients in such a way that noise components are likely to be reduced while the speech signal is enhanced. The enhanced speech signal is obtained by the inverse perceptual wavelet packet transform (IPWPT) of the modified coefficients. Experimental results show that the proposed method introduces less distortion than conventional methods, especially when the input signal-to-noise ratio (SNR) is low; for example, at an input SNR of 0 dB the proposed method improves the output SNR by up to 4.18 dB.
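The abstract does not give the rule base or the PWPT filterbank, so the following NumPy sketch is only a simplified stand-in: it replaces the fuzzy rules with a single logistic gain driven by the same two inputs the paper names, the average frame energy and its frame-to-frame difference, and applies that gain to generic subband coefficients. The `noise_floor`, `slope`, and `thresh` parameters are assumptions for illustration, not values from the paper.

```python
import numpy as np

def soft_gain(avg_energy, delta_energy, noise_floor, slope=3.0, thresh=2.0):
    """Smooth stand-in for the fuzzy rules: energy well above the noise
    floor, or a large energy change, pushes the gain toward 1; steady
    near-floor frames (likely noise) are pushed toward 0."""
    snr_like = avg_energy / (noise_floor + 1e-12)
    activity = abs(delta_energy) / (noise_floor + 1e-12)
    score = np.log1p(snr_like) + 0.2 * np.log1p(activity)
    return 1.0 / (1.0 + np.exp(-slope * (score - thresh)))

def denoise_subband(coeffs, frame_len, noise_floor):
    """Scale subband coefficients frame by frame with the soft gain."""
    out = np.array(coeffs, dtype=float)
    prev_energy = noise_floor
    for start in range(0, len(out) - frame_len + 1, frame_len):
        frame = out[start:start + frame_len]
        energy = float(np.mean(frame ** 2))       # average energy input
        gain = soft_gain(energy, energy - prev_energy, noise_floor)
        out[start:start + frame_len] = gain * frame
        prev_energy = energy
    return out
```

In the paper's pipeline this per-frame weighting would run on each PWPT subband before the IPWPT resynthesis; the soft gain is what distinguishes the approach from hard thresholding, since low-confidence frames are attenuated rather than zeroed.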


2017 ◽  
Vol 229 (3) ◽  
pp. 1275-1295 ◽  
Author(s):  
N. Jamia ◽  
P. Rajendran ◽  
S. El-Borgi ◽  
M. I. Friswell

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Ajay Kumar Maddirala ◽  
Kalyana C Veluvolu

In recent years, portable electroencephalogram (EEG) devices have become popular for both clinical and non-clinical applications. To provide more comfort to the subject and allow EEG measurement over several hours, these devices usually have only a few EEG channels, or even a single channel. However, the electrooculogram (EOG) signal, also known as the eye-blink artifact and produced by involuntary movement of the eyelids, contaminates the EEG signal. Very few techniques are available to remove these artifacts from single-channel EEG, and most of them also modify the uncontaminated regions of the signal. In this paper, we develop a new framework that combines an unsupervised machine learning algorithm (k-means) with the singular spectrum analysis (SSA) technique to remove the eye-blink artifact without modifying the actual EEG signal. The novelty of the work lies in extracting the eye-blink artifact from time-domain features of the EEG signal using the unsupervised machine learning algorithm. The extracted eye-blink artifact is further processed by the SSA method and finally subtracted from the contaminated single-channel EEG signal to obtain the corrected EEG signal. Results on synthetic and real EEG signals demonstrate the superiority of the proposed method over existing methods. Moreover, the frequency-based measures, namely the power spectrum ratio (Γ) and the mean absolute error (MAE), also show that the proposed method does not modify the uncontaminated regions of the EEG signal while removing the eye-blink artifact.

