A novel Electroencephalogram (EEG) data compression technique

Author(s):
Hakan Gurkan, Umit Guz, B. Siddik Yarman
2010, Vol 24 (5), pp. 487-493
Author(s):
Yiming Ouyang, Xi'e Huang, Huaguo Liang, Baosheng Zou

Author(s):
Sean Tanabe, Maggie Parker, Richard Lennertz, Robert A Pearce, Matthew I Banks, ...

Abstract: Delirium is associated with electroencephalogram (EEG) slowing and impairments in connectivity. We hypothesized that delirium would be accompanied by a reduction in the available cortical information (i.e., less information processing occurring), as measured by a surrogate, Lempel-Ziv complexity (LZC), a measure of time-domain complexity. Two ongoing perioperative cohort studies (NCT03124303, NCT02926417) contributed EEG data from 91 patients before and after surgery; 89 participants were used in the analyses. After cleaning and filtering (0.1-50 Hz), the perioperative change in LZC, and in LZC normalized (LZCn) to a phase-shuffled distribution, was calculated. The primary outcome was the correlation of within-patient paired changes in delirium severity (Delirium Rating Scale-98 [DRS]) and LZC. Scalp-wide threshold-free cluster enhancement was employed for multiple-comparison correction. LZC correlated negatively with DRS in a scalp-wide manner (peak channel r² = 0.199, p < 0.001). This whole-brain effect remained for LZCn, though the correlations were weaker (peak channel r² = 0.076, p = 0.010). Delirium diagnosis was similarly associated with decreases in LZC (peak channel p < 0.001). For LZCn, the topological significance was constrained to the midline posterior regions (peak channel p = 0.006). We found a negative correlation of LZC in the posterior and temporal regions with monocyte chemoattractant protein-1 (peak channel r² = 0.264, p < 0.001, n = 47) but not for LZCn. The complexity of the EEG signal fades in proportion to delirium severity, implying reduced cortical information. Peripheral inflammation, as assessed by monocyte chemoattractant protein-1, does not entirely account for this effect, suggesting that additional pathogenic mechanisms are involved.
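For readers unfamiliar with the metric, the following is a minimal Python sketch of how Lempel-Ziv complexity can be estimated from a single EEG channel (binarize around the median, count distinct phrases, normalize), together with a phase-shuffled surrogate normalization in the spirit of LZCn. The function names, the simplified phrase-counting rule, the surrogate count, and the synthetic example are illustrative assumptions, not the study's actual code.

```python
import numpy as np

def lz76_complexity(bits):
    """Count distinct phrases in a binary sequence (simplified LZ76-style parsing)."""
    s = "".join(map(str, bits))
    i, phrases, n = 0, 0, len(s)
    while i < n:
        j = i + 1
        # extend the current phrase until it has not appeared earlier in the sequence
        while j <= n and s[i:j] in s[:i]:
            j += 1
        phrases += 1
        i = j
    return phrases

def lzc(signal):
    """Binarize a channel around its median and return normalized LZ complexity."""
    bits = (signal > np.median(signal)).astype(int)
    n = len(bits)
    # normalize by the asymptotic complexity of a random binary string, n / log2(n)
    return lz76_complexity(bits) * np.log2(n) / n

def lzc_normalized(signal, n_surrogates=20, seed=0):
    """Divide LZC by the mean LZC of phase-shuffled surrogates (illustrative LZCn)."""
    rng = np.random.default_rng(seed)
    spectrum = np.fft.rfft(signal)
    surrogate_vals = []
    for _ in range(n_surrogates):
        phases = rng.uniform(0, 2 * np.pi, size=spectrum.shape)
        shuffled = np.fft.irfft(np.abs(spectrum) * np.exp(1j * phases), n=len(signal))
        surrogate_vals.append(lzc(shuffled))
    return lzc(signal) / np.mean(surrogate_vals)

# Example on synthetic data (a noisy oscillation standing in for one EEG channel)
x = np.sin(np.linspace(0, 20 * np.pi, 2000)) + 0.5 * np.random.default_rng(1).standard_normal(2000)
print(lzc(x), lzc_normalized(x))
```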


This paper proposes an improved data compression technique compared to the existing Lempel-Ziv-Welch (LZW) algorithm. LZW is a dictionary-based compression technique that stores strings from the data as codes and reuses those codes when the strings recur. When the dictionary becomes full, every element in it is removed so that new entries can be added. The conventional method therefore ignores frequently used strings and discards all entries, which makes it an ineffective compression scheme when the data to be compressed are large and contain many frequently occurring strings. This paper presents two new methods that improve on the existing LZW compression algorithm. In these methods, when the dictionary becomes full, only the elements that have not been used are removed, rather than every element as in the existing LZW algorithm. This is achieved by adding a flag to every element of the dictionary; whenever an element is used, its flag is set high. When the dictionary becomes full, the entries whose flag is set high are kept and the others are discarded. In the first method, the unused entries are discarded all at once, whereas in the second method they are removed one at a time, which gives the newest dictionary entries more time to prove useful. All three techniques yield similar results when the data set is small, because they differ only in how they handle the dictionary once it is full; the improvements therefore pay off only on relatively large data. When the three techniques are compared on a data set that yields the best-case scenario, the compression ratio of conventional LZW is smaller than that of improved LZW method-1, which in turn is smaller than that of improved LZW method-2.
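The flag-based dictionary management can be illustrated with a short encoder sketch. This is not the paper's implementation: the dictionary capacity, the eviction point, and the code re-numbering scheme are illustrative choices, and a matching decoder would have to replay the same eviction steps to stay synchronized with the encoder.

```python
def lzw_compress_flagged(data, max_size=4096):
    """LZW encoder sketch: when the dictionary is full, keep only entries whose
    'used' flag is set (method-1 style, all unused entries evicted at once)."""
    # dictionary maps string -> [code, used_flag]; start with single characters
    dictionary = {chr(i): [i, False] for i in range(256)}
    next_code = 256
    current = ""
    output = []

    for ch in data:
        candidate = current + ch
        if candidate in dictionary:
            current = candidate
            continue
        dictionary[current][1] = True              # this entry was emitted, so flag it
        output.append(dictionary[current][0])
        if next_code >= max_size:
            # dictionary full: keep single characters and flagged strings,
            # drop everything else, and re-number the surviving codes
            kept = [k for k, (code, used) in dictionary.items() if len(k) == 1 or used]
            dictionary = {k: [i, False] for i, k in enumerate(kept)}
            next_code = len(dictionary)
        if next_code < max_size:
            dictionary[candidate] = [next_code, False]
            next_code += 1
        current = ch

    if current:
        output.append(dictionary[current][0])
    return output

# Toy usage: repetitive text with a tiny dictionary exercises the eviction path
codes = lzw_compress_flagged("TOBEORNOTTOBEORTOBEORNOT" * 50, max_size=300)
print(len(codes), "codes emitted")
```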


2021, Vol 102, pp. 04013
Author(s):
Md. Atiqur Rahman, Mohamed Hamada

With the advancement of telecommunication, modern daily-life activities produce large amounts of information. Storing this information on digital devices or transmitting it over the Internet is challenging, which creates the need for data compression; research on data compression has therefore become a topic of great interest. Since compressed data are generally smaller than the original, data compression saves storage and increases transmission speed. In this article, we propose a text compression technique using the GPT-2 language model and Huffman coding. In the proposed method, the Burrows-Wheeler transform and a list of keys are used to reduce the length of the original text file, and the GPT-2 language model followed by Huffman coding is then applied for encoding. The proposed method is compared with state-of-the-art text compression techniques and is shown to achieve a gain in compression ratio over them.
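Because the GPT-2 stage cannot be reproduced in a few lines, the sketch below only illustrates the Burrows-Wheeler transform and Huffman coding stages of such a pipeline; the naive rotation-sort BWT, the end-of-string sentinel, and the example text are assumptions for demonstration, not the authors' setup or their key-list step.

```python
import heapq
from collections import Counter

def bwt(text, eos="\x03"):
    """Naive Burrows-Wheeler transform: last column of the sorted rotation matrix."""
    text += eos
    rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
    return "".join(row[-1] for row in rotations)

def huffman_table(text):
    """Return a {symbol: bitstring} Huffman code built from symbol frequencies."""
    heap = [[freq, [sym, ""]] for sym, freq in Counter(text).items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        for pair in lo[1:]:
            pair[1] = "0" + pair[1]   # left branch gets a 0 prefix
        for pair in hi[1:]:
            pair[1] = "1" + pair[1]   # right branch gets a 1 prefix
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return dict(heap[0][1:])

text = "banana bandana banana"
transformed = bwt(text)                       # BWT groups identical symbols into runs
table = huffman_table(transformed)
encoded = "".join(table[c] for c in transformed)
print(repr(transformed), len(encoded), "bits vs", 8 * len(text), "bits raw")
```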


2021, pp. 17-25
Author(s):
Mahmud Alosta, Alireza Souri

In recent years, a massive amount of genomic DNA sequence data has been created, which has led to the development of new storage and archiving methods. Processing, storing, and transmitting this huge volume of DNA sequence data is a major challenge. To reduce the number of bits needed to store and transmit the data, data compression (DC) techniques have been proposed. DC has recently become more popular, and a large number of techniques have been proposed with applications in several domains. In this paper, a lossless compression technique, arithmetic coding, is employed to compress DNA sequences. To validate the performance of the proposed model, an artificial genome dataset is used and the results are examined in terms of different evaluation parameters. Experiments were performed on artificial datasets, and the compression performance of arithmetic coding is compared with Huffman coding, LZW coding, and LZMA. The simulation results show that arithmetic coding achieves significantly better compression, with a compression ratio of 0.261 at a bit rate of 2.16 bpc.
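For illustration, a toy floating-point arithmetic encoder over the four-letter DNA alphabet is sketched below. Production coders use integer renormalization and adaptive models, so this only shows the interval-narrowing idea on short sequences and is not meant to reproduce the reported 0.261 compression ratio.

```python
from collections import Counter

def arithmetic_encode(seq):
    """Toy floating-point arithmetic encoder for a short DNA string (A/C/G/T).
    Floating-point precision limits this to short inputs; real coders
    renormalize with integer arithmetic."""
    freqs = Counter(seq)
    total = len(seq)
    # cumulative probability ranges per symbol, e.g. A:[0.00,0.25), C:[0.25,0.50), ...
    ranges, cum = {}, 0.0
    for sym in sorted(freqs):
        p = freqs[sym] / total
        ranges[sym] = (cum, cum + p)
        cum += p

    low, high = 0.0, 1.0
    for sym in seq:
        span = high - low
        sym_low, sym_high = ranges[sym]
        # narrow the interval to the sub-range assigned to this symbol
        low, high = low + span * sym_low, low + span * sym_high
    # any number inside [low, high) identifies the whole sequence given the model
    return (low + high) / 2, ranges

code, model = arithmetic_encode("ACGTACGTAACCGGTT")
print(code, model)
```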


Author(s):  
Wei-Yen Hsu

In this chapter, a practical artifact-removal Brain-Computer Interface (BCI) system for single-trial Electroencephalogram (EEG) data is proposed for applications in neuroprosthetics. Independent Component Analysis (ICA) combined with a correlation coefficient is used to remove EOG artifacts automatically, which further improves classification accuracy. Features are then extracted from wavelet-transformed data by means of the proposed modified fractal dimension, and a Support Vector Machine (SVM) is used for classification. Compared with the results obtained without EOG signal elimination, the proposed BCI system achieves promising results that can be applied effectively in neuroprosthetics.
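A rough Python sketch of this kind of pipeline is shown below, using scikit-learn's FastICA and SVC together with PyWavelets for the wavelet step. The correlation threshold, the wavelet-energy features standing in for the modified fractal dimension, and the synthetic data are assumptions for illustration, not the chapter's actual parameters.

```python
import numpy as np
import pywt
from sklearn.decomposition import FastICA
from sklearn.svm import SVC

def remove_eog(eeg, eog, corr_threshold=0.6):
    """Decompose multi-channel EEG (channels x samples) with ICA and zero out
    components whose correlation with the EOG reference exceeds a threshold."""
    ica = FastICA(n_components=eeg.shape[0], random_state=0)
    sources = ica.fit_transform(eeg.T).T            # components x samples
    for k, comp in enumerate(sources):
        if abs(np.corrcoef(comp, eog)[0, 1]) > corr_threshold:
            sources[k] = 0.0                        # drop the ocular component
    return ica.inverse_transform(sources.T).T       # back to channels x samples

def wavelet_features(channel, wavelet="db4", level=4):
    """Energy of each wavelet sub-band, a simple stand-in for the
    modified-fractal-dimension features described in the chapter."""
    coeffs = pywt.wavedec(channel, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

# Hypothetical data: 20 trials of 8-channel EEG plus one EOG reference per trial
rng = np.random.default_rng(0)
trials = rng.standard_normal((20, 8, 512))
eog = rng.standard_normal((20, 512))
labels = rng.integers(0, 2, 20)

X = []
for trial, ocular in zip(trials, eog):
    clean = remove_eog(trial, ocular)
    X.append(np.concatenate([wavelet_features(ch) for ch in clean]))
clf = SVC(kernel="rbf").fit(np.array(X), labels)
```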

