Exception Handling Method Based on Event from Look-Up Table Applying Stream-Based Lossless Data Compression

Electronics ◽  
2021 ◽  
Vol 10 (3) ◽  
pp. 240
Author(s):  
Shinichi Yamagiwa ◽  
Koichi Marumo ◽  
Suzukaze Kuwabara

Environments in which IoT edge devices, such as sensors, communicate remotely with cloud servers are becoming popular, for example when artificial intelligence algorithms are applied to the system. In such big-data situations, lossless data compression is one solution for reducing the data volume. In particular, stream-based data compression technology is of interest for such systems because it compresses an infinitely continuous data stream with very small delay. However, during the continuous compression process, an exception code cannot be inserted among the compressed data without additional mechanisms, such as the data framing and packetizing techniques used in networking. The exception code indicates configurations for the compressor/decompressor and/or their peripheral logic, and is used in real time to configure the parameters of those components. To implement the exception code, the compression algorithm must include a mechanism that clearly distinguishes the original data from the exception code; conventional algorithms include no such mechanism. This paper proposes novel methods, based on what we call the exception symbol, for implementing the exception code in compression that uses a look-up table. Additionally, we describe implementation details of the method by applying it to stream-based data compression algorithms. Because some of the proposed mechanisms need to reserve entries in the table, we also discuss the effect on compression performance through experimental evaluations.
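The abstract does not reproduce the paper's concrete encodings, but the general idea can be sketched as follows: assuming a compressor that emits look-up-table indices, one index is reserved so that it never collides with data and can serve as an in-band exception symbol. The class and field names below are illustrative, not the paper's.

```python
# Minimal sketch, assuming an index-emitting LUT compressor; index 0 is
# reserved: it is never assigned to data, so the decompressor can treat it
# as an in-band escape that announces a following configuration word.
EXCEPTION = 0

class LutCompressor:
    def __init__(self):
        self.table = {}       # symbol -> table index
        self.next_index = 1   # index 0 stays reserved for the exception symbol

    def compress(self, symbol):
        if symbol not in self.table:
            self.table[symbol] = self.next_index
            self.next_index += 1
        return self.table[symbol]

    def emit_exception(self, config_word):
        # Insert an exception code into the stream: escape index + payload.
        return [EXCEPTION, config_word]

c = LutCompressor()
stream = [c.compress(s) for s in b"abcabc"] + c.emit_exception(0x2A)
print(stream)  # data indices never collide with the reserved escape index
```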

Author(s):  
I. Manga ◽  
E. J. Garba ◽  
A. S. Ahmadu

Data compression refers to representing data using fewer bits. Compression can be lossless or lossy, and many schemes have been developed to perform each kind. Lossless compression allows the original data to be reconstructed exactly from the compressed data, while lossy compression allows only an approximation of the original data to be reconstructed. The data to be compressed can be classified as image, textual, audio, or even video content. Much research is being carried out in the area of image compression. This paper surveys the literature on data compression and on techniques used to compress images losslessly. In conclusion, the paper reviews schemes that compress an image using a single scheme or a combination of two or more schemes.
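As a toy illustration of the lossless property (not one of the surveyed schemes), run-length encoding reconstructs the original bytes exactly:

```python
def rle_encode(data: bytes) -> list[tuple[int, int]]:
    # Store each byte together with its repeat count.
    runs = []
    for b in data:
        if runs and runs[-1][0] == b:
            runs[-1] = (b, runs[-1][1] + 1)
        else:
            runs.append((b, 1))
    return runs

def rle_decode(runs: list[tuple[int, int]]) -> bytes:
    return b"".join(bytes([b]) * n for b, n in runs)

original = b"aaaabbbcc"
assert rle_decode(rle_encode(original)) == original  # lossless: exact reconstruction
```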


Algorithms ◽  
2020 ◽  
Vol 13 (7) ◽  
pp. 159 ◽  
Author(s):  
Shinichi Yamagiwa ◽  
Eisaku Hayakawa ◽  
Koichi Marumo

Driven by strong demand for very high-speed processor I/O, the physical performance of hardware I/O has grown drastically over the past decade. However, recent Big Data applications still demand larger I/O bandwidth and lower latency, and because raw I/O performance no longer improves so drastically, it is time to consider other ways to increase it. To overcome this challenge, we focus on lossless data compression technology to decrease the amount of data itself in the communication path. Recent Big Data applications treat data streams that flow continuously and, because of the high speed, never allow processing to stall. Therefore, an elegant hardware-based data compression technology is demanded. This paper proposes a novel lossless data compression, called ASE coding. It encodes streaming data by applying an entropy coding approach: ASE coding instantly assigns the fewest bits to the compressed data according to the number of occupied entries in a look-up table. This paper describes the detailed mechanism of ASE coding. Furthermore, it demonstrates through performance evaluations that ASE coding adaptively shrinks streaming data and works on a small amount of hardware resources without stalling or buffering any part of the data stream.
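The core adaptive idea, as stated in the abstract, is that the code width tracks the number of occupied table entries. A simplified sketch follows; the table management and literal encoding here are assumptions rather than the published ASE design:

```python
import math

# Sketch: the bit width of an emitted index adapts to how many table
# entries are currently occupied. Entry replacement and the real escape
# coding for unseen symbols are omitted for brevity.
table = []  # occupied entries, in insertion order

def encode(symbol):
    if symbol in table:
        width = max(1, math.ceil(math.log2(len(table))))  # adaptive bit width
        return format(table.index(symbol), f"0{width}b")
    table.append(symbol)          # register new symbol; assume it is sent raw
    return format(symbol, "08b")  # 8-bit literal for a first occurrence

bits = [encode(s) for s in b"abab"]
print(bits)  # later occurrences use fewer bits while the table stays small
```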


Author(s):  
Ramya. S ◽  
Gokula Krishnan. V

Big data has reached a maturity that leads it into a productive phase: most of the main issues with big data have been addressed to a degree that storage has become interesting for full commercial exploitation. However, concerns over data compression still prevent many users from migrating data to remote storage. Client-side data compression, in particular, ensures that multiple uploads of the same content consume only the network bandwidth and storage space of a single upload. Compression is actively used by a number of backup providers as well as various services. Unfortunately, compressed data is pseudorandom and thus cannot be deduplicated; as a consequence, current schemes have to entirely sacrifice storage efficiency. In this system, we present a scheme that permits a more fine-grained trade-off, built on the novel idea of differentiating data according to their popularity. Based on this idea, we design a compression scheme that guarantees semantic storage preservation for unpopular data and provides scalable storage and bandwidth benefits for popular data. We implement a variable-data-chunk similarity algorithm to analyze the chunks and store the original data in compressed format. The system also includes an encryption algorithm to secure the data. Finally, it provides a backup-recovery system for use in case of blocking and also analyzes frequent login access.
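A loose sketch of the popularity idea, under assumed details (SHA-256 fingerprints, a threshold of two uploads) that are not from the paper:

```python
import hashlib

# Chunks seen often ("popular") are stored once and shared; rare chunks
# keep a per-upload copy. Threshold and key scheme are assumptions.
POPULARITY_THRESHOLD = 2

class ChunkStore:
    def __init__(self):
        self.counts = {}   # chunk fingerprint -> times uploaded
        self.blobs = {}    # storage key -> stored chunk

    def upload(self, chunk: bytes) -> str:
        fp = hashlib.sha256(chunk).hexdigest()
        self.counts[fp] = self.counts.get(fp, 0) + 1
        if self.counts[fp] >= POPULARITY_THRESHOLD:
            self.blobs.setdefault(fp, chunk)  # popular: stored once, shared
        else:
            self.blobs[fp + f"#{self.counts[fp]}"] = chunk  # unpopular copy
        return fp

store = ChunkStore()
store.upload(b"report-v1")
store.upload(b"report-v1")  # second upload crosses the popularity threshold
print(len(store.blobs))
```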


2021 ◽  
pp. 391-410
Author(s):  
Shinichi Yamagiwa

In this chapter, we introduce aspects of applying data-compression techniques. First, we study the background of recent communication data paths. The focus of this chapter is a fast lossless data-compression mechanism that handles data streams entirely. A data stream comprises continuous data with no termination, generated massively by sources such as movies and sensors. We introduce LCA-SLT and LCA-DLT, which accept such data streams, along with several implementations of these stream-based compression techniques, and we show optimization techniques for optimal implementation in hardware.
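As a loose illustration of the dynamic-look-up-table idea behind LCA-DLT (the chapter's actual table management, including entry eviction, is more involved), consecutive symbol pairs collapse to a table index once registered:

```python
# Simplified dynamic-table pair compressor: an append-only table is a
# simplifying assumption, not the book's actual design.
def compress_pairs(stream):
    table, out = {}, []
    it = iter(stream)
    for a in it:
        pair = (a, next(it, None))
        if pair in table:
            out.append(("idx", table[pair]))     # compressed: one index
        else:
            table[pair] = len(table)
            out.append(("raw", pair))            # uncompressed: literal pair
    return out

print(compress_pairs(b"abababab"))  # repeated pairs collapse to indices
```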


Author(s):  
Yu Zhang ◽  
Yan-Ge Wang ◽  
Yan-Ping Bai ◽  
Yong-Zhen Li ◽  
Zhao-Yong Lv ◽  
...  

Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1521
Author(s):  
Jihoon Lee ◽  
Seungwook Yoon ◽  
Euiseok Hwang

With the development of the internet of things (IoT), the power grid has become intelligent through massive IoT sensors such as smart meters. Installed smart meters can collect large amounts of data to improve grid visibility and situational awareness, but limited storage and communication capacities can constrain this infrastructure in the IoT environment. To alleviate these problems, various efficient compression techniques are required. Deep-learning-based compression techniques such as auto-encoders (AEs) have recently been deployed for this purpose. However, the compression performance of existing models can be limited when the spectral properties of high-frequency sampled power data vary widely over time. This paper proposes an AE compression model, based on a frequency-selection method, which improves reconstruction quality while maintaining the compression ratio (CR). For efficient data compression, the proposed method selectively applies customized compression models depending on the spectral properties of the corresponding time windows. The framework involves two primary steps: (i) dividing the power data into a series of time windows with specified spectral properties (high-, medium-, and low-frequency dominance) and (ii) training the AE models separately and applying them selectively, so that each model best suits the characteristics of its frequency band. In simulations on the Dutch residential energy dataset, the frequency-selective AE model shows significantly higher reconstruction performance than the existing model at the same CR. In addition, the proposed model reduces the computational complexity involved in the analysis of the learning process.
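A hypothetical sketch of the frequency-selection step described above: each time window is classified by where its spectral energy concentrates, and a band-specific (separately trained) AE would then be applied. The band edges, sampling rate, and model dispatch are illustrative assumptions:

```python
import numpy as np

def dominant_band(window, fs=1000.0, edges=(50.0, 200.0)):
    # Classify a time window by the band holding the most spectral energy.
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    bands = [(0.0, edges[0]), (edges[0], edges[1]), (edges[1], fs / 2)]
    energy = [spectrum[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands]
    return ["low", "medium", "high"][int(np.argmax(energy))]

# models = {"low": ae_low, "medium": ae_mid, "high": ae_high}  # trained per band
t = np.arange(512) / 1000.0
window = np.sin(2 * np.pi * 300.0 * t)  # energy concentrated at 300 Hz
print(dominant_band(window))            # -> "high"; would select the high-band AE
```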


Author(s):  
Ying Wang ◽  
Yiding Liu ◽  
Minna Xia

Big data is characterized by multiple sources and heterogeneity. Based on the Hadoop and Spark big-data platforms, a hybrid analysis of forest fire is built in this study. The platform combines big-data analysis and processing technology and draws on research results from different technical fields, such as forest fire monitoring. In this system, Hadoop's HDFS is used to store all kinds of data, the Spark module provides various big-data analysis methods, and visualization tools such as ECharts, ArcGIS, and Unity3D realize visualization of the analysis results. Finally, an experiment on forest fire point detection is designed to corroborate the feasibility and effectiveness of the platform, and to provide meaningful guidance for follow-up research and for establishing a forest fire monitoring and visualized early-warning big-data platform. However, this experiment has two shortcomings: more data types should be selected, and compatibility would be better if the original data could be converted to XML format. These problems are expected to be solved in follow-up research.
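A hypothetical PySpark fragment in the spirit of the described pipeline; the HDFS path, column names, and brightness threshold are illustrative assumptions, not the study's setup:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("forest-fire-analysis").getOrCreate()

# Load sensor records stored on HDFS (assumed CSV layout).
readings = spark.read.csv("hdfs:///forest/sensor_readings.csv",
                          header=True, inferSchema=True)

# Flag readings whose brightness temperature exceeds an assumed threshold.
fire_points = readings.filter(F.col("brightness") > 330.0) \
                      .select("latitude", "longitude", "brightness", "timestamp")

fire_points.show()  # results could then be fed to ECharts/ArcGIS for display
```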


2018 ◽  
Vol 4 (12) ◽  
pp. 142 ◽  
Author(s):  
Hongda Shen ◽  
Zhuocheng Jiang ◽  
W. Pan

Hyperspectral imaging (HSI) technology has been used for various remote sensing applications due to its excellent capability of monitoring regions of interest over a period of time. However, the large data volume of four-dimensional multitemporal hyperspectral imagery demands effective data compression techniques. While conventional 3D hyperspectral data compression methods exploit only spatial and spectral correlations, we propose a simple yet effective predictive lossless compression algorithm that achieves significant gains in compression efficiency by also taking into account the temporal correlations inherent in multitemporal data. We present an information-theoretic analysis to estimate the potential compression performance gain for varying configurations of context vectors. Extensive simulation results demonstrate the effectiveness of the proposed algorithm. We also provide in-depth discussions on how to construct the context vectors in the prediction model, for both multitemporal HSI and conventional 3D HSI data.
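A minimal sketch of the temporal part of such a predictor, under assumed array shapes (time, band, row, col); the paper's context vectors also incorporate spatial and spectral neighbors, and an entropy coder would follow the prediction step:

```python
import numpy as np

def temporal_residuals(cube):
    # Predict each frame from the co-located pixel at the previous time step
    # (the temporal context) and keep only the prediction residuals.
    res = cube.copy()
    res[1:] = cube[1:] - cube[:-1]   # frames after the first become differences
    return res

rng = np.random.default_rng(0)
base = rng.integers(0, 256, size=(1, 4, 8, 8))
cube = np.repeat(base, 3, axis=0)                  # slowly varying over time
print(np.abs(temporal_residuals(cube)[1:]).sum())  # 0: ideal for entropy coding
```

A decoder reverses the step with a cumulative sum along the time axis, so reconstruction is exact and the scheme stays lossless.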

