Data Compression Techniques and Standards

Author(s):  
Phillip K.C. Tse

In the previous chapter, we saw that the performance of a storage system depends on the amount of data being retrieved. Multimedia objects, however, are very large. The performance of the storage system can therefore be enhanced if the object sizes are reduced, so multimedia objects are always compressed when they are stored. In addition, the performance of most subsystems depends on the amount of data being processed; since multimedia objects are large, their access times are long. Thus, multimedia objects are kept in compressed form while they are stored, retrieved, and processed. We describe the commonly used compression techniques and compression standards in this chapter. We first describe the general compression model in the next section. Then we explain techniques for compressing textual data, followed by image compression techniques; in particular, we explain JPEG2000 compression in detail. Lastly, we explain the MPEG-2 video compression standard. These compression techniques are helpful for understanding the multimedia data being stored and retrieved.

2021 ◽  
Vol 10 (1) ◽  
pp. 22-28
Author(s):  
S. Karthigai Selvam ◽  
S. Selvam

In recent times, data are increasingly transferred as multimedia data such as images, graphics, audio, and video. Multimedia data require a huge amount of storage capacity and transmission bandwidth; consequently, data compression is used to reduce data redundancy and allow more data to be stored. This paper addresses the shortcomings of lossy image compression. The proposed method is based on an SVD power method that overcomes the demerits of Python's built-in SVD function. Our experimental results show the superiority of the proposed compression method over the Python SVD function and several other compression techniques. In addition, the proposed method provides different degrees of error tolerance, yielding shorter execution times and better image compression.
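The idea behind SVD-based lossy compression can be illustrated with a minimal power-method sketch: repeatedly extract the dominant singular triplet and deflate, then keep only the first k triplets instead of all pixels. This is a generic illustration under assumed details (toy array, fixed iteration count), not the authors' implementation.

```python
import numpy as np

def power_method_svd(A, k, iters=100):
    """Rank-k SVD approximation via power iteration with deflation."""
    A = A.astype(float).copy()
    m, n = A.shape
    U, S, Vt = np.zeros((m, k)), np.zeros(k), np.zeros((k, n))
    rng = np.random.default_rng(0)
    for i in range(k):
        v = rng.standard_normal(n)
        for _ in range(iters):          # power iteration on A^T A
            v = A.T @ (A @ v)
            v /= np.linalg.norm(v)
        sigma = np.linalg.norm(A @ v)
        u = (A @ v) / sigma
        U[:, i], S[i], Vt[i] = u, sigma, v
        A -= sigma * np.outer(u, v)     # deflate the found component
    return U, S, Vt

# Keeping k triplets stores k*(m+n+1) numbers instead of m*n pixels.
img = np.outer(np.arange(8.0), np.ones(8)) + np.eye(8)  # toy "image"
U, S, Vt = power_method_svd(img, k=2)
approx = U @ np.diag(S) @ Vt
```

For a natural image, the reconstruction error drops quickly as k grows because most energy concentrates in the leading singular values.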


2020 ◽  
Vol 34 (08) ◽  
pp. 2050061
Author(s):  
Shraddha Pandit ◽  
Piyush Kumar Shukla ◽  
Akhilesh Tiwari ◽  
Prashant Kumar Shukla ◽  
Manish Maheshwari ◽  
...  

Data processing with multiple domains is an important concept on any platform; it deals with multimedia and textual information. Whereas textual data processing handles structured or unstructured data that can be computed quickly without compression, multimedia data processing requires algorithms in which compression is essential. This involves processing videos and their frames and compressing them into compact forms so that both storage and access are fast. There are different ways of performing compression, such as fractal compression, wavelet transform, compressive sensing, and contractive transformation. One way of performing such compression is to work with the high-frequency components of multimedia data. One of the most recent topics is fractal transformation, which exploits block self-similarity and achieves a high compression ratio. Yet there are limitations, such as the speed and cost of proper encoding and decoding with fractal compression. Swarm optimization and related algorithms make it practical alongside a fractal compression function. In this paper, we review multiple algorithms in the fields of fractal-based video compression and swarm intelligence for optimization problems.
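Swarm optimization is paired with fractal coding because the expensive step is a search (which domain block and map parameters fit a range block best). A minimal particle swarm optimizer over one parameter gives the flavor; the objective below is a hypothetical stand-in for a block-matching error, and the constants (inertia, acceleration) are conventional defaults, not taken from the reviewed papers.

```python
import random

def pso(f, lo, hi, n_particles=20, iters=60, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal 1-D particle swarm: minimize f over the interval [lo, hi]."""
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for _ in range(n_particles)]
    v = [0.0] * n_particles
    pbest, pval = x[:], [f(p) for p in x]       # per-particle best
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g], pval[g]             # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            v[i] = w*v[i] + c1*r1*(pbest[i]-x[i]) + c2*r2*(gbest-x[i])
            x[i] = min(hi, max(lo, x[i] + v[i]))
            fx = f(x[i])
            if fx < pval[i]:
                pbest[i], pval[i] = x[i], fx
                if fx < gval:
                    gbest, gval = x[i], fx
    return gbest, gval

# Hypothetical objective: find the contractive scale s that minimizes a
# block-matching error (here a toy quadratic with its minimum at 0.42).
best_s, best_err = pso(lambda s: (s - 0.42) ** 2, 0.0, 1.0)
```

In a real fractal encoder the search space also covers domain positions and isometries, which is exactly where swarm methods cut encoding time.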


Author(s):  
I. Manga ◽  
E. J. Garba ◽  
A. S. Ahmadu

Data compression refers to the process of representing data using fewer bits. Data compression can be lossless or lossy, and many schemes have been developed to perform either. Lossless data compression allows the original data to be reconstructed exactly from the compressed data, while lossy compression allows only an approximation of the original data to be reconstructed. The data to be compressed can be classified as image, textual, audio, or video content. Much research is being carried out in the area of image compression. This paper surveys the literature on data compression and the techniques used to compress images losslessly. In conclusion, the paper reviews schemes that compress an image using a single scheme or a combination of two or more schemes.
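The defining property of lossless compression, exact reconstruction, can be checked directly with Python's standard zlib module; the byte string below merely stands in for raw image data.

```python
import zlib

# A stand-in for raw image pixel data: a 256-byte pattern repeated 40 times.
raw = bytes(range(256)) * 40

compressed = zlib.compress(raw, level=9)   # DEFLATE, maximum effort
restored = zlib.decompress(compressed)

# Lossless: the round trip is bit-for-bit exact, and the repeated
# pattern makes the compressed form much smaller than the original.
ratio = len(raw) / len(compressed)
```

A lossy scheme would instead trade exactness for a higher ratio, which is acceptable for images but not, say, for program files.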


Connectivity ◽  
2020 ◽  
Vol 146 (4) ◽  
Author(s):  
G. Ya. Kis ◽  
V. M. Cherevyk ◽  

The article describes the current state of data transfer protocols and methods of image and video compression based on artificial neural networks, namely convolutional multilayer networks and deep structured learning. Based on recent publications, a comparative analysis of the performance of classical compression methods and methods based on neural networks was performed. The most effective compression methods are based on decorrelation transforms, namely the discrete cosine transform (JPEG standard) and the wavelet transform (JPEG-2000 standard). The transform coefficients have a well-understood physical content of spatial frequencies and can be further quantized for a more compact representation of components that are less important for human perception. The HEVC standard provides a more efficient image compression scheme that further exploits the similarity of adjacent blocks and uses interpolation (intra-coding). Based on the HEVC standard, the BPG (better portable graphics) format was developed for use on the Internet as an alternative to JPEG, and it is considerably more efficient than other standards. The overview of current open standards provided in the article explains which properties of neural networks can be applied to image compression. There are two approaches to compression using neural networks: in the first, a neural network is used as part of an existing algorithm (hybrid coding); in the second, the neural network itself produces a concise representation of the data (compression network). Final conclusions are drawn regarding the application of these algorithms in the H.265 (HEVC) protocol and the possibility of creating a new protocol based entirely on neural networks.
Protocols using neural networks show better results in image compression but are currently hard to standardize so as to obtain the expected result across different network architectures. We may expect an increase in the need for video transmission in the future, which will run up against the limits of classical approaches. At the same time, specialized processors for parallel data processing and the implementation of neural networks are currently under development. These two factors indicate that neural networks will have to be embedded into industrial data standards.
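The decorrelation-plus-quantization idea the article credits to JPEG can be shown in a few lines: an orthonormal 8x8 DCT-II packs a smooth block's energy into a handful of coefficients, and uniform quantization zeroes the rest. This is a bare sketch with an assumed quantization step, not the full JPEG pipeline (no zig-zag scan, no entropy coding).

```python
import numpy as np

# Orthonormal 8x8 DCT-II matrix: C @ C.T == I.
N = 8
k = np.arange(N, dtype=float)[:, None]
x = np.arange(N, dtype=float)[None, :]
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * x + 1) * k / (2 * N))
C[0, :] = np.sqrt(1.0 / N)

block = np.outer(np.linspace(0, 255, N), np.ones(N))  # smooth gradient block

D = C @ block @ C.T        # 2-D DCT: energy packs into few coefficients
q = 20.0                   # assumed uniform quantization step
Dq = np.round(D / q)       # small (perceptually minor) coefficients become 0
rec = C.T @ (Dq * q) @ C   # dequantize + inverse DCT
```

Only the sparse nonzero entries of `Dq` need to be stored, while the reconstruction error stays bounded by the quantization step.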


Connectivity ◽  
2020 ◽  
Vol 148 (6) ◽  
Author(s):  
Yu. I. Katkov ◽  
O. S. Zvenigorodsky ◽  
O. V. Zinchenko ◽  
V. V. Onyshchenko ◽  
...  

The article is devoted to the topical issue of finding new effective compression methods and improving existing widespread ones, in order to reduce computational complexity and improve the quality of reconstructed images, which is important for the adoption of cloud technologies. The article states the problem: to increase the efficiency of cloud storage, it is necessary to determine methods for reducing the information redundancy of digital images through fractal compression of video content, and to make recommendations on applying these methods to various practical problems. The necessity of storing high-quality video in the new HDTV formats 2K, 4K, and 8K in cloud storage to meet users' existing needs is substantiated. It is shown that processing and transmitting high-quality video raises the problem of reducing the redundancy of the video data (image compression) while preserving the image quality the user requires on reconstruction. In cloud storage this problem historically arises from the contradiction between consumers' requirements for image quality and the volumes of video data that must be transmitted over communication channels and processed on data-center servers. The solution is traditionally sought in effective technologies for compressing and archiving video information. An analysis of video compression methods and digital video compression technology, which reduce the amount of data used to represent the video stream, has been performed. Approaches to image compression in cloud storage that preserve, or only slightly reduce, the amount of data while providing the user with the specified quality of the restored image are shown. A classification of special compression methods, lossless and lossy, is provided.
Based on this analysis, it is concluded that special lossy compression methods are advisable for storing high-quality video in the new HDTV formats 2K, 4K, and 8K in cloud storage. The processing, encoding, and compression of video images on the basis of fractal image compression is substantiated, and recommendations for implementing these methods are given.
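Fractal image compression, which the article recommends, stores no pixels at all: each small range block is encoded as a contractive affine map of a larger domain block, and decoding iterates those maps from any starting image until they converge to the attractor. The sketch below works on a tiny grayscale array with fixed block sizes; it is a minimal illustration of the principle, not a production codec.

```python
import numpy as np

def downsample(b):
    """2x2 average pooling: shrink a domain block to range-block size."""
    return (b[0::2, 0::2] + b[1::2, 0::2] + b[0::2, 1::2] + b[1::2, 1::2]) / 4.0

def encode(img, R=4):
    """For each RxR range block, pick the 2Rx2R domain block and the
    contractive affine map v -> s*v + o fitted by least squares."""
    H, W = img.shape
    maps = {}
    for ri in range(0, H, R):
        for rj in range(0, W, R):
            r = img[ri:ri+R, rj:rj+R].ravel()
            best = None
            for di in range(0, H - 2*R + 1, R):
                for dj in range(0, W - 2*R + 1, R):
                    d = downsample(img[di:di+2*R, dj:dj+2*R]).ravel()
                    var = d.var()
                    s = 0.0 if var == 0 else float(
                        np.mean((d - d.mean()) * (r - r.mean())) / var)
                    s = max(-0.9, min(0.9, s))   # enforce contractivity
                    o = float(r.mean() - s * d.mean())
                    err = float(np.sum((s * d + o - r) ** 2))
                    if best is None or err < best[0]:
                        best = (err, di, dj, s, o)
            maps[(ri, rj)] = best[1:]
    return maps

def decode(maps, shape, R=4, iters=20):
    """Iterate the stored maps from an arbitrary image (all zeros);
    contractivity guarantees convergence to the encoded attractor."""
    x = np.zeros(shape)
    for _ in range(iters):
        y = np.empty_like(x)
        for (ri, rj), (di, dj, s, o) in maps.items():
            y[ri:ri+R, rj:rj+R] = s * downsample(x[di:di+2*R, dj:dj+2*R]) + o
        x = y
    return x

img = np.add.outer(np.arange(8.0), np.arange(8.0))   # smooth toy image
code = encode(img)
rec = decode(code, img.shape)
```

The encoder's exhaustive domain search is the cost the article alludes to; real systems restrict or accelerate that search.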


2020 ◽  
Vol 1 (1) ◽  
pp. 1
Author(s):  
Shaohua Wan ◽  
Xinfang Zhang ◽  
Tongyang Wang ◽  
Songfu Lu ◽  
Xiaoyang Yu

Author(s):  
Shiguo Lian

Since the beginning of the 1990s, multimedia standards (Joan, Didier, & Chad, 2003) for image, video, and audio compression have been published and widely used. These compression methods reduce the volume of media data and save storage space or transmission bandwidth. After the middle of the 1990s, network technology developed rapidly and spread widely, increasing network bandwidth. With the development of network and multimedia (image, audio, video, etc.) technologies, multimedia data are used more and more widely. In applications related to politics, economics, the military, entertainment, or education, multimedia content security becomes important and urgent; sensitive data need to be protected against unauthorized users. For example, only the customers who pay for a TV program can watch it online; only the administrator can update (delete, insert, copy, etc.) the TV program in the database; a TV program released over the Internet can be traced; and so forth. Multimedia content protection technology protects multimedia data against threats from unauthorized users, especially in network environments. The protected properties generally include confidentiality, integrity, and ownership. Confidentiality means that only authorized users can access the multimedia content. Integrity tells whether the media data have been modified. Ownership records the data owner's information, which is used to authenticate or trace the distributor. During the past decade, various technologies have been proposed to protect media data, and they are introduced in this chapter. Additionally, the threats to multimedia data are presented, the existing protection methods are compared, and some future trends are proposed.
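The integrity property described above, detecting whether media data have been modified, can be illustrated with a keyed hash from Python's standard library. This is a toy sketch of the concept (the key and media bytes are made up), not a full content-protection scheme.

```python
import hashlib
import hmac

key = b"shared-secret-key"            # known only to authorized parties
media = b"frame-data-0123456789"      # stand-in for media bytes

# The distributor attaches an authentication tag to the media data.
tag = hmac.new(key, media, hashlib.sha256).digest()

def verify(key, data, tag):
    """Integrity check: recompute the tag and compare in constant time."""
    expected = hmac.new(key, data, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

ok = verify(key, media, tag)                       # untampered: passes
tampered = verify(key, media[:-1] + b"\xff", tag)  # one byte flipped: fails
```

Confidentiality and ownership call for different tools (encryption and watermarking respectively), which the chapter surveys.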


Author(s):  
Manjunath Ramachandra

If large data transactions happen in the supply chain over the web, resources are strained, choking the network and raising transfer costs. To use the available resources over the Internet effectively, data are often compressed before transfer. This chapter presents the different methods and levels of data compression. A separate section is devoted to multimedia data compression, where certain losses in the data are tolerable during compression owing to the limits of human perception.


2020 ◽  
Vol 20 (02) ◽  
pp. 2050007
Author(s):  
Poorva Girishwaingankar ◽  
Sangeeta Milind Joshi

This paper proposes a compression algorithm using an octonary repetition tree (ORT) based on run-length encoding (RLE). RLE is a lossless data compression method whose major issue is duplication caused by the use of a code word or flag; ORT is offered in place of a flag or code word to overcome this. The method achieves a better compression ratio, i.e. 99.75%, but ORT alone performs poorly in terms of compression speed. For that reason, physical next-generation secure computing (PHY-NGSC) is hybridized with ORT to raise the compression speed, using an MPI/OpenMP programming paradigm on ORT to improve the encoder's speed. The proposed work achieves multiple levels of parallelism within an image, with MPI across the group-of-pictures level and OpenMP at the slice level. At the same time, the proposed method can compress a wide range of data, such as multimedia, executable files, and documents. Its performance is compared with methods such as accordion RLE, context-adaptive variable-length coding (CAVLC), and context-based arithmetic coding (CBAC) through implementation on the Matlab platform.
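For reference, the RLE baseline the paper builds on reduces each run of identical symbols to a (count, symbol) pair; the ORT variant replaces the flag/code-word mechanism and is not reproduced here. A minimal sketch:

```python
def rle_encode(data):
    """Basic run-length encoding: emit (count, symbol) pairs."""
    out = []
    i = 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i]:
            j += 1                    # extend the current run
        out.append((j - i, data[i]))
        i = j
    return out

def rle_decode(pairs):
    """Inverse: expand each pair back into a run of symbols."""
    return "".join(sym * cnt for cnt, sym in pairs)

encoded = rle_encode("AAAABBBCCD")    # [(4,'A'), (3,'B'), (2,'C'), (1,'D')]
restored = rle_decode(encoded)
```

Plain RLE only wins on data with long runs; that sensitivity is precisely what schemes layered on top of it try to fix.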


2009 ◽  
Vol 25 (13) ◽  
pp. 1575-1586 ◽  
Author(s):  
R. Giancarlo ◽  
D. Scaturro ◽  
F. Utro
