Adaptive Vector Quantization with Creation and Reduction Grounded in the Equinumber Principle

Author(s):  
Michiharu Maeda, Noritaka Shigei, Hiromi Miyajima

This paper concerns the construction of unit structures in neural networks for adaptive vector quantization. When each partition of the input space contains an equal number of inputs, the partition errors are equal and the average distortion is asymptotically minimized; we term this the equinumber principle. Based on this principle, two types of adaptive vector quantization are presented to avoid the dependence of reference vectors on their initial values. Conventional techniques, such as structural learning with forgetting, keep the same number of output units from start to finish. Our approaches instead explicitly change the number of output units until a predetermined number is reached, without using neighborhood relations, so as to equalize the numbers of inputs across partition spaces. In the first approach, output units are sequentially created during learning based on the equinumber principle. In the second, surplus output units are sequentially deleted until the prespecified number remains. Experimental results demonstrate the effectiveness of both techniques in terms of average distortion, and their feasibility is confirmed by application to image coding.
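To illustrate the creation-type approach, the following Python sketch grows a codebook one unit at a time under plain competitive learning, splitting the most populated partition so that the input counts stay roughly equal. This is only a minimal sketch of the equinumber idea; the function name, learning rate, and splitting rule are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

def creation_vq(data, n_target, n_epochs=20, lr=0.05, seed=0):
    """Grow a codebook toward n_target units, splitting the most
    populated partition so input counts stay near-equal (equinumber
    principle). Illustrative sketch, not the paper's exact update rule."""
    rng = np.random.default_rng(seed)
    codebook = data[rng.choice(len(data), 1)].astype(float)  # one initial unit
    while True:
        for _ in range(n_epochs):                  # plain competitive learning
            for x in data[rng.permutation(len(data))]:
                w = np.argmin(((codebook - x) ** 2).sum(axis=1))
                codebook[w] += lr * (x - codebook[w])        # move the winner
        if len(codebook) == n_target:
            return codebook
        # count inputs per partition and split the most populated unit
        wins = np.argmin(((data[:, None] - codebook[None]) ** 2).sum(-1), axis=1)
        counts = np.bincount(wins, minlength=len(codebook))
        busiest = counts.argmax()
        jitter = 1e-3 * rng.standard_normal(codebook.shape[1])
        codebook = np.vstack([codebook, codebook[busiest] + jitter])
```

The deletion-type approach works in the opposite direction: it starts above the target number and removes the unit winning the fewest inputs until the prespecified number remains.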

Author(s):  
Michiharu Maeda, Noritaka Shigei, Hiromi Miyajima, Kenichi Suzaki, ...

Two reduction methods for competitive learning, founded on distortion criteria, are discussed from the viewpoint of generating the necessary and appropriate reference vectors when their final number is predetermined. The first approach is termed the segmental reduction and competitive learning algorithm: numerous reference vectors are first prepared and trained by competitive learning, and reference vectors are then sequentially eliminated, according to a partition-error criterion, until the prespecified number remains. The second approach is termed the general reduction and competitive learning algorithm: again, numerous reference vectors are first prepared and trained by competitive learning, and reference vectors are then sequentially erased according to an average-distortion criterion. Experimental results demonstrate that our approaches achieve lower average distortion than conventional techniques. Both approaches are applied to image coding to assess their feasibility in terms of quality and computation time.
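A minimal Python sketch of the reduction step follows. The two branches correspond loosely to the partition-error (segmental) and average-distortion (general) criteria described above; function and parameter names are illustrative, and the exact criteria in the paper may differ in detail.

```python
import numpy as np

def reduce_codebook(data, codebook, n_target, criterion="partition"):
    """Delete reference vectors one at a time until n_target remain.
    'partition' removes the unit with the smallest partition error
    (segmental variant); 'distortion' removes the unit whose deletion
    raises the overall average distortion least (general variant)."""
    codebook = codebook.astype(float).copy()
    while len(codebook) > n_target:
        d2 = ((data[:, None] - codebook[None]) ** 2).sum(-1)   # (N, K)
        wins = d2.argmin(axis=1)                               # winner per input
        if criterion == "partition":
            # partition error of unit k: summed distortion of its inputs
            err = np.array([d2[wins == k, k].sum() for k in range(len(codebook))])
            victim = int(err.argmin())
        else:
            # average distortion after hypothetically removing each unit
            cost = []
            for k in range(len(codebook)):
                rest = np.delete(np.arange(len(codebook)), k)
                cost.append(d2[:, rest].min(axis=1).mean())
            victim = int(np.argmin(cost))
        codebook = np.delete(codebook, victim, axis=0)
    return codebook
```

In this sketch the reduction is run after an ordinary competitive-learning pass (as in `creation_vq` above); only the elimination criterion distinguishes the two variants.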


2021, Vol. 17 (14), pp. 135-153
Author(s):  
Haval Tariq Sadeeq, Thamer Hassan Hameed, Abdo Sulaiman Abdi, Ayman Nashwan Abdulfatah

Digital images consist of large amounts of data and thus require considerable memory space. A compressed image requires less storage space and less transmission time. Image and video coding technology has evolved steadily in recent years. However, with the popularization of image and video acquisition systems, the growth rate of image data far exceeds the growth of achievable compression ratios. It is generally accepted that further improvement of coding efficiency within the conventional hybrid coding framework is increasingly difficult. Deep convolutional neural networks (CNNs), which in recent years have revived neural network research and achieved remarkable success both in artificial intelligence and in signal processing, offer a new and promising solution for image compression. In this paper we present a systematic, detailed, and current review of neural-network-based image compression techniques, covering the evolution and development of these methods. In particular, end-to-end frameworks based on neural networks are reviewed, revealing promising explorations of frameworks and standards for next-generation image coding. The most important studies are highlighted, and future trends in neural-network-based image coding are envisioned.
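The end-to-end frameworks reviewed here typically pair a convolutional encoder and decoder with a quantized latent representation trained jointly on a rate-distortion objective. The PyTorch sketch below shows only the core idea; the architecture is an illustrative assumption and omits the entropy model and rate term that practical learned codecs require.

```python
import torch
import torch.nn as nn

class TinyCodec(nn.Module):
    """Minimal sketch of an end-to-end CNN image codec: convolutional
    encoder, uniform quantization of the latent, convolutional decoder.
    Not a specific published architecture."""
    def __init__(self, ch=32):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, ch, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(ch, ch, 5, stride=2, padding=2),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(ch, ch, 5, stride=2, padding=2, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(ch, 3, 5, stride=2, padding=2, output_padding=1),
        )

    def forward(self, x):
        y = self.enc(x)
        # straight-through rounding: quantize in the forward pass,
        # pass gradients through unchanged in the backward pass
        y_hat = y + (torch.round(y) - y).detach()
        return self.dec(y_hat)

# usage sketch: minimize reconstruction error (distortion term only)
model = TinyCodec()
x = torch.rand(1, 3, 64, 64)
loss = nn.functional.mse_loss(model(x), x)
loss.backward()
```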


2008, Vol. 25 (6), pp. 1041-1047
Author(s):  
Bormin Huang, Alok Ahuja, Hung-Lung Huang

Abstract Contemporary and future high spectral resolution sounders represent a significant technical advancement for environmental and meteorological prediction and monitoring. Given their large volume of spectral observations, the use of robust data compression techniques will be beneficial to data transmission and storage. In this paper, a novel adaptive vector quantization (VQ)-based linear prediction (AVQLP) method for lossless compression of high spectral resolution sounder data is proposed. The AVQLP method optimally adjusts the quantization codebook sizes to yield the maximum compression on prediction residuals and side information. The method outperforms the state-of-the-art compression methods [Joint Photographic Experts Group (JPEG)-LS, JPEG2000 Parts 1 and 2, Consultative Committee for Space Data Systems (CCSDS) Image Data Compression (IDC) 5/3, Context-Based Adaptive Lossless Image Coding (CALIC), and 3D Set Partitioning in Hierarchical Trees (SPIHT)] and achieves a new high in lossless compression for the standard test set of 10 NASA Atmospheric Infrared Sounder (AIRS) granules. It also compares favorably in terms of computational efficiency and compression gain to recently reported adaptive clustering methods for lossless compression of high spectral resolution data. Given its superior compression performance, the AVQLP method is well suited to ground operation of high spectral resolution satellite data compression for rebroadcast and archiving purposes.
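For intuition, the following Python sketch shows only the linear-prediction front end of such a scheme: each spectral band is predicted from the previous bands by least squares, and integer residuals are formed for lossless entropy coding. The adaptive-VQ stage of AVQLP, which tunes quantization codebook sizes to minimize residual and side-information bits, is not reproduced here; all names are illustrative assumptions.

```python
import numpy as np

def band_residuals(cube, order=1):
    """Predict each spectral band from the previous `order` bands by
    least squares and return integer residuals for lossless coding.
    The per-band coefficients would be sent as side information;
    AVQLP's adaptive quantization of that side info is omitted."""
    bands, npix = cube.shape                 # cube: (bands, pixels), integer counts
    out = [cube[:order].astype(np.int64)]    # first `order` bands sent raw
    for b in range(order, bands):
        X = cube[b - order:b].T.astype(float)          # (pixels, order)
        y = cube[b].astype(float)
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # per-band predictor
        pred = np.rint(X @ coef).astype(np.int64)      # integer prediction
        out.append((cube[b].astype(np.int64) - pred)[None])  # lossless residual
    return np.vstack(out)
```

Because the decoder can recompute each prediction from already-decoded bands and the transmitted coefficients, adding the residuals recovers the original data exactly, which is what makes the scheme lossless.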


1996
Author(s):  
Suryalakshmi Pemmaraju, Sunanda Mitra, L. Rodney Long, George R. Thoma, Yao-Yang Shieh, ...
