compression efficiency
Recently Published Documents


TOTAL DOCUMENTS: 152 (five years: 52)

H-INDEX: 11 (five years: 2)

2022 ◽  
Vol 355 ◽  
pp. 01001
Author(s):  
Pan Jin ◽  
Jin Feng

Suppose that the curved compression surface of an inlet consists of micro-element segments. Two curved surfaces are designed: one formed by equal compression angles across the micro-element segments, and one by a slight increase in the compression angle from segment to segment. The numerical simulation method is used to compare the performance of the two curved surfaces with a reference three-wedge compression surface. Classic NASA test data were selected to validate the turbulence model and calculation method used in the Fluent numerical simulation software. The results show that the configuration of the segment compression angles strongly affects the compression efficiency of the curved-surface compression system. The pressure gradient distribution on the compression surface with constant segment compression angles is nearly constant along the incoming flow direction, and the curved compression surface resists boundary-layer separation more easily than the three-wedge compression surface. An approximate calculation method for the curved shock profile is also given.
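As background for the approximate shock-profile calculation, the classical oblique-shock theta-beta-M relation links the compression (deflection) angle of each segment to the local shock angle. A minimal Python sketch of that generic compressible-flow relation (not the paper's method; gamma = 1.4, and the bisection bracket is hand-picked for the Mach 2 weak branch):

```python
import math

GAMMA = 1.4  # ratio of specific heats for air

def deflection(beta, mach):
    """Flow deflection angle theta (rad) produced by an oblique shock
    at wave angle beta (rad), from the classical theta-beta-M relation."""
    num = (2.0 / math.tan(beta)) * (mach**2 * math.sin(beta)**2 - 1.0)
    den = mach**2 * (GAMMA + math.cos(2.0 * beta)) + 2.0
    return math.atan(num / den)

def weak_shock_angle(theta, mach):
    """Weak-branch wave angle for a given deflection, by bisection.
    The upper bracket (64 deg) is a hand-picked bound below the
    max-deflection angle for Mach numbers near 2 (a sketch assumption)."""
    lo = math.asin(1.0 / mach) + 1e-9    # Mach angle: zero deflection
    hi = math.radians(64.0)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if deflection(mid, mach) < theta:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Example: a 10-degree compression segment in a Mach 2 stream.
beta = weak_shock_angle(math.radians(10.0), 2.0)
```

Each micro-element segment's deflection angle contributes one such local shock angle, from which an approximate curved shock profile can be pieced together.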


2021 ◽  
pp. 1-65
Author(s):  
Dale Zhou ◽  
Christopher W. Lynn ◽  
Zaixu Cui ◽  
Rastko Ciric ◽  
Graham L. Baum ◽  
...  

Abstract In systems neuroscience, most models posit that brain regions communicate information under constraints of efficiency. Yet, evidence for efficient communication in structural brain networks remains sparse. The principle of efficient coding proposes that the brain transmits maximal information in a metabolically economical or compressed form to improve future behavior. To determine how structural connectivity supports efficient coding, we develop a theory specifying minimum rates of message transmission between brain regions to achieve an expected fidelity, and we test five predictions from the theory based on random walk communication dynamics. In doing so, we introduce the metric of compression efficiency, which quantifies the trade-off between lossy compression and transmission fidelity in structural networks. In a large sample of youth (n = 1,042; age 8–23 years), we analyze structural networks derived from diffusion weighted imaging and metabolic expenditure operationalized using cerebral blood flow. We show that structural networks strike compression efficiency trade-offs consistent with theoretical predictions. We find that compression efficiency prioritizes fidelity with development, heightens when metabolic resources and myelination guide communication, explains advantages of hierarchical organization, links higher input fidelity to disproportionate areal expansion, and shows that hubs integrate information by lossy compression. Lastly, compression efficiency is predictive of behavior—beyond the conventional network efficiency metric—for cognitive domains including executive function, memory, complex reasoning, and social cognition. Our findings elucidate how macroscale connectivity supports efficient coding, and serve to foreground communication processes that utilize random walk dynamics constrained by network connectivity.
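The abstract benchmarks compression efficiency against the conventional network efficiency metric. For orientation, that conventional metric (global efficiency) is the mean inverse shortest-path length over node pairs; a minimal pure-Python sketch on a toy graph (not the study's data, and not its compression-efficiency metric):

```python
from collections import deque

def global_efficiency(adj):
    """Mean inverse shortest-path length over all ordered node pairs
    of an unweighted, undirected graph given as an adjacency list."""
    n = len(adj)
    total = 0.0
    for src in range(n):
        # BFS shortest-path lengths from src
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(1.0 / d for node, d in dist.items() if node != src)
    return total / (n * (n - 1))

# Toy 4-node path graph: 0-1-2-3
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
eff = global_efficiency(adj)
```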


2021 ◽  
Vol 7 (12) ◽  
pp. 268
Author(s):  
Ryota Motomura ◽  
Shoko Imaizumi ◽  
Hitoshi Kiya

In this paper, we propose a new framework for reversible data hiding in encrypted images, in which both the hiding capacity and the lossless compression efficiency can be flexibly controlled. The framework has two main purposes: one is to provide highly efficient lossless compression under a required hiding capacity; the other is to enable extraction of an embedded payload from a decrypted image. The proposed method can decrypt marked encrypted images without data extraction to derive marked images. An original image is arbitrarily divided into two regions, and two different methods for reversible data hiding in encrypted images (RDH-EI) are applied, one to each region. Consequently, one region can be decrypted without data extraction and losslessly compressed using image coding standards even after processing, while the other region offers a significantly high hiding rate of around 1 bpp. Experimental results show the effectiveness of the proposed method in terms of hiding capacity and lossless compression efficiency.
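A toy byte-level sketch of the two-region idea, under stated simplifications (XOR stream cipher for encryption, plain LSB substitution at 1 bpp in one region, and side information held separately to keep the toy reversible; the actual RDH-EI methods in the paper are more involved):

```python
import os

def xor_bytes(data, key):
    return bytes(d ^ k for d, k in zip(data, key))

# Toy 16-byte "image": region A (first 8 bytes) and region B (last 8).
image = bytes(range(16))
region_a, region_b = image[:8], image[8:]

key_a = os.urandom(8)   # keystream for region A
key_b = os.urandom(8)   # keystream for region B

enc_a = xor_bytes(region_a, key_a)
enc_b = xor_bytes(region_b, key_b)

# Embed 1 bit per byte (1 bpp) into region B's LSB plane, saving the
# replaced LSBs as side info so the toy stays reversible.
payload = [1, 0, 1, 1, 0, 0, 1, 0]
saved_lsbs = [b & 1 for b in enc_b]
marked_b = bytes((b & 0xFE) | bit for b, bit in zip(enc_b, payload))

# Region A decrypts correctly with no data extraction at all.
dec_a = xor_bytes(enc_a, key_a)

# Extract the payload, restore region B, then decrypt it losslessly.
extracted = [b & 1 for b in marked_b]
restored_b = bytes((b & 0xFE) | lsb for b, lsb in zip(marked_b, saved_lsbs))
dec_b = xor_bytes(restored_b, key_b)
```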


2021 ◽  
Vol 18 (3) ◽  
pp. 194-208
Author(s):  
F.M. Dahunsi ◽  
O. A. Somefun ◽  
A.A. Ponnle ◽  
K.B. Adedeji

In recent years, the electric grid has seen increasing deployment, use, and integration of smart meters and energy monitors. These devices transmit large time-series load datasets representing consumed electrical energy for load monitoring. However, load monitoring raises issues of efficient processing, transmission, and storage. One approach to managing this challenge, and thereby improving the efficiency and sustainability of the smart grid, is to apply data-compression techniques. The subject of compressing electrical energy data (EED) has attracted active interest over the past decade. However, quickly grasping the range of appropriate compression techniques remains a bottleneck for researchers and developers new to this domain. In this context, this paper reviews the compression techniques and methods (lossy and lossless) adopted for load monitoring. Metrics of top-performing compression techniques are discussed, such as compression efficiency, low reconstruction error, and encoding-decoding speed. The relation between electrical energy data compression and sound compression is also reviewed. This review should motivate further interest in developing standard codecs for the compression of electrical energy data that match those of other domains.
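The metrics named above can be illustrated on a synthetic load profile; a sketch using zlib for the lossless case and coarse quantization for the lossy case (illustrative only, not one of the reviewed codecs):

```python
import math
import time
import zlib

# Synthetic load profile: a daily sinusoid plus a faster ripple,
# sampled every minute and stored as 16-bit big-endian values.
samples = [int(1000 + 400 * math.sin(2 * math.pi * t / 1440)
               + 20 * math.sin(2 * math.pi * t / 15))
           for t in range(1440)]
raw = b"".join(s.to_bytes(2, "big") for s in samples)

# Lossless: compression ratio and encoding speed with zlib.
t0 = time.perf_counter()
packed = zlib.compress(raw, level=9)
encode_seconds = time.perf_counter() - t0
ratio = len(raw) / len(packed)
assert zlib.decompress(packed) == raw   # lossless round trip

# Lossy: coarse quantization, then RMSE as the reconstruction error.
step = 8
recon = [(s // step) * step + step // 2 for s in samples]
rmse = math.sqrt(sum((a - b) ** 2 for a, b in zip(samples, recon))
                 / len(samples))
```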


Algorithms ◽  
2021 ◽  
Vol 14 (11) ◽  
pp. 320
Author(s):  
Héctor Migallón ◽  
Otoniel López-Granado ◽  
Miguel O. Martínez-Rach ◽  
Vicente Galiano ◽  
Manuel P. Malumbres

The proportion of video traffic on the internet is expected to reach 82% by 2022, mainly due to the increasing number of consumers and the emergence of new video formats with more demanding features (depth, resolution, multiview, 360°, etc.). Efforts are therefore constantly being made to improve video compression standards, minimizing the necessary bandwidth while retaining high video quality. In this context, the Joint Video Exploration Team (JVET) has been analyzing new video coding technologies that improve compression efficiency with respect to the HEVC video coding standard, and a software package known as the Joint Exploration Test Model (JEM) has been proposed to implement and evaluate new video coding tools. In this work, we present parallel versions of the JEM encoder that are particularly suited to shared-memory platforms and can significantly reduce its huge computational complexity. The proposed parallel algorithms achieve high levels of parallel efficiency; in particular, in the All Intra coding mode, the best of our parallel versions reaches an average efficiency of 93.4%. They also exhibit high scalability, aided by the inclusion of an automatic load-balancing mechanism.
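Parallel efficiency here is speedup divided by worker count, T_serial / (p · T_parallel), and load balancing raises it when per-frame costs are uneven. A toy sketch with hypothetical per-frame encode times (the greedy assignment is a generic stand-in, not the paper's mechanism):

```python
# Parallel efficiency: T_serial / (p * T_parallel), where T_parallel
# is the makespan of the slowest worker.
def makespan_static(costs, p):
    """Static partition into p contiguous chunks of equal count."""
    chunk = len(costs) // p
    return max(sum(costs[i * chunk:(i + 1) * chunk]) for i in range(p))

def makespan_balanced(costs, p):
    """Greedy longest-processing-time assignment to the least-loaded
    worker: a simple stand-in for automatic load balancing."""
    loads = [0.0] * p
    for c in sorted(costs, reverse=True):
        loads[loads.index(min(loads))] += c
    return max(loads)

costs = [9.0, 1.0, 1.0, 1.0, 8.0, 1.0, 1.0, 2.0]   # hypothetical per-frame times
p = 4
t_serial = sum(costs)
eff_static = t_serial / (p * makespan_static(costs, p))
eff_balanced = t_serial / (p * makespan_balanced(costs, p))
```

With uneven costs the balanced assignment yields a shorter makespan and hence a higher parallel efficiency than static chunking.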


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Yixin Yang ◽  
Zhiqang Xiang ◽  
Jianbo Li

When current methods compress low-frame-rate animation video, no frame-rate compensation is applied to the video images, so the artifacts generated during compression cannot be eliminated, resulting in low definition, poor quality, and low compression efficiency. In the context of new media, a linear function model is introduced to study a compression algorithm for low-frame-rate animation video. In this paper, an adaptive separable convolutional network estimates the offset of the low-frame-rate animation video using local convolution, and the video frames are compensated according to the estimates to eliminate artifacts. After frame-rate compensation, the video is divided into blocks, the compressed-sensing (CS) measurement of each image block is taken, a linear estimate of each block is computed with the linear function model, and compression is completed from the best linear estimation result. The experimental results show that video compressed by the proposed algorithm has high definition, high compression quality, and high compression efficiency across different compression ratios.
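The block-wise measurement and linear estimation step can be sketched with a Gaussian sensing matrix and a minimum-norm least-squares estimate (a generic block-CS illustration, not the paper's adaptive network or its exact estimator):

```python
import numpy as np

rng = np.random.default_rng(0)

# One 2x2 image block flattened to n = 4 pixels, measured with an
# m x n Gaussian sensing matrix (m < n), as in block-based CS.
n, m = 4, 3
block = np.array([10.0, 12.0, 11.0, 13.0])
phi = rng.standard_normal((m, n))
y = phi @ block                      # CS measurement of the block

# Linear (minimum-norm least-squares) estimate of the block from y.
x_hat = np.linalg.pinv(phi) @ y
residual = np.linalg.norm(phi @ x_hat - y)
```

The estimate reproduces the measurements exactly (zero residual) but, with m < n, is generally not equal to the original block; a real codec picks the best such linear estimate per block.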


2021 ◽  
Vol 16 (2) ◽  
pp. 1-8
Author(s):  
Giovane Gomes Silva ◽  
Ícaro Gonçalves Siqueira ◽  
Mateus Grellert ◽  
Claudio Machado Diniz

The new Versatile Video Coding (VVC) standard was recently developed to improve on the compression efficiency of previous video coding standards and to support new applications. This was achieved at the cost of increased computational complexity in the encoder algorithms, which creates a need for hardware accelerators and approximate computing techniques to achieve the performance and power dissipation required by video encoding systems. This work proposes an approximate hardware architecture for the interpolation filters defined in the VVC standard, targeting real-time processing of high-resolution video. The architecture can process video of up to 2560×1600 pixels at 30 fps with a power dissipation of 23.9 mW when operating at 522 MHz, with an average compression efficiency degradation of only 0.41% compared to the default VVC encoder software configuration.
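For reference, a software model of the fixed-point 8-tap half-sample luma interpolation used in HEVC, whose taps I believe also appear in VVC's default luma filter table (the hardware in the paper approximates such filters; check the spec for the full set of fractional positions):

```python
# 8-tap DCT-IF half-sample luma taps with a 6-bit normalization shift,
# as in HEVC (taps sum to 64); assumed here to match VVC's default
# half-sample luma filter.
TAPS = (-1, 4, -11, 40, 40, -11, 4, -1)

def half_pel(samples, i):
    """Interpolate the half-sample position between samples[i] and
    samples[i+1]; needs 3 integer samples of context on each side."""
    acc = sum(t * samples[i - 3 + k] for k, t in enumerate(TAPS))
    val = (acc + 32) >> 6            # round and normalize
    return max(0, min(255, val))     # clip to the 8-bit sample range

row = [50] * 10                      # flat row: interpolation returns 50
flat = half_pel(row, 4)
ramp = [10 * k for k in range(10)]   # linear ramp 0, 10, ..., 90
mid = half_pel(ramp, 4)              # midpoint between 40 and 50
```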


2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Soulef Bouaafia ◽  
Randa Khemiri ◽  
Seifeddine Messaoud ◽  
Fatma Elzahra Sayadi

Future Video Coding (FVC) is a modern standard in the field of video coding that offers much higher compression efficiency than the HEVC standard. FVC was developed by the Joint Video Exploration Team (JVET), formed through collaboration between ISO/IEC MPEG and ITU-T VCEG. New tools emerging with FVC bring super-resolution implementation schemes recommended for Ultra-High-Definition (UHD) video coding in both SDR and HDR images. Moreover, a new flexible block structure named quadtree plus binary tree (QTBT) is adopted in the FVC standard to enhance compression efficiency. In this paper, we propose a fast FVC algorithm to achieve better performance and reduce encoding complexity. First, we evaluate the FVC profiles under the All Intra, Low-Delay P, and Random Access configurations to determine which coding components consume the most time. Second, a fast FVC mode decision is proposed to reduce the encoding computational complexity. Then, a comparison between three configurations, namely Random Access, Low-Delay B, and Low-Delay P, is presented in terms of bitrate, PSNR, and encoding time. Compared to previous works, the experimental results show that the time saving reaches 13%, with a decrease in bitrate of about 0.6% and in PSNR of 0.01 to 0.2 dB.
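Fast mode decisions typically prune the partition search; a generic early-termination sketch (quadtree part only, a homogeneity test in place of a true rate-distortion cost, and not the paper's actual criteria):

```python
# Generic fast-partition sketch: recursively decide whether to split a
# block, skipping the split evaluation when the block is nearly
# homogeneous (the early-termination idea behind fast mode decisions).
def variance(block):
    flat = [v for row in block for v in row]
    mean = sum(flat) / len(flat)
    return sum((v - mean) ** 2 for v in flat) / len(flat)

def partition(block, min_size=2, threshold=10.0):
    """Return a nested partition tree; leaves are unsplit blocks."""
    size = len(block)
    if size <= min_size or variance(block) < threshold:
        return "leaf"                      # early termination: no split tried
    half = size // 2
    quads = [[row[:half] for row in block[:half]],
             [row[half:] for row in block[:half]],
             [row[:half] for row in block[half:]],
             [row[half:] for row in block[half:]]]
    return [partition(q, min_size, threshold) for q in quads]

flat_block = [[100] * 4 for _ in range(4)]       # homogeneous -> one leaf
mixed_block = [[0] * 4] * 2 + [[100] * 4] * 2    # strong edge -> split once
tree_flat = partition(flat_block)
tree_mixed = partition(mixed_block)
```

QTBT additionally allows binary splits at each node; the pruning logic applies in the same way to each candidate split type.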


Author(s):  
Mohd Waseem Siddiqui ◽  
Nishith Kumar Das ◽  
R. K. Sahoo

An experimental investigation was carried out to evaluate the performance of a Modified Low-Temperature Cascade (MLTC) system, a two-stage cascade refrigeration system using the R404A/R23 refrigerant combination. The system was developed using chilled water (CHW) in the condenser of the high-temperature circuit (HTC) and a pre-cooler (PC) in the low-temperature circuit (LTC). Isentropic compression efficiency is computed in this work and used as an important parameter. The performance of the MLTC system was compared with and without the PC in the LTC. The system's coefficient of performance (COP) was also compared when using CHW, cooling-tower water (CTW), and normal water (NW) in the HTC condenser. It is shown that the COP of the system is significantly affected by slight variations in the LTC and HTC evaporating temperatures. The presented parameters and comparisons are likely to help in developing a higher-efficiency low-temperature (LT) refrigeration system for industrial and other applications.
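The cascade coupling behind the COP comparisons can be written down directly: the LTC condenser load becomes the HTC evaporator load, and isentropic compression efficiency scales the compressor work. A sketch with hypothetical stage COPs (and, as a simplification, a single effective isentropic efficiency applied to the whole system rather than per compressor):

```python
# Energy bookkeeping for a two-stage cascade: Q_evap enters the LTC,
# the LTC rejects Q_evap + W_L into the HTC evaporator, so
# COP_cascade = (COP_L * COP_H) / (COP_L + COP_H + 1).
def cascade_cop(cop_l, cop_h):
    return (cop_l * cop_h) / (cop_l + cop_h + 1.0)

def actual_cop(ideal_cop, eta_isentropic):
    # Isentropic efficiency inflates compressor work (W = W_ideal / eta),
    # so the achievable COP shrinks by the same factor.
    return ideal_cop * eta_isentropic

cop_ideal = cascade_cop(2.0, 3.0)      # hypothetical stage COPs
cop_real = actual_cop(cop_ideal, 0.75) # hypothetical isentropic efficiency
```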


2021 ◽  
Vol 45 (1) ◽  
pp. 329-349
Author(s):  
Branslav Mados ◽  
Zuzana Bilanová ◽  
Ján Hurtuk

Lossless data compression algorithms can exploit statistical redundancy to represent data using fewer bits than the original uncompressed data. Run-Length Encoding (RLE) is one of the simplest lossless compression algorithms, both in terms of understanding its principles and software implementation and in terms of temporal and spatial complexity. If this principle is applied to the individual bits of the original uncompressed data without respecting byte boundaries, the approach is referred to as bit-level Run-Length Encoding. The lightweight lossless data compression algorithm proposed in this paper optimizes bit-level RLE data compression, uses a special encoding of repeating data blocks, and, where necessary, combines it with a delta data transformation or with representing the data in its original form, in order to increase compression efficiency compared to the conventional bit-level RLE approach. The advantage of the proposed algorithm lies in its low time and memory consumption, basic features of RLE, combined with an increased compression ratio compared to the classical bit-level RLE approach.
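A minimal bit-level RLE with an optional byte-wise delta transform can be sketched as follows (an illustration of the baseline ideas, not the paper's optimized encoding of repeating blocks):

```python
def to_bits(data):
    return [(byte >> (7 - i)) & 1 for byte in data for i in range(8)]

def from_bits(bits):
    out = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        out.append(byte)
    return bytes(out)

def rle_encode(bits):
    """Bit-level RLE: (first bit value, run lengths), ignoring byte
    boundaries entirely."""
    runs, current, count = [], bits[0], 0
    for b in bits:
        if b == current:
            count += 1
        else:
            runs.append(count)
            current, count = b, 1
    runs.append(count)
    return bits[0], runs

def rle_decode(first_bit, runs):
    bits, bit = [], first_bit
    for run in runs:
        bits.extend([bit] * run)
        bit ^= 1
    return bits

def delta(data):
    """Byte-wise delta transform: runs of equal bytes become zeros,
    which yields long zero-bit runs for the bit-level RLE stage."""
    return bytes([data[0]] + [(data[i] - data[i - 1]) & 0xFF
                              for i in range(1, len(data))])

def undelta(data):
    out = [data[0]]
    for d in data[1:]:
        out.append((out[-1] + d) & 0xFF)
    return bytes(out)

data = b"\x05\x05\x05\x05\x06\x07\x08"
first, runs = rle_encode(to_bits(delta(data)))
restored = undelta(from_bits(rle_decode(first, runs)))
```

The delta stage turns slowly varying data into near-zero bytes, lengthening the zero runs that bit-level RLE encodes cheaply; the round trip is exact.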

